Sample records for automatic relevance determination

  1. Semi-Automatic Determination of Citation Relevancy: User Evaluation.

    ERIC Educational Resources Information Center

    Huffman, G. David

    1990-01-01

    Discussion of online bibliographic database searches focuses on a software system, SORT-AID/SABRE, that ranks retrieved citations in terms of relevance. Results of a comprehensive user evaluation of the relevance ranking procedure to determine its effectiveness are presented, and implications for future work are suggested. (10 references) (LRW)
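    The abstract does not describe SORT-AID/SABRE's actual ranking procedure; as a generic illustration of citation relevance ranking, the sketch below scores retrieved citations against a query by TF-IDF cosine similarity (a standard technique, and purely an assumption here — all names are illustrative).

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors (term -> weight dicts) for tokenized documents."""
    n = len(docs)
    df = Counter()                       # document frequency per term
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: tf * idf[t] for t, tf in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_citations(query, citations):
    """Return citation indices sorted by descending similarity to the query."""
    docs = [query.lower().split()] + [c.lower().split() for c in citations]
    vecs = tfidf_vectors(docs)
    scores = [cosine(vecs[0], cv) for cv in vecs[1:]]
    return sorted(range(len(citations)), key=lambda i: -scores[i])
```

    A user evaluation like the one reported would then compare such a ranking against human relevance judgments.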

  2. Automatic segmentation of relevant structures in DCE MR mammograms

    NASA Astrophysics Data System (ADS)

    Koenig, Matthias; Laue, Hendrik; Boehler, Tobias; Peitgen, Heinz-Otto

    2007-03-01

    The automatic segmentation of relevant structures such as the skin edge, chest wall, or nipple in dynamic contrast-enhanced MR imaging (DCE MRI) of the breast provides additional information for computer-aided diagnosis (CAD) systems. Automatic reporting using BI-RADS criteria benefits from information about the location of those structures: lesion positions can be automatically described relative to such reference structures for reporting purposes. Furthermore, this information can assist data reduction for computationally expensive preprocessing such as registration, or for visualization of only the segments of current interest. In this paper, a novel automatic method for determining the air-breast boundary (skin edge), approximating the chest wall, and locating the nipples is presented. The method consists of several steps that build on one another: automatic threshold computation yields the air-breast boundary, which is then analyzed to determine the location of the nipple; finally, the results of both steps are the starting point for approximating the chest wall. The proposed process was evaluated on a large data set of DCE MRI recorded with T1 sequences and yielded reasonable results in all cases.
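    The abstract does not specify which automatic threshold computation is used; Otsu's method is a common choice for separating a dark background (air) from brighter tissue, so a minimal sketch of it is shown here purely as an assumed stand-in.

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: pick the gray level that maximizes the between-class
    variance of the two classes it induces (e.g. air vs. breast tissue)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    w0 = 0            # pixel count of the "dark" class (<= threshold)
    sum0 = 0          # intensity sum of the dark class
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

    On a bimodal breast MR intensity histogram, the returned level separates air from tissue; the boundary pixels of the bright class then approximate the skin edge.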

  3. Techniques for Automatically Generating Biographical Summaries from News Articles

    DTIC Science & Technology

    2007-09-01

    non-trivial because of the many NLP areas that must be used to efficiently extract the relevant facts. Yet, no study has been done to determine how...AI) research is called Natural Language Processing (NLP). NLP seeks to find ways for computers to read and write documents in as human a way as

  4. Application of Multilayer Perceptron with Automatic Relevance Determination on Weed Mapping Using UAV Multispectral Imagery

    PubMed Central

    Tamouridou, Afroditi A.; Lagopodi, Anastasia L.; Kashefi, Javid; Kasampalis, Dimitris; Kontouris, Georgios; Moshou, Dimitrios

    2017-01-01

    Remote sensing techniques are routinely used in plant species discrimination and weed mapping. In the presented work, successful Silybum marianum detection and mapping using multilayer neural networks is demonstrated. A multispectral camera (green-red-near infrared) attached to a fixed-wing unmanned aerial vehicle (UAV) was utilized for the acquisition of high-resolution images (0.1 m resolution). The Multilayer Perceptron with Automatic Relevance Determination (MLP-ARD) was used to identify S. marianum among other vegetation, mostly Avena sterilis L. The three spectral bands of Red, Green, and Near Infrared (NIR) and the texture layer resulting from local variance were used as input. The S. marianum identification rates using MLP-ARD reached an accuracy of 99.54%. The study had a one-year duration, meaning that the results are specific to that period, although the accuracy shows the interesting potential of S. marianum mapping with MLP-ARD on multispectral UAV imagery. PMID:29019957
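    The core idea of Automatic Relevance Determination, sketched below for a plain linear model rather than the paper's MLP: each input weight gets its own prior precision alpha_i, and inputs whose precision diverges during evidence maximization are effectively pruned as irrelevant. This is a minimal from-scratch sketch with a fixed, assumed-known noise precision `beta`, not the MLP-ARD implementation used in the study.

```python
import numpy as np

def ard_regression(X, y, n_iter=100, beta=100.0):
    """ARD for a linear model via the evidence approximation.
    Returns the posterior mean weights and the per-weight prior
    precisions alpha; large alpha_i marks input i as irrelevant."""
    n, d = X.shape
    alpha = np.ones(d)
    for _ in range(n_iter):
        # Posterior over weights given the current precisions
        S = np.linalg.inv(beta * X.T @ X + np.diag(alpha))
        m = beta * S @ X.T @ y
        # MacKay update: gamma_i measures how well-determined w_i is
        gamma = 1.0 - alpha * np.diag(S)
        alpha = gamma / (m ** 2 + 1e-12)
        alpha = np.minimum(alpha, 1e10)   # cap to avoid overflow
    return m, alpha
```

    Applied to spectral-band and texture inputs as in the study, the surviving small-alpha weights indicate which input layers carry discriminative information.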

  5. Application of Multilayer Perceptron with Automatic Relevance Determination on Weed Mapping Using UAV Multispectral Imagery.

    PubMed

    Tamouridou, Afroditi A; Alexandridis, Thomas K; Pantazi, Xanthoula E; Lagopodi, Anastasia L; Kashefi, Javid; Kasampalis, Dimitris; Kontouris, Georgios; Moshou, Dimitrios

    2017-10-11

    Remote sensing techniques are routinely used in plant species discrimination and weed mapping. In the presented work, successful Silybum marianum detection and mapping using multilayer neural networks is demonstrated. A multispectral camera (green-red-near infrared) attached to a fixed-wing unmanned aerial vehicle (UAV) was utilized for the acquisition of high-resolution images (0.1 m resolution). The Multilayer Perceptron with Automatic Relevance Determination (MLP-ARD) was used to identify S. marianum among other vegetation, mostly Avena sterilis L. The three spectral bands of Red, Green, and Near Infrared (NIR) and the texture layer resulting from local variance were used as input. The S. marianum identification rates using MLP-ARD reached an accuracy of 99.54%. The study had a one-year duration, meaning that the results are specific to that period, although the accuracy shows the interesting potential of S. marianum mapping with MLP-ARD on multispectral UAV imagery.

  6. Automatic summarization of soccer highlights using audio-visual descriptors.

    PubMed

    Raventós, A; Quijada, R; Torres, Luis; Tarrés, Francesc

    2015-01-01

    Automatic summarization generation of sports video content has been object of great interest for many years. Although semantic descriptions techniques have been proposed, many of the approaches still rely on low-level video descriptors that render quite limited results due to the complexity of the problem and to the low capability of the descriptors to represent semantic content. In this paper, a new approach for automatic highlights summarization generation of soccer videos using audio-visual descriptors is presented. The approach is based on the segmentation of the video sequence into shots that will be further analyzed to determine its relevance and interest. Of special interest in the approach is the use of the audio information that provides additional robustness to the overall performance of the summarization system. For every video shot a set of low and mid level audio-visual descriptors are computed and lately adequately combined in order to obtain different relevance measures based on empirical knowledge rules. The final summary is generated by selecting those shots with highest interest according to the specifications of the user and the results of relevance measures. A variety of results are presented with real soccer video sequences that prove the validity of the approach.
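    The paper's empirical combination rules are not given in the abstract; a minimal sketch of the general scheme it describes (score each shot by a weighted combination of its audio-visual descriptor values, then keep the top-k shots in their original temporal order) might look like this. The descriptor names and weights are illustrative assumptions.

```python
def summarize(shots, weights, k=3):
    """Rank video shots by a weighted combination of per-shot descriptor
    scores and keep the k most relevant, preserving temporal order.
    Each shot is a dict: {"id": int, "scores": {descriptor: value}}."""
    def relevance(shot):
        return sum(weights[name] * value
                   for name, value in shot["scores"].items())
    ranked = sorted(shots, key=relevance, reverse=True)[:k]
    keep = {s["id"] for s in ranked}
    return [s for s in shots if s["id"] in keep]
```

    An audio descriptor such as crowd-noise energy would receive a high weight here, reflecting the abstract's point that audio adds robustness to highlight detection.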

  7. Determining the relative importance of figures in journal articles to find representative images

    NASA Astrophysics Data System (ADS)

    Müller, Henning; Foncubierta-Rodríguez, Antonio; Lin, Chang; Eggel, Ivan

    2013-03-01

    When physicians search for articles in the medical literature, the images in those articles can help determine the relevance of the article content to a specific information need. A visual image representation can be an advantage in effectiveness (quality of found articles) and also in efficiency (speed of determining relevance or irrelevance), as many articles can likely be excluded much more quickly by looking at a few representative images. In domains such as medical information retrieval, being able to determine relevance quickly and accurately is an important criterion. This becomes even more important when small interfaces are used, as is frequently the case when mobile phones and tablets are used to access scientific data whenever information needs arise. Scientific articles use many figures, and particularly in the biomedical literature only a subset may be relevant for determining the relevance of a specific article to an information need. In many cases clinical images can be seen as more important for visual appearance than graphs or histograms, which require looking at the context for interpretation. To get a clearer idea of image relevance in articles, a user test was performed with a physician, who classified images of biomedical research articles into categories of importance; these categories can subsequently be used to evaluate algorithms that automatically select images as representative examples. Manually sorting by importance the images of 50 BioMedCentral journal articles, each containing more than 8 figures, also allows several rules to be derived that determine how to choose images and how to develop algorithms for choosing the most representative images of specific texts. This article describes the user tests and can be a first important step toward evaluating automatic tools that select representative images for representing articles, and potentially also images in other contexts, for example when representing patient records or when selecting images to represent RadLex terms in tutorials or interactive interfaces. This can help make the image retrieval process more efficient and effective for physicians.

  8. Automatic Term Class Construction Using Relevance--A Summary of Work in Automatic Pseudoclassification.

    ERIC Educational Resources Information Center

    Salton, G.

    1980-01-01

    Summarizes studies of pseudoclassification, a process of utilizing user relevance assessments of certain documents with respect to certain queries to build term classes designed to retrieve relevant documents. Conclusions are reached concerning the effectiveness and feasibility of constructing term classifications based on human relevance…

  9. Automatic three-dimensional measurement of large-scale structure based on vision metrology.

    PubMed

    Zhu, Zhaokun; Guan, Banglei; Zhang, Xiaohu; Li, Daokui; Yu, Qifeng

    2014-01-01

    All relevant key techniques involved in photogrammetric vision metrology for the fully automatic 3D measurement of large-scale structures are studied. A new kind of coded target consisting of circular retroreflective discs is designed, and corresponding detection and recognition algorithms based on blob detection and clustering are presented. Then a three-stage strategy starting with view clustering is proposed to achieve automatic network orientation. For the matching of noncoded targets, the concept of a matching path is proposed, and matches for each noncoded target are found by determining the optimal matching path among all possible ones, based on a novel voting strategy. Experiments on the fixed keel of an airship have been conducted to verify the effectiveness and measuring accuracy of the proposed methods.

  10. Evaluating automatic attentional capture by self-relevant information.

    PubMed

    Ocampo, Brenda; Kahan, Todd A

    2016-01-01

    Our everyday decisions and memories are inadvertently influenced by self-relevant information. For example, we are faster and more accurate at making perceptual judgments about stimuli associated with ourselves, such as our own face or name, as compared with familiar non-self-relevant stimuli. Humphreys and Sui propose a "self-attention network" to account for these effects, wherein self-relevant stimuli automatically capture our attention and subsequently enhance the perceptual processing of self-relevant information. We propose that the masked priming paradigm and continuous flash suppression represent two ways to experimentally examine these controversial claims.

  11. Automatic information timeliness assessment of diabetes web sites by evidence based medicine.

    PubMed

    Sağlam, Rahime Belen; Taşkaya Temizel, Tuğba

    2014-11-01

    Studies in the health domain have shown that health websites provide imperfect information and give recommendations that are not up to date with the recent literature, even when their last-modified dates are quite recent. In this paper, we propose a framework which automatically assesses the timeliness of the content of health websites using evidence-based medicine. Our aim is to assess the accordance of website contents with the current literature and information timeliness, disregarding the update time stated on the websites. The proposed method is based on automatic term recognition, relevance feedback, and information retrieval techniques in order to generate time-aware structured queries. We tested the framework on diabetes health websites which were archived between 2006 and 2013 by Archive-it, using the American Diabetes Association's (ADA) guidelines. The results showed that the proposed framework achieves 65% and 77% accuracy in detecting the timeliness of the web content according to years and pre-determined time intervals, respectively. Information seekers and website owners may benefit from the proposed framework in finding relevant and up-to-date diabetes websites. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  12. Eye movements in pedophiles: automatic and controlled attentional processes while viewing prepubescent stimuli.

    PubMed

    Fromberger, Peter; Jordan, Kirsten; Steinkrauss, Henrike; von Herder, Jakob; Stolpmann, Georg; Kröner-Herwig, Birgit; Müller, Jürgen Leo

    2013-05-01

    Recent theories of sexuality highlight the importance of automatic and controlled attentional processes in viewing sexually relevant stimuli. The model of Spiering and Everaerd (2007) assumes that sexually relevant features of a stimulus are preattentively selected and automatically induce focal attention to these sexually relevant aspects. Whether this assumption proves true for pedophiles is unknown. It is the aim of this study to test this assumption empirically for people with pedophilic interests. Twenty-two pedophiles, 8 nonpedophilic forensic controls, and 52 healthy controls simultaneously viewed the picture of a child and the picture of an adult while eye movements were measured. Entry time was assessed as a measure of automatic attentional processes, and relative fixation time as a measure of controlled attentional processes. Pedophiles demonstrated significantly shorter entry time to child stimuli than to adult stimuli; the opposite was the case for nonpedophiles. Nonpedophiles showed longer relative fixation time for adult stimuli, and, against all expectations, pedophiles also demonstrated longer relative fixation time for adult stimuli. The results confirmed the hypothesis that pedophiles automatically select sexually relevant stimuli (children). Contrary to all expectations, this automatic selection did not trigger focal attention to these sexually relevant pictures. Furthermore, pedophiles were first and longest attracted by the faces and pubic regions of children; nonpedophiles were first and longest attracted by the faces and breasts of adults. The results demonstrate, for the first time, that the face and pubic region are the most attracting regions in children for pedophiles. © 2013 American Psychological Association

  13. 47 CFR 87.5 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... station providing communication between a control tower and aircraft. Automatic dependent surveillance... relevant information about the aircraft. Automatic terminal information service-broadcast (ATIS-B). The automatic provision of current, routine information to arriving and departing aircraft throughout a 24-hour...

  14. 47 CFR 87.5 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... station providing communication between a control tower and aircraft. Automatic dependent surveillance... relevant information about the aircraft. Automatic terminal information service-broadcast (ATIS-B). The automatic provision of current, routine information to arriving and departing aircraft throughout a 24-hour...

  15. Recommending images of user interests from the biomedical literature

    NASA Astrophysics Data System (ADS)

    Clukey, Steven; Xu, Songhua

    2013-03-01

    Every year hundreds of thousands of biomedical images are published in journals and conferences. Consequently, finding images relevant to one's interests becomes an ever more daunting task. This vast amount of literature creates a need for intelligent and easy-to-use tools that can help researchers effectively navigate the content corpus and conveniently locate materials of interest. Traditionally, literature search tools allow users to query content using topic keywords. However, manual query composition is often time- and energy-consuming. A better system would be one that can automatically deliver relevant content to a researcher without the end user having to manually express search intent and interests via queries. Such computer-aided assistance for information access can be provided by a system that first determines a researcher's interests automatically and then recommends images relevant to those interests accordingly. The technology can greatly improve researchers' ability to stay up to date in their fields of study by allowing them to efficiently browse images and documents matching their needs and interests among the vast biomedical literature. A prototype system implementation of the technology can be accessed via http://www.smartdataware.com.

  16. Automatic Modulation Classification of Common Communication and Pulse Compression Radar Waveforms using Cyclic Features

    DTIC Science & Technology

    2013-03-01

    intermediate frequency LFM linear frequency modulation MAP maximum a posteriori MATLAB® matrix laboratory ML maximum likelihood OFDM orthogonal frequency...spectrum, frequency hopping, and orthogonal frequency division multiplexing (OFDM) modulations. Feature analysis would be a good research thrust to...determine feature relevance and decide if removing any features improves performance. Also, extending the system for simulations using a MIMO receiver or

  17. Precision about the automatic emotional brain.

    PubMed

    Vuilleumier, Patrik

    2015-01-01

    The question of automaticity in emotion processing has been debated under different perspectives in recent years. Satisfying answers to this issue will require a better definition of automaticity in terms of relevant behavioral phenomena, ecological conditions of occurrence, and a more precise mechanistic account of the underlying neural circuits.

  18. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams

    PubMed Central

    Rouinfar, Amy; Agra, Elise; Larson, Adam M.; Rebello, N. Sanjay; Loschky, Lester C.

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas and thereby facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. In the cued condition, visual cues were overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short-duration visual cues that draw attention to solution-relevant information, and that aid in organizing and integrating it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming an impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem; they were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions. PMID:25324804

  19. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams.

    PubMed

    Rouinfar, Amy; Agra, Elise; Larson, Adam M; Rebello, N Sanjay; Loschky, Lester C

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas and thereby facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. In the cued condition, visual cues were overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short-duration visual cues that draw attention to solution-relevant information, and that aid in organizing and integrating it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming an impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem; they were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions.

  20. Automatic Approach Tendencies toward High and Low Caloric Food in Restrained Eaters: Influence of Task-Relevance and Mood

    PubMed Central

    Neimeijer, Renate A. M.; Roefs, Anne; Ostafin, Brian D.; de Jong, Peter J.

    2017-01-01

    Objective: Although restrained eaters are motivated to control their weight by dieting, they are often unsuccessful in these attempts. Dual process models emphasize the importance of differentiating between controlled and automatic tendencies to approach food. This study investigated the hypothesis that heightened automatic approach tendencies in restrained eaters would be especially prominent in contexts where food is irrelevant for their current tasks. Additionally, we examined the influence of mood on the automatic tendency to approach food as a function of dietary restraint. Methods: An Affective Simon Task-manikin was administered to measure automatic approach tendencies in contexts where food is task-irrelevant, and a Stimulus Response Compatibility task (SRC) to measure automatic approach tendencies in contexts where food is task-relevant, in 92 female participants varying in dietary restraint. Prior to the task, a sad, stressed, neutral, or positive mood was induced. Food intake was measured during a bogus taste task after the computer tasks. Results: Consistent with their diet goals, participants with a strong tendency to restrain their food intake showed a relatively weak approach bias toward food when food was task-relevant (SRC), and this effect was independent of mood. Restrained eaters showed a relatively strong approach bias toward food when food was task-irrelevant in the positive mood condition, and a relatively weak approach bias in the sad mood condition. Conclusion: The weak approach bias in contexts where food is task-relevant may help high-restrained eaters to comply with their diet goal. However, the strong approach bias in contexts where food is task-irrelevant, and when in a positive mood, may interfere with restrained eaters' goal of restricting food intake. PMID:28443045

  1. Automatic detection of confusion in elderly users of a web-based health instruction video.

    PubMed

    Postma-Nilsenová, Marie; Postma, Eric; Tates, Kiek

    2015-06-01

    Because of cognitive limitations and lower health literacy, many elderly patients have difficulty understanding verbal medical instructions. Automatic detection of facial movements provides a nonintrusive basis for building technological tools supporting confusion detection in healthcare delivery applications on the Internet. Twenty-four elderly participants (70-90 years old) were recorded while watching Web-based health instruction videos involving easy and complex medical terminology. Relevant fragments of the participants' facial expressions were rated by 40 medical students for perceived level of confusion and analyzed with automatic software for facial movement recognition. A computer classification of the automatically detected facial features performed more accurately and with a higher sensitivity than the human observers (automatic detection and classification, 64% accuracy, 0.64 sensitivity; human observers, 41% accuracy, 0.43 sensitivity). A drill-down analysis of cues to confusion indicated the importance of the eye and eyebrow region. Confusion caused by misunderstanding of medical terminology is signaled by facial cues that can be automatically detected with currently available facial expression detection technology. The findings are relevant for the development of Web-based services for healthcare consumers.

  2. Bayesian Covariate Selection in Mixed-Effects Models For Longitudinal Shape Analysis

    PubMed Central

    Muralidharan, Prasanna; Fishbaugh, James; Kim, Eun Young; Johnson, Hans J.; Paulsen, Jane S.; Gerig, Guido; Fletcher, P. Thomas

    2016-01-01

    The goal of longitudinal shape analysis is to understand how anatomical shape changes over time, in response to biological processes, including growth, aging, or disease. In many imaging studies, it is also critical to understand how these shape changes are affected by other factors, such as sex, disease diagnosis, IQ, etc. Current approaches to longitudinal shape analysis have focused on modeling age-related shape changes, but have not included the ability to handle covariates. In this paper, we present a novel Bayesian mixed-effects shape model that incorporates simultaneous relationships between longitudinal shape data and multiple predictors or covariates into the model. Moreover, we place an Automatic Relevance Determination (ARD) prior on the parameters, which lets us automatically select which covariates are most relevant to the model based on observed data. We evaluate our proposed model and inference procedure on a longitudinal study of Huntington's disease from PREDICT-HD. We first show the utility of the ARD prior for model selection in a univariate modeling of striatal volume, and next we apply the full high-dimensional longitudinal shape model to putamen shapes. PMID:28090246

  3. A Robust Automatic Ionospheric O/X Mode Separation Technique for Vertical Incidence Sounders

    NASA Astrophysics Data System (ADS)

    Harris, T. J.; Pederick, L. H.

    2017-12-01

    The sounding of the ionosphere by a vertical incidence sounder (VIS) is the oldest and most common technique for determining the state of the ionosphere. The automatic extraction of relevant ionospheric parameters from the ionogram image, referred to as scaling, is important for the effective utilization of data from large ionospheric sounder networks. Due to the Earth's magnetic field, the ionosphere is birefringent at radio frequencies, so a VIS will typically see two distinct returns for each frequency. For the automatic scaling of ionograms, it is highly desirable to be able to separate the two modes. Defence Science and Technology Group has developed a new VIS solution which is based on direct digital receiver technology and includes an algorithm to separate the O and X modes. This algorithm can provide high-quality separation even in difficult ionospheric conditions. In this paper we describe the algorithm and demonstrate its consistency and reliability in successfully separating 99.4% of the ionograms during a 27 day experimental campaign under sometimes demanding ionospheric conditions.

  4. RobotReviewer: evaluation of a system for automatically assessing bias in clinical trials.

    PubMed

    Marshall, Iain J; Kuiper, Joël; Wallace, Byron C

    2016-01-01

    To develop and evaluate RobotReviewer, a machine learning (ML) system that automatically assesses bias in clinical trials. From a (PDF-formatted) trial report, the system should determine risks of bias for the domains defined by the Cochrane Risk of Bias (RoB) tool, and extract supporting text for these judgments. We algorithmically annotated 12,808 trial PDFs using data from the Cochrane Database of Systematic Reviews (CDSR). Trials were labeled as being at low or high/unclear risk of bias for each domain, and sentences were labeled as being informative or not. This dataset was used to train a multi-task ML model. We estimated the accuracy of ML judgments versus humans by comparing trials with two or more independent RoB assessments in the CDSR. Twenty blinded experienced reviewers rated the relevance of supporting text, comparing ML output with equivalent (human-extracted) text from the CDSR. By retrieving the top 3 candidate sentences per document (top3 recall), the best ML text was rated more relevant than text from the CDSR, but not significantly (60.4% ML text rated 'highly relevant' v 56.5% of text from reviews; difference +3.9%, [-3.2% to +10.9%]). Model RoB judgments were less accurate than those from published reviews, though the difference was <10% (overall accuracy 71.0% with ML v 78.3% with CDSR). Risk of bias assessment may be automated with reasonable accuracy. Automatically identified text supporting bias assessment is of equal quality to the manually identified text in the CDSR. This technology could substantially reduce reviewer workload and expedite evidence syntheses. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  5. Intelligent Weather Agent

    NASA Technical Reports Server (NTRS)

    Spirkovska, Liljana (Inventor)

    2006-01-01

    Method and system for automatically displaying, visually and/or audibly and/or by an audible alarm signal, relevant weather data for an identified aircraft pilot, when each of a selected subset of measured or estimated aviation situation parameters, corresponding to a given aviation situation, has a value lying in a selected range. Each range for a particular pilot may be a default range, may be entered by the pilot and/or may be automatically determined from experience and may be subsequently edited by the pilot to change a range and to add or delete parameters describing a situation for which a display should be provided. The pilot can also verbally activate an audible display or visual display of selected information by verbal entry of a first command or a second command, respectively, that specifies the information required.

  6. Incorporating Non-Relevance Information in the Estimation of Query Models

    DTIC Science & Technology

    2008-11-01

    experiments in relevance feedback. In Salton, G., editor, The SMART Retrieval System – Experiments in Automatic Document Processing, pages 337–354...W. (2001). Relevance based language models. In SIGIR '01. Rocchio, J. (1971). Relevance feedback in information retrieval. In Salton, G., editor
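    The Rocchio update cited in this record is the classic way to incorporate non-relevance information into a query model: move the query vector toward the centroid of relevant documents and away from the centroid of non-relevant ones. A minimal sketch with the conventional default weights (not values from this report):

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio (1971) query update: move the query vector toward the centroid
    of relevant documents and away from the centroid of non-relevant ones.
    alpha/beta/gamma are conventional defaults, not values from this report."""
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q -= gamma * np.mean(nonrelevant, axis=0)
    return np.clip(q, 0.0, None)  # negative term weights are usually dropped

q0 = np.array([1.0, 0.0, 0.0])                          # original query
rel = np.array([[0.0, 1.0, 0.0], [0.0, 0.8, 0.2]])      # judged relevant docs
nonrel = np.array([[0.0, 0.0, 1.0]])                    # judged non-relevant doc
q1 = rocchio(q0, rel, nonrel)
print(q1)
```

    The gamma term is what uses the non-relevance information: term 2, present only in the non-relevant document, is pushed to zero weight.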

  7. Accurate expectancies diminish perceptual distraction during visual search

    PubMed Central

    Sy, Jocelyn L.; Guerin, Scott A.; Stegman, Anna; Giesbrecht, Barry

    2014-01-01

    The load theory of visual attention proposes that efficient selective perceptual processing of task-relevant information during search is determined automatically by the perceptual demands of the display. If the perceptual demands required to process task-relevant information are not enough to consume all available capacity, then the remaining capacity automatically and exhaustively “spills-over” to task-irrelevant information. The spill-over of perceptual processing capacity increases the likelihood that task-irrelevant information will impair performance. In two visual search experiments, we tested the automaticity of the allocation of perceptual processing resources by measuring the extent to which the processing of task-irrelevant distracting stimuli was modulated by both perceptual load and top-down expectations using behavior, functional magnetic resonance imaging, and electrophysiology. Expectations were generated using a trial-by-trial cue that provided information about the likely load of the upcoming visual search task. When the cues were valid, behavioral interference was eliminated and the influence of load on frontoparietal and visual cortical responses was attenuated relative to when the cues were invalid. In conditions in which task-irrelevant information interfered with performance and modulated visual activity, individual differences in mean blood oxygenation level dependent responses measured from the left intraparietal sulcus were negatively correlated with individual differences in the severity of distraction. These results are consistent with the interpretation that a top-down biasing mechanism interacts with perceptual load to support filtering of task-irrelevant information. PMID:24904374

  8. Military applications of automatic speech recognition and future requirements

    NASA Technical Reports Server (NTRS)

    Beek, Bruno; Cupples, Edward J.

    1977-01-01

    An updated summary of the state-of-the-art of automatic speech recognition and its relevance to military applications is provided. A number of potential systems for military applications are under development. These include: (1) digital narrowband communication systems; (2) automatic speech verification; (3) on-line cartographic processing unit; (4) word recognition for militarized tactical data system; and (5) voice recognition and synthesis for aircraft cockpit.

  9. Towards parsimony in habit measurement: Testing the convergent and predictive validity of an automaticity subscale of the Self-Report Habit Index

    PubMed Central

    2012-01-01

    Background The twelve-item Self-Report Habit Index (SRHI) is the most popular measure of energy-balance related habits. This measure characterises habit by automatic activation, behavioural frequency, and relevance to self-identity. Previous empirical research suggests that the SRHI may be abbreviated with no losses in reliability or predictive utility. Drawing on recent theorising suggesting that automaticity is the ‘active ingredient’ of habit-behaviour relationships, we tested whether an automaticity-specific SRHI subscale could capture habit-based behaviour patterns in self-report data. Methods A content validity task was undertaken to identify a subset of automaticity indicators within the SRHI. The reliability, convergent validity and predictive validity of the automaticity item subset was subsequently tested in secondary analyses of all previous SRHI applications, identified via systematic review, and in primary analyses of four raw datasets relating to energy‐balance relevant behaviours (inactive travel, active travel, snacking, and alcohol consumption). Results A four-item automaticity subscale (the ‘Self-Report Behavioural Automaticity Index’; ‘SRBAI’) was found to be reliable and sensitive to two hypothesised effects of habit on behaviour: a habit-behaviour correlation, and a moderating effect of habit on the intention-behaviour relationship. Conclusion The SRBAI offers a parsimonious measure that adequately captures habitual behaviour patterns. The SRBAI may be of particular utility in predicting future behaviour and in studies tracking habit formation or disruption. PMID:22935297

  10. An evaluation of Bayesian techniques for controlling model complexity and selecting inputs in a neural network for short-term load forecasting.

    PubMed

    Hippert, Henrique S; Taylor, James W

    2010-04-01

    Artificial neural networks have frequently been proposed for electricity load forecasting because of their capabilities for the nonlinear modelling of large multivariate data sets. Modelling with neural networks is not an easy task though; two of the main challenges are defining the appropriate level of model complexity, and choosing the input variables. This paper evaluates techniques for automatic neural network modelling within a Bayesian framework, as applied to six samples containing daily load and weather data for four different countries. We analyse input selection as carried out by the Bayesian 'automatic relevance determination', and the usefulness of the Bayesian 'evidence' for the selection of the best structure (in terms of number of neurones), as compared to methods based on cross-validation. Copyright 2009 Elsevier Ltd. All rights reserved.
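    The Bayesian 'automatic relevance determination' idea, per-input precision hyperparameters that prune irrelevant inputs, can be illustrated with scikit-learn's linear ARD model. This is a linear stand-in for the paper's neural-network setting, on synthetic data rather than load/weather series.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

# Illustrative stand-in for the paper's setting: Bayesian automatic relevance
# determination learns which inputs matter. Here a *linear* ARD model (not the
# paper's neural network) recovers that only 2 of 5 synthetic inputs drive y.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)  # inputs 0, 1 matter

model = ARDRegression().fit(X, y)
print(np.round(model.coef_, 2))  # weights of irrelevant inputs shrink toward 0
```

    Inspecting `model.lambda_` (the learned per-weight precisions) shows large values for the pruned inputs, which is the relevance signal the paper uses for input selection.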

  11. K-Nearest Neighbors Relevance Annotation Model for Distance Education

    ERIC Educational Resources Information Center

    Ke, Xiao; Li, Shaozi; Cao, Donglin

    2011-01-01

    With the rapid development of Internet technologies, distance education has become a popular educational mode. In this paper, the authors propose an online image automatic annotation distance education system, which could effectively help children learn interrelations between image content and corresponding keywords. Image automatic annotation is…

  12. Automaticity of phasic alertness: evidence for a three-component model of visual cueing

    PubMed Central

    Lin, Zhicheng; Lu, Zhong-Lin

    2017-01-01

    The automaticity of phasic alertness is investigated using the attention network test. Results show that the cueing effect from the alerting cue—double cue—is strongly enhanced by the task relevance of visual cues, as determined by the informativeness of the orienting cue—single cue—that is being mixed (80% vs. 50% valid in predicting where the target will appear). Counterintuitively, the cueing effect from the alerting cue can be negatively affected by its visibility, such that masking the cue from awareness can reveal a cueing effect that is otherwise absent when the cue is visible. Evidently, top-down influences—in the form of contextual relevance and cue awareness—can have opposite influences on the cueing effect by the alerting cue. These findings lead us to the view that a visual cue can engage three components of attention—orienting, alerting, and inhibition—to determine the behavioral cueing effect. We propose that phasic alertness, particularly in the form of specific response readiness, is regulated by both internal, top-down expectation and external, bottom-up stimulus properties. In contrast to some existing views, we advance the perspective that phasic alertness is strongly tied to temporal orienting, attentional capture, and spatial orienting. Finally, we discuss how translating attention research to clinical applications would benefit from an improved ability to measure attention. To this end, controlling the degree of intraindividual variability in the attentional components and improving the precision of the measurement tools may prove vital. PMID:27173487

  13. Automaticity of phasic alertness: Evidence for a three-component model of visual cueing.

    PubMed

    Lin, Zhicheng; Lu, Zhong-Lin

    2016-10-01

    The automaticity of phasic alertness is investigated using the attention network test. Results show that the cueing effect from the alerting cue—double cue—is strongly enhanced by the task relevance of visual cues, as determined by the informativeness of the orienting cue—single cue—that is being mixed (80% vs. 50% valid in predicting where the target will appear). Counterintuitively, the cueing effect from the alerting cue can be negatively affected by its visibility, such that masking the cue from awareness can reveal a cueing effect that is otherwise absent when the cue is visible. Evidently, then, top-down influences—in the form of contextual relevance and cue awareness—can have opposite influences on the cueing effect from the alerting cue. These findings lead us to the view that a visual cue can engage three components of attention—orienting, alerting, and inhibition—to determine the behavioral cueing effect. We propose that phasic alertness, particularly in the form of specific response readiness, is regulated by both internal, top-down expectation and external, bottom-up stimulus properties. In contrast to some existing views, we advance the perspective that phasic alertness is strongly tied to temporal orienting, attentional capture, and spatial orienting. Finally, we discuss how translating attention research to clinical applications would benefit from an improved ability to measure attention. To this end, controlling the degree of intraindividual variability in the attentional components and improving the precision of the measurement tools may prove vital.

  14. Comparison of automatic and visual methods used for image segmentation in Endodontics: a microCT study.

    PubMed

    Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz

    2017-01-01

    To calculate root canal volume and surface area from microCT images, image segmentation by threshold selection is required; the threshold can be determined either visually or automatically. Visual determination is influenced by the operator's visual acuity, whereas the automatic method is performed entirely by computer algorithms. The aims were to compare visual and automatic segmentation and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and surface area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and surface area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between the visual and automatic segmentation methods for root canal volume (p=0.93) or root canal surface area (p=0.79). Although both visual and automatic segmentation can be used to determine the threshold and calculate root canal volume and surface area, the automatic method may be the most suitable for ensuring reproducible threshold determination.

  15. Self-Learning Adaptive Umbrella Sampling Method for the Determination of Free Energy Landscapes in Multiple Dimensions

    PubMed Central

    Wojtas-Niziurski, Wojciech; Meng, Yilin; Roux, Benoit; Bernèche, Simon

    2013-01-01

    The potential of mean force describing conformational changes of biomolecules is a central quantity that determines the function of biomolecular systems. Calculating an energy landscape of a process that depends on three or more reaction coordinates can require considerable computational power, making some multidimensional calculations practically impossible. Here, we present an efficient automated umbrella sampling strategy for calculating multidimensional potentials of mean force. The method progressively learns by itself, through a feedback mechanism, which regions of a multidimensional space are worth exploring, and automatically generates a set of umbrella sampling windows adapted to the system. The self-learning adaptive umbrella sampling method is first explained with illustrative examples based on simplified reduced model systems, and then applied to two non-trivial situations: the conformational equilibrium of the pentapeptide Met-enkephalin in solution and ion permeation in the KcsA potassium channel. With this method, it is demonstrated that a significantly smaller number of umbrella windows needs to be employed to characterize the free energy landscape over the most relevant regions, without any loss in accuracy. PMID:23814508
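    The self-learning idea, grow the window set outward only where the estimated free energy stays low, can be sketched in one dimension. This toy uses an analytic double-well surrogate in place of the free-energy estimate that a real implementation would obtain from sampling feedback; it is not the authors' implementation.

```python
# Toy sketch of adaptive umbrella-window placement: starting from one window,
# repeatedly add neighbouring windows, but only where the (estimated) free
# energy is below a cutoff, so irrelevant high-energy regions are never sampled.
# The analytic `double_well` stands in for feedback from actual sampling.

def adaptive_windows(energy, start, step, cutoff):
    placed, frontier = set(), [start]
    while frontier:
        x = frontier.pop()
        if x in placed or energy(x) > cutoff:
            continue
        placed.add(x)
        frontier += [round(x - step, 6), round(x + step, 6)]  # propose neighbours
    return sorted(placed)

def double_well(x):
    return 4.0 * (x * x - 1.0) ** 2   # minima at x = +/-1, barrier of height 4 at 0

centers = adaptive_windows(double_well, start=-1.0, step=0.1, cutoff=6.0)
print(len(centers), centers[0], centers[-1])
```

    Because the barrier (height 4) is below the cutoff (6), the frontier crosses it and both wells are covered, while the steep walls beyond |x| ≈ 1.5 are never windowed.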

  16. Automatic evaluation of the Valsalva sinuses from cine-MRI

    NASA Astrophysics Data System (ADS)

    Blanchard, Cédric; Sliwa, Tadeusz; Lalande, Alain; Mohan, Pauliah; Bouchot, Olivier; Voisin, Yvon

    2011-03-01

    MRI appears to be particularly attractive for the study of the Sinuses of Valsalva (SV); however, there is no global consensus on their suitable measurements. In this paper, we propose a new method, based on mathematical morphology and combining a numerical geodesic reconstruction with an area estimation, to automatically evaluate the SV from cine-MRI in a cross-sectional orientation. It consists of extracting the shape of the SV, detecting relevant points (commissures, cusps and the centre of the SV), measuring relevant distances, and classifying the valve as bicuspid or tricuspid by a metric evaluation of the SV. Our method was tested on 23 patient examinations, and radii calculations were compared with manual measurements. The classification of the valve as tricuspid or bicuspid was correct in all cases. Moreover, there is excellent correlation and concordance between manual and automatic measurements for images at the diastolic phase (r=0.97; y = x - 0.02; p=NS; mean of differences = -0.1 mm; standard deviation of differences = 2.3 mm) and at the systolic phase (r=0.96; y = 0.97x + 0.80; p=NS; mean of differences = -0.1 mm; standard deviation of differences = 2.4 mm). The cross-sectional orientation of the image acquisition plane, combined with our automatic method, provides a reliable morphometric evaluation of the SV, based on the automatic location of the centre of the SV and of the commissure and cusp positions. Measurements of distances between relevant points allow a precise evaluation of the SV.

  17. Automatic Screening and Grading of Age-Related Macular Degeneration from Texture Analysis of Fundus Images

    PubMed Central

    Phan, Thanh Vân; Seoud, Lama; Chakor, Hadi; Cheriet, Farida

    2016-01-01

    Age-related macular degeneration (AMD) is a disease which causes visual deficiency and irreversible blindness to the elderly. In this paper, an automatic classification method for AMD is proposed to perform robust and reproducible assessments in a telemedicine context. First, a study was carried out to highlight the most relevant features for AMD characterization based on texture, color, and visual context in fundus images. A support vector machine and a random forest were used to classify images according to the different AMD stages following the AREDS protocol and to evaluate the features' relevance. Experiments were conducted on a database of 279 fundus images coming from a telemedicine platform. The results demonstrate that local binary patterns in multiresolution are the most relevant for AMD classification, regardless of the classifier used. Depending on the classification task, our method achieves promising performances with areas under the ROC curve between 0.739 and 0.874 for screening and between 0.469 and 0.685 for grading. Moreover, the proposed automatic AMD classification system is robust with respect to image quality. PMID:27190636
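    The local binary pattern (LBP) features the study found most relevant encode texture as one bit per neighbour of each pixel. A minimal single-scale sketch (the paper uses a multiresolution variant):

```python
import numpy as np

def lbp_3x3(patch):
    """Basic 8-neighbour local binary pattern code for the centre of a 3x3
    patch: each neighbour contributes one bit (1 if >= centre), read clockwise
    from the top-left. A minimal illustration of the texture feature the study
    found most relevant; the paper uses a multiresolution variant."""
    c = patch[1, 1]
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [int(patch[i, j] >= c) for i, j in coords]
    return sum(b << k for k, b in enumerate(bits))

flat = np.full((3, 3), 7)            # uniform patch: every neighbour >= centre
edge = np.array([[9, 9, 9],
                 [1, 5, 9],
                 [1, 1, 1]])         # bright top edge and right column
print(lbp_3x3(flat), lbp_3x3(edge))
```

    Histograms of these codes over an image region form the texture descriptor fed to the SVM or random forest classifier.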

  18. Automatic Carbon Dioxide-Methane Gas Sensor Based on the Solubility of Gases in Water

    PubMed Central

    Cadena-Pereda, Raúl O.; Rivera-Muñoz, Eric M.; Herrera-Ruiz, Gilberto; Gomez-Melendez, Domingo J.; Anaya-Rivera, Ely K.

    2012-01-01

    Biogas methane content is a relevant variable in anaerobic digestion processing where knowledge of process kinetics or an early indicator of digester failure is needed. The contribution of this work is the development of a novel, simple and low cost automatic carbon dioxide-methane gas sensor based on the solubility of gases in water as the precursor of a sensor for biogas quality monitoring. The device described in this work was used for determining the composition of binary mixtures, such as carbon dioxide-methane, in the range of 0–100%. The design and implementation of a digital signal processor and control system into a low-cost Field Programmable Gate Array (FPGA) platform has permitted the successful application of data acquisition, data distribution and digital data processing, making the construction of a standalone carbon dioxide-methane gas sensor possible. PMID:23112626

  19. Automatic carbon dioxide-methane gas sensor based on the solubility of gases in water.

    PubMed

    Cadena-Pereda, Raúl O; Rivera-Muñoz, Eric M; Herrera-Ruiz, Gilberto; Gomez-Melendez, Domingo J; Anaya-Rivera, Ely K

    2012-01-01

    Biogas methane content is a relevant variable in anaerobic digestion processing where knowledge of process kinetics or an early indicator of digester failure is needed. The contribution of this work is the development of a novel, simple and low cost automatic carbon dioxide-methane gas sensor based on the solubility of gases in water as the precursor of a sensor for biogas quality monitoring. The device described in this work was used for determining the composition of binary mixtures, such as carbon dioxide-methane, in the range of 0-100%. The design and implementation of a digital signal processor and control system into a low-cost Field Programmable Gate Array (FPGA) platform has permitted the successful application of data acquisition, data distribution and digital data processing, making the construction of a standalone carbon dioxide-methane gas sensor possible.

  20. A Study of Adaptive Relevance Feedback - UIUC TREC-2008 Relevance Feedback Experiments

    DTIC Science & Technology

    2008-11-01

    terms. Journal of the American Society for Information Science, 27(3):129–146, 1976. [7] J. J. Rocchio. Relevance feedback in information retrieval. In The SMART Retrieval System: Experiments in Automatic Document Processing, pages 313–323. Prentice-Hall Inc., 1971. [8] Gerard Salton and Chris

  1. Features: Real-Time Adaptive Feature and Document Learning for Web Search.

    ERIC Educational Resources Information Center

    Chen, Zhixiang; Meng, Xiannong; Fowler, Richard H.; Zhu, Binhai

    2001-01-01

    Describes Features, an intelligent Web search engine that is able to perform real-time adaptive feature (i.e., keyword) and document learning. Explains how Features learns from users' document relevance feedback and automatically extracts and suggests indexing keywords relevant to a search query, and learns from users' keyword relevance feedback…

  2. Effective biomedical document classification for identifying publications relevant to the mouse Gene Expression Database (GXD).

    PubMed

    Jiang, Xiangying; Ringwald, Martin; Blake, Judith; Shatkay, Hagit

    2017-01-01

    The Gene Expression Database (GXD) is a comprehensive online database within the Mouse Genome Informatics resource, aiming to provide available information about endogenous gene expression during mouse development. The information stems primarily from many thousands of biomedical publications that database curators must go through and read. Given the very large number of biomedical papers published each year, automatic document classification plays an important role in biomedical research. Specifically, an effective and efficient document classifier is needed for supporting the GXD annotation workflow. We present here an effective yet relatively simple classification scheme, which uses readily available tools while employing feature selection, aiming to assist curators in identifying publications relevant to GXD. We examine the performance of our method over a large manually curated dataset, consisting of more than 25 000 PubMed abstracts, of which about half are curated as relevant to GXD while the other half as irrelevant to GXD. In addition to text from title-and-abstract, we also consider image captions, an important information source that we integrate into our method. We apply a captions-based classifier to a subset of about 3300 documents, for which the full text of the curated articles is available. The results demonstrate that our proposed approach is robust and effectively addresses the GXD document classification. Moreover, using information obtained from image captions clearly improves performance, compared to title and abstract alone, affirming the utility of image captions as a substantial evidence source for automatically determining the relevance of biomedical publications to a specific subject area. www.informatics.jax.org. © The Author(s) 2017. Published by Oxford University Press.

  3. New Term Weighting Formulas for the Vector Space Method in Information Retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chisholm, E.; Kolda, T.G.

    The goal in information retrieval is to enable users to automatically and accurately find data relevant to their queries. One possible approach to this problem is to use the vector space model, which models documents and queries as vectors in the term space. The components of the vectors are determined by the term weighting scheme, a function of the frequencies of the terms in the document or query as well as throughout the collection. We discuss popular term weighting schemes and present several new schemes that offer improved performance.
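    The best-known of the "popular term weighting schemes" the report discusses is tf-idf: a term's weight rises with its in-document frequency and falls with its collection-wide frequency. A minimal sketch (this is the classic scheme, not the report's new formulas):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Classic tf-idf weighting for the vector space model: weight = term
    frequency in the document times log(N / document frequency). One of the
    popular schemes the report discusses, not one of its new formulas."""
    n = len(docs)
    tfs = [Counter(d.split()) for d in docs]
    df = Counter(t for tf in tfs for t in tf)          # document frequency
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: tf[t] * idf[t] for t in tf} for tf in tfs]

docs = ["the cat sat", "the dog sat", "the cat ran"]
vecs = tfidf_vectors(docs)
print(sorted(vecs[0].items()))  # "the" occurs everywhere, so its weight is 0
```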

  4. Automatic control of tracheal tube cuff pressure in ventilated patients in semirecumbent position: a randomized trial.

    PubMed

    Valencia, Mauricio; Ferrer, Miquel; Farre, Ramon; Navajas, Daniel; Badia, Joan Ramon; Nicolas, Josep Maria; Torres, Antoni

    2007-06-01

    The aspiration of subglottic secretions colonized by bacteria pooled around the tracheal tube cuff due to inadvertent deflation (<20 cm H2O) of the cuff plays a relevant role in the pathogenesis of ventilator-associated pneumonia. We assessed the efficacy of an automatic, validated device for the continuous regulation of tracheal tube cuff pressure in preventing ventilator-associated pneumonia. Prospective randomized controlled trial. Respiratory intensive care unit and general medical intensive care unit. One hundred and forty-two mechanically ventilated patients (age, 64 +/- 17 yrs; Acute Physiology and Chronic Health Evaluation II score, 18 +/- 6) without pneumonia or aspiration at admission. Within 24 hrs of intubation, patients were randomly allocated to undergo continuous regulation of the cuff pressure with the automatic device (n = 73) or routine care of the cuff pressure (control group, n = 69). Patients remained in a semirecumbent position in bed. The primary end point variable was the incidence of ventilator-associated pneumonia. Main causes for intubation were decreased consciousness (43, 30%) and exacerbation of chronic respiratory diseases (38, 27%). Cuff pressure <20 cm H2O was more frequently observed in the control than the automatic group (45.3 vs. 0.7% determinations, p < .001). However, the rate of ventilator-associated pneumonia with clinical criteria (16, 22% vs. 20, 29%) and microbiological confirmation (11, 15% vs. 10, 15%), the distribution of early and late onset, the causative microorganisms, and intensive care unit (20, 27% vs. 16, 23%) and hospital mortality (30, 41% vs. 23, 33%) were similar for the automatic and control groups, respectively. Cuff pressure is better controlled with the automatic device. However, it did not result in additional benefits to the semirecumbent position in preventing ventilator-associated pneumonia.

  5. A Compositional Relevance Model for Adaptive Information Retrieval

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie; Chen, James; Lu, Henry, Jr. (Technical Monitor)

    1994-01-01

    There is a growing need for rapid and effective access to information in large electronic documentation systems. Access can be facilitated if information relevant in the current problem solving context can be automatically supplied to the user. This includes information relevant to particular user profiles, tasks being performed, and problems being solved. However most of this knowledge on contextual relevance is not found within the contents of documents, and current hypermedia tools do not provide any easy mechanism to let users add this knowledge to their documents. We propose a compositional relevance network to automatically acquire the context in which previous information was found relevant. The model records information on the relevance of references based on user feedback for specific queries and contexts. It also generalizes such information to derive relevant references for similar queries and contexts. This model lets users filter information by context of relevance, build personalized views of documents over time, and share their views with other users. It also applies to any type of multimedia information. Compared to other approaches, it is less costly and doesn't require any a priori statistical computation, nor an extended training period. It is currently being implemented into the Computer Integrated Documentation system which enables integration of various technical documents in a hypertext framework.

  6. Young Children's Automatic Encoding of Social Categories

    ERIC Educational Resources Information Center

    Weisman, Kara; Johnson, Marissa V.; Shutts, Kristin

    2015-01-01

    The present research investigated young children's automatic encoding of two social categories that are highly relevant to adults: gender and race. Three- to 6-year-old participants learned facts about unfamiliar target children who varied in either gender or race and were asked to remember which facts went with which targets. When participants…

  7. Sequential Analysis of the Numerical Stroop Effect Reveals Response Suppression

    ERIC Educational Resources Information Center

    Kadosh, Roi Cohen; Gevers, Wim; Notebaert, Wim

    2011-01-01

    Automatic processing of irrelevant stimulus dimensions has been demonstrated in a variety of tasks. Previous studies have shown that conflict between relevant and irrelevant dimensions can be reduced when a feature of the irrelevant dimension is repeated. The specific level at which the automatic process is suppressed (e.g., perceptual repetition,…

  8. Feasibility Study on Fully Automatic High Quality Translation: Volume II. Final Technical Report.

    ERIC Educational Resources Information Center

    Lehmann, Winifred P.; Stachowitz, Rolf

    This second volume of a two-volume report on a fully automatic high quality translation (FAHQT) contains relevant papers contributed by specialists on the topic of machine translation. The papers presented here cover such topics as syntactical analysis in transformational grammar and in machine translation, lexical features in translation and…

  9. Information fusion for diabetic retinopathy CAD in digital color fundus photographs.

    PubMed

    Niemeijer, Meindert; Abramoff, Michael D; van Ginneken, Bram

    2009-05-01

    The purpose of computer-aided detection or diagnosis (CAD) technology has so far been to serve as a second reader. If, however, all relevant lesions in an image can be detected by CAD algorithms, use of CAD for automatic reading or prescreening may become feasible. This work addresses the question of how to fuse information from multiple CAD algorithms, operating on multiple images that comprise an exam, to determine a likelihood that the exam is normal and would not require further inspection by human operators. We focus on retinal image screening for diabetic retinopathy, a common complication of diabetes. Current CAD systems are not designed to automatically evaluate complete exams consisting of multiple images for which several detection algorithm output sets are available. Information fusion will potentially play a crucial role in enabling the application of CAD technology to the automatic screening problem. Several different fusion methods are proposed and their effect on the performance of a complete comprehensive automatic diabetic retinopathy screening system is evaluated. Experiments show that the choice of fusion method can have a large impact on system performance. The complete system was evaluated on a set of 15,000 exams (60,000 images). The best performing fusion method obtained an area under the receiver operator characteristic curve of 0.881. This indicates that automated prescreening could be applied in diabetic retinopathy screening programs.
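    Two generic ways to fuse per-image lesion probabilities into one exam-level abnormality score are shown below. These are illustrative rules only; the paper evaluates its own set of fusion methods, which need not match these.

```python
# Generic fusion rules for combining per-image CAD lesion probabilities into an
# exam-level abnormality score. Illustrative only; the paper evaluates its own
# fusion methods, which need not match these.

def fuse_max(probs):
    """Exam is as suspicious as its most suspicious image."""
    return max(probs)

def fuse_noisy_or(probs):
    """Exam abnormal if ANY image is, treating detections as independent."""
    p_all_normal = 1.0
    for p in probs:
        p_all_normal *= (1.0 - p)
    return 1.0 - p_all_normal

exam = [0.05, 0.10, 0.80, 0.02]  # four images, one flagged by the detectors
print(fuse_max(exam), round(fuse_noisy_or(exam), 3))
```

    The choice matters: noisy-OR accumulates weak evidence across images, while max ignores everything but the single strongest detection, which is one reason the paper finds fusion choice has a large impact on system performance.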

  10. An optimal transportation approach for nuclear structure-based pathology.

    PubMed

    Wang, Wei; Ozolek, John A; Slepčev, Dejan; Lee, Ann B; Chen, Cheng; Rohde, Gustavo K

    2011-03-01

    Nuclear morphology and structure as visualized from histopathology microscopy images can yield important diagnostic clues in some benign and malignant tissue lesions. Precise quantitative information about nuclear structure and morphology, however, is currently not available for many diagnostic challenges. This is due, in part, to the lack of methods to quantify these differences from image data. We describe a method to characterize and contrast the distribution of nuclear structure in different tissue classes (normal, benign, cancer, etc.). The approach is based on quantifying chromatin morphology in different groups of cells using the optimal transportation (Kantorovich-Wasserstein) metric in combination with the Fisher discriminant analysis and multidimensional scaling techniques. We show that the optimal transportation metric is able to measure relevant biological information as it enables automatic determination of the class (e.g., normal versus cancer) of a set of nuclei. We show that the classification accuracies obtained using this metric are, on average, as good or better than those obtained utilizing a set of previously described numerical features. We apply our methods to two diagnostic challenges for surgical pathology: one in the liver and one in the thyroid. Results automatically computed using this technique show potentially biologically relevant differences in nuclear structure in liver and thyroid cancers.
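    The Kantorovich-Wasserstein (optimal transportation) metric used to compare distributions of nuclear structure can be illustrated in one dimension with SciPy. The data below are synthetic stand-ins, not the study's chromatin measurements.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# 1-D illustration of the Kantorovich-Wasserstein metric the paper uses to
# compare distributions of nuclear-structure features between tissue classes.
# These samples are synthetic stand-ins, not the study's measurements.
rng = np.random.default_rng(2)
normal_feature = rng.normal(loc=1.0, scale=0.2, size=500)  # hypothetical class "normal"
cancer_feature = rng.normal(loc=1.6, scale=0.3, size=500)  # hypothetical class "cancer"

d_self = wasserstein_distance(normal_feature, normal_feature)
d_cross = wasserstein_distance(normal_feature, cancer_feature)
print(d_self, round(d_cross, 2))  # zero to itself; larger across classes
```

    A classifier built on such pairwise transport distances (the paper combines them with Fisher discriminant analysis and multidimensional scaling) can then separate the tissue classes.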

  11. An optimal transportation approach for nuclear structure-based pathology

    PubMed Central

    Wang, Wei; Ozolek, John A.; Slepčev, Dejan; Lee, Ann B.; Chen, Cheng; Rohde, Gustavo K.

    2012-01-01

    Nuclear morphology and structure as visualized from histopathology microscopy images can yield important diagnostic clues in some benign and malignant tissue lesions. Precise quantitative information about nuclear structure and morphology, however, is currently not available for many diagnostic challenges. This is due, in part, to the lack of methods to quantify these differences from image data. We describe a method to characterize and contrast the distribution of nuclear structure in different tissue classes (normal, benign, cancer, etc.). The approach is based on quantifying chromatin morphology in different groups of cells using the optimal transportation (Kantorovich-Wasserstein) metric in combination with the Fisher discriminant analysis and multidimensional scaling techniques. We show that the optimal transportation metric is able to measure relevant biological information as it enables automatic determination of the class (e.g. normal vs. cancer) of a set of nuclei. We show that the classification accuracies obtained using this metric are, on average, as good or better than those obtained utilizing a set of previously described numerical features. We apply our methods to two diagnostic challenges for surgical pathology: one in the liver and one in the thyroid. Results automatically computed using this technique show potentially biologically relevant differences in nuclear structure in liver and thyroid cancers. PMID:20977984

  12. Optimal shifting control strategy in inertia phase of an automatic transmission for automotive applications

    NASA Astrophysics Data System (ADS)

    Meng, Fei; Tao, Gang; Zhang, Tao; Hu, Yihuai; Geng, Peng

    2015-08-01

    Shift quality is a crucial factor throughout the automobile industry. To ensure an optimal gear-shifting strategy with the best fuel economy for a stepped automatic transmission, the controller must be designed to cope with the lack of a feedback sensor for measuring the relevant variables. This paper focuses on a new kind of automatic transmission that uses a proportional solenoid valve to control the clutch pressure; a control strategy based on the speed difference across the clutch is designed for shift control during the inertia phase. First, the mechanical system is described and the system dynamic model is built. Second, the control strategy is designed based on analysis of models derived from the dynamics of the driveline and the electro-hydraulic actuator. The controller uses conventional Proportional-Integral-Derivative (PID) control theory, and a robust two-degree-of-freedom controller is also developed to determine the optimal control parameters and further improve system performance. Finally, the designed control strategy is implemented with each controller on a simulation model. The compared results show that the clutch speed difference tracks the desired trajectory well and that shift quality improves effectively.
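    The PID part of such a shift controller can be sketched as follows: a discrete PID loop drives a crude first-order stand-in for the clutch/driveline plant along a ramp-down speed-difference trajectory. All gains, plant parameters, and the trajectory are invented for illustration; the paper's robust two-degree-of-freedom extension is not reproduced.

```python
def pid_track(setpoints, dt=0.01, kp=8.0, ki=2.0, kd=0.1):
    """Discrete PID tracking of a clutch speed-difference trajectory.

    Plant: a first-order model dx/dt = -a*x + b*u standing in for the
    clutch/driveline dynamics (hypothetical parameters a, b).
    """
    a, b = 1.0, 5.0
    x, integral, prev_err = setpoints[0], 0.0, 0.0
    trace = []
    for sp in setpoints:
        err = sp - x
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        prev_err = err
        x += dt * (-a * x + b * u)                  # plant update
        trace.append(x)
    return trace

# Ramp the desired speed difference from 100 rad/s down to 0 over the
# inertia phase, then hold at zero (lock-up).
targets = [100.0 * max(0.0, 1.0 - k / 300.0) for k in range(400)]
```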

  13. Automatic video summarization driven by a spatio-temporal attention model

    NASA Astrophysics Data System (ADS)

    Barland, R.; Saadane, A.

    2008-02-01

    According to the literature, automatic video summarization techniques can be classified in two parts, following the output nature: "video skims", which are generated using portions of the original video and "key-frame sets", which correspond to the images, selected from the original video, having a significant semantic content. The difference between these two categories is reduced when we consider automatic procedures. Most of the published approaches are based on the image signal and use either pixel characterization or histogram techniques or image decomposition by blocks. However, few of them integrate properties of the Human Visual System (HVS). In this paper, we propose to extract keyframes for video summarization by studying the variations of salient information between two consecutive frames. For each frame, a saliency map is produced simulating the human visual attention by a bottom-up (signal-dependent) approach. This approach includes three parallel channels for processing three early visual features: intensity, color and temporal contrasts. For each channel, the variations of the salient information between two consecutive frames are computed. These outputs are then combined to produce the global saliency variation which determines the key-frames. Psychophysical experiments have been defined and conducted to analyze the relevance of the proposed key-frame extraction algorithm.
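    The frame-to-frame saliency-variation idea can be sketched as follows. Frames are simplified to 1-D intensity vectors and the saliency model to absolute deviation from the frame mean; the paper's separate intensity, color, and temporal-contrast channels and their combination are not reproduced.

```python
def saliency_map(frame):
    """Crude intensity-contrast 'saliency': absolute deviation of each
    pixel from the frame mean (stand-in for a real saliency model)."""
    mean = sum(frame) / len(frame)
    return [abs(p - mean) for p in frame]

def key_frames(frames, thresh):
    """Flag frame k as a key-frame when the global saliency variation
    between the saliency maps of frames k-1 and k exceeds `thresh`."""
    keys = []
    prev = saliency_map(frames[0])
    for k in range(1, len(frames)):
        cur = saliency_map(frames[k])
        variation = sum(abs(a - b) for a, b in zip(prev, cur))
        if variation > thresh:
            keys.append(k)
        prev = cur
    return keys
```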

  14. Automatic affective appraisal of sexual penetration stimuli in women with vaginismus or dyspareunia.

    PubMed

    Huijding, Jorg; Borg, Charmaine; Weijmar-Schultz, Willibrord; de Jong, Peter J

    2011-03-01

    Current psychological views are that negative appraisals of sexual stimuli lie at the core of sexual dysfunctions. It is important to differentiate between deliberate appraisals and more automatic appraisals, as research has shown that the former are most relevant to controllable behaviors, and the latter are most relevant to reflexive behaviors. Accordingly, it can be hypothesized that in women with vaginismus, the persistent difficulty to allow vaginal entry is due to global negative automatic affective appraisals that trigger reflexive pelvic floor muscle contraction at the prospect of penetration. To test whether sexual penetration pictures elicited global negative automatic affective appraisals in women with vaginismus or dyspareunia and to examine whether deliberate appraisals and automatic appraisals differed between the two patient groups. Women with persistent vaginismus (N = 24), dyspareunia (N = 23), or no sexual complaints (N = 30) completed a pictorial Extrinsic Affective Simon Task (EAST), and then made a global affective assessment of the EAST stimuli using visual analogue scales (VAS). The EAST assessed global automatic affective appraisals of sexual penetration stimuli, while the VAS assessed global deliberate affective appraisals of these stimuli. Automatic affective appraisals of sexual penetration stimuli tended to be positive, independent of the presence of sexual complaints. Deliberate appraisals of the same stimuli were significantly more negative in the women with vaginismus than in the dyspareunia group and control group, while the latter two groups did not differ in their appraisals. Unexpectedly, deliberate appraisals seemed to be most important in vaginismus, whereas dyspareunia did not seem to implicate negative deliberate or automatic affective appraisals. 
These findings dispute the view that global automatic affect lies at the core of vaginismus and indicate that a useful element in therapeutic interventions may be the modification of deliberate global affective appraisals of sexual penetration (e.g., via counter-conditioning). © 2010 International Society for Sexual Medicine.

  15. Dissociating Working Memory Updating and Automatic Updating: The Reference-Back Paradigm

    ERIC Educational Resources Information Center

    Rac-Lubashevsky, Rachel; Kessler, Yoav

    2016-01-01

    Working memory (WM) updating is a controlled process through which relevant information in the environment is selected to enter the gate to WM and substitute its contents. We suggest that there is also an automatic form of updating, which influences performance in many tasks and is primarily manifested in reaction time sequential effects. The goal…

  16. The Measurement of Term Importance in Automatic Indexing.

    ERIC Educational Resources Information Center

    Salton, G.; And Others

    1981-01-01

    Reviews major term-weighting theories, presents methods for estimating the relevance properties of terms based on their frequency characteristics in a document collection, and compares weighting systems using term relevance properties with more conventional frequency-based methodologies. Eighteen references are cited. (Author/FM)
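    Frequency-based term-importance weighting of the kind reviewed here is commonly realized today as tf-idf; a minimal sketch under that assumption (the article's specific relevance-based weighting formulas are not reproduced):

```python
import math

def term_weights(docs):
    """tf-idf weighting: weight(t, d) = tf(t, d) * log(N / df(t)),
    where df(t) is the number of documents containing term t."""
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        w = {}
        for term in doc:                 # raw term frequency
            w[term] = w.get(term, 0) + 1
        for term in w:                   # scale by inverse document frequency
            w[term] *= math.log(n / df[term])
        weights.append(w)
    return weights
```

Terms concentrated in few documents get high weights; terms occurring everywhere get weight zero, matching the intuition that ubiquitous terms carry no relevance signal.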

  17. Historical maintenance relevant information road-map for a self-learning maintenance prediction procedural approach

    NASA Astrophysics Data System (ADS)

    Morales, Francisco J.; Reyes, Antonio; Cáceres, Noelia; Romero, Luis M.; Benitez, Francisco G.; Morgado, Joao; Duarte, Emanuel; Martins, Teresa

    2017-09-01

    A large percentage of transport infrastructure is composed of linear assets, such as roads and rail tracks. The large social and economic relevance of these constructions compels stakeholders to ensure their prolonged health and durability. Even so, inevitable malfunctions, breakdowns, and out-of-service periods arise randomly during the life cycle of the infrastructure. Predictive maintenance techniques tend to diminish the occurrence of unpredicted failures and the execution of corrective interventions by anticipating the appropriate interventions to be conducted before failures show up. This communication presents: i) a procedural approach for collecting the relevant information regarding the evolving condition of the assets involved in all maintenance interventions; this reported and stored information constitutes a rich historical database for training machine learning algorithms to generate reliable predictions of the interventions to be carried out in future scenarios; ii) a schematic flow chart of the automatic learning procedure; iii) self-learning rules derived automatically from false positives/negatives. The description, testing, automatic learning approach, and outcomes of a pilot case are presented; finally, some conclusions are outlined regarding the proposed methodology for improving the self-learning predictive capability.

  18. Learning to rank-based gene summary extraction.

    PubMed

    Shang, Yue; Hao, Huihui; Wu, Jiajin; Lin, Hongfei

    2014-01-01

    In recent years, the biomedical literature has been growing rapidly. These articles provide a large amount of information about proteins, genes and their interactions. Reading such a huge amount of literature is a tedious task for researchers seeking knowledge about a gene. As a result, it is important for biomedical researchers to gain a quick understanding of a query concept by integrating its relevant resources. In the task of gene summary generation, we regard automatic summarization as a ranking problem and apply the method of learning to rank to solve it automatically. This paper uses three features as a basis for sentence selection: gene ontology relevance, topic relevance and TextRank. From there, we obtain the feature weight vector using the learning to rank algorithm, predict the scores of candidate summary sentences, and take the top sentences to generate the summary. ROUGE (a toolkit for automatic summarization evaluation) was used to evaluate the summarization results, and the experiments showed that our method outperforms the baseline techniques. According to the experimental results, the combination of the three features improves the performance of the summary. The application of learning to rank facilitates the further expansion of features for measuring the significance of sentences.
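    The extraction step can be sketched as scoring each candidate sentence by a weighted sum of its three features and keeping the top-k. In practice the weight vector comes from a learning-to-rank algorithm; the sentences, feature values, and weights below are invented for illustration.

```python
def rank_sentences(sentences, weights, top_k=2):
    """Score each (text, features) pair as a weighted sum of features
    (gene-ontology relevance, topic relevance, TextRank score) and
    return the top-k texts as the extracted summary."""
    scored = []
    for text, feats in sentences:
        score = sum(w * f for w, f in zip(weights, feats))
        scored.append((score, text))
    scored.sort(reverse=True)
    return [text for _, text in scored[:top_k]]

# Hypothetical candidates with (GO relevance, topic relevance, TextRank).
candidates = [
    ("BRCA1 repairs double-strand breaks.", (0.9, 0.8, 0.7)),
    ("The weather was mild that year.",      (0.0, 0.1, 0.2)),
    ("BRCA1 mutations raise cancer risk.",   (0.8, 0.9, 0.6)),
]
```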

  19. Sparse Bayesian Learning for Identifying Imaging Biomarkers in AD Prediction

    PubMed Central

    Shen, Li; Qi, Yuan; Kim, Sungeun; Nho, Kwangsik; Wan, Jing; Risacher, Shannon L.; Saykin, Andrew J.

    2010-01-01

    We apply sparse Bayesian learning methods, automatic relevance determination (ARD) and predictive ARD (PARD), to Alzheimer’s disease (AD) classification to make accurate predictions while identifying critical imaging markers relevant to AD. ARD is one of the most successful Bayesian feature selection methods. PARD is a powerful Bayesian feature selection method that provides sparse models which are easy to interpret. PARD selects the model with the best estimate of the predictive performance instead of choosing the one with the largest marginal model likelihood. A comparative study with the support vector machine (SVM) shows that ARD/PARD in general outperform SVM in terms of prediction accuracy. An additional comparison with surface-based general linear model (GLM) analysis shows that the regions with the strongest signals are identified by both GLM and ARD/PARD. While the GLM P-map returns significant regions all over the cortex, ARD/PARD provide a small number of relevant and meaningful imaging markers with predictive power, including both cortical and subcortical measures. PMID:20879451
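    The core of ARD can be sketched for the special case of orthogonal feature columns, where the per-feature posterior is closed-form and the MacKay fixed-point update alpha_i <- gamma_i / m_i^2 drives the precision of irrelevant features toward infinity, pruning them. The data, noise precision, and pruning cap below are invented for illustration; this is not the paper's PARD implementation.

```python
def ard_orthogonal(features, targets, beta=25.0, iters=50, prune=1e6):
    """Minimal ARD for a linear model with orthogonal feature columns.

    Each feature i has its own prior precision alpha_i; the fixed-point
    update alpha_i <- gamma_i / m_i^2 sends alpha_i of irrelevant
    features to infinity (capped at `prune`), zeroing their weights.
    """
    alphas = [1.0] * len(features)
    means = [0.0] * len(features)
    for _ in range(iters):
        for i, phi in enumerate(features):
            phi_sq = sum(x * x for x in phi)                    # phi_i^T phi_i
            phi_t = sum(x, ) if False else sum(x * t for x, t in zip(phi, targets))
            a_post = alphas[i] + beta * phi_sq                  # posterior precision
            means[i] = beta * phi_t / a_post                    # posterior mean weight
            gamma = 1.0 - alphas[i] / a_post                    # well-determinedness
            alphas[i] = min(gamma / (means[i] ** 2 + 1e-12), prune)
    relevant = [i for i, a in enumerate(alphas) if a < prune]
    return means, alphas, relevant

# Two orthogonal candidate features; only the first drives the (noisy)
# targets. Toy data invented for illustration.
phi_relevant   = [1, 1, 1, 1, -1, -1, -1, -1]
phi_irrelevant = [1, -1, 1, -1, 1, -1, 1, -1]
targets = [2.2, 1.9, 2.0, 2.0, -2.0, -2.0, -2.0, -2.0]
```

Running this keeps the relevant feature with weight close to the true value 2 and prunes the irrelevant one, which is exactly the sparsity behavior the abstract describes.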

  20. Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information.

    PubMed

    Zhang, Chi; Tong, Li; Zeng, Ying; Jiang, Jingfang; Bu, Haibing; Yan, Bin; Li, Jianxin

    2015-01-01

    Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information, which was acquired in advance. Subsequently, signal reconstruction without artifact components was performed to obtain artifact-free signals. The results showed that, using this automatic online artifact removal method, there were statistically significant improvements in the classification accuracies in both experiments, namely, motor imagery and emotion recognition.
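    The identification step can be sketched as template matching: assuming the wavelet-ICA decomposition has already produced components and a mixing matrix, components that correlate strongly with a stored a priori artifact template are dropped before reconstruction. All signals and the threshold below are invented; the wavelet/ICA stage itself is not reproduced.

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    na = math.sqrt(sum((x - ma) ** 2 for x in a))
    nb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (na * nb)

def remove_artifact_components(components, mixing, template, thresh=0.8):
    """Drop components whose |correlation| with the a priori artifact
    template exceeds `thresh`, then rebuild each channel from the
    remaining components via the mixing matrix."""
    keep = [i for i, c in enumerate(components)
            if abs(pearson(c, template)) < thresh]
    n = len(components[0])
    recon = [[sum(mixing[ch][i] * components[i][t] for i in keep)
              for t in range(n)] for ch in range(len(mixing))]
    return recon, keep
```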

  1. Automatic Artifact Removal from Electroencephalogram Data Based on A Priori Artifact Information

    PubMed Central

    Zhang, Chi; Tong, Li; Zeng, Ying; Jiang, Jingfang; Bu, Haibing; Li, Jianxin

    2015-01-01

    Electroencephalogram (EEG) is susceptible to various nonneural physiological artifacts. Automatic artifact removal from EEG data remains a key challenge for extracting relevant information from brain activities. To adapt to variable subjects and EEG acquisition environments, this paper presents an automatic online artifact removal method based on a priori artifact information. The combination of discrete wavelet transform and independent component analysis (ICA), wavelet-ICA, was utilized to separate artifact components. The artifact components were then automatically identified using a priori artifact information, which was acquired in advance. Subsequently, signal reconstruction without artifact components was performed to obtain artifact-free signals. The results showed that, using this automatic online artifact removal method, there were statistically significant improvements in the classification accuracies in both experiments, namely, motor imagery and emotion recognition. PMID:26380294

  2. Culture, attribution and automaticity: a social cognitive neuroscience view

    PubMed Central

    Morris, Michael W.

    2010-01-01

    A fundamental challenge facing social perceivers is identifying the cause underlying other people’s behavior. Evidence indicates that East Asian perceivers are more likely than Western perceivers to reference the social context when attributing a cause to a target person’s actions. One outstanding question is whether this reflects a culture’s influence on automatic or on controlled components of causal attribution. After reviewing behavioral evidence that culture can shape automatic mental processes as well as controlled reasoning, we discuss the evidence in favor of cultural differences in automatic and controlled components of causal attribution more specifically. We contend that insights emerging from social cognitive neuroscience research can inform this debate. After introducing an attribution framework popular among social neuroscientists, we consider findings relevant to the automaticity of attribution, before speculating how one could use a social neuroscience approach to clarify whether culture affects automatic, controlled or both types of attribution processes. PMID:20460302

  3. Relevance of Google-customized search engine vs. CISMeF quality-controlled health gateway.

    PubMed

    Gehanno, Jean-François; Kerdelhue, Gaétan; Sakji, Saoussen; Massari, Philippe; Joubert, Michel; Darmoni, Stéfan J

    2009-01-01

    CISMeF (acronym for Catalog and Index of French Language Health Resources on the Internet) is a quality-controlled health gateway conceived to catalog and index the most important and quality-controlled sources of institutional health information in French. The goal of this study is to compare the relevance of results provided by this gateway, from a small set of documents selected and described by human experts, to those provided by a search engine from a large set of automatically indexed and ranked resources. The Google Customized Search Engine (CSE) was used. The evaluation was made using the first 10 results of 15 queries and two blinded physician evaluators. There was no significant difference between the relevance of information retrieval in CISMeF and Google CSE. In conclusion, automatic indexing does not lead to lower relevance than manual MeSH indexing and may help to cope with the increasing number of references to be indexed in a controlled health quality gateway.

  4. Simple and efficient machine learning frameworks for identifying protein-protein interaction relevant articles and experimental methods used to study the interactions.

    PubMed

    Agarwal, Shashank; Liu, Feifan; Yu, Hong

    2011-10-03

    Protein-protein interaction (PPI) is an important biomedical phenomenon. Automatically detecting PPI-relevant articles and identifying methods that are used to study PPI are important text mining tasks. In this study, we have explored domain independent features to develop two open source machine learning frameworks. One performs binary classification to determine whether the given article is PPI relevant or not, named "Simple Classifier", and the other one maps the PPI relevant articles with corresponding interaction method nodes in a standardized PSI-MI (Proteomics Standards Initiative-Molecular Interactions) ontology, named "OntoNorm". We evaluated our system in the context of BioCreative challenge competition using the standardized data set. Our systems are amongst the top systems reported by the organizers, attaining 60.8% F1-score for identifying relevant documents, and 52.3% F1-score for mapping articles to interaction method ontology. Our results show that domain-independent machine learning frameworks can perform competitively well at the tasks of detecting PPI relevant articles and identifying the methods that were used to study the interaction in such articles. Simple Classifier is available at http://sourceforge.net/p/simpleclassify/home/ and OntoNorm at http://sourceforge.net/p/ontonorm/home/.

  5. Automatic data partitioning on distributed memory multicomputers. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gupta, Manish

    1992-01-01

    Distributed-memory parallel computers are increasingly being used to provide high levels of performance for scientific applications. Unfortunately, such machines are not very easy to program. A number of research efforts seek to alleviate this problem by developing compilers that take over the task of generating communication. The communication overheads and the extent of parallelism exploited in the resulting target program are determined largely by the manner in which data is partitioned across different processors of the machine. Most of the compilers provide no assistance to the programmer in the crucial task of determining a good data partitioning scheme. A novel approach is presented, the constraints-based approach, to the problem of automatic data partitioning for numeric programs. In this approach, the compiler identifies some desirable requirements on the distribution of various arrays being referenced in each statement, based on performance considerations. These desirable requirements are referred to as constraints. For each constraint, the compiler determines a quality measure that captures its importance with respect to the performance of the program. The quality measure is obtained through static performance estimation, without actually generating the target data-parallel program with explicit communication. Each data distribution decision is taken by combining all the relevant constraints. The compiler attempts to resolve any conflicts between constraints such that the overall execution time of the parallel program is minimized. This approach has been implemented as part of a compiler called Paradigm, that accepts Fortran 77 programs, and specifies the partitioning scheme to be used for each array in the program. We have obtained results on some programs taken from the Linpack and Eispack libraries, and the Perfect Benchmarks. 
These results are quite promising, and demonstrate the feasibility of automatic data partitioning for a significant class of scientific application programs with regular computations.
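    The constraint-combination step can be sketched as follows: each statement contributes constraints on an array's distribution, each with a quality measure, and the distribution with the highest combined quality wins. Array names, distribution labels, and quality values are invented, and Paradigm's actual conflict resolution and static performance estimation are far more elaborate.

```python
def choose_partitioning(constraints):
    """Combine per-statement constraints: for each array, total the
    quality measures of each requested distribution and keep the one
    with the highest combined quality (a greedy stand-in for the
    compiler's conflict resolution)."""
    totals = {}
    for array, dist, quality in constraints:
        totals.setdefault(array, {})
        totals[array][dist] = totals[array].get(dist, 0.0) + quality
    return {array: max(dists, key=dists.get)
            for array, dists in totals.items()}

# Hypothetical constraints gathered from three statements.
constraints = [
    ("A", "block-rows", 5.0),  # stencil access favors row blocks
    ("A", "block-cols", 2.0),  # one statement prefers column blocks
    ("A", "block-rows", 4.0),
    ("B", "replicated", 3.0),
]
```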

  6. Automatic Detection of Galaxy Type From Datasets of Galaxies Image Based on Image Retrieval Approach.

    PubMed

    Abd El Aziz, Mohamed; Selim, I M; Xiong, Shengwu

    2017-06-30

    This paper presents a new approach for the automatic detection of galaxy morphology from datasets based on an image-retrieval approach. Currently, there are several classification methods proposed to detect galaxy types within an image. However, in some situations, the aim is not only to determine the type of galaxy within the queried image, but also to determine the most similar images for query image. Therefore, this paper proposes an image-retrieval method to detect the type of galaxies within an image and return with the most similar image. The proposed method consists of two stages, in the first stage, a set of features is extracted based on shape, color and texture descriptors, then a binary sine cosine algorithm selects the most relevant features. In the second stage, the similarity between the features of the queried galaxy image and the features of other galaxy images is computed. Our experiments were performed using the EFIGI catalogue, which contains about 5000 galaxies images with different types (edge-on spiral, spiral, elliptical and irregular). We demonstrate that our proposed approach has better performance compared with the particle swarm optimization (PSO) and genetic algorithm (GA) methods.

  7. Artificial Intelligence/Robotics Applications to Navy Aircraft Maintenance.

    DTIC Science & Technology

    1984-06-01

    This report surveys robotics technologies and relevant AI technologies, including expert systems, automatic planning, natural language, and machine vision. Robots can operate automatic machinery such as presses, molding machines, and numerically-controlled machine tools, just as people do. Artificial intelligence is concerned with the functions of the brain, whereas robotics is concerned with building machines that imitate human behavior.

  8. Terminologies for text-mining; an experiment in the lipoprotein metabolism domain

    PubMed Central

    Alexopoulou, Dimitra; Wächter, Thomas; Pickersgill, Laura; Eyre, Cecilia; Schroeder, Michael

    2008-01-01

    Background The engineering of ontologies, especially with a view to a text-mining use, is still a new research field. There does not yet exist a well-defined theory and technology for ontology construction. Many of the ontology design steps remain manual and are based on personal experience and intuition. However, there exist a few efforts on automatic construction of ontologies in the form of extracted lists of terms and relations between them. Results We share experience acquired during the manual development of a lipoprotein metabolism ontology (LMO) to be used for text-mining. We compare the manually created ontology terms with the automatically derived terminology from four different automatic term recognition (ATR) methods. The top 50 predicted terms contain up to 89% relevant terms. For the top 1000 terms the best method still generates 51% relevant terms. In a corpus of 3066 documents 53% of LMO terms are contained and 38% can be generated with one of the methods. Conclusions Given high precision, automatic methods can help decrease development time and provide significant support for the identification of domain-specific vocabulary. The coverage of the domain vocabulary depends strongly on the underlying documents. Ontology development for text mining should be performed in a semi-automatic way; taking ATR results as input and following the guidelines we described. Availability The TFIDF term recognition is available as Web Service, described at PMID:18460175

  9. Color normalization for robust evaluation of microscopy images

    NASA Astrophysics Data System (ADS)

    Švihlík, Jan; Kybic, Jan; Habart, David

    2015-09-01

    This paper deals with color normalization of microscopy images of Langerhans islets in order to increase robustness of the islet segmentation to illumination changes. The main application is automatic quantitative evaluation of the islet parameters, useful for determining the feasibility of islet transplantation in diabetes. First, background illumination inhomogeneity is compensated and a preliminary foreground/background segmentation is performed. The color normalization itself is done in either lαβ or logarithmic RGB color spaces, by comparison with a reference image. The color-normalized images are segmented using color-based features and pixel-wise logistic regression, trained on manually labeled images. Finally, relevant statistics such as the total islet area are evaluated in order to determine the success likelihood of the transplantation.
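    The reference-image comparison can be sketched as per-channel matching of mean and standard deviation in logarithmic space (one of the two color spaces the abstract mentions); this is a simplified stand-in for the paper's actual normalization, with toy channel values.

```python
import math

def normalize_channel(values, ref_values, eps=1e-6):
    """Match the log-space mean and standard deviation of one color
    channel to those of a reference image (values in (0, 1])."""
    logs = [math.log(v + eps) for v in values]
    ref_logs = [math.log(v + eps) for v in ref_values]

    def stats(xs):
        m = sum(xs) / len(xs)
        s = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
        return m, s

    m, s = stats(logs)
    rm, rs = stats(ref_logs)
    scale = rs / s if s > 0 else 1.0
    # Shift/scale in log space, then map back to intensities.
    return [math.exp(rm + scale * (x - m)) - eps for x in logs]
```

After normalization the channel's log-space statistics equal those of the reference, so downstream segmentation sees consistent colors across illumination conditions.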

  10. Prediction of biomechanical parameters of the proximal femur using statistical appearance models and support vector regression.

    PubMed

    Fritscher, Karl; Schuler, Benedikt; Link, Thomas; Eckstein, Felix; Suhm, Norbert; Hänni, Markus; Hengg, Clemens; Schubert, Rainer

    2008-01-01

    Fractures of the proximal femur are one of the principal causes of mortality among elderly persons. Traditional methods for determining femoral fracture risk rely on measurements of bone mineral density (BMD). However, BMD alone is not sufficient to predict bone failure load for an individual patient, and additional parameters have to be determined for this purpose. In this work, an approach that uses statistical models of appearance to identify relevant regions and parameters for the prediction of biomechanical properties of the proximal femur is presented. By using support vector regression, the proposed model-based approach is capable of predicting two different biomechanical parameters accurately and fully automatically in two different testing scenarios.

  11. Visual Benefits in Apparent Motion Displays: Automatically Driven Spatial and Temporal Anticipation Are Partially Dissociated

    PubMed Central

    Ahrens, Merle-Marie; Veniero, Domenica; Gross, Joachim; Harvey, Monika; Thut, Gregor

    2015-01-01

    Many behaviourally relevant sensory events such as motion stimuli and speech have an intrinsic spatio-temporal structure. This will engage intentional and most likely unintentional (automatic) prediction mechanisms enhancing the perception of upcoming stimuli in the event stream. Here we sought to probe the anticipatory processes that are automatically driven by rhythmic input streams in terms of their spatial and temporal components. To this end, we employed an apparent visual motion paradigm testing the effects of pre-target motion on lateralized visual target discrimination. The motion stimuli either moved towards or away from peripheral target positions (valid vs. invalid spatial motion cueing) at a rhythmic or arrhythmic pace (valid vs. invalid temporal motion cueing). Crucially, we emphasized automatic motion-induced anticipatory processes by rendering the motion stimuli non-predictive of upcoming target position (by design) and task-irrelevant (by instruction), and by creating instead endogenous (orthogonal) expectations using symbolic cueing. Our data revealed that the apparent motion cues automatically engaged both spatial and temporal anticipatory processes, but that these processes were dissociated. We further found evidence for lateralisation of anticipatory temporal but not spatial processes. This indicates that distinct mechanisms may drive automatic spatial and temporal extrapolation of upcoming events from rhythmic event streams. This contrasts with previous findings that instead suggest an interaction between spatial and temporal attention processes when endogenously driven. Our results further highlight the need for isolating intentional from unintentional processes for better understanding the various anticipatory mechanisms engaged in processing behaviourally relevant stimuli with predictable spatio-temporal structure such as motion and speech. PMID:26623650

  12. Automatic optical inspection system design for golf ball

    NASA Astrophysics Data System (ADS)

    Wu, Hsien-Huang; Su, Jyun-Wei; Chen, Chih-Lin

    2016-09-01

    With the growing popularity of golf all over the world, the quantities of relevant products are increasing year by year. To create innovation and improve quality while reducing production cost, automation of manufacturing has become a necessary and important issue. This paper reflects this trend toward production automation. It uses AOI (Automated Optical Inspection) technology to develop a system which can automatically detect defects on a golf ball. The current manual quality inspection is not only error-prone but also very manpower-demanding. Taking into consideration the competition in this industry in the near future, the development of related AOI equipment must be conducted as soon as possible. Due to the strong reflective property of the ball surface, as well as its surface dimples and subtle flaws, it is very difficult to capture images of sufficient quality for automatic inspection. Based on the surface properties and shape of the ball, the lighting environment and structure for image acquisition have been properly designed. Area-scan cameras have been used to acquire images with good contrast between defects and background to assure the achievement of the goal of automatic defect detection on the golf ball. The result obtained is that more than 97% of the NG (no-good) balls are detected, and the system maintains less than a 10% false-alarm rate. The balls which are determined by the system to be NG are then inspected by human eye again. Therefore, the manpower spent on inspection has been reduced by 90%.

  13. Effect of automatic record keeping on vigilance and record keeping time.

    PubMed

    Allard, J; Dzwonczyk, R; Yablok, D; Block, F E; McDonald, J S

    1995-05-01

    We have evaluated the effect of an automatic anaesthesia record keeper (AARK) on record keeping time and vigilance. With informed patient consent and institutional approval, we videotaped the attending anaesthetist and his/her immediate surroundings during 66 surgical procedures. Thirty-seven cases were charted manually and the remaining 29 were charted with a commercially available AARK. In order to evaluate vigilance, a physician examiner entered the operating room unannounced once during 33 of the manually charted cases and during 22 of the automatically charted cases and asked the anaesthetist to turn away from the monitors and recall the current value of eight patient physiological variables. The examiner recorded the recalled values and also the actual current monitor values of these variables. The videotapes were reviewed and the anaesthetist's intraoperative time was categorized into 15 predefined activities, including intraoperative anaesthesia record keeping time. We compared recalled and actual variable values to determine if the recalled values were within clinically relevant error limits. There was no statistical difference between the mean percentage case time spent recording manually (14.11 (SD 3.98)%) and automatically (12.39 (3.92)%). Moreover, use of the AARK did not significantly affect vigilance. Despite major advances in monitoring technology over the past 14 years, record keeping still occupies 10-15% of the anaesthetist's intraoperative time. It appears that in using an AARK, the anaesthetist reallocates intraoperative record keeping time from manual charting to dealing with problems in the anaesthetist machine interface caused by inadequate design.

  14. A Comparison of Two Methods for Boolean Query Relevancy Feedback.

    ERIC Educational Resources Information Center

    Salton, G.; And Others

    1984-01-01

    Evaluates and compares two recently proposed automatic methods for relevance feedback of Boolean queries (Dillon method, which uses probabilistic approach as basis, and disjunctive normal form method). Conclusions are drawn concerning the use of effective feedback methods in a Boolean query environment. Nineteen references are included. (EJS)

  15. SU-F-T-65: Automatic Treatment Planning for High-Dose Rate (HDR) Brachytherapy with a Vaginal Cylinder Applicator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Y; Tan, J; Jiang, S

    Purpose: High dose rate (HDR) brachytherapy treatment planning is conventionally performed in a manual fashion. Yet it is highly desirable to perform computerized automated planning to improve treatment planning efficiency, eliminate human errors, and reduce plan quality variation. The goal of this research is to develop an automatic treatment planning tool for HDR brachytherapy with a cylinder applicator for vaginal cancer. Methods: After inserting the cylinder applicator into the patient, a CT scan was acquired and was loaded to an in-house developed treatment planning software. The cylinder applicator was automatically segmented using image-processing techniques. CTV was generated based on user-specified treatment depth and length. Locations of relevant points (apex point, prescription point, and vaginal surface point), central applicator channel coordinates, and dwell positions were determined according to their geometric relations with the applicator. Dwell time was computed through an inverse optimization process. The planning information was written into DICOM-RT plan and structure files to transfer the automatically generated plan to a commercial treatment planning system for plan verification and delivery. Results: We have tested the system retrospectively in nine patients treated with vaginal cylinder applicator. These cases were selected with different treatment prescriptions, lengths, depths, and cylinder diameters to represent a large patient population. Our system was able to generate treatment plans for these cases with clinically acceptable quality. Computation time varied from 3–6 min. Conclusion: We have developed a system to perform automated treatment planning for HDR brachytherapy with a cylinder applicator. Such a novel system has greatly improved treatment planning efficiency and reduced plan quality variation. It also served as a testbed to demonstrate the feasibility of automatic HDR treatment planning for more complicated cases.

  16. Query Expansion for Noisy Legal Documents

    DTIC Science & Technology

    2008-11-01

    [9] G. Salton (ed.). The SMART retrieval system: experiments in automatic document processing. 1971. [10] H. Schutze and J. Pedersen. A cooccurrence...Language Modeling and Information Retrieval. http://www.lemurproject.org. [2] J. Baron, D. Lewis, and D. Oard. TREC 2006 legal track overview. In...Retrieval, 1993. [8] J. Rocchio. Relevance feedback in information retrieval. In The SMART retrieval system: experiments in automatic document processing, 1971

  17. Expert Knowledge-Based Automatic Sleep Stage Determination by Multi-Valued Decision Making Method

    NASA Astrophysics Data System (ADS)

    Wang, Bei; Sugi, Takenao; Kawana, Fusae; Wang, Xingyu; Nakamura, Masatoshi

    In this study, an expert knowledge-based automatic sleep stage determination system based on a multi-valued decision-making method is developed. Visual inspection by a qualified clinician is adopted to obtain the expert knowledge database. The expert knowledge database consists of probability density functions of parameters for various sleep stages. Sleep stages are determined automatically according to the conditional probability. In total, four subjects participated. The automatic sleep stage determination results showed close agreement with the visual inspection on sleep stages of awake, REM (rapid eye movement), light sleep and deep sleep. The constructed expert knowledge database reflects the distributions of characteristic parameters and can be adapted to variable sleep data in hospitals. The developed automatic determination technique based on expert knowledge of visual inspection can be an assistant tool enabling further inspection of sleep disorder cases in clinical practice.
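The decision rule described above (choosing the stage whose expert-derived probability densities best explain the observed parameters) can be sketched as follows. The stage set matches the abstract, but the feature names and Gaussian parameters are illustrative assumptions, not values from the paper:

```python
import math

# Hypothetical expert-knowledge database: per-stage Gaussian (mean, std)
# for two sleep-related features. Values are illustrative only.
STAGE_PDFS = {
    "awake":       {"delta_power": (0.1, 0.05), "emg_level": (0.8, 0.1)},
    "rem":         {"delta_power": (0.2, 0.05), "emg_level": (0.1, 0.05)},
    "light_sleep": {"delta_power": (0.4, 0.1),  "emg_level": (0.3, 0.1)},
    "deep_sleep":  {"delta_power": (0.8, 0.1),  "emg_level": (0.2, 0.1)},
}

def gaussian_pdf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def determine_stage(features):
    """Pick the stage with the highest likelihood, assuming independent features."""
    scores = {}
    for stage, pdfs in STAGE_PDFS.items():
        score = 1.0
        for name, (mean, std) in pdfs.items():
            score *= gaussian_pdf(features[name], mean, std)
        scores[stage] = score
    return max(scores, key=scores.get)
```

With high delta power and low muscle tone, `determine_stage({"delta_power": 0.82, "emg_level": 0.18})` selects deep sleep under these toy densities.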

  18. Automatic guidance of attention during real-world visual search.

    PubMed

    Seidl-Rathkopf, Katharina N; Turk-Browne, Nicholas B; Kastner, Sabine

    2015-08-01

    Looking for objects in cluttered natural environments is a frequent task in everyday life. This process can be difficult, because the features, locations, and times of appearance of relevant objects often are not known in advance. Thus, a mechanism by which attention is automatically biased toward information that is potentially relevant may be helpful. We tested for such a mechanism across five experiments by engaging participants in real-world visual search and then assessing attentional capture for information that was related to the search set but was otherwise irrelevant. Isolated objects captured attention while preparing to search for objects from the same category embedded in a scene, as revealed by lower detection performance (Experiment 1A). This capture effect was driven by a central processing bottleneck rather than the withdrawal of spatial attention (Experiment 1B), occurred automatically even in a secondary task (Experiment 2A), and reflected enhancement of matching information rather than suppression of nonmatching information (Experiment 2B). Finally, attentional capture extended to objects that were semantically associated with the target category (Experiment 3). We conclude that attention is efficiently drawn towards a wide range of information that may be relevant for an upcoming real-world visual search. This mechanism may be adaptive, allowing us to find information useful for our behavioral goals in the face of uncertainty.

  19. Evolving Spiking Neural Networks for Recognition of Aged Voices.

    PubMed

    Silva, Marco; Vellasco, Marley M B R; Cataldo, Edson

    2017-01-01

    The aging of the voice, known as presbyphonia, is a natural process that can cause great change in vocal quality of the individual. This is a relevant problem to those people who use their voices professionally, and its early identification can help determine a suitable treatment to avoid its progress or even to eliminate the problem. This work focuses on the development of a new model for the identification of aging voices (independently of their chronological age), using as input attributes parameters extracted from the voice and glottal signals. The proposed model, named Quantum binary-real evolving Spiking Neural Network (QbrSNN), is based on spiking neural networks (SNNs), with an unsupervised training algorithm, and a Quantum-Inspired Evolutionary Algorithm that automatically determines the most relevant attributes and the optimal parameters that configure the SNN. The QbrSNN model was evaluated in a database composed of 120 records, containing samples from three groups of speakers. The results obtained indicate that the proposed model provides better accuracy than other approaches, with fewer input attributes. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  20. G-Bean: an ontology-graph based web tool for biomedical literature retrieval

    PubMed Central

    2014-01-01

    Background Currently, most people use NCBI's PubMed to search the MEDLINE database, an important bibliographical information source for life science and biomedical information. However, PubMed has some drawbacks that make it difficult to find relevant publications pertaining to users' individual intentions, especially for non-expert users. To ameliorate the disadvantages of PubMed, we developed G-Bean, a graph based biomedical search engine, to search biomedical articles in MEDLINE database more efficiently. Methods G-Bean addresses PubMed's limitations with three innovations: (1) Parallel document index creation: a multithreaded index creation strategy is employed to generate the document index for G-Bean in parallel; (2) Ontology-graph based query expansion: an ontology graph is constructed by merging four major UMLS (Version 2013AA) vocabularies, MeSH, SNOMEDCT, CSP and AOD, to cover all concepts in National Library of Medicine (NLM) database; a Personalized PageRank algorithm is used to compute concept relevance in this ontology graph and the Term Frequency - Inverse Document Frequency (TF-IDF) weighting scheme is used to re-rank the concepts. The top 500 ranked concepts are selected for expanding the initial query to retrieve more accurate and relevant information; (3) Retrieval and re-ranking of documents based on user's search intention: after the user selects any article from the existing search results, G-Bean analyzes user's selections to determine his/her true search intention and then uses more relevant and more specific terms to retrieve additional related articles. The new articles are presented to the user in the order of their relevance to the already selected articles. Results Performance evaluation with 106 OHSUMED benchmark queries shows that G-Bean returns more relevant results than PubMed does when using these queries to search the MEDLINE database. 
PubMed could not even return any search result for some OHSUMED queries because it failed to form the appropriate Boolean query statement automatically from the natural language query strings. G-Bean is available at http://bioinformatics.clemson.edu/G-Bean/index.php. Conclusions G-Bean addresses PubMed's limitations with ontology-graph based query expansion, automatic document indexing, and user search intention discovery. It shows significant advantages in finding relevant articles from the MEDLINE database to meet the information need of the user. PMID:25474588
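The TF-IDF re-ranking named in innovation (2) can be illustrated with a minimal sketch. The toy corpus and candidate terms below are assumptions for demonstration; G-Bean's actual implementation scores UMLS concepts over an ontology graph, which is not reproduced here:

```python
import math

# Toy corpus standing in for concept-annotated documents.
docs = [
    "heart attack myocardial infarction treatment",
    "myocardial infarction risk factors",
    "influenza vaccine efficacy",
]

def tf_idf(term, doc, corpus):
    """Standard TF-IDF: term frequency in the document times a
    log-scaled inverse document frequency over the corpus."""
    tokens = doc.split()
    tf = tokens.count(term) / len(tokens)
    df = sum(1 for d in corpus if term in d.split())
    idf = math.log(len(corpus) / (1 + df)) + 1
    return tf * idf

# Re-rank candidate expansion terms for the first document.
candidates = ["myocardial", "infarction", "influenza", "treatment"]
ranked = sorted(candidates, key=lambda t: tf_idf(t, docs[0], docs), reverse=True)
```

The rarest term occurring in the document ("treatment") ranks first; a term absent from the document ("influenza") scores zero and falls to the bottom.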

  1. G-Bean: an ontology-graph based web tool for biomedical literature retrieval.

    PubMed

    Wang, James Z; Zhang, Yuanyuan; Dong, Liang; Li, Lin; Srimani, Pradip K; Yu, Philip S

    2014-01-01

    Currently, most people use NCBI's PubMed to search the MEDLINE database, an important bibliographical information source for life science and biomedical information. However, PubMed has some drawbacks that make it difficult to find relevant publications pertaining to users' individual intentions, especially for non-expert users. To ameliorate the disadvantages of PubMed, we developed G-Bean, a graph based biomedical search engine, to search biomedical articles in MEDLINE database more efficiently. G-Bean addresses PubMed's limitations with three innovations: (1) Parallel document index creation: a multithreaded index creation strategy is employed to generate the document index for G-Bean in parallel; (2) Ontology-graph based query expansion: an ontology graph is constructed by merging four major UMLS (Version 2013AA) vocabularies, MeSH, SNOMEDCT, CSP and AOD, to cover all concepts in National Library of Medicine (NLM) database; a Personalized PageRank algorithm is used to compute concept relevance in this ontology graph and the Term Frequency - Inverse Document Frequency (TF-IDF) weighting scheme is used to re-rank the concepts. The top 500 ranked concepts are selected for expanding the initial query to retrieve more accurate and relevant information; (3) Retrieval and re-ranking of documents based on user's search intention: after the user selects any article from the existing search results, G-Bean analyzes user's selections to determine his/her true search intention and then uses more relevant and more specific terms to retrieve additional related articles. The new articles are presented to the user in the order of their relevance to the already selected articles. Performance evaluation with 106 OHSUMED benchmark queries shows that G-Bean returns more relevant results than PubMed does when using these queries to search the MEDLINE database. 
PubMed could not even return any search result for some OHSUMED queries because it failed to form the appropriate Boolean query statement automatically from the natural language query strings. G-Bean is available at http://bioinformatics.clemson.edu/G-Bean/index.php. G-Bean addresses PubMed's limitations with ontology-graph based query expansion, automatic document indexing, and user search intention discovery. It shows significant advantages in finding relevant articles from the MEDLINE database to meet the information need of the user.

  2. Generating Poetry Title Based on Semantic Relevance with Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Li, Z.; Niu, K.; He, Z. Q.

    2017-09-01

    Several approaches have been proposed to automatically generate Chinese classical poetry (CCP) in the past few years, but automatically generating the title of CCP is still a difficult problem. The difficulties are mainly reflected in two aspects. First, the words used in CCP are very different from modern Chinese words and there are no valid word segmentation tools. Second, the semantic relevance of characters in CCP not only exists in one sentence but also exists between the same positions of adjacent sentences, which is hard to grasp by the traditional text summarization models. In this paper, we propose an encoder-decoder model for generating the title of CCP. Our model encoder is a convolutional neural network (CNN) with two kinds of filters. To capture the commonly used words in one sentence, one kind of filters covers two characters horizontally at each step. The other covers two characters vertically at each step and can grasp the semantic relevance of characters between adjacent sentences. Experimental results show that our model is better than several other related models and can capture the semantic relevance of CCP more accurately.
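The two filter shapes can be sketched with a toy convolution over a 4 x 7 grid (four sentences of seven characters, as in a classical quatrain). Here each character is a single random number standing in for an embedding; a real model would convolve over learned multi-channel embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
poem = rng.normal(size=(4, 7))  # 4 sentences x 7 characters (toy embeddings)

def conv2d_valid(x, kernel):
    """Plain valid 2-D correlation, enough to show the two filter shapes."""
    kh, kw = kernel.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

horizontal = rng.normal(size=(1, 2))  # two adjacent characters in one sentence
vertical = rng.normal(size=(2, 1))    # same position in adjacent sentences

h_maps = conv2d_valid(poem, horizontal)  # feature map of shape (4, 6)
v_maps = conv2d_valid(poem, vertical)    # feature map of shape (3, 7)
```

The horizontal filter slides within each sentence (capturing two-character words), while the vertical filter pairs the same position across adjacent sentences, matching the two kinds of semantic relevance the abstract describes.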

  3. Automatic assembly of micro-optical components

    NASA Astrophysics Data System (ADS)

    Gengenbach, Ulrich K.

    1996-12-01

    Automatic assembly becomes an important issue as hybrid micro systems enter industrial fabrication. Moving from laboratory-scale production with manual assembly and bonding processes to automatic assembly requires a thorough re-evaluation of the design, the characteristics of the individual components, and of the processes involved. Parts supply for automatic operation, and sensitive, intelligent grippers adapted to the size, surface, and material properties of the microcomponents, gain importance when the superior sensory and handling skills of a human are to be replaced by a machine. This holds in particular for the automatic assembly of micro-optical components. The paper outlines these issues, exemplified by the automatic assembly of a micro-optical duplexer consisting of a micro-optical bench fabricated by the LIGA technique, two spherical lenses, a wavelength filter and an optical fiber. Spherical lenses, wavelength filter and optical fiber are supplied by third party vendors, which raises the question of parts supply for automatic assembly. The bonding processes for these components include press fit and adhesive bonding. The prototype assembly system with all relevant components, e.g. handling system, parts supply, grippers and control, is described. Results of first automatic assembly tests are presented.

  4. Semi-Supervised Data Summarization: Using Spectral Libraries to Improve Hyperspectral Clustering

    NASA Technical Reports Server (NTRS)

    Wagstaff, K. L.; Shu, H. P.; Mazzoni, D.; Castano, R.

    2005-01-01

    Hyperspectral imagers produce very large images, with each pixel recorded at hundreds or thousands of different wavelengths. The ability to automatically generate summaries of these data sets enables several important applications, such as quickly browsing through a large image repository or determining the best use of a limited bandwidth link (e.g., determining which images are most critical for full transmission). Clustering algorithms can be used to generate these summaries, but traditional clustering methods make decisions based only on the information contained in the data set. In contrast, we present a new method that additionally leverages existing spectral libraries to identify materials that are likely to be present in the image target area. We find that this approach simultaneously reduces runtime and produces summaries that are more relevant to science goals.

  5. 42 CFR 436.909 - Automatic entitlement to Medicaid following a determination of eligibility under other programs.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Automatic entitlement to Medicaid following a... & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS... Islands § 436.909 Automatic entitlement to Medicaid following a determination of eligibility under other...

  6. Learning the Creative Potential of Students by Mining a Word Association Task

    ERIC Educational Resources Information Center

    Olivares-Rodríguez, Cristian; Guenaga, Mariluz

    2015-01-01

    Creativity is a relevant skill for human beings in order to overcome complex problems and reach novel solutions based on unexpected associations of concepts. Thus, the education of creativity becomes relevant, but there are no tools to automatically track the creative potential of learners over time. This work provides a novel set of behavioural…

  7. User Feedback Procedures; Part III of Scientific Report No. ISR-18, Information Storage and Retrieval...

    ERIC Educational Resources Information Center

    Cornell Univ., Ithaca, NY. Dept. of Computer Science.

    Part Three of this five part report on Salton's Magical Automatic Retriever of Texts (SMART) project contains four papers. The first: "Variations on the Query Splitting Technique with Relevance Feedback" by T. P. Baker discusses some experiments in relevance feedback performed with variations on the technique of query splitting. The…

  8. University of Glasgow at TREC 2008: Experiments in Blog, Enterprise, and Relevance Feedback Tracks with Terrier

    DTIC Science & Technology

    2008-11-01

    improves our TREC 2007 dictionary-based approach by automatically building an internal opinion dictionary from the collection itself. We measure the opin...detecting opinionated documents. The first approach improves our TREC 2007 dictionary-based approach by automatically building an internal opinion... dictionary from the collection itself. The second approach is based on the OpinionFinder tool, which identifies subjective sentences in text. In particular

  9. A Critical Review of the Literature on Attentional Bias in Cocaine Use Disorder and Suggestions for Future Research

    PubMed Central

    Leeman, Robert F.; Robinson, Cendrine D.; Waters, Andrew J.; Sofuoglu, Mehmet

    2014-01-01

    Cocaine use disorder (CUD) continues to be an important public health problem and novel approaches are needed to improve the effectiveness of treatments for CUD. Recently, there has been increased interest in the role of automatic cognition such as attentional bias (AB) in addictive behaviors and AB has been proposed to be a cognitive marker for addictions. Automatic cognition may be particularly relevant to CUD as there is evidence for particularly robust AB to cocaine cues and strong relationships to craving for cocaine and other illicit drugs. Further, the wide-ranging cognitive deficits (e.g., in response inhibition and working memory) evinced by many cocaine users enhance the potential importance of interventions targeting automatic cognition in this population. In the current paper, we discuss relevant addiction theories, followed by a review of studies that examined AB in CUD. We then consider the neural substrates of attentional bias including human neuroimaging, neurobiological and pharmacological studies. We conclude with a discussion of research gaps and future directions for attentional bias in CUD. PMID:25222545

  10. Toward Routine Automatic Pathway Discovery from On-line Scientific Text Abstracts.

    PubMed

    Ng; Wong

    1999-01-01

    We are entering a new era of research where the latest scientific discoveries are often first reported online and are readily accessible by scientists worldwide. This rapid electronic dissemination of research breakthroughs has greatly accelerated the current pace in genomics and proteomics research. The race to the discovery of a gene or a drug has now become increasingly dependent on how quickly a scientist can scan through the voluminous amount of information available online to construct the relevant picture (such as protein-protein interaction pathways) as it takes shape amongst the rapidly expanding pool of globally accessible biological data (e.g. GENBANK) and scientific literature (e.g. MEDLINE). We describe a prototype system for automatic pathway discovery from on-line text abstracts, combining technologies that (1) retrieve research abstracts from online sources, (2) extract relevant information from the free texts, and (3) present the extracted information graphically and intuitively. Our work demonstrates that this framework allows us to routinely scan online scientific literature for automatic discovery of knowledge, giving modern scientists the necessary competitive edge in managing the information explosion in this electronic age.

  11. Automatic guidance of attention during real-world visual search

    PubMed Central

    Seidl-Rathkopf, Katharina N.; Turk-Browne, Nicholas B.; Kastner, Sabine

    2015-01-01

    Looking for objects in cluttered natural environments is a frequent task in everyday life. This process can be difficult, as the features, locations, and times of appearance of relevant objects are often not known in advance. A mechanism by which attention is automatically biased toward information that is potentially relevant may thus be helpful. Here we tested for such a mechanism across five experiments by engaging participants in real-world visual search and then assessing attentional capture for information that was related to the search set but was otherwise irrelevant. Isolated objects captured attention while preparing to search for objects from the same category embedded in a scene, as revealed by lower detection performance (Experiment 1A). This capture effect was driven by a central processing bottleneck rather than the withdrawal of spatial attention (Experiment 1B), occurred automatically even in a secondary task (Experiment 2A), and reflected enhancement of matching information rather than suppression of non-matching information (Experiment 2B). Finally, attentional capture extended to objects that were semantically associated with the target category (Experiment 3). We conclude that attention is efficiently drawn towards a wide range of information that may be relevant for an upcoming real-world visual search. This mechanism may be adaptive, allowing us to find information useful for our behavioral goals in the face of uncertainty. PMID:25898897

  12. Kernel-Based Relevance Analysis with Enhanced Interpretability for Detection of Brain Activity Patterns

    PubMed Central

    Alvarez-Meza, Andres M.; Orozco-Gutierrez, Alvaro; Castellanos-Dominguez, German

    2017-01-01

    We introduce Enhanced Kernel-based Relevance Analysis (EKRA), which aims to support the automatic identification of brain activity patterns using electroencephalographic recordings. EKRA is a data-driven strategy that incorporates two kernel functions to take advantage of the available joint information, associating neural responses to a given stimulus condition. A Centered Kernel Alignment functional is adjusted to learn the linear projection that best discriminates the input feature set, optimizing the required free parameters automatically. Our approach is carried out in two scenarios: (i) feature selection by computing a relevance vector from extracted neural features to facilitate the physiological interpretation of a given brain activity task, and (ii) enhanced feature selection that performs an additional transformation of relevant features to improve the overall identification accuracy. Accordingly, we provide an alternative feature relevance analysis strategy that improves system performance while favoring data interpretability. For validation purposes, EKRA is tested in two well-known tasks of brain activity: motor imagery discrimination and epileptic seizure detection. The obtained results show that the EKRA approach estimates a relevant representation space extracted from the provided supervised information, emphasizing the salient input features. As a result, our proposal outperforms the state-of-the-art methods regarding brain activity discrimination accuracy, with the benefit of enhanced physiological interpretation of the task at hand. PMID:29056897
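The Centered Kernel Alignment functional at the core of EKRA can be sketched as follows. The toy data are assumptions, and EKRA additionally learns a discriminative projection by optimizing this alignment, which is omitted here:

```python
import numpy as np

def center(K):
    """Double-center a kernel matrix: H K H with H = I - 11^T/n."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def cka(K, L):
    """Centered Kernel Alignment between two kernel matrices,
    a normalized Frobenius inner product in [0, 1] for PSD kernels."""
    Kc, Lc = center(K), center(L)
    return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4))
K = X @ X.T              # linear kernel on the features
L = (2 * X) @ (2 * X).T  # same geometry, rescaled
alignment = cka(K, L)    # scale-invariant, so this is 1.0
```

Because CKA is invariant to isotropic rescaling, the alignment between a kernel and its scaled copy is exactly one; dissimilar feature sets yield lower values.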

  13. Resolving Quasi-Synonym Relationships in Automatic Thesaurus Construction Using Fuzzy Rough Sets and an Inverse Term Frequency Similarity Function

    ERIC Educational Resources Information Center

    Davault, Julius M., III.

    2009-01-01

    One of the problems associated with automatic thesaurus construction is with determining the semantic relationship between word pairs. Quasi-synonyms provide a type of equivalence relationship: words are similar only for purposes of information retrieval. Determining such relationships in a thesaurus is hard to achieve automatically. The term…

  14. Automatic query formulations in information retrieval.

    PubMed

    Salton, G; Buckley, C; Fox, E A

    1983-07-01

    Modern information retrieval systems are designed to supply relevant information in response to requests received from the user population. In most retrieval environments the search requests consist of keywords, or index terms, interrelated by appropriate Boolean operators. Since it is difficult for untrained users to generate effective Boolean search requests, trained search intermediaries are normally used to translate original statements of user need into useful Boolean search formulations. Methods are introduced in this study which reduce the role of the search intermediaries by making it possible to generate Boolean search formulations completely automatically from natural language statements provided by the system patrons. Frequency considerations are used automatically to generate appropriate term combinations as well as Boolean connectives relating the terms. Methods are covered to produce automatic query formulations both in a standard Boolean logic system, as well as in an extended Boolean system in which the strict interpretation of the connectives is relaxed. Experimental results are supplied to evaluate the effectiveness of the automatic query formulation process, and methods are described for applying the automatic query formulation process in practice.
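The frequency-based idea in this abstract can be sketched with a toy formulation rule: rare (discriminating) query terms are AND-ed, common terms are OR-ed. The corpus, threshold, and exact rule below are illustrative assumptions, not Salton, Buckley, and Fox's actual procedure:

```python
import math
import re

corpus = [
    "automatic query formulation in boolean retrieval systems",
    "relevance feedback methods for boolean queries",
    "information retrieval systems and search intermediaries",
]

def formulate(natural_query, corpus, and_threshold=0.5):
    """Toy frequency-based Boolean formulation from a natural language query."""
    words = re.findall(r"[a-z]+", natural_query.lower())
    n = len(corpus)
    idf = {w: math.log((n + 1) / (1 + sum(w in d.split() for d in corpus)))
           for w in words}
    rare = [w for w in words if idf[w] >= and_threshold]
    common = [w for w in words if idf[w] < and_threshold]
    clauses = []
    if rare:
        clauses.append("(" + " AND ".join(rare) + ")")
    if common:
        clauses.append("(" + " OR ".join(common) + ")")
    return " AND ".join(clauses)
```

For example, `formulate("boolean retrieval feedback", corpus)` AND-s the rare term "feedback" with an OR of the two common terms, yielding a conjunctive query a search intermediary might otherwise have written by hand.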

  15. Exploiting the systematic review protocol for classification of medical abstracts.

    PubMed

    Frunza, Oana; Inkpen, Diana; Matwin, Stan; Klement, William; O'Blenis, Peter

    2011-01-01

    To determine whether the automatic classification of documents can be useful in systematic reviews on medical topics, and specifically if the performance of the automatic classification can be enhanced by using the particular protocol of questions employed by the human reviewers to create multiple classifiers. The test collection is the data used in a large-scale systematic review on the topic of the dissemination strategy of health care services for elderly people. From a group of 47,274 abstracts marked by human reviewers to be included in or excluded from further screening, we randomly selected 20,000 as a training set, with the remaining 27,274 becoming a separate test set. As a machine learning algorithm we used complement naïve Bayes. We tested both a global classification method, where a single classifier is trained on instances of abstracts and their classification (i.e., included or excluded), and a novel per-question classification method that trains multiple classifiers for each abstract, exploiting the specific protocol (questions) of the systematic review. For the per-question method we tested four ways of combining the results of the classifiers trained for the individual questions. As evaluation measures, we calculated precision and recall for several settings of the two methods. It is most important not to exclude any relevant documents (i.e., to attain high recall for the class of interest) but also desirable to exclude most of the non-relevant documents (i.e., to attain high precision on the class of interest) in order to reduce human workload. For the global method, the highest recall was 67.8% and the highest precision was 37.9%. For the per-question method, the highest recall was 99.2%, and the highest precision was 63%. The human-machine workflow proposed in this paper achieved a recall value of 99.6%, and a precision value of 17.8%. 
The per-question method that combines classifiers following the specific protocol of the review leads to better results than the global method in terms of recall. Because neither method is efficient enough to classify abstracts reliably by itself, the technology should be applied in a semi-automatic way, with a human expert still involved. When the workflow includes one human expert and the trained automatic classifier, recall improves to an acceptable level, showing that automatic classification techniques can reduce the human workload in the process of building a systematic review. Copyright © 2010 Elsevier B.V. All rights reserved.
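The evaluation above hinges on precision and recall over include/exclude labels. A minimal sketch with made-up labels, using an OR rule as one plausible way of combining the per-question classifiers (the paper tests four combination methods, not specified here):

```python
def precision_recall(y_true, y_pred):
    """Precision and recall for the positive (include) class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Per-question combination by OR: include an abstract if ANY
# question-specific classifier flags it. This favours recall, consistent
# with the high-recall results reported. Labels here are illustrative.
y_true = [1, 1, 0, 0, 1]
per_question_preds = [
    [1, 0, 0, 1, 1],  # classifier for question 1
    [0, 1, 0, 0, 1],  # classifier for question 2
]
combined = [int(any(col)) for col in zip(*per_question_preds)]
p, r = precision_recall(y_true, combined)
```

On this toy data the OR combination recovers every relevant abstract (recall 1.0) at the cost of one false inclusion (precision 0.75), mirroring the recall-over-precision trade-off the abstract argues for.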

  16. The Astronomy Workshop

    NASA Astrophysics Data System (ADS)

    Hamilton, Douglas P.

    2012-05-01

    The Astronomy Workshop (http://janus.astro.umd.edu) is a collection of interactive online educational tools developed for use by students, educators, professional astronomers, and the general public. The more than 20 tools in the Astronomy Workshop are rated for ease-of-use, and have been extensively tested in large university survey courses as well as more specialized classes for undergraduate majors and graduate students. Here we briefly describe the tools most relevant for the Professional Dynamical Astronomer. Solar Systems Visualizer: The orbital motions of planets, moons, and asteroids in the Solar System as well as many of the planets in exoplanetary systems are animated at their correct relative speeds in accurate to-scale drawings. Zoom in from the chaotic outer satellite systems of the giant planets all the way to their innermost ring systems. Orbital Integrators: Determine the orbital evolution of your initial conditions for a number of different scenarios including motions subject to general central forces, the classic three-body problem, and satellites of planets and exoplanets. Zero velocity curves are calculated and automatically included on relevant plots. Orbital Elements: Convert quickly and easily between state vectors and orbital elements with Changing the Elements. Use other routines to visualize your three-dimensional orbit and to convert between the different commonly used sets of orbital elements including the true, mean, and eccentric anomalies. Solar System Calculators: These tools calculate a user-defined mathematical expression simultaneously for all of the Solar System's planets (Planetary Calculator) or moons (Satellite Calculator). Key physical and orbital data are automatically accessed as needed.

  17. ARES v2: new features and improved performance

    NASA Astrophysics Data System (ADS)

    Sousa, S. G.; Santos, N. C.; Adibekyan, V.; Delgado-Mena, E.; Israelian, G.

    2015-05-01

    Aims: We present a new upgraded version of ARES. The new version includes a series of interesting new features such as automatic radial velocity correction, a fully automatic continuum determination, and an estimation of the errors for the equivalent widths. Methods: The automatic correction of the radial velocity is achieved with a simple cross-correlation function, and the automatic continuum determination, as well as the estimation of the errors, relies on a new approach to evaluating the spectral noise at the continuum level. Results: ARES v2 is totally compatible with its predecessor. We show that the fully automatic continuum determination is consistent with the previous methods applied for this task. It also presents a significant improvement in performance thanks to the implementation of parallel computation using the OpenMP library. Automatic Routine for line Equivalent widths in stellar Spectra (ARES) webpage: http://www.astro.up.pt/~sousasag/ares/ . Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 075.D-0800(A).
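The cross-correlation idea behind the automatic radial velocity correction can be illustrated with a generic pixel-shift sketch. The synthetic spectra below are assumptions; ARES's actual routine operates on observed wavelength grids and is not reproduced here:

```python
import numpy as np

# Toy spectra: a template and the same signal shifted by 5 pixels,
# standing in for a Doppler-shifted observation. Values are synthetic.
rng = np.random.default_rng(1)
template = rng.normal(size=200)
observed = np.roll(template, 5)

# Cross-correlate over a range of lags; the lag of the correlation peak
# is the estimated shift (convertible to a radial velocity).
lags = np.arange(-20, 21)
ccf = [np.dot(observed, np.roll(template, lag)) for lag in lags]
estimated = int(lags[int(np.argmax(ccf))])
```

The correlation peaks where the rolled template lines up with the observation, recovering the 5-pixel shift.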

  18. Reflective and Non-conscious Responses to Exercise Images

    PubMed Central

    Cope, Kathryn; Vandelanotte, Corneel; Short, Camille E.; Conroy, David E.; Rhodes, Ryan E.; Jackson, Ben; Dimmock, James A.; Rebar, Amanda L.

    2018-01-01

    Images portraying exercise are commonly used to promote exercise behavior and to measure automatic associations of exercise (e.g., via implicit association tests). The effectiveness of these promotion efforts and the validity of measurement techniques partially rely on the untested assumption that the images being used are perceived by the general public as portrayals of exercise that is pleasant and motivating. The aim of this study was to investigate how content of images impacted people's automatic and reflective evaluations of exercise images. Participants (N = 90) completed a response time categorization task (similar to the implicit association test) to capture how automatically people perceived each image as relevant to Exercise or Not exercise. Participants also self-reported their evaluations of the images using visual analog scales with the anchors: Exercise/Not exercise, Does not motivate me to exercise/Motivates me to exercise, Pleasant/Unpleasant, and Energizing/Deactivating. People tended to more strongly automatically associate images with exercise if the images were of an outdoor setting, presented sport (as opposed to active labor or gym-based) activities, and included young (as opposed to middle-aged) adults. People tended to reflectively find images of young adults more motivating and relevant to exercise than images of older adults. The content of exercise images is an often overlooked source of systematic variability that may impact measurement validity and intervention effectiveness. PMID:29375419

  19. Reflective and Non-conscious Responses to Exercise Images.

    PubMed

    Cope, Kathryn; Vandelanotte, Corneel; Short, Camille E; Conroy, David E; Rhodes, Ryan E; Jackson, Ben; Dimmock, James A; Rebar, Amanda L

    2017-01-01

Images portraying exercise are commonly used to promote exercise behavior and to measure automatic associations of exercise (e.g., via implicit association tests). The effectiveness of these promotion efforts and the validity of measurement techniques partially rely on the untested assumption that the images being used are perceived by the general public as portrayals of exercise that is pleasant and motivating. The aim of this study was to investigate how content of images impacted people's automatic and reflective evaluations of exercise images. Participants (N = 90) completed a response time categorization task (similar to the implicit association test) to capture how automatically people perceived each image as relevant to Exercise or Not exercise. Participants also self-reported their evaluations of the images using visual analog scales with the anchors: Exercise/Not exercise, Does not motivate me to exercise/Motivates me to exercise, Pleasant/Unpleasant, and Energizing/Deactivating. People tended to more strongly automatically associate images with exercise if the images were of an outdoor setting, presented sport (as opposed to active labor or gym-based) activities, and included young (as opposed to middle-aged) adults. People tended to reflectively find images of young adults more motivating and relevant to exercise than images of older adults. The content of exercise images is an often overlooked source of systematic variability that may impact measurement validity and intervention effectiveness.

  20. Analysis of the Relevance of Posts in Asynchronous Discussions

    ERIC Educational Resources Information Center

    Azevedo, Breno T.; Reategui, Eliseo; Behar, Patrícia A.

    2014-01-01

    This paper presents ForumMiner, a tool for the automatic analysis of students' posts in asynchronous discussions. ForumMiner uses a text mining system to extract graphs from texts that are given to students as a basis for their discussion. These graphs contain the most relevant terms found in the texts, as well as the relationships between them.…

  1. A framework for automatic information quality ranking of diabetes websites.

    PubMed

    Belen Sağlam, Rahime; Taskaya Temizel, Tugba

    2015-01-01

Objective: When searching for particular medical information on the internet, the challenge lies in distinguishing the websites that are relevant to the topic and contain accurate information. In this article, we propose a framework that automatically identifies and ranks diabetes websites according to their relevance and information quality based on the website content. Design: The proposed framework ranks diabetes websites according to their content quality, relevance, and adherence to evidence-based medicine. The framework combines information retrieval techniques with a lexical resource based on SentiWordNet, making it possible to work with biased and untrusted websites while, at the same time, ensuring content relevance. Measurement: The evaluation measures used were Pearson correlation, true positives, false positives, and accuracy. We tested the framework with a benchmark data set consisting of 55 websites with varying degrees of information quality problems. Results: The proposed framework gives good results that are comparable with the non-automated information quality measuring approaches in the literature. The correlation between the results of the proposed automated framework and the ground truth is 0.68 on average with p < 0.001, which is greater than that of other automated methods proposed in the literature (average r score of 0.33).
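The abstract does not give the framework's exact scoring function. As a rough illustration of the general idea of combining topical relevance with a quality score, here is a hypothetical weighted ranking; the weights, topic vocabulary, site names, and quality values are all invented, and the quality number merely stands in for the framework's SentiWordNet/evidence-based-medicine features:

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency dicts."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

topic = {"diabetes": 3, "insulin": 2, "glucose": 2}  # illustrative vocabulary

def rank(sites, w_r=0.7, w_q=0.3):
    """Rank (name, term_freqs, quality) triples by a weighted combination
    of topical relevance and a precomputed quality score."""
    scored = [(w_r * cosine(topic, terms) + w_q * quality, name)
              for name, terms, quality in sites]
    return [name for _, name in sorted(scored, reverse=True)]

sites = [
    ("site_a", {"diabetes": 5, "insulin": 3}, 0.9),   # relevant, high quality
    ("site_b", {"diet": 4, "celebrity": 6}, 0.2),     # off-topic, low quality
]
print(rank(sites))  # site_a ranked first
```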

  2. FUB at TREC 2008 Relevance Feedback Track: Extending Rocchio with Distributional Term Analysis

    DTIC Science & Technology

    2008-11-01

starting point is the improved version [Salton and Buckley 1990] of the original Rocchio formula [Rocchio 1971]: newQ = α·origQ + (β/|R|)·Σ_{r∈R} r − (γ/|NR|)·Σ_{s∈NR} s ... earlier studies about the low effect of the main relevance feedback parameters on retrieval performance (e.g., Salton and Buckley 1990), while they seem ... Relevance feedback in information retrieval. In The SMART retrieval system - experiments in automatic document processing, Salton, G., Ed., Prentice Hall
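The Rocchio update in the snippet above is straightforward to implement: the new query vector is a weighted sum of the original query, the centroid of the relevant documents, and (negatively) the centroid of the non-relevant documents. This sketch uses illustrative weights, not the values tuned for the TREC runs:

```python
import numpy as np

def rocchio(orig_q, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio relevance feedback: move the query toward the centroid
    of relevant documents and away from the centroid of non-relevant ones."""
    r = np.mean(relevant, axis=0) if len(relevant) else 0.0
    nr = np.mean(nonrelevant, axis=0) if len(nonrelevant) else 0.0
    return alpha * orig_q + beta * r - gamma * nr

# Toy 3-term vector space: query matches term 0, feedback adds terms 1 and 2.
q = np.array([1.0, 0.0, 0.0])
rel = np.array([[0.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
nonrel = np.array([[0.0, 0.0, 1.0]])
print(rocchio(q, rel, nonrel))  # boosts term 1, slightly demotes term 2
```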

  3. Identifying relevant group of miRNAs in cancer using fuzzy mutual information.

    PubMed

    Pal, Jayanta Kumar; Ray, Shubhra Sankar; Pal, Sankar K

    2016-04-01

MicroRNAs (miRNAs) act as a major biomarker of cancer. Not all miRNAs in the human body are equally important for cancer identification. We propose a methodology, called FMIMS, which automatically selects the most relevant miRNAs for a particular type of cancer. In FMIMS, miRNAs are initially grouped using an SVM-based algorithm; then the group with the highest relevance is determined, and the miRNAs in that group are finally ranked for selection according to their redundancy. Fuzzy mutual information is used in computing the relevance of a group and the redundancy of miRNAs within it. Superiority of the most relevant group to all others, in distinguishing normal from cancer samples, is demonstrated on breast, renal, colorectal, lung, melanoma and prostate data. The merit of FMIMS as compared to several existing methods is established. While 12 out of 15 miRNAs selected by FMIMS corroborate those of biological investigations, three of them, viz., "hsa-miR-519," "hsa-miR-431" and "hsa-miR-320c," are possible novel predictions for renal cancer, lung cancer and melanoma, respectively. The selected miRNAs are found to be involved in disease-specific pathways by targeting various genes. The method is also able to detect the responsible miRNAs even at the primary stage of cancer. The related code is available at http://www.jayanta.droppages.com/FMIMS.html.
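FMIMS uses a fuzzy variant of mutual information. As a simplified illustration of why mutual information ranks informative markers above uninformative ones, here is the plain discrete version with invented binary expression patterns (0 = low, 1 = high); this is not the authors' fuzzy formulation:

```python
import math
from collections import Counter

def mutual_info(xs, ys):
    """Discrete mutual information I(X; Y) in nats from two parallel lists."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(c / n * math.log((c / n) / (px[x] / n * py[y] / n))
               for (x, y), c in pxy.items())

labels = [0, 0, 0, 1, 1, 1]   # normal vs. cancer (invented)
mir_a  = [0, 0, 0, 1, 1, 1]   # expression tracks the label perfectly
mir_b  = [0, 1, 0, 1, 0, 1]   # expression is unrelated to the label
print(mutual_info(mir_a, labels) > mutual_info(mir_b, labels))  # prints True
```

The informative marker attains I = ln 2 ≈ 0.69 nats here, while the uninformative one scores near zero.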

  4. Automatic Sleep Stage Determination by Multi-Valued Decision Making Based on Conditional Probability with Optimal Parameters

    NASA Astrophysics Data System (ADS)

    Wang, Bei; Sugi, Takenao; Wang, Xingyu; Nakamura, Masatoshi

Data for human sleep studies may be affected by internal and external influences. Recorded sleep data contain complex and stochastic factors, which make it difficult to apply computerized sleep stage determination techniques in clinical practice. The aim of this study is to develop an automatic sleep stage determination system which is optimized for variable sleep data. The main methodology includes two modules: expert knowledge database construction and automatic sleep stage determination. Visual inspection by a qualified clinician is utilized to obtain the probability density functions of parameters during the learning process of expert knowledge database construction. Parameter selection is introduced in order to make the algorithm flexible. Automatic sleep stage determination is performed based on conditional probability. The results showed close agreement with visual inspection by a clinician. The developed system can meet the customized requirements of hospitals and institutions.
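A decision rule based on conditional probability can be sketched as a maximum-a-posteriori choice over stages: pick the stage maximizing prior(stage) × Π p(parameter | stage). All stage names, priors, and Gaussian densities below are invented for illustration; the paper derives its densities from clinician-labeled training data:

```python
import math

def gauss_pdf(x, mu, sd):
    """Gaussian probability density, used as a stand-in likelihood model."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

PRIORS = {"Wake": 0.2, "REM": 0.2, "NREM": 0.6}   # invented stage priors
LIKELIHOODS = {  # (mean, std) for each of two parameters, per stage (invented)
    "Wake": [(30.0, 8.0), (50.0, 10.0)],
    "REM":  [(15.0, 5.0), (20.0, 8.0)],
    "NREM": [(5.0, 3.0), (10.0, 5.0)],
}

def classify(params):
    """Return the stage maximizing prior * product of parameter likelihoods."""
    def score(stage):
        p = PRIORS[stage]
        for x, (mu, sd) in zip(params, LIKELIHOODS[stage]):
            p *= gauss_pdf(x, mu, sd)
        return p
    return max(PRIORS, key=score)

print(classify([4.0, 12.0]))  # low parameter values -> prints "NREM"
```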

  5. Transient Oscillations in Mechanical Systems of Automatic Control with Random Parameters

    NASA Astrophysics Data System (ADS)

    Royev, B.; Vinokur, A.; Kulikov, G.

    2018-04-01

Transient oscillations in mechanical systems of automatic control with random parameters are a relevant but insufficiently studied issue. In this paper, a modified spectral method was applied to investigate the problem. The nature of the dynamic processes and the phase portraits are analyzed depending on the amplitude and frequency of the external influence. It is evident from the obtained results that the dynamic phenomena occurring in systems with random parameters under external influence are complex, and their study requires further investigation.

  6. Genetic Evolution of Shape-Altering Programs for Supersonic Aerodynamics

    NASA Technical Reports Server (NTRS)

    Kennelly, Robert A., Jr.; Bencze, Daniel P. (Technical Monitor)

    2002-01-01

    Two constrained shape optimization problems relevant to aerodynamics are solved by genetic programming, in which a population of computer programs evolves automatically under pressure of fitness-driven reproduction and genetic crossover. Known optimal solutions are recovered using a small, naive set of elementary operations. Effectiveness is improved through use of automatically defined functions, especially when one of them is capable of a variable number of iterations, even though the test problems lack obvious exploitable regularities. An attempt at evolving new elementary operations was only partially successful.

  7. Using the International Directory Network and connected information systems for research in the Earth and space sciences

    NASA Technical Reports Server (NTRS)

    Thieman, J. R.

    1994-01-01

    Many researchers are becoming aware of the International Directory Network (IDN), an interconnected federation of international directories to Earth and space science data. Are you aware, however, of the many Earth-science-relevant information systems which can be accessed automatically from the directories? After determining potentially useful data sets in various disciplines through directories such as the Global Change Master Directory, it is becoming increasingly possible to get detailed information about the correlative possibilities of these data sets through the connected guide/catalog and inventory systems. Such capabilities as data set browse, subsetting, analysis, etc. are available now and will be improving in the future.

  8. Automatic Line Calling Badminton System

    NASA Astrophysics Data System (ADS)

    Affandi Saidi, Syahrul; Adawiyah Zulkiplee, Nurabeahtul; Muhammad, Nazmizan; Sarip, Mohd Sharizan Md

    2018-05-01

    A system and relevant method are described to detect whether a projectile impact occurs on one side of a boundary line or the other. The system employs the use of force sensing resistor-based sensors that may be designed in segments or assemblies and linked to a mechanism with a display. An impact classification system is provided for distinguishing between various events, including a footstep, ball impact and tennis racquet contact. A sensor monitoring system is provided for determining the condition of sensors and providing an error indication if sensor problems exist. A service detection system is provided when the system is used for tennis that permits activation of selected groups of sensors and deactivation of others.

  9. Channel Measurements for Automatic Vehicle Monitoring Systems

    DOT National Transportation Integrated Search

    1974-03-01

    Co-channel and adjacent channel electromagnetic interference measurements were conducted on the Sierra Research Corp. and the Chicago Transit Authority automatic vehicle monitoring systems. These measurements were made to determine if the automatic v...

  10. Treating stereotypy in adolescents diagnosed with autism by refining the tactic of "using stereotypy as reinforcement".

    PubMed

    Potter, Jacqueline N; Hanley, Gregory P; Augustine, Matotopa; Clay, Casey J; Phelps, Meredith C

    2013-01-01

    Use of automatically reinforced stereotypy as reinforcement has been shown to be successful for increasing socially desirable behaviors in persons with intellectual disabilities (Charlop, Kurtz, & Casey, 1990; Hanley, Iwata, Thompson, & Lindberg, 2000; Hung, 1978). A component analysis of this treatment was conducted with 3 adolescents who had been diagnosed with autism, and then extended by (a) progressively increasing the quantitative and qualitative aspects of the response requirement to earn access to stereotypy, (b) arranging objective measures of client preference for contingent access to stereotypy compared to other relevant treatments for their automatically reinforced stereotypy, and (c) assessing the social validity of this treatment with other relevant stakeholders. Implications for addressing stereotypy and increasing the leisure skills of adolescents with autism are discussed. © Society for the Experimental Analysis of Behavior.

  11. To do it or to let an automatic tool do it? The priority of control over effort.

    PubMed

    Osiurak, François; Wagner, Clara; Djerbi, Sara; Navarro, Jordan

    2013-01-01

The aim of the present study is to provide experimental data relevant to the issue of what leads humans to use automatic tools. Two answers can be offered. The first is that humans strive to minimize physical and/or cognitive effort (principle of least effort). The second is that humans tend to keep their perceived control over the environment (principle of more control). These two factors certainly play a role, but the question raised here is what people give priority to in situations wherein both manual and automatic actions take the same time: minimizing effort or keeping perceived control? To answer that question, we designed four experiments in which participants were confronted with a recurring choice between performing a task manually (physical effort) or in a semi-automatic way (cognitive effort) versus using an automatic tool that completes the task for them (no effort). In this latter condition, participants were required to follow the progression of the automatic tool step by step. Our results showed that participants favored the manual or semi-automatic condition over the automatic condition. However, when they were offered the opportunity to perform recreational tasks in parallel, the shift toward the manual condition disappeared. The findings support the idea that people give priority to keeping control over minimizing effort.

  12. A Procedure for Extending Input Selection Algorithms to Low Quality Data in Modelling Problems with Application to the Automatic Grading of Uploaded Assignments

    PubMed Central

    Otero, José; Palacios, Ana; Suárez, Rosario; Junco, Luis

    2014-01-01

When selecting relevant inputs in modeling problems with low-quality data, the ranking of the most informative inputs is also uncertain. In this paper, this issue is addressed through a new procedure that allows different crisp feature selection algorithms to be extended to vague data. The partial knowledge about the rank of each feature is modeled by means of a possibility distribution, and a ranking is then applied to sort these distributions. It will be shown that this technique makes the most of the available information in some vague datasets. The approach is demonstrated in a real-world application. In the context of massive online computer science courses, methods are sought for automatically grading the student through code metrics. Feature selection methods are used to find the metrics involved in the most meaningful predictions. In this study, 800 source code files, collected and revised by the authors in classroom Computer Science lectures taught between 2013 and 2014, are analyzed with the proposed technique, and the most relevant metrics for the automatic grading task are discussed. PMID:25114967
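The core idea, that with low-quality data the rank of a feature is itself uncertain, can be illustrated crudely: rank features under several instantiations of the vague data and aggregate the resulting rank distributions. Here the median stands in for the paper's possibilistic ranking, and the metric names and rankings are invented:

```python
from collections import defaultdict
from statistics import median

# One feature ranking per instantiation of the vague dataset (invented).
rankings = [
    ["loc", "comments", "complexity", "depth"],
    ["loc", "complexity", "comments", "depth"],
    ["complexity", "loc", "comments", "depth"],
]

# Collect the distribution of rank positions for each feature.
ranks = defaultdict(list)
for ranking in rankings:
    for pos, feat in enumerate(ranking):
        ranks[feat].append(pos)

# Sort features by an aggregate of their rank distribution (median here).
consensus = sorted(ranks, key=lambda f: median(ranks[f]))
print(consensus)  # a single consensus ordering of the uncertain rankings
```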

  13. On search guide phrase compilation for recommending home medical products.

    PubMed

    Luo, Gang

    2010-01-01

    To help people find desired home medical products (HMPs), we developed an intelligent personal health record (iPHR) system that can automatically recommend HMPs based on users' health issues. Using nursing knowledge, we pre-compile a set of "search guide" phrases that provides semantic translation from words describing health issues to their underlying medical meanings. Then iPHR automatically generates queries from those phrases and uses them and a search engine to retrieve HMPs. To avoid missing relevant HMPs during retrieval, the compiled search guide phrases need to be comprehensive. Such compilation is a challenging task because nursing knowledge updates frequently and contains numerous details scattered in many sources. This paper presents a semi-automatic tool facilitating such compilation. Our idea is to formulate the phrase compilation task as a multi-label classification problem. For each newly obtained search guide phrase, we first use nursing knowledge and information retrieval techniques to identify a small set of potentially relevant classes with corresponding hints. Then a nurse makes the final decision on assigning this phrase to proper classes based on those hints. We demonstrate the effectiveness of our techniques by compiling search guide phrases from an occupational therapy textbook.
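The semi-automatic step, suggesting a small set of candidate classes with hints for the nurse to confirm, might look like the following toy sketch. The class names, descriptions, and overlap scoring are invented for illustration and are not the authors' actual retrieval technique:

```python
# Hypothetical class descriptions a phrase could be assigned to (invented).
CLASSES = {
    "mobility": "wheelchair walker cane transfer gait mobility",
    "bathing": "bath shower grab bar bench hygiene",
    "dressing": "button zipper reacher dressing stick clothing",
}

def candidate_classes(phrase, k=2):
    """Rank candidate classes for a new search-guide phrase by simple token
    overlap with each class description; the top hits go to the nurse for
    the final multi-label decision."""
    words = set(phrase.lower().split())
    scores = {c: len(words & set(desc.split())) for c, desc in CLASSES.items()}
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [c for c, s in ranked if s > 0][:k]

print(candidate_classes("difficulty with gait and transfer"))  # ['mobility']
```

A production system would replace the token overlap with proper information retrieval scoring (e.g., TF-IDF) over nursing knowledge sources.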

  14. Controlled cooling of an electronic system for reduced energy consumption

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David, Milnes P.; Iyengar, Madhusudan K.; Schmidt, Roger R.

Energy efficient control of a cooling system cooling an electronic system is provided. The control includes automatically determining at least one adjusted control setting for at least one adjustable cooling component of a cooling system cooling the electronic system. The automatically determining is based, at least in part, on power being consumed by the cooling system and temperature of a heat sink to which heat extracted by the cooling system is rejected. The automatically determining operates to reduce power consumption of the cooling system and/or the electronic system while ensuring that at least one targeted temperature associated with the cooling system or the electronic system is within a desired range. The automatically determining may be based, at least in part, on one or more experimentally obtained models relating the targeted temperature and power consumption of the one or more adjustable cooling components of the cooling system.
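The control idea, minimizing total power subject to a temperature constraint, can be sketched as a search over discrete settings. The power and thermal models below are made-up stand-ins for the experimentally obtained models the patent describes:

```python
def choose_setting(it_power_w, heat_sink_temp_c, settings, temp_limit_c=60.0):
    """Pick the cooling-component setting that minimizes total power while
    keeping the component temperature within the desired range.
    Hypothetical models: cubic fan-law cooling power, flow-dependent
    thermal resistance."""
    best = None
    for s in settings:                       # s: fan/pump speed fraction, 0..1
        cooling_power = 5.0 + 120.0 * s**3   # invented cooling power model (W)
        thermal_res = 0.08 / max(s, 0.1)     # invented resistance model (C/W)
        component_temp = heat_sink_temp_c + thermal_res * it_power_w
        if component_temp <= temp_limit_c:   # constraint: stay in range
            total = it_power_w + cooling_power
            if best is None or total < best[1]:
                best = (s, total)
    return best  # (setting, total power in W), or None if limit can't be met

print(choose_setting(400.0, 25.0, [round(0.1 * k, 1) for k in range(1, 11)]))
```

With these invented numbers only the highest setting keeps the component under the limit; with a looser limit the search would instead pick the slowest feasible (lowest-power) setting.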

  15. Controlled cooling of an electronic system based on projected conditions

    DOEpatents

    David, Milnes P.; Iyengar, Madhusudan K.; Schmidt, Roger R.

    2016-05-17

    Energy efficient control of a cooling system cooling an electronic system is provided based, in part, on projected conditions. The control includes automatically determining an adjusted control setting(s) for an adjustable cooling component(s) of the cooling system. The automatically determining is based, at least in part, on projected power consumed by the electronic system at a future time and projected temperature at the future time of a heat sink to which heat extracted is rejected. The automatically determining operates to reduce power consumption of the cooling system and/or the electronic system while ensuring that at least one targeted temperature associated with the cooling system or the electronic system is within a desired range. The automatically determining may be based, at least in part, on an experimentally obtained model(s) relating the targeted temperature and power consumption of the adjustable cooling component(s) of the cooling system.

  16. Controlled cooling of an electronic system based on projected conditions

    DOEpatents

    David, Milnes P.; Iyengar, Madhusudan K.; Schmidt, Roger R.

    2015-08-18

    Energy efficient control of a cooling system cooling an electronic system is provided based, in part, on projected conditions. The control includes automatically determining an adjusted control setting(s) for an adjustable cooling component(s) of the cooling system. The automatically determining is based, at least in part, on projected power consumed by the electronic system at a future time and projected temperature at the future time of a heat sink to which heat extracted is rejected. The automatically determining operates to reduce power consumption of the cooling system and/or the electronic system while ensuring that at least one targeted temperature associated with the cooling system or the electronic system is within a desired range. The automatically determining may be based, at least in part, on an experimentally obtained model(s) relating the targeted temperature and power consumption of the adjustable cooling component(s) of the cooling system.

  18. The Astronomy Workshop

    NASA Astrophysics Data System (ADS)

    Hamilton, Douglas P.

    2013-05-01

The Astronomy Workshop (http://janus.astro.umd.edu) is a collection of interactive online educational tools developed for use by students, educators, professional astronomers, and the general public. The more than 20 tools in the Astronomy Workshop are rated for ease-of-use, and have been extensively tested in large university survey courses as well as more specialized classes for undergraduate majors and graduate students. Here we briefly describe the tools most relevant for the Professional Dynamical Astronomer. Solar Systems Visualizer: The orbital motions of planets, moons, and asteroids in the Solar System as well as many of the planets in exoplanetary systems are animated at their correct relative speeds in accurate to-scale drawings. Zoom in from the chaotic outer satellite systems of the giant planets all the way to their innermost ring systems. Orbital Integrators: Determine the orbital evolution of your initial conditions for a number of different scenarios including motions subject to general central forces, the classic three-body problem, and satellites of planets and exoplanets. Zero velocity curves are calculated and automatically included on relevant plots. Orbital Elements: Convert quickly and easily between state vectors and orbital elements with Changing the Elements. Use other routines to visualize your three-dimensional orbit and to convert between the different commonly used sets of orbital elements including the true, mean, and eccentric anomalies. Solar System Calculators: These tools calculate a user-defined mathematical expression simultaneously for all of the Solar System's planets (Planetary Calculator) or moons (Satellite Calculator). Key physical and orbital data are automatically accessed as needed.

  19. Feasibility study ASCS remote sensing/compliance determination system

    NASA Technical Reports Server (NTRS)

    Duggan, I. E.; Minter, T. C., Jr.; Moore, B. H.; Nosworthy, C. T.

    1973-01-01

    A short-term technical study was performed by the MSC Earth Observations Division to determine the feasibility of the proposed Agricultural Stabilization and Conservation Service Automatic Remote Sensing/Compliance Determination System. For the study, the term automatic was interpreted as applying to an automated remote-sensing system that includes data acquisition, processing, and management.

  20. On the assessment of the nature of open star clusters and the determination of their basic parameters with limited data

    NASA Astrophysics Data System (ADS)

    Carraro, Giovanni; Baume, Gustavo; Seleznev, Anton F.; Costa, Edgardo

    2017-07-01

    Our knowledge of stellar evolution and of the structure and chemical evolution of the Galactic disk largely builds on the study of open star clusters. Because of their crucial role in these relevant topics, large homogeneous catalogues of open cluster parameters are highly desirable. Although efforts have been made to develop automatic tools to analyse large numbers of clusters, the results obtained so far vary from study to study, and sometimes are very contradictory when compared to dedicated studies of individual clusters. In this work we highlight the common causes of these discrepancies for some open clusters, and show that at present dedicated studies yield a much better assessment of the nature of star clusters, even in the absence of ideal data-sets. We make use of deep, wide-field, multi-colour photometry to discuss the nature of six strategically selected open star clusters: Trumpler 22, Lynga 6, Hogg 19, Hogg 21, Pismis 10 and Pismis 14. We have precisely derived their basic parameters by means of a combination of star counts and photometric diagrams. Trumpler 22 and Lynga 6 are included in our study because they are widely known, and thus provided a check of our data and methodology. The remaining four clusters are very poorly known, and their available parameters have been obtained using automatic tools only. Our results are in some cases in severe disagreement with those from automatic surveys.

  1. A simulator evaluation of an automatic terminal approach system

    NASA Technical Reports Server (NTRS)

    Hinton, D. A.

    1983-01-01

The automatic terminal approach system (ATAS) is a concept for improving the pilot/machine interface with cockpit automation. The ATAS can automatically fly a published instrument approach by using stored instrument approach data to automatically tune airplane avionics, control the airplane's autopilot, and display status information to the pilot. A piloted simulation study was conducted to determine the feasibility of an ATAS, determine pilot acceptance, and examine pilot/ATAS interaction. Seven instrument-rated pilots each flew four instrument approaches with a baseline heading-select autopilot mode. The ATAS runs resulted in lower flight technical error, lower pilot workload, and fewer blunders than with the baseline autopilot. The ATAS status display enabled the pilots to maintain situational awareness during the automatic approaches. The system was well accepted by the pilots.

  2. Electrophysiological evidence for early perceptual facilitation and efficient categorization of self-related stimuli during an Implicit Association Test measuring neuroticism.

    PubMed

    Fleischhauer, Monika; Strobel, Alexander; Diers, Kersten; Enge, Sören

    2014-02-01

The Implicit Association Test (IAT) is a widely used latency-based categorization task that indirectly measures the strength of automatic associations between target and attribute concepts. So far, little is known about the perceptual and cognitive processes underlying personality IATs. Thus, the present study examined event-related potential indices during the execution of an IAT measuring neuroticism (N = 70). The IAT effect was strongly modulated by the P1 component indicating early facilitation of relevant visual input and by a P3b-like late positive component reflecting the efficacy of stimulus categorization. Both components covaried, and larger amplitudes led to faster responses. The results suggest a relationship between early perceptual and semantic processes operating at a more automatic, implicit level and later decision-related categorization of self-relevant stimuli contributing to the IAT effect. Copyright © 2013 Society for Psychophysiological Research.

  3. Validation of automatic segmentation of ribs for NTCP modeling.

    PubMed

    Stam, Barbara; Peulen, Heike; Rossi, Maddalena M G; Belderbos, José S A; Sonke, Jan-Jakob

    2016-03-01

Determination of a dose-effect relation for rib fractures in a large patient group has been limited by the time-consuming manual delineation of ribs. Automatic segmentation could facilitate such an analysis. We determined the accuracy of automatic rib segmentation in the context of normal tissue complication probability (NTCP) modeling. Forty-one patients with stage I/II non-small cell lung cancer treated with SBRT to 54 Gy in 3 fractions were selected. Using the 4DCT-derived mid-ventilation planning CT, all ribs were manually contoured and automatically segmented. Accuracy of segmentation was assessed using volumetric, shape and dosimetric measures. Manual and automatic dosimetric parameters Dx and EUD were tested for equivalence using the Two One-Sided T-test (TOST), and assessed for agreement using Bland-Altman analysis. NTCP models based on manual and automatic segmentation were compared. Automatic segmentation was comparable with manual delineation in the radial direction, but larger near the costal cartilage and vertebrae. Manual and automatic Dx and EUD were significantly equivalent. The Bland-Altman analysis showed good agreement. The two NTCP models were very similar. Automatic rib segmentation was significantly equivalent to manual delineation and can be used for NTCP modeling in a large patient group. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
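The TOST procedure used to test equivalence of manual and automatic dose parameters can be sketched for paired data: the mean paired difference must be significantly above the lower margin and significantly below the upper margin. The equivalence margin and the simulated data below are illustrative, not the study's:

```python
import numpy as np
from scipy import stats

def tost_paired(a, b, margin):
    """Two One-Sided T-test for equivalence of paired measurements:
    equivalence is declared when max(p_lower, p_upper) < alpha."""
    d = np.asarray(a) - np.asarray(b)
    p_lower = stats.ttest_1samp(d, -margin, alternative="greater").pvalue
    p_upper = stats.ttest_1samp(d, margin, alternative="less").pvalue
    return max(p_lower, p_upper)

rng = np.random.default_rng(0)
manual = rng.normal(20.0, 2.0, 40)         # e.g. rib EUD values in Gy (toy)
auto = manual + rng.normal(0.0, 0.3, 40)   # near-identical automatic values
print(tost_paired(manual, auto, margin=1.0) < 0.05)  # prints True: equivalent
```

Note that equivalence testing inverts the usual logic: a small p-value here supports that the two methods agree to within the stated margin.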

  4. Automatic safety belt systems owner usage and attitudes in GM Chevettes and VW Rabbits

    DOT National Transportation Integrated Search

    1980-05-01

    Author's abstract: The study was designed to: (1) evaluate the effectiveness of automatic restraint systems in increasing belt usage, and (2) determine owner attitudes toward the system. Information gathered from owners of vehicles with automatic sys...

  5. Automatic safety belt systems owner usage and attitudes in GM Chevettes and VW Rabbits

    DOT National Transportation Integrated Search

    1981-02-01

    This study was designed to: (1) evaluate the effectiveness of automatic restraint systems in increasing belt usage, and (2) determine owner attitudes toward the systems. The information gathered from owners of vehicles with automatic systems will ass...

  6. Automatic Control of the Concrete Mixture Homogeneity in Cycling Mixers

    NASA Astrophysics Data System (ADS)

    Tikhonov, Anatoly Fedorovich; Drozdov, Anatoly

    2018-03-01

    The article describes the factors affecting concrete mixture quality that are related to the moisture content of the aggregates, since the effectiveness of concrete mixture production is largely determined by the availability of quality-management tools at all stages of the technological process. It is established that unaccounted-for moisture in the aggregates adversely affects the homogeneity of the concrete mixture and, accordingly, the strength of building structures. A new control method and an automatic control system (ACS) for concrete mixture homogeneity during the mixing of components are proposed, in which the kneading-and-mixing machinery operates under continuous automatic monitoring of homogeneity. The theoretical basis of the homogeneity control is a change in the frequency of vibrodynamic oscillations of the mixer body. The structure of the technical means of the ACS for regulating the water supply is determined by the change in concrete mixture homogeneity during continuous mixing of the components. The following technical means were chosen for the automatic control loop: vibro-acoustic sensors, remote terminal units, electropneumatic control actuators, etc. To characterize the quality of the automatic control, a structural flowchart with transfer functions describing the ACS operation in the transient dynamic mode is presented.
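As a rough illustration of the closed-loop idea only (not the paper's actual ACS, which relies on vibro-acoustic sensing and transfer-function analysis), a proportional water-dosing correction could be sketched as follows; the gain, setpoint, and the linear "plant" response are all hypothetical:

```python
def water_correction(homogeneity, setpoint, gain=0.5):
    """Proportional control law: positive output means add water,
    negative means reduce the water feed."""
    return gain * (setpoint - homogeneity)

# Toy plant model: homogeneity responds linearly to the correction
h, setpoint = 0.60, 0.90
for _ in range(20):
    h += 0.4 * water_correction(h, setpoint)
print(round(h, 3))  # converges toward the 0.90 setpoint
```

In a real mixer the plant response is dynamic and nonlinear, which is why the paper characterizes the loop with transfer functions rather than a static gain.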

  7. Improving the local wavenumber method by automatic DEXP transformation

    NASA Astrophysics Data System (ADS)

    Abbas, Mahmoud Ahmed; Fedi, Maurizio; Florio, Giovanni

    2014-12-01

    In this paper we present a new method for source parameter estimation based on the local wavenumber function. We make use of the stable properties of the Depth from EXtreme Points (DEXP) method, in which the depth to the source is determined at the extreme points of the field scaled with a power law of the altitude. The method is thus particularly well suited to high-order local wavenumber functions, as it overcomes the known instability caused by the use of high-order derivatives. The DEXP transformation has a notable property when applied to the local wavenumber function: its scaling law is independent of the structural index. So, differently from the DEXP transformation applied directly to potential fields, the local wavenumber DEXP transformation is fully automatic and may be implemented as a very fast imaging method, mapping every kind of source at the correct depth. The simultaneous presence of sources with different homogeneity degrees can also be treated easily and correctly. The method was applied to synthetic data and to real examples from Bulgaria and Italy, and the results agree well with known information about the causative sources.

  8. Microsoft Research at TREC 2009. Web and Relevance Feedback Tracks

    DTIC Science & Technology

    2009-11-01

    Information Processing Systems, pages 193–200, 2006. [2] J. M. Kleinberg. Authoritative sources in a hyperlinked environment. In Proc. of the 9th...Walker, S. Jones, M. Hancock-Beaulieu, and M. Gatford. Okapi at TREC-3. In Proc. of the 3rd Text REtrieval Conference, 1994. [8] J. J. Rocchio. Relevance...feedback in information retrieval. In Gerard Salton, editor, The SMART Retrieval System - Experiments in Automatic Document Processing. Prentice Hall

  9. Better safe than sorry: simplistic fear-relevant stimuli capture attention.

    PubMed

    Forbes, Sarah J; Purkis, Helena M; Lipp, Ottmar V

    2011-08-01

    It has been consistently demonstrated that fear-relevant images capture attention preferentially over fear-irrelevant images. Current theory suggests that this faster processing could be mediated by an evolved module that allows certain stimulus features to attract attention automatically, prior to detailed processing of the image. The present research investigated whether simplified images of fear-relevant stimuli would produce interference with target detection in a visual search task. In Experiment 1, silhouettes and degraded silhouettes of fear-relevant animals produced more interference than did the fear-irrelevant images. Experiment 2 compared the effects of fear-relevant and fear-irrelevant distracters and confirmed that the interference produced by fear-relevant distracters was not an effect of novelty. Experiment 3 suggested that fear-relevant stimuli produced interference regardless of whether participants were instructed as to the content of the images. The three experiments indicate that even very simplistic images of fear-relevant animals can divert attention.

  10. Flame analysis using image processing techniques

    NASA Astrophysics Data System (ADS)

    Her Jie, Albert Chang; Zamli, Ahmad Faizal Ahmad; Zulazlan Shah Zulkifli, Ahmad; Yee, Joanne Lim Mun; Lim, Mooktzeng

    2018-04-01

    This paper presents image processing techniques that use fuzzy logic and a neural network approach to perform flame analysis. Flame diagnostics are important in industry for extracting relevant information from flame images. Experiments were carried out on a model industrial burner at different flow rates. Flame features such as luminous and spectral parameters are extracted using image processing and the Fast Fourier Transform (FFT). Flame images are acquired using a FLIR infrared camera. Non-linearities such as thermo-acoustic oscillations and background noise affect flame stability. Flame velocity is one of the important characteristics that determine flame stability, and an image processing method is proposed to determine it. The power spectral density (PSD) is a useful tool for vibration analysis, from which flame stability can be approximated; however, a more intelligent diagnostic system is needed to determine flame stability automatically. Flame features at different flow rates are compared and analyzed, and the selected features are used as inputs to the proposed fuzzy inference system to determine flame stability. A neural network is used to test the performance of the fuzzy inference system.
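A PSD of the kind this abstract mentions can be computed directly from a luminosity time series; below is a minimal periodogram sketch with numpy, using a synthetic 12 Hz "flicker" signal (the sampling rate, frequency, and noise level are hypothetical):

```python
import numpy as np

def periodogram_psd(signal, fs):
    """One-sided power spectral density estimate: |FFT|^2 / (fs * N)."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) ** 2 / (fs * n)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spec

# Synthetic luminosity trace: 12 Hz oscillation plus mild noise
fs = 1000.0                                  # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 12.0 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
freqs, spec = periodogram_psd(x, fs)
print(freqs[np.argmax(spec)])  # → 12.0 (dominant oscillation frequency, Hz)
```

In practice a Welch-style averaged estimate is usually preferred over a raw periodogram to reduce variance, but the dominant-frequency readout is the same idea.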

  11. Automaticity in Anxiety Disorders and Major Depressive Disorder

    PubMed Central

    Teachman, Bethany A.; Joormann, Jutta; Steinman, Shari; Gotlib, Ian H.

    2012-01-01

    In this paper we examine the nature of automatic cognitive processing in anxiety disorders and Major Depressive Disorder (MDD). Rather than viewing automaticity as a unitary construct, we follow a social cognition perspective (Bargh, 1994) that argues for four theoretically independent features of automaticity: unconscious (processing of emotional stimuli occurs outside awareness), efficient (processing emotional meaning uses minimal attentional resources), unintentional (no goal is needed to engage in processing emotional meaning), and uncontrollable (limited ability to avoid, alter or terminate processing emotional stimuli). Our review of the literature suggests that most anxiety disorders are characterized by uncontrollable, and likely also unconscious and unintentional, biased processing of threat-relevant information. In contrast, MDD is most clearly typified by uncontrollable, but not unconscious or unintentional, processing of negative information. For the anxiety disorders and for MDD, there is not sufficient evidence to draw firm conclusions about efficiency of processing, though early indications are that neither anxiety disorders nor MDD are characterized by this feature. Clinical and theoretical implications of these findings are discussed and directions for future research are offered. In particular, it is clear that paradigms that more directly delineate the different features of automaticity are required to gain a more comprehensive and systematic understanding of the importance of automatic processing in emotion dysregulation. PMID:22858684

  12. Automatic indexing and retrieval of encounter-specific evidence for point-of-care support.

    PubMed

    O'Sullivan, Dympna M; Wilk, Szymon A; Michalowski, Wojtek J; Farion, Ken J

    2010-08-01

    Evidence-based medicine relies on repositories of empirical research evidence that can be used to support clinical decision making for improved patient care. However, retrieving evidence from such repositories at local sites presents many challenges. This paper describes a methodological framework for automatically indexing and retrieving empirical research evidence in the form of systematic reviews and associated studies from The Cochrane Library, where retrieved documents are specific to a patient-physician encounter and thus can be used to support evidence-based decision making at the point of care. Such an encounter is defined by three pertinent groups of concepts - diagnosis, treatment, and patient - and the framework relies on these three groups to steer indexing and retrieval of reviews and associated studies. An evaluation of the indexing and retrieval components of the proposed framework was performed using documents relevant for the pediatric asthma domain. Precision and recall values for automatic indexing of systematic reviews and associated studies were 0.93 and 0.87, and 0.81 and 0.56, respectively. Moreover, precision and recall for the retrieval of relevant systematic reviews and associated studies were 0.89 and 0.81, and 0.92 and 0.89, respectively. With minor modifications, the proposed methodological framework can be customized for other evidence repositories. Copyright 2010 Elsevier Inc. All rights reserved.

  13. Automated detection of retinal landmarks for the identification of clinically relevant regions in fundus photography

    NASA Astrophysics Data System (ADS)

    Ometto, Giovanni; Calivá, Francesco; Al-Diri, Bashir; Bek, Toke; Hunter, Andrew

    2016-03-01

    Automatic, quick, and reliable identification of retinal landmarks from fundus photography is key for measurements used in research, diagnosis, screening, and treatment of common diseases affecting the eyes. This study presents a fast method for detecting the centre of mass of the vascular arcades, the optic nerve head (ONH), and the fovea, used to define five clinically relevant areas employed in screening programmes for diabetic retinopathy (DR). Thirty-eight fundus photographs showing 7203 DR lesions were analysed; the landmarks were located manually by two retina experts and automatically by the proposed method. Automatic identification of the ONH and fovea was performed using template matching based on normalised cross-correlation. The centre of mass of the arcades was obtained by fitting an ellipse to sample coordinates of the main vessels, obtained by processing the image with Hessian filtering followed by shape analysis and sampling of the results. The regions obtained manually and automatically were used to count the retinal lesions falling within each, and so to evaluate the method. 92.7% of the lesions fell within the same regions based on the landmarks selected by the two experts; 91.7% and 89.0% fell within the same areas when the method was compared with the first and second expert, respectively. The inter-repeatability of the proposed method and the experts is comparable, while its 100% intra-repeatability makes the algorithm a valuable tool for tasks such as real-time analyses, large datasets, and studies of intra-patient variability.
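Template matching by normalised cross-correlation, as used here for the ONH and fovea, can be sketched in a few lines of numpy; the toy image, blob template, and noise level below are hypothetical and stand in for a fundus photograph:

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalised cross-correlation between two equally
    sized arrays; 1.0 means a perfect intensity match up to gain/offset."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def match_template(image, template):
    """Brute-force search for the offset maximising NCC."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_rc = -2.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            score = ncc(image[r:r + th, c:c + tw], template)
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best

# Toy example: plant a bright 3x3 blob in noise and recover its location
rng = np.random.default_rng(1)
img = rng.normal(0, 0.05, (20, 20))
blob = np.array([[0, 1, 0], [1, 2, 1], [0, 1, 0]], dtype=float)
img[7:10, 12:15] += blob
loc, score = match_template(img, blob)
print(loc)  # → (7, 12)
```

Production code would compute the correlation in the frequency domain (or use an optimised library routine) rather than this O(N²·M²) loop, but the matching criterion is identical.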

  14. Cognitive Function as a Trans-Diagnostic Treatment Target in Stimulant Use Disorders

    PubMed Central

    Sofuoglu, Mehmet; DeVito, Elise E.; Waters, Andrew J.; Carroll, Kathleen M.

    2016-01-01

    Stimulant use disorder is an important public health problem, with an estimated 2.1 million current users in the United States alone. No pharmacological treatments are approved by the U.S. Food and Drug Administration (FDA) for stimulant use disorder and behavioral treatments have variable efficacy and limited availability. Most individuals with stimulant use disorder have other comorbidities, most with overlapping symptoms and cognitive impairments. The goal of this article is to present a rationale for cognition as a treatment target in stimulant use disorder, and to outline potential treatment approaches. Rates of lifetime comorbid psychiatric disorders among people with stimulant use disorders are estimated at 65% - 73%, with the most common being mood disorders (13% - 64%) and anxiety disorders (21% - 50%), as well as non-substance induced psychotic disorders (under 10%). There are several models of addictive behavior, but the dual process model particularly highlights the relevance of cognitive impairments and biases to the development and maintenance of addiction. This model explains addictive behavior as a balance between automatic processes and executive control, which in turn are related to individual (genetics, comorbid disorders, psychosocial factors) and other (craving, triggers, drug use) factors. Certain cognitive impairments, such as attentional bias and approach bias, are most relevant to automatic processes, while sustained attention, response inhibition, and working memory are primarily related to executive control. These cognitive impairments and biases are also common in disorders frequently comorbid with stimulant use disorder, and predict poor treatment retention and clinical outcomes. As such, they may serve as feasible trans-diagnostic treatment targets. There are promising pharmacological, cognitive, and behavioral approaches that aim to enhance cognitive function. 
Pharmacotherapies target cognitive impairments associated with executive control and include cholinesterase inhibitors (e.g., galantamine, rivastigmine) and monoamine transporter inhibitors (e.g., modafinil, methylphenidate). Cognitive behavioral therapy and cognitive rehabilitation also enhance executive control, while cognitive bias modification targets impairments associated with automatic processes. Cognitive enhancement to improve treatment outcomes is a novel and promising strategy, but its clinical value for the treatment of stimulant use disorder, with or without other psychiatric comorbidities, remains to be determined in future studies. PMID:26828702

  15. ODISEES Availability and Feedback Request

    Atmospheric Science Data Center

    2014-09-06

    ... As a follow-up Action from the Atmospheric Science Data Center (ASDC) User Working Group (UWG) held on 24-25 June, we are ... for a common language to describe scientific terms so that a computer can scour the internet, automatically discover relevant information ...

  16. Game-powered machine learning

    PubMed Central

    Barrington, Luke; Turnbull, Douglas; Lanckriet, Gert

    2012-01-01

    Searching for relevant content in a massive amount of multimedia information is facilitated by accurately annotating each image, video, or song with a large number of relevant semantic keywords, or tags. We introduce game-powered machine learning, an integrated approach to annotating multimedia content that combines the effectiveness of human computation, through online games, with the scalability of machine learning. We investigate this framework for labeling music. First, a socially-oriented music annotation game called Herd It collects reliable music annotations based on the “wisdom of the crowds.” Second, these annotated examples are used to train a supervised machine learning system. Third, the machine learning system actively directs the annotation games to collect new data that will most benefit future model iterations. Once trained, the system can automatically annotate a corpus of music much larger than what could be labeled using human computation alone. Automatically annotated songs can be retrieved based on their semantic relevance to text-based queries (e.g., “funky jazz with saxophone,” “spooky electronica,” etc.). Based on the results presented in this paper, we find that actively coupling annotation games with machine learning provides a reliable and scalable approach to making searchable massive amounts of multimedia data. PMID:22460786

  17. Game-powered machine learning.

    PubMed

    Barrington, Luke; Turnbull, Douglas; Lanckriet, Gert

    2012-04-24

    Searching for relevant content in a massive amount of multimedia information is facilitated by accurately annotating each image, video, or song with a large number of relevant semantic keywords, or tags. We introduce game-powered machine learning, an integrated approach to annotating multimedia content that combines the effectiveness of human computation, through online games, with the scalability of machine learning. We investigate this framework for labeling music. First, a socially-oriented music annotation game called Herd It collects reliable music annotations based on the "wisdom of the crowds." Second, these annotated examples are used to train a supervised machine learning system. Third, the machine learning system actively directs the annotation games to collect new data that will most benefit future model iterations. Once trained, the system can automatically annotate a corpus of music much larger than what could be labeled using human computation alone. Automatically annotated songs can be retrieved based on their semantic relevance to text-based queries (e.g., "funky jazz with saxophone," "spooky electronica," etc.). Based on the results presented in this paper, we find that actively coupling annotation games with machine learning provides a reliable and scalable approach to making searchable massive amounts of multimedia data.

  18. Attention modifies sound level detection in young children.

    PubMed

    Sussman, Elyse S; Steinschneider, Mitchell

    2011-07-01

    Have you ever shouted your child's name from the kitchen while they were watching television in the living room to no avail, so you shout their name again, only louder? Yet, still no response. The current study provides evidence that young children process loudness changes differently than pitch changes when they are engaged in another task such as watching a video. Intensity level changes were physiologically detected only when they were behaviorally relevant, but frequency level changes were physiologically detected without task relevance in younger children. This suggests that changes in pitch rather than changes in volume may be more effective in evoking a response when sounds are unexpected. Further, even though behavioral ability may appear to be similar in younger and older children, attention-based physiologic responses differ from automatic physiologic processes in children. Results indicate that 1) the automatic auditory processes leading to more efficient higher-level skills continue to become refined through childhood; and 2) there are different time courses for the maturation of physiological processes encoding the distinct acoustic attributes of sound pitch and sound intensity. The relevance of these findings to sound perception in real-world environments is discussed.

  19. Automated Signal Processing Applied to Volatile-Based Inspection of Greenhouse Crops

    PubMed Central

    Jansen, Roel; Hofstee, Jan Willem; Bouwmeester, Harro; van Henten, Eldert

    2010-01-01

    Gas chromatograph–mass spectrometers (GC-MS) have shown utility for volatile-based inspection of greenhouse crops. However, a widely recognized difficulty associated with GC-MS application is the large and complex data generated by this instrument. As a consequence, experienced analysts are often required to process these data in order to determine the concentrations of the volatile organic compounds (VOCs) of interest. Manual processing is time-consuming, labour-intensive, and may be subject to errors due to fatigue. The objective of this study was to assess whether GC-MS data can also be processed automatically to determine the concentrations of crop-health-associated VOCs in a greenhouse. An experimental dataset consisting of twelve data files was processed both manually and automatically to address this question. Manual processing was based on simple peak integration, while the automatic processing relied on the algorithms implemented in the MetAlign™ software package. Automatic processing of the experimental dataset yielded concentrations similar to those obtained after manual processing. These results demonstrate that GC-MS data can be processed automatically to accurately determine the concentrations of crop-health-associated VOCs in a greenhouse. When processing GC-MS data automatically, noise reduction, alignment, baseline correction, and normalisation are required. PMID:22163594
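The baseline-correction and peak-integration steps can be illustrated on a synthetic chromatogram; the linear drift, peak position, and masks below are hypothetical, and MetAlign's actual algorithms are considerably more sophisticated:

```python
import numpy as np

def linear_baseline(t, y, peak_free):
    """Fit a first-order baseline to the peak-free part of the trace."""
    coef = np.polyfit(t[peak_free], y[peak_free], 1)
    return np.polyval(coef, t)

def trapezoid(y, x):
    """Trapezoidal integration (kept explicit for portability)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Synthetic chromatogram: linear drift plus a Gaussian peak at t = 5 min
t = np.linspace(0, 10, 1001)
y = 0.02 * t + np.exp(-0.5 * ((t - 5.0) / 0.1) ** 2)

corrected = y - linear_baseline(t, y, (t < 4.4) | (t > 5.6))
mask = (t >= 4.5) & (t <= 5.5)
area = trapezoid(corrected[mask], t[mask])
print(round(area, 3))  # → 0.251, the true Gaussian area 0.1 * sqrt(2*pi)
```

Real chromatograms additionally need retention-time alignment across runs and normalisation before peak areas from different files can be compared, as the abstract notes.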

  20. Document Exploration and Automatic Knowledge Extraction for Unstructured Biomedical Text

    NASA Astrophysics Data System (ADS)

    Chu, S.; Totaro, G.; Doshi, N.; Thapar, S.; Mattmann, C. A.; Ramirez, P.

    2015-12-01

    We describe our work on building a web-browser based document reader with a built-in exploration tool and automatic concept extraction of medical entities for biomedical text. Vast amounts of biomedical information are offered in unstructured text form through scientific publications and R&D reports. Utilizing text mining can help us to mine information and extract relevant knowledge from a plethora of biomedical text. The ability to employ such technologies to aid researchers in coping with information overload is greatly desirable. In recent years, there has been an increased interest in automatic biomedical concept extraction [1, 2] and intelligent PDF reader tools with the ability to search on content and find related articles [3]. Such reader tools are typically desktop applications and are limited to specific platforms. Our goal is to provide researchers with a simple tool to aid them in finding, reading, and exploring documents. Thus, we propose a web-based document explorer, which we call Shangri-Docs, which combines a document reader with automatic concept extraction and highlighting of relevant terms. Shangri-Docs also provides the ability to evaluate a wide variety of document formats (e.g., PDF, Word, PPT, text) and to exploit the linked nature of the Web and personal content by performing searches on content from public sites (e.g., Wikipedia, PubMed) and private cataloged databases simultaneously. Shangri-Docs utilizes Apache cTAKES (clinical Text Analysis and Knowledge Extraction System) [4] and the Unified Medical Language System (UMLS) to automatically identify and highlight terms and concepts, such as specific symptoms, diseases, drugs, and anatomical sites, mentioned in the text. cTAKES was originally designed specifically to extract information from clinical medical records.
Our investigation led us to extend the automatic knowledge-extraction process of cTAKES to the biomedical research domain by improving the ontology-guided information extraction process. We describe our experience and the implementation of our system, share lessons learned from its development, and discuss ways in which this approach could be adapted to other science fields. [1] Funk et al., 2014. [2] Kang et al., 2014. [3] Utopia Documents, http://utopiadocs.com [4] Apache cTAKES, http://ctakes.apache.org

  1. Automatic safety belt systems : changes in owner usage over time in GM Chevettes and VW Rabbits

    DOT National Transportation Integrated Search

    1981-08-01

    This study was designed to: (1) determine any decrement in use of the automatic restraint system, and (2) assess any change in owners' attitudes toward the automatic restraint system over a two year period. The information gathered will assist the NH...

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoang Duc, Albert K., E-mail: albert.hoangduc.ucl@gmail.com; McClelland, Jamie; Modat, Marc

    Purpose: The aim of this study was to assess whether clinically acceptable segmentations of organs at risk (OARs) in head and neck cancer can be obtained automatically and efficiently using the novel “similarity and truth estimation for propagated segmentations” (STEPS) algorithm compared to the traditional “simultaneous truth and performance level estimation” (STAPLE) algorithm. Methods: First, 6 OARs were contoured by 2 radiation oncologists in a dataset of 100 patients with head and neck cancer on planning computed tomography images. Each image in the dataset was then automatically segmented with STAPLE and STEPS using those manual contours. The Dice similarity coefficient (DSC) was then used to compare the accuracy of these automatic methods. Second, in a blind experiment, three trained physicians independently graded manual and automatic segmentations into one of three grades: clinically acceptable as determined by universal delineation guidelines (grade A), reasonably acceptable for clinical practice upon manual editing (grade B), and not acceptable (grade C). Finally, STEPS segmentations graded B were selected and one of the physicians manually edited them to grade A. Editing time was recorded. Results: Significant improvements in DSC were seen when using the STEPS algorithm on large structures such as the brainstem, spinal canal, and left/right parotid compared to the STAPLE algorithm (all p < 0.001). In addition, across all three physicians, manual and STEPS segmentation grades were not significantly different for the brainstem, spinal canal, parotid (right/left), and optic chiasm (all p > 0.100). In contrast, STEPS segmentation grades were lower for the eyes (p < 0.001). Across all OARs and all physicians, STEPS produced segmentations graded as well as manual contouring at a rate of 83%, giving a lower bound on this rate of 80% with 95% confidence.
Reduction in manual interaction time was on average 61% and 93% when automatic segmentations did and did not, respectively, require manual editing. Conclusions: The STEPS algorithm showed better performance than the STAPLE algorithm in segmenting OARs for radiotherapy of the head and neck. It can automatically produce clinically acceptable segmentations of OARs, with results as relevant as manual contouring for the brainstem, spinal canal, the parotids (left/right), and optic chiasm. A substantial reduction in manual labor was achieved when using STEPS even when manual editing was necessary.
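The Dice similarity coefficient used above to compare segmentations is simple to compute; a minimal sketch with toy binary masks (not related to the study's CT data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); 1.0 for identical masks."""
    a, b = a.astype(bool), b.astype(bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total else 1.0

# Two toy "segmentations" of a 2D slice
manual = np.zeros((8, 8), dtype=bool)
auto = np.zeros((8, 8), dtype=bool)
manual[2:6, 2:6] = True   # 16 voxels
auto[3:7, 2:6] = True     # 16 voxels, shifted down one row
print(dice(manual, auto))  # → 0.75 (overlap of 12 out of 16 + 16)
```

For 3D organ masks the same formula applies voxel-wise; the shift example shows why DSC penalises even small misregistrations of thin structures.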

  3. Automatic Picking of Foraminifera: Design of the Foraminifera Image Recognition and Sorting Tool (FIRST) Prototype and Results of the Image Classification Scheme

    NASA Astrophysics Data System (ADS)

    de Garidel-Thoron, T.; Marchant, R.; Soto, E.; Gally, Y.; Beaufort, L.; Bolton, C. T.; Bouslama, M.; Licari, L.; Mazur, J. C.; Brutti, J. M.; Norsa, F.

    2017-12-01

    Foraminifera tests are the main proxy carriers for paleoceanographic reconstructions. Both geochemical and taxonomical studies require large numbers of tests to achieve statistical relevance. To date, the extraction of foraminifera from the sediment coarse fraction is still done by hand and is thus time-consuming. Moreover, the recognition of ecologically relevant morphotypes requires taxonomic skills that are not easily taught. The automatic recognition and extraction of foraminifera would greatly help paleoceanographers to overcome these issues. Recent advances in automatic image classification using machine learning open the way to automatic extraction of foraminifera. Here we detail progress on the design of an automatic picking machine as part of the FIRST project. The machine handles 30 pre-sieved samples (100-1000 µm), separating them into individual particles (including foraminifera) and imaging each in pseudo-3D. The particles are classified, and specimens of interest are sorted either for individual foraminifera analyses (44 per slide) and/or for classical multiple analyses (8 morphological classes per slide, up to 1000 individuals per hole). The classification is based on machine learning using convolutional neural networks (CNNs), similar to the approach used in the coccolithophorid imaging system SYRACO. To prove its feasibility, we built two training image datasets of modern planktonic foraminifera containing approximately 2000 and 5000 images, corresponding to 15 and 25 morphological classes respectively. Using a CNN with a residual topology (ResNet), we achieve over 95% correct classification on each dataset. We tested the network on 160,000 images from 45 depths of a sediment core from the Pacific Ocean, for which we have human counts. The current algorithm is able to reproduce the downcore variability in both Globigerinoides ruber and the fragmentation index (r2 = 0.58 and 0.88, respectively).
The FIRST prototype yields some promising results for high-resolution paleoceanographic studies and evolutionary studies.
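The residual ("skip-connection") topology mentioned in this record is the defining idea of ResNet; below is a minimal numpy sketch of a single fully-connected residual block (weights and sizes are hypothetical, and no claim is made about the FIRST network's actual layers):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = relu(x + W2 · relu(W1 · x)): the input is added back
    ("skipped") around the learned transformation."""
    return relu(x + w2 @ relu(w1 @ x))

rng = np.random.default_rng(0)
x = rng.normal(size=16)
w1 = rng.normal(scale=0.1, size=(16, 16))
w2 = rng.normal(scale=0.1, size=(16, 16))
y = residual_block(x, w1, w2)
print(y.shape)  # → (16,)
```

If w2 is all zeros the block reduces to relu(x), i.e. each block can fall back to a near-identity mapping; this is what keeps very deep stacks of such blocks trainable.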

  4. High-throughput protein analysis integrating bioinformatics and experimental assays

    PubMed Central

    del Val, Coral; Mehrle, Alexander; Falkenhahn, Mechthild; Seiler, Markus; Glatting, Karl-Heinz; Poustka, Annemarie; Suhai, Sandor; Wiemann, Stefan

    2004-01-01

    The wealth of transcript information that has been made publicly available in recent years requires the development of high-throughput functional genomics and proteomics approaches for its analysis. Such approaches need suitable data integration procedures and a high level of automation in order to gain maximum benefit from the results generated. We have designed an automatic pipeline to analyse annotated open reading frames (ORFs) stemming from full-length cDNAs produced mainly by the German cDNA Consortium. The ORFs are cloned into expression vectors for use in large-scale assays such as the determination of subcellular protein localization or kinase reaction specificity. Additionally, all identified ORFs undergo exhaustive bioinformatic analysis such as similarity searches, protein domain architecture determination and prediction of physicochemical characteristics and secondary structure, using a wide variety of bioinformatic methods in combination with the most up-to-date public databases (e.g. PRINTS, BLOCKS, INTERPRO, PROSITE SWISSPROT). Data from experimental results and from the bioinformatic analysis are integrated and stored in a relational database (MS SQL-Server), which makes it possible for researchers to find answers to biological questions easily, thereby speeding up the selection of targets for further analysis. The designed pipeline constitutes a new automatic approach to obtaining and administrating relevant biological data from high-throughput investigations of cDNAs in order to systematically identify and characterize novel genes, as well as to comprehensively describe the function of the encoded proteins. PMID:14762202

  5. Development of automatic body condition scoring using a low-cost 3-dimensional Kinect camera.

    PubMed

    Spoliansky, Roii; Edan, Yael; Parmet, Yisrael; Halachmi, Ilan

    2016-09-01

    Body condition scoring (BCS) is a farm-management tool for estimating dairy cows' energy reserves. Today, BCS is performed manually by experts. This paper presents a 3-dimensional algorithm that provides a topographical understanding of the cow's body to estimate BCS. An automatic BCS system consisting of a Kinect camera (Microsoft Corp., Redmond, WA) triggered by a passive infrared motion detector was designed and implemented. Image processing and regression algorithms were developed and included the following steps: (1) image restoration, the removal of noise; (2) object recognition and separation, identification and separation of the cows; (3) movie and image selection, selection of movies and frames that include the relevant data; (4) image rotation, alignment of the cow parallel to the x-axis; and (5) image cropping and normalization, removal of irrelevant data, setting the image size to 150×200 pixels, and normalizing image values. All steps were performed automatically, including image selection and classification. Fourteen individual features per cow, derived from the cows' topography, were automatically extracted from the movies and from the farm's herd-management records. These features appear to be measurable in a commercial farm. Manual BCS was performed by a trained expert and compared with the output of the training set. A regression model was developed, correlating the features with the manual BCS references. Data were acquired for 4 d, resulting in a database of 422 movies of 101 cows. Movies containing cows' back ends were automatically selected (389 movies). The data were divided into a training set of 81 cows and a test set of 20 cows; both sets included the identical full range of BCS classes. Accuracy tests gave a mean absolute error of 0.26, median absolute error of 0.19, and coefficient of determination of 0.75, with 100% correct classification within 1 step and 91% correct classification within a half step for BCS classes. 
Results indicated good repeatability, with all standard deviations under 0.33. The algorithm is independent of the background and requires 10 cows for training with approximately 30 movies of 4 s each. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
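    The accuracy measures quoted above (mean absolute error, median absolute error, coefficient of determination) can be reproduced for any set of predicted and reference BCS values; the sketch below uses made-up scores rather than the study's data:

    ```python
    # Accuracy metrics for a BCS regression model.
    # The score values below are hypothetical examples, not data from the study.

    def accuracy_metrics(predicted, reference):
        """Return mean absolute error, median absolute error and R^2."""
        errors = sorted(abs(p - r) for p, r in zip(predicted, reference))
        n = len(errors)
        mae = sum(errors) / n
        medae = errors[n // 2] if n % 2 else (errors[n // 2 - 1] + errors[n // 2]) / 2
        mean_ref = sum(reference) / n
        ss_res = sum((p - r) ** 2 for p, r in zip(predicted, reference))
        ss_tot = sum((r - mean_ref) ** 2 for r in reference)
        r2 = 1 - ss_res / ss_tot            # coefficient of determination
        return mae, medae, r2

    pred = [3.0, 3.25, 2.75, 3.5, 4.0]      # hypothetical model output
    ref  = [3.25, 3.0, 2.75, 3.75, 4.0]     # hypothetical expert BCS
    mae, medae, r2 = accuracy_metrics(pred, ref)
    ```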

  6. Automated Data Handling And Instrument Control Using Low-Cost Desktop Computers And An IEEE 488 Compatible Version Of The ODETA V.

    NASA Astrophysics Data System (ADS)

    van Leunen, J. A. J.; Dreessen, J.

    1984-05-01

    The result of a measurement of the modulation transfer function is only useful as long as it is accompanied by a complete description of all relevant measuring conditions involved. For this reason it is necessary to file a full description of the relevant measuring conditions together with the results. In earlier times some of our results were rendered useless because some of the relevant measuring conditions were accidentally not written down and were forgotten. This was mainly due to the lack of consensus about which measuring conditions had to be filed together with the result of a measurement. One way to secure uniform and complete archiving of measuring conditions and results is to automate the data handling. An attendant advantage of automating the data handling is that it does away with the time-consuming correction of rough measuring results. The automation of the data handling was accomplished with rather cheap desktop computers, which were nevertheless powerful enough to allow us to automate the measurement as well. After automation of the data handling we started with automatic collection of rough measurement data. Step by step we extended the automation by letting the desktop computer control more and more of the measuring set-up. At present the desktop computer controls all the electrical and most of the mechanical measuring conditions. Further, it controls and reads the MTF measuring instrument. Focusing and orientation optimization can be fully automatic, semi-automatic or completely manual. MTF measuring results can be collected automatically but they can also be typed in by hand. Due to the automation we are able to implement proper archiving of measuring results together with all necessary measuring conditions. The improved measuring efficiency made it possible to increase the number of routine measurements done in the same time period by an order of magnitude. To our surprise the measuring accuracy also improved by a factor of two.
This was due to the much better reproducibility of the automatic optimization, which in turn yielded better reproducibility of the measurement results. Another advantage of the automation is that the programs that control the data handling and the automatic measurement are "user friendly": they guide the operator through the measuring procedure using information from earlier measurements of equivalent test specimens. This makes it possible to have routine measurements done by much less skilled assistants. It also removes much of the tedious routine labour normally involved in MTF measurements. It can be concluded that automating MTF measurements as described above enhances the usefulness of MTF results while reducing the cost of MTF measurements.

  7. Speedup for quantum optimal control from automatic differentiation based on graphics processing units

    NASA Astrophysics Data System (ADS)

    Leung, Nelson; Abdelhafez, Mohamed; Koch, Jens; Schuster, David

    2017-04-01

    We implement a quantum optimal control algorithm based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them in the optimization process with ease. We show that the use of GPUs can speed up calculations by more than an order of magnitude. Our strategy facilitates efficient numerical simulations on affordable desktop computers and exploration of a host of optimization constraints and system parameters relevant to real-life experiments. We demonstrate optimization of quantum evolution based on fine-grained evaluation of performance at each intermediate time step, thus enabling more intricate control on the evolution path, suppression of departures from the truncated model subspace, as well as minimization of the physical time needed to perform high-fidelity state preparation and unitary gates.
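    The core idea the paper exploits, automatic differentiation, can be illustrated independently of GPUs or quantum dynamics. The following minimal forward-mode sketch with dual numbers (an illustration of the principle, not the authors' implementation) shows how exact derivatives propagate through a computation:

    ```python
    # Minimal forward-mode automatic differentiation using dual numbers.
    # Illustrates the autodiff principle only; not the paper's GPU code.
    import math

    class Dual:
        """Carries a value and its derivative through arithmetic."""
        def __init__(self, value, deriv=0.0):
            self.value, self.deriv = value, deriv

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.value + other.value, self.deriv + other.deriv)
        __radd__ = __add__

        def __mul__(self, other):           # product rule
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.value * other.value,
                        self.value * other.deriv + self.deriv * other.value)
        __rmul__ = __mul__

    def sin(x):                             # chain rule for sin
        return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)

    # d/dx [x * sin(x)] at x = 1.0, computed exactly (no finite differences):
    x = Dual(1.0, 1.0)                      # seed the input derivative with 1
    y = x * sin(x)                          # y.deriv holds sin(1) + cos(1)
    ```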

  8. Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition

    NASA Astrophysics Data System (ADS)

    Kim, Jonghwa; André, Elisabeth

    This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of a user's emotional state. For emotion recognition, little attention has so far been paid to physiological signals compared with audio-visual emotion channels such as facial expression or speech. All essential stages of an automatic recognition system using biosignals are discussed, from recording a physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, and multiscale entropy, is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by emotion recognition results.

  9. On the role of conflict and control in social cognition: event-related brain potential investigations.

    PubMed

    Bartholow, Bruce D

    2010-03-01

    Numerous social-cognitive models posit that social behavior largely is driven by links between constructs in long-term memory that automatically become activated when relevant stimuli are encountered. Various response biases have been understood in terms of the influence of such "implicit" processes on behavior. This article reviews event-related potential (ERP) studies investigating the role played by cognitive control and conflict resolution processes in social-cognitive phenomena typically deemed automatic. Neurocognitive responses associated with response activation and conflict often are sensitive to the same stimulus manipulations that produce differential behavioral responses on social-cognitive tasks and that often are attributed to the role of automatic associations. Findings are discussed in the context of an overarching social cognitive neuroscience model in which physiological data are used to constrain social-cognitive theories.

  10. Building a database for statistical characterization of ELMs on DIII-D

    NASA Astrophysics Data System (ADS)

    Fritch, B. J.; Marinoni, A.; Bortolon, A.

    2017-10-01

    Edge localized modes (ELMs) are bursty instabilities which occur in the edge region of H-mode plasmas and have the potential to damage in-vessel components of future fusion machines by exposing the divertor region to large energy and particle fluxes during each ELM event. While most ELM studies focus on average quantities (e.g. energy loss per ELM), this work investigates the statistical distributions of ELM characteristics as a function of plasma parameters. A semi-automatic algorithm is being used to create a database documenting the trigger times of tens of thousands of ELMs in DIII-D discharges in scenarios relevant to ITER, thus allowing statistically significant analysis. Probability distributions of inter-ELM periods and energy losses will be determined and related to relevant plasma parameters such as density, stored energy, and current in order to constrain models and improve estimates of the expected inter-ELM periods and sizes, both of which must be controlled in future reactors. Work supported in part by US DoE under the Science Undergraduate Laboratory Internships (SULI) program, DE-FC02-04ER54698 and DE-FG02-94ER54235.

  11. Reward modulates attention independently of action value in posterior parietal cortex

    PubMed Central

    Peck, Christopher J.; Jangraw, David C.; Suzuki, Mototaka; Efem, Richard; Gottlieb, Jacqueline

    2009-01-01

    While numerous studies explored the mechanisms of reward-based decisions (the choice of action based on expected gain), few asked how reward influences attention (the selection of information relevant for a decision). Here we show that a powerful determinant of attentional priority is the association between a stimulus and an appetitive reward. A peripheral cue heralded the delivery of reward (RC+) or no reward (RC−); to experience the predicted outcome monkeys made a saccade to a target that appeared unpredictably at the same or opposite location relative to the cue. Although the RCs had no operant associations (they did not specify the required saccade), they automatically biased attention, such that the RC+ attracted attention and the RC− repelled attention from their location. Neurons in the lateral intraparietal area (LIP) encoded these attentional biases, maintaining sustained excitation at the location of an RC+ and inhibition at the location of an RC−. Contrary to the hypothesis that LIP encodes action value, neurons did not encode the expected reward of the saccade. Moreover, the cue-evoked biases were maladaptive, interfering with the required saccade, and these biases increased rather than abated with training, strikingly at odds with an adaptive decision process. After prolonged training valence selectivity appeared at shorter latencies and automatically transferred to a novel task context, suggesting that training produced visual plasticity. The results suggest that reward predictors gain automatic attentional priority regardless of their operant associations, and this valence-specific priority is encoded in LIP independently of the expected reward of an action. PMID:19741125

  12. Automated encoding of clinical documents based on natural language processing.

    PubMed

    Friedman, Carol; Shagina, Lyudmila; Lussier, Yves; Hripcsak, George

    2004-01-01

    The aim of this study was to develop a method based on natural language processing (NLP) that automatically maps an entire clinical document to codes with modifiers and to quantitatively evaluate the method. An existing NLP system, MedLEE, was adapted to automatically generate codes. The method involves matching of structured output generated by MedLEE consisting of findings and modifiers to obtain the most specific code. Recall and precision applied to Unified Medical Language System (UMLS) coding were evaluated in two separate studies. Recall was measured using a test set of 150 randomly selected sentences, which were processed using MedLEE. Results were compared with a reference standard determined manually by seven experts. Precision was measured using a second test set of 150 randomly selected sentences from which UMLS codes were automatically generated by the method and then validated by experts. Recall of the system for UMLS coding of all terms was .77 (95% CI .72-.81), and for coding terms that had corresponding UMLS codes recall was .83 (.79-.87). Recall of the system for extracting all terms was .84 (.81-.88). Recall of the experts ranged from .69 to .91 for extracting terms. The precision of the system was .89 (.87-.91), and precision of the experts ranged from .61 to .91. Extraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval.
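    The recall and precision used in this evaluation have a compact definition; the sketch below uses hypothetical code sets (the UMLS-style identifiers are illustrative stand-ins, not MedLEE output):

    ```python
    # Recall and precision of automatic coding against a reference standard.
    # The code sets below are hypothetical examples for illustration.

    def recall_precision(system_codes, reference_codes):
        system, reference = set(system_codes), set(reference_codes)
        true_pos = len(system & reference)
        recall = true_pos / len(reference)     # share of reference codes found
        precision = true_pos / len(system)     # share of system codes correct
        return recall, precision

    reference = {"C0011849", "C0020538", "C0027051", "C0032285"}  # expert codes
    system    = {"C0011849", "C0020538", "C0032285", "C0004096"}  # system codes
    r, p = recall_precision(system, reference)
    ```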

  13. Laboratory review: the role of gait analysis in seniors' mobility and fall prevention.

    PubMed

    Bridenbaugh, Stephanie A; Kressig, Reto W

    2011-01-01

    Walking is a complex motor task generally performed automatically by healthy adults. Yet, by the elderly, walking is often no longer performed automatically. Older adults require more attention for motor control while walking than younger adults. Falls, often with serious consequences, can be the result. Gait impairments are one of the biggest risk factors for falls. Several studies have identified changes in certain gait parameters as independent predictors of fall risk. Such gait changes are often too discrete to be detected by clinical observation alone. At the Basel Mobility Center, we employ the GAITRite electronic walkway system for spatial-temporal gait analysis. Although we have a large range of indications for gait analyses and several areas of clinical research, our focus is on the association between gait and cognition. Gait analysis with walking as a single-task condition alone is often insufficient to reveal underlying gait disorders present during normal, everyday activities. We use a dual-task paradigm, walking while simultaneously performing a second cognitive task, to assess the effects of divided attention on motor performance and gait control. Objective quantification of such clinically relevant gait changes is necessary to determine fall risk. Early detection of gait disorders and fall risk permits early intervention and, in the best-case scenario, fall prevention. We and others have shown that rhythmic movement training such as Jaques-Dalcroze eurhythmics, tai chi and social dancing can improve gait regularity and automaticity, thus increasing gait safety and reducing fall risk. Copyright © 2010 S. Karger AG, Basel.

  14. 42 CFR 435.909 - Automatic entitlement to Medicaid following a determination of eligibility under other programs.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Automatic entitlement to Medicaid following a... & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS... in the States and District of Columbia Applications § 435.909 Automatic entitlement to Medicaid...

  15. Fuzzy mutual information based grouping and new fitness function for PSO in selection of miRNAs in cancer.

    PubMed

    Pal, Jayanta Kumar; Ray, Shubhra Sankar; Pal, Sankar K

    2017-10-01

    MicroRNAs (miRNA) are one of the important regulators of cell division and are also responsible for cancer development. Among the discovered miRNAs, not all are important for cancer detection. In this regard a fuzzy mutual information (FMI) based grouping and miRNA selection method (FMIGS) is developed to identify the miRNAs responsible for a particular cancer. First, the miRNAs are ranked and divided into several groups. Then the most important group is selected among the generated groups. Both steps, viz., ranking of miRNAs and selection of the most relevant group of miRNAs, are performed using FMI. Here the number of groups is automatically determined by the grouping method. After the selection process, redundant miRNAs are removed from the selected set of miRNAs as per the user's necessity. As part of the investigation we propose an FMI-based particle swarm optimization (PSO) method for selecting relevant miRNAs, where FMI is used as a fitness function to determine the fitness of the particles. The effectiveness of FMIGS and FMI-based PSO is tested on five data sets and their efficiency in selecting relevant miRNAs is demonstrated. The superior performance of FMIGS over some existing methods is established, and the biological significance of the selected miRNAs is supported by the findings of the biological investigation and publicly available pathway analysis tools. The source code related to our investigation is available at http://www.jayanta.droppages.com/FMIGS.html. Copyright © 2017 Elsevier Ltd. All rights reserved.
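    The PSO component can be sketched generically: a standard particle swarm loop with conventional parameter values, into which a fitness function is plugged. Here a simple stand-in fitness replaces the paper's FMI-based fitness:

    ```python
    # Generic particle swarm optimization skeleton. The paper's FMI-based
    # fitness is replaced by a simple stand-in (maximize -sum(x_i^2)).
    import random

    def pso(fitness, dim=3, particles=20, iters=200, seed=0):
        rng = random.Random(seed)
        pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
        vel = [[0.0] * dim for _ in range(particles)]
        pbest = [p[:] for p in pos]                     # personal bests
        pbest_fit = [fitness(p) for p in pos]
        g = max(range(particles), key=lambda i: pbest_fit[i])
        gbest, gbest_fit = pbest[g][:], pbest_fit[g]    # global best
        w, c1, c2 = 0.7, 1.5, 1.5       # inertia and acceleration weights
        for _ in range(iters):
            for i in range(particles):
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * rng.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                f = fitness(pos[i])
                if f > pbest_fit[i]:
                    pbest[i], pbest_fit[i] = pos[i][:], f
                    if f > gbest_fit:
                        gbest, gbest_fit = pos[i][:], f
        return gbest, gbest_fit

    stand_in_fitness = lambda x: -sum(v * v for v in x)  # optimum at the origin
    best, best_fit = pso(stand_in_fitness)
    ```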

  16. 49 CFR 1150.32 - Procedures and relevant dates-transactions that involve creation of Class III carriers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... given to shippers. (c) If the notice contains false or misleading information, the exemption is void ab initio. A petition to revoke under 49 U.S.C. 10502(d) does not automatically stay the exemption. Stay...

  17. Salient sounds activate human visual cortex automatically.

    PubMed

    McDonald, John J; Störmer, Viola S; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A

    2013-05-22

    Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, this study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2-4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of colocalized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task.

  18. Salient sounds activate human visual cortex automatically

    PubMed Central

    McDonald, John J.; Störmer, Viola S.; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A.

    2013-01-01

    Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, the present study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2, 3, and 4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of co-localized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task. PMID:23699530

  19. Automatic Selection of Order Parameters in the Analysis of Large Scale Molecular Dynamics Simulations.

    PubMed

    Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S

    2014-12-09

    Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.

  20. Automatic Multiple-Needle Surgical Planning of Robotic-Assisted Microwave Coagulation in Large Liver Tumor Therapy

    PubMed Central

    Liu, Shaoli; Xia, Zeyang; Liu, Jianhua; Xu, Jing; Ren, He; Lu, Tong; Yang, Xiangdong

    2016-01-01

    The “robotic-assisted liver tumor coagulation therapy” (RALTCT) system is a promising candidate for large liver tumor treatment in terms of accuracy and speed. A prerequisite for effective therapy is accurate surgical planning. However, it is difficult for the surgeon to perform surgical planning manually due to the difficulties associated with robot-assisted large liver tumor therapy. These difficulties include the following aspects: (1) multiple needles are needed to destroy the entire tumor, (2) the insertion trajectories of the needles should avoid the ribs, blood vessels, and other tissues and organs in the abdominal cavity, (3) the placement of multiple needles should avoid interference with each other, (4) an inserted needle will cause some deformation of the liver, which will result in changes in the operating environment for subsequently inserted needles, and (5) the multiple needle-insertion trajectories should be consistent with the needle-driven robot’s movement characteristics. Thus, an effective multiple-needle surgical planning procedure is needed. To overcome these problems, we present automatic multiple-needle surgical planning of optimal insertion trajectories to the targets, based on a mathematical description of all relevant structure surfaces. The method determines the analytical expression of the boundaries of every needle’s “collision-free reachable workspace” (CFRW), the feasible insertion zones under several constraints. Then, the optimal needle insertion trajectory within the optimization criteria is chosen in the needle CFRW automatically. The results can also be visualized with our navigation system. In the simulation experiment, three needle-insertion trajectories were obtained successfully. In the in vitro experiment, the robot successfully achieved insertion of multiple needles.
The proposed automatic multiple-needle surgical planning can improve the efficiency and safety of robot-assisted large liver tumor therapy, significantly reduce the surgeon’s workload, and is especially helpful for an inexperienced surgeon. The methodology should be easy to adapt to other body parts. PMID:26982341

  1. Retrieving definitional content for ontology development.

    PubMed

    Smith, L; Wilbur, W J

    2004-12-01

    Ontology construction requires an understanding of the meaning and usage of its encoded concepts. While definitions found in dictionaries or glossaries may be adequate for many concepts, the actual usage in expert writing could be a better source of information for many others. The goal of this paper is to describe an automated procedure for finding definitional content in expert writing. The approach uses machine learning on phrasal features to learn when sentences in a book contain definitional content, as determined by their similarity to glossary definitions provided in the same book. The end result is not a concise definition of a given concept, but for each sentence, a predicted probability that it contains information relevant to a definition. The approach is evaluated automatically for terms with explicit definitions, and manually for terms with no available definition.

  2. A Clinical Decision Support System for Breast Cancer Patients

    NASA Astrophysics Data System (ADS)

    Fernandes, Ana S.; Alves, Pedro; Jarman, Ian H.; Etchells, Terence A.; Fonseca, José M.; Lisboa, Paulo J. G.

    This paper proposes a Web-based clinical decision support system that helps clinical oncologists and breast cancer patients make prognostic assessments using the particular characteristics of the individual patient. The system comprises three different prognostic modelling methodologies: the clinically widely used Nottingham prognostic index (NPI), Cox regression modelling, and a partial logistic artificial neural network with automatic relevance determination (PLANN-ARD). All three models yield a different prognostic index; the indices can be analysed together in order to obtain a more accurate prognostic assessment of the patient. Missing data, a common issue in medical datasets, are handled in the models using multiple imputation techniques. Risk group assignments are also provided through a methodology based on regression trees, from which Boolean rules expressed in terms of patient characteristics can be obtained.
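    Of the three models, only the NPI is a simple closed-form score: 0.2 × tumour size in cm, plus node stage (1-3), plus histological grade (1-3). The sketch below uses commonly quoted risk-group cut-offs, which individual clinics may vary:

    ```python
    # Nottingham prognostic index: 0.2 * size (cm) + node stage + grade.
    # Risk-group cut-offs are commonly quoted values; clinics may differ.

    def npi(size_cm, node_stage, grade):
        assert node_stage in (1, 2, 3) and grade in (1, 2, 3)
        # Round to 2 decimals to avoid floating-point artifacts at cut-offs.
        return round(0.2 * size_cm + node_stage + grade, 2)

    def npi_group(score):
        if score <= 3.4:
            return "good"
        if score <= 5.4:
            return "moderate"
        return "poor"

    score = npi(size_cm=2.0, node_stage=1, grade=2)   # 0.4 + 1 + 2 = 3.4
    ```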

  3. Mathematical support for automated geometry analysis of lathe machining of oblique peakless round-nose tools

    NASA Astrophysics Data System (ADS)

    Filippov, A. V.; Tarasov, S. Yu; Podgornyh, O. A.; Shamarin, N. N.; Filippova, E. O.

    2017-01-01

    Automation of engineering processes requires developing the relevant mathematical support and computer software. Analysis of metal cutting kinematics and tool geometry is a necessary key task at the preproduction stage. This paper is focused on developing a procedure for determining the geometry of lathe machining with oblique peakless round-nose tools using vector/matrix transformations. Such an approach allows integration into modern mathematical software packages, in distinction to the traditional analytic description, and is therefore very promising for developing automated control of the preproduction process. A kinematic criterion for applicable tool geometry has been developed from the results of this study. The effect of tool blade inclination and curvature on the geometry-dependent process parameters was evaluated.

  4. Using Bayesian networks to support decision-focused information retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehner, P.; Elsaesser, C.; Seligman, L.

    This paper describes an approach to controlling the process of pulling data and information from distributed databases in a way that is specific to a person's decision-making context. Our prototype implementation of this approach uses a knowledge-based planner to generate a plan; an automatically constructed Bayesian network to evaluate the plan; specialized processing of the network to derive key information items that would substantially impact the evaluation of the plan (e.g., determine that replanning is needed); and automated construction of Standing Requests for Information (SRIs), which are automated functions that monitor changes and trends in the distributed databases that are relevant to the key information items. The emphasis of this paper is on how the Bayesian networks are used.
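    The plan-evaluation step can be illustrated with a hand-built miniature network; the paper's networks are constructed automatically and are far larger, and all structure and probabilities below are hypothetical:

    ```python
    # Hand-built two-node Bayesian network evaluated by enumeration.
    # Structure and probabilities are hypothetical, for illustration only:
    #   RouteClear -> PlanSucceeds
    p_route_clear = 0.7
    p_success_given = {True: 0.9, False: 0.3}   # P(PlanSucceeds | RouteClear)

    def p_plan_succeeds():
        """Marginal P(PlanSucceeds), summing over the parent's states."""
        return sum(
            (p_route_clear if clear else 1 - p_route_clear) * p_success_given[clear]
            for clear in (True, False)
        )

    def p_route_clear_given_success():
        """Posterior P(RouteClear | PlanSucceeds) via Bayes' rule."""
        return p_route_clear * p_success_given[True] / p_plan_succeeds()
    ```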

  5. Testing the idea of privileged awareness of self-relevant information.

    PubMed

    Stein, Timo; Siebold, Alisha; van Zoest, Wieske

    2016-03-01

    Self-relevant information is prioritized in processing. Some have suggested that the mechanism driving this advantage is akin to the automatic prioritization of physically salient stimuli in information processing (Humphreys & Sui, 2015). Here we investigate whether self-relevant information is prioritized for awareness under continuous flash suppression (CFS), as has been found for physical salience. Gabor patches with different orientations were first associated with the labels You or Other. Participants were more accurate in matching the self-relevant association, replicating previous findings of self-prioritization. However, breakthrough into awareness from CFS did not differ between self- and other-associated Gabors. These findings demonstrate that self-relevant information has no privileged access to awareness. Rather than modulating the initial visual processes that precede and lead to awareness, the advantage of self-relevant information may better be characterized as prioritization at later processing stages. (c) 2016 APA, all rights reserved.

  6. Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation.

    PubMed

    Pereira, Sérgio; Meier, Raphael; McKinley, Richard; Wiest, Roland; Alves, Victor; Silva, Carlos A; Reyes, Mauricio

    2018-02-01

    Machine learning systems are achieving better performances at the cost of becoming increasingly complex. In consequence, however, they become less interpretable, which may cause some distrust by the end-user of the system. This is especially important as these systems are pervasively being introduced to critical domains, such as the medical field. Representation Learning techniques are general methods for automatic feature computation. Nevertheless, these techniques are regarded as uninterpretable "black boxes". In this paper, we propose a methodology to enhance the interpretability of automatically extracted machine learning features. The proposed system is composed of a Restricted Boltzmann Machine for unsupervised feature learning, and a Random Forest classifier, which are combined to jointly consider existing correlations between imaging data, features, and target variables. We define two levels of interpretation: global and local. The former is devoted to understanding if the system learned the relevant relations in the data correctly, while the latter is focused on predictions performed at the voxel and patient level. In addition, we propose a novel feature importance strategy that considers both imaging data and target variables, and we demonstrate the ability of the approach to leverage the interpretability of the obtained representation for the task at hand. We evaluated the proposed methodology in brain tumor segmentation and penumbra estimation in ischemic stroke lesions. We show the ability of the proposed methodology to unveil information regarding relationships between imaging modalities and extracted features and their usefulness for the task at hand. In both clinical scenarios, we demonstrate that the proposed methodology enhances the interpretability of automatically learned features, highlighting specific learning patterns that resemble how an expert extracts relevant data from medical images. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Self-adaptive relevance feedback based on multilevel image content analysis

    NASA Astrophysics Data System (ADS)

    Gao, Yongying; Zhang, Yujin; Fu, Yu

    2001-01-01

    In current content-based image retrieval systems, it is generally accepted that obtaining high-level image features is key to improving querying. Among the related techniques, relevance feedback has become an active research area because it uses information from the user to refine the querying results. In practice, many methods have been proposed to achieve relevance feedback. In this paper, a new scheme for relevance feedback is proposed. Unlike previous methods, our scheme provides a self-adaptive operation. First, based on multilevel image content analysis, the images marked relevant by the user can be automatically analyzed at different levels and the query can be modified according to the analysis results. Second, for the user's convenience, the relevance-feedback procedure can be conducted with or without memory. To test the performance of the proposed method, a practical semantic-based image retrieval system has been established, and the querying results obtained by our self-adaptive relevance feedback are given.

  9. A semi-automatic annotation tool for cooking video

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo; Margherita, Roberto; Marini, Gianluca; Gianforme, Giorgio; Pantaleo, Giuseppe

    2013-03-01

    In order to create a cooking assistant application to guide users in the preparation of dishes relevant to their dietary profiles and food preferences, it is necessary to accurately annotate the video recipes, identifying and tracking the foods handled by the cook. These videos present particular annotation challenges such as frequent occlusions, changes in food appearance, etc. Manually annotating the videos is a time-consuming, tedious and error-prone task. Fully automatic tools that integrate computer vision algorithms to extract and identify the elements of interest are not error free, and false positive and false negative detections need to be corrected in a post-processing stage. We present an interactive, semi-automatic tool for the annotation of cooking videos that integrates computer vision techniques under the supervision of the user. The annotation accuracy is increased with respect to completely automatic tools, and the human effort is reduced with respect to completely manual ones. The performance and usability of the proposed tool are evaluated on the basis of the time and effort required to annotate the same video sequences.

  10. Automatic indexing in a drug information portal.

    PubMed

    Sakji, Saoussen; Letord, Catherine; Dahamna, Badisse; Kergourlay, Ivan; Pereira, Suzanne; Joubert, Michel; Darmoni, Stéfan

    2009-01-01

    The objective of this work is to create a bilingual (French/English) Drug Information Portal (DIP) in a multi-terminological context and to enhance its exploitation through ATC automatic indexing, providing more pertinent information about the substances, organs or systems on which drugs act and about their therapeutic and chemical characteristics. The development of the DIP was based on the CISMeF portal, which catalogues and indexes the most important and quality-controlled sources of institutional health information in French. DIP provides specific functionalities and uses specific drug terminologies, such as the ATC classification, which is used to automatically index the DIP resources. DIP is the result of collaboration between the CISMeF team and the VIDAL Company, which specializes in drug information. DIP is designed to facilitate user information retrieval. The ATC automatic indexing provided relevant results in 76% of cases. In a multi-terminological context, and within the drug field, indexing drugs with the appropriate codes and/or terms proved very important for appropriate information storage and retrieval. The main challenge in the coming year is to increase the accuracy of the approach.

  11. Microscale pH Titrations Using an Automatic Pipet.

    ERIC Educational Resources Information Center

    Flint, Edward B.; Kortz, Carrie L.; Taylor, Max A.

    2002-01-01

    Presents a microscale pH titration technique that utilizes an automatic pipet. A small aliquot (1-5 mL) of the analyte solution is titrated with repeated additions of titrant, and the pH is determined after each delivery. The equivalence point is determined graphically by either the second derivative method or a Gran plot. The pipet can be…
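
    The graphical second-derivative method this record mentions can be sketched numerically: the equivalence point is where the pH rises most steeply, i.e. where the second derivative of pH versus titrant volume changes sign. The data below are hypothetical, for illustration only:

```python
import numpy as np

# synthetic titration readings (volumes in mL, hypothetical values)
volume = np.array([1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4])
pH     = np.array([4.0, 4.2, 4.5, 5.1, 8.9, 10.1, 10.5, 10.7])

d1 = np.gradient(pH, volume)   # dpH/dV: peaks at the equivalence point
d2 = np.gradient(d1, volume)   # second derivative: changes sign there
i = int(np.argmax(d1))         # steepest rise ~ equivalence point
print(volume[i])               # 1.8
```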

  12. Evidence-Based Diagnosis and Treatment for Specific Learning Disabilities Involving Impairments in Written and/or Oral Language

    ERIC Educational Resources Information Center

    Berninger, Virginia W.; May, Maggie O'Malley

    2011-01-01

    Programmatic, multidisciplinary research provided converging brain, genetic, and developmental support for evidence-based diagnoses of three specific learning disabilities based on hallmark phenotypes (behavioral expression of underlying genotypes) with treatment relevance: dysgraphia (impaired legible automatic letter writing, orthographic…

  13. LWPC: Long Wavelength Propagation Capability

    NASA Astrophysics Data System (ADS)

    U. S. Navy; Ferguson, J. A.; Hutchins, Michael

    2018-03-01

    Long Wavelength Propagation Capability (LWPC), written as a collection of separate programs that perform unique actions, generates geographical maps of signal availability for coverage analysis. The program makes it easy to set up these displays by automating most of the required steps. The user specifies the transmitter location and frequency, the orientation of the transmitting and receiving antennae, and the boundaries of the operating area. The program automatically selects paths along geographic bearing angles to ensure that the operating area is fully covered. The diurnal conditions and other relevant geophysical parameters are then determined along each path. After the mode parameters along each path are determined, the signal strength along each path is computed and interpolated onto a grid overlying the operating area. The final grid of signal-strength values is used to produce a geographic display. The LWPC uses character strings to control the programs and to specify options. The control strings have the same meaning and use among all the programs.

  14. Automatic diet monitoring: a review of computer vision and wearable sensor-based methods.

    PubMed

    Hassannejad, Hamid; Matrella, Guido; Ciampolini, Paolo; De Munari, Ilaria; Mordonini, Monica; Cagnoni, Stefano

    2017-09-01

    Food intake and eating habits have a significant impact on people's health. Widespread diseases, such as diabetes and obesity, are directly related to eating habits. Therefore, monitoring diet can be a substantial basis for developing methods and services to promote a healthy lifestyle and improve personal and national health economy. Studies have demonstrated that manual reporting of food intake is inaccurate and often impractical. Thus, several methods have been proposed to automate the process. This article reviews the most relevant and recent research on automatic diet monitoring, discussing strengths and weaknesses. In particular, the article reviews two approaches to this problem, which account for most of the work in the area. The first approach is based on image analysis and aims at extracting information about food content automatically from food images. The second relies on wearable sensors, with the detection of eating behaviours as its main goal.

  15. Evaluation of Model Recognition for Grammar-Based Automatic 3d Building Model Reconstruction

    NASA Astrophysics Data System (ADS)

    Yu, Qian; Helmholz, Petra; Belton, David

    2016-06-01

    In recent years, 3D city models have been in high demand by many public and private organisations, and demands on both their quality and quantity are steadily growing. The quality evaluation of these 3D models is a relevant issue from both the scientific and practical points of view. In this paper, we present a method for the quality evaluation of 3D building models which are reconstructed automatically from terrestrial laser scanning (TLS) data based on an attributed building grammar. The entire evaluation process is performed in all three dimensions in terms of completeness and correctness of the reconstruction. Six quality measures are introduced and applied to four datasets of reconstructed building models in order to describe the quality of the automatic reconstruction, and their validity is assessed from the evaluation point of view.
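
    Completeness and correctness, as named in the record above, are conventionally defined from true/false positives and negatives (recall and precision, respectively); the paper's six specific measures are not reproduced here, only the standard definitions they build on:

```python
# Standard completeness/correctness definitions used in reconstruction evaluation.
def completeness(tp: int, fn: int) -> float:
    return tp / (tp + fn)      # fraction of the reference that was captured

def correctness(tp: int, fp: int) -> float:
    return tp / (tp + fp)      # fraction of the reconstruction that is valid

print(completeness(80, 20), correctness(80, 10))  # 0.8 0.888...
```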

  16. Knowing who's boss: implicit perceptions of status from the nonverbal expression of pride.

    PubMed

    Shariff, Azim F; Tracy, Jessica L

    2009-10-01

    Evolutionary theory suggests that the universal recognition of nonverbal expressions of emotions functions to enhance fitness. Specifically, emotion expressions may send survival-relevant messages to other social group members, who have the capacity to automatically interpret these signals. In the present research, we used 3 different implicit association methodologies to test whether the nonverbal expression of pride sends a functional, automatically perceived signal about a social group member's increased social status. Results suggest that the pride expression strongly signals high status, and this association cannot be accounted for by positive valence or artifacts of the expression such as expanded size due to outstretched arms. These findings suggest that the pride expression may function to uniquely communicate the high status of those who show it. Discussion focuses on the implications of these findings for social functions of emotion expressions and the automatic communication of status.

  17. Automatic and manual segmentation of healthy retinas using high-definition optical coherence tomography.

    PubMed

    Golbaz, Isabelle; Ahlers, Christian; Goesseringer, Nina; Stock, Geraldine; Geitzenauer, Wolfgang; Prünte, Christian; Schmidt-Erfurth, Ursula Margarethe

    2011-03-01

    This study compared automatic and manual segmentation modalities in the retina of healthy eyes using high-definition optical coherence tomography (HD-OCT). Twenty retinas in 20 healthy individuals were examined using an HD-OCT system (Carl Zeiss Meditec, Inc.). Three-dimensional imaging was performed with an axial resolution of 6 μm at a maximum scanning speed of 25,000 A-scans/second. Volumes of 6 × 6 × 2 mm were scanned. Scans were analysed using a MATLAB-based algorithm and a manual segmentation software system (3D-Doctor). The volume values calculated by the two methods were compared. Statistical analysis revealed a high correlation between automatic and manual modes of segmentation. The automatic mode of measuring retinal volume and the corresponding three-dimensional images provided similar results to the manual segmentation procedure. Both methods were able to visualize retinal and subretinal features accurately. This study compared two methods of assessing retinal volume using HD-OCT scans in healthy retinas. Both methods were able to provide realistic volumetric data when applied to raster scan sets. Manual segmentation methods represent an adequate tool with which to control automated processes and to identify clinically relevant structures, whereas automatic procedures will be needed to obtain data in larger patient populations. © 2009 The Authors. Journal compilation © 2009 Acta Ophthalmol.

  18. Recent Developments in the External Conjugate-T Matching Project at JET

    NASA Astrophysics Data System (ADS)

    Monakhov, I.; Walden, A.

    2007-09-01

    The External Conjugate-T (ECT) matching system is planned for installation on two A2 ICRH antenna arrays at JET in 2007. This will enhance the operational capabilities of the RF plant during ELMy plasma scenarios and create new opportunities for ITER-relevant matching studies. The main features of the project are discussed in the paper focusing on the specific challenges of the ECT automatic matching and arc detection in optimized ELM-tolerant configurations. A `co/counter-clockwise' automatic control mode selection and an Advanced Wave Amplitude Comparison System (AWACS) complementing the existing VSWR monitoring are proposed as simple and viable solutions to the identified problems.

  20. A Machine Vision System for Automatically Grading Hardwood Lumber - (Proceedings)

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas H. Drayer; Joe G. Tront; Philip A. Araman; Robert L. Brisbon

    1990-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  1. Dissociation between controlled and automatic processes in the behavioral variant of fronto-temporal dementia.

    PubMed

    Collette, Fabienne; Van der Linden, Martial; Salmon, Eric

    2010-01-01

    A decline of cognitive functioning affecting several cognitive domains has frequently been reported in patients with frontotemporal dementia. We were interested in determining whether these deficits can be interpreted as reflecting an impairment of controlled cognitive processes, using an assessment tool specifically developed to explore the distinction between automatic and controlled processes, namely the process dissociation procedure (PDP) developed by Jacoby. The PDP was applied to a word-stem completion task to determine the contribution of automatic and controlled processes to episodic memory performance, and was administered to a group of 12 patients with the behavioral variant of frontotemporal dementia (bv-FTD) and 20 control subjects (CS). Bv-FTD patients obtained a lower performance than CS for the estimates of controlled processes, but no group difference was observed for estimates of automatic processes. The between-groups comparison of the estimates of controlled and automatic processes showed a larger contribution of automatic processes to performance in bv-FTD, while a slightly larger contribution of controlled processes was observed in control subjects. These results are clearly indicative of an alteration of controlled memory processes in bv-FTD.
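
    Jacoby's process dissociation equations, as standardly stated (a sketch; the study's exact estimation details may differ): with inclusion and exclusion completion probabilities I and E, the controlled estimate is C = I - E and the automatic estimate is A = E / (1 - C), for C < 1:

```python
# Standard PDP estimates from inclusion/exclusion completion probabilities.
def pdp_estimates(inclusion: float, exclusion: float) -> tuple[float, float]:
    c = inclusion - exclusion                          # controlled contribution
    a = exclusion / (1.0 - c) if c < 1.0 else float("nan")  # automatic contribution
    return c, a

c, a = pdp_estimates(0.60, 0.25)   # hypothetical completion rates
print(c, a)                        # 0.35 and ~0.385
```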

  2. Writing for the Robot: How Employer Search Tools Have Influenced Resume Rhetoric and Ethics

    ERIC Educational Resources Information Center

    Amare, Nicole; Manning, Alan

    2009-01-01

    To date, business communication scholars and textbook writers have encouraged resume rhetoric that accommodates technology, for example, recommending keyword-enhancing techniques to attract the attention of searchbots: customized search engines that allow companies to automatically scan resumes for relevant keywords. However, few scholars have…

  3. SABRE--A Novel Software Tool for Bibliographic Post-Processing.

    ERIC Educational Resources Information Center

    Burge, Cecil D.

    1989-01-01

    Describes the software architecture and application of SABRE (Semi-Automated Bibliographic Environment), which is one of the first products to provide a semi-automatic environment for relevancy ranking of citations obtained from searches of bibliographic databases. Features designed to meet the review, categorization, culling, and reporting needs…

  4. Design and implementation of an intranet dashboard.

    PubMed

    Wolpin, S E

    2005-01-01

    Healthcare organizations are complex systems and well served by efficient feedback mechanisms. Many organizations have invested in data warehouses; however there are few tools for automatically extracting and delivering relevant measures to decision makers. This research study resulted in the design and implementation of an intranet dashboard linked to a data warehouse.

  5. GeneTopics - interpretation of gene sets via literature-driven topic models

    PubMed Central

    2013-01-01

    Background Annotation of a set of genes is often accomplished through comparison to a library of labelled gene sets such as biological processes or canonical pathways. However, this approach might fail if the employed libraries are not up to date with the latest research, don't capture relevant biological themes or are curated at a different level of granularity than is required to appropriately analyze the input gene set. At the same time, the vast biomedical literature offers an unstructured repository of the latest research findings that can be tapped to provide thematic sub-groupings for any input gene set. Methods Our proposed method relies on a gene-specific text corpus and extracts commonalities between documents in an unsupervised manner using a topic model approach. We automatically determine the number of topics summarizing the corpus and calculate a gene relevancy score for each topic allowing us to eliminate non-specific topics. As a result we obtain a set of literature topics in which each topic is associated with a subset of the input genes providing directly interpretable keywords and corresponding documents for literature research. Results We validate our method based on labelled gene sets from the KEGG metabolic pathway collection and the genetic association database (GAD) and show that the approach is able to detect topics consistent with the labelled annotation. Furthermore, we discuss the results on three different types of experimentally derived gene sets, (1) differentially expressed genes from a cardiac hypertrophy experiment in mice, (2) altered transcript abundance in human pancreatic beta cells, and (3) genes implicated by GWA studies to be associated with metabolite levels in a healthy population. In all three cases, we are able to replicate findings from the original papers in a quick and semi-automated manner. 
Conclusions Our approach provides a novel way of automatically generating meaningful annotations for gene sets that are directly tied to relevant articles in the literature. Extending a general topic model method, the approach introduced here establishes a workflow for the interpretation of gene sets generated from diverse experimental scenarios that can complement the classical approach of comparison to reference gene sets. PMID:24564875
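
    The unsupervised topic-model step described above can be roughly sketched with scikit-learn's LDA on a toy "gene literature" corpus; the paper's corpus construction, automatic topic-number selection, and gene relevancy score are not reproduced here:

```python
# Toy LDA sketch: extract topic mixtures from a tiny synthetic corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "insulin secretion beta cell glucose",
    "glucose metabolism beta cell insulin",
    "cardiac hypertrophy heart muscle growth",
    "heart muscle cardiac growth signaling",
]
X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)      # per-document topic mixture
print(doc_topics.shape)  # (4, 2)
```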

  6. Automatic Determination of the Conic Coronal Mass Ejection Model Parameters

    NASA Technical Reports Server (NTRS)

    Pulkkinen, A.; Oates, T.; Taktakishvili, A.

    2009-01-01

    Characterization of the three-dimensional structure of solar transients using incomplete plane-of-sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper transients are characterized in three dimensions by means of the conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.
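
    The bootstrap idea used above to obtain parameter distributions can be sketched minimally: refit the parameter on resampled data many times and read off the spread. The cone-model inversion itself is not reproduced; the data here are synthetic:

```python
# Bootstrap a parameter distribution (here: the mean of synthetic half-angles).
import numpy as np

rng = np.random.default_rng(0)
measurements = rng.normal(loc=45.0, scale=5.0, size=40)  # synthetic half-angles (deg)

boot = np.array([
    rng.choice(measurements, size=measurements.size, replace=True).mean()
    for _ in range(1000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])   # 95% bootstrap interval
print(round(lo, 1), round(hi, 1))
```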

  7. Automatic medical image annotation and keyword-based image retrieval using relevance feedback.

    PubMed

    Ko, Byoung Chul; Lee, JiHyeon; Nam, Jae-Yeal

    2012-08-01

    This paper presents novel multiple keywords annotation for medical images, keyword-based medical image retrieval, and relevance feedback method for image retrieval for enhancing image retrieval performance. For semantic keyword annotation, this study proposes a novel medical image classification method combining local wavelet-based center symmetric-local binary patterns with random forests. For keyword-based image retrieval, our retrieval system use the confidence score that is assigned to each annotated keyword by combining probabilities of random forests with predefined body relation graph. To overcome the limitation of keyword-based image retrieval, we combine our image retrieval system with relevance feedback mechanism based on visual feature and pattern classifier. Compared with other annotation and relevance feedback algorithms, the proposed method shows both improved annotation performance and accurate retrieval results.

  8. Segmentation of stereo terrain images

    NASA Astrophysics Data System (ADS)

    George, Debra A.; Privitera, Claudio M.; Blackmon, Theodore T.; Zbinden, Eric; Stark, Lawrence W.

    2000-06-01

    We have studied four approaches to segmentation of images: three automatic ones using image processing algorithms and a fourth approach, human manual segmentation. We were motivated toward helping with an important NASA Mars rover mission task -- replacing laborious manual path planning with automatic navigation of the rover on the Mars terrain. The goal of the automatic segmentations was to identify an obstacle map on the Mars terrain to enable automatic path planning for the rover. The automatic segmentation was first explored with two different segmentation methods: one based on pixel luminance, and the other based on pixel altitude generated through stereo image processing. The third automatic segmentation was achieved by combining these two types of image segmentation. Human manual segmentation of Martian terrain images was used for evaluating the effectiveness of the combined automatic segmentation as well as for determining how different humans segment the same images. Comparisons between two different segmentations, manual or automatic, were measured using a similarity metric, S_AB. Based on this metric, the combined automatic segmentation did fairly well in agreeing with the manual segmentation. This was a demonstration of a positive step towards automatically creating the accurate obstacle maps necessary for automatic path planning and rover navigation.
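
    The record names a similarity metric between segmentations without defining it; as a clearly labeled stand-in, this sketch uses the Jaccard overlap between two binary masks, a common choice for comparing segmentations:

```python
# Jaccard overlap between two binary segmentation masks (illustrative stand-in).
import numpy as np

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()   # pixels labeled obstacle in both
    union = np.logical_or(a, b).sum()    # pixels labeled obstacle in either
    return inter / union if union else 1.0

m1 = np.array([[1, 1, 0], [0, 1, 0]])
m2 = np.array([[1, 0, 0], [0, 1, 1]])
print(jaccard(m1, m2))  # 2 shared / 4 in union = 0.5
```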

  9. CT-guided automated detection of lung tumors on PET images

    NASA Astrophysics Data System (ADS)

    Cui, Yunfeng; Zhao, Binsheng; Akhurst, Timothy J.; Yan, Jiayong; Schwartz, Lawrence H.

    2008-03-01

    The calculation of standardized uptake values (SUVs) in tumors on serial [ 18F]2-fluoro-2-deoxy-D-glucose ( 18F-FDG) positron emission tomography (PET) images is often used for the assessment of therapy response. We present a computerized method that automatically detects lung tumors on 18F-FDG PET/Computed Tomography (CT) images using both anatomic and metabolic information. First, on CT images, relevant organs, including lung, bone, liver and spleen, are automatically identified and segmented based on their locations and intensity distributions. Hot spots (SUV >= 1.5) on 18F-FDG PET images are then labeled using the connected component analysis. The resultant "hot objects" (geometrically connected hot spots in three dimensions) that fall into, reside at the edges or are in the vicinity of the lungs are considered as tumor candidates. To determine true lesions, further analyses are conducted, including reduction of tumor candidates by the masking out of hot objects within CT-determined normal organs, and analysis of candidate tumors' locations, intensity distributions and shapes on both CT and PET. The method was applied to 18F-FDG-PET/CT scans from 9 patients, on which 31 target lesions had been identified by a nuclear medicine radiologist during a Phase II lung cancer clinical trial. Out of 31 target lesions, 30 (97%) were detected by the computer method. However, sensitivity and specificity were not estimated because not all lesions had been marked up in the clinical trial. The method effectively excluded the hot spots caused by mediastinum, liver, spleen, skeletal muscle and bone metastasis.
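
    The "hot spot" labeling step described above can be sketched with scipy's connected-component labeling: threshold the SUV map at 1.5 and group connected voxels into hot objects (the organ masking and candidate filtering stages are not reproduced):

```python
# Threshold an SUV map and label connected "hot objects" (2D toy example).
import numpy as np
from scipy import ndimage

suv = np.array([
    [0.2, 1.8, 1.9, 0.1],
    [0.3, 1.7, 0.2, 0.1],
    [0.1, 0.2, 0.3, 2.1],
])
labels, n_objects = ndimage.label(suv >= 1.5)   # default 4-connectivity
print(n_objects)  # 2 hot objects
```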

  10. Numerosity processing is context driven even in the subitizing range: An fMRI study

    PubMed Central

    Leibovich, Tali; Henik, Avishai; Salti, Moti

    2015-01-01

    Numerical judgments are involved in almost every aspect of our daily life. They are carried out so efficiently that they are often considered to be automatic and innate. However, numerosity of non-symbolic stimuli is highly correlated with its continuous properties (e.g., density, area), and so it is hard to determine whether numerosity and continuous properties rely on the same mechanism. Here we examined the behavioral and neuronal mechanisms underlying such judgments. We scanned subjects' hemodynamic responses to a numerosity comparison task and to a surface area comparison task. In these tasks, numerical and continuous magnitudes could be either congruent or incongruent. Behaviorally, an interaction between the order of the tasks and the relevant dimension modulated the congruency effects. Continuous magnitudes always interfered with numerosity comparison. Numerosity, on the other hand, interfered with the surface area comparison only when participants began with the numerosity task. Hemodynamic activity showed that context (induced by task order) determined the neuronal pathways in which the dimensions were processed. Starting with the numerosity task led to enhanced activity in the right hemisphere, while starting with the continuous task led to enhanced left hemisphere activity. Continuous magnitudes processing relied on activation of the frontal eye field and the post-central gyrus. Processing of numerosities, on the other hand, relied on deactivation of these areas, suggesting active suppression of the continuous dimension. Accordingly, we suggest that numerosities, even in the subitizing range, are not always processed automatically; their processing depends on context and task demands. PMID:26297625

  11. Determining Surface Roughness in Urban Areas Using Lidar Data

    NASA Technical Reports Server (NTRS)

    Holland, Donald

    2009-01-01

    An automated procedure has been developed to derive relevant factors, which can increase the ability to produce objective, repeatable methods for determining aerodynamic surface roughness. Aerodynamic surface roughness is used for many applications, like atmospheric dispersive models and wind-damage models. For this technique, existing lidar data was used that was originally collected for terrain analysis, and demonstrated that surface roughness values can be automatically derived, and then subsequently utilized in disaster-management and homeland security models. The developed lidar-processing algorithm effectively distinguishes buildings from trees and characterizes their size, density, orientation, and spacing (see figure); all of these variables are parameters that are required to calculate the estimated surface roughness for a specified area. By using this algorithm, aerodynamic surface roughness values in urban areas can then be extracted automatically. The user can also adjust the algorithm for local conditions and lidar characteristics, like summer/winter vegetation and dense/sparse lidar point spacing. Additionally, the user can also survey variations in surface roughness that occurs due to wind direction; for example, during a hurricane, when wind direction can change dramatically, this variable can be extremely significant. In its current state, the algorithm calculates an estimated surface roughness for a square kilometer area; techniques using the lidar data to calculate the surface roughness for a point, whereby only roughness elements that are upstream from the point of interest are used and the wind direction is a vital concern, are being investigated. This technological advancement will improve the reliability and accuracy of models that use and incorporate surface roughness.

  12. The design of a fast Fourier filter for enhancing diagnostically relevant structures - endodontic files.

    PubMed

    Bruellmann, Dan; Sander, Steven; Schmidtmann, Irene

    2016-05-01

    The endodontic working length is commonly determined by electronic apex locators and intraoral periapical radiographs. No algorithms for the automatic detection of endodontic files in dental radiographs have been described in the recent literature. Teeth from the mandibles of pig cadavers were accessed, and digital radiographs of these specimens were obtained using an optical bench. The specimens were then recorded in identical positions and settings after the insertion of endodontic files of known sizes (ISO sizes 10-15). The frequency bands generated by the endodontic files were determined using fast Fourier transforms (FFTs) to convert the resulting images into frequency spectra. The detected frequencies were used to design a pre-segmentation filter, which was programmed using Delphi XE RAD Studio software (Embarcadero Technologies, San Francisco, USA) and tested on 20 radiographs. For performance evaluation purposes, the gauged lengths (measured with a caliper) of visible endodontic files were measured in the native and filtered images. The software was able to segment the endodontic files in both the samples and similar dental radiographs. We observed median length differences of 0.52 mm (SD: 2.76 mm) and 0.46 mm (SD: 2.33 mm) in the native and post-segmentation images, respectively. Pearson's correlation test revealed a significant correlation of 0.915 between the true length and the measured length in the native images; the corresponding correlation for the filtered images was 0.97 (p=0.0001). The algorithm can be used to automatically detect and measure the lengths of endodontic files in digital dental radiographs. Copyright © 2016 Elsevier Ltd. All rights reserved.
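
    The frequency-domain pre-segmentation idea described above can be sketched as FFT masking: transform the image, keep only a chosen frequency band, and transform back. The band limits below are illustrative assumptions, not the band derived from the endodontic-file spectra in the study:

```python
# Radial band-pass filtering of an image via FFT masking (band limits illustrative).
import numpy as np

img = np.random.default_rng(0).random((64, 64))   # stand-in for a radiograph
F = np.fft.fftshift(np.fft.fft2(img))             # centered 2D spectrum

yy, xx = np.indices(img.shape)
r = np.hypot(yy - img.shape[0] / 2, xx - img.shape[1] / 2)
mask = (r >= 8) & (r <= 24)                       # keep only this radial band

filtered = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
print(filtered.shape)  # (64, 64)
```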

  13. Automatic switching matrix

    DOEpatents

    Schlecht, Martin F.; Kassakian, John G.; Caloggero, Anthony J.; Rhodes, Bruce; Otten, David; Rasmussen, Neil

    1982-01-01

    An automatic switching matrix that includes an apertured matrix board containing a matrix of wires that can be interconnected at each aperture. Each aperture has associated therewith a conductive pin which, when fully inserted into the associated aperture, effects electrical connection between the wires within that particular aperture. Means is provided for automatically inserting the pins in a determined pattern and for removing all the pins to permit other interconnecting patterns.

  14. A Machine Vision System for Automatically Grading Hardwood Lumber - (Industrial Metrology)

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas T. Drayer; Philip A. Araman; Robert L. Brisbon

    1992-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  15. Bayesian Quantification of Contrast-Enhanced Ultrasound Images With Adaptive Inclusion of an Irreversible Component.

    PubMed

    Rizzo, Gaia; Tonietto, Matteo; Castellaro, Marco; Raffeiner, Bernd; Coran, Alessandro; Fiocco, Ugo; Stramare, Roberto; Grisan, Enrico

    2017-04-01

    Contrast-enhanced ultrasound (CEUS) is a sensitive imaging technique for assessing tissue vascularity and can be particularly useful in the early detection and grading of arthritis. In a recent study we showed that a Gamma-variate can accurately quantify synovial perfusion and is flexible enough to describe many heterogeneous patterns. However, in some cases the kinetics are so heterogeneous that even the Gamma model does not properly describe the curve, producing a high number of outliers. In this work we apply to CEUS data the single compartment recirculation (SCR) model, which explicitly takes into account the trapping of the microbubble contrast agent by adding to the single Gamma-variate model its integral. The SCR model, originally proposed for dynamic-susceptibility magnetic resonance imaging, is solved here at the pixel level within a Bayesian framework using Variational Bayes (VB). We also include the automatic relevance determination (ARD) algorithm to automatically infer the model complexity (SCR vs. Gamma model) from the data. We demonstrate that the inclusion of trapping best describes the CEUS patterns in 50% of the pixels, with the other 50% best fitted by a single Gamma-variate. These results highlight the necessity of using ARD to automatically exclude the irreversible component where it is not supported by the data. VB with ARD returns precise estimates for the majority of the kinetics (88% of pixels) in a limited computational time (on average, 3.6 min per subject). Moreover, the impact of the additional trapping component was evaluated for the differentiation of rheumatoid and non-rheumatoid patients by means of a support vector machine classifier with backward feature selection. The results show that the trapping parameter is always present in the selected feature set and improves the classification.
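The SCR extension described above, a Gamma-variate plus a scaled running integral of itself to model trapped microbubbles, can be sketched as follows. All parameter names and values are illustrative, and the Bayesian VB/ARD fitting itself is not reproduced here:

```python
import numpy as np

def gamma_variate(t, A, t0, alpha, beta):
    """Gamma-variate bolus kinetic: zero before arrival time t0."""
    s = np.clip(t - t0, 0.0, None)
    return A * s**alpha * np.exp(-s / beta)

def scr_model(t, A, t0, alpha, beta, k_trap):
    """Single compartment recirculation (SCR) curve: the Gamma-variate
    plus k_trap times its running integral, modelling the irreversible
    (trapped) component. With k_trap = 0 it reduces to the plain Gamma
    model that ARD would select when trapping is not supported."""
    g = gamma_variate(t, A, t0, alpha, beta)
    dt = np.diff(t, prepend=t[0])
    trapped = k_trap * np.cumsum(g * dt)   # simple rectangle-rule integral
    return g + trapped

t = np.linspace(0, 60, 601)                # seconds
curve = scr_model(t, A=1.0, t0=5.0, alpha=2.0, beta=4.0, k_trap=0.05)
```

The trapped component accumulates to a late-time plateau, which is what distinguishes an SCR-type kinetic from a Gamma-variate that decays back toward zero.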

  16. Full automatic fiducial marker detection on coil arrays for accurate instrumentation placement during MRI guided breast interventions

    NASA Astrophysics Data System (ADS)

    Filippatos, Konstantinos; Boehler, Tobias; Geisler, Benjamin; Zachmann, Harald; Twellmann, Thorsten

    2010-02-01

    With its high sensitivity, dynamic contrast-enhanced MR imaging (DCE-MRI) of the breast is today one of the first-line tools for early detection and diagnosis of breast cancer, particularly in the dense breasts of young women. However, many relevant findings are very small or occult on targeted ultrasound images or mammography, so that MRI guided biopsy is the only option for a precise histological work-up [1]. State-of-the-art software tools for computer-aided diagnosis of breast cancer in DCE-MRI data also offer means for image-based planning of biopsy interventions. One step in the MRI guided biopsy workflow is the alignment of the patient position with the preoperative MR images. In these images, the location and orientation of the coil localization unit can be inferred from a number of fiducial markers, which for this purpose have to be manually or semi-automatically detected by the user. In this study, we propose a method for precise, fully automatic localization of fiducial markers, on the basis of which a virtual localization unit can subsequently be placed in the image volume for the purpose of determining the parameters for needle navigation. The method is based on adaptive thresholding for separating breast tissue from background, followed by rigid registration of marker templates. In an evaluation of 25 clinical cases comprising 4 different commercial coil array models and 3 different MR imaging protocols, the method yielded a sensitivity of 0.96 at a false positive rate of 0.44 markers per case. The mean distance deviation between detected fiducial centers and ground truth information provided by a radiologist was 0.94 mm.
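The first stage above, separating breast tissue from image background by thresholding, can be illustrated with a global Otsu threshold. This is a simplification: the paper uses an adaptive scheme, and the marker-template registration step is not reproduced here.

```python
import numpy as np

def otsu_threshold(image):
    """Global Otsu threshold: pick the intensity that maximises the
    between-class variance of the resulting foreground/background split.
    A simple stand-in for the adaptive thresholding step above."""
    hist, edges = np.histogram(image.ravel(), bins=256)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                      # background weight
    mu = np.cumsum(p * centers)            # background cumulative mean
    mu_t = mu[-1]                          # global mean
    w1 = 1 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_t * w0[valid] - mu[valid])**2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]

# Synthetic image: dark background, bright "tissue" block
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
t = otsu_threshold(img)
mask = img > t
```

On this two-valued test image the threshold falls between the modes, so the mask recovers exactly the bright block.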

  17. Recent Development in Optical Chemical Sensors Coupling with Flow Injection Analysis

    PubMed Central

    Ojeda, Catalina Bosch; Rojas, Fuensanta Sánchez

    2006-01-01

    Optical techniques for chemical analysis are well established, and sensors based on these techniques are now attracting considerable attention because of their importance in applications such as environmental monitoring, biomedical sensing, and industrial process control. On the other hand, flow injection analysis (FIA) is well suited to the rapid analysis of microliter-volume samples and can be interfaced directly to the chemical process. FIA has become a widespread automatic analytical method for several reasons, chiefly the simplicity and low cost of the setups, their versatility, and their ease of assembly. In this paper, an overview of flow injection determinations using optical chemical sensors is provided, and instrumentation, sensor design, and applications are discussed. This work summarizes the most relevant manuscripts from 1980 to date that refer to analysis using optical chemical sensors in FIA.

  18. Pitch-informed solo and accompaniment separation towards its use in music education applications

    NASA Astrophysics Data System (ADS)

    Cano, Estefanía; Schuller, Gerald; Dittmar, Christian

    2014-12-01

    We present a system for the automatic separation of solo instruments and music accompaniment in polyphonic music recordings. Our approach is based on a pitch detection front-end and a tone-based spectral estimation. We assess the plausibility of using sound separation technologies to create practice material in a music education context. To better understand the sound separation quality requirements in music education, a listening test was conducted to determine the most perceptually relevant signal distortions that need to be improved. Results from the listening test show that solo and accompaniment tracks pose different quality requirements and should be optimized differently. We propose and evaluate algorithm modifications to better understand their effects on objective perceptual quality measures. Finally, we outline possible ways of optimizing our separation approach to better suit the requirements of music education applications.

  19. Assessment of the Denver Regional Transportation District's automatic vehicle location system

    DOT National Transportation Integrated Search

    2000-08-01

    The purpose of this evaluation was to determine how well the Denver Regional Transportation District's (RTD) automatic vehicle location (AVL) system achieved its major objectives of improving scheduling efficiency, improving the ability of dispatcher...

  20. Automatic Welding of Stainless Steel Tubing

    NASA Technical Reports Server (NTRS)

    Clautice, W. E.

    1978-01-01

    To determine whether the use of automatic welding would allow the radiographic inspection requirement to be reduced, and thereby reduce fabrication costs, a series of welding tests was performed. In these tests an automatic welder was used on stainless steel tubing of 1/2, 3/4, and 1/2 inch diameter. The optimum parameters were investigated to determine how much variation from the optimum machine settings could be tolerated while still producing a good quality weld. The process variables studied were the welding amperage, the revolutions per minute as a function of the circumferential weld travel speed, and the shielding gas flow. The investigation showed that close control of the process variables, in conjunction with a thorough visual inspection of welds, can be relied upon as an acceptable quality assurance procedure, permitting the radiographic inspection to be reduced by a large percentage when using the automatic process.

  1. Fully automatic registration and segmentation of first-pass myocardial perfusion MR image sequences.

    PubMed

    Gupta, Vikas; Hendriks, Emile A; Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2010-11-01

    Derivation of diagnostically relevant parameters from first-pass myocardial perfusion magnetic resonance images involves the tedious and time-consuming manual segmentation of the myocardium in a large number of images. To reduce the manual interaction and expedite the perfusion analysis, we propose an automatic registration and segmentation method for the derivation of perfusion linked parameters. A complete automation was accomplished by first registering misaligned images using a method based on independent component analysis, and then using the registered data to automatically segment the myocardium with active appearance models. We used 18 perfusion studies (100 images per study) for validation in which the automatically obtained (AO) contours were compared with expert drawn contours on the basis of point-to-curve error, Dice index, and relative perfusion upslope in the myocardium. Visual inspection revealed successful segmentation in 15 out of 18 studies. Comparison of the AO contours with expert drawn contours yielded 2.23 ± 0.53 mm and 0.91 ± 0.02 as point-to-curve error and Dice index, respectively. The average difference between manually and automatically obtained relative upslope parameters was found to be statistically insignificant (P = .37). Moreover, the analysis time per slice was reduced from 20 minutes (manual) to 1.5 minutes (automatic). We proposed an automatic method that significantly reduced the time required for analysis of first-pass cardiac magnetic resonance perfusion images. The robustness and accuracy of the proposed method were demonstrated by the high spatial correspondence and statistically insignificant difference in perfusion parameters, when AO contours were compared with expert drawn contours. Copyright © 2010 AUR. Published by Elsevier Inc. All rights reserved.
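The Dice index used above to compare automatically obtained and expert-drawn contours is straightforward to compute from binary masks; a minimal sketch:

```python
import numpy as np

def dice_index(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), the overlap measure used to compare
    automatic and expert myocardium segmentations."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0

# Two overlapping 6x6 squares, each 36 pixels, overlapping in 25 pixels
auto = np.zeros((10, 10), bool);   auto[2:8, 2:8] = True
expert = np.zeros((10, 10), bool); expert[3:9, 3:9] = True
print(dice_index(auto, expert))    # 2*25/(36+36) ≈ 0.694
```

A Dice index of 0.91, as reported above, therefore indicates near-complete spatial overlap between the automatic and expert contours.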

  2. Feasibility of automatic evaluation of clinical rules in general practice.

    PubMed

    Opondo, Dedan; Visscher, Stefan; Eslami, Saied; Medlock, Stephanie; Verheij, Robert; Korevaar, Joke C; Abu-Hanna, Ameen

    2017-04-01

    To assess the extent to which clinical rules (CRs) can be implemented for automatic evaluation of quality of care in general practice. We assessed 81 clinical rules (CRs), adapted from a subset of the Assessing Care of Vulnerable Elders (ACOVE) clinical rules, against the Dutch College of General Practitioners (NHG) data model. Each CR was analyzed using the Logical Elements Rule Method (LERM), a stepwise method of assessing and formalizing clinical rules for decision support. Clinical rules that satisfied the criteria outlined in LERM were judged to be implementable for automatic evaluation in general practice. Thirty-three of 81 (40.7%) Dutch-translated ACOVE clinical rules can be automatically evaluated in electronic medical record systems: 7/7 CRs (100%) in the domain of diabetes, 9/17 (52.9%) in medication use, 5/10 (50%) in depression care, 3/6 (50%) in nutrition care, 6/13 (46.1%) in dementia care, 1/6 (16.6%) in end-of-life care, 2/13 (15.3%) in continuity of care, and 0/9 (0%) in fall-related care. Lack of documentation of care activities between primary and secondary health facilities and ambiguous formulation of clinical rules were the main reasons for the inability to automate the clinical rules. Approximately two-fifths of the primary care Dutch ACOVE-based clinical rules can be automatically evaluated. Clear definition of clinical rules, improved GP database design, and electronic linkage of primary and secondary healthcare facilities can improve the prospects for automatic assessment of quality of care. These findings are especially relevant because the Netherlands has very high automation of primary care. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Are Alcohol Expectancies Associations? Comment on Moss and Albery (2009)

    ERIC Educational Resources Information Center

    Wiers, Reinout W.; Stacy, Alan W.

    2010-01-01

    Moss and Albery (2009) presented a dual-process model of the alcohol-behavior link, integrating alcohol expectancy and alcohol myopia theory. Their integrative theory rests on a number of assumptions including, first, that alcohol expectancies are associations that can be activated automatically by an alcohol-relevant context, and second, that…

  4. intelligentCAPTURE 1.0 Adds Tables of Content to Library Catalogues and Improves Retrieval.

    ERIC Educational Resources Information Center

    Hauer, Manfred; Simedy, Walton

    2002-01-01

    Describes an online library catalog that was developed for an Austrian scientific library that includes table of contents in addition to the standard bibliographic information in order to increase relevance for searchers. Discusses the technology involved, including OCR (Optical Character Recognition) and automatic indexing techniques; weighted…

  5. The Psychobiology of Aggression and Violence: Bioethical Implications

    ERIC Educational Resources Information Center

    Diaz, Jose Luis

    2010-01-01

    Bioethics is concerned with the moral aspects of biology and medicine. The bioethical relevance of aggression and violence is clear, as very different moral and legal responsibilities may apply depending on whether aggression and violence are forms of behaviour that are innate or acquired, deliberate or automatic or not, or understandable and…

  6. Information Storage and Retrieval, Scientific Report No. ISR-15.

    ERIC Educational Resources Information Center

    Salton, Gerard

    Several algorithms were investigated which would allow a user to interact with an automatic document retrieval system by requesting relevance judgments on selected sets of documents. Two viewpoints were taken in evaluation. One measured the movement of queries toward the optimum query as defined by Rocchio; the other measured the retrieval…

  7. Design and Implementation of an Intranet Dashboard

    PubMed Central

    Wolpin, SE

    2005-01-01

    Healthcare organizations are complex systems and are well served by efficient feedback mechanisms. Many organizations have invested in data warehouses; however, there are few tools for automatically extracting and delivering relevant measures to decision makers. This research study resulted in the design and implementation of an intranet dashboard linked to a data warehouse. PMID:16779440

  8. A hypertext system that learns from user feedback

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie

    1994-01-01

    Retrieving specific information from large amounts of documentation is not an easy task. It could be facilitated if information relevant to the current problem-solving context could be automatically supplied to the user. As a first step towards this goal, we have developed an intelligent hypertext system called CID (Computer Integrated Documentation). Besides providing a hypertext interface for browsing large documents, the CID system automatically acquires and reuses the context in which previous searches were appropriate. This mechanism utilizes on-line user information requirements and relevance feedback either to reinforce current indexing in case of success or to generate new knowledge in case of failure. Thus, the user continually augments and refines the intelligence of the retrieval system. This allows the CID system to provide helpful responses, based on previous usage of the documentation, and to improve its performance over time. We successfully tested the CID system with users of the Space Station Freedom requirements documents. We are currently extending CID to other application domains (Space Shuttle operations documents, airplane maintenance manuals, and on-line training). We are also exploring the potential commercialization of this technique.

  9. Autonomous mental development with selective attention, object perception, and knowledge representation

    NASA Astrophysics Data System (ADS)

    Ban, Sang-Woo; Lee, Minho

    2008-04-01

    Knowledge-based clustering and autonomous mental development remain high-priority research topics, in which neural network learning techniques are used to achieve optimal performance. In this paper, we present a new framework that can automatically generate a relevance map from sensory data, representing knowledge about objects and inferring new knowledge about novel objects. The proposed model is based on an understanding of the visual 'what' pathway in the brain. A stereo saliency map model can selectively decide salient object areas by additionally considering a local symmetry feature. The incremental object perception model builds clusters for the construction of an ontology map in the color and form domains in order to perceive an arbitrary object, implemented by the growing fuzzy topology adaptive resonance theory (GFTART) network. Log-polar transformed color and form features of a selected object are used as inputs to the GFTART. The clustered information is relevant for describing specific objects, and the proposed model can automatically infer an unknown object by using the learned information. Experimental results with real data have demonstrated the validity of this approach.

  10. Automatic rock detection for in situ spectroscopy applications on Mars

    NASA Astrophysics Data System (ADS)

    Mahapatra, Pooja; Foing, Bernard H.

    A novel algorithm for rock detection has been developed for effectively utilising Mars rovers and enabling autonomous selection of target rocks that require close-contact spectroscopic measurements. The algorithm demarcates small rocks in terrain images as seen by cameras on a Mars rover during a traverse. This information may be used by the rover to select geologically relevant sample rocks and (in conjunction with a rangefinder) to pick up target samples using a robotic arm for automatic in situ determination of rock composition and mineralogy using, for example, a Raman spectrometer. Determining rock samples of specific interest within the region without physically approaching them significantly reduces time, power, and risk. Input colour images are converted to greyscale for intensity analysis. Bilateral filtering is used for texture removal while preserving rock boundaries. Unsharp masking is used for contrast enhancement. Sharp contrasts in intensity are detected using Canny edge detection, with thresholds calculated from the image obtained after contrast-limited adaptive histogram equalisation of the unsharp-masked image. Scale-space representations are then generated by convolving this image with a Gaussian kernel. A scale-invariant blob detector (Laplacian of the Gaussian, LoG) detects blobs independently of their sizes and therefore requires a multi-scale approach with automatic scale selection. The scale-space blob detector consists of convolving the Canny edge-detected image with a scale-normalised LoG at several scales and finding the maxima of the squared LoG response in scale-space. After the extraction of local intensity extrema, the intensity profiles along rays going out of each local extremum are investigated. An ellipse is fitted to the region determined by significant changes in the intensity profiles. The fitted ellipses are overlaid on the original Mars terrain image for a visual estimation of the rock detection accuracy, and the number of ellipses is counted. Since geometry and illumination have the least effect on small rocks, the proposed algorithm is effective in detecting small rocks (or bigger rocks at larger distances from the camera) that occupy a small fraction of the image pixels. Acknowledgements: The first author would like to express her gratitude to the European Space Agency (ESA/ESTEC) and the International Lunar Exploration Working Group (ILEWG) for their support of this work.
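The multi-scale core of this pipeline (a scale-normalised LoG blob detector with automatic scale selection) can be sketched with SciPy. The scales, threshold, and synthetic test image below are illustrative; the bilateral/Canny/ellipse-fitting stages are omitted:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def log_blobs(image, sigmas, threshold):
    """Detect bright blobs as local maxima of the scale-normalised
    negative Laplacian-of-Gaussian response over space and scale.
    Returns (row, col, sigma) triples; sigma is the selected scale."""
    stack = np.array([s**2 * -gaussian_laplace(image, s) for s in sigmas])
    # A detection is a strict local maximum in the 3-D (scale, y, x) stack
    peaks = (stack == maximum_filter(stack, size=3)) & (stack > threshold)
    si, yi, xi = np.nonzero(peaks)
    return [(yi[i], xi[i], sigmas[si[i]]) for i in range(len(si))]

# Synthetic terrain: one Gaussian "rock" (std 4 px) on a flat background
yy, xx = np.mgrid[:64, :64]
img = np.exp(-((yy - 32)**2 + (xx - 20)**2) / (2 * 4.0**2))
blobs = log_blobs(img, sigmas=[2.0, 4.0, 6.0], threshold=0.1)
```

The scale normalisation (the `s**2` factor) is what makes responses comparable across scales: for a Gaussian blob of standard deviation s, the normalised response peaks at sigma ≈ s, so the winning scale estimates the rock's size.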

  11. Intentional and automatic numerical processing as predictors of mathematical abilities in primary school children

    PubMed Central

    Pina, Violeta; Castillo, Alejandro; Cohen Kadosh, Roi; Fuentes, Luis J.

    2015-01-01

    Previous studies have suggested that numerical processing relates to mathematical performance, but this relationship appears more evident for intentional than for automatic numerical processing. In the present study we assessed the relationship between the two types of numerical processing and specific mathematical abilities in a sample of 109 children in grades 1–6. Participants completed a wide range of mathematical tests and also performed both a numerical and a size comparison task. The results showed that numerical processing related to mathematical performance only when inhibitory control was involved in the comparison tasks. Concretely, we found that intentional numerical processing, as indexed by the numerical distance effect in the numerical comparison task, was related to mathematical reasoning skills only when the task-irrelevant dimension (physical size) was incongruent; whereas automatic numerical processing, indexed by the congruency effect in the size comparison task, was related to mathematical calculation skills only when digits were separated by a small distance. The observed double dissociation highlights the relevance of both intentional and automatic numerical processing to mathematical skills, but only when inhibitory control is also involved. PMID:25873909

  12. From Memory to Attitude: The Neurocognitive Process beyond Euthanasia Acceptance.

    PubMed

    Enke, Martin; Meyer, Patric; Flor, Herta

    2016-01-01

    Numerous questionnaire studies on attitudes towards euthanasia produced conflicting results, precluding any general conclusion. This might be due to the fact that human behavior can be influenced by automatically triggered attitudes, which represent ingrained associations in memory and cannot be assessed by standard questionnaires, but require indirect measures such as reaction times (RT) or electroencephalographic recording (EEG). Event related potentials (ERPs) of the EEG and RT during an affective priming task were assessed to investigate the impact of automatically triggered attitudes and were compared to results of an explicit questionnaire. Explicit attitudes were ambivalent. Reaction time data showed neither positive nor negative associations towards euthanasia. ERP analyses revealed an N400 priming effect with lower mean amplitudes when euthanasia was associated with negative words. The euthanasia-related modulation of the N400 component shows an integration of the euthanasia object in negatively valenced associative neural networks. The integration of all measures suggests a bottom-up process of attitude activation, where automatically triggered negative euthanasia-relevant associations can become more ambiguous with increasing time in order to regulate the bias arising from automatic processes. These data suggest that implicit measures may make an important contribution to the understanding of euthanasia-related attitudes.

  13. From Memory to Attitude: The Neurocognitive Process beyond Euthanasia Acceptance

    PubMed Central

    Enke, Martin; Meyer, Patric; Flor, Herta

    2016-01-01

    Numerous questionnaire studies on attitudes towards euthanasia produced conflicting results, precluding any general conclusion. This might be due to the fact that human behavior can be influenced by automatically triggered attitudes, which represent ingrained associations in memory and cannot be assessed by standard questionnaires, but require indirect measures such as reaction times (RT) or electroencephalographic recording (EEG). Event related potentials (ERPs) of the EEG and RT during an affective priming task were assessed to investigate the impact of automatically triggered attitudes and were compared to results of an explicit questionnaire. Explicit attitudes were ambivalent. Reaction time data showed neither positive nor negative associations towards euthanasia. ERP analyses revealed an N400 priming effect with lower mean amplitudes when euthanasia was associated with negative words. The euthanasia-related modulation of the N400 component shows an integration of the euthanasia object in negatively valenced associative neural networks. The integration of all measures suggests a bottom-up process of attitude activation, where automatically triggered negative euthanasia-relevant associations can become more ambiguous with increasing time in order to regulate the bias arising from automatic processes. These data suggest that implicit measures may make an important contribution to the understanding of euthanasia-related attitudes. PMID:27088244

  14. [Trans healthcare : Between depsychopathologization and a needs-based treatment of accompanying mental disorders].

    PubMed

    Nieder, T O; Güldenring, A; Köhler, A; Briken, P

    2017-05-01

    Historically, the function of psychiatry and psychotherapy in the healthcare treatment of transsexualism has been impaired by the basic assumption that non-conforming gender experiences and behavior are automatically considered as expressions of psychopathology. In line with revision of the diagnostic criteria and changing standards of care and treatment recommendations, the therapeutic relationship between mental healthcare professionals and transgender individuals is critically discussed aiming at providing a needs-based psychiatric and psychotherapeutic treatment and a patient-centered approach for trans persons. Literature search focusing on the prevalence of trans persons and the presence of accompanying mental disorders. Discussion of professional experiences with mental healthcare of trans persons. Trans persons without clinically relevant mental distress do not need any kind of psychiatric or psychotherapeutic treatment; however, trans people with clinically relevant mental impairment need safe access to mental healthcare without linking the trans identity a priori to a mental disorder. In order to ensure individual trans healthcare in the long term, the therapeutic relationship should take into account both the body knowledge and self-determination of trans persons as well as the clinical expertise of mental healthcare professionals.

  15. Automatic Refraction: How It Is Done: Some Clinical Results

    ERIC Educational Resources Information Center

    Safir, Aran; And Others

    1973-01-01

    Compared are methods of determining the visual refraction needs of young children or other unreliable observers by means of retinoscopy or the Ophthalmetron, an automatic instrument that can be operated by a technician with no knowledge of refraction. (DB)

  16. Response variability in rapid automatized naming predicts reading comprehension

    PubMed Central

    Li, James J.; Cutting, Laurie E.; Ryan, Matthew; Zilioli, Monica; Denckla, Martha B.; Mahone, E. Mark

    2009-01-01

    A total of 37 children ages 8 to 14 years, screened for word-reading difficulties (23 with attention-deficit/hyperactivity disorder, ADHD; 14 controls) completed oral reading and rapid automatized naming (RAN) tests. RAN trials were segmented into pause and articulation time and intraindividual variability. There were no group differences on reading or RAN variables. Color- and letter-naming pause times and number-naming articulation time were significant predictors of reading fluency. In contrast, number and letter pause variability were predictors of comprehension. Results support analysis of subcomponents of RAN and add to literature emphasizing intraindividual variability as a marker for response preparation, which has relevance to reading comprehension. PMID:19221923

  17. Automatic Generation of Mashups for Personalized Commerce in Digital TV by Semantic Reasoning

    NASA Astrophysics Data System (ADS)

    Blanco-Fernández, Yolanda; López-Nores, Martín; Pazos-Arias, José J.; Martín-Vicente, Manuela I.

    The evolution of information technologies is consolidating recommender systems as essential tools in e-commerce. To date, these systems have focused on discovering the items that best match the preferences, interests and needs of individual users, to end up listing those items by decreasing relevance in some menus. In this paper, we propose extending the current scope of recommender systems to better support trading activities, by automatically generating interactive applications that provide the users with personalized commercial functionalities related to the selected items. We explore this idea in the context of Digital TV advertising, with a system that brings together semantic reasoning techniques and new architectural solutions for web services and mashups.

  18. Using the Weighted Keyword Model to Improve Information Retrieval for Answering Biomedical Questions

    PubMed Central

    Yu, Hong; Cao, Yong-gang

    2009-01-01

    Physicians ask many complex questions during the patient encounter. Information retrieval systems that can provide immediate and relevant answers to these questions can be invaluable aids to the practice of evidence-based medicine. In this study, we first automatically identify topic keywords from ad hoc clinical questions with a Conditional Random Field model trained over thousands of manually annotated clinical questions. We then report on a linear model that assigns query weights based on their automatically identified semantic roles: topic keywords, domain-specific terms, and their synonyms. Our evaluation shows that this weighted keyword model improves information retrieval from the Text Retrieval Conference Genomics track data. PMID:21347188
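The linear weighting idea, giving automatically identified topic keywords more retrieval weight than domain terms or synonyms, can be sketched as a toy scorer. The weights, roles, and example terms below are invented for illustration and are not the paper's trained values:

```python
# Role weights (illustrative values, not the paper's learned weights)
WEIGHTS = {"topic": 3.0, "domain": 2.0, "synonym": 1.0}

def score(document_tokens, weighted_query):
    """Score a document by summing the role-based weight of every
    query term that appears in the document's token set."""
    tokens = set(document_tokens)
    return sum(WEIGHTS[role] for term, role in weighted_query if term in tokens)

# Hypothetical question terms with CRF-assigned roles
query = [("statin", "topic"), ("myopathy", "topic"),
         ("hmg-coa", "domain"), ("muscle pain", "synonym")]
doc = "statin therapy associated myopathy and muscle weakness".split()
print(score(doc, query))   # two topic matches: 3 + 3 = 6.0
```

Documents matching topic keywords outrank those matching only synonyms, which is the intuition behind the reported retrieval improvement.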

  19. Using the weighted keyword model to improve information retrieval for answering biomedical questions.

    PubMed

    Yu, Hong; Cao, Yong-Gang

    2009-03-01

    Physicians ask many complex questions during the patient encounter. Information retrieval systems that can provide immediate and relevant answers to these questions can be invaluable aids to the practice of evidence-based medicine. In this study, we first automatically identify topic keywords from ad hoc clinical questions with a Conditional Random Field model trained over thousands of manually annotated clinical questions. We then report on a linear model that assigns query weights based on their automatically identified semantic roles: topic keywords, domain-specific terms, and their synonyms. Our evaluation shows that this weighted keyword model improves information retrieval from the Text Retrieval Conference Genomics track data.

  20. Is gaze following purely reflexive or goal-directed instead? Revisiting the automaticity of orienting attention by gaze cues.

    PubMed

    Ricciardelli, Paola; Carcagno, Samuele; Vallar, Giuseppe; Bricolo, Emanuela

    2013-01-01

    Distracting gaze has been shown to elicit automatic gaze following. However, it is still debated whether the effects of perceived gaze are a simple automatic spatial orienting response or are instead sensitive to the context (i.e. goals and task demands). In three experiments, we investigated the conditions under which gaze following occurs. Participants were instructed to saccade towards one of two lateral targets. A face distracter, always present in the background, could gaze towards: (a) a task-relevant target ("matching" goal-directed gaze shift), congruent or incongruent with the instructed direction; (b) a task-irrelevant target, orthogonal to the one instructed ("non-matching" goal-directed gaze shift); or (c) an empty spatial location (no goal-directed gaze shift). Eye movement recordings showed faster saccadic latencies in correct trials in congruent conditions, especially when the distracting gaze shift occurred before the instruction to make a saccade. Interestingly, while participants made a higher proportion of gaze-following errors (i.e. errors in the direction of the distracting gaze) in the incongruent conditions when the distracter's gaze shift preceded the instruction onset, indicating automatic gaze following, they never followed the distracting gaze when it was directed towards an empty location or a stimulus that was never the target. Taken together, these findings suggest that gaze following is likely to be a product of both automatic and goal-driven orienting mechanisms.

  1. Construction, implementation and testing of an image identification system using computer vision methods for fruit flies with economic importance (Diptera: Tephritidae).

    PubMed

    Wang, Jiang-Ning; Chen, Xiao-Lin; Hou, Xin-Wen; Zhou, Li-Bing; Zhu, Chao-Dong; Ji, Li-Qiang

    2017-07-01

    Many species of Tephritidae damage fruit, which can negatively impact international fruit trade. Automatic or semi-automatic identification of fruit flies is greatly needed for diagnosing causes of damage and for quarantine protocols for economically relevant insects. A fruit fly image identification system named AFIS1.0 has been developed using 74 species belonging to six genera, which include the majority of pests in the Tephritidae. The system combines automated image identification and manual verification, balancing operability and accuracy. AFIS1.0 integrates image analysis and an expert system into a content-based image retrieval framework. In the automatic identification module, AFIS1.0 returns candidate identification results. Users can then make a manual selection by comparing unidentified images with a subset of images corresponding to the automatic identification result. The system uses Gabor surface features in automated identification and yielded an overall classification success rate of 87% to the species level in the Independent Multi-part Image Automatic Identification Test. The system is useful for users with or without specific expertise on Tephritidae in the task of rapid and effective identification of fruit flies, and it brings the application of computer vision technology to fruit fly recognition much closer to production level. © 2016 Society of Chemical Industry.
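    A minimal sketch of Gabor texture features of the kind mentioned above: a small bank of oriented Gabor kernels is applied to an image and the mean response magnitude per orientation forms a feature vector. Kernel parameters and the orientation count are assumptions for illustration, not AFIS1.0's actual settings:

```python
import numpy as np

# Gabor kernel parameters below are assumptions for illustration only.
def gabor_kernel(size=15, theta=0.0, lam=6.0, sigma=3.0, gamma=0.5):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    # Real part of a Gabor filter: Gaussian envelope times a cosine carrier.
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, orientations=4):
    """Mean absolute filter response per orientation -> small texture feature vector."""
    feats = []
    for k in range(orientations):
        kern = gabor_kernel(theta=k * np.pi / orientations)
        # Circular convolution via FFT keeps the sketch short.
        resp = np.abs(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern, img.shape)))
        feats.append(resp.mean())
    return np.array(feats)

rng = np.random.default_rng(0)
feats = gabor_features(rng.random((64, 64)))
print(feats.shape)  # (4,)
```

    A production system would use many more scales and orientations and feed the vector to a classifier.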

  2. The effect of psychological distance on automatic goal contagion

    PubMed Central

    Wessler, Janet; Hansen, Jochim

    2016-01-01

    We investigated how psychological distance influences goal contagion (the extent to which people automatically adopt another person’s goals). On the basis of construal-level theory, we predicted people would be more prone to goal contagion when primed with psychological distance (vs. closeness) because they would construe the other person’s behavior in terms of its underlying goal. Alternatively, we predicted people primed with psychological closeness (vs. distance) would be more prone to goal contagion because closeness may increase the personal relevance of another’s goals – a process not mediated by construal level. In two preregistered studies, participants read about a student whose behavior implied either an academic or a social goal. We manipulated (a) participants’ level of mental construal with a mind-set task (Study 1) and (b) their social distance from another person who showed academic or social behaviors (Study 2). We measured performance on an anagram task as an indicator of academic goal contagion. For Study 1, we predicted that participants reading about academic (vs. social) behaviors would show a better anagram performance, especially when primed with an abstract mind-set. For Study 2, we predicted that construal level and relevance effects might cancel each other out, because distance triggers both high-level construal and less relevance. In contrast to the construal-level hypothesis, the mind-set manipulation did not affect goal contagion in Study 1. In accordance with the relevance hypothesis, psychological proximity increased goal contagion in Study 2. We discuss how the findings relate to previous findings on goal contagion and imitation. PMID:29098177

  3. Automation of the Image Analysis for Thermographic Inspection

    NASA Technical Reports Server (NTRS)

    Plotnikov, Yuri A.; Winfree, William P.

    1998-01-01

    Several data processing procedures for pulse thermal inspection require preliminary determination of an unflawed region. Typically, an initial analysis of the thermal images is performed by an operator to determine the locations of unflawed and defective areas. In the present work, an algorithm is developed for automatically determining a reference point corresponding to an unflawed region. Results are obtained for defects arbitrarily located in the inspection region. A comparison is presented of the distributions of derived values under correct and incorrect localization of the reference point. Different algorithms for automatic determination of the reference point are compared.

  4. A data reduction package for multiple object spectroscopy

    NASA Technical Reports Server (NTRS)

    Hill, J. M.; Eisenhamer, J. D.; Silva, D. R.

    1986-01-01

    Experience with fiber-optic spectrometers has demonstrated improvements in observing efficiency for clusters of 30 or more objects, improvements that must in turn be matched by increased data reduction capability. The Medusa Automatic Reduction System reduces data generated by multiobject spectrometers in the form of two-dimensional images containing 44 to 66 individual spectra, using both software and hardware improvements to efficiently extract the one-dimensional spectra. Attention is given to the ridge-finding algorithm for automatic location of the spectra in the CCD frame. Simultaneous extraction of calibration frames allows an automatic wavelength calibration routine to determine dispersion curves, and both line measurements and cross-correlation techniques are used to determine galaxy redshifts.

  5. QA-driven Guidelines Generation for Bacteriotherapy

    PubMed Central

    Pasche, Emilie; Teodoro, Douglas; Gobeill, Julien; Ruch, Patrick; Lovis, Christian

    2009-01-01

    PURPOSE: We propose a question-answering (QA) driven generation approach for automatic acquisition of structured rules that can be used in a knowledge authoring tool for antibiotic prescription guidelines management. METHODS: Rule generation is treated as a question-answering problem, where the parameters of the questions are known items of the rule (e.g. an infectious disease caused by a given bacterium) and answers (e.g. some antibiotics) are obtained by a question-answering engine. RESULTS: When looking for a drug given a pathogen and a disease, a top precision of 0.55 is obtained by the combination of the Boolean engine (PubMed) and the relevance-driven engine (easyIR), meaning that for more than half of our evaluation benchmark at least one of the recommended antibiotics was automatically acquired by the rule generation method. CONCLUSION: These results suggest that such an automatic text mining approach could provide a useful tool for guidelines management by improving knowledge update and discovery. PMID:20351908

  6. From gaze cueing to perspective taking: Revisiting the claim that we automatically compute where or what other people are looking at

    PubMed Central

    Bukowski, Henryk; Hietanen, Jari K.; Samson, Dana

    2015-01-01

    Two paradigms have shown that people automatically compute what or where another person is looking at. In the visual perspective-taking paradigm, participants judge how many objects they see; whereas, in the gaze cueing paradigm, participants identify a target. Unlike in the former task, in the latter task, the influence of what or where the other person is looking at is only observed when the other person is presented alone before the task-relevant objects. We show that this discrepancy across the two paradigms is not due to differences in visual settings (Experiment 1) or available time to extract the directional information (Experiment 2), but that it is caused by how attention is deployed in response to task instructions (Experiment 3). Thus, the mere presence of another person in the field of view is not sufficient to compute where/what that person is looking at, which qualifies the claimed automaticity of such computations. PMID:26924936

  7. Automatic and controlled components of judgment and decision making.

    PubMed

    Ferreira, Mario B; Garcia-Marques, Leonel; Sherman, Steven J; Sherman, Jeffrey W

    2006-11-01

    The categorization of inductive reasoning into largely automatic processes (heuristic reasoning) and controlled analytical processes (rule-based reasoning), put forward by dual-process approaches to judgment under uncertainty (e.g., K. E. Stanovich & R. F. West, 2000), has been primarily a matter of assumption, with a scarcity of direct empirical findings supporting it. The present authors use the process dissociation procedure (L. L. Jacoby, 1991) to provide convergent evidence validating a dual-process perspective on judgment under uncertainty based on the independent contributions of heuristic and rule-based reasoning. Process dissociations based on experimental manipulation of variables were derived from the most relevant theoretical properties typically used to contrast the two forms of reasoning: processing goals (Experiment 1), cognitive resources (Experiment 2), priming (Experiment 3), and formal training (Experiment 4). The results consistently support the authors' perspective. They conclude that judgment under uncertainty is neither a purely automatic nor a purely controlled process but reflects both processes, with each making independent contributions.

  8. Alignment effects in beer mugs: Automatic action activation or response competition?

    PubMed

    Roest, Sander A; Pecher, Diane; Naeije, Lilian; Zeelenberg, René

    2016-08-01

    Responses to objects with a graspable handle are faster when the response hand and handle orientation are aligned (e.g., a key press with the right hand is required and the object handle is oriented to the right) than when they are not aligned. This effect could be explained by automatic activation of specific motor programs when an object is viewed. Alternatively, the effect could be explained by competition at the response level. Participants performed a reach-and-grasp or reach-and-button-press action with their left or right hand in response to the color of a beer mug. The alignment effect did not vary as a function of the type of action. In addition, the alignment effect disappeared in a go/no-go version of the task. The same results were obtained when participants made upright/inverted decisions, so that object shape was task-relevant. Our results indicate that alignment effects are not due to automatic motor activation of the left or right limb.

  9. Social priming of hemispatial neglect affects spatial coding: Evidence from the Simon task.

    PubMed

    Arend, Isabel; Aisenberg, Daniela; Henik, Avishai

    2016-10-01

    In the Simon effect (SE), choice reactions are fast if the locations of the stimulus and the response correspond even though stimulus location is task-irrelevant; the SE therefore reflects the automatic processing of space. Priming of social concepts has been found to affect automatic processing in the Stroop effect. We investigated whether spatial coding as measured by the SE can be affected by the observer's mental state. We used two social priming manipulations of impairments: one involving spatial processing - hemispatial neglect (HN) - and another involving color perception - achromatopsia (ACHM). In two experiments, the SE was reduced in the "neglected" visual field (VF) under the HN, but not under the ACHM, manipulation. Our results show that spatial coding is sensitive to spatial representations that are not derived from task-relevant parameters but from the observer's cognitive state. These findings dispute stimulus-response interference models grounded on the idea of the automaticity of spatial processing.

  10. From gaze cueing to perspective taking: Revisiting the claim that we automatically compute where or what other people are looking at.

    PubMed

    Bukowski, Henryk; Hietanen, Jari K; Samson, Dana

    2015-09-14

    Two paradigms have shown that people automatically compute what or where another person is looking at. In the visual perspective-taking paradigm, participants judge how many objects they see; whereas, in the gaze cueing paradigm, participants identify a target. Unlike in the former task, in the latter task, the influence of what or where the other person is looking at is only observed when the other person is presented alone before the task-relevant objects. We show that this discrepancy across the two paradigms is not due to differences in visual settings (Experiment 1) or available time to extract the directional information (Experiment 2), but that it is caused by how attention is deployed in response to task instructions (Experiment 3). Thus, the mere presence of another person in the field of view is not sufficient to compute where/what that person is looking at, which qualifies the claimed automaticity of such computations.

  11. The impact of facial emotional expressions on behavioral tendencies in females and males

    PubMed Central

    Seidel, Eva-Maria; Habel, Ute; Kirschner, Michaela; Gur, Ruben C.; Derntl, Birgit

    2010-01-01

    Emotional faces communicate both the emotional state and behavioral intentions of an individual. They also activate behavioral tendencies in the perceiver, namely approach or avoidance. Here, we compared more automatic motor responses to more conscious rating responses to happy, sad, angry and disgusted faces in a healthy student sample. Happiness was associated with approach and anger with avoidance. However, behavioral tendencies in response to sadness and disgust were more complex. Sadness produced automatic approach but conscious withdrawal, probably influenced by interpersonal relations or personality. Disgust elicited withdrawal in the rating task, whereas no significant tendency emerged in the joystick task, probably driven by expression style. Based on our results, it is highly relevant to further explore actual reactions to emotional expressions and to differentiate between automatic and controlled processes, since emotional faces are used in various kinds of studies. Moreover, our results highlight the importance of gender-of-poser effects when applying emotional expressions as stimuli. PMID:20364933

  12. Multi-view information fusion for automatic BI-RADS description of mammographic masses

    NASA Astrophysics Data System (ADS)

    Narvaez, Fabián; Díaz, Gloria; Romero, Eduardo

    2011-03-01

    Most CBIR-based CAD systems (Content-Based Image Retrieval systems for Computer-Aided Diagnosis) identify potentially relevant lesions, but base their analysis on a single independent view. This article presents a CBIR framework which automatically describes mammographic masses with the BI-RADS lexicon, fusing information from the two mammographic views. After an expert selects a Region of Interest (RoI) in the two views, a CBIR strategy searches for similar masses in the database by automatically computing the Mahalanobis distance between shape and texture feature vectors of the mammograms. The strategy was assessed on a set of 400 cases, for which the suggested descriptions were compared with the ground truth provided by the database. Two information fusion strategies were evaluated, allowing a retrieval precision rate of 89.6% in the best scheme. Likewise, the best performance obtained for shape, margin and pathology description, using a ROC methodology, was AUC = 0.86, AUC = 0.72 and AUC = 0.85, respectively.
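    The Mahalanobis-distance retrieval step described above can be sketched as follows, using synthetic feature vectors in place of the real shape and texture features (the database size matches the 400 cases, but the six features are an assumption for illustration):

```python
import numpy as np

# Synthetic stand-ins for the 400-case feature database (6 features per mass).
rng = np.random.default_rng(1)
db = rng.random((400, 6))
cov_inv = np.linalg.inv(np.cov(db, rowvar=False))

def mahalanobis(q, x):
    d = q - x
    return float(np.sqrt(d @ cov_inv @ d))

def retrieve(query, k=5):
    # Rank all database entries by Mahalanobis distance to the query.
    dists = [mahalanobis(query, row) for row in db]
    return np.argsort(dists)[:k]

hits = retrieve(db[0])
print(hits[0])  # a query drawn from the database retrieves itself first -> 0
```

    Unlike Euclidean distance, the Mahalanobis distance accounts for correlations and differing scales among the features, which matters when mixing shape and texture descriptors.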

  13. CASAS: A tool for composing automatically and semantically astrophysical services

    NASA Astrophysics Data System (ADS)

    Louge, T.; Karray, M. H.; Archimède, B.; Knödlseder, J.

    2017-07-01

    Multiple astronomical datasets are available through the internet and the astrophysical Distributed Computing Infrastructure (DCI) called the Virtual Observatory (VO). Some scientific workflow technologies exist for retrieving and combining data from those sources. However, the selection of relevant services, the automation of workflow composition, and the lack of user-friendly platforms remain concerns. This paper presents CASAS, a tool for semantic web service composition in astrophysics. The tool proposes automatic composition of astrophysical web services, bringing a semantics-based, automatic composition of workflows; it widens the choice of services and eases the use of heterogeneous services. Semantic web service composition relies on ontologies for elaborating the composition; this work is based on the Astrophysical Services ONtology (ASON). ASON's structure is mostly inherited from VO service capabilities. Nevertheless, our approach is not limited to the VO and brings VO and non-VO services together without the need for premade recipes. CASAS is available for use through a simple web interface.

  14. Dynamic generation of a table of contents with consumer-friendly labels.

    PubMed

    Miller, Trudi; Leroy, Gondy; Wood, Elizabeth

    2006-01-01

    Consumers increasingly look to the Internet for health information, but available resources are too difficult for the majority to understand. Interactive tables of contents (TOCs) can help consumers access health information by providing an easy-to-understand structure. Using natural language processing and the Unified Medical Language System (UMLS), we have automatically generated TOCs for consumer health information. The TOCs are categorized according to consumer-friendly labels for the UMLS semantic types and semantic groups. Categorizing phrases by semantic types is significantly more correct and relevant. Greater correctness and relevance were achieved with documents that are difficult to read than with those at an easier reading level. Pruning TOCs to use categories that consumers favor further increases relevancy and correctness while reducing structural complexity.

  15. A User-Centered Approach to Adaptive Hypertext Based on an Information Relevance Model

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie; Chen, James

    1994-01-01

    Rapid and effective access to information in large electronic documentation systems can be facilitated if information relevant in an individual user's context can be automatically supplied to that user. However, most of this knowledge on contextual relevance is not found within the contents of documents; rather, it is established incrementally by users during information access. We propose a new model for interactively learning contextual relevance during information retrieval and incrementally adapting retrieved information to individual user profiles. The model, called a relevance network, records the relevance of references based on user feedback for specific queries and user profiles. It also generalizes such knowledge to later derive relevant references for similar queries and profiles. The relevance network lets users filter information by context of relevance. Compared to other approaches, it does not require any prior knowledge or training. More importantly, our approach to adaptivity is user-centered. It facilitates acceptance and understanding by users by giving them shared control over the adaptation without disturbing their primary task. Users easily control when to adapt and when to use the adapted system. Lastly, the model is independent of the particular application used to access information, and supports sharing of adaptations among users.

  16. Automatic Collision Avoidance Technology (ACAT)

    NASA Technical Reports Server (NTRS)

    Swihart, Donald E.; Skoog, Mark A.

    2007-01-01

    This document presents two views of the Automatic Collision Avoidance Technology (ACAT). One viewgraph presentation reviews the development and system design of ACAT. Two types of ACAT exist: Automatic Ground Collision Avoidance (AGCAS) and Automatic Air Collision Avoidance (AACAS). The AGCAS uses Digital Terrain Elevation Data (DTED) for mapping functions and uses navigation data to place the aircraft on the map. It then scans the DTED in front of and around the aircraft and uses the future aircraft trajectory (5g) to provide an automatic fly-up maneuver when required. The AACAS uses a data link to determine position and closing rate. It contains several canned maneuvers to avoid collision. Automatic maneuvers can occur at the last instant, and both aircraft maneuver when using the data link. The system can use a sensor in place of the data link. The second viewgraph presentation reviews the development of a flight test and an evaluation of the test. A review of the operation of the AGCAS and a comparison with a pilot's performance are given; the same review is given for the AACAS.

  17. MeSH Now: automatic MeSH indexing at PubMed scale via learning to rank.

    PubMed

    Mao, Yuqing; Lu, Zhiyong

    2017-04-17

    MeSH indexing is the task of assigning relevant MeSH terms based on a manual reading of scholarly publications by human indexers. The task is highly important for improving literature retrieval and many other scientific investigations in biomedical research. Unfortunately, given its manual nature, the process of MeSH indexing is both time-consuming (new articles are not indexed until 2 or 3 months later) and costly (approximately ten dollars per article). In response, automatic indexing by computers has been previously proposed and attempted but remains challenging. In order to advance the state of the art in automatic MeSH indexing, a community-wide shared task called BioASQ was recently organized. We propose MeSH Now, an integrated approach that first uses multiple strategies to generate a combined list of candidate MeSH terms for a target article. Through a novel learning-to-rank framework, MeSH Now then ranks the list of candidate terms based on their relevance to the target article. Finally, MeSH Now selects the highest-ranked MeSH terms via a post-processing module. We assessed MeSH Now on two separate benchmarking datasets using traditional precision, recall and F1-score metrics. In both evaluations, MeSH Now consistently achieved over 0.60 in F1-score, ranging from 0.610 to 0.612. Furthermore, additional experiments show that MeSH Now can be optimized by parallel computing in order to process MEDLINE documents on a large scale. We conclude that MeSH Now is a robust approach with state-of-the-art performance for automatic MeSH indexing and that MeSH Now is capable of processing PubMed-scale documents within a reasonable time frame. http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/MeSHNow/.
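    The candidate-ranking stage described above can be sketched as a linear scoring model over per-candidate features. The feature names and weights are hypothetical, chosen only to illustrate the generate-then-rank pattern, and are not MeSH Now's learned model:

```python
# Hypothetical features and weights; MeSH Now's actual learned model differs.
WEIGHTS = {"knn_score": 0.6, "title_match": 0.3, "translation_prob": 0.1}

def rank_candidates(candidates, top_k=3):
    """candidates: {mesh_term: {feature: value}} -> top_k terms by linear score."""
    scored = {term: sum(WEIGHTS[f] * v for f, v in feats.items())
              for term, feats in candidates.items()}
    return sorted(scored, key=scored.get, reverse=True)[:top_k]

cands = {
    "Neoplasms": {"knn_score": 0.9, "title_match": 1.0, "translation_prob": 0.2},
    "Humans":    {"knn_score": 0.8, "title_match": 0.0, "translation_prob": 0.9},
    "Zebrafish": {"knn_score": 0.1, "title_match": 0.0, "translation_prob": 0.0},
}
print(rank_candidates(cands, top_k=2))  # ['Neoplasms', 'Humans']
```

    In a learning-to-rank setting the weights would be trained from pairwise or listwise preferences over manually indexed articles rather than set by hand.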

  18. Phantom Study Investigating the Accuracy of Manual and Automatic Image Fusion with the GE Logiq E9: Implications for use in Percutaneous Liver Interventions.

    PubMed

    Burgmans, Mark Christiaan; den Harder, J Michiel; Meershoek, Philippa; van den Berg, Nynke S; Chan, Shaun Xavier Ju Min; van Leeuwen, Fijs W B; van Erkel, Arian R

    2017-06-01

    To determine the accuracy of automatic and manual co-registration methods for image fusion of three-dimensional computed tomography (CT) with real-time ultrasonography (US) for image-guided liver interventions. CT images of a skills phantom with liver lesions were acquired and co-registered to US using GE Logiq E9 navigation software. Manual co-registration was compared to automatic and semiautomatic co-registration using an active tracker. Also, manual point registration was compared to plane registration with and without an additional translation point. Finally, comparison was made between manual and automatic selection of reference points. In each experiment, accuracy of the co-registration method was determined by measurement of the residual displacement in phantom lesions by two independent observers. Mean displacements for a superficial and deep liver lesion were comparable after manual and semiautomatic co-registration: 2.4 and 2.0 mm versus 2.0 and 2.5 mm, respectively. Both methods were significantly better than automatic co-registration: 5.9 and 5.2 mm residual displacement (p < 0.001; p < 0.01). The accuracy of manual point registration was higher than that of plane registration, the latter being heavily dependent on accurate matching of axial CT and US images by the operator. Automatic reference point selection resulted in significantly lower registration accuracy compared to manual point selection despite lower root-mean-square deviation (RMSD) values. The accuracy of manual and semiautomatic co-registration is better than that of automatic co-registration. For manual co-registration using a plane, choosing the correct plane orientation is an essential first step in the registration process. Automatic reference point selection based on RMSD values is error-prone.

  19. Automatic classification of seismic events within a regional seismograph network

    NASA Astrophysics Data System (ADS)

    Tiira, Timo; Kortström, Jari; Uski, Marja

    2015-04-01

    A fully automatic method for seismic event classification within a sparse regional seismograph network is presented. The tool is based on a supervised pattern recognition technique, the Support Vector Machine (SVM), trained here to distinguish weak local earthquakes from a bulk of human-made or spurious seismic events. The classification rules rely on differences in signal energy distribution between natural and artificial seismic sources. Seismic records are divided into four windows: P, P coda, S, and S coda. For each signal window, the short-term average amplitude (STA) is computed in 20 narrow frequency bands between 1 and 41 Hz. The resulting 80 discrimination parameters are used as training data for the SVM. The SVM models are calculated for 19 on-line seismic stations in Finland. The event data are compiled mainly from fully automatic event solutions that are manually classified after the automatic location process. The station-specific SVM training events include 11-302 positive (earthquake) and 227-1048 negative (non-earthquake) examples. The best voting rules for combining results from different stations are determined during an independent testing period. Finally, the network processing rules are applied to an independent evaluation period comprising 4681 fully automatic event determinations, of which 98 % have been manually identified as explosions or noise and 2 % as earthquakes. The SVM method correctly identifies 94 % of the non-earthquakes and all the earthquakes. The results imply that the SVM tool can identify and filter out blasts and spurious events from fully automatic event solutions with a high level of confidence. The tool helps to reduce the work-load in manual seismic analysis by leaving only ~5 % of the automatic event determinations, i.e. the probable earthquakes, for more detailed seismological analysis. The approach presented is easy to adjust to the requirements of a denser or wider high-frequency network, once enough training examples for building a station-specific data set are available.
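    The band-energy feature extraction described above (four windows x 20 bands = 80 parameters) can be sketched as follows. The sampling rate, window length, uniform band edges, and the use of mean spectral amplitude as a stand-in for the STA are simplified assumptions:

```python
import numpy as np

# Simplified assumptions: 100 Hz sampling, 512-sample windows, uniform band edges.
def band_features(window, fs=100.0, f_lo=1.0, f_hi=41.0, n_bands=20):
    spec = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    edges = np.linspace(f_lo, f_hi, n_bands + 1)
    # Average spectral amplitude in each narrow band.
    return np.array([spec[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

# Four windows (P, P coda, S, S coda), 20 bands each -> 80 parameters.
rng = np.random.default_rng(2)
windows = rng.standard_normal((4, 512))
feats = np.concatenate([band_features(w) for w in windows])
print(feats.shape)  # (80,)
```

    The resulting 80-element vector is what would be fed to a per-station SVM classifier.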

  20. 29 CFR 4050.8 - Automatic lump sum.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... present value (determined as of the deemed distribution date under the missing participant lump sum... Relating to Labor (Continued) PENSION BENEFIT GUARANTY CORPORATION PLAN TERMINATIONS MISSING PARTICIPANTS § 4050.8 Automatic lump sum. This section applies to a missing participant whose designated benefit was...

  1. Optical Automatic Car Identification (OACI) Field Test Program

    DOT National Transportation Integrated Search

    1976-05-01

    The results of the Optical Automatic Car Identification (OACI) tests at Chicago conducted from August 16 to September 4, 1975 are presented. The main purpose of this test was to determine the suitability of optics as a principle of operation for an a...

  2. Automatic design of optical systems by digital computer

    NASA Technical Reports Server (NTRS)

    Casad, T. A.; Schmidt, L. F.

    1967-01-01

    Computer program uses geometrical optical techniques and a least squares optimization method employing computing equipment for the automatic design of optical systems. It evaluates changes in various optical parameters, provides comprehensive ray-tracing, and generally determines the acceptability of the optical system characteristics.

  3. Automatic optometer operates with infrared test pattern

    NASA Technical Reports Server (NTRS)

    Cornsweet, T. N.; Crane, H. D.

    1970-01-01

    Refractive strength of human eye is monitored by optometer that automatically and continuously images infrared test pattern onto the retina. Condition of focus of the eye at any instant is determined from optometer settings needed to maintain focus of the pattern on the retina.

  4. Automatic Extraction of High-Resolution Rainfall Series from Rainfall Strip Charts

    NASA Astrophysics Data System (ADS)

    Saa-Requejo, Antonio; Valencia, Jose Luis; Garrido, Alberto; Tarquis, Ana M.

    2015-04-01

    Soil erosion is a complex phenomenon involving the detachment and transport of soil particles, storage and runoff of rainwater, and infiltration. The relative magnitude and importance of these processes depend on a host of factors, including climate, soil, topography, and cropping and land management practices, among others. Most models for soil erosion or hydrological processes need an accurate storm characterization. However, these data are not always available, and in some cases indirect models are generated to fill this gap. In Spain, rain intensity data for time periods of less than 24 hours exist back to 1924, but many studies are limited by their availability: in many cases the data are stored on rainfall strip charts at the meteorological stations and have never been transferred to numerical form. To overcome this deficiency in the raw data, a process for extracting information from large numbers of rainfall strip charts was implemented in software. The method, which largely automates the labour-intensive extraction work, is based on van Piggelen et al. (2011) and consists of five basic steps: 1) scanning the charts to high-resolution digital images, 2) manually and visually registering relevant meta-information from the charts and pre-processing, 3) applying automatic curve extraction software in a batch process to determine the coordinates of cumulative rainfall lines on the images (the main step), 4) post-processing the curves that were not correctly determined in step 3, and 5) aggregating the cumulative rainfall in pixel coordinates to the desired time resolution. A colour detection procedure automatically separates the background of the charts and rolls from the grid and subsequently the rainfall curve; the rainfall curve is detected by minimization of a cost function.
    Several utilities improve on the previous work and automate auxiliary processes: readjusting the bands properly, merging bands that were scanned in two parts, and detecting and cropping the borders of unused bands (as required by the software). Variants of the colour detection based on hue or on RGB colour have also been included. By applying this digitization to rainfall strip charts, 209 station-years from three locations in the centre of Spain have been transformed into long-term rainfall time series with 5-min resolution. References: van Piggelen, H.E., T. Brandsma, H. Manders, and J. F. Lichtenauer, 2011: Automatic Curve Extraction for Digitizing Rainfall Strip Charts. J. Atmos. Oceanic Technol., 28, 891-906. Acknowledgements: Financial support for this research by the DURERO Project (Env.C1.3913442) is greatly appreciated.
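    The colour-based separation and curve detection can be sketched in toy form: pixels whose hue falls in an assumed "ink" range are kept, and the curve's row coordinate is taken per image column. The hue range and the synthetic chart are illustrative assumptions, not the actual procedure's parameters:

```python
import numpy as np

# Toy hue channel of a scanned strip chart; the "ink" hue range is an assumption.
def extract_curve(hue_img, hue_lo=0.55, hue_hi=0.75):
    mask = (hue_img >= hue_lo) & (hue_img <= hue_hi)
    rows = np.arange(hue_img.shape[0])[:, None]
    counts = mask.sum(axis=0)
    with np.errstate(invalid="ignore"):
        # Column-wise mean row index of curve pixels; NaN where none found.
        return np.where(counts > 0, (rows * mask).sum(axis=0) / counts, np.nan)

img = np.zeros((50, 100))  # synthetic chart: background hue 0
img[20, :] = 0.6           # one flat trace at row 20 with "ink" hue
curve = extract_curve(img)
print(curve[0])  # 20.0
```

    The real method additionally handles grid removal and resolves ambiguous columns by minimizing a cost function along the trace.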

  5. Detecting, Monitoring, and Reporting Possible Adverse Drug Events Using an Arden-Syntax-based Rule Engine.

    PubMed

    Fehre, Karsten; Plössnig, Manuela; Schuler, Jochen; Hofer-Dückelmann, Christina; Rappelsberger, Andrea; Adlassnig, Klaus-Peter

    2015-01-01

    The detection of adverse drug events (ADEs) is an important aspect of improving patient safety. The iMedication system employs predefined triggers associated with significant events in a patient's clinical data to automatically detect possible ADEs. We defined four clinically relevant conditions: hyperkalemia, hyponatremia, renal failure, and over-anticoagulation. These are among the most relevant ADEs on internal medicine and geriatric wards. For each patient, ADE risk scores for all four conditions are calculated, compared against a threshold, and flagged for monitoring or reporting. A ward-based cockpit view summarizes the results.

  6. When Synchronizing to Rhythms Is Not a Good Thing: Modulations of Preparatory and Post-Target Neural Activity When Shifting Attention Away from On-Beat Times of a Distracting Rhythm.

    PubMed

    Breska, Assaf; Deouell, Leon Y

    2016-07-06

    Environmental rhythms potently drive predictive resource allocation in time, typically leading to perceptual and motor benefits for on-beat, relative to off-beat, times, even if the rhythmic stream is not intentionally used. In two human EEG experiments, we investigated the behavioral and electrophysiological expressions of using rhythms to direct resources away from on-beat times. This allowed us to distinguish goal-directed attention from the automatic capture of attention by rhythms. The following three conditions were compared: (1) a rhythmic stream with targets appearing frequently at a fixed off-beat position; (2) a rhythmic stream with targets appearing frequently at on-beat times; and (3) a nonrhythmic stream with matched target intervals. Shifting resources away from on-beat times was expressed in the slowing of responses to on-beat targets, but not in the facilitation of off-beat targets. The shifting of resources was accompanied by anticipatory adjustment of the contingent negative variation (CNV) buildup toward the expected off-beat time. In the second experiment, off-beat times were jittered, resulting in a similar CNV adjustment and also in preparatory amplitude reduction of beta-band activity. Thus, the CNV and beta activity track the relevance of time points and not the rhythm, given sufficient incentive. Furthermore, the effects of task relevance (appearing in a task-relevant vs irrelevant time) and rhythm (appearing on beat vs off beat) had additive behavioral effects and also dissociable neural manifestations in target-evoked activity: rhythm affected the target response as early as the P1 component, while relevance affected only the later N2 and P3. Thus, these two factors operate by distinct mechanisms. 
Rhythmic streams are widespread in our environment, and are typically conceptualized as automatic, bottom-up resource attractors to on-beat times: preparatory neural activity peaks at on-beat times, and behavioral benefits are seen for on-beat compared with off-beat targets. We show that this behavioral benefit is reversed when targets are more frequent at off-beat compared with on-beat times, and that preparatory neural activity, previously thought to be driven by the rhythm to on-beat times, is adjusted toward off-beat times. Furthermore, the effect of this relevance-based shifting on target-evoked brain activity was dissociable from the automatic effect of rhythms. Thus, rhythms can act as cues for flexible resource allocation according to the goal relevance of each time point, instead of being obligatory resource attractors. Copyright © 2016 the authors 0270-6474/16/367154-13$15.00/0.

  7. Incidental Learning of S-R Contingencies in the Masked Prime Task

    ERIC Educational Resources Information Center

    Schlaghecken, Friederike; Blagrove, Elisabeth; Maylor, Elizabeth A.

    2007-01-01

    Subliminal motor priming effects in the masked prime paradigm can only be obtained when primes are part of the task set. In 2 experiments, the authors investigated whether the relevant task set feature needs to be explicitly instructed or could be extracted automatically in an incidental learning paradigm. Primes and targets were symmetrical…

  8. Enriching Formal Language Learning with an Informal Social Component

    ERIC Educational Resources Information Center

    Dettori, Giuliana; Torsani, Simone

    2013-01-01

    This paper describes an informal component that we added to an online formal language learning environment in order to help the learners reach relevant Internet pages they can freely use to complement their learning activity. Thanks to this facility, each lesson is enriched, at run time, with a number of links automatically retrieved from social…

  9. Real Tweets, Fake News … and More from the NEJHE Beat …

    ERIC Educational Resources Information Center

    Harney, John O.

    2017-01-01

    Twitter is the closest thing that New England Higher Education has to a news service. Every New England Journal of Higher Education (NEJHE) item automatically posts to Twitter. But NEJHE also uses Twitter to disseminate relevant stories from outside. Not so much communicating personally, but aggregating interesting news or opinion from elsewhere,…

  10. Why Increased Social Presence through Web Videoconferencing Does Not Automatically Lead to Improved Learning

    ERIC Educational Resources Information Center

    Giesbers, Bas; Rienties, Bart; Tempelaar, Dirk T.; Gijselaers, Wim

    2014-01-01

    The Community of Inquiry (CoI) model provides a well-researched theoretical framework to understand how learners and teachers interact and learn together in computer-supported collaborative learning (CSCL). Most CoI research focuses on asynchronous learning. However, with the arrival of easy-to-use synchronous communication tools the relevance of…

  11. A novel approach for determining three-dimensional acetabular orientation: results from two hundred subjects.

    PubMed

    Higgins, Sean W; Spratley, E Meade; Boe, Richard A; Hayes, Curtis W; Jiranek, William A; Wayne, Jennifer S

    2014-11-05

    The inherently complex three-dimensional morphology of both the pelvis and acetabulum creates difficulties in accurately determining acetabular orientation. Our objectives were to develop a reliable and accurate methodology for determining three-dimensional acetabular orientation and to utilize it to describe relevant characteristics of a large population of subjects without apparent hip pathology. High-resolution computed tomography studies of 200 patients previously receiving pelvic scans for indications not related to orthopaedic conditions were selected from our institution's database. Three-dimensional models of each osseous pelvis were generated to extract specific anatomical data sets. A novel computational method was developed to determine standard measures of three-dimensional acetabular orientation within an automatically identified anterior pelvic plane reference frame. Automatically selected points on the osseous ridge of the acetabulum were used to generate a best-fit plane for describing acetabular orientation. Our method showed excellent interobserver and intraobserver agreement (an intraclass correlation coefficient [ICC] of >0.999) and achieved high levels of accuracy. A significant difference between males and females in both anteversion (average, 3.5°; 95% confidence interval [CI], 1.9° to 5.1° across all angular definitions; p < 0.0001) and inclination (1.4°; 95% CI, 0.6° to 2.3° for anatomic angular definition; p < 0.002) was observed. Intrapatient asymmetry in anatomic measures showed bilateral differences in anteversion (maximum, 12.1°) and in inclination (maximum, 10.9°). Significant differences in acetabular orientation between the sexes can be detected only with accurate measurements that account for the entire acetabulum. While a wide range of interpatient acetabular orientations was observed, the majority of subjects had acetabula that were relatively symmetrical in both inclination and anteversion.
A highly accurate and reproducible method for determining the orientation of the acetabulum's aperture will benefit both surgeons and patients, by further refining the distinctions between normal and abnormal hip characteristics. Enhanced understanding of the acetabulum could be useful in the diagnostic, planning, and execution stages for surgical procedures of the hip or in advancing the design of new implant systems. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
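
The best-fit plane through the automatically selected ridge points can be sketched with an ordinary least-squares fit; this is a generic plane fit (z = ax + by + c, solved via the normal equations), not necessarily the authors' exact computational method:

```python
def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through 3-D points
    (normal-equation solve; assumes the plane is not near-vertical).
    Returns [a, b, c]; the plane normal is (a, b, -1) up to scale."""
    sxx = sxy = sx = syy = sy = n = 0.0
    sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; sx += x
        syy += y * y; sy += y; n += 1
        sxz += x * z; syz += y * z; sz += z
    # 3x3 normal equations M [a, b, c]^T = v
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    v = [sxz, syz, sz]
    # Gaussian elimination with partial pivoting
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 3):
                M[r][c] -= f * M[i][c]
            v[r] -= f * v[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        coef[i] = (v[i] - sum(M[i][c] * coef[c] for c in range(i + 1, 3))) / M[i][i]
    return coef
```

Anteversion and inclination then follow from the angle of the fitted plane's normal relative to the anterior pelvic plane reference frame.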

  12. Pulmonary lobar volumetry using novel volumetric computer-aided diagnosis and computed tomography

    PubMed Central

    Iwano, Shingo; Kitano, Mariko; Matsuo, Keiji; Kawakami, Kenichi; Koike, Wataru; Kishimoto, Mariko; Inoue, Tsutomu; Li, Yuanzhong; Naganawa, Shinji

    2013-01-01

    OBJECTIVES To compare the accuracy of pulmonary lobar volumetry using the conventional number of segments method and novel volumetric computer-aided diagnosis using 3D computed tomography images. METHODS We acquired 50 consecutive preoperative 3D computed tomography examinations for lung tumours reconstructed at 1-mm slice thicknesses. We calculated the lobar volume and the emphysematous lobar volume < −950 HU of each lobe using (i) the slice-by-slice method (reference standard), (ii) the number of segments method, and (iii) semi-automatic and (iv) automatic computer-aided diagnosis. We determined Pearson correlation coefficients between the reference standard and the three other methods for lobar volumes and emphysematous lobar volumes. We also compared the relative errors among the three measurement methods. RESULTS Both semi-automatic and automatic computer-aided diagnosis results were more strongly correlated with the reference standard than the number of segments method. The correlation coefficients for automatic computer-aided diagnosis were slightly lower than those for semi-automatic computer-aided diagnosis because there was one outlier among 50 cases (2%) in the right upper lobe and two outliers among 50 cases (4%) in the other lobes. The number of segments method relative error was significantly greater than those for semi-automatic and automatic computer-aided diagnosis (P < 0.001). The computational time for automatic computer-aided diagnosis was one-half to two-thirds that of semi-automatic computer-aided diagnosis. CONCLUSIONS A novel lobar volumetry computer-aided diagnosis system could more precisely measure lobar volumes than the conventional number of segments method.
Because semi-automatic computer-aided diagnosis and automatic computer-aided diagnosis were complementary, in clinical use, it would be more practical to first measure volumes by automatic computer-aided diagnosis, and then use semi-automatic measurements if automatic computer-aided diagnosis failed. PMID:23526418

  13. 30 CFR 7.103 - Safety system control test.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... sensors which will automatically activate the safety shutdown system and stop the engine before the... the temperature sensor in the exhaust gas stream which will automatically activate the safety shutdown... using a wet exhaust conditioner, determine the effectiveness of the temperature sensor in the exhaust...

  14. 30 CFR 7.103 - Safety system control test.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... sensors which will automatically activate the safety shutdown system and stop the engine before the... the temperature sensor in the exhaust gas stream which will automatically activate the safety shutdown... using a wet exhaust conditioner, determine the effectiveness of the temperature sensor in the exhaust...

  15. 30 CFR 7.103 - Safety system control test.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... sensors which will automatically activate the safety shutdown system and stop the engine before the... the temperature sensor in the exhaust gas stream which will automatically activate the safety shutdown... using a wet exhaust conditioner, determine the effectiveness of the temperature sensor in the exhaust...

  16. Scanning Seismic Intrusion Detector

    NASA Technical Reports Server (NTRS)

    Lee, R. D.

    1982-01-01

    Scanning seismic intrusion detector employs array of automatically or manually scanned sensors to determine approximate location of intruder. Automatic-scanning feature enables one operator to tend system of many sensors. Typical sensors used with new system are moving-coil seismic pickups. Detector finds uses in industrial security systems.

  17. Comparison of the efficiency between two sampling plans for aflatoxins analysis in maize

    PubMed Central

    Mallmann, Adriano Olnei; Marchioro, Alexandro; Oliveira, Maurício Schneider; Rauber, Ricardo Hummes; Dilkin, Paulo; Mallmann, Carlos Augusto

    2014-01-01

    Variance and performance of two sampling plans for aflatoxins quantification in maize were evaluated. Eight lots of maize were sampled using two plans: manual, using sampling spear for kernels; and automatic, using a continuous flow to collect milled maize. Total variance and sampling, preparation, and analysis variance were determined and compared between plans through multifactor analysis of variance. Four theoretical distribution models were used to compare aflatoxins quantification distributions in eight maize lots. The acceptance and rejection probabilities for a lot under certain aflatoxin concentration were determined using variance and the information on the selected distribution model to build the operational characteristic curves (OC). Sampling and total variance were lower at the automatic plan. The OC curve from the automatic plan reduced both consumer and producer risks in comparison to the manual plan. The automatic plan is more efficient than the manual one because it expresses more accurately the real aflatoxin contamination in maize. PMID:24948911
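
A single point on an operating characteristic (OC) curve can be sketched by assuming, for illustration only, that the measured concentration is normally distributed about the true lot concentration with the sampling plan's total standard deviation (the paper instead selects among four theoretical distribution models):

```python
import math

def acceptance_prob(true_conc, limit, total_sd):
    """OC-curve point: probability that a lot with the given true
    aflatoxin concentration is accepted (measured value <= limit),
    under a simplifying normal-measurement-error assumption."""
    z = (limit - true_conc) / total_sd
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

A plan with smaller total variance yields a steeper OC curve, which is exactly why the automatic plan reduces both consumer and producer risks.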

  18. Determination of matrix composition based on solute-solute nearest-neighbor distances in atom probe tomography.

    PubMed

    De Geuser, F; Lefebvre, W

    2011-03-01

    In this study, we propose a fast automatic method providing the matrix concentration in an atom probe tomography (APT) data set containing two phases or more. The principle of this method relies on the calculation of the relative amount of isolated solute atoms (i.e., not surrounded by a similar solute atom) as a function of a distance d in the APT reconstruction. Simulated data sets have been generated to test the robustness of this new tool and demonstrate that rapid and reproducible results can be obtained without the need of any user input parameter. The method has then been successfully applied to a ternary Al-Zn-Mg alloy containing a fine dispersion of hardening precipitates. The relevance of this method for direct estimation of matrix concentration is discussed and compared with the existing methodologies. Copyright © 2010 Wiley-Liss, Inc.
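
The core quantity, the relative amount of isolated solute atoms as a function of the distance d, can be sketched with a brute-force neighbour search; the published implementation is presumably optimised, so this is illustrative only:

```python
def isolated_fraction(positions, d):
    """Fraction of solute atoms with no other solute atom within
    distance d (brute-force O(n^2) search). The matrix concentration
    estimate follows from how this fraction decays with d."""
    n = len(positions)
    d2 = d * d
    isolated = 0
    for i, (xi, yi, zi) in enumerate(positions):
        if all((xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2 > d2
               for j, (xj, yj, zj) in enumerate(positions) if j != i):
            isolated += 1
    return isolated / n
```

Atoms inside precipitates stop being "isolated" at very small d, whereas dilute matrix solutes remain isolated to much larger d; the decay of this curve therefore separates the two populations without user-supplied parameters.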

  19. Multiresolution forecasting for futures trading using wavelet decompositions.

    PubMed

    Zhang, B L; Coggins, R; Jabri, M A; Dersch, D; Flower, B

    2001-01-01

    We investigate the effectiveness of a financial time-series forecasting strategy which exploits the multiresolution property of the wavelet transform. A financial series is decomposed into an overcomplete, shift-invariant, scale-related representation. In transform space, each individual wavelet series is modeled by a separate multilayer perceptron (MLP). We apply the Bayesian method of automatic relevance determination to choose short past windows (short-term history) for the inputs to the MLPs at lower scales and long past windows (long-term history) at higher scales. To form the overall forecast, the individual forecasts are then recombined by the linear reconstruction property of the inverse transform with the chosen autocorrelation shell representation, or by another perceptron which learns the weight of each scale in the prediction of the original time series. The forecast results are then passed to a money management system to generate trades.
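
Automatic relevance determination itself can be illustrated in its simplest setting, Bayesian linear regression with one prior precision alpha_j per input: MacKay-style evidence updates drive alpha_j to a large value for irrelevant inputs, which is how input windows get selected per scale. This is a toy sketch of the ARD principle, not the paper's wavelet/MLP pipeline:

```python
def invert(A):
    """Gauss-Jordan inverse of a small square matrix."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        piv = M[i][i]
        M[i] = [v / piv for v in M[i]]
        for r in range(n):
            if r != i:
                f = M[r][i]
                M[r] = [a - f * b for a, b in zip(M[r], M[i])]
    return [row[n:] for row in M]

def ard_linear(X, y, beta=25.0, iters=30):
    """ARD for linear regression: each input dimension j has its own
    prior precision alpha_j, re-estimated from the evidence. Irrelevant
    inputs are driven to large alpha (weights pinned near zero)."""
    D = len(X[0])
    alpha = [1.0] * D
    for _ in range(iters):
        # Posterior precision A = beta * X^T X + diag(alpha)
        A = [[beta * sum(r[i] * r[j] for r in X) +
              (alpha[i] if i == j else 0.0)
              for j in range(D)] for i in range(D)]
        S = invert(A)                       # posterior covariance
        Xty = [sum(r[j] * t for r, t in zip(X, y)) for j in range(D)]
        m = [beta * sum(S[j][k] * Xty[k] for k in range(D))
             for j in range(D)]             # posterior mean weights
        for j in range(D):                  # MacKay fixed-point update
            gamma = 1.0 - alpha[j] * S[j][j]
            alpha[j] = max(gamma, 1e-12) / max(m[j] ** 2, 1e-12)
    return alpha, m
```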

  20. ROBIN: a platform for evaluating automatic target recognition algorithms: II. Protocols used for evaluating algorithms and results obtained on the SAGEM DS database

    NASA Astrophysics Data System (ADS)

    Duclos, D.; Lonnoy, J.; Guillerm, Q.; Jurie, F.; Herbin, S.; D'Angelo, E.

    2008-04-01

    Over the past five years, the computer vision community has explored many different avenues of research for Automatic Target Recognition. Noticeable advances have been made, and we are now in the situation where large-scale evaluations of ATR technologies have to be carried out, to determine what the limitations of the recently proposed methods are and to determine the best directions for future work. ROBIN, which is a project funded by the French Ministry of Defence and by the French Ministry of Research, has the ambition of being a new reference for benchmarking ATR algorithms in operational contexts. This project, headed by major companies and research centers involved in Computer Vision R&D in the field of Defense (Bertin Technologies, CNES, ECA, DGA, EADS, INRIA, ONERA, MBDA, SAGEM, THALES), recently released a large dataset of several thousands of hand-annotated infrared and RGB images of different targets in different situations. Setting up an evaluation campaign requires us to define, accurately and carefully, sets of data (both for training ATR algorithms and for their evaluation), tasks to be evaluated, and finally protocols and metrics for the evaluation. ROBIN offers interesting contributions to each one of these three points. This paper first describes, justifies and defines the set of functions used in the ROBIN competitions and relevant for evaluating ATR algorithms (Detection, Localization, Recognition and Identification). It also defines the metrics and the protocol used for evaluating these functions. In the second part of the paper, the results obtained by several state-of-the-art algorithms on the SAGEM DS database (a subpart of ROBIN) are presented and discussed.

  1. On a program manifold's stability of one contour automatic control systems

    NASA Astrophysics Data System (ADS)

    Zumatov, S. S.

    2017-12-01

    A methodology for the stability analysis of single-contour automatic control systems with feedback in the presence of non-linearities is expounded. The methodology is based on the simplest mathematical models of nonlinear controllable systems. The stability of program manifolds of single-contour automatic control systems is investigated, and sufficient conditions for the absolute stability of a program manifold of such systems are obtained. The Hurwitz angle of absolute stability is determined. Sufficient conditions for the absolute stability of a program manifold of systems controlling the course of a plane in autopilot mode are obtained by means of Lyapunov's second method.

  2. Automatic analysis of stereoscopic satellite image pairs for determination of cloud-top height and structure

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.; Strong, J.; Woodward, R. H.; Pierce, H.

    1991-01-01

    Results are presented on an automatic stereo analysis of cloud-top heights from nearly simultaneous satellite image pairs from the GOES and NOAA satellites, using a massively parallel processor computer. Comparisons of computer-derived height fields and manually analyzed fields show that the automatic analysis technique shows promise for performing routine stereo analysis in a real-time environment, providing a useful forecasting tool by augmenting observational data sets of severe thunderstorms and hurricanes. Simulations using synthetic stereo data show that it is possible to automatically resolve small-scale features such as 4000-m-diam clouds to about 1500 m in the vertical.

  3. High-speed data search

    NASA Technical Reports Server (NTRS)

    Driscoll, James N.

    1994-01-01

    The high-speed data search system developed for KSC incorporates existing and emerging information retrieval technology to help a user intelligently and rapidly locate information found in large textual databases. This technology includes: natural language input; statistical ranking of retrieved information; an artificial intelligence concept called semantics, where 'surface level' knowledge found in text is used to improve the ranking of retrieved information; and relevance feedback, where user judgements about viewed information are used to automatically modify the search for further information. Semantics and relevance feedback are features of the system which are not available commercially. The system also focuses on paragraphs of information to decide relevance, and it can be used (without modification) to intelligently search all kinds of document collections, such as collections of legal documents, medical documents, news stories, patents, and so forth. The purpose of this paper is to demonstrate the usefulness of statistical ranking, our semantic improvement, and relevance feedback.
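
The abstract does not spell out its relevance feedback algorithm; the classic Rocchio update, in which user judgements move the query vector toward relevant documents and away from non-relevant ones, is a standard sketch of the idea (the coefficients a, b, c below are conventional defaults, not the KSC system's values):

```python
def rocchio(query, relevant, nonrelevant, a=1.0, b=0.75, c=0.15):
    """Rocchio-style relevance feedback: shift the query vector toward
    the centroid of judged-relevant documents and away from the centroid
    of judged-non-relevant ones. Vectors are dicts of term -> weight."""
    terms = set(query)
    for doc in relevant + nonrelevant:
        terms |= set(doc)
    def centroid(docs, t):
        return sum(d.get(t, 0.0) for d in docs) / len(docs) if docs else 0.0
    new_query = {}
    for t in terms:
        w = (a * query.get(t, 0.0)
             + b * centroid(relevant, t)
             - c * centroid(nonrelevant, t))
        if w > 0:              # negative weights are conventionally dropped
            new_query[t] = w
    return new_query
```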

  4. ActiveSeismoPick3D - automatic first arrival determination for large active seismic arrays

    NASA Astrophysics Data System (ADS)

    Paffrath, Marcel; Küperkoch, Ludger; Wehling-Benatelli, Sebastian; Friederich, Wolfgang

    2016-04-01

    We developed a tool for automatic determination of first arrivals in active seismic data based on an approach that utilises higher order statistics (HOS) and the Akaike information criterion (AIC), commonly used in seismology but not in active seismics. Automatic picking is highly desirable in active seismics, as the amount of data provided by large seismic arrays rapidly exceeds what an analyst can evaluate in a reasonable amount of time. To bring the functionality of automatic phase picking into the context of active data, the software package ActiveSeismoPick3D was developed in Python. It uses a modified algorithm for the determination of first arrivals which searches for the HOS maximum in unfiltered data. Additionally, it offers tools for manual quality control and postprocessing, e.g. various visualisation and repicking functionalities. For flexibility, the tool also includes methods for the preparation of geometry information of large seismic arrays and improved interfaces to the Fast Marching Tomography Package (FMTOMO), which can be used for the prediction of travel times and inversion for subsurface properties. Output files are generated in the VTK format, allowing the 3D visualization of e.g. the inversion results. As a test case, a data set consisting of 9216 traces from 64 shots was gathered, recorded at 144 receivers deployed in a regular 2D array of a size of 100 x 100 m. ActiveSeismoPick3D automatically checks the determined first arrivals by a dynamic signal-to-noise ratio threshold. From the data, a 3D model of the subsurface was generated using the export functionality of the package and FMTOMO.
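
The AIC part of such a picker can be sketched in a few lines: the trace is split at every candidate sample and scored by the variances of the two segments, with the AIC minimum marking the noise-to-signal transition. This is the textbook variance-based AIC variant; the package combines it with higher-order statistics and works on unfiltered data:

```python
import math

def aic_pick(trace):
    """Akaike-information-criterion onset picker: for each split point k,
    score AIC(k) = k*log(var(trace[:k])) + (N-k-1)*log(var(trace[k:]));
    the minimum marks the first-arrival sample."""
    n = len(trace)
    def var(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg) / len(seg)
    best_k, best_aic = None, float("inf")
    for k in range(2, n - 2):
        v1, v2 = var(trace[:k]), var(trace[k:])
        aic = (k * math.log(max(v1, 1e-20))
               + (n - k - 1) * math.log(max(v2, 1e-20)))
        if aic < best_aic:
            best_k, best_aic = k, aic
    return best_k
```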

  5. Computerized Bone Age Estimation Using Deep Learning Based Program: Evaluation of the Accuracy and Efficiency.

    PubMed

    Kim, Jeong Rye; Shim, Woo Hyun; Yoon, Hee Mang; Hong, Sang Hyup; Lee, Jin Seong; Cho, Young Ah; Kim, Sangki

    2017-12-01

    The purpose of this study is to evaluate the accuracy and efficiency of a new automatic software system for bone age assessment and to validate its feasibility in clinical practice. A Greulich-Pyle method-based deep-learning technique was used to develop the automatic software system for bone age determination. Using this software, bone age was estimated from left-hand radiographs of 200 patients (3-17 years old) using first-rank bone age (software only), computer-assisted bone age (two radiologists with software assistance), and Greulich-Pyle atlas-assisted bone age (two radiologists with Greulich-Pyle atlas assistance only). The reference bone age was determined by the consensus of two experienced radiologists. First-rank bone ages determined by the automatic software system showed a 69.5% concordance rate and significant correlations with the reference bone age (r = 0.992; p < 0.001). Concordance rates increased with the use of the automatic software system for both reviewer 1 (63.0% for Greulich-Pyle atlas-assisted bone age vs 72.5% for computer-assisted bone age) and reviewer 2 (49.5% for Greulich-Pyle atlas-assisted bone age vs 57.5% for computer-assisted bone age). Reading times were reduced by 18.0% and 40.0% for reviewers 1 and 2, respectively. The automatic software system showed reliably accurate bone age estimations and appeared to enhance efficiency by reducing reading times without compromising the diagnostic accuracy.

  6. PubMedReco: A Real-Time Recommender System for PubMed Citations.

    PubMed

    Samuel, Hamman W; Zaïane, Osmar R

    2017-01-01

    We present a recommender system, PubMedReco, for real-time suggestions of medical articles from PubMed, a database of over 23 million medical citations. PubMedReco can recommend medical article citations while users are conversing in a synchronous communication environment such as a chat room. Normally, users would have to leave their chat interface to open a new web browser window, and formulate an appropriate search query to retrieve relevant results. PubMedReco automatically generates the search query and shows relevant citations within the same integrated user interface. PubMedReco analyzes relevant keywords associated with the conversation and uses them to search for relevant citations using the PubMed E-utilities programming interface. Our contributions include improvements to the user experience for searching PubMed from within health forums and chat rooms, and a machine learning model for identifying relevant keywords. We demonstrate the feasibility of PubMedReco using BMJ's Doc2Doc forum discussions.
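
Query construction against the PubMed E-utilities esearch endpoint can be sketched as follows. The keyword extraction itself is the learned part of PubMedReco and is not reproduced here; joining keywords with AND is an assumption about how the query is formed:

```python
from urllib.parse import urlencode

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"

def build_esearch_url(keywords, retmax=5):
    """Build a PubMed esearch URL from extracted conversation keywords,
    using the documented db/term/retmax parameters."""
    term = " AND ".join(keywords)
    return ESEARCH + urlencode({"db": "pubmed",
                                "term": term,
                                "retmax": retmax})
```

Fetching and parsing the returned citation IDs (and then the citations themselves via efetch) would follow, but is omitted here.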

  7. Semi automatic indexing of PostScript files using Medical Text Indexer in medical education.

    PubMed

    Mollah, Shamim Ara; Cimino, Christopher

    2007-10-11

    At Albert Einstein College of Medicine, a large part of the online lecture materials consists of PostScript files. As the collection grows, it becomes essential to create a digital library offering easy access to relevant sections of the lecture material through a full-text index; to create this index it is necessary to extract all the text from the document files that constitute the originals of the lectures. In this study, we present a semi-automatic indexing method that uses a robust technique for extracting text from PostScript files and the National Library of Medicine's Medical Text Indexer (MTI) program for indexing the text. This model can be applied at other medical schools for indexing purposes.

  8. Food Safety by Using Machine Learning for Automatic Classification of Seeds of the South-American Incanut Plant

    NASA Astrophysics Data System (ADS)

    Lemanzyk, Thomas; Anding, Katharina; Linss, Gerhard; Rodriguez Hernández, Jorge; Theska, René

    2015-02-01

    The following paper deals with the classification of seeds and seed components of the South-American Incanut plant and the modification of a machine to handle this task. Initially, the state of the art is illustrated. The research was executed in Germany, with a relevant part carried out in Peru and Ecuador. Theoretical considerations for the solution of an automatic analysis of the Incanut seeds are specified. The optimization of the analysis software and of the separation unit of the mechanical hardware is carried out using the recognition results. In a final step, the practical application of the analysis of the Incanut seeds is conducted on a trial basis and rated on the basis of statistical values.

  9. Singing numbers…in cognitive space--a dual-task study of the link between pitch, space, and numbers.

    PubMed

    Fischer, Martin H; Riello, Marianna; Giordano, Bruno L; Rusconi, Elena

    2013-04-01

    We assessed the automaticity of spatial-numerical and spatial-musical associations by testing their intentionality and load sensitivity in a dual-task paradigm. In separate sessions, 16 healthy adults performed magnitude and pitch comparisons on sung numbers with variable pitch. Stimuli and response alternatives were identical, but the relevant stimulus attribute (pitch or number) differed between tasks. Concomitant tasks required retention of either color or location information. Results show that spatial associations of both magnitude and pitch are load sensitive and that the spatial association for pitch is more powerful than that for magnitude. These findings argue against the automaticity of spatial mappings in either stimulus dimension. Copyright © 2013 Cognitive Science Society, Inc.

  10. The validity of different measures of automatic alcohol action tendencies.

    PubMed

    Kersbergen, Inge; Woud, Marcella L; Field, Matt

    2015-03-01

    Previous studies have demonstrated that automatic alcohol action tendencies are related to alcohol consumption and hazardous drinking. These action tendencies are measured with reaction time tasks in which the latency to make an approach response to alcohol pictures is compared with the latency to make an avoidance response. In the literature, 4 different tasks have been used, and these tasks differ on whether alcohol is a relevant (R) or irrelevant (IR) feature for categorization and on whether participants must make a symbolic approach response (stimulus-response compatibility [SRC] tasks) or an overt behavioral response (approach avoidance tasks [AAT]) to the pictures. Previous studies have shown positive correlations between measures of action tendencies and hazardous drinking and weekly alcohol consumption. However, results have been inconsistent and the different measures have not been directly compared with each other. Therefore, it is unclear which task is the best predictor of hazardous drinking and alcohol consumption. In the present study, 80 participants completed all 4 measures of action tendencies (i.e., R-SRC, IR-SRC, R-AAT, and IR-AAT) and measures of alcohol consumption and hazardous drinking. Stepwise regressions showed that the R-SRC and R-AAT were the only significant predictors of hazardous drinking, whereas the R-AAT was the only reliable predictor of alcohol consumption. Our results confirm that drinking behavior is positively correlated with automatic alcohol approach tendencies, but only if alcohol-relatedness is the relevant feature for categorization. Theoretical implications and methodological issues are discussed. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  11. Symbolic and non symbolic numerical representation in adults with and without developmental dyscalculia

    PubMed Central

    2012-01-01

    Background The question whether Developmental Dyscalculia (DD; a deficit in the ability to process numerical information) is the result of deficiencies in the non symbolic numerical representation system (e.g., a group of dots) or in the symbolic numerical representation system (e.g., Arabic numerals) has been debated in scientific literature. It is accepted that the non symbolic system is divided into two different ranges, the subitizing range (i.e., quantities from 1-4) which is processed automatically and quickly, and the counting range (i.e., quantities larger than 4) which is an attention demanding procedure and is therefore processed serially and slowly. However, so far no study has tested the automaticity of symbolic and non symbolic representation in DD participants separately for the subitizing and the counting ranges. Methods DD and control participants underwent a novel version of the Stroop task, i.e., the Enumeration Stroop. They were presented with a random series of between one and nine written digits, and were asked to name either the relevant written digit (in the symbolic task) or the relevant quantity of digits (in the non symbolic task) while ignoring the irrelevant aspect. Result DD participants, unlike the control group, did not show any congruency effect in the subitizing range of the non symbolic task. Conclusion These findings suggest that DD may be impaired in the ability to process symbolic numerical information or in the ability to automatically associate the two systems (i.e., the symbolic vs. the non symbolic). Additionally, DD participants have deficiencies in the non symbolic counting range. PMID:23190433

  12. Symbolic and non symbolic numerical representation in adults with and without developmental dyscalculia.

    PubMed

    Furman, Tamar; Rubinsten, Orly

    2012-11-28

    The question whether Developmental Dyscalculia (DD; a deficit in the ability to process numerical information) is the result of deficiencies in the non symbolic numerical representation system (e.g., a group of dots) or in the symbolic numerical representation system (e.g., Arabic numerals) has been debated in the scientific literature. It is accepted that the non symbolic system is divided into two different ranges: the subitizing range (i.e., quantities from 1-4), which is processed automatically and quickly, and the counting range (i.e., quantities larger than 4), which is an attention-demanding procedure and is therefore processed serially and slowly. However, so far no study has tested the automaticity of symbolic and non symbolic representation in DD participants separately for the subitizing and the counting ranges. DD and control participants underwent a novel version of the Stroop task, i.e., the Enumeration Stroop. They were presented with a random series of between one and nine written digits, and were asked to name either the relevant written digit (in the symbolic task) or the relevant quantity of digits (in the non symbolic task) while ignoring the irrelevant aspect. DD participants, unlike the control group, did not show a congruency effect in the subitizing range of the non symbolic task. These findings suggest that DD may involve an impaired ability to process symbolic numerical information or to automatically associate the two systems (i.e., the symbolic vs. the non symbolic). Additionally, DD participants showed deficiencies in the non symbolic counting range.
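
    The congruency effect at issue here can be illustrated with a minimal sketch: it is the mean reaction-time difference between incongruent and congruent trials, so an intact effect is clearly positive while an absent effect (as in the DD group's subitizing range) hovers near zero. The reaction times below are purely illustrative, not data from the study.

```python
# Illustrative sketch: a congruency effect is the mean reaction-time (RT)
# difference between incongruent and congruent trials.
def congruency_effect(congruent_rts, incongruent_rts):
    """Return mean(incongruent) - mean(congruent), in the RTs' units (ms)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(incongruent_rts) - mean(congruent_rts)

# Hypothetical control-group data (ms):
effect = congruency_effect([412, 430, 405], [470, 455, 480])
print(round(effect, 1))  # → 52.7
```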

  13. Ingress clearance requirements and seat positioning for automatic belt systems

    DOT National Transportation Integrated Search

    1981-06-01

    The purposes of this study were (1) to determine how much clearance between a seat belt and seat cushion is needed for a driver to enter the front seat of an automobile equipped with automatic seat belts---without his/her having to lift the webbing, ...

  14. Apparatus enables automatic microanalysis of body fluids

    NASA Technical Reports Server (NTRS)

    Soffen, G. A.; Stuart, J. L.

    1966-01-01

    Apparatus will automatically and quantitatively determine body fluid constituents which are amenable to analysis by fluorometry or colorimetry. The results of the tests are displayed as percentages of full scale deflection on a strip-chart recorder. The apparatus can also be adapted for microanalysis of various other fluids.

  15. Automatic classification and detection of clinically relevant images for diabetic retinopathy

    NASA Astrophysics Data System (ADS)

    Xu, Xinyu; Li, Baoxin

    2008-03-01

    We proposed a novel approach to automatic classification of Diabetic Retinopathy (DR) images and retrieval of clinically-relevant DR images from a database. Given a query image, our approach first classifies the image into one of the three categories: microaneurysm (MA), neovascularization (NV) and normal, and then it retrieves DR images that are clinically-relevant to the query image from an archival image database. In the classification stage, the query DR images are classified by the Multi-class Multiple-Instance Learning (McMIL) approach, where images are viewed as bags, each of which contains a number of instances corresponding to non-overlapping blocks, and each block is characterized by low-level features including color, texture, histogram of edge directions, and shape. McMIL first learns a collection of instance prototypes for each class that maximizes the Diverse Density function using the Expectation-Maximization algorithm. A nonlinear mapping is then defined using the instance prototypes and maps every bag to a point in a new multi-class bag feature space. Finally a multi-class Support Vector Machine is trained in the multi-class bag feature space. In the retrieval stage, we retrieve images from the archival database that bear the same label as the query image and that are the top K nearest neighbors of the query image in terms of similarity in the multi-class bag feature space. The classification approach achieves high classification accuracy, and the retrieval of clinically-relevant images not only facilitates utilization of the vast amount of hidden diagnostic knowledge in the database, but also improves the efficiency and accuracy of DR lesion diagnosis and assessment.

  16. Use of an automatic procedure for determination of classes of land use in the Teste Araras area of the peripheral Paulist depression

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Lombardo, M. A.; Valeriano, D. D.

    1981-01-01

    An evaluation of the multispectral image analyzer (system Image 1-100), using automatic classification, is presented. The region studied is situated in the Araras test area of the peripheral Paulista depression. The automatic classification was carried out using the maximum likelihood (MAXVER) classification system. The following classes were established: urban area, bare soil, sugar cane, citrus culture (oranges), pastures, and reforestation. The classification matrix of the test sites indicates that the percentage of correct classification varied between 63% and 100%.
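
    A maximum-likelihood classifier of the MAXVER kind can be sketched as follows, assuming (for illustration only) a single spectral band with Gaussian class statistics; the actual Image-100 system works on multispectral pixels with full covariance matrices, and the class means and standard deviations below are invented.

```python
import math

# Sketch: assign a pixel to the class with the highest Gaussian
# log-likelihood, given per-class (mean, std) statistics for one band.
def classify(pixel, class_stats):
    """class_stats: {name: (mean, std)}; returns the most likely class."""
    def log_likelihood(mean, std):
        return -math.log(std) - ((pixel - mean) ** 2) / (2 * std ** 2)
    return max(class_stats, key=lambda c: log_likelihood(*class_stats[c]))

stats = {"bare soil": (180.0, 15.0), "sugar cane": (95.0, 10.0),
         "pasture": (120.0, 12.0)}
print(classify(100.0, stats))  # → sugar cane
```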

  17. Automatic adjustment on private pension fund for Asian Mathematics Conferences

    NASA Astrophysics Data System (ADS)

    Purwadi, J.

    2017-10-01

    This paper discusses how an automatic adjustment mechanism works in a defined-benefit pension fund when conditions fall outside the predetermined assumptions. The automatic adjustment referred to here is intended to anticipate changes in economic and demographic conditions. The method discussed in this paper is the indexing of life expectancy. The paper discusses how this method applies to private pension funds and what impact a change in life expectancy has on benefits.
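
    One common form of life-expectancy indexing (assumed here for illustration; the paper does not spell out its exact formula) scales the annual benefit by the ratio of the assumed to the observed remaining life expectancy, so that the expected total payout stays roughly constant as longevity rises.

```python
# Hypothetical indexing rule: benefit shrinks in proportion to how much
# observed life expectancy exceeds the assumption.
def indexed_benefit(base_benefit, assumed_expectancy, observed_expectancy):
    return base_benefit * assumed_expectancy / observed_expectancy

# If retirees live 21 years instead of the assumed 20, the annual
# benefit is reduced accordingly:
print(round(indexed_benefit(12000.0, 20.0, 21.0), 2))  # → 11428.57
```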

  18. Automatic priming of attentional control by relevant colors.

    PubMed

    Ansorge, Ulrich; Becker, Stefanie I

    2012-01-01

    We tested whether color word cues automatically primed attentional control settings during visual search, or whether color words were used in a strategic manner for the control of attention. In Experiment 1, we used color words as cues that were informative or uninformative with respect to the target color. Regardless of the cue's informativeness, distractors similar to the color cue captured more attention. In Experiment 2, the participants either indicated their expectation about the target color or recalled the last target color, which was uncorrelated with the present target color. We observed more attentional capture by distractors that were similar to the participants' predictions and recollections, but no difference between effects of the recollected and predicted colors. In Experiment 3, we used 100%-informative word cues that were congruent with the predicted target color (e.g., the word "red" informed that the target would be red) or incongruent with the predicted target color (e.g., the word "green" informed that the target would be red) and found that informative incongruent word cues primed attention capture by a word-similar distractor. Together, the results suggest that word cues (Exps. 1 and 3) and color representations (Exp. 2) primed attention capture in an automatic manner. This indicates that color cues automatically primed temporary adjustments in attention control settings.

  19. Automatic MRI 2D brain segmentation using graph searching technique.

    PubMed

    Pedoia, Valentina; Binaghi, Elisabetta

    2013-09-01

    Accurate and efficient segmentation of the whole brain in magnetic resonance (MR) images is a key task in many neuroscience and medical studies, either because the whole brain is the final anatomical structure of interest or because the automatic extraction facilitates further analysis. The problem of segmenting brain MRI images has been extensively addressed by many researchers. Despite the relevant achievements obtained, automated segmentation of brain MRI imagery is still a challenging problem whose solution has to cope with critical aspects such as anatomical variability and pathological deformation. In the present paper, we describe and experimentally evaluate a method for segmenting the brain in MRI images based on two-dimensional graph-searching principles for border detection. The segmentation of the whole brain over the entire volume is accomplished slice by slice, automatically detecting frames including the eyes. The method is fully automatic and easily reproducible, computing the internal main parameters directly from the image data. The segmentation procedure is conceived as a tool of general applicability, although design requirements are especially commensurate with the accuracy required in clinical tasks such as surgical planning and post-surgical assessment. Several experiments were performed to assess the performance of the algorithm on a varied set of MRI images, obtaining good results in terms of accuracy and stability. Copyright © 2012 John Wiley & Sons, Ltd.

  20. Automated contour detection in X-ray left ventricular angiograms using multiview active appearance models and dynamic programming.

    PubMed

    Oost, Elco; Koning, Gerhard; Sonka, Milan; Oemrawsingh, Pranobe V; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2006-09-01

    This paper describes a new approach to the automated segmentation of X-ray left ventricular (LV) angiograms, based on active appearance models (AAMs) and dynamic programming. A coupling of shape and texture information between the end-diastolic (ED) and end-systolic (ES) frame was achieved by constructing a multiview AAM. Over-constraining of the model was compensated for by employing dynamic programming, integrating both intensity and motion features in the cost function. Two applications are compared: a semi-automatic method with manual model initialization, and a fully automatic algorithm. The first proved to be highly robust and accurate, demonstrating high clinical relevance. Based on experiments involving 70 patient data sets, the algorithm's success rate was 100% for ED and 99% for ES, with average unsigned border positioning errors of 0.68 mm for ED and 1.45 mm for ES. Calculated volumes were accurate and unbiased. The fully automatic algorithm, with intrinsically less user interaction was less robust, but showed a high potential, mostly due to a controlled gradient descent in updating the model parameters. The success rate of the fully automatic method was 91% for ED and 83% for ES, with average unsigned border positioning errors of 0.79 mm for ED and 1.55 mm for ES.

  1. Does expert knowledge improve automatic probabilistic classification of gait joint motion patterns in children with cerebral palsy?

    PubMed Central

    Papageorgiou, Eirini; Nieuwenhuys, Angela; Desloovere, Kaat

    2017-01-01

    Background This study aimed to improve the automatic probabilistic classification of joint motion gait patterns in children with cerebral palsy by using the expert knowledge available via a recently developed Delphi-consensus study. To this end, this study applied both Naïve Bayes and Logistic Regression classification with varying degrees of usage of the expert knowledge (expert-defined and discretized features). A database of 356 patients and 1719 gait trials was used to validate the classification performance of eleven joint motions. Hypotheses Two main hypotheses stated that: (1) Joint motion patterns in children with CP, obtained through a Delphi-consensus study, can be automatically classified following a probabilistic approach, with an accuracy similar to clinical expert classification, and (2) The inclusion of clinical expert knowledge in the selection of relevant gait features and the discretization of continuous features increases the performance of automatic probabilistic joint motion classification. Findings This study provided objective evidence supporting the first hypothesis. Automatic probabilistic gait classification using the expert knowledge available from the Delphi-consensus study resulted in accuracy (91%) similar to that obtained with two expert raters (90%), and higher accuracy than that obtained with non-expert raters (78%). Regarding the second hypothesis, this study demonstrated that the use of more advanced machine learning techniques such as automatic feature selection and discretization instead of expert-defined and discretized features can result in slightly higher joint motion classification performance. However, the increase in performance is limited and does not outweigh the additional computational cost and the higher risk of loss of clinical interpretability, which threatens the clinical acceptance and applicability. PMID:28570616

  2. Automatic segmentation and quantification of the cardiac structures from non-contrast-enhanced cardiac CT scans

    NASA Astrophysics Data System (ADS)

    Shahzad, Rahil; Bos, Daniel; Budde, Ricardo P. J.; Pellikaan, Karlijn; Niessen, Wiro J.; van der Lugt, Aad; van Walsum, Theo

    2017-05-01

    Early structural changes to the heart, including the chambers and the coronary arteries, provide important information on pre-clinical heart disease like cardiac failure. Currently, contrast-enhanced cardiac computed tomography angiography (CCTA) is the preferred modality for the visualization of the cardiac chambers and the coronaries. In clinical practice not every patient undergoes a CCTA scan; many patients receive only a non-contrast-enhanced calcium scoring CT scan (CTCS), which has a lower radiation dose and does not require the administration of contrast agent. Quantifying cardiac structures in such images is challenging, as they lack the contrast present in CCTA scans. Such quantification would however be relevant, as it enables population-based studies with only a CTCS scan. The purpose of this work is therefore to investigate the feasibility of automatic segmentation and quantification of cardiac structures, viz. the whole heart, left atrium, left ventricle, right atrium, right ventricle and aortic root, from CTCS scans. A fully automatic multi-atlas-based segmentation approach is used to segment the cardiac structures. Results show that the segmentation overlap between the automatic method and the reference standard has a Dice similarity coefficient of 0.91 on average for the cardiac chambers. The mean surface-to-surface distance error over all the cardiac structures is 1.4 ± 1.7 mm. The automatically obtained cardiac chamber volumes using the CTCS scans have an excellent correlation when compared to the volumes in corresponding CCTA scans; a Pearson correlation coefficient (R) of 0.95 is obtained. Our fully automatic method enables large-scale assessment of cardiac structures on non-contrast-enhanced CT scans.
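
    The Dice similarity coefficient used to report the 0.91 chamber overlap can be sketched as follows, treating segmentation masks as sets of voxel indices for simplicity (the study itself works on 3D label volumes).

```python
# Dice similarity coefficient: 2|A∩B| / (|A| + |B|), ranging from 0
# (no overlap) to 1 (identical masks).
def dice(mask_a, mask_b):
    if not mask_a and not mask_b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

auto = {(0, 0), (0, 1), (1, 0), (1, 1)}
reference = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice(auto, reference))  # → 0.75
```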

  3. Automatic quality assessment and peak identification of auditory brainstem responses with fitted parametric peaks.

    PubMed

    Valderrama, Joaquin T; de la Torre, Angel; Alvarez, Isaac; Segura, Jose Carlos; Thornton, A Roger D; Sainz, Manuel; Vargas, Jose Luis

    2014-05-01

    The recording of the auditory brainstem response (ABR) is used worldwide for hearing screening purposes. In this process, a precise estimation of the most relevant components is essential for an accurate interpretation of these signals. This evaluation is usually carried out subjectively by an audiologist. However, the use of automatic methods for this purpose is being encouraged nowadays in order to reduce human evaluation biases and ensure uniformity among test conditions, patients, and screening personnel. This article describes a new method that performs automatic quality assessment and identification of the peaks, the fitted parametric peaks (FPP). This method is based on the use of synthesized peaks that are adjusted to the ABR response. The FPP is validated, on the one hand, by an analysis of amplitudes and latencies measured manually by an audiologist and automatically by the FPP method in ABR signals recorded at different stimulation rates; and, on the other hand, by contrasting the performance of the FPP method with automatic evaluation techniques based on the correlation coefficient, FSP, and cross-correlation with a predefined template waveform, comparing the automatic quality evaluations of these methods with subjective evaluations provided by five experienced evaluators on a set of ABR signals of different quality. The results of this study suggest (a) that the FPP method can be used to provide an accurate parameterization of the peaks in terms of amplitude, latency, and width, and (b) that the FPP remains the method that best approaches the averaged subjective quality evaluation, as well as providing the best results in terms of sensitivity and specificity in ABR signal validation. The significance of these findings and the clinical value of the FPP method are highlighted in this paper. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
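
    One of the baseline techniques the FPP method is compared against, the correlation coefficient, can be sketched as a Pearson correlation between two independent sub-averages of the recording: a reproducible, hence reliable, ABR yields a correlation near 1. The sub-average values below are illustrative only.

```python
import math

# Pearson correlation between two waveforms, used as a response-quality
# score: high correlation between independent sub-averages indicates a
# reproducible evoked response rather than noise.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

sub_avg_1 = [0.1, 0.5, 0.9, 0.4, 0.0]
sub_avg_2 = [0.2, 0.6, 1.0, 0.5, 0.1]  # same shape, small offset
print(round(pearson(sub_avg_1, sub_avg_2), 3))  # → 1.0
```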

  4. The system neurophysiological basis of non-adaptive cognitive control: Inhibition of implicit learning mediated by right prefrontal regions.

    PubMed

    Stock, Ann-Kathrin; Steenbergen, Laura; Colzato, Lorenza; Beste, Christian

    2016-12-01

    Cognitive control is adaptive in the sense that it inhibits automatic processes to optimize goal-directed behavior, but high levels of control may also have detrimental effects in case they suppress beneficial automatisms. Until now, the system neurophysiological mechanisms and functional neuroanatomy underlying these adverse effects of cognitive control have remained elusive. This question was examined by analyzing the automatic exploitation of a beneficial implicit predictive feature under conditions of high versus low cognitive control demands, combining event-related potentials (ERPs) and source localization. It was found that cognitive control prohibits the beneficial automatic exploitation of additional implicit information when task demands are high. Bottom-up perceptual and attentional selection processes (P1 and N1 ERPs) are not modulated by this, but the automatic exploitation of beneficial predictive information in case of low cognitive control demands was associated with larger response-locked P3 amplitudes and stronger activation of the right inferior frontal gyrus (rIFG, BA47). This suggests that the rIFG plays a key role in the detection of relevant task cues, the exploitation of alternative task sets, and the automatic (bottom-up) implementation and reprogramming of action plans. Moreover, N450 amplitudes were larger under high cognitive control demands, which was associated with activity differences in the right medial frontal gyrus (BA9). This most likely reflects a stronger exploitation of explicit task sets which hinders the exploration of the implicit beneficial information in case of high cognitive control demands. Hum Brain Mapp 37:4511-4522, 2016. © 2016 Wiley Periodicals, Inc.

  5. Automatic Association of News Items.

    ERIC Educational Resources Information Center

    Carrick, Christina; Watters, Carolyn

    1997-01-01

    Discussion of electronic news delivery systems and the automatic generation of electronic editions focuses on the association of related items of different media type, specifically photos and stories. The goal is to be able to determine to what degree any two news items refer to the same news event. (Author/LRW)
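
    One plausible way to score to what degree two news items refer to the same event (the abstract does not specify the measure used, so this is only an assumed sketch) is cosine similarity between the items' term-frequency vectors, which works across media types as long as each item has associated text such as a caption.

```python
from collections import Counter
import math

# Cosine similarity of term-frequency vectors: 1.0 for identical word
# distributions, 0.0 for disjoint vocabularies.
def cosine_similarity(text_a, text_b):
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

story = "earthquake strikes coastal town overnight"
photo_caption = "coastal town after the overnight earthquake"
print(cosine_similarity(story, photo_caption) > 0.5)  # → True
```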

  6. Grasping Beer Mugs: On the Dynamics of Alignment Effects Induced by Handled Objects

    ERIC Educational Resources Information Center

    Bub, Daniel N.; Masson, Michael E. J.

    2010-01-01

    We examined automatic spatial alignment effects evoked by handled objects. Using color as the relevant cue carried by an irrelevant handled object aligned or misaligned with the response hand, responses to color were faster when the handle aligned with the response hand. Alignment effects were observed only when the task was to make a reach and…

  7. Developing an Automatic Crawling System for Populating a Digital Repository of Professional Development Resources: A Pilot Study

    ERIC Educational Resources Information Center

    Park, Jung-ran; Yang, Chris; Tosaka, Yuji; Ping, Qing; Mimouni, Houda El

    2016-01-01

    This study is a part of the larger project that develops a sustainable digital repository of professional development resources on emerging data standards and technologies for data organization and management in libraries. Toward that end, the project team developed an automated workflow to crawl for, monitor, and classify relevant web objects…

  8. Automatic Keyword Identification by Artificial Neural Networks Compared to Manual Identification by Users of Filtering Systems.

    ERIC Educational Resources Information Center

    Boger, Zvi; Kuflik, Tsvi; Shoval, Peretz; Shapira, Bracha

    2001-01-01

    Discussion of information filtering (IF) and information retrieval focuses on the use of an artificial neural network (ANN) as an alternative method for both IF and term selection and compares its effectiveness to that of traditional methods. Results show that the ANN relevance prediction out-performs the prediction of an IF system. (Author/LRW)

  9. A new tool for rapid and automatic estimation of earthquake source parameters and generation of seismic bulletins

    NASA Astrophysics Data System (ADS)

    Zollo, Aldo

    2016-04-01

    RISS S.r.l. is a Spin-off company recently born from the initiative of the research group constituting the Seismology Laboratory of the Department of Physics of the University of Naples Federico II. RISS is an innovative start-up, based on the decade-long experience in earthquake monitoring systems and seismic data analysis of its members and has the major goal to transform the most recent innovations of the scientific research into technological products and prototypes. With this aim, RISS has recently started the development of a new software, which is an elegant solution to manage and analyse seismic data and to create automatic earthquake bulletins. The software has been initially developed to manage data recorded at the ISNet network (Irpinia Seismic Network), which is a network of seismic stations deployed in Southern Apennines along the active fault system responsible for the 1980, November 23, MS 6.9 Irpinia earthquake. The software, however, is fully exportable and can be used to manage data from different networks, with any kind of station geometry or network configuration and is able to provide reliable estimates of earthquake source parameters, whichever is the background seismicity level of the area of interest. Here we present the real-time automated procedures and the analyses performed by the software package, which is essentially a chain of different modules, each of them aimed at the automatic computation of a specific source parameter. The P-wave arrival times are first detected on the real-time streaming of data and then the software performs the phase association and earthquake binding. As soon as an event is automatically detected by the binder, the earthquake location coordinates and the origin time are rapidly estimated, using a probabilistic, non-linear, exploration algorithm. Then, the software is able to automatically provide three different magnitude estimates. 
First, the local magnitude (Ml) is computed, using the peak-to-peak amplitude of the equivalent Wood-Anderson displacement recordings. The moment magnitude (Mw) is then estimated from the inversion of displacement spectra. The duration magnitude (Md) is rapidly computed, based on a simple and automatic measurement of the seismic wave coda duration. Starting from the magnitude estimates, other relevant pieces of information are also computed, such as the corner frequency, the seismic moment, the source radius and the seismic energy. Ground-shaking maps are produced on a Google map for peak ground acceleration (PGA), peak ground velocity (PGV) and instrumental intensity (in SHAKEMAP® format), or as a plot of the measured peak ground values. Furthermore, based on a specific decisional scheme, the software automatically discriminates between local earthquakes occurring within the network and regional/teleseismic events occurring outside it. Finally, for the largest events, if a consistent number of P-wave polarity readings are available, the focal mechanism is also computed. For each event, all of the available pieces of information are stored in a local database and the results of the automatic analyses are published on an interactive web page. "The Bulletin" shows a map with the event location and stations, as well as a table listing all the events with their associated parameters. The catalogue fields are the event ID, the origin date and time, latitude, longitude, depth, Ml, Mw, Md, the number of triggered stations, the S-displacement spectra, and shaking maps. Some of these entries also provide additional information, such as the focal mechanism (when available). The picked traces are uploaded into the database, and from the web interface of the Bulletin the traces can be downloaded for more specific analysis. 
This innovative software represents a smart solution, with a friendly and interactive interface, for high-level analysis of seismic data, and it may represent a relevant tool not only for seismologists, but also for non-expert external users who are interested in seismological data. The software is a valid tool for the automatic analysis of background seismicity at different time scales and can be a relevant tool for the monitoring of both natural and induced seismicity.
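
    The local-magnitude step can be illustrated with a short sketch. The Hutton-Boore (1987) calibration for southern California is assumed here for concreteness; the attenuation relation actually used by the software for the Irpinia region may differ.

```python
import math

# Local magnitude Ml from the peak amplitude of the equivalent
# Wood-Anderson displacement trace (in mm) and hypocentral distance
# (in km), using the Hutton-Boore (1987) attenuation terms.
def local_magnitude(peak_amplitude_mm, hypocentral_distance_km):
    r = hypocentral_distance_km
    return (math.log10(peak_amplitude_mm)
            + 1.110 * math.log10(r / 100.0)
            + 0.00189 * (r - 100.0)
            + 3.0)

# A 1 mm Wood-Anderson peak at 100 km corresponds to Ml 3.0 by
# definition of this calibration:
print(local_magnitude(1.0, 100.0))  # → 3.0
```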

  10. Data-driven backward chaining

    NASA Technical Reports Server (NTRS)

    Haley, Paul

    1991-01-01

    The C Language Integrated Production System (CLIPS) cannot effectively perform sound and complete logical inference in most real-world contexts. The problem facing CLIPS is its lack of goal generation. Without automatic goal generation and maintenance, forward chaining can only deduce all instances of a relationship. Backward chaining, which requires goal generation, allows deduction of only that subset of what is logically true which is also relevant to ongoing problem solving. Goal generation can be mimicked in simple cases using forward chaining. However, such mimicry requires manual coding of additional rules which assert an adequate goal representation for every condition in every rule that can have corresponding facts derived by backward chaining. In general, for N rules with an average of M conditions per rule, the number of goal-generation rules required is on the order of N*M. This is clearly intractable from a program maintenance perspective. We describe the support in Eclipse for backward chaining goals, which it automatically asserts as it checks rule conditions. Important characteristics of this extension are that it does not assert goals which cannot match any rule condition, that two equivalent goals are never asserted, and that goals persist as long as, but no longer than, they remain relevant.
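
    The goal-generation behaviour described above can be sketched in miniature, assuming rules are (conditions, conclusion) pairs over simple string facts; Eclipse's real mechanism is integrated with the rule matcher, but two of the stated invariants carry over: a goal is asserted at most once per unmatched condition, and only if some rule could actually conclude it.

```python
# Toy goal generation for backward chaining: scan rule conditions that no
# current fact satisfies and assert a goal for each, provided some rule's
# conclusion could derive it, without ever asserting duplicates.
def generate_goals(rules, facts):
    concludable = {conclusion for _, conclusion in rules}
    goals = []
    for conditions, _ in rules:
        for cond in conditions:
            if cond not in facts and cond in concludable and cond not in goals:
                goals.append(cond)
    return goals

rules = [(("wet-grass",), "rained"),
         (("rained", "cold"), "icy-roads")]
facts = {"wet-grass"}
# "rained" is an unmatched, derivable condition, so it becomes a goal;
# "cold" is unmatched but no rule concludes it, so it does not.
print(generate_goals(rules, facts))  # → ['rained']
```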

  11. WCE video segmentation using textons

    NASA Astrophysics Data System (ADS)

    Gallo, Giovanni; Granata, Eliana

    2010-03-01

    Wireless Capsule Endoscopy (WCE) integrates wireless transmission with image and video technology. It has been used to examine the small intestine non-invasively. Medical specialists look for significant events in the WCE video by direct visual inspection, manually labelling clinically relevant frames in tiring sessions of up to one hour. This limits the usage of WCE. Automatically discriminating digestive organs such as the esophagus, stomach, small intestine and colon would therefore be of great advantage. In this paper we propose to use textons for the automatic discrimination of abrupt changes within a video. In particular, we consider as features, for each frame, hue, saturation, value, high-frequency energy content and the responses to a bank of Gabor filters. The experiments were conducted on ten video segments extracted from WCE videos, in which the significant events had been previously labelled by experts. Results have shown that the proposed method may eliminate up to 70% of the frames from further investigation. The direct analysis of the doctors may hence be concentrated only on eventful frames. A graphical tool showing sudden changes in the texton frequencies for each frame is also proposed as a visual aid to find clinically relevant segments of the video.
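
    The abrupt-change idea can be sketched as follows: each frame is reduced to a feature vector (the numbers below are merely illustrative stand-ins for the hue, saturation, value, energy, and Gabor-response features), and a frame is flagged when its distance to the previous frame's features exceeds a threshold, marking a likely organ boundary.

```python
import math

# Flag frame indices whose feature vector jumps (Euclidean distance)
# from the previous frame by more than `threshold`.
def abrupt_changes(frame_features, threshold):
    flagged = []
    for i in range(1, len(frame_features)):
        prev, cur = frame_features[i - 1], frame_features[i]
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(prev, cur)))
        if dist > threshold:
            flagged.append(i)
    return flagged

# Two stable runs with one sharp transition at frame 2:
features = [(0.2, 0.5), (0.21, 0.52), (0.8, 0.1), (0.79, 0.12)]
print(abrupt_changes(features, 0.3))  # → [2]
```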

  12. Decaying Relevance of Clinical Data Towards Future Decisions in Data-Driven Inpatient Clinical Order Sets

    PubMed Central

    Chen, Jonathan H; Alagappan, Muthuraman; Goldstein, Mary K; Asch, Steven M; Altman, Russ B

    2017-01-01

    Objective Determine how varying longitudinal historical training data can impact prediction of future clinical decisions. Estimate the “decay rate” of clinical data source relevance. Materials and Methods We trained a clinical order recommender system, analogous to Netflix or Amazon’s “Customers who bought A also bought B…” product recommenders, based on a tertiary academic hospital’s structured electronic health record data. We used this system to predict future (2013) admission orders based on different subsets of historical training data (2009 through 2012), relative to existing human-authored order sets. Results Predicting future (2013) inpatient orders is more accurate with models trained on just one month of recent (2012) data than with 12 months of older (2009) data (ROC AUC 0.91 vs. 0.88, precision 27% vs. 22%, recall 52% vs. 43%, all P<10−10). Algorithmically learned models from even the older (2009) data were still more effective than existing human-authored order sets (ROC AUC 0.81, precision 16%, recall 35%). Training with more longitudinal data (2009–2012) was no better than using only the most recent (2012) data, unless applying a decaying weighting scheme with a “half-life” of data relevance of about 4 months. Discussion Clinical practice patterns (automatically) learned from electronic health record data can vary substantially across years. Gold standards for clinical decision support are elusive moving targets, reinforcing the need for automated methods that can adapt to evolving information. Conclusions and Relevance Prioritizing small amounts of recent data is more effective than using larger amounts of older data towards future clinical predictions. PMID:28495350
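
    The decaying weighting scheme with a roughly 4-month half-life can be sketched directly (the function and parameter names are illustrative, not from the paper):

```python
# Exponential decay of training-data relevance: an example's weight
# halves every `half_life_months` of age.
def relevance_weight(age_months, half_life_months=4.0):
    return 0.5 ** (age_months / half_life_months)

# A 4-month-old order pattern counts half as much as a current one,
# and a year-old one an eighth as much:
print(relevance_weight(4))             # → 0.5
print(round(relevance_weight(12), 3))  # → 0.125
```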

  13. Computer-aided technique for automatic determination of the relationship between transglottal pressure change and voice fundamental frequency.

    PubMed

    Deguchi, Shinji; Kawashima, Kazutaka; Washio, Seiichi

    2008-12-01

    The effect of artificially altered transglottal pressures on the voice fundamental frequency (F0) is known to be associated with vocal fold stiffness. Its measurement, though useful as a potential diagnostic tool for noncontact assessment of vocal fold stiffness, often requires manual and painstaking determination of an unstable F0 of voice. Here, we provide a computer-aided technique that enables one to carry out the determination easily and accurately. Human subjects vocalized in accordance with a series of reference sounds from a speaker controlled by a computer. Transglottal pressures were altered by means of a valve embedded in a mouthpiece. Time-varying vocal F0 was extracted, without manual procedures, from a specific range of the voice spectrum determined on the basis of the controlled reference sounds. The validity of the proposed technique was assessed for 11 healthy subjects. Fluctuating voice F0 was tracked automatically during experiments, providing the relationship between transglottal pressure change and F0 on the computer. The proposed technique overcomes the difficulty in automatic determination of the voice F0, which tends to be transient both in normal voice and in some types of pathological voice.
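
    Automatic F0 extraction can be sketched with a simple autocorrelation estimator; the paper's own method instead searches a spectral range fixed by the controlled reference sound, but the goal of tracking F0 without manual steps is the same, so the sketch below should be read as a generic stand-in.

```python
import math

# Estimate F0 by finding the lag (within the human voice range) at which
# the signal's autocorrelation peaks; F0 = sample_rate / best_lag.
def estimate_f0(samples, sample_rate, f_min=75.0, f_max=500.0):
    lag_min = int(sample_rate / f_max)
    lag_max = int(sample_rate / f_min)
    def autocorr(lag):
        return sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
    best_lag = max(range(lag_min, lag_max + 1), key=autocorr)
    return sample_rate / best_lag

# A pure 200 Hz tone sampled at 8 kHz should come out at 200 Hz:
rate = 8000
tone = [math.sin(2 * math.pi * 200 * n / rate) for n in range(800)]
print(round(estimate_f0(tone, rate)))  # → 200
```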

  14. Methods for automatically analyzing humpback song units.

    PubMed

    Rickwood, Peter; Taylor, Andrew

    2008-03-01

    This paper presents mathematical techniques for automatically extracting and analyzing bioacoustic signals. Automatic techniques are described for isolation of target signals from background noise, extraction of features from target signals, and unsupervised classification (clustering) of the target signals based on these features. The only user-provided input, other than raw sound, is an initial set of signal processing and control parameters. Of particular note is that the number of signal categories is determined automatically. The techniques, applied to hydrophone recordings of humpback whales (Megaptera novaeangliae), produce promising initial results, suggesting that they may be of use not only in automated analysis of humpback song but possibly also in other bioacoustic settings where automated analysis is desirable.

  15. Automatic tracking of labeled red blood cells in microchannels.

    PubMed

    Pinho, Diana; Lima, Rui; Pereira, Ana I; Gayubo, Fernando

    2013-09-01

    The current study proposes an automatic method for the segmentation and tracking of red blood cells flowing through a 100-μm glass capillary. The original images were obtained by means of a confocal system and then processed in MATLAB using the Image Processing Toolbox. The measurements obtained with the proposed automatic method were compared with the results determined by a manual tracking method. The comparison was performed using both linear regressions and Bland-Altman analysis. The results have shown a good agreement between the two methods. Therefore, the proposed automatic method is a powerful way to provide rapid and accurate measurements for in vitro blood experiments in microchannels. Copyright © 2012 John Wiley & Sons, Ltd.
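    The Bland-Altman analysis used above to compare automatic and manual tracking reports the mean difference (bias) between the two methods and the conventional 95% limits of agreement (bias ± 1.96·SD of the differences). A minimal sketch with hypothetical displacement values, not data from the study:

    ```python
    import numpy as np

    def bland_altman(a, b):
        """Bias and 95% limits of agreement between two measurement methods."""
        diff = np.asarray(a, float) - np.asarray(b, float)
        bias = diff.mean()
        sd = diff.std(ddof=1)  # sample SD of the paired differences
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    # Hypothetical automatic vs. manual cell-displacement measurements
    auto   = [1.0, 2.0, 3.0, 4.0]
    manual = [1.1, 1.9, 3.2, 3.9]
    bias, (lo, hi) = bland_altman(auto, manual)
    ```

    Agreement is judged by whether the bias is near zero and the limits of agreement are narrow enough for the application, rather than by correlation alone.
    
    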

  16. Method for automatically evaluating a transition from a batch manufacturing technique to a lean manufacturing technique

    DOEpatents

    Ivezic, Nenad; Potok, Thomas E.

    2003-09-30

    A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.

  17. Identifying relevant data for a biological database: handcrafted rules versus machine learning.

    PubMed

    Sehgal, Aditya Kumar; Das, Sanmay; Noto, Keith; Saier, Milton H; Elkan, Charles

    2011-01-01

    With well over 1,000 specialized biological databases in use today, the task of automatically identifying novel, relevant data for such databases is increasingly important. In this paper, we describe practical machine learning approaches for identifying MEDLINE documents and Swiss-Prot/TrEMBL protein records, for incorporation into a specialized biological database of transport proteins named TCDB. We show that both learning approaches outperform rules created by hand by a human expert. As one of the first case studies involving two different approaches to updating a deployed database, both the methods compared and the results will be of interest to curators of many specialized databases.

  18. Computer-Assisted Search Of Large Textual Data Bases

    NASA Technical Reports Server (NTRS)

    Driscoll, James R.

    1995-01-01

    "QA" denotes high-speed computer system for searching diverse collections of documents including (but not limited to) technical reference manuals, legal documents, medical documents, news releases, and patents. Incorporates previously available and emerging information-retrieval technology to help user intelligently and rapidly locate information found in large textual data bases. Technology includes provision for inquiries in natural language; statistical ranking of retrieved information; artificial-intelligence implementation of semantics, in which "surface level" knowledge found in text used to improve ranking of retrieved information; and relevance feedback, in which user's judgements of relevance of some retrieved documents used automatically to modify search for further information.

  19. SU-C-202-03: A Tool for Automatic Calculation of Delivered Dose Variation for Off-Line Adaptive Therapy Using Cone Beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, B; Lee, S; Chen, S

    Purpose: Monitoring the delivered dose is an important task for adaptive radiotherapy (ART) and for determining when to re-plan. A software tool which enables automatic delivered dose calculation using cone-beam CT (CBCT) has been developed and tested. Methods: The tool consists of four components: a CBCT Collecting Module (CCM), a Plan Registration Module (PRM), a Dose Calculation Module (DCM), and an Evaluation and Action Module (EAM). The CCM is triggered periodically (e.g., every day at 1:00 AM) to search for newly acquired CBCTs of patients of interest and then export the DICOM files of the images and related registrations defined in ARIA, followed by triggering the PRM. The PRM imports the DICOM images and registrations and links the CBCTs to the related treatment plan of the patient in the planning system (RayStation V4.5, RaySearch, Stockholm, Sweden). A pre-determined CT-to-density table is automatically generated for dose calculation. The current version of the DCM uses a rigid registration which regards the treatment isocenter of the CBCT as the isocenter of the treatment plan, and then starts the dose calculation automatically. The EAM evaluates the plan using pre-determined plan evaluation parameters: PTV dose-volume metrics and critical organ doses. The tool has been tested on 10 patients. Results: Automatic plans are generated and saved in the order of the treatment dates in the Adaptive Planning module of the RayStation planning system, without any manual intervention. Once the CTV dose deviates by more than 3%, both email and page alerts are sent to the physician and the physicist of the patient so that the case can be examined closely. Conclusion: The tool is capable of performing automatic dose tracking and of alerting clinicians when an action is needed. It is clinically useful for off-line adaptive therapy to catch any gross error. A practical way of determining alarm levels for OARs is under development.

  20. Vadose zone monitoring strategies to control water flux dynamics and changes in soil hydraulic properties.

    NASA Astrophysics Data System (ADS)

    Valdes-Abellan, Javier; Jiménez-Martínez, Joaquin; Candela, Lucila

    2013-04-01

    For monitoring the vadose zone, different strategies can be chosen depending on the objectives and scale of observation. Non-conventional water use might produce impacts in the porous media of the vadose zone that could lead, among other effects, to changes in soil hydraulic properties. Controlling these possible effects requires an accurate monitoring strategy that tracks the volumetric water content, θ, and soil pressure head, h, along the studied profile. According to the available literature, different monitoring systems have been deployed independently; however, comparative studies between different techniques have received less attention. An experimental plot of 9 × 5 m² was set up with automatic and non-automatic sensors to monitor θ and h down to 1.5 m depth. The non-automatic system consisted of ten Jet Fill tensiometers at 30, 45, 60, 90 and 120 cm (Soil Moisture®) and a polycarbonate access tube of 44 mm (i.d.) for soil moisture measurements with a TRIME FM TDR portable probe (IMKO®). Vertical installation was carefully performed; measurements with this system were manual, twice a week for θ and three times per week for h. The automatic system was composed of five 5TE sensors (Decagon Devices®) installed at 20, 40, 60, 90 and 120 cm for θ measurements and one MPS1 sensor (Decagon Devices®) at 60 cm depth for h. Installation took place laterally, in a 40-50 cm long hole bored into the side of an excavated trench. All automatic sensors recorded hourly, with the data stored in a data-logger. Boundary conditions were controlled with a volume-meter and a meteorological station. ET was modelled with the Penman-Monteith equation. Soil characterization included bulk density, gravimetric water content, grain size distribution, saturated hydraulic conductivity and soil water retention curves, determined following laboratory standards. Soil mineralogy was determined by X-ray diffractometry. 
Unsaturated soil hydraulic parameters were model-fitted with the SWRC-fit code and with ROSETTA based on soil textural fractions. Simulation of water flow using the automatic and non-automatic data was carried out independently with HYDRUS-1D. Good agreement between the collected automatic and non-automatic data and the modelled results was observed. The general trend was captured, except for outlier values, as expected. Slight differences were found between hydraulic properties obtained from laboratory determinations and those obtained from inverse modelling of the two approaches. Differences of up to 14% in flux through the lower boundary were detected between the two strategies. According to the results, automatic sensors have higher resolution and are therefore more appropriate for detecting subtle changes in soil hydraulic properties. Nevertheless, if the aim of the research is to monitor the general trend of water dynamics, no significant differences were observed between the two systems.
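    Tools such as SWRC-fit and ROSETTA fit parametric soil water retention models; the van Genuchten (1980) curve is the standard form both tools parameterize. A minimal sketch (the parameter values in the example are illustrative loam-like numbers, not values from this study):

    ```python
    def van_genuchten(h, theta_r, theta_s, alpha, n):
        """Volumetric water content theta at suction head h (h >= 0).

        theta_r/theta_s: residual/saturated water content; alpha (1/length)
        and n (> 1): shape parameters, with m = 1 - 1/n (Mualem constraint).
        """
        m = 1.0 - 1.0 / n
        return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

    # Illustrative parameters: theta_r=0.08, theta_s=0.43, alpha=0.04 /cm, n=1.6
    theta_sat = van_genuchten(0.0, 0.08, 0.43, 0.04, 1.6)
    ```

    At zero suction the curve returns the saturated water content, and it decreases monotonically toward the residual content as suction increases, which is the behavior fitted against the laboratory retention data mentioned above.
    
    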

  1. Automating document classification for the Immune Epitope Database

    PubMed Central

    Wang, Peng; Morgan, Alexander A; Zhang, Qing; Sette, Alessandro; Peters, Bjoern

    2007-01-01

    Background The Immune Epitope Database contains information on immune epitopes curated manually from the scientific literature. Like similar projects in other knowledge domains, significant effort is spent on identifying which articles are relevant for this purpose. Results We here report our experience in automating this process using Naïve Bayes classifiers trained on 20,910 abstracts classified by domain experts. Improvements on the basic classifier performance were made by (a) utilizing information stored in PubMed beyond the abstract itself, (b) applying standard feature selection criteria, and (c) extracting domain-specific feature patterns that, e.g., identify peptide sequences. We have integrated the classifier into the curation process, determining whether abstracts are clearly relevant, clearly irrelevant, or whether no certain classification can be made, in which case the abstracts are classified manually. Testing this classification scheme on an independent dataset, we achieve 95% sensitivity and specificity in the 51.1% of abstracts that were automatically classified. Conclusion By implementing text classification, we have sped up the reference selection process without sacrificing the sensitivity or specificity of the human expert classification. This study provides both practical recommendations for users of text classification tools and a large dataset which can serve as a benchmark for tool developers. PMID:17655769
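    The three-way routing described above (clearly relevant, clearly irrelevant, or manual review) can be sketched with a minimal multinomial Naive Bayes classifier and two confidence thresholds. The toy corpus, Laplace smoothing, and threshold values below are illustrative assumptions, not the paper's setup:

    ```python
    import math
    from collections import Counter

    def train(docs, labels):
        """Multinomial Naive Bayes with Laplace smoothing over whitespace tokens."""
        word_counts = {c: Counter() for c in set(labels)}
        class_counts = Counter(labels)
        for doc, c in zip(docs, labels):
            word_counts[c].update(doc.lower().split())
        vocab = {w for cnt in word_counts.values() for w in cnt}
        return word_counts, class_counts, vocab

    def p_relevant(doc, word_counts, class_counts, vocab, pos="relevant"):
        """Posterior probability that doc belongs to the positive class."""
        total = sum(class_counts.values())
        logp = {}
        for c, cnt in word_counts.items():
            denom = sum(cnt.values()) + len(vocab)
            s = math.log(class_counts[c] / total)
            for w in doc.lower().split():
                s += math.log((cnt[w] + 1) / denom)
            logp[c] = s
        z = max(logp.values())                      # stabilize before exponentiating
        probs = {c: math.exp(s - z) for c, s in logp.items()}
        return probs[pos] / sum(probs.values())

    def triage(p, hi=0.95, lo=0.05):
        """Auto-accept, auto-reject, or route the abstract to a human curator."""
        return "relevant" if p >= hi else "irrelevant" if p <= lo else "manual"

    docs = ["epitope peptide binding assay",
            "stock market news report",
            "peptide epitope immune response",
            "market economy news"]
    labels = ["relevant", "irrelevant", "relevant", "irrelevant"]
    model = train(docs, labels)
    p = p_relevant("epitope peptide assay", *model)
    decision = triage(p)
    ```

    On this toy corpus the query scores highly relevant but below the auto-accept threshold, so it is routed to manual review, which is exactly the fallback behavior the abstract describes.
    
    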

  2. Open Dataset for the Automatic Recognition of Sedentary Behaviors.

    PubMed

    Possos, William; Cruz, Robinson; Cerón, Jesús D; López, Diego M; Sierra-Torres, Carlos H

    2017-01-01

    Sedentarism is associated with the development of noncommunicable diseases (NCDs) such as cardiovascular disease (CVD), type 2 diabetes, and cancer. Therefore, the identification of specific sedentary behaviors (TV viewing, sitting at work, driving, relaxing, etc.) is especially relevant for planning personalized prevention programs. The aim was to build and evaluate a public dataset for the automatic recognition (classification) of sedentary behaviors. The dataset included data from 30 subjects, who performed 23 sedentary behaviors while wearing a commercial wearable on the wrist, a smartphone on the hip, and another on the thigh. Bluetooth Low Energy (BLE) beacons were used to improve the automatic classification of the different sedentary behaviors. The study also compared six well-known data mining classification techniques to identify the most precise method for solving the classification problem of the 23 defined behaviors. The best classification accuracy was obtained using the Random Forest algorithm with data collected from the phone on the hip. Furthermore, the use of beacons as a reference for obtaining the symbolic location of the individual improved the precision of the classification.

  3. Automatic detection and measurement of viral replication compartments by ellipse adjustment

    PubMed Central

    Garcés, Yasel; Guerrero, Adán; Hidalgo, Paloma; López, Raul Eduardo; Wood, Christopher D.; Gonzalez, Ramón A.; Rendón-Mancha, Juan Manuel

    2016-01-01

    Viruses employ a variety of strategies to hijack cellular activities through the orchestrated recruitment of macromolecules to specific virus-induced cellular micro-environments. Adenoviruses (Ad) and other DNA viruses induce extensive reorganization of the cell nucleus and formation of nuclear Replication Compartments (RCs), where the viral genome is replicated and expressed. In this work an automatic algorithm designed for detection and segmentation of RCs using ellipses is presented. Unlike algorithms available in the literature, this approach is deterministic, automatic, and can adjust multiple RCs using ellipses. The proposed algorithm is non-iterative, computationally efficient and invariant to affine transformations. The method was validated on both synthetic images and more than 400 real images of Ad-infected cells at various timepoints of the viral replication cycle, obtaining relevant information about the biogenesis of adenoviral RCs. As proof of concept, the algorithm was then used to quantitatively compare RCs in cells infected with the adenovirus wild type or an adenovirus mutant that is null for expression of a viral protein that is known to affect activities associated with RCs, resulting in deficient viral progeny production. PMID:27819325

  4. Behavioral finance and retirement plan contributions: how participants behave, and prescriptive solutions.

    PubMed

    DiCenzo, Jodi

    2007-01-01

    Behavioral research has made important, relevant contributions to retirement saving and investing. This work has cast a new light on participant behavior and its underpinnings: By and large, individuals are inert--with good intentions, poor follow-through, and bounded rationality. Loss aversion and decision-making biases often lead to unfortunate outcomes, including a poorly funded retirement. Further, behavioral economists have demonstrated that education and communication programs alone may not be effective in changing behavior. Instead, with their behavioral insights, they have offered new retirement plan design alternatives and empirically tested their efficacy in overcoming identified suboptimal behavior. These efforts are helping to pave a path of least resistance that should lead to greater retirement security. The Pension Protection Act of 2006 appears to support these alternatives by providing incentives to plan sponsors that implement automatic features such as automatic enrollment and deferral rate escalation. It also allows plan sponsors to choose more aggressive investment defaults. Perhaps implicit in this support is some advice to sponsors to accept participant behavior and to think more about changing their own by embracing automatic plan features.

  5. Automatic detection and measurement of viral replication compartments by ellipse adjustment

    NASA Astrophysics Data System (ADS)

    Garcés, Yasel; Guerrero, Adán; Hidalgo, Paloma; López, Raul Eduardo; Wood, Christopher D.; Gonzalez, Ramón A.; Rendón-Mancha, Juan Manuel

    2016-11-01

    Viruses employ a variety of strategies to hijack cellular activities through the orchestrated recruitment of macromolecules to specific virus-induced cellular micro-environments. Adenoviruses (Ad) and other DNA viruses induce extensive reorganization of the cell nucleus and formation of nuclear Replication Compartments (RCs), where the viral genome is replicated and expressed. In this work an automatic algorithm designed for detection and segmentation of RCs using ellipses is presented. Unlike algorithms available in the literature, this approach is deterministic, automatic, and can adjust multiple RCs using ellipses. The proposed algorithm is non-iterative, computationally efficient and invariant to affine transformations. The method was validated on both synthetic images and more than 400 real images of Ad-infected cells at various timepoints of the viral replication cycle, obtaining relevant information about the biogenesis of adenoviral RCs. As proof of concept, the algorithm was then used to quantitatively compare RCs in cells infected with the adenovirus wild type or an adenovirus mutant that is null for expression of a viral protein that is known to affect activities associated with RCs, resulting in deficient viral progeny production.

  6. Processing of Crawled Urban Imagery for Building Use Classification

    NASA Astrophysics Data System (ADS)

    Tutzauer, P.; Haala, N.

    2017-05-01

    Recent years have shown a shift from purely geometric 3D city models to data with semantics. This is induced by new applications (e.g. Virtual/Augmented Reality) and also by requirements of concepts like Smart Cities. However, essential urban semantic data such as building use categories are often not available. We present a first step towards bridging this gap by proposing a pipeline that uses crawled urban imagery linked with ground-truth cadastral data as input for automatic building use classification. We aim to extract this city-relevant semantic information automatically from Street View (SV) imagery. Convolutional Neural Networks (CNNs) have proved extremely successful for image interpretation but require a huge amount of training data. The main contribution of the paper is the automatic provision of such training datasets by linking semantic information, as already available from databases provided by national mapping agencies or city administrations, to the corresponding façade images extracted from SV. Finally, we present first investigations with a CNN and an alternative classifier as a proof of concept.

  7. Phantom Study Investigating the Accuracy of Manual and Automatic Image Fusion with the GE Logiq E9: Implications for use in Percutaneous Liver Interventions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burgmans, Mark Christiaan, E-mail: m.c.burgmans@lumc.nl; Harder, J. Michiel den, E-mail: chiel.den.harder@gmail.com; Meershoek, Philippa, E-mail: P.Meershoek@lumc.nl

    Purpose: To determine the accuracy of automatic and manual co-registration methods for image fusion of three-dimensional computed tomography (CT) with real-time ultrasonography (US) for image-guided liver interventions. Materials and Methods: CT images of a skills phantom with liver lesions were acquired and co-registered to US using GE Logiq E9 navigation software. Manual co-registration was compared to automatic and semiautomatic co-registration using an active tracker. Also, manual point registration was compared to plane registration with and without an additional translation point. Finally, comparison was made between manual and automatic selection of reference points. In each experiment, accuracy of the co-registration method was determined by measurement of the residual displacement in phantom lesions by two independent observers. Results: Mean displacements for a superficial and a deep liver lesion were comparable after manual and semiautomatic co-registration: 2.4 and 2.0 mm versus 2.0 and 2.5 mm, respectively. Both methods were significantly better than automatic co-registration: 5.9 and 5.2 mm residual displacement (p < 0.001; p < 0.01). The accuracy of manual point registration was higher than that of plane registration, the latter being heavily dependent on accurate matching of axial CT and US images by the operator. Automatic reference point selection resulted in significantly lower registration accuracy compared to manual point selection despite lower root-mean-square deviation (RMSD) values. Conclusion: The accuracy of manual and semiautomatic co-registration is better than that of automatic co-registration. For manual co-registration using a plane, choosing the correct plane orientation is an essential first step in the registration process. Automatic reference point selection based on RMSD values is error-prone.
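    The RMSD criterion in the conclusion above measures only how well the chosen reference points fit each other, not the displacement at the target lesion, which is one way a low RMSD can coexist with poor registration accuracy. A minimal sketch of RMSD over matched 3-D point pairs (pure illustration, not the navigation software's implementation):

    ```python
    import math

    def rmsd(points_a, points_b):
        """Root-mean-square deviation between matched 3-D point sets."""
        assert len(points_a) == len(points_b) and points_a
        sq = sum(sum((u - v) ** 2 for u, v in zip(p, q))
                 for p, q in zip(points_a, points_b))
        return math.sqrt(sq / len(points_a))
    ```

    A registration can drive this value to near zero at the reference points while a lesion far from those points remains displaced, which is consistent with the study's finding that RMSD-based automatic point selection is error-prone.
    
    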

  8. [Schizophrenia and semantic priming effects].

    PubMed

    Lecardeur, L; Giffard, B; Eustache, F; Dollfus, S

    2006-01-01

    This article is a review of studies using the semantic priming paradigm to assess the functioning of semantic memory in schizophrenic patients. Semantic priming describes the phenomenon whereby a string of letters (the target) is recognized as a word more quickly (lexical decision task) when the subject is first presented with a semantically related word (the prime). This semantic priming is linked to both automatic and controlled processes depending on experimental conditions (stimulus onset asynchrony (SOA), percentage of related words, and explicit memory instructions). The automatic process, observed with short SOAs, a low related-word percentage, and instructions asking only to process the target, can be linked to "automatic spreading activation" through the semantic network. Controlled processes involve "semantic matching" (the number of related and unrelated pairs influences the subject's decision) and "expectancy" (the prime leads the subject to generate an expectancy set of potential targets for the prime). These processes can be observed whatever the SOA for the former and with long SOAs for the latter, but both only with a high related-word percentage and explicit memory instructions. Studies evaluating semantic priming effects in schizophrenia show conflicting results: schizophrenic patients can present hyperpriming (a semantic priming effect larger in patients than in controls), hypopriming (a semantic priming effect smaller in patients than in controls), or semantic priming effects equal to those of control subjects. These results could be associated with a global impairment of controlled processes in schizophrenia, essentially a dysfunction of the semantic matching process. On the other hand, the efficiency of the automatic spreading activation process remains controversial. 
These discrepancies could be linked to the different experimental conditions used (duration of SOA, proportion of related pairs, and instructions), which influence the degree of involvement of controlled processes and therefore prevent a real assessment of their functioning. In addition, manipulations of the relation between prime and target (semantic distance, type of semantic relation, and strength of semantic relation) seem to influence reaction times. However, the frequently used prime-target relation (mediated priming) may not be the most relevant for understanding how activation spreads through the semantic network in patients with schizophrenia. Finally, patients with formal thought disorders present particularly high priming effects relative to controls. These abnormal semantic priming effects could reflect a dysfunction of the automatic spreading activation process and consequently an exaggerated diffusion of activation in the semantic network. In the future, the inclusion of different groups of schizophrenic subjects could allow us to determine whether semantic memory disorders are pathognomonic or specific to a particular subgroup of patients with schizophrenia.

  9. Drinking behavior in nursery pigs: Determining the accuracy between an automatic water meter versus human observers

    USDA-ARS?s Scientific Manuscript database

    Assimilating accurate behavioral events over a long period can be labor intensive and relatively expensive. If an automatic device could accurately record the duration and frequency for a given behavioral event, it would be a valuable alternative to the traditional use of human observers for behavio...

  10. Cognitive Arithmetic: Evidence for the Development of Automaticity.

    ERIC Educational Resources Information Center

    LeFevre, Jo-Anne; Bisanz, Jeffrey

    To determine whether children's knowledge of arithmetic facts becomes increasingly "automatic" with age, 7-year-olds, 11-year-olds, and adults were given a number-matching task for which mental arithmetic should have been irrelevant. Specifically, students were required to verify the presence of a probe number in a previously presented pair (e.g.,…

  11. The Influence of Inattention on Rapid Automatized Naming and Reading Skills

    ERIC Educational Resources Information Center

    Pham, Andy V.

    2010-01-01

    The purpose of this study is to determine how behavioral symptoms of inattention predict rapid automatized naming (RAN) performance and reading skills in typically developing children. Participants included 104 third- and fourth-grade children from different elementary schools in mid-Michigan. RAN performance was assessed using the four Rapid…

  12. Automatic Coding of Dialogue Acts in Collaboration Protocols

    ERIC Educational Resources Information Center

    Erkens, Gijsbert; Janssen, Jeroen

    2008-01-01

    Although protocol analysis can be an important tool for researchers to investigate the process of collaboration and communication, the use of this method of analysis can be time consuming. Hence, an automatic coding procedure for coding dialogue acts was developed. This procedure helps to determine the communicative function of messages in online…

  13. Automated Robot Movement in the Mapped Area Using Fuzzy Logic for Wheel Chair Application

    NASA Astrophysics Data System (ADS)

    Siregar, B.; Efendi, S.; Ramadhana, H.; Andayani, U.; Fahmi, F.

    2018-03-01

    Difficulties in moving make it hard for people with disabilities to live independently, and a supporting device is needed to move from place to place. We therefore propose a solution that can help people with disabilities move from one room to another automatically. This study aims to create a wheelchair prototype in the form of a wheeled robot as a platform for studying automatic mobilization. A fuzzy logic algorithm was used to determine the direction of motion based on the initial position, with ultrasonic sensor readings for obstacle avoidance, infrared sensor readings acting as a black-line reader so that the wheeled robot moves smoothly, and a smartphone as the mobile controller. As a result, a smartphone running the Android operating system can control the robot via Bluetooth, here from a maximum distance of 15 meters. The proposed algorithm worked stably for automatic motion determination based on the initial position and was able to automate wheelchair movement from one room to another.
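    A fuzzy controller of the kind described above typically fuzzifies a sensor error with membership functions, applies rules, and defuzzifies to a crisp command. A minimal sketch for line-following steering; the triangular breakpoints and rule outputs are illustrative assumptions, not the paper's rule base:

    ```python
    def tri(x, a, b, c):
        """Triangular membership function rising on [a, b], falling on [b, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def steering(error):
        """Fuzzy steering command from line-position error in [-1, 1].

        Rules: line to the left -> steer left (-1); centered -> straight (0);
        line to the right -> steer right (+1). Defuzzified by weighted average.
        """
        left   = tri(error, -1.5, -1.0, 0.0)
        center = tri(error, -1.0,  0.0, 1.0)
        right  = tri(error,  0.0,  1.0, 1.5)
        num = -1.0 * left + 0.0 * center + 1.0 * right
        den = left + center + right
        return num / den if den else 0.0
    ```

    Overlapping memberships make the output vary smoothly with the error, which is what gives fuzzy control its characteristic smooth motion compared with bang-bang line following.
    
    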

  14. Automatic brain tumor segmentation with a fast Mumford-Shah algorithm

    NASA Astrophysics Data System (ADS)

    Müller, Sabine; Weickert, Joachim; Graf, Norbert

    2016-03-01

    We propose a fully-automatic method for brain tumor segmentation that does not require any training phase. Our approach is based on a sequence of segmentations using the Mumford-Shah cartoon model with varying parameters. In order to come up with a very fast implementation, we extend the recent primal-dual algorithm of Strekalovskiy et al. (2014) from the 2D to the medically relevant 3D setting. Moreover, we suggest a new confidence refinement and show that it can increase the precision of our segmentations substantially. Our method is evaluated on 188 data sets with high-grade gliomas and 25 with low-grade gliomas from the BraTS14 database. Within a computation time of only three minutes, we achieve Dice scores that are comparable to state-of-the-art methods.

  15. Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video

    NASA Astrophysics Data System (ADS)

    Yeo, Boon-Lock; Liu, Bede

    1996-03-01

    Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.

  16. SAVLOC, computer program for automatic control and analysis of X-ray fluorescence experiments

    NASA Technical Reports Server (NTRS)

    Leonard, R. F.

    1977-01-01

    A program for a PDP-15 computer is presented which provides for control and analysis of trace element determinations by using X-ray fluorescence. The program simultaneously handles data accumulation for one sample and analysis of data from previous samples. Data accumulation consists of sample changing, timing, and data storage. Analysis requires the locating of peaks in X-ray spectra, determination of intensities of peaks, identification of origins of peaks, and determination of the areal density of the element responsible for each peak. The program may be run in either a manual (supervised) mode or an automatic (unsupervised) mode.

  17. Controlled versus automatic processes: which is dominant to safety? The moderating effect of inhibitory control.

    PubMed

    Xu, Yaoshan; Li, Yongjuan; Ding, Weidong; Lu, Fan

    2014-01-01

    This study explores the precursors of employees' safety behaviors based on a dual-process model, which suggests that human behaviors are determined by both controlled and automatic cognitive processes. Employees' responses to a self-reported survey on safety attitudes capture their controlled cognitive process, while the automatic association concerning safety measured by an Implicit Association Test (IAT) reflects employees' automatic cognitive processes about safety. In addition, this study investigates the moderating effects of inhibition on the relationship between self-reported safety attitude and safety behavior, and that between automatic associations towards safety and safety behavior. The results suggest significant main effects of self-reported safety attitude and automatic association on safety behaviors. Further, the interaction between self-reported safety attitude and inhibition and that between automatic association and inhibition each predict unique variances in safety behavior. Specifically, the safety behaviors of employees with lower level of inhibitory control are influenced more by automatic association, whereas those of employees with higher level of inhibitory control are guided more by self-reported safety attitudes. These results suggest that safety behavior is the joint outcome of both controlled and automatic cognitive processes, and the relative importance of these cognitive processes depends on employees' individual differences in inhibitory control. The implications of these findings for theoretical and practical issues are discussed at the end.
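    The moderation effect described above (inhibitory control strengthening the attitude-behavior link while weakening the association-behavior link) is conventionally tested with interaction terms in a regression. A sketch on synthetic data; the generating model and all coefficient values are illustrative assumptions, not the study's estimates:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    attitude   = rng.normal(size=n)   # self-reported safety attitude (standardized)
    iat        = rng.normal(size=n)   # automatic association (IAT score)
    inhibition = rng.normal(size=n)   # inhibitory control

    # Hypothetical generating model: inhibition strengthens the attitude effect
    # and weakens the automatic-association effect on safety behavior.
    y = (0.2 * attitude + 0.3 * iat
         + 0.25 * attitude * inhibition
         - 0.25 * iat * inhibition
         + rng.normal(scale=0.1, size=n))

    # Moderated regression: main effects plus the two interaction terms.
    X = np.column_stack([np.ones(n), attitude, iat, inhibition,
                         attitude * inhibition, iat * inhibition])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ```

    A positive attitude-by-inhibition coefficient and a negative IAT-by-inhibition coefficient would correspond to the pattern the study reports: controlled attitudes dominate at high inhibitory control, automatic associations at low inhibitory control.
    
    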

  18. Controlled versus Automatic Processes: Which Is Dominant to Safety? The Moderating Effect of Inhibitory Control

    PubMed Central

    Xu, Yaoshan; Li, Yongjuan; Ding, Weidong; Lu, Fan

    2014-01-01

    This study explores the precursors of employees' safety behaviors based on a dual-process model, which suggests that human behaviors are determined by both controlled and automatic cognitive processes. Employees' responses to a self-reported survey on safety attitudes capture their controlled cognitive process, while the automatic association concerning safety measured by an Implicit Association Test (IAT) reflects employees' automatic cognitive processes about safety. In addition, this study investigates the moderating effects of inhibition on the relationship between self-reported safety attitude and safety behavior, and that between automatic associations towards safety and safety behavior. The results suggest significant main effects of self-reported safety attitude and automatic association on safety behaviors. Further, the interaction between self-reported safety attitude and inhibition and that between automatic association and inhibition each predict unique variances in safety behavior. Specifically, the safety behaviors of employees with lower level of inhibitory control are influenced more by automatic association, whereas those of employees with higher level of inhibitory control are guided more by self-reported safety attitudes. These results suggest that safety behavior is the joint outcome of both controlled and automatic cognitive processes, and the relative importance of these cognitive processes depends on employees' individual differences in inhibitory control. The implications of these findings for theoretical and practical issues are discussed at the end. PMID:24520338

  19. Clothes Dryer Automatic Termination Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    TeGrotenhuis, Ward E.

    Volume 2: Improved Sensor and Control Designs. Many residential clothes dryers on the market today provide automatic cycles that are intended to stop when the clothes are dry, as determined by the final remaining moisture content (RMC). However, testing of automatic termination cycles has shown that many dryers are susceptible to over-drying of loads, leading to excess energy consumption. In particular, tests performed using the DOE Test Procedure in Appendix D2 of 10 CFR 430 subpart B have shown that as much as 62% of the energy used in a cycle may be from over-drying. Volume 1 of this report shows an average of 20% excess energy from over-drying when running automatic cycles with various load compositions and dryer settings. Consequently, improving automatic termination sensors and algorithms has the potential for substantial energy savings in the U.S.
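
    A minimal sketch of why termination accuracy matters energetically: if the controller's RMC target (or its sensed estimate) is too low, the heater keeps running after the clothes are dry. The moisture trace, heater power, and time step below are made-up values, not test-procedure data.

```python
def cycle_energy(rmc_readings, target_rmc, power_kw, step_min=1.0):
    """Energy (kWh) consumed until the controller first sees RMC <= target."""
    energy = 0.0
    for rmc in rmc_readings:
        if rmc <= target_rmc:
            break  # automatic termination fires here
        energy += power_kw * step_min / 60.0
    return energy

# Load dries from 60% to 2% RMC over the cycle (one reading per minute).
trace = [60, 45, 32, 22, 14, 9, 5, 3, 2, 2, 2, 2]
well_timed = cycle_energy(trace, target_rmc=5, power_kw=3.0)   # 0.3 kWh
over_dried = cycle_energy(trace, target_rmc=2, power_kw=3.0)   # 0.4 kWh
```

    With these toy numbers, the over-drying cycle uses a third more energy than the well-timed one, the same kind of excess the report quantifies.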

  20. Analysis of regional rainfall-runoff parameters for the Lake Michigan Diversion hydrological modeling

    USGS Publications Warehouse

    Soong, David T.; Over, Thomas M.

    2015-01-01

    Recalibration of the HSPF parameters to the updated inputs and land covers was completed on two representative watershed models selected from the nine by using a manual method (HSPEXP) and an automatic method (PEST). The objective of the recalibration was to develop a regional parameter set that improves the accuracy in runoff volume prediction for the nine study watersheds. Knowledge about flow and watershed characteristics plays a vital role for validating the calibration in both manual and automatic methods. The best performing parameter set was determined by the automatic calibration method on a two-watershed model. Applying this newly determined parameter set to the nine watersheds for runoff volume simulation resulted in “very good” ratings in five watersheds, an improvement as compared to “very good” ratings achieved for three watersheds by the North Branch parameter set.

  1. Underspecification-Based Grammatical Feedback Generation Tailored to the Learner's Current Acquisition Level in an e-Learning System for German as Second Language

    ERIC Educational Resources Information Center

    Harbusch, Karin; Cameran, Christel-Joy; Härtel, Johannes

    2014-01-01

    We present a new feedback strategy implemented in a natural language generation-based e-learning system for German as a second language (L2). Although the system recognizes a large proportion of the grammar errors in learner-produced written sentences, its automatically generated feedback only addresses errors against rules that are relevant at…

  2. Paternalism v. autonomy - are we barking up the wrong tree?

    PubMed

    Lepping, Peter; Palmstierna, Tom; Raveesh, Bevinahalli N

    2016-08-01

    We explore whether we can reduce paternalism by increasing patient autonomy. We argue that autonomy should not have any automatic priority over other ethical values. Thus, balancing autonomy v. other ethical pillars and finding the optimal balance between the patient's wishes and those of other relevant stakeholders such as the patient's family has to be dynamic over time. © The Royal College of Psychiatrists 2016.

  3. The Distinct Role of the Amygdala, Superior Colliculus and Pulvinar in Processing of Central and Peripheral Snakes

    PubMed Central

    Almeida, Inês; Soares, Sandra C.; Castelo-Branco, Miguel

    2015-01-01

    Introduction: Visual processing of ecologically relevant stimuli involves a central bias for stimuli demanding detailed processing (e.g., faces), whereas peripheral object processing is based on coarse identification. Fast detection of animal shapes holding a significant phylogenetic value, such as snakes, may benefit from peripheral vision. The amygdala, together with the pulvinar and the superior colliculus, is implicated in an ongoing debate regarding their role in automatic and deliberate spatial processing of threat signals. Methods: Here we tested twenty healthy participants in an fMRI task, and investigated the role of spatial demands (the main effect of central vs. peripheral vision) in the processing of fear-relevant ecological features. We controlled for stimulus dependence using true or false snakes (snake shapes or snake faces) and for task constraints (implicit or explicit). The main idea justifying this double task is that the amygdala and superior colliculus are involved in both automatic and controlled processes. Moreover, the explicit/implicit instruction in the task with respect to emotion is not necessarily equivalent to explicit vs. implicit in the sense of endogenous vs. exogenous attention, or controlled vs. automatic processes. Results: We found that stimulus-driven processing led to increased amygdala responses specifically to true snake shapes presented in the centre or in the peripheral left hemifield (right hemisphere). Importantly, the superior colliculus showed significantly biased and explicit central responses to snake-related stimuli. Moreover, the pulvinar, which also contains foveal representations, also showed strong central responses, extending the results of a recent single-cell pulvinar study in monkeys. 
Similar hemispheric specialization was found across structures: increased amygdala responses occurred to true snake shapes presented to the right hemisphere, with this pattern being closely followed by the superior colliculus and the pulvinar. Conclusion: These results show that subcortical structures containing foveal representations, such as the amygdala, pulvinar and superior colliculus, play distinct roles in the central and peripheral processing of snake shapes. Our findings suggest multiple phylogenetic fingerprints in the responses of subcortical structures to fear-relevant stimuli. PMID:26075614

  4. Cognitive regulation of smoking behavior within a cigarette: Automatic and nonautomatic processes.

    PubMed

    Motschman, Courtney A; Tiffany, Stephen T

    2016-06-01

    There has been limited research on cognitive processes governing smoking behavior in individuals who are tobacco dependent. In a replication (Baxter & Hinson, 2001) and extension, this study examined the theory (Tiffany, 1990) that drug use may be controlled by automatic processes that develop over repeated use. Heavy and occasional cigarette smokers completed a button-press reaction time (RT) task while concurrently smoking a cigarette, pretending to smoke a lit cigarette, or not smoking. Slowed RT during the button-press task indexed the cognitive disruption associated with nonautomatic control of behavior. Occasional smokers' RTs were slowed when smoking or pretending to smoke compared with when not smoking. Heavy smokers' RTs were slowed when pretending to smoke versus not smoking; however, their RTs were similarly fast when smoking compared with not smoking. The results indicated that smoking behavior was more highly regulated by controlled, nonautomatic processes among occasional smokers and by automatic processes among heavy smokers. Patterns of RT across the interpuff interval indicated that occasional smokers were significantly slowed in anticipation of and immediately after puffing onset, whereas heavy smokers were only slowed significantly after puffing onset. These findings suggest that the entirety of the smoking sequence becomes automatized, with the behaviors leading up to puffing becoming more strongly regulated by automatic processes with experience. These results have relevance to theories on the cognitive regulation of cigarette smoking and support the importance of interventions that focus on routinized behaviors that individuals engage in during and leading up to drug use. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  5. Automatic evidence retrieval for systematic reviews.

    PubMed

    Choong, Miew Keen; Galgani, Filippo; Dunn, Adam G; Tsafnat, Guy

    2014-10-01

    Snowballing involves recursively pursuing relevant references cited in the retrieved literature and adding them to the search results. Snowballing is an alternative approach to discover additional evidence that was not retrieved through conventional search. Snowballing's effectiveness makes it best practice in systematic reviews despite being time-consuming and tedious. Our goal was to evaluate an automatic method for citation snowballing's capacity to identify and retrieve the full text and/or abstracts of cited articles. Using 20 review articles that contained 949 citations to journal or conference articles, we manually searched Microsoft Academic Search (MAS) and identified 78.0% (740/949) of the cited articles that were present in the database. We compared the performance of the automatic citation snowballing method against the results of this manual search, measuring precision, recall, and F1 score. The automatic method was able to correctly identify 633 (as proportion of included citations: recall=66.7%, F1 score=79.3%; as proportion of citations in MAS: recall=85.5%, F1 score=91.2%) of citations with high precision (97.7%), and retrieved the full text or abstract for 490 (recall=82.9%, precision=92.1%, F1 score=87.3%) of the 633 correctly retrieved citations. The proposed method for automatic citation snowballing is accurate and is capable of obtaining the full texts or abstracts for a substantial proportion of the scholarly citations in review articles. By automating the process of citation snowballing, it may be possible to reduce the time and effort of common evidence surveillance tasks such as keeping trial registries up to date and conducting systematic reviews.
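
    The headline figures can be recomputed from the counts given in the abstract (633 correct identifications, 949 cited articles, 740 of them present in MAS), assuming the standard precision/recall/F1 definitions:

```python
def f1(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

precision = 0.977          # reported by the authors
recall_all = 633 / 949     # recall vs. all included citations
recall_mas = 633 / 740     # recall vs. citations actually present in MAS

print(round(recall_all, 3), round(f1(precision, recall_all), 3))  # 0.667 0.793
print(round(recall_mas, 3), round(f1(precision, recall_mas), 3))  # 0.855 0.912
```

    Both F1 scores reproduce the values quoted in the abstract.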

  6. Using Antelope and Seiscomp in the framework of the Romanian Seismic Network

    NASA Astrophysics Data System (ADS)

    Marius Craiu, George; Craiu, Andreea; Marmureanu, Alexandru; Neagoe, Cristian

    2014-05-01

    The National Institute for Earth Physics (NIEP) operates a real-time seismic network designed to monitor the seismic activity on the Romanian territory, dominated by the Vrancea intermediate-depth (60-200 km) earthquakes. The NIEP real-time network currently consists of 102 stations and two seismic arrays equipped with different high-quality digitizers (Kinemetrics K2, Quanterra Q330, Quanterra Q330HR, PS6-26, Basalt), broadband and short-period seismometers (CMG3ESP, CMG40T, KS2000, KS54000, CMG3T, STS2, SH-1, S13, Mark l4c, Ranger, Gs21, Mark 22) and acceleration sensors (Episensor Kinemetrics). The primary goal of the real-time seismic network is to provide earthquake parameters from broadband stations with a high dynamic range, for more rapid and accurate computation of the locations and magnitudes of earthquakes. Data are acquired through Seedlink, and a completely automated Antelope™ seismological system is run at the Data Center in Măgurele. The Antelope data acquisition and processing software is used for both real-time processing and post-processing: it provides automatic event detection, arrival picking, event location, and magnitude calculation, as well as graphical displays and automatic locations in near real time after a local, regional, or teleseismic event has occurred. SeisComP 3 is another automated system run at NIEP, which provides the following features: data acquisition, data quality control, real-time data exchange and processing, network status monitoring, issuing event alerts, waveform archiving and data distribution, automatic event detection and location, and easy access to relevant information about stations, waveforms, and recent earthquakes. The main goal of this paper is to compare these two data acquisition systems in order to improve their detection capabilities, location accuracy, and magnitude and depth determination, and to reduce the RMS and other location errors.

  7. An Interactive Graphical User Interface for Maritime Security Services

    NASA Astrophysics Data System (ADS)

    Reize, T.; Müller, R.; Kiefl, R.

    2013-10-01

    In order to analyse optical satellite images for maritime security issues in near-real-time (NRT), an interactive graphical user interface (GUI) based on NASA World Wind was developed and is presented in this article. Targets or activities can be detected, measured and classified with this tool simply and quickly. The service uses optical satellite images, currently taken from six sensors: Worldview-1 and Worldview-2, Ikonos, Quickbird, GeoEye-1 and EROS-B. The GUI can also handle SAR images, air-borne images or UAV images. Software configurations are provided in a job-order file, and thus all preparation tasks, such as image installation, are performed fully automatically. The imagery can be overlaid with vessels derived by an automatic detection processor. These potential vessel layers can be zoomed in on with a single click and sorted with an adapted method. Further object properties, such as vessel type or confidence level of identification, can be added manually by the operator. The heading angle can be refined by dragging the vessel's head or flipping it by 180° with a single click. Further vessels or other relevant objects can be added. The object's length, width, heading and position are calculated automatically from three clicks: on the top, the bottom, and an arbitrary point on one of the object's longer sides. In the case of an activity detection, the detected objects can be grouped into areas of interest (AOIs) and classified according to the ordered activities. All relevant information is finally written to an exchange file, after quality control and necessary correction procedures are performed. If required, image thumbnails can be cut around objects or around whole areas of interest and saved as separate, geo-referenced images.
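
    The three-click measurement lends itself to a short plane-geometry sketch: length and heading come from the top and bottom clicks, and width from the side click's distance to the bow-stern line. The article does not give the exact formula, so the factor of two (axis assumed to run through the vessel's centre) and the coordinate convention are assumptions.

```python
import math

def vessel_geometry(top, bottom, side):
    """Length, width, heading (degrees from north) and centre from three clicks."""
    dx, dy = top[0] - bottom[0], top[1] - bottom[1]
    length = math.hypot(dx, dy)
    heading = math.degrees(math.atan2(dx, dy)) % 360.0  # 0 deg = +y (north)
    # perpendicular distance of the side click from the bow-stern line
    cross = abs(dx * (side[1] - bottom[1]) - dy * (side[0] - bottom[0]))
    width = 2.0 * cross / length  # assumes the axis runs through the centre
    centre = ((top[0] + bottom[0]) / 2.0, (top[1] + bottom[1]) / 2.0)
    return length, width, heading, centre

print(vessel_geometry(top=(0, 50), bottom=(0, 0), side=(5, 25)))
# → (50.0, 10.0, 0.0, (0.0, 25.0))
```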

  8. Recognizing lexical and semantic change patterns in evolving life science ontologies to inform mapping adaptation.

    PubMed

    Dos Reis, Julio Cesar; Dinh, Duy; Da Silveira, Marcos; Pruski, Cédric; Reynaud-Delaître, Chantal

    2015-03-01

    Mappings established between life science ontologies require significant effort to keep up to date, due to the size and frequent evolution of these ontologies. Consequently, automatic methods for applying modifications to mappings are in high demand. The accuracy of such methods relies on the available description of ontology evolution, especially regarding the concepts involved in mappings. However, from one ontology version to another, the understanding of ontology changes needed to support mapping adaptation is typically lacking. This research work defines a set of change patterns at the level of concept attributes, and proposes original methods to automatically recognize instances of these patterns based on the similarity between the attributes denoting the evolving concepts. This investigation evaluates the benefits of the proposed methods and the influence of the recognized change patterns on the selection of strategies for mapping adaptation. The findings are as follows: (1) the precision (>60%) and recall (>35%) achieved by comparing manually identified change patterns with the automatically recognized ones; (2) the potential impact of recognized change patterns on the way mappings are adapted. We found that the detected correlations cover ∼66% of the mapping adaptation actions with a positive impact; and (3) the influence of the similarity coefficient calculated between concept attributes on the performance of the recognition algorithms. The experimental evaluations conducted with real life science ontologies showed the effectiveness of our approach in accurately characterizing ontology evolution at the level of concept attributes. This investigation confirmed the relevance of the proposed change patterns for supporting decisions on mapping adaptation. Copyright © 2014 Elsevier B.V. All rights reserved.
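
    A toy stand-in for the attribute-level recognition: measure the lexical similarity of a concept label across two ontology versions and map it to a change pattern. The token-Jaccard coefficient, thresholds, and pattern names below are illustrative assumptions, not the coefficients or patterns defined in the paper.

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two attribute strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def change_pattern(old_label, new_label, hi=0.99, lo=0.3):
    """Classify how a concept label changed between ontology versions."""
    s = jaccard(old_label, new_label)
    if s >= hi:
        return "unchanged"
    if s >= lo:
        return "revision"     # labels still share substantial wording
    return "replacement"      # labels are lexically unrelated

print(change_pattern("myocardial infarction", "acute myocardial infarction"))
# → revision
```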

  9. An efficient scheme for automatic web pages categorization using the support vector machine

    NASA Astrophysics Data System (ADS)

    Bhalla, Vinod Kumar; Kumar, Neeraj

    2016-07-01

    In the past few years, with the evolution of the Internet and related technologies, the number of Internet users has grown exponentially. These users demand access to relevant web pages within a fraction of a second, which requires an efficient categorization of web page contents. Manual categorization of these billions of web pages to achieve high accuracy is a challenging task. Most of the existing techniques reported in the literature are semi-automatic, and a high level of accuracy cannot be achieved with them. To address this, this paper proposes automatic categorization of web pages into domain categories. The proposed scheme is based on the identification of specific and relevant features of the web pages. In the proposed scheme, extraction and evaluation of features are done first, followed by filtering of the feature set for categorization of domain web pages. A feature extraction tool based on the HTML document object model of the web page is developed in the proposed scheme. Feature extraction and weight assignment are based on a domain-specific keyword list developed by considering various domain pages. Moreover, the keyword list is reduced on the basis of the ids of keywords in the list, and stemming of keywords and tag text is performed to achieve higher accuracy. An extensive feature set is generated to develop a robust classification technique. The proposed scheme was evaluated using a machine learning method in combination with feature extraction and statistical analysis, using a support vector machine kernel as the classification tool. The results obtained confirm the effectiveness of the proposed scheme in terms of its accuracy on different categories of web pages.
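
    The keyword-based feature extraction step might look like the following sketch: strip markup, tokenize, and weight token counts against a domain keyword list. The regex tag-stripping, keyword list, and weights are illustrative stand-ins for the paper's DOM-based extractor and curated lists.

```python
import re
from collections import Counter

def keyword_features(html, keyword_weights):
    """Weighted keyword counts for one page (features for a classifier)."""
    text = re.sub(r"<[^>]+>", " ", html).lower()   # crude tag removal
    tokens = Counter(re.findall(r"[a-z]+", text))
    return {kw: tokens[kw] * w for kw, w in keyword_weights.items()}

page = "<html><title>Cricket scores</title><body>Live cricket match scores</body></html>"
weights = {"cricket": 2.0, "match": 1.0, "recipe": 1.5}
print(keyword_features(page, weights))  # → {'cricket': 4.0, 'match': 1.0, 'recipe': 0.0}
```

    Vectors like these, one per page, are what a support vector machine would then be trained on.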

  10. BiobankConnect: software to rapidly connect data elements for pooled analysis across biobanks using ontological and lexical indexing.

    PubMed

    Pang, Chao; Hendriksen, Dennis; Dijkstra, Martijn; van der Velde, K Joeri; Kuiper, Joel; Hillege, Hans L; Swertz, Morris A

    2015-01-01

    Pooling data across biobanks is necessary to increase statistical power, reveal more subtle associations, and synergize the value of data sources. However, searching for desired data elements among the thousands of available elements and harmonizing differences in terminology, data collection, and structure, is arduous and time consuming. To speed up biobank data pooling we developed BiobankConnect, a system to semi-automatically match desired data elements to available elements by: (1) annotating the desired elements with ontology terms using BioPortal; (2) automatically expanding the query for these elements with synonyms and subclass information using OntoCAT; (3) automatically searching available elements for these expanded terms using Lucene lexical matching; and (4) shortlisting relevant matches sorted by matching score. We evaluated BiobankConnect using human curated matches from EU-BioSHaRE, searching for 32 desired data elements in 7461 available elements from six biobanks. We found 0.75 precision at rank 1 and 0.74 recall at rank 10 compared to a manually curated set of relevant matches. In addition, best matches chosen by BioSHaRE experts ranked first in 63.0% and in the top 10 in 98.4% of cases, indicating that our system has the potential to significantly reduce manual matching work. BiobankConnect provides an easy user interface to significantly speed up the biobank harmonization process. It may also prove useful for other forms of biomedical data integration. All the software can be downloaded as a MOLGENIS open source app from http://www.github.com/molgenis, with a demo available at http://www.biobankconnect.org. © The Author 2014. Published by Oxford University Press on behalf of the American Medical Informatics Association.
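
    Steps (2)-(4) of the pipeline can be caricatured in a few lines: expand each desired element with synonyms, score the available elements by token overlap with the expanded query, and shortlist by score. The synonym table and overlap scoring below are crude stand-ins for the OntoCAT expansion and Lucene matching the system actually uses.

```python
def shortlist(desired, available, synonyms, top=3):
    """Rank available data elements against an expanded query."""
    query = set()
    for term in desired.lower().split():
        query |= {term, *synonyms.get(term, [])}   # query expansion
    def score(element):
        return len(query & set(element.lower().split()))
    return sorted(available, key=score, reverse=True)[:top]

syn = {"hypertension": ["hypertensive", "blood", "pressure"]}
elements = ["systolic blood pressure", "body mass index",
            "history of hypertensive disorder"]
print(shortlist("hypertension", elements, syn))
```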

  11. BiobankConnect: software to rapidly connect data elements for pooled analysis across biobanks using ontological and lexical indexing

    PubMed Central

    Pang, Chao; Hendriksen, Dennis; Dijkstra, Martijn; van der Velde, K Joeri; Kuiper, Joel; Hillege, Hans L; Swertz, Morris A

    2015-01-01

    Objective: Pooling data across biobanks is necessary to increase statistical power, reveal more subtle associations, and synergize the value of data sources. However, searching for desired data elements among the thousands of available elements and harmonizing differences in terminology, data collection, and structure, is arduous and time consuming. Materials and methods: To speed up biobank data pooling we developed BiobankConnect, a system to semi-automatically match desired data elements to available elements by: (1) annotating the desired elements with ontology terms using BioPortal; (2) automatically expanding the query for these elements with synonyms and subclass information using OntoCAT; (3) automatically searching available elements for these expanded terms using Lucene lexical matching; and (4) shortlisting relevant matches sorted by matching score. Results: We evaluated BiobankConnect using human curated matches from EU-BioSHaRE, searching for 32 desired data elements in 7461 available elements from six biobanks. We found 0.75 precision at rank 1 and 0.74 recall at rank 10 compared to a manually curated set of relevant matches. In addition, best matches chosen by BioSHaRE experts ranked first in 63.0% and in the top 10 in 98.4% of cases, indicating that our system has the potential to significantly reduce manual matching work. Conclusions: BiobankConnect provides an easy user interface to significantly speed up the biobank harmonization process. It may also prove useful for other forms of biomedical data integration. All the software can be downloaded as a MOLGENIS open source app from http://www.github.com/molgenis, with a demo available at http://www.biobankconnect.org. PMID:25361575

  12. Brain response to masked and unmasked facial emotions as a function of implicit and explicit personality self-concept of extraversion.

    PubMed

    Suslow, Thomas; Kugel, Harald; Lindner, Christian; Dannlowski, Udo; Egloff, Boris

    2017-01-06

    Extraversion-introversion is a personality dimension referring to individual differences in social behavior. In the past, neurobiological research on extraversion was almost entirely based upon questionnaires which inform about the explicit self-concept. Today, indirect measures are available that tap into the implicit self-concept of extraversion, which is assumed to result from automatic processing functions. In our study, brain activation while viewing facial expressions of affiliation-relevant (i.e., happiness and disgust) and irrelevant (i.e., fear) emotions was examined as a function of the implicit and explicit self-concept of extraversion and processing mode (automatic vs. controlled). Forty healthy volunteers watched blocks of masked and unmasked emotional faces while undergoing functional magnetic resonance imaging. The Implicit Association Test and the NEO Five-Factor Inventory were applied as implicit and explicit measures of extraversion, which were uncorrelated in our sample. Implicit extraversion was found to be positively associated with neural response to masked happy faces in the thalamus and temporo-parietal regions and to masked disgust faces in cerebellar areas. Moreover, it was positively correlated with brain response to unmasked disgust faces in the amygdala and cortical areas. Explicit extraversion was not related to brain response to facial emotions when controlling for trait anxiety. The implicit compared to the explicit self-concept of extraversion seems to be more strongly associated with brain activation not only during automatic but also during controlled processing of affiliation-relevant facial emotions. Enhanced neural response to facial disgust could reflect high sensitivity to signals of interpersonal rejection in extraverts (i.e., individuals with affiliative tendencies). Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  13. Musical space synesthesia: automatic, explicit and conceptual connections between musical stimuli and space.

    PubMed

    Akiva-Kabiri, Lilach; Linkovski, Omer; Gertner, Limor; Henik, Avishai

    2014-08-01

    In musical-space synesthesia, musical pitches are perceived as having a spatially defined array. Previous studies showed that symbolic inducers (e.g., numbers, months) can modulate response according to the inducer's relative position on the synesthetic spatial form. In the current study we tested two musical-space synesthetes and a group of matched controls on three different tasks: musical-space mapping, spatial cue detection and a spatial Stroop-like task. In the free mapping task, both synesthetes exhibited a diagonal organization of musical pitch tones rising from bottom left to the top right. This organization was found to be consistent over time. In the subsequent tasks, synesthetes were asked to ignore an auditory or visually presented musical pitch (irrelevant information) and respond to a visual target (i.e., an asterisk) on the screen (relevant information). Compatibility between musical pitch and the target's spatial location was manipulated to be compatible or incompatible with the synesthetes' spatial representations. In the spatial cue detection task participants had to press the space key immediately upon detecting the target. In the Stroop-like task, they had to reach the target by using a mouse cursor. In both tasks, synesthetes' performance was modulated by the compatibility between irrelevant and relevant spatial information. Specifically, the target's spatial location conflicted with the spatial information triggered by the irrelevant musical stimulus. These results reveal that for musical-space synesthetes, musical information automatically orients attention according to their specific spatial musical-forms. The present study demonstrates the genuineness of musical-space synesthesia by revealing its two hallmarks: automaticity and consistency. In addition, our results challenge previous findings regarding an implicit vertical representation for pitch tones in non-synesthete musicians. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Finite temperature properties of clusters by replica exchange metadynamics: the water nonamer.

    PubMed

    Zhai, Yingteng; Laio, Alessandro; Tosatti, Erio; Gong, Xin-Gao

    2011-03-02

    We introduce an approach for the accurate calculation of thermal properties of classical nanoclusters. On the basis of a recently developed enhanced sampling technique, replica exchange metadynamics, the method yields the true free energy of each relevant cluster structure, directly sampling its basin and measuring its occupancy in full equilibrium. All entropy sources, whether vibrational, rotational anharmonic, or especially configurational, the latter often forgotten in many cluster studies, are automatically included. For the present demonstration, we choose the water nonamer (H(2)O)(9), an extremely simple cluster, which nonetheless displays a sufficient complexity and interesting physics in its relevant structure spectrum. Within a standard TIP4P potential description of water, we find that the nonamer second relevant structure possesses a higher configurational entropy than the first, so that the two free energies surprisingly cross for increasing temperature.
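
    The occupancy-to-free-energy step rests on the relation F_i = -kT ln p_i for the equilibrium occupancy p_i of basin i. A minimal sketch with made-up occupancies (not the nonamer data):

```python
import math

def basin_free_energies(occupancies, kT=1.0):
    """Free energies from basin occupancies, relative to the most occupied basin."""
    f = [-kT * math.log(p) for p in occupancies]
    f0 = min(f)
    return [fi - f0 for fi in f]

# Two structures: the dominant one and a higher-entropy competitor.
print(basin_free_energies([0.8, 0.2]))  # second basin sits kT*ln(4) higher
```

    Raising the temperature re-weights the occupancies; a structure with higher configurational entropy can then overtake the low-temperature minimum, which is the crossing reported above.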

  15. Finite Temperature Properties of Clusters by Replica Exchange Metadynamics: The Water Nonamer

    NASA Astrophysics Data System (ADS)

    Zhai, Yingteng; Laio, Alessandro; Tosatti, Erio; Gong, Xingao

    2012-02-01

    We introduce an approach for the accurate calculation of thermal properties of classical nanoclusters. Based on a recently developed enhanced sampling technique, replica exchange metadynamics, the method yields the true free energy of each relevant cluster structure, directly sampling its basin and measuring its occupancy in full equilibrium. All entropy sources, whether vibrational, rotational anharmonic and especially configurational -- the latter often forgotten in many cluster studies -- are automatically included. For the present demonstration we choose the water nonamer (H2O)9, an extremely simple cluster which nonetheless displays a sufficient complexity and interesting physics in its relevant structure spectrum. Within a standard TIP4P potential description of water, we find that the nonamer second relevant structure possesses a higher configurational entropy than the first, so that the two free energies surprisingly cross for increasing temperature.

  16. Formal Specification and Automatic Analysis of Business Processes under Authorization Constraints: An Action-Based Approach

    NASA Astrophysics Data System (ADS)

    Armando, Alessandro; Giunchiglia, Enrico; Ponta, Serena Elisa

    We present an approach to the formal specification and automatic analysis of business processes under authorization constraints based on the action language C. The use of C allows for a natural and concise modeling of the business process and the associated security policy, and for the automatic analysis of the resulting specification by using the Causal Calculator (CCALC). Our approach improves upon previous work by greatly simplifying the specification step while retaining the ability to perform a fully automatic analysis. To illustrate the effectiveness of the approach, we describe its application to a version of a business process taken from the banking domain and use CCALC to determine resource allocation plans complying with the security policy.

  17. On the feasibility of automatically selecting similar patients in highly individualized radiotherapy dose reconstruction for historic data of pediatric cancer survivors.

    PubMed

    Virgolin, Marco; van Dijk, Irma W E M; Wiersma, Jan; Ronckers, Cécile M; Witteveen, Cees; Bel, Arjan; Alderliesten, Tanja; Bosman, Peter A N

    2018-04-01

    The aim of this study is to establish the first step toward a novel and highly individualized three-dimensional (3D) dose distribution reconstruction method, based on CT scans and organ delineations of recently treated patients. Specifically, the feasibility of automatically selecting the CT scan of a recently treated childhood cancer patient who is similar to a given historically treated child who suffered from Wilms' tumor is assessed. A cohort of 37 recently treated children between 2 and 6 yr old is considered. Five potential notions of ground-truth similarity are proposed, each focusing on different anatomical aspects. These notions are automatically computed from CT scans of the abdomen and 3D organ delineations (liver, spleen, spinal cord, external body contour). The first is based on deformable image registration, the second on the Dice similarity coefficient, the third on the Hausdorff distance, the fourth on pairwise organ distances, and the last is computed by means of the overlap volume histogram. The relationship between typically available features of historically treated patients and the proposed ground-truth notions of similarity is studied by adopting state-of-the-art machine learning techniques, including random forest. Also, the feasibility of automatically selecting the most similar patient is assessed by comparing ground-truth rankings of similarity with predicted rankings. Similarities (mainly) based on the external abdomen shape and on the pairwise organ distances are highly correlated (Pearson r_p ≥ 0.70) and are successfully modeled with random forests based on historically recorded features (pseudo-R² ≥ 0.69). In contrast, similarities based on the shape of internal organs cannot be modeled. For the similarities that random forest can reliably model, an estimation of feature relevance indicates that abdominal diameters and weight are the most important. 
Experiments on automatically selecting similar patients lead to coarse, yet quite robust results: the most similar patient is retrieved only 22% of the time; however, the error in worst-case scenarios is limited, with at worst the fourth most similar patient being retrieved. Results demonstrate that automatically selecting similar patients is feasible when focusing on the shape of the external abdomen and on the position of internal organs. Moreover, whereas the common practice in phantom-based dose reconstruction is to select a representative phantom using age, height, and weight as discriminant factors for any treatment scenario, our analysis of abdominal tumor treatment for children shows that the most relevant features are weight and the anterior-posterior and left-right abdominal diameters. © 2018 American Association of Physicists in Medicine.
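Of the five ground-truth notions, the Dice similarity coefficient is the simplest to state. A minimal sketch on delineations represented as voxel-index sets (toy volumes, not real patient data):

```python
def dice(a, b):
    """Dice similarity coefficient between two delineations given as sets of voxel indices."""
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

# Hypothetical liver delineations for two patients: same footprint, shifted by 2 slices.
liver_a = {(x, y, z) for x in range(10) for y in range(10) for z in range(5)}
liver_b = {(x, y, z) for x in range(10) for y in range(10) for z in range(2, 7)}

score = dice(liver_a, liver_b)  # overlap is slices z = 2..4, i.e. 3/5 of each volume
```

A score of 1.0 means identical delineations; 0.0 means no overlap at all.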

  18. The relationship between visual attention and visual working memory encoding: A dissociation between covert and overt orienting.

    PubMed

    Tas, A Caglar; Luck, Steven J; Hollingworth, Andrew

    2016-08-01

    There is substantial debate over whether visual working memory (VWM) and visual attention constitute a single system for the selection of task-relevant perceptual information or whether they are distinct systems that can be dissociated when their representational demands diverge. In the present study, we focused on the relationship between visual attention and the encoding of objects into VWM. Participants performed a color change-detection task. During the retention interval, a secondary object, irrelevant to the memory task, was presented. Participants were instructed either to execute an overt shift of gaze to this object (Experiments 1-3) or to attend it covertly (Experiments 4 and 5). Our goal was to determine whether these overt and covert shifts of attention disrupted the information held in VWM. We hypothesized that saccades, which typically introduce a memorial demand to bridge perceptual disruption, would lead to automatic encoding of the secondary object. However, purely covert shifts of attention, which introduce no such demand, would not result in automatic memory encoding. The results supported these predictions. Saccades to the secondary object produced substantial interference with VWM performance, but covert shifts of attention to this object produced no interference with VWM performance. These results challenge prevailing theories that consider attention and VWM to reflect a common mechanism. In addition, they indicate that the relationship between attention and VWM is dependent on the memorial demands of the orienting behavior. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. eWaterCycle visualisation. combining the strength of NetCDF and Web Map Service: ncWMS

    NASA Astrophysics Data System (ADS)

    Hut, R.; van Meersbergen, M.; Drost, N.; Van De Giesen, N.

    2016-12-01

    As a result of the eWaterCycle global hydrological forecast we have created Cesium-ncWMS, a web application based on ncWMS and Cesium. ncWMS is a server-side application capable of reading any NetCDF file written using the Climate and Forecasting (CF) conventions and making the data available as a Web Map Service (WMS). ncWMS automatically determines the available variables in a file and creates maps colored according to the map data and a user-selected color scale. Cesium is a JavaScript 3D virtual globe library. It uses WebGL for rendering, which makes it very fast, and it is capable of displaying a wide variety of data types such as vectors, 3D models, and 2D maps. The forecast results are automatically uploaded to our web server running ncWMS. In turn, the web application can be used to change the settings for color maps and displayed data. The server uses the settings provided by the web application, together with the data in NetCDF, to provide WMS image tiles, time-series data and legend graphics to the Cesium-ncWMS web application. The user can simultaneously zoom in to the very high resolution forecast results anywhere in the world and get time-series data for any point on the globe. The Cesium-ncWMS visualisation combines a global overview with locally relevant information in any browser. See the visualisation live at forecast.ewatercycle.org
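The maps described above are served through standard WMS GetMap requests. A sketch of how a client might build one, using WMS 1.3.0 parameters plus ncWMS's COLORSCALERANGE extension (the layer name, palette, and endpoint path are hypothetical):

```python
from urllib.parse import urlencode

def getmap_url(base, layer, time, bbox, palette="default", scale=(0.0, 100.0)):
    """Build a WMS 1.3.0 GetMap request of the kind ncWMS serves.
    COLORSCALERANGE is an ncWMS extension controlling the color scale."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": "CRS:84",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": 256,
        "HEIGHT": 256,
        "FORMAT": "image/png",
        "TIME": time,  # CF time coordinate exposed by ncWMS
        "STYLES": f"boxfill/{palette}",
        "COLORSCALERANGE": f"{scale[0]},{scale[1]}",
    }
    return base + "?" + urlencode(params)

# Hypothetical forecast layer served by the eWaterCycle ncWMS instance.
url = getmap_url("https://forecast.ewatercycle.org/wms", "discharge",
                 "2016-12-01T00:00:00Z", (-180, -90, 180, 90))
```

The same parameter set, with different WIDTH/HEIGHT and BBOX values, yields the image tiles that Cesium stitches onto the globe.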

  20. The Relationship between Visual Attention and Visual Working Memory Encoding: A Dissociation between Covert and Overt Orienting

    PubMed Central

    Tas, A. Caglar; Luck, Steven J.; Hollingworth, Andrew

    2016-01-01

    There is substantial debate over whether visual working memory (VWM) and visual attention constitute a single system for the selection of task-relevant perceptual information or whether they are distinct systems that can be dissociated when their representational demands diverge. In the present study, we focused on the relationship between visual attention and the encoding of objects into visual working memory (VWM). Participants performed a color change-detection task. During the retention interval, a secondary object, irrelevant to the memory task, was presented. Participants were instructed either to execute an overt shift of gaze to this object (Experiments 1–3) or to attend it covertly (Experiments 4 and 5). Our goal was to determine whether these overt and covert shifts of attention disrupted the information held in VWM. We hypothesized that saccades, which typically introduce a memorial demand to bridge perceptual disruption, would lead to automatic encoding of the secondary object. However, purely covert shifts of attention, which introduce no such demand, would not result in automatic memory encoding. The results supported these predictions. Saccades to the secondary object produced substantial interference with VWM performance, but covert shifts of attention to this object produced no interference with VWM performance. These results challenge prevailing theories that consider attention and VWM to reflect a common mechanism. In addition, they indicate that the relationship between attention and VWM is dependent on the memorial demands of the orienting behavior. PMID:26854532

  1. Psychological and neural mechanisms associated with effort-related cardiovascular reactivity and cognitive control: An integrative approach.

    PubMed

    Silvestrini, Nicolas

    2017-09-01

    Numerous studies have assessed cardiovascular (CV) reactivity as a measure of effort mobilization during cognitive tasks. However, psychological and neural processes underlying effort-related CV reactivity are still relatively unclear. Previous research reliably found that CV reactivity during cognitive tasks is mainly determined by one region of the brain, the dorsal anterior cingulate cortex (dACC), and that this region is systematically engaged during cognitively demanding tasks. The present integrative approach builds on the research on cognitive control and its brain correlates that shows that dACC function can be related to conflict monitoring and integration of information related to task difficulty and success importance-two key variables in determining effort mobilization. In contrast, evidence also indicates that executive cognitive functioning is processed in more lateral regions of the prefrontal cortex. The resulting model suggests that, when automatic cognitive processes are insufficient to sustain behavior, the dACC determines the amount of required and justified effort according to task difficulty and success importance, which leads to proportional adjustments in CV reactivity and executive cognitive functioning. These propositions are discussed in relation to previous findings on effort-related CV reactivity and cognitive performance, new predictions for future studies, and relevance for other self-regulatory processes. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Automatic photometric titrations of calcium and magnesium in carbonate rocks

    USGS Publications Warehouse

    Shapiro, L.; Brannock, W.W.

    1955-01-01

    Rapid, nonsubjective methods have been developed for the determination of calcium and magnesium in carbonate rocks. From a single solution of the sample, calcium is titrated directly, and magnesium is titrated after rapid removal of R2O3 and precipitation of calcium as the tungstate. A concentrated and a dilute solution of disodium ethylenediamine tetraacetate are used as titrants. The concentrated solution is added almost to the end point; then the dilute solution is added in an automatic titrator to determine the end point precisely.
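The two-stage dosing logic can be sketched as a simple loop: coarse additions of the concentrated titrant to just short of the end point, then fine additions of the dilute titrant. This is an idealization; a real automatic titrator reacts to the photometric signal rather than to a known end-point volume:

```python
def titrate(true_endpoint_ml, coarse_step=0.5, fine_step=0.01):
    """Two-stage dosing: concentrated titrant to just short of the end point,
    then dilute titrant in small increments to locate it precisely."""
    added = 0.0
    # Coarse stage: stop at least one coarse step before the end point.
    while added + coarse_step < true_endpoint_ml - coarse_step:
        added += coarse_step
    # Fine stage: small additions until the (photometric) end point is reached.
    while added < true_endpoint_ml:
        added += fine_step
    return added

vol = titrate(12.34)  # total titrant volume at the detected end point
```

The precision of the result is set by the fine step, not the coarse one, which is the point of using two titrant concentrations.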

  3. Automated determination of dust particles trajectories in the coma of comet 67P

    NASA Astrophysics Data System (ADS)

    Marín-Yaseli de la Parra, J.; Küppers, M.; Perez Lopez, F.; Besse, S.; Moissl, R.

    2017-09-01

    During the more than two years Rosetta spent at comet 67P, it took thousands of images that contain individual dust particles. To arrive at statistics of the dust properties, automatic image analysis is required. We present a new methodology for fast dust identification using a star mask reference system for matching a set of images automatically. The main goal is to derive particle size distributions and to determine whether traces of the size distribution of primordial pebbles are still present in today's cometary dust [1].
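In its simplest form, the star-mask idea reduces to discarding point-source detections that coincide with catalogued star positions, leaving dust candidates. A toy sketch (the coordinates and matching tolerance are hypothetical, not the paper's pipeline):

```python
def dust_candidates(detections, star_mask, tol=1.0):
    """Return detections not matching any masked star position within tol pixels."""
    def near_star(p):
        return any(abs(p[0] - s[0]) <= tol and abs(p[1] - s[1]) <= tol
                   for s in star_mask)
    return [p for p in detections if not near_star(p)]

# Hypothetical point sources in one image and a star mask from a catalogue.
stars = [(10.0, 20.0), (55.0, 42.0)]
points = [(10.2, 19.9), (55.1, 42.3), (33.0, 8.0)]  # two stars + one dust particle
dust = dust_candidates(points, stars)
```

Repeating this over an image sequence, particles that move against the fixed star background remain while stars are masked out.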

  4. Automated myocardial perfusion from coronary x-ray angiography

    NASA Astrophysics Data System (ADS)

    Storm, Corstiaan J.; Slump, Cornelis H.

    2010-03-01

    The purpose of our study is the evaluation of an algorithm to determine the physiological relevance of a coronary lesion as seen in a coronary angiogram. The aim is to extract as much information as possible from a standard coronary angiogram in order to decide whether an abnormality (percentage of stenosis) seen in the angiogram results in physiological impairment of the blood supply to the region nourished by the coronary artery. Coronary angiography, still the gold standard, is used to determine the cause of angina pectoris based on the demonstration of a significant stenosis in a coronary artery. Dimensions of a lesion such as length and percentage of narrowing can at present easily be calculated by an automatic computer algorithm such as Quantitative Coronary Angiography (QCA), resulting in purely anatomical information that ignores the physiological relevance of the lesion. In our study we analyze myocardial perfusion in standard coronary angiograms at rest and in artificially induced hyperemic phases, using a drug, e.g., intracoronary papaverine. Setting a region of interest (ROI) in the angiogram without overlying major vessels makes it possible to calculate contrast differences as a function of time, so-called time-density curves, in the basal and hyperemic phases. To minimize motion artifacts, end-diastolic images are selected based on the ECG in both the basal and hyperemic phases, in an identical ROI in the same angiographic projection. The development of new algorithms for calculating differences in blood supply in the selected region is presented, together with the results of a small clinical case study using the standard angiographic procedure.
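A time-density curve is simply the mean gray value inside the ROI for each selected frame. A toy sketch with synthetic 3x3 frames (not angiographic data; the "wash-in" values are invented):

```python
def time_density_curve(frames, roi):
    """Mean gray value inside the ROI for each (end-diastolic) frame."""
    return [sum(frame[y][x] for x, y in roi) / len(roi) for frame in frames]

# Two-pixel ROI; contrast washes in and out over three frames.
roi = [(0, 0), (1, 0)]  # (x, y) pixel coordinates
basal = [[[10, 10, 0]] * 3, [[30, 30, 0]] * 3, [[20, 20, 0]] * 3]
hyperemic = [[[10, 10, 0]] * 3, [[60, 60, 0]] * 3, [[40, 40, 0]] * 3]

tdc_basal = time_density_curve(basal, roi)      # [10.0, 30.0, 20.0]
tdc_hyper = time_density_curve(hyperemic, roi)  # [10.0, 60.0, 40.0]
# A larger density rise under hyperemia suggests preserved perfusion reserve.
```

Comparing the two curves (peak or upslope) is what turns the anatomical angiogram into a physiological measurement.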

  5. Data Albums: An Event Driven Search, Aggregation and Curation Tool for Earth Science

    NASA Technical Reports Server (NTRS)

    Ramachandran, Rahul; Kulkarni, Ajinkya; Maskey, Manil; Bakare, Rohan; Basyal, Sabin; Li, Xiang; Flynn, Shannon

    2014-01-01

    Approaches used in Earth science research such as case study analysis and climatology studies involve discovering and gathering diverse data sets and information to support the research goals. To gather relevant data and information for case studies and climatology analysis is both tedious and time-consuming. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. In cases where researchers are interested in studying a significant event, they have to manually assemble a variety of datasets relevant to it by searching the different distributed data systems. This paper presents a specialized search, aggregation and curation tool for Earth science to address these challenges. The search tool automatically creates curated 'Data Albums', aggregated collections of information related to a specific event, containing links to relevant data files [granules] from different instruments, tools and services for visualization and analysis, and information about the event contained in news reports, images or videos to supplement research analysis. Curation in the tool is driven by an ontology-based relevancy ranking algorithm to filter out non-relevant information and data.

  6. Automatic processing of political preferences in the human brain.

    PubMed

    Tusche, Anita; Kahnt, Thorsten; Wisniewski, David; Haynes, John-Dylan

    2013-05-15

    Individual political preferences as expressed, for instance, in votes or donations are fundamental to democratic societies. However, the relevance of deliberative processing for political preferences has been highly debated, putting automatic processes in the focus of attention. Based on this notion, the present study tested whether brain responses reflect participants' preferences for politicians and their associated political parties in the absence of explicit deliberation and attention. Participants were instructed to perform a demanding visual fixation task while their brain responses were measured using fMRI. Occasionally, task-irrelevant images of German politicians from two major competing parties were presented in the background while the distraction task was continued. Subsequent to scanning, participants' political preferences for these politicians and their affiliated parties were obtained. Brain responses in distinct brain areas predicted automatic political preferences at the different levels of abstraction: activation in the ventral striatum was positively correlated with preference ranks for unattended politicians, whereas participants' preferences for the affiliated political parties were reflected in activity in the insula and the cingulate cortex. Using an additional donation task, we showed that the automatic preference-related processing in the brain extended to real-world behavior that involved actual financial loss to participants. Together, these findings indicate that brain responses triggered by unattended and task-irrelevant political images reflect individual political preferences at different levels of abstraction. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Automatic physical inference with information maximizing neural networks

    NASA Astrophysics Data System (ADS)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    Compressing large data sets to a manageable number of summaries that are informative about the underlying parameters vastly simplifies both frequentist and Bayesian inference. When only simulations are available, these summaries are typically chosen heuristically, so they may inadvertently miss important information. We introduce a simulation-based machine learning technique that trains artificial neural networks to find nonlinear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). In test cases where the posterior can be derived exactly, likelihood-free inference based on automatically derived IMNN summaries produces nearly exact posteriors, showing that these summaries are good approximations to sufficient statistics. In a series of numerical examples of increasing complexity and astrophysical relevance we show that IMNNs are robustly capable of automatically finding optimal, nonlinear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima. We anticipate that the automatic physical inference method described in this paper will be essential to obtain both accurate and precise cosmological parameter estimates from complex and large astronomical data sets, including those from LSST and Euclid.
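The Fisher-information criterion that IMNNs maximize can be illustrated without a neural network: for a single scalar summary, F ≈ (∂μ/∂θ)²/σ², where μ and σ² are the summary's mean and variance over simulations, and the derivative is taken by finite differences. A rough sketch for the Gaussian-variance test case mentioned above (pure Python; all settings are hypothetical and the network is replaced by two hand-picked summaries):

```python
import random
import statistics

def summary_moments(theta, summary, n_sims=2000, n_data=50, seed=1):
    """Mean and variance of a summary over simulations x ~ N(0, sqrt(theta))."""
    rng = random.Random(seed)  # common random numbers across theta values
    vals = []
    for _ in range(n_sims):
        x = [rng.gauss(0.0, theta ** 0.5) for _ in range(n_data)]
        vals.append(summary(x))
    return statistics.mean(vals), statistics.variance(vals)

def fisher(theta, summary, dtheta=0.1):
    """Finite-difference Fisher information F = (dmu/dtheta)^2 / var for one summary."""
    mu_plus, _ = summary_moments(theta + dtheta, summary)
    mu_minus, _ = summary_moments(theta - dtheta, summary)
    _, var = summary_moments(theta, summary)
    dmu = (mu_plus - mu_minus) / (2 * dtheta)
    return dmu * dmu / var

mean_sq = lambda x: sum(v * v for v in x) / len(x)  # informative about the variance
mean_lin = lambda x: sum(x) / len(x)                # carries no variance information

f_good = fisher(1.0, mean_sq)   # close to the analytic n/(2*theta^2) = 25
f_bad = fisher(1.0, mean_lin)   # near zero
```

An IMNN would search over network weights for the summary maximizing this F; here the informative summary wins by orders of magnitude, which is the selection pressure the training exploits.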

  8. Argo: enabling the development of bespoke workflows and services for disease annotation.

    PubMed

    Batista-Navarro, Riza; Carter, Jacob; Ananiadou, Sophia

    2016-01-01

    Argo (http://argo.nactem.ac.uk) is a generic text mining workbench that can cater to a variety of use cases, including the semi-automatic annotation of literature. It enables its technical users to build their own customised text mining solutions by providing a wide array of interoperable and configurable elementary components that can be seamlessly integrated into processing workflows. With Argo's graphical annotation interface, domain experts can then make use of the workflows' automatically generated output to curate information of interest. With the continuously rising need to understand the aetiology of diseases as well as the demand for their informed diagnosis and personalised treatment, the curation of disease-relevant information from medical and clinical documents has become an indispensable scientific activity. In the Fifth BioCreative Challenge Evaluation Workshop (BioCreative V), there was substantial interest in the mining of literature for disease-relevant information. Apart from a panel discussion focussed on disease annotations, the chemical-disease relations (CDR) track was also organised to foster the sharing and advancement of disease annotation tools and resources. This article presents the application of Argo's capabilities to the literature-based annotation of diseases. As part of our participation in BioCreative V's User Interactive Track (IAT), we demonstrated and evaluated Argo's suitability to the semi-automatic curation of chronic obstructive pulmonary disease (COPD) phenotypes. Furthermore, the workbench facilitated the development of some of the CDR track's top-performing web services for normalising disease mentions against the Medical Subject Headings (MeSH) database. 
In this work, we highlight Argo's support for developing various types of bespoke workflows ranging from ones which enabled us to easily incorporate information from various databases, to those which train and apply machine learning-based concept recognition models, through to user-interactive ones which allow human curators to manually provide their corrections to automatically generated annotations. Our participation in the BioCreative V challenges shows Argo's potential as an enabling technology for curating disease and phenotypic information from literature.Database URL: http://argo.nactem.ac.uk. © The Author(s) 2016. Published by Oxford University Press.

  9. Argo: enabling the development of bespoke workflows and services for disease annotation

    PubMed Central

    Batista-Navarro, Riza; Carter, Jacob; Ananiadou, Sophia

    2016-01-01

    Argo (http://argo.nactem.ac.uk) is a generic text mining workbench that can cater to a variety of use cases, including the semi-automatic annotation of literature. It enables its technical users to build their own customised text mining solutions by providing a wide array of interoperable and configurable elementary components that can be seamlessly integrated into processing workflows. With Argo's graphical annotation interface, domain experts can then make use of the workflows' automatically generated output to curate information of interest. With the continuously rising need to understand the aetiology of diseases as well as the demand for their informed diagnosis and personalised treatment, the curation of disease-relevant information from medical and clinical documents has become an indispensable scientific activity. In the Fifth BioCreative Challenge Evaluation Workshop (BioCreative V), there was substantial interest in the mining of literature for disease-relevant information. Apart from a panel discussion focussed on disease annotations, the chemical-disease relations (CDR) track was also organised to foster the sharing and advancement of disease annotation tools and resources. This article presents the application of Argo’s capabilities to the literature-based annotation of diseases. As part of our participation in BioCreative V’s User Interactive Track (IAT), we demonstrated and evaluated Argo’s suitability to the semi-automatic curation of chronic obstructive pulmonary disease (COPD) phenotypes. Furthermore, the workbench facilitated the development of some of the CDR track’s top-performing web services for normalising disease mentions against the Medical Subject Headings (MeSH) database. 
In this work, we highlight Argo’s support for developing various types of bespoke workflows ranging from ones which enabled us to easily incorporate information from various databases, to those which train and apply machine learning-based concept recognition models, through to user-interactive ones which allow human curators to manually provide their corrections to automatically generated annotations. Our participation in the BioCreative V challenges shows Argo’s potential as an enabling technology for curating disease and phenotypic information from literature. Database URL: http://argo.nactem.ac.uk PMID:27189607

  10. Overview of the gene ontology task at BioCreative IV.

    PubMed

    Mao, Yuqing; Van Auken, Kimberly; Li, Donghui; Arighi, Cecilia N; McQuilton, Peter; Hayman, G Thomas; Tweedie, Susan; Schaeffer, Mary L; Laulederkind, Stanley J F; Wang, Shur-Jen; Gobeill, Julien; Ruch, Patrick; Luu, Anh Tuan; Kim, Jung-Jae; Chiang, Jung-Hsien; Chen, Yu-De; Yang, Chia-Jung; Liu, Hongfang; Zhu, Dongqing; Li, Yanpeng; Yu, Hong; Emadzadeh, Ehsan; Gonzalez, Graciela; Chen, Jian-Ming; Dai, Hong-Jie; Lu, Zhiyong

    2014-01-01

    Gene ontology (GO) annotation is a common task among model organism databases (MODs) for capturing gene function data from journal articles. It is a time-consuming and labor-intensive task, and is thus often considered as one of the bottlenecks in literature curation. There is a growing need for semiautomated or fully automated GO curation techniques that will help database curators to rapidly and accurately identify gene function information in full-length articles. Despite multiple attempts in the past, few studies have proven to be useful with regard to assisting real-world GO curation. The shortage of sentence-level training data and opportunities for interaction between text-mining developers and GO curators has limited the advances in algorithm development and corresponding use in practical circumstances. To this end, we organized a text-mining challenge task for literature-based GO annotation in BioCreative IV. More specifically, we developed two subtasks: (i) to automatically locate text passages that contain GO-relevant information (a text retrieval task) and (ii) to automatically identify relevant GO terms for the genes in a given article (a concept-recognition task). With the support from five MODs, we provided teams with >4000 unique text passages that served as the basis for each GO annotation in our task data. Such evidence text information has long been recognized as critical for text-mining algorithm development but was never made available because of the high cost of curation. In total, seven teams participated in the challenge task. From the team results, we conclude that the state of the art in automatically mining GO terms from literature has improved over the past decade while much progress is still needed for computer-assisted GO curation. 
Future work should focus on addressing remaining technical challenges for improved performance of automatic GO concept recognition and incorporating practical benefits of text-mining tools into real-world GO annotation. http://www.biocreative.org/tasks/biocreative-iv/track-4-GO/. Published by Oxford University Press 2014. This work is written by US Government employees and is in the public domain in the US.

  11. Evaluation of rotor axial vibrations in a turbo pump unit equipped with an automatic unloading machine

    NASA Astrophysics Data System (ADS)

    Martsynkovskyy, V. A.; Deineka, A.; Kovalenko, V.

    2017-08-01

    The article presents the forced axial vibrations of a rotor with an automatic unloading machine in an oxidizer pump. A feature of the design is the use of slotted throttles with mutually inverse throttling in the automatic unloading system. Their conductivity is determined by numerical experiment in the ANSYS CFX software package.

  12. 20 CFR 661.260 - What are the requirements for automatic designation of workforce investment areas relating to...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., DEPARTMENT OF LABOR STATEWIDE AND LOCAL GOVERNANCE OF THE WORKFORCE INVESTMENT SYSTEM UNDER TITLE I OF THE WORKFORCE INVESTMENT ACT State Governance Provisions § 661.260 What are the requirements for automatic...(a)(2). The Governor has authority to determine the source of population data to use in making these...

  13. Automatic Content Analysis; Part I of Scientific Report No. ISR-18, Information Storage and Retrieval...

    ERIC Educational Resources Information Center

    Cornell Univ., Ithaca, NY. Dept. of Computer Science.

    Four papers are included in Part One of the eighteenth report on Salton's Magical Automatic Retriever of Texts (SMART) project. The first paper: "Content Analysis in Information Retrieval" by S. F. Weiss presents the results of experiments aimed at determining the conditions under which content analysis improves retrieval results as well…

  14. Task relevance modulates the behavioural and neural effects of sensory predictions

    PubMed Central

    Friston, Karl J.; Nobre, Anna C.

    2017-01-01

    The brain is thought to generate internal predictions to optimize behaviour. However, it is unclear whether prediction signalling is an automatic brain function or depends on task demands. Here, we manipulated the spatial/temporal predictability of visual targets, and the relevance of spatial/temporal information provided by auditory cues. We used magnetoencephalography (MEG) to measure participants’ brain activity during task performance. Task relevance modulated the influence of predictions on behaviour: spatial/temporal predictability improved spatial/temporal discrimination accuracy, but not vice versa. To explain these effects, we used behavioural responses to estimate subjective predictions under an ideal-observer model. Model-based time-series of predictions and prediction errors (PEs) were associated with dissociable neural responses: predictions correlated with cue-induced beta-band activity in auditory regions and alpha-band activity in visual regions, while stimulus-bound PEs correlated with gamma-band activity in posterior regions. Crucially, task relevance modulated these spectral correlates, suggesting that current goals influence PE and prediction signalling. PMID:29206225

  15. Great SEP events and space weather: 2. Automatic determination of the solar energetic particle spectrum

    NASA Astrophysics Data System (ADS)

    Applbaum, David; Dorman, Lev; Pustil'Nik, Lev; Sternlieb, Abraham; Zagnetko, Alexander; Zukerman, Igor

    In Applbaum et al. (2010) it was described how the "SEP-Search" program works automatically, determining on the basis of online one-minute NM data the beginning of a great SEP event. "SEP-Search" next uses one-minute data to check whether or not the observed increase reflects the beginning of a real great SEP event. If yes, the program "SEP-Research/Spectrum" automatically starts to work online. We consider two variants: 1) a quiet period (no change in cut-off rigidity), 2) a disturbed period (characterized by possible changes in cut-off rigidity). We describe the method of determining the spectrum of SEP in the first variant (for this we need data for at least two components with different coupling functions). For the second variant we need data for at least three components with different coupling functions. We show that for these purposes one can use data of the total intensity and some different multiplicities, but that it is better to use data from two or three NM with different cut-off rigidities. We describe in detail the algorithms of the program "SEP-Research/Spectrum." We show how this program worked on examples of some historical great SEP events. The work of the NM on Mt. Hermon is supported by an Israeli (Tel Aviv University and ISA) - Italian (UNIRoma-Tre and IFSI-CNR) collaboration.
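In the two-component case, the spectrum determination amounts to solving two equations in the two unknowns of an assumed power-law spectrum D(R) = D0·R^(-γ): the ratio of the observed increases fixes γ, and either increase then fixes D0. A toy sketch, with invented monotone response functions W_k(γ) standing in for the station coupling-function integrals (all names and values hypothetical):

```python
def solve_spectrum(obs1, obs2, W1, W2, gamma_lo=1.0, gamma_hi=8.0, tol=1e-8):
    """Recover (D0, gamma) of a power-law SEP spectrum from two NM components
    with known response functions W_k(gamma), via bisection on the ratio."""
    target = obs1 / obs2
    f = lambda g: W1(g) / W2(g) - target
    lo, hi = gamma_lo, gamma_hi
    assert f(lo) * f(hi) < 0, "bracket must contain the observed ratio"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    gamma = 0.5 * (lo + hi)
    return obs1 / W1(gamma), gamma

# Toy response functions: steeper spectra suppress the high-rigidity station more.
W1 = lambda g: 2.0 ** -g  # e.g. low cut-off rigidity station
W2 = lambda g: 5.0 ** -g  # e.g. high cut-off rigidity station

# Synthetic observations generated with D0 = 100, gamma = 3.
d0, gamma = solve_spectrum(100 * W1(3.0), 100 * W2(3.0), W1, W2)
```

With three components, as needed in the disturbed-period variant, the same scheme extends to a third unknown (the cut-off rigidity change).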

  16. Decaying relevance of clinical data towards future decisions in data-driven inpatient clinical order sets.

    PubMed

    Chen, Jonathan H; Alagappan, Muthuraman; Goldstein, Mary K; Asch, Steven M; Altman, Russ B

    2017-06-01

    Determine how varying longitudinal historical training data can impact prediction of future clinical decisions, and estimate the "decay rate" of clinical data source relevance. We trained a clinical order recommender system, analogous to Netflix or Amazon's "Customers who bought A also bought B..." product recommenders, based on a tertiary academic hospital's structured electronic health record data. We used this system to predict future (2013) admission orders based on different subsets of historical training data (2009 through 2012), relative to existing human-authored order sets. Predicting future (2013) inpatient orders is more accurate with models trained on just one month of recent (2012) data than with 12 months of older (2009) data (ROC AUC 0.91 vs. 0.88, precision 27% vs. 22%, recall 52% vs. 43%, all P < 10^-10). Algorithmically learned models from even the older (2009) data were still more effective than existing human-authored order sets (ROC AUC 0.81, precision 16%, recall 35%). Training with more longitudinal data (2009-2012) was no better than using only the most recent (2012) data, unless a decaying weighting scheme was applied with a "half-life" of data relevance of about 4 months. Clinical practice patterns (automatically) learned from electronic health record data can vary substantially across years. Gold standards for clinical decision support are elusive moving targets, reinforcing the need for automated methods that can adapt to evolving information. Prioritizing small amounts of recent data is more effective than using larger amounts of older data for future clinical predictions. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
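A half-life of about 4 months corresponds to exponential down-weighting of training examples by age. A one-line sketch of such a scheme (the function name is ours, not the paper's):

```python
def relevance_weight(age_months, half_life_months=4.0):
    """Exponentially decaying weight for a training example of the given age."""
    return 0.5 ** (age_months / half_life_months)

w_recent = relevance_weight(0)   # brand-new data: full weight
w_4mo = relevance_weight(4)      # one half-life: half weight
w_3yr = relevance_weight(36)     # 2009-era data barely contributes to a 2012 model
```

Under this weighting, data three years old carries less than 0.2% of the weight of current data, consistent with the finding that old data adds little.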

  17. Automatic Decision Support for Clinical Diagnostic Literature Using Link Analysis in a Weighted Keyword Network.

    PubMed

    Li, Shuqing; Sun, Ying; Soergel, Dagobert

    2017-12-23

    We present a novel approach to recommending articles from the medical literature that support clinical diagnostic decision-making, giving detailed descriptions of the associated ideas and principles. The specific goal is to retrieve biomedical articles that help answer questions of a specified type about a particular case. Based on the filtered keywords, the MeSH (Medical Subject Headings) lexicon, and automatically extracted acronyms, the relationship between keywords and articles was built. The paper gives a detailed description of the process by which keywords were weighted and relevant articles identified based on link analysis in a weighted keyword network. Some important challenges identified in this study include the extraction of diagnosis-related keywords and a collection of valid sentences based on keyword co-occurrence analysis and existing descriptions of symptoms. All data were taken from medical articles provided in the TREC (Text Retrieval Conference) clinical decision support track 2015. Ten standard topics and one demonstration topic were tested. In each case, a maximum of five articles with the highest relevance were returned. The total user satisfaction of 3.98 was 33% higher than average. The results also suggested that the smaller the number of results, the higher the average satisfaction. However, a few shortcomings were also revealed, since medical literature recommendation for clinical diagnostic decision support is so complex a topic that it cannot be fully addressed through the semantic information carried solely by keywords in existing descriptions of symptoms. Nevertheless, the fact that these articles are actually relevant will no doubt inspire future research.
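    Link analysis over a weighted keyword network can be illustrated with a PageRank-style iteration; the toy graph and function below are our own sketch of the general technique, not the paper's actual scoring scheme:

```python
def weighted_pagerank(adj, damping=0.85, iters=50):
    """PageRank over a weighted graph. adj: {node: {neighbor: weight}}
    with positive weights; every node must have at least one out-edge.
    Each node spreads its damped rank to neighbors in proportion to
    edge weight."""
    nodes = list(adj)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, nbrs in adj.items():
            total = sum(nbrs.values())
            for m, w in nbrs.items():
                new[m] += damping * rank[n] * w / total
        rank = new
    return rank

# Hypothetical keyword graph: edge weights stand in for co-occurrence
# strength between diagnosis-related keywords.
g = {"fever": {"influenza": 2.0, "sepsis": 1.0},
     "influenza": {"fever": 2.0},
     "sepsis": {"fever": 1.0}}
scores = weighted_pagerank(g)
# "fever" ends up most central, since both other keywords link to it.
```

    Articles could then be ranked by the summed centrality of the keywords they contain, which is one plausible reading of "link analysis in a weighted keyword network".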

  18. Evaluation of odometry algorithm performances using a railway vehicle dynamic model

    NASA Astrophysics Data System (ADS)

    Allotta, B.; Pugi, L.; Ridolfi, A.; Malvezzi, M.; Vettori, G.; Rindi, A.

    2012-05-01

    In modern railway Automatic Train Protection and Automatic Train Control systems, odometry is a safety-relevant on-board subsystem which estimates the instantaneous speed and the travelled distance of the train; high reliability of the odometry estimate is fundamental, since an error in the train position may lead to a potentially dangerous overestimation of the distance available for braking. To improve the accuracy of the odometry estimate, data fusion of different inputs coming from a redundant sensor layout may be used. Simplified two-dimensional models of railway vehicles have usually been used for hardware-in-the-loop test rig testing of conventional odometry algorithms and of on-board safety-relevant subsystems (like the Wheel Slide Protection braking system) in which the train speed is estimated from measurements of the wheel angular speed. Two-dimensional models are not suitable for developing solutions such as inertial localisation algorithms (using 3D accelerometers and 3D gyroscopes) or the introduction of Global Positioning System (or similar) receivers and magnetometers. In order to test these algorithms correctly and increase odometry performance, a three-dimensional multibody model of a railway vehicle has been developed using Matlab-Simulink™, including an efficient contact model which can simulate degraded adhesion conditions (the development and prototyping of odometry algorithms involve the simulation of realistic environmental conditions). In this paper, the authors show how a 3D railway vehicle model, able to simulate the complex interactions arising between different on-board subsystems, can be useful for evaluating the performance of odometry algorithms and safety-relevant on-board subsystems.
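    One standard data-fusion pattern for such a redundant sensor layout is a complementary filter: integrate the accelerometer on short timescales (immune to wheel slide) while leaning on the wheel-speed measurement on long timescales (no integration drift). This is a generic sketch under our own assumptions, not the authors' algorithm:

```python
def fuse_speed(v_prev: float, accel: float, v_wheel: float,
               dt: float, alpha: float = 0.98) -> float:
    """Complementary filter blending integrated longitudinal
    acceleration (high-pass path) with a wheel-speed measurement
    (low-pass path). alpha is a hypothetical blend factor."""
    return alpha * (v_prev + accel * dt) + (1 - alpha) * v_wheel

# With good adhesion the two sources agree; during wheel slide the
# biased wheel-speed reading is largely rejected.
v = 10.0                                              # m/s, true speed
v = fuse_speed(v, accel=0.0, v_wheel=2.0, dt=0.01)    # locked axle reads 2 m/s
```

    Over many time steps the estimate slowly converges to the wheel-speed reading, so a practical implementation would also gate or reweight the wheel channel when slide is detected.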

  19. Predictors of Mental Health Symptoms, Automatic Thoughts, and Self-Esteem Among University Students.

    PubMed

    Hiçdurmaz, Duygu; İnci, Figen; Karahan, Sevilay

    2017-01-01

    University students are an at-risk group regarding mental health, and many mental health problems are frequent in this group. Sociodemographic factors such as level of income, and familial factors such as the relationship with the father, are reported to be associated with mental health symptoms, automatic thoughts, and self-esteem. There are also interrelations between mental health problems, automatic thoughts, and self-esteem. The extent of the predictive effect of each of these variables on automatic thoughts, self-esteem, and mental health symptoms is not known. We aimed to determine the predictive factors of mental health symptoms, automatic thoughts, and self-esteem in university students. Participants were 530 students enrolled at a university in Turkey during the 2014-2015 academic year. Data were collected using the student information form, the Brief Symptom Inventory, the Automatic Thoughts Questionnaire, and the Rosenberg Self-Esteem Scale. Mental health symptoms, self-esteem, perception of the relationship with the father, and level of income as a student significantly predicted automatic thoughts. Automatic thoughts, mental health symptoms, participation in family decisions, and age had significant predictive effects on self-esteem. Finally, automatic thoughts, self-esteem, age, and perception of the relationship with the father had significant predictive effects on mental health symptoms. The predictive factors revealed in our study provide important information to practitioners and researchers by showing the elements that need to be screened in the mental health of university students and the issues that need to be included in counseling activities.

  20. On the Automatic Decellularisation of Porcine Aortae: A Repeatability Study Using a Non-Enzymatic Approach.

    PubMed

    O'Connor Mooney, Rory; Davis, Niall Francis; Hoey, David; Hogan, Lisa; McGloughlin, Timothy M; Walsh, Michael T

    2016-01-01

    To investigate the repeatability of automatic decellularisation of porcine aortae using a non-enzymatic approach, addressing current limitations associated with other automatic decellularisation processes. Individual porcine aortae (n = 3) were resected and every third segment (n = 4) was allocated to one of three different groups: a control or a manually or automatically decellularised group. Manual and automatic decellularisation were performed using Triton X-100 (2% v/v) and sodium deoxycholate. Protein preservation and the elimination of a galactosyl-α(1,3)galactose (GAL) epitope were measured using immunohistochemistry and protein binding assays. The presence of residual DNA was determined with gel electrophoresis and spectrophotometry. Scaffold integrity was characterised with scanning electron microscopy and uni-axial tensile testing. Manual and automatic results were compared to one another, to control groups and to current gold standards. The results were comparable to those of current gold standard decellularisation techniques. Successful repeatability was achieved, both manually and automatically, with little effect on mechanical characteristics. Complete acellularity was not confirmed in either decellularisation group. Protein preservation was consistent in both the manually and automatically decellularised groups and between each individual aorta. Elimination of GAL was not achieved. Repeatable automatic decellularisation of porcine aortae is feasible using a Triton X-100-sodium deoxycholate protocol. Protein preservation was satisfactory; however, gold standard thresholds for permissible residual DNA levels were not achieved. Future research will focus on addressing this issue by optimisation of the existing protocol for thick tissues. © 2016 S. Karger AG, Basel.

  1. What will happen to retirement income for 401(k) participants after the market decline?

    PubMed

    VanDerhei, Jack

    2010-04-01

    This paper uses administrative data from millions of 401(k) participants dating back to 1996, as well as several simulation models, to determine 401(k) plans' susceptibility to several alleged limitations as well as their potential for significant retirement wealth accumulation for employees whose employers have chosen to sponsor these plans. What happens to 401(k) participants after the 2008 market decline will be largely determined by the extent to which the features of automatic enrollment, automatic escalation of contributions, and automatic investment are allowed to play out. Simulation results suggest that the first two features will significantly improve retirement wealth for the lowest-income quartiles going forward, and that the third feature (primarily target-date funds) would allow a large percentage of those on the verge of retirement to benefit significantly from a reduction of equity concentrations to a more age-appropriate level.

  2. Optimization and automation of quantitative NMR data extraction.

    PubMed

    Bernstein, Michael A; Sýkora, Stan; Peng, Chen; Barba, Agustín; Cobas, Carlos

    2013-06-18

    NMR is routinely used to quantitate chemical species. The necessary experimental procedures to acquire quantitative data are well known, but relatively little attention has been paid to data processing and analysis. We describe here a robust expert system that can be used to automatically choose the best signals in a sample for overall concentration determination and to determine analyte concentration using all accepted methods. The algorithm is based on complete deconvolution of the spectrum, which makes it tolerant of cases where signals are very close to one another, and includes robust methods for the automatic classification of NMR resonances and molecule-to-spectrum multiplet assignment. With the functionality in place and optimized, it is then a relatively simple matter to apply the same workflow to data in a fully automatic way. The procedure is desirable for both its inherent performance and its applicability to NMR data acquired for very large sample sets.
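    The underlying quantitation relation is the standard internal-standard qNMR rule: concentration scales with the per-proton integral. A minimal sketch (names and numbers are illustrative, not the expert system itself):

```python
def concentration(integral_analyte: float, n_h_analyte: int,
                  integral_ref: float, n_h_ref: int,
                  conc_ref: float) -> float:
    """Internal-standard qNMR: concentrations are proportional to the
    per-proton integral (peak integral / number of contributing 1H)."""
    per_h_analyte = integral_analyte / n_h_analyte
    per_h_ref = integral_ref / n_h_ref
    return conc_ref * per_h_analyte / per_h_ref

# Hypothetical example: an analyte methyl singlet (3H) integrates to
# 150 while a 1H reference signal at 10 mM integrates to 100, giving
# an analyte concentration of 5 mM.
c = concentration(150.0, 3, 100.0, 1, 10.0)
```

    The expert system's contribution is choosing which signals are clean enough to feed into this relation, via full spectral deconvolution.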

  3. Automatic miniaturized fluorometric flow system for chemical and toxicological control of glibenclamide.

    PubMed

    Ribeiro, David S M; Prior, João A V; Taveira, Christian J M; Mendes, José M A F S; Santos, João L M

    2011-06-15

    In this work, an automatic, fast-screening miniaturized flow system was developed for the first time for the toxicological control of glibenclamide in beverages, with application in forensic laboratory investigations, and also for the chemical control of commercially available pharmaceutical formulations. The automatic system exploited the multipumping flow system (MPFS) concept and allowed the implementation of a new glibenclamide determination method based on the fluorometric monitoring of the drug in acidic medium (λ(ex)=301 nm; λ(em)=404 nm), in the presence of an anionic surfactant (SDS), which promotes an organized micellar medium that enhances the fluorometric measurements. The developed approach assured good recoveries in the analysis of five spiked alcoholic beverages. Additionally, good agreement was verified when comparing the results obtained in the determination of glibenclamide in five commercial pharmaceutical formulations by the proposed method and by the pharmacopoeia reference procedure. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Imagination and society: the role of visual sociology.

    PubMed

    Cipriani, Roberto; Del Re, Emanuela C

    2012-10-01

    The paper presents the field of Visual Sociology as an approach that makes use of photographs, films, documentaries, and videos to capture and assess aspects of social life and social signals. It overviews some relevant works in the field and deals with methodological and epistemological issues, raising the question of the relation between the observer and the observed; it also makes reference to some methods of analysis, such as those proposed by Grounded Theory, and to some connected tools for automatic qualitative analysis, like NVivo. The relevance of visual sociology to the study of social signals lies in the fact that it can validly integrate the information, introducing a multi-modal approach into the analysis of social signals.

  5. Impact of Advanced Avionics Technology on Ground Attack Weapon Systems.

    DTIC Science & Technology

    1982-02-01

    as the relevant feature. 3.0 Problem The task is to perform the automatic cueing of moving objects in a natural environment. Additional problems...views on this subject to the American Defense Preparedness Association (ADPA) on 11 February 1981 in Orlando, Florida. ENVIRONMENTAL CONDITIONS OUR...the operating window or the environmental conditions of combat that our forces may encounter worldwide. The three areas selected were Europe, the

  6. Earlinet single calculus chain: new products overview

    NASA Astrophysics Data System (ADS)

    D'Amico, Giuseppe; Mattis, Ina; Binietoglou, Ioannis; Baars, Holger; Mona, Lucia; Amato, Francesco; Kokkalis, Panos; Rodríguez-Gómez, Alejandro; Soupiona, Ourania; Kalliopi-Artemis, Voudouri

    2018-04-01

    The Single Calculus Chain (SCC) is an automatic and flexible tool to analyze raw lidar data using EARLINET quality-assured retrieval algorithms. It has already been demonstrated that the SCC can retrieve reliable aerosol backscatter and extinction coefficient profiles for different lidar systems. In this paper we provide an overview of new SCC products such as particle linear depolarization ratio, cloud masking, and aerosol layering, which allow relevant improvements in atmospheric aerosol characterization.

  7. Nonconscious Control Mimics a Purposeful Strategy: Strength of Stroop-Like Interference Is Automatically Modulated by Proportion of Compatible Trials

    ERIC Educational Resources Information Center

    Klapp, Stuart T.

    2007-01-01

    The magnitude of the Stroop effect is known to be modulated by the proportion of trials on which the irrelevant word and relevant ink color correspond. This has often been attributed to a conscious strategy of increased (or decreased) reliance on the irrelevant words when these are more likely (or less likely) to correspond to the ink colors.…

  8. Kinematic and Dynamic Analysis of High-Speed Intermittent-Motion Mechanisms.

    DTIC Science & Technology

    1984-01-16

    intermittent-motion mechanisms which have potential application to the high-speed automatic weapon system, and an investigation on the workspace of a robotic...manipulator system. The problems of this investigation belong to a selected group of unsolved or partially solved problems which are relevant and...design of high-speed machinery and automated manufacturing systems.

  9. Selected aspects of microelectronics technology and applications: Numerically controlled machine tools. Technology trends series no. 2

    NASA Astrophysics Data System (ADS)

    Sigurdson, J.; Tagerud, J.

    1986-05-01

    A UNIDO publication about machine tools with automatic control discusses the following: (1) numerical control (NC) machine tool perspectives, definition of NC, flexible manufacturing systems, robots and their industrial application, research and development, and sensors; (2) experience in developing a capability in NC machine tools; (3) policy issues; (4) procedures for retrieval of relevant documentation from data bases. Diagrams, statistics, bibliography are included.

  10. An HL7/CDA Framework for the Design and Deployment of Telemedicine Services

    DTIC Science & Technology

    2001-10-25

    schemes and prescription databases. Furthermore, interoperability with the Electronic Health Record (EHR) facilitates automatic retrieval of relevant...local EHR system or the integrated electronic health record (I-EHR) [9], which indexes all medical contacts of a patient in the regional network...suspected medical problem. Interoperability with middleware services of the HII and other data sources such as the local EHR system affects

  11. Semi-automatic feedback using concurrence between mixture vectors for general databases

    NASA Astrophysics Data System (ADS)

    Larabi, Mohamed-Chaker; Richard, Noel; Colot, Olivier; Fernandez-Maloigne, Christine

    2001-12-01

    This paper describes how a query system can exploit basic knowledge by employing semi-automatic relevance feedback to refine queries and runtimes. For general databases, it is often useless to invoke complex attributes, because we do not have sufficient information about the images in the database. Moreover, these images can be topologically very different from one another, and an attribute that is powerful for one database category may be very weak for the other categories. The idea is to use very simple features, such as color histograms, correlograms, and Color Coherence Vectors (CCV), to fill out the signature vector. A number of mixture vectors is then prepared, depending on the number of clearly distinct categories in the database; a mixture vector contains the weight of each attribute that will be used to compute a similarity distance. We post a query in the database using each of the mixture vectors in turn and retain the first N images for each vector in order to perform a mapping using the following information: Is image I present in the results of several mixture vectors? What is its rank in the results? This information allows us to switch the system to unsupervised relevance feedback or to the user's (supervised) feedback.
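    A mixture vector, as described, weights per-attribute distances into one similarity score. A minimal sketch under our own naming assumptions (attribute names, L1 distance, and toy signatures are illustrative):

```python
def mixture_distance(sig_a: dict, sig_b: dict, mixture: dict) -> float:
    """Weighted distance between two signature vectors. Each attribute
    (e.g. histogram, correlogram, CCV) contributes an L1 distance
    scaled by its mixture weight."""
    total = 0.0
    for attr, weight in mixture.items():
        d = sum(abs(x - y) for x, y in zip(sig_a[attr], sig_b[attr]))
        total += weight * d
    return total

# One mixture vector per database category; this one emphasizes the
# color histogram.
mixture = {"histogram": 0.6, "correlogram": 0.3, "ccv": 0.1}
a = {"histogram": [0.2, 0.8], "correlogram": [0.5, 0.5], "ccv": [1.0, 0.0]}
b = {"histogram": [0.4, 0.6], "correlogram": [0.5, 0.5], "ccv": [0.0, 1.0]}
d = mixture_distance(a, b, mixture)  # 0.6*0.4 + 0.3*0.0 + 0.1*2.0 = 0.44
```

    Running the same query once per mixture vector and intersecting the top-N result lists gives the rank/co-occurrence information the paper uses to choose between unsupervised and supervised feedback.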

  12. Hierarchthis: An Interactive Interface for Identifying Mission-Relevant Components of the Advanced Multi-Mission Operations System

    NASA Technical Reports Server (NTRS)

    Litomisky, Krystof

    2012-01-01

    Even though NASA's space missions are many and varied, there are some tasks that are common to all of them. For example, all spacecraft need to communicate with other entities, and all spacecraft need to know where they are. These tasks use tools and services that can be inherited and reused between missions, reducing systems engineering effort and therefore reducing cost. The Advanced Multi-Mission Operations System, or AMMOS, is a collection of multimission tools and services, whose development and maintenance are funded by NASA. I created HierarchThis, a plugin designed to provide an interactive interface to help customers identify mission-relevant tools and services. HierarchThis automatically creates diagrams of the AMMOS database, and then allows users to show/hide specific details through a graphical interface. Once customers identify tools and services they want for a specific mission, HierarchThis can automatically generate a contract between the Multimission Ground Systems and Services Office, which manages AMMOS, and the customer. The document contains the selected AMMOS components, along with their capabilities and satisfied requirements. HierarchThis reduces the time needed for the process from service selections to having a mission-specific contract from the order of days to the order of minutes.

  13. Performance evaluation of a digital mammography unit using a contrast-detail phantom

    NASA Astrophysics Data System (ADS)

    Elizalde-Cabrera, J.; Brandan, M.-E.

    2015-01-01

    The relation between image quality and mean glandular dose (MGD) has been studied for a Senographe 2000D mammographic unit used for research in our laboratory. The magnitudes were evaluated for a clinically relevant range of acrylic thicknesses and radiological techniques. The CDMAM phantom was used to determine the contrast-detail curve. Also, an alternative method based on the analysis of signal-to-noise (SNR) and contrast-to-noise (CNR) ratios from the CDMAM image was proposed and applied. A simple numerical model was utilized to successfully interpret the results. Optimum radiological techniques were determined using the figures of merit FOM_SNR = SNR^2/MGD and FOM_CNR = CNR^2/MGD. The main results were: the evaluation of the detector response flattening process (it reduces by about one half the spatial non-homogeneities due to the X-ray field), MGD measurements (the values comply with standards), and verification of the automatic exposure control performance (it is sensitive to fluence attenuation, not to contrast). For 4-5 cm phantom thicknesses, the optimum radiological techniques were Rh/Rh 34 kV to optimize SNR, and Rh/Rh 28 kV to optimize CNR.

  14. Assessment of local pulse wave velocity distribution in mice using k-t BLAST PC-CMR with semi-automatic area segmentation.

    PubMed

    Herold, Volker; Herz, Stefan; Winter, Patrick; Gutjahr, Fabian Tobias; Andelovic, Kristina; Bauer, Wolfgang Rudolf; Jakob, Peter Michael

    2017-10-16

    Local aortic pulse wave velocity (PWV) is a measure of vascular stiffness and has predictive value for cardiovascular events. Ultra-high-field CMR scanners allow the quantification of local PWV in mice; however, these systems are as yet unable to monitor the distribution of local elasticities. In the present study we provide a new accelerated method to quantify local aortic PWV in mice with phase-contrast cardiovascular magnetic resonance imaging (PC-CMR) at 17.6 T. Based on a k-t BLAST (Broad-use Linear Acquisition Speed-up Technique) undersampling scheme, total measurement time could be reduced by a factor of 6. The fast data acquisition makes it possible to quantify the local PWV at several locations along the aorta based on the evaluation of local temporal changes in blood flow and vessel cross-sectional area. To speed up post-processing and to eliminate operator bias, we introduce a new semi-automatic segmentation algorithm to quantify cross-sectional areas of the aortic vessel. The new methods were applied in 10 eight-month-old mice (4 C57BL/6J mice and 6 ApoE(-/-) mice) at 12 adjacent locations along the abdominal aorta. Accelerated data acquisition and semi-automatic post-processing delivered reliable measures of the local PWV, similar to those obtained with full data sampling and manual segmentation. No statistically significant differences of the mean values could be detected for the different measurement approaches. Mean PWV values were elevated for the ApoE(-/-) group compared to the C57BL/6J group (3.5 ± 0.7 m/s vs. 2.2 ± 0.4 m/s, p < 0.01). A more heterogeneous PWV distribution was observed in the ApoE(-/-) animals compared to the C57BL/6J mice, reflecting the local character of lesion development in atherosclerosis. In the present work, we showed that k-t BLAST PC-CMR enables the measurement of the local PWV distribution in the mouse aorta. The semi-automatic segmentation method based on PC-CMR data allowed rapid determination of local PWV. The findings of this study demonstrate the ability of the proposed methods to non-invasively quantify the spatial variations in local PWV along the aorta of ApoE(-/-) mice as a relevant model of atherosclerosis.

  15. Piloted Simulation Evaluation of a Model-Predictive Automatic Recovery System to Prevent Vehicle Loss of Control on Approach

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan; Liu, Yuan; Sowers, T. Shane; Owen, A. Karl; Guo, Ten-Huei

    2014-01-01

    This paper describes a model-predictive automatic recovery system for aircraft on the verge of a loss-of-control situation. The system determines when it must intervene to prevent an imminent accident, resulting from a poor approach. It estimates the altitude loss that would result from a go-around maneuver at the current flight condition. If the loss is projected to violate a minimum altitude threshold, the maneuver is automatically triggered. The system deactivates to allow landing once several criteria are met. Piloted flight simulator evaluation showed the system to provide effective envelope protection during extremely unsafe landing attempts. The results demonstrate how flight and propulsion control can be integrated to recover control of the vehicle automatically and prevent a potential catastrophe.
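    The trigger logic described above reduces to an altitude-margin check; the threshold value below is an arbitrary placeholder, and in the real system the predicted altitude loss comes from a model-predictive estimate rather than being passed in directly:

```python
def should_trigger_go_around(current_altitude_ft: float,
                             predicted_altitude_loss_ft: float,
                             minimum_altitude_ft: float = 50.0) -> bool:
    """Trigger the automatic recovery if a go-around maneuver started
    now would dip below the minimum altitude threshold. All names and
    the 50 ft default are illustrative, not the flight system's."""
    return (current_altitude_ft - predicted_altitude_loss_ft
            < minimum_altitude_ft)

# At 100 ft with a predicted 80 ft loss, the margin (20 ft) violates
# the threshold and the maneuver is triggered automatically.
trigger = should_trigger_go_around(100.0, 80.0)
```

    The interesting part of the actual system is estimating the altitude loss at the current flight condition; this sketch only shows how that estimate gates the automatic maneuver.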

  16. An efficient algorithm for automatic phase correction of NMR spectra based on entropy minimization

    NASA Astrophysics Data System (ADS)

    Chen, Li; Weng, Zhiqiang; Goh, LaiYoong; Garland, Marc

    2002-09-01

    A new algorithm for automatic phase correction of NMR spectra based on entropy minimization is proposed. The optimal zero-order and first-order phase corrections for an NMR spectrum are determined by minimizing entropy. The objective function is constructed using a Shannon-type information entropy measure, computed from the normalized derivative of the NMR spectral data. The algorithm has been successfully applied to experimental 1H NMR spectra. The results of automatic phase correction are found to be comparable to, or perhaps better than, manual phase correction. The advantages of this automatic phase correction algorithm include its simple mathematical basis and its straightforward, reproducible, and efficient optimization procedure. The algorithm is implemented in the Matlab program ACME (Automated phase Correction based on Minimization of Entropy).
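    A brute-force sketch of the idea in Python rather than Matlab: phase the complex spectrum, score the real part by the entropy of its normalized derivative, and keep the phases that minimize the score. The published ACME algorithm uses a proper optimizer, and we add a penalty on negative intensities (as practical implementations do) to resolve the 180° ambiguity; grid ranges and step counts are our assumptions:

```python
import numpy as np

def acme_objective(real_part: np.ndarray, gamma: float = 1000.0) -> float:
    """Shannon-type entropy of the normalized |first derivative| of
    the spectrum, plus a penalty on negative intensities."""
    deriv = np.abs(np.diff(real_part))
    p = deriv / deriv.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log(p))
    neg = real_part[real_part < 0.0]
    return entropy + gamma * np.sum(neg ** 2)

def auto_phase(spectrum: np.ndarray, n_steps: int = 91):
    """Grid search for zero-order (ph0) and first-order (ph1) phase
    corrections minimizing the objective; `spectrum` is complex 1D."""
    x = np.arange(len(spectrum)) / len(spectrum)
    grid = np.linspace(-np.pi, np.pi, n_steps)
    best = (0.0, 0.0, np.inf)
    for ph0 in grid:
        for ph1 in grid:
            phased = (spectrum * np.exp(1j * (ph0 + ph1 * x))).real
            h = acme_objective(phased)
            if h < best[2]:
                best = (ph0, ph1, h)
    return best
```

    The absorptive (correctly phased) lineshape is the narrowest, so its derivative is the most concentrated and its entropy the lowest; the negativity penalty rules out the sign-flipped solution that has the same entropy.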

  17. Automatic video shot boundary detection using k-means clustering and improved adaptive dual threshold comparison

    NASA Astrophysics Data System (ADS)

    Sa, Qila; Wang, Zhihui

    2018-03-01

    At present, content-based video retrieval (CBVR) is the most mainstream video retrieval method, using the video's own features to perform automatic identification and retrieval. This method involves a key technology, i.e. shot segmentation. In this paper, a method for automatic video shot boundary detection using K-means clustering and improved adaptive dual threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the K-means clustering algorithm: one with significant change and one with no significant change. Then, the improved adaptive dual threshold comparison method is applied to the classification results to determine both abrupt and gradual shot boundaries. Finally, an automatic video shot boundary detection system is achieved.

  18. Effectiveness of sequential automatic-manual home respiratory polygraphy scoring.

    PubMed

    Masa, Juan F; Corral, Jaime; Pereira, Ricardo; Duran-Cantolla, Joaquin; Cabello, Marta; Hernández-Blasco, Luis; Monasterio, Carmen; Alonso-Fernandez, Alberto; Chiner, Eusebi; Vázquez-Polo, Francisco-José; Montserrat, Jose M

    2013-04-01

    Automatic home respiratory polygraphy (HRP) scoring functions can potentially confirm the diagnosis of sleep apnoea-hypopnoea syndrome (SAHS) (obviating technician scoring) in a substantial number of patients. The result would have important management and cost implications. The aim of this study was to determine the diagnostic cost-effectiveness of a sequential HRP scoring protocol (automatic and then manual for residual cases) compared with manual HRP scoring, and with in-hospital polysomnography. We included suspected SAHS patients in a multicentre study and assigned them to home and hospital protocols at random. We constructed receiver operating characteristic (ROC) curves for manual and automatic scoring. Diagnostic agreement for several cut-off points was explored and costs for two equally effective alternatives were calculated. Of 366 randomised patients, 348 completed the protocol. Manual scoring produced better ROC curves than automatic scoring. There was no sensitive automatic or subsequent manual HRP apnoea-hypopnoea index (AHI) cut-off point. The specific cut-off points for automatic and subsequent manual HRP scorings (AHI >25 and >20, respectively) had a specificity of 93% for automatic and 94% for manual scorings. The costs of manual protocol were 9% higher than sequential HRP protocol; these were 69% and 64%, respectively, of the cost of the polysomnography. A sequential HRP scoring protocol is a cost-effective alternative to polysomnography, although with limited cost savings compared to HRP manual scoring.

  19. Automatic location of disruption times in JET

    NASA Astrophysics Data System (ADS)

    Moreno, R.; Vega, J.; Murari, A.

    2014-11-01

    The loss of stability and confinement in tokamak plasmas can induce critical events known as disruptions. Disruptions produce strong electromagnetic forces and thermal loads which can damage fundamental components of the devices. Determining the disruption time is extremely important for various disruption studies: theoretical models, physics-driven models, and disruption predictors. In JET, during the experimental campaigns with the JET-C (Carbon Fiber Composite) wall, a common criterion to determine the disruption time consisted of locating the time of the thermal quench. However, with the metallic ITER-like wall (JET-ILW), this criterion is usually not valid: several thermal quenches may occur prior to the current quench, but the temperature recovers. Therefore, a new criterion has to be defined. A possibility is to use the start of the current quench as the disruption time. This work describes the implementation of an automatic data processing method to estimate the disruption time according to this new definition. This automatic determination both reduces the human effort needed to locate disruption times and standardizes the estimates (with the benefit of being less vulnerable to human error).
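    A minimal sketch of locating the start of the current quench from a plasma-current trace: find the first instant where dIp/dt drops below a large negative threshold. The threshold is machine-specific and our value is illustrative; the actual JET method is considerably more careful than this:

```python
import numpy as np

def current_quench_start(time, ip, slope_threshold):
    """Estimate the disruption time as the first instant where the
    plasma current derivative dIp/dt falls below a (large negative)
    threshold, i.e. the onset of the current quench. Returns None if
    no quench is found. `slope_threshold` is illustrative, not JET's
    actual criterion."""
    didt = np.gradient(ip, time)
    idx = np.nonzero(didt < slope_threshold)[0]
    return time[idx[0]] if idx.size else None
```

    In practice the signal would first be smoothed, and the threshold scaled to the pre-quench current level, to avoid triggering on noise or on minor transients.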

  20. iAnn: an event sharing platform for the life sciences.

    PubMed

    Jimenez, Rafael C; Albar, Juan P; Bhak, Jong; Blatter, Marie-Claude; Blicher, Thomas; Brazas, Michelle D; Brooksbank, Cath; Budd, Aidan; De Las Rivas, Javier; Dreyer, Jacqueline; van Driel, Marc A; Dunn, Michael J; Fernandes, Pedro L; van Gelder, Celia W G; Hermjakob, Henning; Ioannidis, Vassilios; Judge, David P; Kahlem, Pascal; Korpelainen, Eija; Kraus, Hans-Joachim; Loveland, Jane; Mayer, Christine; McDowall, Jennifer; Moran, Federico; Mulder, Nicola; Nyronen, Tommi; Rother, Kristian; Salazar, Gustavo A; Schneider, Reinhard; Via, Allegra; Villaveces, Jose M; Yu, Ping; Schneider, Maria V; Attwood, Teresa K; Corpas, Manuel

    2013-08-01

    We present iAnn, an open source community-driven platform for dissemination of life science events, such as courses, conferences and workshops. iAnn allows automatic visualisation and integration of customised event reports. A central repository lies at the core of the platform: curators add submitted events, and these are subsequently accessed via web services. Thus, once an iAnn widget is incorporated into a website, it permanently shows timely relevant information as if it were native to the remote site. At the same time, announcements submitted to the repository are automatically disseminated to all portals that query the system. To facilitate the visualization of announcements, iAnn provides powerful filtering options and views, integrated in Google Maps and Google Calendar. All iAnn widgets are freely available. http://iann.pro/iannviewer manuel.corpas@tgac.ac.uk.

  1. Accessibility assessment of assistive technology for the hearing impaired.

    PubMed

    Áfio, Aline Cruz Esmeraldo; Carvalho, Aline Tomaz de; Caravalho, Luciana Vieira de; Silva, Andréa Soares Rocha da; Pagliuca, Lorita Marlena Freitag

    2016-01-01

    To assess the automatic accessibility of assistive technology in online courses for the hearing impaired. An evaluation study guided by the Assessment and Maintenance step proposed in the Model of Development of Digital Educational Material. The software Assessor and Simulator for the Accessibility of Sites (ASES) was used to analyze the online course "Education on Sexual and Reproductive Health: the use of condoms" according to national and international website accessibility standards. An error report generated by the program identified, in each didactic module, one error and two warnings related to two international principles, and six warnings related to six national recommendations. The warnings relevant to hearing-impaired people were corrected, and the course was considered accessible by automatic assessment. We concluded that the pages of the course were considered, by the software used, appropriate to web accessibility standards.

  2. PROBLEMS OF CYBERNETICS AND SPACE MEDICINE (in Russian)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parin, V.V.; Baevskii, R.M.

    1963-01-01

    Problems of cybernetics are discussed with reference to space medicine. Information theory is widely used for solving problems relevant to the radiotelemetric transmission of biological data. The construction of devices for automatic medical monitoring of the condition of a spacecraft crew has a direct bearing on electronic diagnostic machines. Mathematical methods and computing techniques are used for analyzing experimental evidence. The theory of automatic regulation was applied to modeling physiological reactions, developing closed ecological systems, and solving the problems of piloting spacecraft. The problems bearing on the modifications undergone by information in the brain are of primary importance for the study of the effect of space flight conditions upon the efficiency of man and the activity of his nervous system and analyzers. (P.C.H.)

  3. Effectiveness of home single-channel nasal pressure for sleep apnea diagnosis.

    PubMed

    Masa, Juan F; Duran-Cantolla, Joaquin; Capote, Francisco; Cabello, Marta; Abad, Jorge; Garcia-Rio, Francisco; Ferrer, Antoni; Mayos, Merche; Gonzalez-Mangado, Nicolas; de la Peña, Monica; Aizpuru, Felipe; Barbe, Ferran; Montserrat, Jose M; Larrateguy, Luis D; de Castro, Jorge Rey; Garcia-Ledesma, Estefania; Utrabo, Isabel; Corral, Jaime; Martinez-Null, Cristina; Egea, Carlos; Cancelo, Laura; García-Díaz, Emilio; Carmona-Bernal, Carmen; Sánchez-Armengol, Angeles; Fortuna, Ana M; Miralda, Rosa M; Troncoso, Maria F; Monica, Gonzalez; Martinez-Martinez, Marian; Cantalejo, Olga; Piérola, Javier; Vigil, Laura; Embid, Cristina; Del Mar Centelles, Mireia; Prieto, Teresa Ramírez; Rojo, Blas; Vanesa, Lores

    2014-12-01

    Home single-channel nasal pressure (HNP) may be an alternative to polysomnography (PSG) for obstructive sleep apnea (OSA) diagnosis, but no cost studies have yet been carried out. Automatic scoring is simpler but generally less effective than manual scoring. We aimed to determine the diagnostic efficacy and cost of both scorings (automatic and manual) compared with PSG, using several polysomnographic apnea-hypopnea index (AHI) cutoff points to define OSA. We included suspected OSA patients in a multicenter study. They were randomized to home and hospital protocols. We constructed receiver operating characteristic (ROC) curves for both scorings. Diagnostic efficacy was explored for several HNP AHI cutoff points, and costs were calculated for equally effective alternatives. Of 787 randomized patients, 752 underwent HNP. Manual scoring produced better ROC curves than automatic scoring for AHI < 15; similar curves were obtained for AHI ≥ 15. A valid HNP with manual scoring would determine the presence of OSA (or otherwise) in 90% of patients with a polysomnographic AHI ≥ 5 cutoff point, in 74% of patients with a polysomnographic AHI ≥ 10 cutoff point, and in 61% of patients with a polysomnographic AHI ≥ 15 cutoff point. In the same way, a valid HNP with automatic scoring would determine the presence of OSA (or otherwise) in 73% of patients with a polysomnographic AHI ≥ 5 cutoff point, in 64% of patients with a polysomnographic AHI ≥ 10 cutoff point, and in 57% of patients with a polysomnographic AHI ≥ 15 cutoff point. The costs of either HNP approach were 40% to 70% lower than those of PSG at the same level of diagnostic efficacy. Manual HNP had the lowest cost for low polysomnographic AHI levels (≥ 5 and ≥ 10), and manual and automatic scorings had similar costs for higher polysomnographic cutoff points (AHI ≥ 15). Home single-channel nasal pressure (HNP) is a cheaper alternative to polysomnography for obstructive sleep apnea diagnosis.
HNP with manual scoring seems to have better diagnostic accuracy and a lower cost than automatic scoring for patients with low apnea-hypopnea index (AHI) levels, whereas automatic scoring has diagnostic accuracy and cost similar to manual scoring for intermediate and high AHI levels. Therefore, automatic scoring can be appropriately used, although diagnostic efficacy could improve if manual scoring were carried out on patients with AHI < 15. Clinicaltrials.gov identifier: NCT01347398. © 2014 Associated Professional Sleep Societies, LLC.
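
    As an illustration of the kind of cutoff-point comparison described above, the sketch below computes the fraction of patients whose home-device classification (OSA vs. no OSA at a given AHI cutoff) agrees with the PSG classification. The AHI values and the `concordance` helper are hypothetical, not the study's actual protocol.

```python
# Illustrative only: fraction of patients whose home-device (HNP)
# classification agrees with the PSG classification at a given AHI cutoff.
def concordance(hnp_ahi, psg_ahi, cutoff):
    pairs = list(zip(hnp_ahi, psg_ahi))
    agree = sum((h >= cutoff) == (p >= cutoff) for h, p in pairs)
    return agree / len(pairs)

hnp = [3, 8, 12, 20, 35, 4, 18, 6]    # hypothetical HNP AHI values
psg = [2, 9, 16, 22, 30, 7, 16, 5]    # hypothetical PSG AHI values
print(concordance(hnp, psg, 15))      # → 0.875
```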

  4. Vertical-Control Subsystem for Automatic Coal Mining

    NASA Technical Reports Server (NTRS)

    Griffiths, W. R.; Smirlock, M.; Aplin, J.; Fish, R. B.; Fish, D.

    1984-01-01

    Guidance and control system automatically positions cutting drums of double-ended longwall shearer so they follow coal seam. System determines location of upper interface between coal and shale and continuously adjusts cutting-drum positions, upward or downward, to track undulating interface. Objective to keep cutting edges as close as practicable to interface and thus extract as much coal as possible from seam.

  5. Automatic Color Sorting of Hardwood Edge-Glued Panel Parts

    Treesearch

    D. Earl Kline; Richard Conners; Qiang Lu; Philip A. Araman

    1997-01-01

    This paper describes an automatic color sorting system for red oak edge-glued panel parts. The color sorting system simultaneously examines both faces of a panel part and then determines which face has the "best" color, and sorts the part into one of a number of color classes at plant production speeds. Initial test results show that the system generated over...

  6. How Do Movements to Produce Letters Become Automatic during Writing Acquisition? Investigating the Development of Motor Anticipation

    ERIC Educational Resources Information Center

    Kandel, Sonia; Perret, Cyril

    2015-01-01

    Learning how to write involves the automation of grapho-motor skills. One of the factors that determine automaticity is "motor anticipation." This is the ability to write a letter while processing information on how to produce following letters. It is essential for writing fast and smoothly. We investigated how motor anticipation…

  7. Machine Beats Experts: Automatic Discovery of Skill Models for Data-Driven Online Course Refinement

    ERIC Educational Resources Information Center

    Matsuda, Noboru; Furukawa, Tadanobu; Bier, Norman; Faloutsos, Christos

    2015-01-01

    How can we automatically determine which skills must be mastered for the successful completion of an online course? Large-scale online courses (e.g., MOOCs) often contain a broad range of contents frequently intended to be a semester's worth of materials; this breadth often makes it difficult to articulate an accurate set of skills and knowledge…

  8. Use of Automatic Interaction Detector in Monitoring Faculty Salaries. AIR 1983 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Cohen, Margaret E.

    A university's use of the Automatic Interaction Detector (AID) to monitor faculty salary data is described. The first step consists of examining a tree diagram and summary table produced by AID. The tree is used to identify the characteristics of faculty at different salary levels. The table is used to determine the explanatory power of the…

  9. SHIELD: FITGALAXY -- A Software Package for Automatic Aperture Photometry of Extended Sources

    NASA Astrophysics Data System (ADS)

    Marshall, Melissa

    2013-01-01

    Determining the parameters of extended sources, such as galaxies, is a common but time-consuming task. Finding a photometric aperture that encompasses the majority of the flux of a source and identifying and excluding contaminating objects is often done by hand, a lengthy and difficult-to-reproduce process. To make extracting information from large data sets both quick and repeatable, I have developed a program called FITGALAXY, written in IDL. This program uses minimal user input to automatically fit an aperture to, and perform aperture and surface photometry on, an extended source. FITGALAXY also automatically traces the outlines of surface brightness thresholds and creates surface brightness profiles, which can then be used to determine the radial properties of a source. Finally, the program performs automatic masking of contaminating sources. Masks and apertures can be applied to multiple images (regardless of the WCS solution or plate scale) in order to accurately measure the same source at different wavelengths. I present the fluxes, as measured by the program, of a selection of galaxies from the Local Volume Legacy Survey. I then compare these results with the fluxes given by Dale et al. (2009) in order to assess the accuracy of FITGALAXY.
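
    The aperture-photometry step that FITGALAXY automates can be sketched as follows. The `aperture_flux` helper and its parameters are hypothetical illustrations (FITGALAXY itself is written in IDL); the sketch shows only the core idea of summing source flux inside an aperture while excluding masked pixels and subtracting a sky background estimated from an annulus.

```python
import numpy as np

# Hypothetical sketch (not FITGALAXY's code): sum the flux inside a
# circular aperture, excluding masked (contaminating) pixels and
# subtracting a median sky background estimated from an annulus.
def aperture_flux(img, mask, cx, cy, r_ap, r_in, r_out):
    yy, xx = np.indices(img.shape)
    d = np.hypot(xx - cx, yy - cy)
    good = ~mask
    in_ap = (d <= r_ap) & good
    in_ann = (d > r_in) & (d <= r_out) & good
    bkg = np.median(img[in_ann])                  # per-pixel sky level
    return float(img[in_ap].sum() - bkg * in_ap.sum())

img = np.ones((21, 21))                           # flat sky of 1.0 per pixel
img[10, 10] += 100.0                              # toy point source
mask = np.zeros_like(img, dtype=bool)             # nothing masked
print(aperture_flux(img, mask, 10, 10, 3, 5, 8))  # → 100.0
```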

  10. Automatic approach to deriving fuzzy slope positions

    NASA Astrophysics Data System (ADS)

    Zhu, Liang-Jun; Zhu, A.-Xing; Qin, Cheng-Zhi; Liu, Jun-Zhi

    2018-03-01

    Fuzzy characterization of slope positions is important for geographic modeling. Most of the existing fuzzy classification-based methods for fuzzy characterization require extensive user intervention in data preparation and parameter setting, which is tedious and time-consuming. This paper presents an automatic approach to overcoming these limitations in the prototype-based inference method for deriving fuzzy membership value (or similarity) to slope positions. The key contribution is a procedure for finding the typical locations and setting the fuzzy inference parameters for each slope position type. Instead of being determined totally by users in the prototype-based inference method, in the proposed approach the typical locations and fuzzy inference parameters for each slope position type are automatically determined by a rule set based on prior domain knowledge and the frequency distributions of topographic attributes. Furthermore, the preparation of topographic attributes (e.g., slope gradient, curvature, and relative position index) is automated, so the proposed automatic approach has only one necessary input, i.e., the gridded digital elevation model of the study area. All compute-intensive algorithms in the proposed approach were speeded up by parallel computing. Two study cases were provided to demonstrate that this approach can properly, conveniently and quickly derive the fuzzy slope positions.
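
    A minimal sketch of prototype-based fuzzy membership in the spirit of the approach above, assuming a bell-shaped similarity per attribute combined with a minimum operator; the attribute names, prototype values, and widths below are hypothetical (in the paper they are derived automatically from attribute frequency distributions).

```python
import math

# Similarity of a location to a slope-position prototype, per attribute,
# using an assumed Gaussian-shaped membership function.
def similarity(value, proto, width):
    return math.exp(-((value - proto) ** 2) / (2 * width ** 2))

# Overall membership: the minimum similarity across attributes.
def membership(attrs, prototype, widths):
    return min(similarity(attrs[k], prototype[k], widths[k]) for k in attrs)

loc = {"slope": 2.0, "curvature": 0.01}      # hypothetical cell attributes
ridge = {"slope": 0.0, "curvature": 0.05}    # hypothetical ridge prototype
widths = {"slope": 5.0, "curvature": 0.1}    # hypothetical inference widths
print(round(membership(loc, ridge, widths), 3))  # → 0.923
```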

  11. Sampling theory and automated simulations for vertical sections, applied to human brain.

    PubMed

    Cruz-Orive, L M; Gelšvartas, J; Roberts, N

    2014-02-01

    In recent years, there have been substantial developments in both magnetic resonance imaging techniques and automatic image analysis software. The purpose of this paper is to develop stereological image sampling theory (i.e. unbiased sampling rules) that can be used by image analysts for estimating geometric quantities such as surface area and volume, and to illustrate its implementation. The methods will ideally be applied automatically on segmented, properly sampled 2D images - although convenient manual application is always an option - and they are of wide applicability in many disciplines. In particular, the vertical sections design to estimate surface area is described in detail and applied to estimate the area of the pial surface and of the boundary between cortex and underlying white matter (i.e. subcortical surface area). For completeness, cortical volume and mean cortical thickness are also estimated. The aforementioned surfaces were triangulated in 3D with the aid of FreeSurfer software, which provided accurate surface area measures that served as gold standards. Furthermore, software was developed to produce digitized trace curves of the triangulated target surfaces automatically from virtual sections. From such traces, a new method (called the 'lambda method') is presented to estimate surface area automatically. In addition, with the new software, intersections could be counted automatically between the relevant surface traces and a cycloid test grid for the classical design. This capability, together with the aforementioned gold standard, enabled us to thoroughly check the performance and the variability of the different estimators by Monte Carlo simulations for studying the human brain. In particular, new methods are offered to split the total error variance into the orientations, sectioning and cycloid components.
The latter prediction was hitherto unavailable; one is proposed here and checked by way of simulations on a given set of digitized vertical sections with automatically superimposed cycloid grids of three different sizes. Concrete and detailed recommendations are given to implement the methods. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.

  12. A Graph-Based Recovery and Decomposition of Swanson’s Hypothesis using Semantic Predications

    PubMed Central

    Cameron, Delroy; Bodenreider, Olivier; Yalamanchili, Hima; Danh, Tu; Vallabhaneni, Sreeram; Thirunarayan, Krishnaprasad; Sheth, Amit P.; Rindflesch, Thomas C.

    2014-01-01

    Objectives This paper presents a methodology for recovering and decomposing Swanson’s Raynaud Syndrome–Fish Oil Hypothesis semi-automatically. The methodology leverages the semantics of assertions extracted from biomedical literature (called semantic predications) along with structured background knowledge and graph-based algorithms to semi-automatically capture the informative associations originally discovered manually by Swanson. Demonstrating that Swanson’s manually intensive techniques can be undertaken semi-automatically paves the way for fully automatic semantics-based hypothesis generation from scientific literature. Methods Semantic predications obtained from biomedical literature allow the construction of labeled directed graphs which contain various associations among concepts from the literature. By aggregating such associations into informative subgraphs, some of the relevant details originally articulated by Swanson have been uncovered. However, by leveraging background knowledge to bridge important knowledge gaps in the literature, a methodology for semi-automatically capturing the detailed associations originally explicated in natural language by Swanson has been developed. Results Our methodology not only recovered the 3 associations commonly recognized as Swanson’s Hypothesis, but also decomposed them into an additional 16 detailed associations, formulated as chains of semantic predications. Altogether, 14 out of the 19 associations that can be attributed to Swanson were retrieved using our approach. To the best of our knowledge, such an in-depth recovery and decomposition of Swanson’s Hypothesis has never been attempted. Conclusion In this work, therefore, we presented a methodology for semi-automatically recovering and decomposing Swanson’s RS-DFO Hypothesis using semantic representations and graph algorithms. Our methodology provides new insights into potential prerequisites for semantics-driven Literature-Based Discovery (LBD).
These suggest that three critical aspects of LBD include: 1) the need for more expressive representations beyond Swanson’s ABC model; 2) an ability to accurately extract semantic information from text; and 3) the semantic integration of scientific literature with structured background knowledge. PMID:23026233

  13. Discovering relevance knowledge in data: a growing cell structures approach.

    PubMed

    Azuaje, F; Dubitzky, W; Black, N; Adamson, K

    2000-01-01

    Both information retrieval and case-based reasoning systems rely on effective and efficient selection of relevant data. Typically, relevance in such systems is approximated by similarity or indexing models. However, the definition of what makes data items similar or how they should be indexed is often nontrivial and time-consuming. Based on growing cell structure artificial neural networks, this paper presents a method that automatically constructs a case retrieval model from existing data. Within the case-based reasoning (CBR) framework, the method is evaluated for two medical prognosis tasks, namely, colorectal cancer survival and coronary heart disease risk prognosis. The results of the experiments suggest that the proposed method is effective and robust. To gain a deeper insight and understanding of the underlying mechanisms of the proposed model, a detailed empirical analysis of the model's structural and behavioral properties is also provided.

  14. [Development of a Compared Software for Automatically Generated DVH in Eclipse TPS].

    PubMed

    Xie, Zhao; Luo, Kelin; Zou, Lian; Hu, Jinyou

    2016-03-01

    This study aims to automatically calculate the dose-volume histogram (DVH) for a treatment plan and compare it with the requirements of the doctor's prescription. The scripting language AutoHotkey and the programming language C# were used to develop comparison software for automatically generated DVHs in Eclipse TPS. The software, named Show Dose Volume Histogram (ShowDVH), is composed of prescription document generation, DVH operation functions, software visualization, and DVH comparison report generation. Ten cases of different cancers were selected; in Eclipse TPS 11.0, ShowDVH could not only automatically generate DVH reports but also accurately determine whether treatment plans met the requirements of the doctor's prescriptions, and its reports gave direction for setting the optimization parameters of intensity-modulated radiation therapy. ShowDVH is a user-friendly and powerful tool that quickly generates DVH comparison reports in Eclipse TPS 11.0. With its help, plan design time is greatly reduced and the working efficiency of radiation therapy physicists is improved.
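
    The underlying check can be sketched as below, assuming cumulative-DVH metrics of the common VxGy form (percent volume at or above a dose) and Dx% form (dose covering the hottest fraction of volume); this is an illustration of the concept, not ShowDVH's actual implementation.

```python
import numpy as np

# Illustration (not ShowDVH's code): cumulative-DVH metrics computed from a
# structure's voxel doses, used to test prescription-style constraints.
def v_at_dose(doses, d):            # percent of volume receiving >= d Gy
    return 100.0 * np.mean(doses >= d)

def d_at_volume(doses, v_pct):      # dose covering the hottest v_pct of volume
    return float(np.percentile(doses, 100.0 - v_pct))

doses = np.array([58, 60, 61, 62, 63, 64, 65, 66, 67, 70], float)  # toy voxels
print(round(v_at_dose(doses, 62), 1))    # → 70.0  (e.g. check "V62Gy <= 75%")
print(round(d_at_volume(doses, 95), 1))  # → 58.9  (e.g. check "D95% >= 57 Gy")
```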

  15. Theoretical Study of Various Airplane Motions After Initial Disturbance

    NASA Technical Reports Server (NTRS)

    Haus, FR

    1938-01-01

    The present investigation may be considered as preliminary to the study of automatic stabilizers. We have sought to determine first how an airplane of average characteristics reacts against the principal disturbances it may encounter. Without entering into the general study of automatic stabilizers, the present work suggests the idea of a stabilizer whose sensitive member would be a wind vane or pressure plate. The elements considered as variable were the coefficients of static stability - that is, the derivatives of the coefficients of the moments with respect to the angles of attack and of yaw; these angles may be determined by the vanes.

  16. Method of automatic measurement and focus of an electron beam and apparatus therefore

    DOEpatents

    Giedt, W.H.; Campiotti, R.

    1996-01-09

    An electron beam focusing system, including a plural slit-type Faraday beam trap, for measuring the diameter of an electron beam and automatically focusing the beam for welding is disclosed. Beam size is determined from profiles of the current measured as the beam is swept over at least two narrow slits of the beam trap. An automated procedure changes the focus coil current until the focal point location is just below a workpiece surface. A parabolic equation is fitted to the calculated beam sizes from which optimal focus coil current and optimal beam diameter are determined. 12 figs.

  17. Method of automatic measurement and focus of an electron beam and apparatus therefor

    DOEpatents

    Giedt, Warren H.; Campiotti, Richard

    1996-01-01

    An electron beam focusing system, including a plural slit-type Faraday beam trap, for measuring the diameter of an electron beam and automatically focusing the beam for welding. Beam size is determined from profiles of the current measured as the beam is swept over at least two narrow slits of the beam trap. An automated procedure changes the focus coil current until the focal point location is just below a workpiece surface. A parabolic equation is fitted to the calculated beam sizes from which optimal focus coil current and optimal beam diameter are determined.
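
    The parabolic-fit step described above can be sketched as follows: beam size versus focus-coil current is approximated by a parabola, whose vertex gives the optimal current and minimum beam diameter. The current and beam-size values below are illustrative, not measured data.

```python
import numpy as np

# Illustrative data: beam size measured at several focus-coil currents.
current = np.array([5.0, 5.5, 6.0, 6.5, 7.0])    # focus-coil current (A)
size = np.array([1.30, 0.85, 0.60, 0.85, 1.30])  # measured beam size (mm)

a, b, c = np.polyfit(current, size, 2)           # size ≈ a*I² + b*I + c
i_opt = -b / (2 * a)                             # vertex: optimal coil current
d_min = np.polyval([a, b, c], i_opt)             # minimum beam diameter
print(round(i_opt, 3), round(d_min, 3))          # → 6.0 0.651
```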

  18. Automatic discovery of optimal classes

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John; Freeman, Don; Self, Matthew

    1986-01-01

    A criterion, based on Bayes' theorem, is described that defines the optimal set of classes (a classification) for a given set of examples. This criterion is transformed into an equivalent minimum message length criterion with an intuitive information interpretation. This criterion does not require that the number of classes be specified in advance; the number is determined by the data. The minimum message length criterion includes the message length required to describe the classes, so there is a built-in bias against adding new classes unless they lead to a reduction in the message length required to describe the data. Unfortunately, the search space of possible classifications is too large to search exhaustively, so heuristic search methods, such as simulated annealing, are applied. Tutored learning and probabilistic prediction in particular cases are an important indirect result of optimal class discovery. Extensions to the basic class induction program include the ability to combine category and real-value data, hierarchical classes, independent classifications, and deciding for each class which attributes are relevant.
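
    A toy two-part message-length criterion in the spirit described above (a simplified stand-in, not the actual criterion from the paper): the cost of stating k one-dimensional Gaussian classes plus the cost of encoding the data given those classes, so an extra class is accepted only when it shortens the total message. Gaussian densities stand in for discretized probabilities here.

```python
import math

# Toy two-part message length: model cost (per class) + data cost (bits).
def message_length(data, classes):
    model_cost = len(classes) * math.log2(len(data))   # crude per-class cost
    data_cost = 0.0
    for x in data:
        # Encode each point under its nearest class (hard assignment).
        mu, sigma = min(classes, key=lambda cl: abs(x - cl[0]))
        p = math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        data_cost += -math.log2(p)
    return model_cost + data_cost

data = [1.0, 1.1, 0.9, 5.0, 5.1, 4.9]       # two obvious clusters
one_class = [(3.0, 2.0)]                    # (mean, sigma) hypotheses
two_classes = [(1.0, 0.5), (5.0, 0.5)]
print(round(message_length(data, one_class), 1))
print(round(message_length(data, two_classes), 1))   # the shorter message wins
```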

  19. An Envelope Based Feedback Control System for Earthquake Early Warning: Reality Check Algorithm

    NASA Astrophysics Data System (ADS)

    Heaton, T. H.; Karakus, G.; Beck, J. L.

    2016-12-01

    Earthquake early warning systems are, in general, designed as open-loop control systems, in the sense that the output, i.e., the warning messages, depends only on the input, i.e., recorded ground motions, up to the moment the message is issued in real time. We propose an algorithm, called the Reality Check Algorithm (RCA), which assesses the accuracy of issued warning messages and then feeds the outcome of the assessment back into the system; the system would then modify its messages if necessary. That is, we propose to convert earthquake early warning systems into feedback control systems by integrating them with RCA. RCA works by continuously monitoring the observed ground-motion envelopes and comparing them to the envelopes predicted by the Virtual Seismologist (Cua 2005). The accuracy of the system's magnitude and location (both spatial and temporal) estimates is assessed separately by probabilistic classification models, which are trained with a sparse Bayesian learning technique using an Automatic Relevance Determination (ARD) prior.

  20. Automatic segmentation in three-dimensional analysis of fibrovascular pigmentepithelial detachment using high-definition optical coherence tomography.

    PubMed

    Ahlers, C; Simader, C; Geitzenauer, W; Stock, G; Stetson, P; Dastmalchi, S; Schmidt-Erfurth, U

    2008-02-01

    A limited number of scans compromise conventional optical coherence tomography (OCT) to track chorioretinal disease in its full extension. Failures in edge-detection algorithms falsify the results of retinal mapping even further. High-definition-OCT (HD-OCT) is based on raster scanning and was used to visualise the localisation and volume of intra- and sub-pigment-epithelial (RPE) changes in fibrovascular pigment epithelial detachments (fPED). Two different scanning patterns were evaluated. 22 eyes with fPED were imaged using a frequency-domain, high-speed prototype of the Cirrus HD-OCT. The axial resolution was 6 mum, and the scanning speed was 25 kA scans/s. Two different scanning patterns covering an area of 6 x 6 mm in the macular retina were compared. Three-dimensional topographic reconstructions and volume calculations were performed using MATLAB-based automatic segmentation software. Detailed information about layer-specific distribution of fluid accumulation and volumetric measurements can be obtained for retinal- and sub-RPE volumes. Both raster scans show a high correlation (p<0.01; R2>0.89) of measured values, that is PED volume/area, retinal volume and mean retinal thickness. Quality control of the automatic segmentation revealed reasonable results in over 90% of the examinations. Automatic segmentation allows for detailed quantitative and topographic analysis of the RPE and the overlying retina. In fPED, the 128 x 512 scanning-pattern shows mild advantages when compared with the 256 x 256 scan. Together with the ability for automatic segmentation, HD-OCT clearly improves the clinical monitoring of chorioretinal disease by adding relevant new parameters. HD-OCT is likely capable of enhancing the understanding of pathophysiology and benefits of treatment for current anti-CNV strategies in future.

  1. Automatic Evidence Retrieval for Systematic Reviews

    PubMed Central

    Choong, Miew Keen; Galgani, Filippo; Dunn, Adam G

    2014-01-01

    Background Snowballing involves recursively pursuing relevant references cited in the retrieved literature and adding them to the search results. Snowballing is an alternative approach to discover additional evidence that was not retrieved through conventional search. Snowballing’s effectiveness makes it best practice in systematic reviews despite being time-consuming and tedious. Objective Our goal was to evaluate an automatic method for citation snowballing’s capacity to identify and retrieve the full text and/or abstracts of cited articles. Methods Using 20 review articles that contained 949 citations to journal or conference articles, we manually searched Microsoft Academic Search (MAS) and identified 78.0% (740/949) of the cited articles that were present in the database. We compared the performance of the automatic citation snowballing method against the results of this manual search, measuring precision, recall, and F1 score. Results The automatic method was able to correctly identify 633 (as proportion of included citations: recall=66.7%, F1 score=79.3%; as proportion of citations in MAS: recall=85.5%, F1 score=91.2%) of citations with high precision (97.7%), and retrieved the full text or abstract for 490 (recall=82.9%, precision=92.1%, F1 score=87.3%) of the 633 correctly retrieved citations. Conclusions The proposed method for automatic citation snowballing is accurate and is capable of obtaining the full texts or abstracts for a substantial proportion of the scholarly citations in review articles. By automating the process of citation snowballing, it may be possible to reduce the time and effort of common evidence surveillance tasks such as keeping trial registries up to date and conducting systematic reviews. PMID:25274020
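
    The reported metrics follow directly from the counts cited in the abstract (949 cited articles overall, 740 present in MAS, 633 correctly identified, reported precision 97.7%); the arithmetic can be checked as:

```python
# Recomputing the abstract's headline recall and F1 figures from its counts.
def f1(p, r):
    return 2 * p * r / (p + r)

precision = 0.977
recall_all = 633 / 949            # recall vs. all included citations
recall_mas = 633 / 740            # recall vs. citations present in MAS

print(round(recall_all * 100, 1))                 # → 66.7
print(round(f1(precision, recall_all) * 100, 1))  # → 79.3
print(round(recall_mas * 100, 1))                 # → 85.5
print(round(f1(precision, recall_mas) * 100, 1))  # → 91.2
```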

  2. Determinants of wood dust exposure in the Danish furniture industry.

    PubMed

    Mikkelsen, Anders B; Schlunssen, Vivi; Sigsgaard, Torben; Schaumburg, Inger

    2002-11-01

    This paper investigates the relation between wood dust exposure in the furniture industry and occupational hygiene variables. During the winter 1997-98, 54 factories were visited and 2362 personal, passive inhalable dust samples were obtained; the geometric mean was 0.95 mg/m³ and the geometric standard deviation was 2.08. In a first measuring round 1685 dust concentrations were obtained. For some of the workers repeated measurements were carried out 1 week (351) and 2 weeks (326) after the first measurement. Hygiene variables like job, exhaust ventilation, cleaning procedures, etc., were documented. A multivariate analysis based on mixed effects models was used with hygiene variables being fixed effects and worker, machine, department and factory being random effects. A modified stepwise strategy of model making was adopted taking into account the hierarchically structured variables and making possible the exclusion of non-influential random as well as fixed effects. For woodworking, the following determinants of exposure increase the dust concentration: manual and automatic sanding and use of compressed air with fully automatic and semi-automatic machines and for cleaning of work pieces. Decreased dust exposure resulted from the use of compressed air with manual machines, working at fully automatic or semi-automatic machines, functioning exhaust ventilation, work on the night shift, daily cleaning of rooms, cleaning of work pieces with a brush, vacuum cleaning of machines, supplementary fresh air intake and safety representative elected within the last 2 yr. For handling and assembling, increased exposure results from work at automatic machines and presence of wood dust on the workpieces. Work on the evening shift, supplementary fresh air intake, work in a chair factory and special cleaning staff produced decreased exposure to wood dust. The implications of the results for the prevention of wood dust exposure are discussed.

  3. An Automatic Method for Geometric Segmentation of Masonry Arch Bridges for Structural Engineering Purposes

    NASA Astrophysics Data System (ADS)

    Riveiro, B.; DeJong, M.; Conde, B.

    2016-06-01

    Despite the tremendous advantages of the laser scanning technology for the geometric characterization of built constructions, there are important limitations preventing more widespread implementation in the structural engineering domain. Even though the technology provides extensive and accurate information to perform structural assessment and health monitoring, many people are resistant to the technology due to the processing times involved. Thus, new methods that can automatically process LiDAR data and subsequently provide an automatic and organized interpretation are required. This paper presents a new method for fully automated point cloud segmentation of masonry arch bridges. The method efficiently creates segmented, spatially related and organized point clouds, which each contain the relevant geometric data for a particular component (pier, arch, spandrel wall, etc.) of the structure. The segmentation procedure comprises a heuristic approach for the separation of different vertical walls, and later image processing tools adapted to voxel structures allows the efficient segmentation of the main structural elements of the bridge. The proposed methodology provides the essential processed data required for structural assessment of masonry arch bridges based on geometric anomalies. The method is validated using a representative sample of masonry arch bridges in Spain.

  4. Effects of social and affective content on exogenous attention as revealed by event-related potentials.

    PubMed

    Kosonogov, Vladimir; Martinez-Selva, Jose M; Carrillo-Verdejo, Eduvigis; Torrente, Ginesa; Carretié, Luis; Sanchez-Navarro, Juan P

    2018-06-18

    The social content of affective stimuli has been proposed as having an influence on cognitive processing and behaviour. This research was aimed, therefore, at studying whether automatic exogenous attention demanded by affective pictures was related to their social value. We hypothesised that affective social pictures would capture attention to a greater extent than non-social affective stimuli. For this purpose, we recorded event-related potentials in a sample of 24 participants engaged in a digit categorisation task. Distracters were affective pictures varying in social content, in addition to affective valence and arousal, which appeared in the background during the task. Our data revealed that pictures depicting high social content captured greater automatic attention than other pictures, as reflected by the greater amplitude and shorter latency of anterior P2, and anterior and posterior N2 components of the ERPs. In addition, social content also provoked greater allocation of processing resources as manifested by P3 amplitude, likely related to the high arousal they elicited. These results extend data from previous research by showing the relevance of the social value of the affective stimuli on automatic attentional processing.

  5. Comparative analysis of image classification methods for automatic diagnosis of ophthalmic images

    NASA Astrophysics Data System (ADS)

    Wang, Liming; Zhang, Kai; Liu, Xiyang; Long, Erping; Jiang, Jiewei; An, Yingying; Zhang, Jia; Liu, Zhenzhen; Lin, Zhuoling; Li, Xiaoyan; Chen, Jingjing; Cao, Qianzhong; Li, Jing; Wu, Xiaohang; Wang, Dongni; Li, Wangting; Lin, Haotian

    2017-01-01

    There are many image classification methods, but it remains unclear which are most helpful for analyzing and intelligently identifying ophthalmic images. We select representative slit-lamp images, which illustrate the complexity of ocular images, as research material to compare image classification algorithms for diagnosing ophthalmic diseases. To facilitate this study, several feature extraction algorithms and classifiers are combined to automatically diagnose pediatric cataract on the same dataset, and their performance is compared using multiple criteria. This comparative study reveals the general characteristics of the existing methods for automatic identification of ophthalmic images and provides new insights into their strengths and shortcomings. The most relevant methods (local binary pattern + SVM, wavelet transformation + SVM) achieve an average accuracy of 87% and can be adopted in specific situations to aid doctors in preliminary disease screening. Furthermore, methods requiring fewer computational resources and less time could be applied in remote places or on mobile devices to assist individuals in monitoring their condition. In addition, this work should help accelerate the development of innovative approaches and their application to assist doctors in diagnosing ophthalmic disease.
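
    One of the feature extractors named above, the local binary pattern, is simple to sketch. The following is a generic 8-neighbour LBP code for a single pixel on a plain list-of-lists grayscale image; it is a textbook illustration, not the study's implementation, which pairs such features with an SVM classifier.

```python
def lbp_code(img, r, c):
    """8-neighbour local binary pattern code for pixel (r, c)."""
    center = img[r][c]
    # Neighbours visited in a fixed order; each one >= center sets one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

img = [[10, 20, 30],
       [40, 25, 35],
       [15, 22, 50]]
code = lbp_code(img, 1, 1)
```

    A histogram of such codes over an image region is what would typically be fed to the SVM.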

  6. Supporting the education evidence portal via text mining

    PubMed Central

    Ananiadou, Sophia; Thompson, Paul; Thomas, James; Mu, Tingting; Oliver, Sandy; Rickinson, Mark; Sasaki, Yutaka; Weissenbacher, Davy; McNaught, John

    2010-01-01

    The UK Education Evidence Portal (eep) provides a single, searchable, point of access to the contents of the websites of 33 organizations relating to education, with the aim of revolutionizing work practices for the education community. Use of the portal alleviates the need to spend time searching multiple resources to find relevant information. However, the combined content of the websites of interest is still very large (over 500 000 documents and growing). This means that searches using the portal can produce very large numbers of hits. As users often have limited time, they would benefit from enhanced methods of performing searches and viewing results, allowing them to drill down to information of interest more efficiently, without having to sift through potentially long lists of irrelevant documents. The Joint Information Systems Committee (JISC)-funded ASSIST project has produced a prototype web interface to demonstrate the applicability of integrating a number of text-mining tools and methods into the eep, to facilitate an enhanced searching, browsing and document-viewing experience. New features include automatic classification of documents according to a taxonomy, automatic clustering of search results according to similar document content, and automatic identification and highlighting of key terms within documents. PMID:20643679

  7. Automated detection of diabetic retinopathy in retinal images.

    PubMed

    Valverde, Carmen; Garcia, Maria; Hornero, Roberto; Lopez-Galvez, Maria I

    2016-01-01

    Diabetic retinopathy (DR) is a disease with an increasing prevalence and the main cause of blindness among the working-age population. The risk of severe vision loss can be significantly reduced by timely diagnosis and treatment. Systematic screening for DR has been identified as a cost-effective way to save health services resources. Automatic retinal image analysis is emerging as an important screening tool for early DR detection, which can reduce the workload associated with manual grading as well as save diagnosis costs and time. Many research efforts in recent years have been devoted to developing automatic tools to help in the detection and evaluation of DR lesions. However, there is a large variability in the databases and evaluation criteria used in the literature, which hampers a direct comparison of the different studies. This work is aimed at summarizing the results of the available algorithms for the detection and classification of DR pathology. A detailed literature search was conducted using PubMed. Relevant studies from the last 10 years were selected, scrutinized, and included in the review. Furthermore, we give an overview of the available commercial software for automatic retinal image analysis.

  8. An Automatic Prediction of Epileptic Seizures Using Cloud Computing and Wireless Sensor Networks.

    PubMed

    Sareen, Sanjay; Sood, Sandeep K; Gupta, Sunil Kumar

    2016-11-01

    Epilepsy is one of the most common neurological disorders, characterized by the spontaneous and unforeseeable occurrence of seizures. Automatic prediction of seizures can protect patients from accidents and save lives. In this article, we propose a mobile-based framework that automatically predicts seizures using the information contained in electroencephalography (EEG) signals. Wireless sensor technology is used to capture the EEG signals of patients, and cloud-based services collect and analyze the EEG data from the patient's mobile phone. Features are extracted from the EEG signal using the fast Walsh-Hadamard transform (FWHT), and higher-order spectral analysis (HOSA) is applied to the FWHT coefficients to select the feature set relevant to the normal, preictal, and ictal states of seizure. We subsequently use the selected features as input to a k-means classifier to detect epileptic seizure states in a reasonable time. The performance of the proposed model is tested on the Amazon EC2 cloud and compared in terms of execution time and accuracy. The findings show that with the selected HOS-based features, we were able to achieve a classification accuracy of 94.6%.
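
    The fast Walsh-Hadamard transform at the heart of the feature-extraction step can be sketched in a few lines. This is the generic textbook in-place FWHT for a power-of-two-length signal, not the authors' implementation; in the framework above, the resulting coefficients would then feed the HOSA feature selection and the k-means classifier.

```python
def fwht(a):
    """Fast Walsh-Hadamard transform of a sequence whose length is a power of two."""
    a = list(a)  # work on a copy
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                # Butterfly: sum and difference of paired elements.
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

coeffs = fwht([1, 0, 1, 0, 0, 1, 1, 0])
```

    A useful sanity check on this form of the transform: applying it twice returns the input scaled by the signal length.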

  9. Alexithymia is associated with attenuated automatic brain response to facial emotion in clinical depression.

    PubMed

    Suslow, Thomas; Kugel, Harald; Rufer, Michael; Redlich, Ronny; Dohm, Katharina; Grotegerd, Dominik; Zaremba, Dario; Dannlowski, Udo

    2016-02-04

    Alexithymia is a clinically relevant personality trait related to difficulties in recognizing and describing emotions. Previous studies examining the neural correlates of alexithymia have shown mainly decreased response of several brain areas during emotion processing in healthy samples and patients suffering from autism or post-traumatic stress disorder. In the present study, we examined the effect of alexithymia on automatic brain reactivity to negative and positive facial expressions in clinical depression. Brain activation in response to sad, happy, neutral, and no facial expression (presented for 33 ms and masked by neutral faces) was measured by functional magnetic resonance imaging at 3 T in 26 alexithymic and 26 non-alexithymic patients with major depression. Alexithymic patients manifested less activation in response to masked sad and happy (compared to neutral) faces in right frontal regions and right caudate nuclei than non-alexithymic patients. Our neuroimaging study provides evidence that the personality trait alexithymia has a modulating effect on automatic emotion processing in clinical depression. Our findings support the idea that alexithymia could be associated with functional deficits of the right hemisphere. Future research on the neural substrates of emotion processing in depression should assess and control alexithymia in their analyses.

  10. Automatic Thread-Level Parallelization in the Chombo AMR Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christen, Matthias; Keen, Noel; Ligocki, Terry

    2011-05-26

    The increasing on-chip parallelism has substantial implications for HPC applications. Currently, hybrid programming models (typically MPI+OpenMP) are employed to map software to the hardware in order to leverage its architectural features. In this paper, we present an approach that automatically introduces thread-level parallelism into Chombo, a parallel adaptive mesh refinement framework for finite-difference-type PDE solvers. In Chombo, core algorithms are specified in ChomboFortran, a macro-language extension to F77 that is part of the Chombo framework. This domain-specific language serves as a ready-made target for automatic migration of the large number of existing algorithms into a hybrid MPI+OpenMP implementation. It also provides access to an auto-tuning methodology that enables tuning certain aspects of an algorithm to hardware characteristics. Performance measurements are presented for a few of the most relevant kernels of a specific application benchmark using this technique, as well as benchmark results for the entire application. The kernel benchmarks show that, using auto-tuning, up to a factor of 11 in performance was gained with 4 threads relative to the serial reference implementation.

  11. Automatic Lamp and Fan Control Based on Microcontroller

    NASA Astrophysics Data System (ADS)

    Widyaningrum, V. T.; Pramudita, Y. D.

    2018-01-01

    In general, automation can be described as a process that follows pre-determined sequential steps with little or no human exertion. Automation is achieved using various sensors suited to observing the processes involved, together with actuators and a range of techniques and devices. In this research, the automation system developed comprises an automatic lamp and an automatic fan for a smart home. Both systems are controlled by an Arduino Mega 2560 microcontroller, which obtains values of physical conditions through the sensors connected to it. The automatic lamp system uses an LDR (Light Dependent Resistor) sensor to detect light, while the automatic fan system uses a DHT11 sensor to measure temperature. In the tests performed, both the lamp and the fan worked properly. The lamp turns on automatically when the surroundings begin to darken, and turns off automatically when the light brightens again. It can also be concluded that the readings of an LDR sensor placed outside the room differ from those of an LDR sensor placed inside, because the light reaching the indoor sensor is blocked by the walls of the house or by other objects. The fan turns on automatically when the temperature is greater than 25°C, and its speed can be adjusted; it turns off automatically when the temperature is less than or equal to 25°C.
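
    The decision logic described above reduces to two threshold rules. The sketch below is a Python simulation of that logic only (the actual system runs as Arduino code on the Mega 2560); the 25°C fan threshold comes from the study, while the light threshold is a hypothetical raw ADC value.

```python
TEMP_THRESHOLD_C = 25.0  # fan threshold reported in the study
LIGHT_THRESHOLD = 300    # hypothetical ADC reading below which it is "dark"

def lamp_state(ldr_reading):
    """Lamp is on when the LDR reading indicates darkness."""
    return ldr_reading < LIGHT_THRESHOLD

def fan_state(temp_c):
    """Fan is on above 25 degrees C, off at or below it, as in the tests reported."""
    return temp_c > TEMP_THRESHOLD_C
```

    On the real device these predicates would drive a relay for the lamp and a PWM output for the variable-speed fan.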

  12. Biased relevance filtering in the auditory system: A test of confidence-weighted first-impressions.

    PubMed

    Mullens, D; Winkler, I; Damaso, K; Heathcote, A; Whitson, L; Provost, A; Todd, J

    2016-03-01

    Although first-impressions are known to impact decision-making and to have prolonged effects on reasoning, it is less well known that the same type of rapidly formed assumptions can explain biases in automatic relevance filtering outside of deliberate behavior. This paper features two studies in which participants have been asked to ignore sequences of sound while focusing attention on a silent movie. The sequences consisted of blocks, each with a high-probability repetition interrupted by rare acoustic deviations (i.e., a sound of different pitch or duration). The probabilities of the two different sounds alternated across the concatenated blocks within the sequence (i.e., short-to-long and long-to-short). The sound probabilities are rapidly and automatically learned for each block and a perceptual inference is formed predicting the most likely characteristics of the upcoming sound. Deviations elicit a prediction-error signal known as mismatch negativity (MMN). Computational models of MMN generally assume that its elicitation is governed by transition statistics that define what sound attributes are most likely to follow the current sound. MMN amplitude reflects prediction confidence, which is derived from the stability of the current transition statistics. However, our prior research showed that MMN amplitude is modulated by a strong first-impression bias that outweighs transition statistics. Here we test the hypothesis that this bias can be attributed to assumptions about predictable vs. unpredictable nature of each tone within the first encountered context, which is weighted by the stability of that context. The results of Study 1 show that this bias is initially prevented if there is no 1:1 mapping between sound attributes and probability, but it returns once the auditory system determines which properties provide the highest predictive value. 
The results of Study 2 show that confidence in the first-impression bias drops if assumptions about the temporal stability of the transition-statistics are violated. Both studies provide compelling evidence that the auditory system extrapolates patterns on multiple timescales to adjust its response to prediction-errors, while profoundly distorting the effects of transition-statistics by the assumptions formed on the basis of first-impressions. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Enhancing Comparative Effectiveness Research With Automated Pediatric Pneumonia Detection in a Multi-Institutional Clinical Repository: A PHIS+ Pilot Study.

    PubMed

    Meystre, Stephane; Gouripeddi, Ramkiran; Tieder, Joel; Simmons, Jeffrey; Srivastava, Rajendu; Shah, Samir

    2017-05-15

    Community-acquired pneumonia is a leading cause of pediatric morbidity. Administrative data are often used to conduct comparative effectiveness research (CER) with sufficient sample sizes to enhance detection of important outcomes. However, such studies are prone to misclassification errors because of the variable accuracy of discharge diagnosis codes. The aim of this study was to develop an automated, scalable, and accurate method to determine the presence or absence of pneumonia in children using chest imaging reports. The multi-institutional PHIS+ clinical repository was developed to support pediatric CER by expanding an administrative database of children's hospitals with detailed clinical data. To find patients with bacterial pneumonia more accurately and at scale, we developed a Natural Language Processing (NLP) application to extract relevant information from chest diagnostic imaging reports. Domain experts established a reference standard by manually annotating 282 reports to train and then test the NLP application. Findings of pleural effusion, pulmonary infiltrate, and pneumonia were automatically extracted from the reports and then used to automatically classify whether a report was consistent with bacterial pneumonia. Compared with the annotated diagnostic imaging reports reference standard, the most accurate implementation of machine learning algorithms in our NLP application extracted relevant findings with a sensitivity of .939 and a positive predictive value of .925, and classified reports with a sensitivity of .71, a positive predictive value of .86, and a specificity of .962. Compared with each of the domain experts manually annotating these reports, the NLP application achieved significantly higher sensitivity (.71 vs .527) and similar positive predictive value and specificity. 
NLP-based pneumonia information extraction of pediatric diagnostic imaging reports performed better than domain experts in this pilot study. NLP is an efficient method to extract information from a large collection of imaging reports to facilitate CER. ©Stephane Meystre, Ramkiran Gouripeddi, Joel Tieder, Jeffrey Simmons, Rajendu Srivastava, Samir Shah. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 15.05.2017.
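
    The figures reported above are standard confusion-matrix metrics. A minimal helper, with hypothetical counts, shows how sensitivity, positive predictive value, and specificity are derived:

```python
def classifier_metrics(tp, fp, tn, fn):
    """Sensitivity, positive predictive value, and specificity from
    confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of true positives found
        "ppv": tp / (tp + fp),          # fraction of flagged reports that are correct
        "specificity": tn / (tn + fp),  # fraction of true negatives kept
    }

# Counts here are hypothetical, chosen only to illustrate the arithmetic.
m = classifier_metrics(tp=93, fp=7, tn=160, fn=22)
```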

  14. Volumetric breast density affects performance of digital screening mammography.

    PubMed

    Wanders, Johanna O P; Holland, Katharina; Veldhuis, Wouter B; Mann, Ritse M; Pijnappel, Ruud M; Peeters, Petra H M; van Gils, Carla H; Karssemeijer, Nico

    2017-02-01

    To determine to what extent automatically measured volumetric mammographic density influences screening performance when using digital mammography (DM). We collected a consecutive series of 111,898 DM examinations (2003-2011) from one screening unit of the Dutch biennial screening program (age 50-75 years). Volumetric mammographic density was automatically assessed using Volpara. We determined screening performance measures for four density categories comparable to the American College of Radiology (ACR) breast density categories. Of all the examinations, 21.6% were categorized as density category 1 ('almost entirely fatty') and 41.5, 28.9, and 8.0% as categories 2-4, respectively (category 4 being 'extremely dense'). We identified 667 screen-detected and 234 interval cancers. Interval cancer rates were 0.7, 1.9, 2.9, and 4.4‰ and false positive rates were 11.2, 15.1, 18.2, and 23.8‰ for categories 1-4, respectively (both p-trend < 0.001). The screening sensitivity, calculated as the proportion of screen-detected among the total of screen-detected and interval tumors, was lower in higher density categories: 85.7, 77.6, 69.5, and 61.0% for categories 1-4, respectively (p-trend < 0.001). Volumetric mammographic density, automatically measured on digital mammograms, impacts screening performance measures along the same patterns as established with ACR breast density categories. Since measuring breast density fully automatically has much higher reproducibility than visual assessment, this automatic method could help with implementing density-based supplemental screening.
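
    The sensitivity definition used in the abstract, screen-detected cancers over all cancers found, is easy to make concrete; the counts below are the overall ones reported (667 screen-detected, 234 interval):

```python
def screening_sensitivity(screen_detected, interval):
    """Proportion of screen-detected cancers among all cancers found
    (screen-detected plus interval cancers)."""
    return screen_detected / (screen_detected + interval)

# Overall counts reported in the abstract.
overall = screening_sensitivity(667, 234)
```

    The per-category percentages quoted (85.7% down to 61.0%) come from the same formula applied within each density category.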

  15. Automatic Hypocenter Determination Method in JMA Catalog and its Application

    NASA Astrophysics Data System (ADS)

    Tamaribuchi, K.

    2017-12-01

    The number of detectable earthquakes around Japan has increased with the development of the high-sensitivity seismic observation network. After the 2011 Tohoku-oki earthquake, the number of detectable earthquakes increased dramatically due to aftershocks and induced earthquakes, making manual determination of all hypocenters impossible. The Japan Meteorological Agency (JMA), which produces the earthquake catalog in Japan, has developed a new automatic hypocenter determination method and started its operation on April 1, 2016. This method (named the PF method, for Phase combination Forward search method) can determine the hypocenters of earthquakes that occur simultaneously by searching for the optimal combination of P- and S-wave arrival times and the maximum amplitudes using a Bayesian estimation technique. In the 2016 Kumamoto earthquake sequence, we automatically detected about 70,000 aftershocks during the period from April 14 to the end of May, and the method contributed to real-time monitoring of the seismic activity. Furthermore, the method can also be applied to Earthquake Early Warning (EEW). This application, called the IPF method, has been used as the hypocenter determination method of the EEW system at JMA since December 2016. Further development of the method can contribute not only to speeding up catalog production but also to improving the reliability of early warnings.
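
    The kind of search involved can be illustrated, in toy form, as a grid search over candidate hypocenters that minimises arrival-time residuals. This sketch assumes a uniform-velocity 2-D half-space and synthetic P arrivals; it stands in for, and does not reproduce, the Bayesian phase-combination search the abstract describes.

```python
import itertools

V_P = 6.0  # assumed P-wave speed in km/s (uniform toy model)

def grid_search_hypocenter(stations, arrivals, xs, ys, origin_times):
    """Brute-force (x, y, t0) minimising squared travel-time residuals."""
    best, best_model = None, None
    for x, y, t0 in itertools.product(xs, ys, origin_times):
        misfit = 0.0
        for (sx, sy), t_obs in zip(stations, arrivals):
            t_pred = t0 + ((x - sx) ** 2 + (y - sy) ** 2) ** 0.5 / V_P
            misfit += (t_obs - t_pred) ** 2
        if best is None or misfit < best:
            best, best_model = misfit, (x, y, t0)
    return best_model

stations = [(0.0, 0.0), (30.0, 0.0), (0.0, 30.0), (30.0, 30.0)]
# Synthetic arrivals from a source at (10, 20) km with origin time 1.0 s.
arrivals = [1.0 + ((10.0 - sx) ** 2 + (20.0 - sy) ** 2) ** 0.5 / V_P
            for sx, sy in stations]
grid = [i * 2.0 for i in range(16)]  # 0..30 km in 2 km steps
est = grid_search_hypocenter(stations, arrivals, grid, grid,
                             [0.0, 0.5, 1.0, 1.5])
```

    Real implementations replace this exhaustive search with probabilistic inference and must also associate phases with events, which is the hard part the PF method addresses.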

  16. Automatic welding of stainless steel tubing

    NASA Technical Reports Server (NTRS)

    Clautice, W. E.

    1978-01-01

    The use of automatic welding for making girth welds in stainless steel tubing was investigated as well as the reduction in fabrication costs resulting from the elimination of radiographic inspection. Test methodology, materials, and techniques are discussed, and data sheets for individual tests are included. Process variables studied include welding amperes, revolutions per minute, and shielding gas flow. Strip chart recordings, as a definitive method of insuring weld quality, are studied. Test results, determined by both radiographic and visual inspection, are presented and indicate that once optimum welding procedures for specific sizes of tubing are established, and the welding machine operations are certified, then the automatic tube welding process produces good quality welds repeatedly, with a high degree of reliability. Revised specifications for welding tubing using the automatic process and weld visual inspection requirements at the Kennedy Space Center are enumerated.

  17. Bianchi identities and the automatic conservation of energy-momentum and angular momentum in general-relativistic field theories

    NASA Astrophysics Data System (ADS)

    Hehl, Friedrich W.; McCrea, J. Dermott

    1986-03-01

    Automatic conservation of energy-momentum and angular momentum is guaranteed in a gravitational theory if, via the field equations, the conservation laws for the material currents are reduced to the contracted Bianchi identities. We first execute an irreducible decomposition of the Bianchi identities in a Riemann-Cartan space-time. Then, starting from a Riemannian space-time with or without torsion, we determine those gravitational theories which have automatic conservation: general relativity and the Einstein-Cartan-Sciama-Kibble theory, both with cosmological constant, and the nonviable pseudoscalar model. The Poincaré gauge theory of gravity, like gauge theories of internal groups, has no automatic conservation in the sense defined above. This does not lead to any difficulties in principle. Analogies to 3-dimensional continuum mechanics are stressed throughout the article.

  18. Quality Control of True Height Profiles Obtained Automatically from Digital Ionograms.

    DTIC Science & Technology

    1982-05-01

    Keywords: Ionosphere, Digisonde, Electron Density Profile, Ionogram, Autoscaling, ARTIST. The report's objectives include comparing the true-height analysis technique currently used with ionogram traces scaled automatically by the ARTIST software [Reinisch and Huang, 1983; Reinisch et al., 1984] against the generalized polynomial analysis technique POLAN [Titheridge, 1985], applied to the same ARTIST-identified ionogram traces.

  19. Automatic Methods in Image Processing and Their Relevance to Map-Making.

    DTIC Science & Technology

    1981-02-11

    The report develops image and correlator models that describe the behavior of correlation processors under conditions of low image contrast or low signal-to-noise ratio; as an example, the image function f is taken to be white noise, so that its correlation function is a Dirac impulse. Contents include sections on sensor noise, self noise, machine noise, and fixed-point processing.

  20. Automatic segmentation of the left ventricle in a cardiac MR short axis image using blind morphological operation

    NASA Astrophysics Data System (ADS)

    Irshad, Mehreen; Muhammad, Nazeer; Sharif, Muhammad; Yasmeen, Mussarat

    2018-04-01

    Conventionally, cardiac MR image analysis is done manually. Automatic analysis can replace this monotonous processing of massive amounts of data when assessing the global and regional function of the cardiac left ventricle (LV). The task is performed on MR images to calculate analytic cardiac parameters such as end-systolic volume, end-diastolic volume, ejection fraction, and myocardial mass. These analytic parameters depend upon genuine delineation of the epicardial, endocardial, papillary muscle, and trabeculation contours. In this paper, we propose an automatic segmentation method that uses the sum-of-absolute-differences technique to localize the left ventricle. Blind morphological operations are proposed to segment and automatically detect the LV contours of the epicardium and endocardium. We evaluate the proposed work on the benchmark Sunny Brook dataset. Contours of the epicardium and endocardium are compared quantitatively to determine their accuracy, and high matching values are observed: overlap between the automatic segmentation and the expert ground truth is high, with a similarity index of 91.30%. The proposed method for automatic segmentation gives better performance than existing techniques in terms of accuracy.
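
    The sum-of-absolute-differences localization step can be sketched generically: slide a template over the image and keep the window with the smallest SAD. The tiny image and template below are illustrative only, not the paper's data.

```python
def sad(patch, template):
    """Sum of absolute differences between two equally sized 2-D windows."""
    return sum(abs(p - t)
               for row_p, row_t in zip(patch, template)
               for p, t in zip(row_p, row_t))

def locate(image, template):
    """Top-left corner of the window minimising the SAD score."""
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            patch = [row[c:c + tw] for row in image[r:r + th]]
            score = sad(patch, template)
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos

img = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 9, 0],
       [0, 0, 0, 0]]
tmpl = [[9, 8],
        [7, 9]]
pos = locate(img, tmpl)
```

    In the paper's setting the template would encode the expected LV appearance, and the minimising window gives the region on which the morphological operations then act.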

  1. Reevaluation of pollen quantitation by an automatic pollen counter.

    PubMed

    Muradil, Mutarifu; Okamoto, Yoshitaka; Yonekura, Syuji; Chazono, Hideaki; Hisamitsu, Minako; Horiguchi, Shigetoshi; Hanazawa, Toyoyuki; Takahashi, Yukie; Yokota, Kunihiko; Okumura, Satoshi

    2010-01-01

    Accurate and detailed pollen monitoring is useful for selection of medication and for allergen avoidance in patients with allergic rhinitis. Burkard and Durham pollen samplers are commonly used, but are labor and time intensive. In contrast, automatic pollen counters allow simple real-time pollen counting; however, these instruments have difficulty in distinguishing pollen from small nonpollen airborne particles. Misidentification and underestimation rates for an automatic pollen counter were examined to improve the accuracy of the pollen count. The characteristics of the automatic pollen counter were determined in a chamber study with exposure to cedar pollens or soil grains. The cedar pollen counts were monitored in 2006 and 2007, and compared with those from a Durham sampler. The pollen counts from the automatic counter showed a good correlation (r > 0.7) with those from the Durham sampler when pollen dispersal was high, but a poor correlation (r < 0.5) when pollen dispersal was low. The new correction method, which took into account the misidentification and underestimation, improved this correlation to r > 0.7 during the pollen season. The accuracy of automatic pollen counting can be improved using a correction to include rates of underestimation and misidentification in a particular geographical area.
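
    The correction described can be sketched as a two-factor adjustment: subtract particles misidentified as pollen, then scale up for undercounting. The function and the rates below are hypothetical illustrations of the idea, not the authors' calibration for any geographical area.

```python
def corrected_count(raw_count, misidentified, underestimation_factor):
    """Remove non-pollen particles counted as pollen, then scale up for
    pollen grains the automatic counter missed."""
    return (raw_count - misidentified) * underestimation_factor

# Hypothetical example: 120 raw counts, 20 misidentified, 1.5x undercount.
c = corrected_count(raw_count=120, misidentified=20, underestimation_factor=1.5)
```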

  2. Automatic, Rapid Replanning of Satellite Operations for Space Situational Awareness (SSA)

    NASA Astrophysics Data System (ADS)

    Stottler, D.; Mahan, K.

    An important component of Space Situational Awareness (SSA) is knowledge of the status and tasking of blue forces (e.g. satellites and ground stations) and the rapid determination of the impacts of real or hypothetical changes and the ability to quickly replan based on those changes. For example, if an antenna goes down (either for benign reasons or from purposeful interference) determining which missions will be impacted is important. It is not simply the set of missions that were scheduled to utilize that antenna, because highly expert human schedulers will respond to the outage by intelligently replanning the real-time schedule. We have developed an automatic scheduling and deconfliction engine, called MIDAS (for Managed Intelligent Deconfliction And Scheduling) that interfaces to the current legacy system (ESD 2.7) which can perform this replanning function automatically. In addition to determining the impact of failed resources, MIDAS can also replan in response to a satellite under attack. In this situation, additional supports must be quickly scheduled and executed (while minimizing impacts to other missions). Because MIDAS is a fully automatic system, replacing a current human labor-intensive process, and provides very rapid turnaround (seconds) it can also be used by commanders to consider what-if questions and focus limited protection resources on the most critical resources. For example, the commander can determine the impact of a successful attack on one of two ground stations and place heavier emphasis on protecting the station whose loss would create the most severe impacts. The system is currently transitioning to operational use. The MIDAS system and its interface to the legacy ESD 2.7 system will be described along with the ConOps for different types of detailed operational scenarios.

  3. Implicit measures of beliefs about sport ability in swimming and basketball.

    PubMed

    Mascret, Nicolas; Falconetti, Jean-Louis; Cury, François

    2016-01-01

    Sport ability may be seen as relatively stable, genetically determined and not easily modified by practice, or as increasable with training, work and effort. Using the Implicit Association Test (IAT), the purpose of the present study is to examine whether the practice of a particular sport (swimming or basketball) can influence automatic beliefs about sport ability in these two sports. The IAT scores evidence that swimmers and basketball players automatically and implicitly associate their own sport with training rather than genetics, whereas non-sportspersons have no significant automatic association. This result is strengthened when perceived competence and intrinsic motivation in swimming or basketball are high.

  4. Semi-automatic, octave-spanning optical frequency counter.

    PubMed

    Liu, Tze-An; Shu, Ren-Huei; Peng, Jin-Long

    2008-07-07

    This work presents and demonstrates a semi-automatic optical frequency counter with octave-spanning counting capability using two fiber laser combs operated at different repetition rates. Monochromators are utilized to provide an approximate frequency of the laser under measurement to determine the mode number difference between the two laser combs. The exact mode number of the beating comb line is obtained from the mode number difference and the measured beat frequencies. The entire measurement process, except the frequency stabilization of the laser combs and the optimization of the beat signal-to-noise ratio, is controlled by a computer running a semi-automatic optical frequency counter.
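
    The mode-number determination can be made concrete with the standard comb equation f = f_ceo + n * f_rep + f_beat: a coarse frequency estimate good to better than half the repetition rate pins down the integer n. This is a generic illustration of that arithmetic, not the paper's two-comb procedure, and all numbers below are hypothetical.

```python
def mode_number(f_approx_hz, f_rep_hz, f_ceo_hz, f_beat_hz):
    """Nearest comb mode number n for a laser line, given a coarse frequency
    estimate (e.g. from a monochromator) and the measured beat frequency."""
    return round((f_approx_hz - f_ceo_hz - f_beat_hz) / f_rep_hz)

# Hypothetical numbers: 250 MHz comb, 20 MHz offset, laser near 193.4 THz.
n = mode_number(193.400e12, 250e6, 20e6, 35e6)
f_exact = 20e6 + n * 250e6 + 35e6  # reconstructed optical frequency in Hz
```

    Using two combs with different repetition rates, as the abstract describes, removes the need for the coarse estimate to be that accurate, since the mode-number difference between the combs provides the extra constraint.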

  5. ADA perceived disability claims: a decision-tree analysis.

    PubMed

    Draper, William R; Hawley, Carolyn E; McMahon, Brian T; Reid, Christine A; Barbir, Lara A

    2014-06-01

    The purpose of this study is to examine the possible interactions of predictor variables pertaining to perceived disability claims contained in a large governmental database. Specifically, it is a retrospective analysis of US Equal Employment Opportunity Commission (EEOC) data for the entire population of workplace discrimination claims based on the "regarded as disabled" prong of the Americans with Disabilities Act (ADA) definition of disability. The study utilized records extracted from a "master database" of over two million charges of workplace discrimination in the Integrated Mission System of the EEOC. This database includes all ADA-related discrimination allegations filed from July 26, 1992 through December 31, 2008. Chi-square automatic interaction detection (CHAID) was employed to analyze interaction effects of relevant variables, such as issue (grievance) and industry type. The research question addressed by CHAID is: What combination of factors is associated with merit outcomes for people making ADA EEOC allegations who are "regarded as" having disabilities? The CHAID analysis shows how merit outcome is predicted by the interaction of relevant variables. Issue was found to be the most prominent variable in determining merit outcome, followed by industry type, but the picture is made more complex by qualifications regarding age and race data. Although discharge was the most frequent grievance among charging parties in the perceived disability group, its merit outcome was significantly less than that for the leading factor of hiring.

  6. Effectiveness-weighted control method for a cooling system

    DOEpatents

    Campbell, Levi A.; Chu, Richard C.; David, Milnes P.; Ellsworth Jr., Michael J.; Iyengar, Madhusudan K.; Schmidt, Roger R.; Simons, Robert E.

    2015-12-15

    Energy efficient control of cooling system cooling of an electronic system is provided based, in part, on weighted cooling effectiveness of the components. The control includes automatically determining speed control settings for multiple adjustable cooling components of the cooling system. The automatically determining is based, at least in part, on weighted cooling effectiveness of the components of the cooling system, and the determining operates to limit power consumption of at least the cooling system, while ensuring that a target temperature associated with at least one of the cooling system or the electronic system is within a desired range by provisioning, based on the weighted cooling effectiveness, a desired target temperature change among the multiple adjustable cooling components of the cooling system. The provisioning includes provisioning applied power to the multiple adjustable cooling components via, at least in part, the determined control settings.
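    The provisioning idea in the abstract can be sketched as a proportional split of the desired temperature change among components. This is an illustrative toy, not the patented control method; the component names and effectiveness weights are invented.

    ```python
    # Illustrative sketch: apportion a desired target temperature change among
    # adjustable cooling components in proportion to their cooling-effectiveness
    # weights, so more-effective components absorb more of the change.
    # Component names and weights are invented.

    def provision_temperature_change(weights, total_delta_t):
        """Split total_delta_t among components proportionally to weight."""
        total_weight = sum(weights.values())
        return {name: total_delta_t * w / total_weight
                for name, w in weights.items()}

    effectiveness = {"cdu_pump": 0.5, "rack_fan": 0.3, "room_blower": 0.2}
    shares = provision_temperature_change(effectiveness, total_delta_t=4.0)
    print(shares)  # cdu_pump receives half of the 4.0-degree change
    ```

    In the patented scheme, the resulting per-component shares would then be translated into speed control settings and applied power.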

  7. Effectiveness-weighted control of cooling system components

    DOEpatents

    Campbell, Levi A.; Chu, Richard C.; David, Milnes P.; Ellsworth Jr., Michael J.; Iyengar, Madhusudan K.; Schmidt, Roger R.; Simons, Robert E.

    2015-12-22

    Energy efficient control of cooling system cooling of an electronic system is provided based, in part, on weighted cooling effectiveness of the components. The control includes automatically determining speed control settings for multiple adjustable cooling components of the cooling system. The automatically determining is based, at least in part, on weighted cooling effectiveness of the components of the cooling system, and the determining operates to limit power consumption of at least the cooling system, while ensuring that a target temperature associated with at least one of the cooling system or the electronic system is within a desired range by provisioning, based on the weighted cooling effectiveness, a desired target temperature change among the multiple adjustable cooling components of the cooling system. The provisioning includes provisioning applied power to the multiple adjustable cooling components via, at least in part, the determined control settings.

  8. New operator assistance features in the CMS Run Control System

    NASA Astrophysics Data System (ADS)

    Andre, J.-M.; Behrens, U.; Branson, J.; Brummer, P.; Chaze, O.; Cittolin, S.; Contescu, C.; Craigs, B. G.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Doualot, N.; Erhan, S.; Fulcher, J. R.; Gigi, D.; Gładki, M.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Janulis, M.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; O'Dell, V.; Orsini, L.; Paus, C.; Petrova, P.; Pieri, M.; Racz, A.; Reis, T.; Sakulin, H.; Schwick, C.; Simelevicius, D.; Vougioukas, M.; Zejdl, P.

    2017-10-01

    During Run-1 of the LHC, many operational procedures have been automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to automatically recover from these. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail including first operational experience.

  9. New Operator Assistance Features in the CMS Run Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andre, J.M.; et al.

    During Run-1 of the LHC, many operational procedures have been automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to automatically recover from these. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail including first operational experience.

  10. Automaticity in acute ischemia: Bifurcation analysis of a human ventricular model

    NASA Astrophysics Data System (ADS)

    Bouchard, Sylvain; Jacquemet, Vincent; Vinet, Alain

    2011-01-01

    Acute ischemia (restriction in blood supply to part of the heart as a result of myocardial infarction) induces major changes in the electrophysiological properties of the ventricular tissue. Extracellular potassium concentration ([K+]o) increases in the ischemic zone, leading to an elevation of the resting membrane potential that creates an “injury current” (IS) between the infarcted and the healthy zone. In addition, the lack of oxygen impairs the metabolic activity of the myocytes and decreases ATP production, thereby affecting ATP-sensitive potassium channels (IKatp). Frequent complications of myocardial infarction are tachycardia, fibrillation, and sudden cardiac death, but the mechanisms underlying their initiation are still debated. One hypothesis is that these arrhythmias may be triggered by abnormal automaticity. We investigated the effect of ischemia on myocyte automaticity by performing a comprehensive bifurcation analysis (fixed points, cycles, and their stability) of a human ventricular myocyte model [K. H. W. J. ten Tusscher and A. V. Panfilov, Am. J. Physiol. Heart Circ. Physiol. 291, H1088 (2006)] as a function of three ischemia-relevant parameters: [K+]o, IS, and IKatp. In this single-cell model, we found that automatic activity was possible only in the presence of an injury current. Changes in [K+]o and IKatp significantly altered the bifurcation structure of IS, including the occurrence of early afterdepolarizations. The results provide a sound basis for studying higher-dimensional tissue structures representing an ischemic heart.

  11. Automatic 3D segmentation of multiphoton images: a key step for the quantification of human skin.

    PubMed

    Decencière, Etienne; Tancrède-Bohin, Emmanuelle; Dokládal, Petr; Koudoro, Serge; Pena, Ana-Maria; Baldeweck, Thérèse

    2013-05-01

    Multiphoton microscopy has emerged in the past decade as a useful noninvasive imaging technique for in vivo human skin characterization. However, it has not been used until now in evaluation clinical trials, mainly because of the lack of specific image processing tools that would allow the investigator to extract pertinent quantitative three-dimensional (3D) information from the different skin components. We propose a 3D automatic segmentation method of multiphoton images which is a key step for epidermis and dermis quantification. This method, based on the morphological watershed and graph cuts algorithms, takes into account the real shape of the skin surface and of the dermal-epidermal junction, and allows separating in 3D the epidermis and the superficial dermis. The automatic segmentation method and the associated quantitative measurements have been developed and validated on a clinical database designed for aging characterization. The segmentation achieves its goals for epidermis-dermis separation and allows quantitative measurements inside the different skin compartments with sufficient relevance. This study shows that multiphoton microscopy associated with specific image processing tools provides access to new quantitative measurements on the various skin components. The proposed 3D automatic segmentation method will contribute to build a powerful tool for characterizing human skin condition. To our knowledge, this is the first 3D approach to the segmentation and quantification of these original images.

  12. A quality score for coronary artery tree extraction results

    NASA Astrophysics Data System (ADS)

    Cao, Qing; Broersen, Alexander; Kitslaar, Pieter H.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke

    2018-02-01

    Coronary artery trees (CATs) are often extracted to aid the fully automatic analysis of coronary artery disease on coronary computed tomography angiography (CCTA) images. Automatically extracted CATs often miss some arteries or include wrong extractions which require manual corrections before performing successive steps. For analyzing a large number of datasets, a manual quality check of the extraction results is time-consuming. This paper presents a method to automatically calculate quality scores for extracted CATs in terms of clinical significance of the extracted arteries and the completeness of the extracted CAT. Both right dominant (RD) and left dominant (LD) anatomical statistical models are generated and exploited in developing the quality score. To automatically determine which model should be used, a dominance type detection method is also designed. Experiments are performed on the automatically extracted and manually refined CATs from 42 datasets to evaluate the proposed quality score. In 39 (92.9%) cases, the proposed method is able to measure the quality of the manually refined CATs with higher scores than the automatically extracted CATs. In a 100-point scale system, the average scores for automatically and manually refined CATs are 82.0 (+/-15.8) and 88.9 (+/-5.4) respectively. The proposed quality score will assist the automatic processing of the CAT extractions for large cohorts which contain both RD and LD cases. To the best of our knowledge, this is the first time that a general quality score for an extracted CAT is presented.
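    The completeness component of such a quality score can be sketched as a clinical-significance-weighted fraction of expected segments that were actually extracted, scaled to the paper's 100-point scale. The segment names and weights below are invented stand-ins for an anatomical statistical model, not the authors' actual scoring function.

    ```python
    # Hypothetical sketch of a weighted quality score for an extracted coronary
    # artery tree (CAT): each segment expected by the dominance model carries a
    # clinical-significance weight; the score is the weighted fraction of
    # segments present, on a 100-point scale. Names/weights are invented.

    RD_MODEL_WEIGHTS = {"LM": 10, "LAD": 25, "LCX": 20, "RCA": 25, "PDA": 10, "OM1": 10}

    def cat_quality_score(extracted_segments, model_weights):
        found = sum(w for seg, w in model_weights.items() if seg in extracted_segments)
        return 100.0 * found / sum(model_weights.values())

    # An automatic extraction that missed the PDA and OM1 branches:
    score = cat_quality_score({"LM", "LAD", "LCX", "RCA"}, RD_MODEL_WEIGHTS)
    print(score)  # 80.0
    ```

    In the paper, the appropriate weight model (right- or left-dominant) would first be chosen by the dominance type detection step.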

  13. LAMBADA and InflateGRO2: efficient membrane alignment and insertion of membrane proteins for molecular dynamics simulations.

    PubMed

    Schmidt, Thomas H; Kandt, Christian

    2012-10-22

    At the beginning of each molecular dynamics membrane simulation stands the generation of a suitable starting structure, which includes the working steps of aligning membrane and protein and seamlessly accommodating the protein in the membrane. Here we introduce two efficient and complementary methods based on pre-equilibrated membrane patches, automating these steps. Using a voxel-based cast of the coarse-grained protein, LAMBADA computes a hydrophilicity-profile-derived scoring function based on which the optimal rotation and translation operations are determined to align protein and membrane. Employing an entirely geometrical approach, LAMBADA is independent from any precalculated data and aligns even large membrane proteins within minutes on a regular workstation. LAMBADA is the first tool performing the entire alignment process automatically while providing the user with the explicit 3D coordinates of the aligned protein and membrane. The second tool is an extension of the InflateGRO method addressing the shortcomings of its predecessor in a fully automated workflow. Determining the exact number of overlapping lipids based on the area occupied by the protein, and restricting expansion, compression and energy minimization steps to a subset of relevant lipids through automatically calculated and system-optimized operation parameters, InflateGRO2 yields optimal lipid packing and reduces lipid vacuum exposure to a minimum, preserving as much of the equilibrated membrane structure as possible. Applicable to atomistic and coarse-grained structures in MARTINI format, InflateGRO2 offers high accuracy, fast performance, and increased application flexibility, permitting the easy preparation of systems exhibiting heterogeneous lipid composition as well as embedding proteins into multiple membranes. Both tools can be used separately, in combination with other methods, or in tandem, permitting a fully automated workflow while retaining a maximum level of usage control and flexibility. To assess the performance of both methods, we carried out test runs using 22 membrane proteins of different size and transmembrane structure.

  14. Automatic determination of fault effects on aircraft functionality

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan

    1989-01-01

    The problem of determining the behavior of physical systems subsequent to the occurrence of malfunctions is discussed. It is established that while it was reasonable to assume that the most important fault behavior modes of primitive components and simple subsystems could be known and predicted, interactions within composite systems reached levels of complexity that precluded the use of traditional rule-based expert system techniques. Reasoning from first principles, i.e., on the basis of causal models of the physical system, was required. The first question that arises is, of course, how the causal information required for such reasoning should be represented. The bond graphs presented here occupy a position intermediate between qualitative and quantitative models, allowing the automatic derivation of Kuipers-like qualitative constraint models as well as state equations. Their most salient feature, however, is that entities corresponding to components and interactions in the physical system are explicitly represented in the bond graph model, thus permitting systematic model updates to reflect malfunctions. Researchers show how this is done, as well as presenting a number of techniques for obtaining qualitative information from the state equations derivable from bond graph models. One insight is the fact that one of the most important advantages of the bond graph ontology is the highly systematic approach to model construction it imposes on the modeler, who is forced to classify the relevant physical entities into a small number of categories, and to look for two highly specific types of interactions among them. The systematic nature of bond graph model construction facilitates the process to the point where the guidelines are sufficiently specific to be followed by modelers who are not domain experts. As a result, models of a given system constructed by different modelers will have extensive similarities. 
Researchers conclude by pointing out that the ease of updating bond graph models to reflect malfunctions is a manifestation of the systematic nature of bond graph construction, and the regularity of the relationship between bond graph models and physical reality.

  15. The "Sigmoid Sniffer" and the "Advanced Automated Solar Filament Detection and Characterization Code" Modules

    NASA Astrophysics Data System (ADS)

    Raouafi, Noureddine; Bernasconi, P. N.; Georgoulis, M. K.

    2010-05-01

    We present two pattern recognition algorithms, the "Sigmoid Sniffer" and the "Advanced Automated Solar Filament Detection and Characterization Code," that are among the Feature Finding modules of the Solar Dynamics Observatory: 1) Coronal sigmoids visible in X-rays and the EUV are the result of highly twisted magnetic fields. They can occur anywhere on the solar disk and are closely related to solar eruptive activity (e.g., flares, CMEs). Their appearance typically heralds imminent solar eruptions, so they can serve as a tool to forecast solar activity. Automatic X-ray sigmoid identification offers an unbiased way of detecting short-to-mid term CME precursors. The "Sigmoid Sniffer" module automatically detects sigmoids in full-disk X-ray images and determines their chirality, as well as other characteristics. It uses multiple thresholds to identify persistent bright structures on a full-disk X-ray image of the Sun. We plan to apply the code to X-ray images from Hinode/XRT, as well as to SDO/AIA images. When implemented in a near real-time environment, the Sigmoid Sniffer could allow 3-7 day forecasts of CMEs and their potential to cause major geomagnetic storms. 2) The "Advanced Automated Solar Filament Detection and Characterization Code" aims to identify, classify, and track solar filaments in full-disk Hα images. The code can reliably identify filaments and determine their chirality and other relevant parameters such as filament area, length, and average orientation with respect to the equator. It is also capable of tracking the day-by-day evolution of filaments as they traverse the visible disk. The code was tested by analyzing daily Hα images taken at the Big Bear Solar Observatory from mid-2000 to early-2005. It identified and established the chirality of thousands of filaments without human intervention.

  16. Genital automatisms: Reappraisal of a remarkable but ignored symptom of focal seizures.

    PubMed

    Dede, Hava Özlem; Bebek, Nerses; Gürses, Candan; Baysal-Kıraç, Leyla; Baykan, Betül; Gökyiğit, Ayşen

    2018-03-01

    Genital automatisms (GAs) are uncommon clinical phenomena of focal seizures. They are defined as repeated fondling, grabbing, or scratching of the genitals. The aim of this study was to determine the lateralizing and localizing value and associated clinical characteristics of GAs. Three hundred thirteen consecutive patients with drug-resistant seizures who were referred to our tertiary center for presurgical evaluation between 2009 and 2016 were investigated. The incidence of specific kinds of behavior, clinical semiology, associated symptoms/signs with corresponding ictal electroencephalography (EEG) findings, and their potential role in seizure localization and lateralization were evaluated. Fifteen (4.8%) of 313 patients had GAs. Genital automatisms were identified in 19 (16.4%) of a total 116 seizures. Genital automatisms were observed to occur more often in men than in women (M/F: 10/5). Nine of fifteen patients (60%) had temporal lobe epilepsy (right/left: 4/5) and three (20%) had frontal lobe epilepsy (right/left: 1/2), whereas the remaining two patients could not be classified. One patient was diagnosed as having Rasmussen encephalitis. Genital automatisms were ipsilateral to epileptic focus in 12 patients and contralateral in only one patient according to ictal-interictal EEG and neuroimaging findings. Epileptic focus could not be lateralized in the last 2 patients. Genital automatisms were associated with unilateral hand automatisms such as postictal nose wiping or manual automatisms in 13 (86.7%) of 15 and contralateral dystonia was seen in 6 patients. All patients had amnesia of the performance of GAs. Genital automatisms are more frequent in seizures originating from the temporal lobe, and they can also be seen in frontal lobe seizures. Genital automatisms seem to have a high lateralizing value to the ipsilateral hemisphere and are mostly concordant with other unilateral hand automatisms. Men exhibit GAs more often than women. 

  17. Flow rate of some pharmaceutical diluents through die-orifices relevant to mini-tableting.

    PubMed

    Kachrimanis, K; Petrides, M; Malamataris, S

    2005-10-13

    The effects of cylindrical orifice length and diameter on the flow rate of three commonly used pharmaceutical direct compression diluents (lactose, dibasic calcium phosphate dihydrate and pregelatinised starch) were investigated, in addition to the powder particle characteristics (particle size, aspect ratio, roundness and convexity) and the packing properties (true, bulk and tapped density). Flow rate was determined for three different sieve fractions through a series of miniature tableting dies of different orifice diameter (0.4, 0.3 and 0.2 cm) and thickness (1.5, 1.0 and 0.5 cm). It was found that flow rate decreased with increasing orifice length for the small diameter (0.2 cm) but increased with orifice length (die thickness) for the large diameter (0.4 cm). Flow rate changes with the orifice length are attributed to the flow regime (transitional arch formation) and possible alterations in the position of the free flowing zone caused by pressure gradients arising from the flow of self-entrained air, both above the entrance to the die orifice and across it. Modelling by the conventional Jones-Pilpel non-linear equation and by two machine learning algorithms (lazy learning, LL, and feed-forward back-propagation, FBP) was applied, and the predictive performance of the fitted models was compared. It was found that both the FBP and LL algorithms have significantly higher predictive performance than the Jones-Pilpel non-linear equation, because they account for both dimensions of the cylindrical die opening (diameter and length). Automatic relevance determination for FBP revealed that orifice length is the third most influential variable after orifice diameter and particle size, followed by the bulk density, the difference between bulk and tapped densities, and particle convexity.
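    The idea behind automatic relevance determination (ARD), which gives this record collection its topic, is to place a separate precision hyperparameter on each input weight so that irrelevant inputs are pruned automatically. The study above applied ARD inside a feed-forward back-propagation network; the sketch below instead uses scikit-learn's linear ARDRegression on synthetic data to show how ARD ranks input influence. The data and feature names are invented.

    ```python
    # Sketch of automatic relevance determination (ARD) ranking input influence.
    # Synthetic data: y depends strongly on feature 0, weakly on feature 1,
    # and not at all on feature 2; ARD should prune feature 2.
    import numpy as np
    from sklearn.linear_model import ARDRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

    model = ARDRegression().fit(X, y)
    # A large per-weight precision (model.lambda_) shrinks a weight toward
    # zero; here we simply rank features by the magnitude of the fitted weight.
    ranking = np.argsort(-np.abs(model.coef_))
    print(ranking)  # feature 0 should rank first, feature 2 last
    ```

    The same relevance ordering is what the FBP analysis in the abstract reports for orifice diameter, particle size, and orifice length.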

  18. Quality of life in older individuals with joint contractures in geriatric care settings.

    PubMed

    Heise, Marco; Müller, Martin; Fischer, Uli; Grill, Eva

    2016-09-01

    The purpose of this study was to analyze the association between functioning and disability and quality of life (QoL) in older individuals with joint contractures in the geriatric care setting. More specifically, this study aimed to identify determinants of QoL out of a defined set of contracture-related categories of the International Classification of Functioning, Disability and Health (ICF). Participants for this multicenter cross-sectional survey were recruited from acute geriatric rehabilitation hospitals, nursing homes, and community nursing facilities in Germany between February and October 2013. QoL was assessed using the validated German version of the EQ-5D index score and the EQ-5D visual analog scale (VAS). Manual and automatic variable selection methods were used to identify the most relevant variables out of 125 contracture-related ICF categories. A total of 241 eligible participants (34.9 % male, mean age 80.1 years) were included. The final models contained 14 ICF categories as predictors of the EQ-5D index score and 15 categories as predictors of the EQ-5D VAS. The statistically significant ICF categories from both models were 'muscle power functions (b730),' 'memory functions (b144),' 'taking care of plants (d6505),' 'recreation and leisure (d920),' 'religion and spirituality (d930),' 'drugs (e1101),' and 'products and technology for personal use in daily living (e115).' We identified the most relevant ICF categories for older individuals with joint contractures and their health-related quality of life. These items describe potential determinants of QoL which may provide the basis for future health interventions aiming to improve QoL for the patients with joint contractures.

  19. Bayesian anomaly detection in monitoring data applying relevance vector machine

    NASA Astrophysics Data System (ADS)

    Saito, Tomoo

    2011-04-01

    A method is developed for automatically classifying monitoring data into two categories, normal and anomalous, in order to remove anomalous data from the enormous amount of monitoring data. The relevance vector machine (RVM) is applied to a probabilistic discriminative model with basis functions and weight parameters, whose posterior PDF (probability density function) conditional on the learning data set is given by Bayes' theorem. The proposed framework is applied to actual monitoring data sets containing some anomalous data collected at two buildings in Tokyo, Japan. The trained models discriminate anomalous data from normal data very clearly, assigning high probabilities of being normal to normal data and low probabilities of being normal to anomalous data.
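    The structure of such a probabilistic discriminative model can be illustrated with a classifier over radial basis functions that outputs, for each sample, a probability of being normal. Note the hedge: a true RVM additionally places a sparsity-inducing Bayesian prior on the weights and retains only a few "relevance vectors"; the sketch below uses plain logistic regression on RBF features, with invented two-dimensional monitoring data.

    ```python
    # Illustration of the model structure only (not a full RVM): a probabilistic
    # discriminative classifier over radial basis functions, giving each
    # monitoring sample a probability of being "normal". Data are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    normal = rng.normal(0.0, 1.0, size=(100, 2))   # normal monitoring data
    anomaly = rng.normal(5.0, 1.0, size=(20, 2))   # anomalous data
    X = np.vstack([normal, anomaly])
    y = np.array([1] * 100 + [0] * 20)             # 1 = normal

    def rbf_features(X, centers, gamma=0.5):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    centers = X[::10]                              # subsampled basis centers
    clf = LogisticRegression().fit(rbf_features(X, centers), y)
    p_normal = clf.predict_proba(rbf_features(X, centers))[:, 1]
    print(p_normal[:3], p_normal[-3:])  # high for normal, low for anomalous
    ```

    Samples whose probability of being normal falls below a chosen threshold would then be flagged and removed from the monitoring record.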

  20. DecoFungi: a web application for automatic characterisation of dye decolorisation in fungal strains.

    PubMed

    Domínguez, César; Heras, Jónathan; Mata, Eloy; Pascual, Vico

    2018-02-27

    Fungi have diverse biotechnological applications in, among others, agriculture, bioenergy generation, and remediation of polluted soil and water. In this context, culture media based on color change in response to degradation of dyes are particularly relevant; but measuring dye decolorisation of fungal strains mainly relies on a visual and semiquantitative classification of color intensity changes. Such a classification is a subjective, time-consuming and difficult-to-reproduce process. DecoFungi is, to the best of our knowledge, the first application to automatically characterise the dye decolorisation level of fungal strains from images of inoculated plates. To deal with this task, DecoFungi employs a deep-learning model, accessible through a user-friendly web interface, with an accuracy of 96.5%. DecoFungi is an easy-to-use system for characterising the dye decolorisation level of fungal strains from images of inoculated plates.

  1. Efficient Verification of Holograms Using Mobile Augmented Reality.

    PubMed

    Hartl, Andreas Daniel; Arth, Clemens; Grubert, Jens; Schmalstieg, Dieter

    2016-07-01

    Paper documents such as passports, visas and banknotes are frequently checked by inspection of security elements. In particular, optically variable devices such as holograms are important, but difficult to inspect. Augmented Reality can provide all relevant information on standard mobile devices. However, hologram verification on mobile devices is still slow and provides lower accuracy than inspection by humans using appropriate reference information. We aim to address these drawbacks through automatic matching combined with a special parametrization of an efficient goal-oriented user interface which supports constrained navigation. We first evaluate a series of similarity measures for matching hologram patches to provide a sound basis for automatic decisions. Then a re-parametrized user interface is proposed based on observations of typical user behavior during document capture. These measures help to reduce capture time to approximately 15 s, with better decisions regarding the evaluated samples than what can be achieved by untrained users.

  2. Facilitating Goal-Oriented Behaviour in the Stroop Task: When Executive Control Is Influenced by Automatic Processing

    PubMed Central

    Parris, Benjamin A.; Bate, Sarah; Brown, Scott D.; Hodgson, Timothy L.

    2012-01-01

    A portion of Stroop interference is thought to arise from a failure to maintain goal-oriented behaviour (or goal neglect). The aim of the present study was to investigate whether goal-relevant primes could enhance goal maintenance and reduce the Stroop interference effect. Here it is shown that primes related to the goal of responding quickly in the Stroop task (e.g. fast, quick, hurry) substantially reduced Stroop interference by reducing reaction times to incongruent trials but increasing reaction times to congruent and neutral trials. No effects of the primes were observed on errors. The effects on incongruent, congruent and neutral trials are explained in terms of the influence of the primes on goal maintenance. The results show that goal priming can facilitate goal-oriented behaviour and indicate that automatic processing can modulate executive control. PMID:23056553

  3. [Design and implementation of mobile terminal data acquisition for Chinese materia medica resources survey].

    PubMed

    Qi, Yuan-Hua; Wang, Hui; Zhang, Xiao-Bo; Jin, Yan; Ge, Xiao-Guang; Jing, Zhi-Xian; Wang, Ling; Zhao, Yu-Ping; Guo, Lan-Ping; Huang, Lu-Qi

    2017-11-01

    In this paper, a mobile-terminal data acquisition system combining GPS, offset correction, automatic speech recognition and networked database technology was designed and implemented. It quickly records latitude, longitude and elevation, and conveniently captures various types of photographs: Chinese herbal plants, specimens, habitats and so on. The system automatically associates records with Chinese materia medica source information; through its voice recognition function it records plant characteristics, environmental characteristics, and the relevant specimen information. The data processing platform, based on the survey data reporting client, effectively assists indoor data processing and exports the mobile-terminal data to the computer terminal. The established data acquisition system provides strong technical support for the fourth national survey of the Chinese materia medica resources (CMMR).

  4. Evaluation of an automatic brain segmentation method developed for neonates on adult MR brain images

    NASA Astrophysics Data System (ADS)

    Moeskops, Pim; Viergever, Max A.; Benders, Manon J. N. L.; Išgum, Ivana

    2015-03-01

    Automatic brain tissue segmentation is of clinical relevance in images acquired at all ages. The literature presents a clear distinction between methods developed for MR images of infants, and methods developed for images of adults. The aim of this work is to evaluate a method developed for neonatal images in the segmentation of adult images. The evaluated method employs supervised voxel classification in subsequent stages, exploiting spatial and intensity information. Evaluation was performed using images available within the MRBrainS13 challenge. The obtained average Dice coefficients were 85.77% for grey matter, 88.66% for white matter, 81.08% for cerebrospinal fluid, 95.65% for cerebrum, and 96.92% for intracranial cavity, currently resulting in the best overall ranking. The possibility of applying the same method to neonatal as well as adult images can be of great value in cross-sectional studies that include a wide age range.
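    The Dice coefficients reported above measure the overlap between an automatic segmentation and the reference segmentation, expressed here as a percentage. A minimal sketch on toy binary masks (the masks are invented; MRBrainS13 uses 3D MR volumes):

    ```python
    # Dice similarity coefficient between two binary segmentation masks,
    # as a percentage: 100 * 2|A ∩ B| / (|A| + |B|). Toy 2D masks.
    import numpy as np

    def dice(seg, ref):
        seg, ref = seg.astype(bool), ref.astype(bool)
        return 200.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())

    seg = np.array([[1, 1, 0], [0, 1, 0]])  # automatic segmentation
    ref = np.array([[1, 1, 1], [0, 0, 0]])  # reference segmentation
    print(dice(seg, ref))  # 2*2/(3+3) * 100 ≈ 66.7
    ```

    A Dice score of 100 means perfect overlap; the 85-96% values in the abstract indicate close agreement with the manual reference across tissue classes.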

  5. Automatic detection of slight parameter changes associated to complex biomedical signals using multiresolution q-entropy.

    PubMed

    Torres, M E; Añino, M M; Schlotthauer, G

    2003-12-01

    It is well known that, from a dynamical point of view, sudden variations in the physiological parameters which govern certain diseases can cause qualitative changes in the dynamics of the corresponding physiological process. The purpose of this paper is to introduce a technique that allows the automated temporal localization of slight changes in a parameter of the law that governs the nonlinear dynamics of a given signal. This tool inherits from the multiresolution entropies the ability to reveal these changes as statistical variations at each scale. These variations are retained in the corresponding principal component. Appropriately combining these techniques with a statistical change detector yields a complexity change detection algorithm. The relevance of the approach, together with its robustness in the presence of moderate noise, is discussed in numerical simulations, and the automatic detector is applied to real and simulated biological signals.
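
    The multiresolution q-entropy estimator itself is not detailed in this abstract; as a rough, assumption-laden illustration of the general idea (entropy tracked across temporal scales), one can coarse-grain a signal and compute a histogram-based Shannon entropy at each scale:

```python
import math

def coarse_grain(signal, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def shannon_entropy(signal, bins=8):
    """Histogram-based Shannon entropy (bits) of a 1-D signal."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / bins or 1.0  # constant signal: one bin, zero entropy
    counts = [0] * bins
    for x in signal:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    n = len(signal)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def multiresolution_entropy(signal, scales=(1, 2, 4, 8)):
    """Entropy of the signal at several temporal resolutions; a shift in
    these values over time would flag a change in the governing law."""
    return [shannon_entropy(coarse_grain(signal, s)) for s in scales]
```

This is not the authors' q-entropy/principal-component pipeline, only the coarse idea of scale-wise entropy it builds on.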

  6. Automatic single questionnaire intensity (SQI, EMS98 scale) estimation using ranking models built on the existing BCSF database

    NASA Astrophysics Data System (ADS)

    Schlupp, A.; Sira, C.; Schmitt, K.; Schaming, M.

    2013-12-01

    In charge of intensity estimations in France, BCSF has collected and manually analyzed more than 47,000 online individual macroseismic questionnaires since 2000, up to intensity VI. These macroseismic data allow us to estimate one SQI value (Single Questionnaire Intensity) for each form, following the EMS98 scale. The reliability of the automatic intensity estimation is important, as it is used today for automatic shakemap communication and crisis management. Today, the automatic intensity estimation at BCSF is based on the direct use of thumbnails selected from a menu by the witnesses. Each thumbnail corresponds to an EMS98 intensity value, allowing us to quickly issue a communal intensity map by averaging the SQIs for each city. Afterwards, an expert manually analyzes each form to determine a definitive SQI. This work is time consuming and no longer suitable given the increasing number of testimonies at BCSF; expert analysis can, however, take incoherent answers into account. We tested several automatic methods (USGS algorithm, correlation coefficient, thumbnails) (Sira et al. 2013, IASPEI) and compared them with 'expert' SQIs. These methods gave moderate scores (50 to 60% of SQIs correctly determined, and a further 35 to 40% within plus or minus one intensity degree). The best fit was observed with the thumbnails. Here, we present new approaches based on three statistical ranking methods: 1) a multinomial logistic regression model, 2) discriminant analysis (DISQUAL) and 3) support vector machines (SVMs). The first two are standard methods, while the third is more recent. These methods can be applied because the BCSF database already holds more than 47,000 forms and because their questions and answers are well suited to statistical analysis. The ranking models can then be used as an automatic method constrained by expert analysis.
    The performance of the automatic methods and the reliability of the estimated SQI can be evaluated thanks to the fact that each definitive BCSF SQI is determined by expert analysis. We compare the SQIs obtained by these methods on our database and discuss the coherency and variations between the automatic and manual processes. These methods achieve high scores, with up to 85% of the forms correctly classified and most of the remaining forms classified with a shift of only one intensity degree. This allows us to use the ranking methods as the best automatic methods for fast SQI estimation and for producing fast shakemaps. The next step in improving the use of these methods will be to identify explanations for the forms not classified at the correct value, and a way to select the few remaining forms that should be analyzed by an expert. Note that beyond intensity VI, online questionnaires are insufficient and a field survey is indispensable to estimate intensity. For such surveys in France, BCSF leads a macroseismic intervention group (GIM).
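
    The evaluation criterion described above (fraction of forms classified exactly, and within plus or minus one intensity degree) can be sketched as follows; the expert and automatic SQI values here are hypothetical:

```python
def score_sqi(predicted, expert):
    """Compare automatic SQI estimates against expert-assigned values.

    Returns the fraction exactly matched and the fraction within plus or
    minus one intensity degree (the tolerance discussed in the abstract).
    """
    n = len(expert)
    exact = sum(p == e for p, e in zip(predicted, expert)) / n
    within_one = sum(abs(p - e) <= 1 for p, e in zip(predicted, expert)) / n
    return exact, within_one

# Hypothetical forms: expert SQIs vs. an automatic ranking model's output
expert = [2, 3, 3, 4, 5, 2, 3, 4]
auto   = [2, 3, 4, 4, 5, 2, 2, 4]
exact, within = score_sqi(auto, expert)
print(f"{exact:.0%} exact, {within:.0%} within one degree")  # 75% exact, 100% within one degree
```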

  7. The neural correlates of implicit self-relevant processing in low self-esteem: an ERP study.

    PubMed

    Yang, Juan; Guan, Lili; Dedovic, Katarina; Qi, Mingming; Zhang, Qinglin

    2012-08-30

    Previous neuroimaging studies have shown that implicit and explicit processing of self-relevant (schematic) material elicit activity in many of the same brain regions. Electrophysiological studies on the neural processing of explicit self-relevant cues have generally supported the view that the P300 is an index of attention to self-relevant stimuli; however, no study to date has investigated the temporal course of implicit self-relevant processing. The current study investigates the time course of implicit self-processing by comparing the processing of self-relevant with non-self-relevant words while subjects make a judgment about the color of the words in an implicit attention task. Sixteen low self-esteem participants were examined using event-related potential (ERP) technology. We hypothesized that this implicit attention task would involve the P2 component rather than the P300 component. Indeed, the P2 component has been associated with perceptual analysis and attentional allocation and may be more likely to occur in unconscious conditions such as this task. Results showed that the latency of the P2 component, which indexes the time required for perceptual analysis, was more prolonged when processing self-relevant words than when processing non-self-relevant words. Our results suggest that the judgment of the color of the word interfered with the automatic processing of self-relevant information and resulted in less efficient processing of self-relevant words. Together with previous ERP studies examining the processing of explicit self-relevant cues, these findings suggest that the explicit and implicit processing of self-relevant information do not elicit the same ERP components. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Structuring Legacy Pathology Reports by openEHR Archetypes to Enable Semantic Querying.

    PubMed

    Kropf, Stefan; Krücken, Peter; Mueller, Wolf; Denecke, Kerstin

    2017-05-18

    Clinical information is often stored as free text, e.g. in discharge summaries or pathology reports. These documents are semi-structured using section headers, numbered lists, items and classification strings. However, it is still challenging to retrieve relevant documents, since keyword searches applied to complete unstructured documents result in many false positive retrieval results. We concentrate on the processing of pathology reports as an example of unstructured clinical documents. The objective is to transform reports semi-automatically into an information structure that enables improved access to and retrieval of relevant data. The data is expected to be stored in a standardized, structured way to make it accessible for queries that are applied to specific sections of a document (section-sensitive queries) and for information reuse. Our processing pipeline comprises information modelling, section boundary detection and section-sensitive queries. To enable a focused search in unstructured data, documents are automatically structured and transformed into a patient information model specified through openEHR archetypes. The resulting XML-based pathology electronic health records (PEHRs) are queried by XQuery and visualised by XSLT in HTML. Pathology reports (PRs) can be reliably structured into sections by a keyword-based approach. The information modelling using openEHR saves time in the modelling process, since many archetypes can be reused. The resulting standardized, structured PEHRs allow access to relevant data by retrieving data matching user queries. Mapping unstructured reports into a standardized information model is a practical solution for better access to data. Archetype-based XML enables section-sensitive retrieval and visualisation by well-established XML techniques. Focusing the retrieval on particular sections has the potential of saving retrieval time and improving the accuracy of the retrieval.
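
    The keyword-based section boundary detection can be sketched roughly as below; the header keywords are illustrative assumptions, not the paper's actual list:

```python
import re

# Hypothetical header keywords for a pathology report; the paper's actual
# keyword list is not given, so these are illustrative only.
SECTION_KEYWORDS = ["clinical information", "macroscopy", "microscopy", "diagnosis"]

def split_into_sections(report):
    """Keyword-based section boundary detection: scan line by line and start
    a new section whenever a line begins with a known header keyword."""
    pattern = re.compile(r"^(%s)\b[:.]?\s*" % "|".join(SECTION_KEYWORDS),
                         re.IGNORECASE)
    sections = {"preamble": []}
    current = "preamble"
    for line in report.splitlines():
        match = pattern.match(line.strip())
        if match:
            current = match.group(1).lower()
            rest = line.strip()[match.end():]
            sections[current] = [rest] if rest else []
        else:
            sections[current].append(line.strip())
    return {k: " ".join(v).strip() for k, v in sections.items()}

report = """Clinical information: suspected carcinoma.
Macroscopy: specimen 3 x 2 cm.
Microscopy: atypical cells present.
Diagnosis: invasive carcinoma."""
print(split_into_sections(report)["diagnosis"])  # invasive carcinoma.
```

A section-sensitive query then amounts to a lookup on the resulting dictionary rather than a search over the whole document.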

  9. FragIdent--automatic identification and characterisation of cDNA-fragments.

    PubMed

    Seelow, Dominik; Goehler, Heike; Hoffmann, Katrin

    2009-03-02

    Many genetic studies and functional assays are based on cDNA fragments. After the generation of cDNA fragments from an mRNA sample, their content is at first unknown and must be assigned by sequencing reactions or hybridisation experiments. Even in characterised libraries, a considerable number of clones are wrongly annotated. Furthermore, mix-ups can happen in the laboratory. It is therefore essential to the relevance of experimental results to confirm or determine the identity of the employed cDNA fragments. However, the manual approach for the characterisation of these fragments using BLAST web interfaces is not suited to larger numbers of sequences, and so far no user-friendly software is publicly available. Here we present the development of FragIdent, an application for the automatic identification of open reading frames (ORFs) within cDNA fragments. The software performs BLAST analyses to identify the genes represented by the sequences and suggests primers to complete the sequencing of the whole insert. Gene-specific information as well as the protein domains encoded by the cDNA fragment are retrieved from Internet-based databases and included in the output. The application features an intuitive graphical interface and is designed for researchers without any bioinformatics skills. It is suited for projects comprising up to several hundred different clones. We used FragIdent to identify 84 cDNA clones from a yeast two-hybrid experiment. Furthermore, we identified 131 protein domains within our analysed clones. The source code is freely available from our homepage at http://compbio.charite.de/genetik/FragIdent/.

  10. Neural Network Classification of Receiver Functions as a Step Towards Automatic Crustal Parameter Determination

    NASA Astrophysics Data System (ADS)

    Jemberie, A.; Dugda, M. T.; Reusch, D.; Nyblade, A.

    2006-12-01

    Neural networks are mathematical decision-making tools which, if trained properly, can automatically (and objectively) perform jobs that normally require particular expertise and/or tedious repetition. Here we explore two techniques from the field of artificial neural networks (ANNs) that seek to reduce the time requirements and increase the objectivity of quality control (QC) and event identification (EI) on seismic datasets. We apply multilayer feed-forward (FF) artificial neural networks and self-organizing maps (SOMs) in combination with Hk stacking of receiver functions to test the usefulness of automatic classification of receiver functions for crustal parameter determination. Feed-forward ANNs (FFNNs) are a supervised classification tool, while self-organizing maps (SOMs) provide unsupervised classification of large, complex geophysical data sets into a fixed number of distinct generalized patterns or modes. Hk stacking is a methodology for stacking receiver functions based on the relative arrival times of the P-to-S converted phase and its next two reverberations, in order to determine crustal thickness (H) and the Vp-to-Vs ratio (k). We use receiver functions from teleseismic events recorded by the 2000-2002 Ethiopia Broadband Seismic Experiment. Preliminary results of applying an FFNN and Hk stacking of receiver functions for automatic receiver function classification, as a step towards automatic crustal parameter determination, look encouraging. After training, the FFNN could separate the best receiver functions from bad ones with a success rate of about 75 to 95%. Applying Hk stacking to the receiver functions classified by this FFNN as the best, we obtained a crustal thickness and Vp/Vs ratio of 31±4 km and 1.75±0.05, respectively, for the crust beneath station ARBA in the Main Ethiopian Rift.
    For comparison, we applied Hk stacking to the receiver functions that we ourselves classified as the best set and found a crustal thickness and Vp/Vs ratio of 31±2 km and 1.75±0.02, respectively.
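
    The abstract gives no details of the FFNN architecture; as a toy stand-in, a single logistic unit trained by gradient descent on two hypothetical summary features per receiver function illustrates the supervised good/bad classification idea:

```python
import math
import random

def train_logistic(features, labels, lr=0.5, epochs=500, seed=1):
    """Minimal single-unit logistic classifier trained by gradient descent;
    a toy stand-in for the multilayer FFNN used in the study."""
    random.seed(seed)
    n = len(features[0])
    w = [random.uniform(-0.1, 0.1) for _ in range(n)]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Hypothetical summary features per receiver function:
# (signal-to-noise ratio, normalized first-arrival amplitude)
features = [(5.0, 0.9), (4.2, 0.8), (4.8, 0.85), (1.0, 0.2), (0.8, 0.3), (1.2, 0.1)]
labels   = [1, 1, 1, 0, 0, 0]   # 1 = usable trace, 0 = rejected
w, b = train_logistic(features, labels)
print([predict(w, b, x) for x in features])  # ideally reproduces the labels
```

The features, labels and network size here are all invented; only the general workflow (train on hand-labelled traces, then classify automatically) mirrors the study.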

  11. CT-based patient modeling for head and neck hyperthermia treatment planning: manual versus automatic normal-tissue-segmentation.

    PubMed

    Verhaart, René F; Fortunati, Valerio; Verduijn, Gerda M; van Walsum, Theo; Veenland, Jifke F; Paulides, Margarethus M

    2014-04-01

    Clinical trials have shown that hyperthermia, as an adjuvant to radiotherapy and/or chemotherapy, improves the treatment of patients with locally advanced or recurrent head and neck (H&N) carcinoma. Hyperthermia treatment planning (HTP) guided H&N hyperthermia is being investigated, which requires patient-specific 3D patient models derived from computed tomography (CT) images. To decide whether a recently developed automatic segmentation algorithm can be introduced in the clinic, we compared the impact of manual and automatic normal-tissue segmentation variations on HTP quality. CT images of seven patients were segmented automatically and manually by four observers, to study inter-observer and intra-observer geometrical variation. To determine the impact of this variation on HTP quality, HTP was performed using the automatic segmentation and the manual segmentation of each observer, for each patient. This impact was compared to other sources of patient model uncertainty, i.e. varying grid sizes and dielectric tissue properties. Despite geometrical variations, manually and automatically generated 3D patient models resulted in an equal, i.e. 1%, variation in HTP quality. This variation was minor with respect to the total of the other sources of patient model uncertainty, i.e. 11.7%. Automatically generated 3D patient models can be introduced in the clinic for H&N HTP. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  12. Information processing requirements for on-board monitoring of automatic landing

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Karmarkar, J. S.

    1977-01-01

    A systematic procedure is presented for determining the information processing requirements for on-board monitoring of automatic landing systems. The monitoring system detects landing anomalies through the use of appropriate statistical tests. The time required to correct aircraft perturbations is determined from covariance analyses using a sequence of suitable aircraft/autoland/pilot models. The covariance results are used to establish landing safety and a fault recovery operating envelope via an event outcome tree. This procedure is demonstrated with examples using the NASA Terminal Configured Vehicle (B-737 aircraft). The procedure can also be used to define decision height, assess monitoring implementation requirements, and evaluate alternate autoland configurations.
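
    The statistical tests used by the monitor are not specified in this abstract; a toy sliding-window test on landing residuals illustrates the general anomaly-detection idea (the window size, threshold and data below are assumptions):

```python
import statistics

def detect_anomaly(residuals, window=10, threshold=3.0):
    """Flag the first sample whose sliding-window mean deviates by more than
    `threshold` standard errors from a nominal (calibration) segment. A toy
    stand-in for the statistical tests used by the on-board monitor."""
    baseline = residuals[:window]
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1e-12
    std_err = sigma / window ** 0.5
    for i in range(window, len(residuals)):
        window_mean = statistics.mean(residuals[i - window + 1:i + 1])
        if abs(window_mean - mu) / std_err > threshold:
            return i
    return None

# Nominal sensor noise followed by a sustained glide-path offset
nominal = [0.1, -0.2, 0.05, -0.1, 0.15, -0.05, 0.1, 0.0, -0.1, 0.05]
faulty = nominal + [0.0] * 5 + [1.5] * 10
print(detect_anomaly(faulty))  # 15 -- the first window containing the offset
```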

  13. Prioritization of brain MRI volumes using medical image perception model and tumor region segmentation.

    PubMed

    Mehmood, Irfan; Ejaz, Naveed; Sajjad, Muhammad; Baik, Sung Wook

    2013-10-01

    The objective of the present study is to explore prioritization methods in diagnostic imaging modalities to automatically determine the contents of medical images. In this paper, we propose an efficient prioritization of brain MRI. First, the visual perception of the radiologists is adapted to identify salient regions. Then this saliency information is used as an automatic label for accurate segmentation of brain lesion to determine the scientific value of that image. The qualitative and quantitative results prove that the rankings generated by the proposed method are closer to the rankings created by radiologists. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Method and apparatus for automatically generating airfoil performance tables

    NASA Technical Reports Server (NTRS)

    van Dam, Cornelis P. (Inventor); Mayda, Edward A. (Inventor); Strawn, Roger Clayton (Inventor)

    2006-01-01

    One embodiment of the present invention provides a system that facilitates automatically generating a performance table for an object, wherein the object is subject to fluid flow. The system operates by first receiving a description of the object and testing parameters for the object. The system executes a flow solver using the testing parameters and the description of the object to produce an output. Next, the system determines if the output of the flow solver indicates negative density or pressure. If not, the system analyzes the output to determine if the output is converging. If converging, the system writes the output to the performance table for the object.
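
    The control flow described above (run the solver, abort on negative density or pressure, accept on convergence) can be sketched as follows; `run_flow_solver_step` is a placeholder, not the patented solver:

```python
def run_flow_solver_step(state):
    """Placeholder solver step: damps the residual geometrically."""
    state["residual"] *= 0.5
    return state

def generate_table_entry(description, params, max_steps=50, tol=1e-6):
    """Drive the solver, aborting on unphysical output and accepting on
    convergence, following the checks named in the abstract."""
    state = {"residual": 1.0, "density": 1.0, "pressure": 1.0}
    for _ in range(max_steps):
        state = run_flow_solver_step(state)
        if state["density"] < 0 or state["pressure"] < 0:
            return None          # unphysical output: reject this run
        if state["residual"] < tol:
            return {"object": description, "params": params,
                    "residual": state["residual"]}
    return None                  # did not converge within budget

entry = generate_table_entry("airfoil-A", {"alpha_deg": 4.0})
print(entry is not None)  # True: the toy residual halves every step
```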

  15. Virtual Instrument for Determining Rate Constant of Second-Order Reaction by pX Based on LabVIEW 8.0.

    PubMed

    Meng, Hu; Li, Jiang-Yuan; Tang, Yong-Huai

    2009-01-01

    A virtual instrument system for an ion analyzer, based on LabVIEW 8.0, which can measure and analyze ion concentrations in solution, has been developed; it comprises a homemade conditioning circuit, a data acquisition board, and a computer. It can calibrate slope, temperature, and positioning automatically. When applied to determining the reaction rate constant by pX, it achieved live acquisition, real-time display, automatic processing of test data, generation of result reports, and other functions. This method greatly simplifies the experimental operation, avoids the complicated procedures and personal error of manual data processing, and improves the accuracy and repeatability of the experimental results.
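
    Although the abstract does not spell out the data processing, the standard route to a second-order rate constant is the integrated rate law 1/[A] = 1/[A]0 + kt, whose slope in a linear fit of 1/[A] against t gives k. A sketch with synthetic data:

```python
def second_order_rate_constant(times, concentrations):
    """Estimate k for a second-order reaction from the linearized law
    1/[A] = 1/[A]0 + k*t, using an ordinary least-squares slope."""
    xs = times
    ys = [1.0 / c for c in concentrations]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope

# Synthetic data with [A]0 = 0.10 M and k = 0.25 L mol^-1 s^-1
k_true, a0 = 0.25, 0.10
times = [0, 10, 20, 30, 40]
conc = [1.0 / (1.0 / a0 + k_true * t) for t in times]
print(round(second_order_rate_constant(times, conc), 6))  # 0.25
```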

  16. Automatic protein structure solution from weak X-ray data

    NASA Astrophysics Data System (ADS)

    Skubák, Pavol; Pannu, Navraj S.

    2013-11-01

    Determining new protein structures from X-ray diffraction data at low resolution or with a weak anomalous signal is a difficult and often an impossible task. Here we propose a multivariate algorithm that simultaneously combines the structure determination steps. In tests on over 140 real data sets from the protein data bank, we show that this combined approach can automatically build models where current algorithms fail, including an anisotropically diffracting 3.88 Å RNA polymerase II data set. The method seamlessly automates the process, is ideal for non-specialists and provides a mathematical framework for successfully combining various sources of information in image processing.

  17. Automatic grid azimuth by hour angle of the sun, a star or a planet using an electronic theodolite Kern E2

    NASA Astrophysics Data System (ADS)

    Solaric, Nikola

    1991-03-01

    The paper describes a procedure for the automatic determination of the grid azimuth of an object on the Earth's surface by the hour angle of a celestial object (the sun, a star, or a planet), using the electronic theodolite Kern E2. The observation procedure is simple because the electronic calculator directs the procedure, and the degree of accuracy is determined immediately. With this method, the external rms error of a single set is approximately two times smaller than with the altitude method. The paper includes a flowchart of the program.
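
    The paper's computation is not reproduced here, but the hour-angle method rests on a standard spherical-astronomy formula for azimuth; a sketch (the sign conventions are my assumption: hour angle positive westward, azimuth clockwise from North):

```python
import math

def azimuth_from_hour_angle(hour_angle_deg, declination_deg, latitude_deg):
    """Astronomical azimuth (degrees from North, clockwise through East) of a
    celestial object from its hour angle, declination and the site latitude."""
    H = math.radians(hour_angle_deg)
    dec = math.radians(declination_deg)
    lat = math.radians(latitude_deg)
    y = -math.cos(dec) * math.sin(H)
    x = math.sin(dec) * math.cos(lat) - math.cos(dec) * math.sin(lat) * math.cos(H)
    return math.degrees(math.atan2(y, x)) % 360.0

# An object on the celestial equator at hour angle 6 h (90 deg) sets due west:
print(azimuth_from_hour_angle(90.0, 0.0, 45.0))  # ~270 deg (due west)
```

The grid azimuth of the terrestrial target then follows from this celestial azimuth and the horizontal angle measured between the celestial object and the target.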

  18. SCAN+

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kenneth Krebs, John Svoboda

    2009-11-01

    SCAN+ is a software application specifically designed to control the positioning of a gamma spectrometer by a two-dimensional translation system above spent fuel bundles located in a sealed spent fuel cask. The gamma spectrometer collects gamma spectrum information for the purpose of spent fuel cask fuel loading verification. SCAN+ performs manual and automatic gamma spectrometer positioning functions as well as controlling the gamma spectrometer's data acquisition functions. Cask configuration files are used to determine the positions of spent fuel bundles. Cask scanning files are used to determine the desired scan paths for scanning a spent fuel cask, allowing for automatic unattended cask scanning that may take several hours.

  19. An accurate laser radiometer for determining visible exposure times.

    PubMed

    Royston, D D

    1985-01-01

    A laser light radiometer has been developed for the Electro-Optics Branch of the Center for Devices and Radiological Health (CDRH). The radiometer measures direct laser radiation emitted in the visible spectrum. Based upon this measurement, the instrument's microprocessor automatically determines the exposure duration at which the measured laser radiation would exceed either the class I accessible emission limits of the Federal Performance Standard for laser products or the maximum permissible exposure limits of laser user safety standards. The instrument also features automatic background level compensation, pulse measurement capability, and self-diagnosis. Measurement of forward surface illumination levels preceding HpD photoradiation therapy is possible.

  20. 2D/3D fetal cardiac dataset segmentation using a deformable model.

    PubMed

    Dindoyal, Irving; Lambrou, Tryphon; Deng, Jing; Todd-Pokropek, Andrew

    2011-07-01

    To segment the fetal heart in order to facilitate the 3D assessment of cardiac function and structure. Ultrasound acquisition typically results in drop-out artifacts of the chamber walls. The authors outline a level set deformable model to automatically delineate the small fetal cardiac chambers. The level set is penalized from growing into an adjacent cardiac compartment using a novel collision detection term. The region-based model allows simultaneous segmentation of all four cardiac chambers from a user-defined seed point placed in each chamber. The segmented boundaries are automatically penalized from intersecting at walls with signal dropout. Root mean square errors of the perpendicular distances between the algorithm's delineation and manual tracings are within 2 mm, which is less than 10% of the length of a typical fetal heart. The ejection fractions were determined from the 3D datasets. We validate the algorithm using a physical phantom and obtain volumes that are comparable to the physically determined values. The algorithm segments volumes with an error of within 13%, as determined using the physical phantom. Our original work in fetal cardiac segmentation compares automatic and manual tracings to a physical phantom and also measures inter-observer variation.

  1. Realtime Knowledge Management (RKM): From an International Space Station (ISS) Point of View

    NASA Technical Reports Server (NTRS)

    Robinson, Peter I.; McDermott, William; Alena, Richard L.

    2004-01-01

    We are developing automated methods to provide realtime access to spacecraft domain knowledge relevant to a spacecraft's current operational state. The method is based upon analyzing state-transition signatures in the telemetry stream. A key insight is that documentation relevant to a specific failure mode or operational state is related to the structure and function of spacecraft systems. This means that diagnostic dependency and state models can provide a roadmap for effective documentation navigation and presentation. Diagnostic models consume the telemetry and derive a high-level state description of the spacecraft. Each potential spacecraft state description is matched against the predictions of models that were developed from information found in the pages and sections of the relevant International Space Station (ISS) documentation and reference materials. By annotating each model fragment with the domain knowledge sources from which it was derived, we can develop a system that automatically selects those documents representing the domain knowledge encapsulated by the models that compute the current spacecraft state. In this manner, when the spacecraft state changes, the relevant documentation context and presentation will also change.

  2. Determination of vanadium(V) by direct automatic potentiometric titration with EDTA using a chemically modified electrode as a potentiometric sensor.

    PubMed

    Quintar, S E; Santagata, J P; Cortinez, V A

    2005-10-15

    A chemically modified electrode (CME) was prepared and studied as a potentiometric sensor for end-point detection in the automatic titration of vanadium(V) with EDTA. The CME was constructed with a paste prepared by mixing spectral-grade graphite powder, Nujol oil and N-2-naphthoyl-N-p-tolylhydroxamic acid (NTHA). Buffer systems, pH effects and the concentration range were studied. Interfering ions were separated by applying a liquid-liquid extraction procedure. The CME did not require any special conditioning before use. The electrode was constructed from very inexpensive materials and was easily made. It could be used continuously for at least two months without replacing the paste. Automatic potentiometric titration curves were obtained for V(V) in the range 5 x 10(-5) to 2 x 10(-3) M with acceptable accuracy and precision. The developed method was applied to V(V) determination in alloys for hip prostheses.

  3. Characteristics of an aerosol photometer while automatically controlling chamber dilution-air flow rate.

    PubMed

    O'Shaughnessy, P T; Hemenway, D R

    2000-10-01

    Trials were conducted to determine those factors that affect the accuracy of a direct-reading aerosol photometer when automatically controlling airflow rate within an exposure chamber to regulate airborne dust concentrations. Photometer response was affected by a shift in the aerosol size distribution caused by changes in chamber flow rate. In addition to a dilution effect, flow rate also determined the relative amount of aerosol lost to sedimentation within the chamber. Additional calculations were added to a computer control algorithm to compensate for these effects when attempting to automatically regulate flow based on a proportional-integral-derivative (PID) feedback control algorithm. A comparison between PID-controlled trials and those performed with a constant generator output rate and dilution-air flow rate demonstrated that there was no significant decrease in photometer accuracy despite the many changes in flow rate produced when using PID control. Likewise, the PID-controlled trials produced chamber aerosol concentrations within 1% of a desired level.
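
    The PID feedback law named above is a textbook control scheme; a minimal sketch with an invented first-order chamber model (the gains and plant are illustrative, not the paper's values):

```python
class PID:
    """Textbook proportional-integral-derivative controller, the feedback
    law named in the abstract (gains here are illustrative)."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy chamber model: concentration relaxes toward the commanded flow input.
pid = PID(kp=0.8, ki=0.4, kd=0.05, setpoint=10.0)  # target concentration (mg/m^3, invented)
conc = 0.0
for _ in range(200):
    command = pid.update(conc, dt=0.1)
    conc += 0.1 * (command - 0.2 * conc)   # crude first-order plant
print(abs(conc - 10.0) < 0.5)  # the loop settles near the setpoint
```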

  4. Automatic energy calibration algorithm for an RBS setup

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva, Tiago F.; Moro, Marcos V.; Added, Nemitala

    2013-05-06

    This work describes a computer algorithm for the automatic extraction of energy calibration parameters from a Rutherford Back-Scattering Spectroscopy (RBS) spectrum. Parameters such as the electronic gain, electronic offset and detection resolution (FWHM) of an RBS setup are usually determined using a standard sample. In our case, the standard sample comprises a multi-elemental thin film made of a mixture of Ti-Al-Ta that is analyzed at the beginning of each run at a defined beam energy. A computer program has been developed to extract the calibration parameters automatically from the spectrum of the standard sample. The code evaluates the first derivative of the energy spectrum, locates the trailing edges of the Al, Ti and Ta peaks and fits a first-order polynomial for the energy-channel relation. The detection resolution is determined by fitting the convolution of a pre-calculated theoretical spectrum. To test the code, two years of data have been analyzed and the results compared with the manual calculations done previously, obtaining good agreement.
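
    The described pipeline (first derivative, trailing-edge location, first-order polynomial fit) can be sketched as follows; the synthetic spectrum and the edge energies are made up, not real Al/Ti/Ta kinematics:

```python
def first_derivative(spectrum):
    """Central-difference derivative of a channel spectrum."""
    return [(spectrum[i + 1] - spectrum[i - 1]) / 2.0
            for i in range(1, len(spectrum) - 1)]

def trailing_edge(spectrum, lo, hi):
    """Channel of the steepest negative slope inside a search window --
    a simple way to locate an element's high-energy (trailing) edge."""
    deriv = first_derivative(spectrum)
    window = range(max(1, lo), min(len(deriv) + 1, hi))
    return min(window, key=lambda i: deriv[i - 1])

def linear_calibration(channels, energies):
    """First-order polynomial fit: energy = gain * channel + offset."""
    n = len(channels)
    mx = sum(channels) / n
    my = sum(energies) / n
    gain = (sum((c - mx) * (e - my) for c, e in zip(channels, energies))
            / sum((c - mx) ** 2 for c in channels))
    return gain, my - gain * mx

# Synthetic RBS-like spectrum: three plateaus with sharp trailing edges near
# channels 100, 200 and 300 (edge energies below are invented).
spectrum = [30.0] * 100 + [20.0] * 100 + [10.0] * 100 + [0.0] * 100
edges = [trailing_edge(spectrum, 50, 150),
         trailing_edge(spectrum, 150, 250),
         trailing_edge(spectrum, 250, 350)]
gain, offset = linear_calibration(edges, [0.55, 1.10, 1.65])  # MeV, invented
print(edges, round(gain, 6), round(offset, 6))
```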

  5. Automatic Contour Extraction of Facial Organs for Frontal Facial Images with Various Facial Expressions

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hiroshi; Suzuki, Seiji; Takahashi, Hisanori; Tange, Akira; Kikuchi, Kohki

    This study deals with a method to realize automatic contour extraction of facial features such as the eyebrows, eyes and mouth from time-series frontal face images with various facial expressions. Because Snakes, one of the best-known contour extraction methods, has several disadvantages, we propose a new method that overcomes these issues. We define an elastic contour model in order to hold the contour shape, and determine the elastic energy acquired from the amount of deformation of the elastic contour model. We also utilize the image energy obtained from brightness differences at the control points of the elastic contour model. Applying the dynamic programming method, we determine the contour position where the total of the elastic energy and the image energy becomes minimum. Employing frontal face image sequences from 20 subjects, sampled at 1/30 s and changing from neutral to one of six typical facial expressions, we evaluated our method and found that it enables high-accuracy automatic contour extraction of facial features.

  6. Application of a Novel Semi-Automatic Technique for Determining the Bilateral Symmetry Plane of the Facial Skeleton of Normal Adult Males.

    PubMed

    Roumeliotis, Grayson; Willing, Ryan; Neuert, Mark; Ahluwalia, Romy; Jenkyn, Thomas; Yazdani, Arjang

    2015-09-01

    The accurate assessment of symmetry in the craniofacial skeleton is important for cosmetic and reconstructive craniofacial surgery. Although there have been several published attempts to develop an accurate system for determining the correct plane of symmetry, all are inaccurate and time-consuming. Here, the authors applied a novel semi-automatic method for the calculation of craniofacial symmetry, based on principal component analysis and iterative corrective point computation, to a large sample of normal adult male facial computerized tomography scans obtained clinically (n = 32). The authors hypothesized that this method would generate planes of symmetry that would result in less error, when one side of the face was compared to the other, than a symmetry plane defined by cephalometric landmarks. When a three-dimensional model of one side of the face was reflected across the semi-automatic plane of symmetry, there was less error than when it was reflected across the cephalometric plane. The semi-automatic plane was also more accurate when the locations of bilateral cephalometric landmarks (e.g., the frontozygomatic sutures) were compared across the face. The authors conclude that this method allows for accurate and fast measurement of craniofacial symmetry. This has important implications for studying the development of the facial skeleton, and clinical application for reconstruction.
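
    Comparing candidate symmetry planes reduces to reflecting one side of the face across a plane and measuring the residual distance to the other side; a minimal sketch with hypothetical landmark coordinates:

```python
def reflect_point(point, plane_point, plane_normal):
    """Reflect a 3-D point across a plane given by a point and a unit normal."""
    d = sum((p - q) * n for p, q, n in zip(point, plane_point, plane_normal))
    return tuple(p - 2.0 * d * n for p, n in zip(point, plane_normal))

def asymmetry_error(landmarks_left, landmarks_right, plane_point, plane_normal):
    """Mean distance between right-side landmarks and the mirrored left-side
    landmarks: the kind of error measure used to compare symmetry planes."""
    total = 0.0
    for l, r in zip(landmarks_left, landmarks_right):
        m = reflect_point(l, plane_point, plane_normal)
        total += sum((a - b) ** 2 for a, b in zip(m, r)) ** 0.5
    return total / len(landmarks_left)

# Midsagittal plane x = 0; hypothetical paired landmarks (e.g. sutures)
left  = [(-40.0, 10.0, 5.0), (-35.0, -20.0, 8.0)]
right = [(41.0, 10.0, 5.0), (35.0, -19.0, 8.0)]
print(asymmetry_error(left, right, (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # 1.0
```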

  7. Automated pharmaceutical tablet coating layer evaluation of optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Markl, Daniel; Hannesschläger, Günther; Sacher, Stephan; Leitner, Michael; Khinast, Johannes G.; Buchsbaum, Andreas

    2015-03-01

    Film coating of pharmaceutical tablets is often applied to influence the drug release behaviour. Coating characteristics such as thickness and uniformity are critical quality parameters which need to be precisely controlled. Optical coherence tomography (OCT) shows high potential not only for off-line quality control of film-coated tablets but also for in-line monitoring of coating processes. However, an in-line quality control tool must be able to perform coating thickness measurements automatically and in real time. This study proposes an automatic thickness evaluation algorithm for bi-convex tablets, which provides about 1000 thickness measurements within 1 s. Besides segmenting the coating layer, optical distortions due to refraction of the beam at the air/coating interface are corrected. Moreover, during in-line monitoring the tablets may be in oblique orientation, which needs to be considered in the algorithm design. Experiments were conducted in which the tablet was rotated to specified angles. Manual and automatic thickness measurements were compared for varying coating thicknesses, angles of rotation, and beam displacements (i.e. lateral displacement between successive depth scans). The automatic thickness determination algorithm provides highly accurate results up to an angle of rotation of 30°. The computation time was reduced to 0.53 s for 700 thickness measurements by introducing feasibility constraints in the algorithm.
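
    Two of the corrections mentioned are standard optics: OCT reports optical path length, which must be divided by the coating's group refractive index to obtain physical thickness, and the beam bends at the air/coating interface according to Snell's law. A sketch with assumed values:

```python
import math

def physical_thickness(optical_path_um, group_index):
    """Convert an OCT optical path length to physical coating thickness."""
    return optical_path_um / group_index

def refracted_angle(incidence_deg, n_air, n_coating):
    """Snell's law at the air/coating interface, the basis for correcting
    the beam path on a tilted or curved tablet surface."""
    s = n_air / n_coating * math.sin(math.radians(incidence_deg))
    return math.degrees(math.asin(s))

# Hypothetical values: 75 um optical path, coating group index 1.5
print(physical_thickness(75.0, 1.5))            # 50.0 um physical thickness
print(round(refracted_angle(30.0, 1.0, 1.5), 2))  # 19.47 deg inside the coating
```

This is only the textbook geometry; the paper's algorithm additionally handles curved interfaces and oblique tablet orientation, which are not modelled here.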

  8. Relevance of Trust Marks and CE Labels in German-Language Store Descriptions of Health Apps: Analysis

    PubMed Central

    Hillebrand, Uta; von Jan, Ute

    2018-01-01

    Background In addition to mandatory CE marking (“CE” representing Conformité Européenne, with the CE marking being a symbol of free marketability in the European Economic Area) for medical devices, there are various seals, initiatives, action groups, etc, in the health app context. However, whether manufacturers use them to distinguish their apps and attach relevance to them is unclear. Objective The objective was to take a snapshot of quality seals, regulatory marks, and other orientation aids available on the German app market and to determine whether manufacturers deem such labels relevant enough to apply them to their apps, namely as reflected by mentions in app description texts in a typical app store (ie, Apple’s App Store). Methods A full survey of the metadata of 103,046 apps from Apple’s German App Store in the Medicine and Health & Fitness categories was carried out. For apps with German-language store descriptions (N=8767), these were automatically searched for the occurrence of relevant keywords and validated manually (N=41). In addition, the websites of various app seal providers were checked for assigned seals. Results Few manufacturers referenced seals in the descriptions (5/41), although this would have been expected more often based on the seals we were able to identify from the seal providers’ Web pages, while 34 of 41 mentioned CE status in the descriptions. Two apps referenced an app directory curated by experts; however, this is not an alternative to CE marks and seals of approval. Conclusions Currently, quality seals seem to be irrelevant for manufacturers. In line with regulatory requirements, mentions of medical device status are more frequent; however, neither characteristic is effective for identifying high-quality apps. To improve this situation, a possibly legally obligatory, standardized reporting system should be implemented. PMID:29695374

  9. Relevance of Trust Marks and CE Labels in German-Language Store Descriptions of Health Apps: Analysis.

    PubMed

    Albrecht, Urs-Vito; Hillebrand, Uta; von Jan, Ute

    2018-04-25

    In addition to mandatory CE marking ("CE" representing Conformité Européenne, with the CE marking being a symbol of free marketability in the European Economic Area) for medical devices, there are various seals, initiatives, action groups, etc, in the health app context. However, whether manufacturers use them to distinguish their apps and attach relevance to them is unclear. The objective was to take a snapshot of quality seals, regulatory marks, and other orientation aids available on the German app market and to determine whether manufacturers deem such labels relevant enough to apply them to their apps, namely as reflected by mentions in app description texts in a typical app store (ie, Apple's App Store). A full survey of the metadata of 103,046 apps from Apple's German App Store in the Medicine and Health & Fitness categories was carried out. For apps with German-language store descriptions (N=8767), these were automatically searched for the occurrence of relevant keywords and validated manually (N=41). In addition, the websites of various app seal providers were checked for assigned seals. Few manufacturers referenced seals in the descriptions (5/41), although this would have been expected more often based on the seals we were able to identify from the seal providers' Web pages, while 34 of 41 mentioned CE status in the descriptions. Two apps referenced an app directory curated by experts; however, this is not an alternative to CE marks and seals of approval. Currently, quality seals seem to be irrelevant for manufacturers. In line with regulatory requirements, mentions of medical device status are more frequent; however, neither characteristic is effective for identifying high-quality apps. To improve this situation, a possibly legally obligatory, standardized reporting system should be implemented. ©Urs-Vito Albrecht, Uta Hillebrand, Ute von Jan. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 25.04.2018.
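    The automatic keyword search over store description texts described in the two records above can be sketched as a simple case-insensitive pattern scan. The keyword list below is hypothetical; the study's actual search terms are not given in the abstracts.

```python
import re

# Hypothetical label-related keywords (patterns), for illustration only.
KEYWORDS = [r"\bCE\b", r"Conformit[eé]", r"Siegel", r"seal", r"zertifi\w*"]

def mentions_labels(description):
    """Return the subset of keyword patterns found in one app-store
    description text, case-insensitively. Hits would still need the
    manual validation step the study describes."""
    return [k for k in KEYWORDS if re.search(k, description, re.IGNORECASE)]
```

A pipeline would run this over all N=8767 German-language descriptions and pass the matches on for manual review.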

  10. Comparison of three web-scale discovery services for health sciences research.

    PubMed

    Hanneke, Rosie; O'Brien, Kelly K

    2016-04-01

    The purpose of this study was to investigate the relative effectiveness of three web-scale discovery (WSD) tools in answering health sciences search queries. Simple keyword searches, based on topics from six health sciences disciplines, were run at multiple real-world implementations of EBSCO Discovery Service (EDS), Ex Libris's Primo, and ProQuest's Summon. Each WSD tool was evaluated in its ability to retrieve relevant results and in its coverage of MEDLINE content. All WSD tools returned 50%-60% relevant results. Primo returned a higher number of duplicate results than the other 2 WSD products. Summon results were more relevant when search terms were automatically mapped to controlled vocabulary. EDS indexed the largest number of MEDLINE citations, followed closely by Summon. Additionally, keyword searches in all 3 WSD tools retrieved relevant material that was not found with precision (Medical Subject Headings) searches in MEDLINE. None of the 3 WSD products studied was overwhelmingly more effective in returning relevant results. While it is difficult to place the figure of 50%-60% relevance in context, it implies a strong likelihood that the average user would be able to find satisfactory sources on the first page of search results using a rudimentary keyword search. The discovery of additional relevant material beyond that retrieved from MEDLINE indicates WSD tools' value as a supplement to traditional resources for health sciences researchers.

  11. Comparison of three web-scale discovery services for health sciences research*

    PubMed Central

    Hanneke, Rosie; O'Brien, Kelly K.

    2016-01-01

    Objective The purpose of this study was to investigate the relative effectiveness of three web-scale discovery (WSD) tools in answering health sciences search queries. Methods Simple keyword searches, based on topics from six health sciences disciplines, were run at multiple real-world implementations of EBSCO Discovery Service (EDS), Ex Libris's Primo, and ProQuest's Summon. Each WSD tool was evaluated in its ability to retrieve relevant results and in its coverage of MEDLINE content. Results All WSD tools returned 50%–60% relevant results. Primo returned a higher number of duplicate results than the other 2 WSD products. Summon results were more relevant when search terms were automatically mapped to controlled vocabulary. EDS indexed the largest number of MEDLINE citations, followed closely by Summon. Additionally, keyword searches in all 3 WSD tools retrieved relevant material that was not found with precision (Medical Subject Headings) searches in MEDLINE. Conclusions None of the 3 WSD products studied was overwhelmingly more effective in returning relevant results. While it is difficult to place the figure of 50%–60% relevance in context, it implies a strong likelihood that the average user would be able to find satisfactory sources on the first page of search results using a rudimentary keyword search. The discovery of additional relevant material beyond that retrieved from MEDLINE indicates WSD tools' value as a supplement to traditional resources for health sciences researchers. PMID:27076797

  12. Real-time piloted simulation of fully automatic guidance and control for rotorcraft nap-of-the-earth (NOE) flight following planned profiles

    NASA Technical Reports Server (NTRS)

    Clement, Warren F.; Gorder, Peter J.; Jewell, Wayne F.; Coppenbarger, Richard

    1990-01-01

    Developing a single-pilot all-weather NOE capability requires fully automatic NOE navigation and flight control. Innovative guidance and control concepts are being investigated to (1) organize the onboard computer-based storage and real-time updating of NOE terrain profiles and obstacles; (2) define a class of automatic anticipative pursuit guidance algorithms to follow the vertical, lateral, and longitudinal guidance commands; (3) automate a decision-making process for unexpected obstacle avoidance; and (4) provide several rapid response maneuvers. Acquired knowledge from the sensed environment is correlated with the recorded environment which is then used to determine an appropriate evasive maneuver if a nonconformity is observed. This research effort has been evaluated in both fixed-base and moving-base real-time piloted simulations thereby evaluating pilot acceptance of the automated concepts, supervisory override, manual operation, and reengagement of the automatic system.

  13. Application of the concept of dynamic trim control to automatic landing of carrier aircraft. [utilizing digital feedforward control]

    NASA Technical Reports Server (NTRS)

    Smith, G. A.; Meyer, G.

    1980-01-01

    The results of a simulation study of an alternative design concept for an automatic landing control system are presented. The design concept is the total aircraft flight control system (TAFCOS), an open-loop, feedforward system that commands the proper instantaneous thrust, angle of attack, and roll angle to achieve the forces required to follow the desired trajectory. These dynamic trim conditions are determined by an inversion of the aircraft nonlinear force characteristics. The concept was applied to an A-7E aircraft approaching an aircraft carrier. The implementation details with an airborne digital computer are discussed, and the automatic carrier landing situation is described. Simulation results are presented for a carrier approach with atmospheric disturbances, an approach with no disturbances, and for tailwind and headwind gusts.

  14. Research in interactive scene analysis

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J. M.; Garvey, T. D.; Weyl, S. A.; Wolf, H. C.

    1975-01-01

    An interactive scene interpretation system (ISIS) was developed as a tool for constructing and experimenting with man-machine and automatic scene analysis methods tailored for particular image domains. A recently developed region analysis subsystem based on the paradigm of Brice and Fennema is described. Using this subsystem a series of experiments was conducted to determine good criteria for initially partitioning a scene into atomic regions and for merging these regions into a final partition of the scene along object boundaries. Semantic (problem-dependent) knowledge is essential for complete, correct partitions of complex real-world scenes. An interactive approach to semantic scene segmentation was developed and demonstrated on both landscape and indoor scenes. This approach provides a reasonable methodology for segmenting scenes that cannot be processed completely automatically, and is a promising basis for a future automatic system. A program is described that can automatically generate strategies for finding specific objects in a scene based on manually designated pictorial examples.

  15. Semi-automatic volume measurement for orbital fat and total extraocular muscles based on Cube FSE-flex sequence in patients with thyroid-associated ophthalmopathy.

    PubMed

    Tang, X; Liu, H; Chen, L; Wang, Q; Luo, B; Xiang, N; He, Y; Zhu, W; Zhang, J

    2018-05-24

    To investigate the accuracy of two semi-automatic segmentation measurements based on magnetic resonance imaging (MRI) three-dimensional (3D) Cube fast spin echo (FSE)-flex sequence in phantoms, and to evaluate the feasibility of determining the volumetric alterations of orbital fat (OF) and total extraocular muscles (TEM) in patients with thyroid-associated ophthalmopathy (TAO) by semi-automatic segmentation. Forty-four fatty (n=22) and lean (n=22) phantoms were scanned by using the Cube FSE-flex sequence with a 3 T MRI system. Their volumes were measured by manual segmentation (MS) and two semi-automatic segmentation algorithms (regional growing [RG], multi-dimensional threshold [MDT]). Pearson correlation and Bland-Altman analysis were used to evaluate the measuring accuracy of MS, RG, and MDT in phantoms as compared with the true volume. Then, OF and TEM volumes of 15 TAO patients and 15 normal controls were measured using MDT. Paired-sample t-tests were used to compare the volumes and volume ratios of different orbital tissues between TAO patients and controls. Each segmentation (MS, RG, MDT) had a significant correlation (p<0.01) with the true volume. There was a minimal bias for MS, and stronger agreement between MDT and the true volume than between RG and the true volume in both fatty and lean phantoms. The reproducibility of Cube FSE-flex-determined MDT was adequate. The volumetric ratios of OF/globe (p<0.01), TEM/globe (p<0.01), whole orbit/globe (p<0.01) and bone orbit/globe (p<0.01) were significantly greater in TAO patients than in healthy controls. MRI Cube FSE-flex-determined MDT is a relatively accurate semi-automatic segmentation that can be used to evaluate OF and TEM volumes in the clinic. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
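    The multi-dimensional threshold (MDT) idea, keeping only voxels whose intensities satisfy thresholds in more than one image channel and converting the voxel count to a volume, can be sketched as below. Flat lists stand in for 3D volumes, and the channel ranges are invented for illustration.

```python
def segment_volume(vol_a, vol_b, range_a, range_b, voxel_mm3):
    """Toy multi-dimensional threshold: a voxel is kept only if its
    intensity falls inside the given range in BOTH channels; the
    segmented volume is the kept-voxel count times the voxel volume."""
    lo_a, hi_a = range_a
    lo_b, hi_b = range_b
    kept = sum(1 for a, b in zip(vol_a, vol_b)
               if lo_a <= a <= hi_a and lo_b <= b <= hi_b)
    return kept * voxel_mm3
```

Requiring agreement across channels is what distinguishes a multi-dimensional threshold from a single-channel cut-off, and is one way to suppress tissues that overlap in intensity on only one channel.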

  16. Automatic Event Detection in Search for Inter-Moss Loops in IRIS Si IV Slit-Jaw Images

    NASA Technical Reports Server (NTRS)

    Fayock, Brian; Winebarger, Amy R.; De Pontieu, Bart

    2015-01-01

    The high-resolution capabilities of the Interface Region Imaging Spectrometer (IRIS) mission have allowed the exploration of the finer details of the solar magnetic structure from the chromosphere to the lower corona that have previously been unresolved. Of particular interest to us are the relatively short-lived, low-lying magnetic loops that have foot points in neighboring moss regions. These inter-moss loops have also appeared in several AIA pass bands, which are generally associated with temperatures that are at least an order of magnitude higher than that of the Si IV emission seen in the 1400 angstrom pass band of IRIS. While the emission lines seen in these pass bands can be associated with a range of temperatures, the simultaneous appearance of these loops in IRIS 1400 and AIA 171, 193, and 211 suggests that they are not in ionization equilibrium. To study these structures in detail, we have developed a series of algorithms to automatically detect signal brightenings, or events, on a pixel-by-pixel basis and group them together as structures for each of the above data sets. These algorithms have successfully picked out all activity fitting certain adjustable criteria. The resulting groups of events are then statistically analyzed to determine which characteristics can be used to distinguish the inter-moss loops from all other structures. While a few characteristic histograms reveal that manually selected inter-moss loops lie outside the norm, a combination of several characteristics will need to be used to determine the statistical likelihood that a group of events can be automatically categorized as a loop of interest. The goal of this project is to be able to automatically pick out inter-moss loops from an entire data set and calculate the characteristics that have previously been determined manually, such as length, intensity, and lifetime. We will discuss the algorithms, preliminary results, and current progress of automatic characterization.
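    The per-pixel event detection described above, flagging frames where the signal brightens past a threshold and grouping consecutive flagged frames into events, can be sketched for a single pixel's time series (the spatial grouping of neighbouring pixels into structures is omitted):

```python
def detect_events(series, threshold):
    """Flag frames whose intensity exceeds the threshold and merge
    consecutive flagged frames into (start, end) index pairs; a toy
    version of pixel-by-pixel brightening detection."""
    events, start = [], None
    for i, v in enumerate(series):
        if v > threshold and start is None:
            start = i                      # event begins
        elif v <= threshold and start is not None:
            events.append((start, i - 1))  # event ends
            start = None
    if start is not None:                  # event runs to the last frame
        events.append((start, len(series) - 1))
    return events
```

Event lifetime then falls out directly as `end - start + 1` frames, one of the characteristics the abstract says is computed per event.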

  17. Affective decision-making moderates the effects of automatic associations on alcohol use among drug offenders.

    PubMed

    Cappelli, Christopher; Ames, Susan; Shono, Yusuke; Dust, Mark; Stacy, Alan

    2017-09-01

    This study used a dual-process model of cognition in order to investigate the possible influence of automatic and deliberative processes on lifetime alcohol use in a sample of drug offenders. The objective was to determine if automatic/implicit associations in memory can exert an influence over an individual's alcohol use and if decision-making ability could potentially modify the influence of these associations. 168 participants completed a battery of cognitive tests measuring implicit alcohol associations in memory (verb generation) as well as their affective decision-making ability (Iowa Gambling Task). Structural equation modeling procedures were used to test the relationship between implicit associations, decision-making, and lifetime alcohol use. Results revealed that among participants with lower levels of decision-making, implicit alcohol associations more strongly predicted higher lifetime alcohol use. These findings provide further support for the interaction between a specific decision function and its influence over automatic processes in regulating alcohol use behavior in a risky population. Understanding the interaction between automatic associations and decision processes may aid in developing more effective intervention components.

  18. Fully automated motion correction in first-pass myocardial perfusion MR image sequences.

    PubMed

    Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2008-11-01

    This paper presents a novel method for registration of cardiac perfusion magnetic resonance imaging (MRI). The presented method is capable of automatically registering perfusion data, using independent component analysis (ICA) to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of that ICA. This reference image is used in a two-pass registration framework. Qualitative and quantitative validation of the method is carried out using 46 clinical-quality, short-axis perfusion MR datasets comprising 100 images each. Despite varying image quality and motion patterns in the evaluation set, validation of the method showed a reduction of the average left ventricle (LV) motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves are also improved after registration, with the average error between registered data and the manual gold standard reduced from 2.65+/-7.89% to 0.87+/-3.88%. Comparison of clinically relevant parameters computed using registered data and the manual gold standard shows good agreement. Additional tests with a simulated free-breathing protocol showed robustness against considerable deviations from a standard breathing protocol. We conclude that this fully automatic ICA-based method shows accuracy, robustness, and computation speed adequate for use in a clinical environment.
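    The time-varying reference image is built from ICA components and their time-intensity curves; the mixing step alone can be sketched as a weighted sum of component images (the ICA decomposition itself is assumed to have been computed already, e.g. by FastICA, and the flat pixel lists are a stand-in for 2D images):

```python
def reference_frame(components, time_courses, t):
    """Reconstruct the reference image at frame t as the weighted sum
    of ICA component images, each weighted by its time-intensity
    value at t. components: list of flat pixel lists; time_courses:
    one weight sequence per component."""
    n_pix = len(components[0])
    frame = [0.0] * n_pix
    for comp, tc in zip(components, time_courses):
        w = tc[t]
        for j in range(n_pix):
            frame[j] += w * comp[j]
    return frame
```

Registering each acquired frame against the matching reference frame, rather than against a single static template, is what lets the method cope with the contrast-induced intensity changes of first-pass perfusion.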

  19. RSAT matrix-clustering: dynamic exploration and redundancy reduction of transcription factor binding motif collections

    PubMed Central

    Jaeger, Sébastien; Thieffry, Denis

    2017-01-01

    Abstract Transcription factor (TF) databases contain multitudes of binding motifs (TFBMs) from various sources, from which non-redundant collections are derived by manual curation. The advent of high-throughput methods stimulated the production of novel collections with increasing numbers of motifs. Meta-databases, built by merging these collections, contain redundant versions, because available tools are not suited to automatically identify and explore biologically relevant clusters among thousands of motifs. Motif discovery from genome-scale data sets (e.g. ChIP-seq) also produces redundant motifs, hampering the interpretation of results. We present matrix-clustering, a versatile tool that clusters similar TFBMs into multiple trees and automatically creates non-redundant TFBM collections. A feature unique to matrix-clustering is its dynamic visualisation of aligned TFBMs, and its capability to simultaneously treat multiple collections from various sources. We demonstrate that matrix-clustering considerably simplifies the interpretation of combined results from multiple motif discovery tools and highlights biologically relevant variations of similar motifs. We also ran a large-scale application to cluster ∼11 000 motifs from 24 entire databases, showing that matrix-clustering correctly groups motifs belonging to the same TF families and drastically reduces motif redundancy. matrix-clustering is integrated within the RSAT suite (http://rsat.eu/), accessible through a user-friendly web interface or command line for its integration in pipelines. PMID:28591841
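    Grouping similar motifs from a pairwise similarity matrix can be sketched with single-linkage grouping via union-find. This is a toy stand-in for matrix-clustering's actual pipeline (alignment-based similarity metrics and hierarchical tree construction), with invented names and a threshold chosen for illustration.

```python
def cluster_motifs(names, sim, threshold):
    """Single-linkage grouping: motifs i and j end up in the same
    cluster whenever sim[i][j] >= threshold, directly or via a chain
    of such links. sim is a symmetric matrix of similarity scores."""
    parent = list(range(len(names)))

    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if sim[i][j] >= threshold:
                parent[find(i)] = find(j)

    clusters = {}
    for i, name in enumerate(names):
        clusters.setdefault(find(i), []).append(name)
    return sorted(clusters.values())
```

Each resulting cluster would then be collapsed to a single representative motif to form the non-redundant collection.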

  20. Probabilistic and machine learning-based retrieval approaches for biomedical dataset retrieval

    PubMed Central

    Karisani, Payam; Qin, Zhaohui S; Agichtein, Eugene

    2018-01-01

    Abstract The bioCADDIE dataset retrieval challenge brought together different approaches to retrieval of biomedical datasets relevant to a user’s query, expressed as a text description of a needed dataset. We describe experiments in applying a data-driven, machine learning-based approach to biomedical dataset retrieval as part of this challenge. We report on a series of experiments carried out to evaluate the performance of both probabilistic and machine learning-driven techniques from information retrieval, as applied to this challenge. Our experiments with probabilistic information retrieval methods, such as query term weight optimization, automatic query expansion and simulated user relevance feedback, demonstrate that automatically boosting the weights of important keywords in a verbose query is more effective than other methods. We also show that although there is a rich space of potential representations and features available in this domain, machine learning-based re-ranking models are not able to improve on probabilistic information retrieval techniques with the currently available training data. The models and algorithms presented in this paper can serve as a viable implementation of a search engine to provide access to biomedical datasets. The retrieval performance is expected to be further improved by using additional training data that is created by expert annotation, or gathered through usage logs, clicks and other processes during natural operation of the system. Database URL: https://github.com/emory-irlab/biocaddie PMID:29688379

  1. Using classification models for the generation of disease-specific medications from biomedical literature and clinical data repository.

    PubMed

    Wang, Liqin; Haug, Peter J; Del Fiol, Guilherme

    2017-05-01

    Mining disease-specific associations from existing knowledge resources can be useful for building disease-specific ontologies and supporting knowledge-based applications. Many association mining techniques have been exploited; however, the challenge remains that the extracted associations contain much noise. It is unreliable to determine the relevance of an association by simply setting arbitrary cut-off points on multiple relevance scores, and it would be expensive to ask human experts to manually review a large number of associations. We propose that machine-learning-based classification can be used to separate the signal from the noise, and to provide a feasible approach to create and maintain disease-specific vocabularies. We initially focused on disease-medication associations for the purpose of simplicity. For a disease of interest, we extracted potentially treatment-related drug concepts from biomedical literature citations and from a local clinical data repository. Each concept was associated with multiple measures of relevance (i.e., features) such as frequency of occurrence. For the purpose of machine learning, we formed nine datasets for three diseases, each disease having two single-source datasets and one combining the two. All the datasets were labeled using existing reference standards. Thereafter, we conducted two experiments: (1) to test if adding features from the clinical data repository would improve the performance of classification achieved using features from the biomedical literature only, and (2) to determine if classifiers trained with known medication-disease datasets would be generalizable to new diseases. Simple logistic regression and LogitBoost were identified as the preferred models for the biomedical-literature datasets and the combined datasets, respectively.
The performance of the classification using combined features provided significant improvement beyond that using biomedical-literature features alone (p-value<0.001). The performance of the classifier built from known diseases to predict associated concepts for new diseases showed no significant difference from the performance of the classifier built and tested using the new disease's dataset. It is feasible to use classification approaches to automatically predict the relevance of a concept to a disease of interest. It is useful to combine features from disparate sources for the task of classification. Classifiers built from known diseases were generalizable to new diseases. Copyright © 2017 Elsevier Inc. All rights reserved.
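    Since simple logistic regression is one of the preferred models named above, a minimal gradient-descent version of scoring concept relevance from numeric features can be sketched as follows. The single "occurrence frequency" feature, the toy data, and the hyperparameters are invented for illustration; the paper's actual feature set is richer.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Minimal logistic-regression trainer (stochastic gradient
    descent) for binary relevance labels. X: list of feature lists;
    y: 0/1 labels."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(relevant)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Classify a concept as relevant if P(relevant) >= 0.5."""
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z)) >= 0.5
```

Trained on concepts labeled against an existing reference standard, such a model replaces the unreliable hand-picked cut-off points the abstract argues against.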

  2. Study of Terrestrial Radio Determination : Applications and Technology

    DOT National Transportation Integrated Search

    1979-02-01

    The report describes the results of a study of terrestrial radio determination (TRD) applications and technology. Considerable emphasis has been placed on automatic automotive vehicle location or monitoring (AVL or AVM) systems because almost all of ...

  3. Semi-Automatic Determination of Rockfall Trajectories

    PubMed Central

    Volkwein, Axel; Klette, Johannes

    2014-01-01

    Determining rockfall trajectories in the field is essential for calibrating and validating rockfall simulation software. This contribution presents an in situ device and a complementary Local Positioning System (LPS) that allow the determination of parts of the trajectory. An assembly of sensors (herein called the rockfall sensor) is installed in the falling block, recording the 3D accelerations and rotational velocities. The LPS automatically calculates the position of the block along the slope over time based on Wi-Fi signals emitted from the rockfall sensor. The velocity of the block over time is determined through post-processing. The setup of the rockfall sensor is presented, followed by proposed calibration and validation procedures. The performance of the LPS is evaluated by means of different experiments. The results allow for a quality analysis of both the obtained field data and the usability of the rockfall sensor for further applications in the field. PMID:25268916
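    The post-processing step, deriving block velocity over time from the LPS position fixes, can be sketched with finite differences (2D positions for brevity; the paper's actual processing of Wi-Fi-derived positions is more involved):

```python
import math

def speeds(positions, times):
    """Post-process position fixes into speeds by finite differences.
    positions: list of (x, y) coordinates in metres; times: seconds.
    Returns one speed (m/s) per consecutive pair of fixes."""
    out = []
    for (p0, p1), (t0, t1) in zip(zip(positions, positions[1:]),
                                  zip(times, times[1:])):
        out.append(math.dist(p0, p1) / (t1 - t0))
    return out
```

In practice the raw fixes would be smoothed first, since differentiating noisy positions amplifies the noise.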

  4. Design Support System for Coloring Illustrations by Using the Colors Preferred by a User as Determined from the Hue Patterns of Illustrations Prepared by that User

    NASA Astrophysics Data System (ADS)

    Fukai, Hironobu; Mitsukura, Yasue

    We propose a new design support system that can color illustrations according to a person's color preferences, as determined from the color patterns of illustrations prepared by that person. Recently, many design tools for promoting free design have been developed. However, preferences for various colors differ depending on individual personality. Therefore, a system that can automatically color various designs on the basis of human preference is required. In this study, we propose an automatic modeling system that can be used to model illustrations. To verify the effectiveness of the proposed system, we simulate a coloring design experiment using various design data. From the design data, we determine each subject's preferred color pattern and feed these individual color patterns back to the proposed system.

  5. Changed processing of visual sexual stimuli under GnRH-therapy--a single case study in pedophilia using eye tracking and fMRI.

    PubMed

    Jordan, Kirsten; Fromberger, Peter; Laubinger, Helge; Dechent, Peter; Müller, Jürgen L

    2014-05-17

    Antiandrogen therapy (ADT) has been used for 30 years to treat pedophilic patients. The aim of the treatment is a reduction in sexual drive and, in consequence, a reduced risk of recidivism. Yet the therapeutic success of antiandrogens is uncertain, especially regarding recidivism. Meta-analyses and reviews report only moderate and often mutually inconsistent effects. Based on the case of a 47-year-old, exclusively pedophilic forensic inpatient, we examined the effectiveness of a new eye tracking method and a new functional magnetic resonance imaging (fMRI) design in regard to the evaluation of ADT in pedophiles. We analyzed the potential of these methods in exploring the impact of ADT on automatic and controlled attentional processes in pedophiles. Eye tracking and fMRI measures were conducted before the initial ADT as well as four months after the onset of ADT. The patient simultaneously viewed an image of a child and an image of an adult while eye movements were measured. During the fMRI measure the same stimuli were presented subliminally. Eye movements demonstrated that controlled attentional processes change under ADT, whereas automatic processes remained mostly unchanged. We assume that these results reflect either the increased ability of the patient to control his eye movements while viewing prepubertal stimuli or his better ability to manipulate his answer in a socially desirable manner. Unchanged automatic attentional processes could reflect the stable pedophilic preference of the patient. Using fMRI, the subliminal presentation of sexually relevant stimuli led to changed activation patterns under the influence of ADT in occipital and parietal brain regions, the hippocampus, and also in the orbitofrontal cortex. We suggest that even at an unconscious level ADT can lead to changed processing of sexually relevant stimuli, reflecting changes of cognitive and perceptive automatic processes. 
We are convinced that our experimental designs using eye tracking and fMRI could prospectively add additional and valuable information in the evaluation of ADT in paraphilic patients and sex offenders. But with respect to the limited significance of this single case study, these first results are preliminary and further studies have to be conducted with healthy subjects and patients.

  6. Changed processing of visual sexual stimuli under GnRH-therapy – a single case study in pedophilia using eye tracking and fMRI

    PubMed Central

    2014-01-01

    Background Antiandrogen therapy (ADT) has been used for 30 years to treat pedophilic patients. The aim of the treatment is a reduction in sexual drive and, in consequence, a reduced risk of recidivism. Yet the therapeutic success of antiandrogens is uncertain, especially regarding recidivism. Meta-analyses and reviews report only moderate and often mutually inconsistent effects. Case presentation Based on the case of a 47-year-old, exclusively pedophilic forensic inpatient, we examined the effectiveness of a new eye tracking method and a new functional magnetic resonance imaging (fMRI) design in regard to the evaluation of ADT in pedophiles. We analyzed the potential of these methods in exploring the impact of ADT on automatic and controlled attentional processes in pedophiles. Eye tracking and fMRI measures were conducted before the initial ADT as well as four months after the onset of ADT. The patient simultaneously viewed an image of a child and an image of an adult while eye movements were measured. During the fMRI measure the same stimuli were presented subliminally. Eye movements demonstrated that controlled attentional processes change under ADT, whereas automatic processes remained mostly unchanged. We assume that these results reflect either the increased ability of the patient to control his eye movements while viewing prepubertal stimuli or his better ability to manipulate his answer in a socially desirable manner. Unchanged automatic attentional processes could reflect the stable pedophilic preference of the patient. Using fMRI, the subliminal presentation of sexually relevant stimuli led to changed activation patterns under the influence of ADT in occipital and parietal brain regions, the hippocampus, and also in the orbitofrontal cortex. We suggest that even at an unconscious level ADT can lead to changed processing of sexually relevant stimuli, reflecting changes of cognitive and perceptive automatic processes. 
    Conclusion We are convinced that our experimental designs using eye tracking and fMRI could prospectively add valuable information to the evaluation of ADT in paraphilic patients and sex offenders. However, given the limited significance of a single case study, these first results are preliminary, and further studies with healthy subjects and patients are needed. PMID:24885644

  7. Automatic segmentation of the puborectalis muscle in 3D transperineal ultrasound.

    PubMed

    van den Noort, Frieda; Grob, Anique T M; Slump, Cornelis H; van der Vaart, Carl H; van Stralen, Marijn

    2017-10-11

    The introduction of 3D analysis of the puborectalis muscle into daily diagnostic practice is hindered by the need for appropriate observer training. Automatic 3D segmentation of the puborectalis muscle in 3D transperineal ultrasound may aid its adoption in clinical practice. A manual 3D segmentation protocol was developed to segment the puborectalis muscle. Data from 20 women in their first trimester of pregnancy were used to validate the reproducibility of this protocol. For automatic segmentation, active appearance models of the puborectalis muscle were developed and trained on manual segmentation data from 50 women. The performance of both manual and automatic segmentation was analyzed by measuring the overlap and distance between the segmentations. The intraclass correlation coefficients (ICC) and their 95% confidence intervals were also determined for mean echogenicity and volume of the puborectalis muscle. The ICC values for mean echogenicity (0.968-0.991) and volume (0.626-0.910) were good to very good for both automatic and manual segmentation. The overlap and distance results for manual segmentation were as expected, showing a mismatch of only a few pixels (2-3) on average and a reasonable overlap. Based on overlap and distance, 5 mismatches in automatic segmentation were detected, giving an automatic segmentation success rate of 90%. In conclusion, this study presents reliable manual and automatic 3D segmentation of the puborectalis muscle. This will facilitate future investigation of the muscle and allows reliable measurement of clinically valuable parameters such as mean echogenicity. This article is protected by copyright. All rights reserved.
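
    The Dice coefficient commonly used to score segmentation overlap, as in the validation above, can be sketched in a few lines; the toy masks below are illustrative, not data from the study:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity of two binary masks given as sets of voxel coordinates.

    Returns 1.0 for identical non-empty masks, 0.0 for disjoint ones.
    """
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks are trivially identical
    return 2.0 * len(a & b) / (len(a) + len(b))

# Toy 2D "segmentations": a manual mask and an automatic one shifted by one pixel
manual = {(x, y) for x in range(4) for y in range(4)}      # 16 pixels
auto   = {(x, y) for x in range(1, 5) for y in range(4)}   # 16 pixels, shifted
print(round(dice_coefficient(manual, auto), 3))  # 0.75 (12 shared pixels)
```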

  8. A concurrent computer aided detection (CAD) tool for articular cartilage disease of the knee on MR imaging using active shape models

    NASA Astrophysics Data System (ADS)

    Ramakrishna, Bharath; Saiprasad, Ganesh; Safdar, Nabile; Siddiqui, Khan; Chang, Chein-I.; Siegel, Eliot

    2008-03-01

    Osteoarthritis (OA) is the most common form of arthritis and a major cause of morbidity, affecting millions of adults in the US and worldwide. In the knee, OA begins with degeneration of the joint articular cartilage, eventually resulting in the femur and tibia coming into contact and leading to severe pain and stiffness. There has been extensive research on 3D MR imaging sequences and automatic/semi-automatic techniques for 2D/3D articular cartilage extraction. However, in routine clinical practice the most popular technique still remains radiographic examination with qualitative assessment of the joint space. This may be in large part due to a lack of tools that can provide a clinically relevant diagnosis in near real time alongside the radiologist, serving radiologists' needs and reducing inter-observer variation. Our work aims to fill this void by developing a CAD application that can generate a clinically relevant diagnosis of articular cartilage damage in near real time. The algorithm features a 2D Active Shape Model (ASM) for modeling the bone-cartilage interface on all slices of a Double Echo Steady State (DESS) MR sequence, followed by measurement of the cartilage thickness from the bone surface, and finally by identification of regions of abnormal thinness and focal/degenerative lesions. A preliminary evaluation of the CAD tool was carried out on 10 cases taken from the Osteoarthritis Initiative (OAI) database. When compared with two board-certified musculoskeletal radiologists, the automatic CAD application produced segmentation/thickness maps in a little over 60 seconds for all cases. This observation poses interesting possibilities for increasing radiologist productivity and confidence, improving patient outcomes, and applying more sophisticated CAD algorithms to routine orthopedic imaging tasks.

  9. Coupling flood forecasting and social media crowdsourcing

    NASA Astrophysics Data System (ADS)

    Kalas, Milan; Kliment, Tomas; Salamon, Peter

    2016-04-01

    Social and mainstream media monitoring is increasingly recognized as a valuable source of information in disaster management and response. Information on ongoing disasters can be detected in a very short time, and social media can supplement traditional data feeds (ground and remote observation schemes). Probably the biggest attempt to use social media in crisis management was the activation of the Digital Humanitarian Network by the United Nations Office for the Coordination of Humanitarian Affairs in response to Typhoon Yolanda: a network of volunteers performed rapid needs and damage assessment by tagging reports posted to social media, which were then used by machine learning classifiers as a training set to automatically identify tweets referring to both urgent needs and offers of help. In this work we present the potential of coupling a social media streaming and news monitoring application (GlobalFloodNews, www.globalfloodsystem.com) with a flood forecasting system (www.globalfloods.eu) and a geo-catalogue of OGC services discovered via the Google Search Engine (WMS, WFS, WCS, etc.) to provide a full suite of information to crisis management centers as fast as possible. In GlobalFloodNews we use advanced filtering of the real-time Twitter stream, where relevant information is automatically extracted using natural language and signal processing techniques. The keyword filters are adjusted and optimized automatically using machine learning algorithms as new reports are added to the system. To refine the search results, the forecasting system triggers an event-based search of the social media and OGC services relevant for crisis response (population distribution, critical infrastructure, hospitals, etc.).
    The current version of the system makes use of the USHAHIDI Crowdmap platform, which is designed to easily crowdsource information via multiple channels, including SMS, email, Twitter, and the web. We want to show the potential of monitoring floods at the global scale.
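
    The keyword-filtering step described above can be illustrated with a minimal sketch; the keyword list, report texts, and function names are assumptions for illustration, not the GlobalFloodNews implementation:

```python
# Minimal sketch of keyword-based filtering of a social-media stream. A real
# system would refine this keyword set with machine learning as new labeled
# reports arrive, as the abstract describes.
FLOOD_KEYWORDS = {"flood", "flooding", "inundation", "overflow", "levee"}

def is_flood_relevant(text, keywords=FLOOD_KEYWORDS):
    """Return True if any keyword occurs as a word in the (lowercased) text."""
    words = set(text.lower().replace(",", " ").replace(".", " ").split())
    return bool(words & keywords)

stream = [
    "River levee breached, flooding downtown streets",
    "Great weather for a picnic today",
    "Road closed due to inundation near the bridge",
]
relevant = [t for t in stream if is_flood_relevant(t)]
print(len(relevant))  # 2
```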

  10. SU-D-BRD-06: Automated Population-Based Planning for Whole Brain Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreibmann, E; Fox, T; Crocker, I

    2014-06-01

    Purpose: Treatment planning for whole brain radiation treatment is technically a simple process, but in practice it consumes valuable clinical time with repetitive and tedious tasks. This report presents a method that automatically segments the relevant target and normal tissues and creates a treatment plan within a few minutes of patient simulation. Methods: Segmentation is performed automatically through morphological operations on the soft tissue. The treatment plan is generated by searching a database of previous cases for patients with similar anatomy. In this search, each database case is ranked in terms of similarity using a customized metric designed for sensitivity by including only geometrical changes that affect the dose distribution. The best-matching database case is automatically modified to replace the relevant patient information and isocenter position while maintaining the original beam and MLC settings. Results: Fifteen patients were used to validate the method. In each case the anatomy was accurately segmented, with mean Dice coefficients of 0.970 ± 0.008 for the brain, 0.846 ± 0.009 for the eyes, and 0.672 ± 0.111 for the lens, as compared to clinical segmentations. Each case was then matched against a database of 70 validated treatment plans, and the best-matching plan (termed the auto-plan) was compared retrospectively with the clinical plan in terms of brain coverage and maximum doses to critical structures. Maximum doses were reduced by up to 20.809 Gy for the left eye (mean 3.533 Gy), by 13.352 Gy (mean 1.311 Gy) for the right eye, and by 27.471 Gy (mean 4.856 Gy) and 25.218 Gy (mean 6.315 Gy) for the left and right lens, respectively. Time from simulation to auto-plan was 3-4 minutes. Conclusion: Automated database-based matching is an alternative to classical treatment planning that improves quality while providing a cost-effective solution by modifying previously validated plans to match a current patient's anatomy.
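
    The database-matching idea can be sketched as ranking prior cases by a weighted geometric distance over anatomy features and reusing the best match's plan; the feature names, weights, and values below are hypothetical, not the paper's customized metric:

```python
# Rank previous cases by a weighted distance over features assumed to affect
# the dose distribution, then pick the best match. All values are invented.
def similarity_rank(query, database, weights):
    def distance(case):
        return sum(w * abs(query[k] - case["features"][k])
                   for k, w in weights.items())
    return sorted(database, key=distance)

database = [
    {"plan_id": "A", "features": {"skull_width_mm": 150, "eye_sep_mm": 62}},
    {"plan_id": "B", "features": {"skull_width_mm": 141, "eye_sep_mm": 58}},
    {"plan_id": "C", "features": {"skull_width_mm": 162, "eye_sep_mm": 66}},
]
weights = {"skull_width_mm": 1.0, "eye_sep_mm": 2.0}  # eye spacing weighted higher
query = {"skull_width_mm": 143, "eye_sep_mm": 59}

best = similarity_rank(query, database, weights)[0]
print(best["plan_id"])  # B
```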

  11. Mono and multi-objective optimization techniques applied to a large range of industrial test cases using Metamodel assisted Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Fourment, Lionel; Ducloux, Richard; Marie, Stéphane; Ejday, Mohsen; Monnereau, Dominique; Massé, Thomas; Montmitonnet, Pierre

    2010-06-01

    The use of numerical simulation of material processing allows a trial-and-error strategy to improve virtual processes without incurring material costs or interrupting production, and therefore saves a great deal of money, but it requires user time to analyze the results, adjust the operating conditions, and restart the simulation. Automatic optimization is the perfect complement to simulation. An Evolutionary Algorithm coupled with metamodelling makes it possible to obtain industrially relevant results on a very large range of applications within a few tens of simulations and without any specific knowledge of automatic optimization techniques. Ten industrial partners were selected to cover the different areas of the mechanical forging industry and to provide different examples for the forming simulation tools. The large computational time is handled by a metamodel approach, which interpolates the objective function over the entire parameter space while knowing the exact function values only at a reduced number of "master points". Two algorithms are used: an evolution strategy combined with a Kriging metamodel, and a genetic algorithm combined with a Meshless Finite Difference Method. The latter approach is extended to multi-objective optimization, where the set of solutions corresponding to the best possible compromises between the different objectives is computed in the same way. The population-based approach exploits the parallel capabilities of the computer with high efficiency. An optimization module, fully embedded within the Forge2009 IHM, makes it possible to cover all the defined examples, and the use of new multi-core hardware to run several simulations at the same time dramatically reduces the required time.
    The presented examples demonstrate the method's versatility. They include billet shape optimization of a common rail, the cogging of a bar, and a wire drawing problem.
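
    The metamodel idea (evaluate the expensive objective only at a few "master points", then search a cheap interpolant everywhere else) can be sketched with a simple inverse-distance-weighting surrogate standing in for the Kriging model; the objective function is a stand-in for an expensive forming simulation:

```python
# Metamodel-assisted optimization sketch: a handful of exact evaluations,
# then a dense search on the cheap surrogate. All functions are illustrative.
def expensive_objective(x):
    return (x - 0.3) ** 2  # pretend each call is a full forging simulation

def idw_metamodel(master_points, x, power=2):
    """Interpolate the objective at x from known (xi, f(xi)) master points."""
    num = den = 0.0
    for xi, fi in master_points:
        if x == xi:
            return fi
        w = 1.0 / abs(x - xi) ** power
        num += w * fi
        den += w
    return num / den

# A few exact (expensive) evaluations ...
masters = [(x, expensive_objective(x)) for x in (0.0, 0.25, 0.5, 0.75, 1.0)]
# ... then search the surrogate densely at negligible cost
candidates = [i / 1000 for i in range(1001)]
best = min(candidates, key=lambda x: idw_metamodel(masters, x))
print(round(best, 2))  # 0.25: the best master point; a real loop would refine near it
```

    A full algorithm would iterate, adding new master points near the surrogate optimum until the exact and interpolated values agree.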

  12. Automatic attention to emotional stimuli: neural correlates.

    PubMed

    Carretié, Luis; Hinojosa, José A; Martín-Loeches, Manuel; Mercado, Francisco; Tapia, Manuel

    2004-08-01

    We investigated the capability of emotional and nonemotional visual stimulation to capture automatic attention, an aspect of the interaction between cognitive and emotional processes that has received scant attention from researchers. Event-related potentials were recorded from 37 subjects using a 60-electrode array and were submitted to temporal and spatial principal component analyses to detect and quantify the main components, and to source localization software (LORETA) to determine their spatial origin. Stimuli capturing automatic attention were of three types: emotionally positive, emotionally negative, and nonemotional pictures. Results suggest that initially (P1: 105 msec after stimulus), automatic attention is captured by negative pictures and not by positive or nonemotional ones. Later (P2: 180 msec), automatic attention remains captured by negative pictures, but also by positive ones. Finally (N2: 240 msec), attention is captured only by positive and nonemotional stimuli. Anatomically, this sequence is characterized by decreasing activation of the visual association cortex (VAC) and by the growing involvement, from dorsal to ventral areas, of the anterior cingulate cortex (ACC). Analyses suggest that the ACC, and not the VAC, is responsible for the experimental effects described above. The intensity, latency, and location of neural activity related to automatic attention thus depend clearly on the emotional content of the stimulus and on its associated biological importance. Copyright 2004 Wiley-Liss, Inc.

  13. Automatic twin vessel recrystallizer. Effective purification of acetaminophen by successive automatic recrystallization and absolute determination of purity by DSC.

    PubMed

    Nara, Osamu

    2011-01-24

    I describe an interchangeable twin-vessel (J, N) automatic glass recrystallizer that eliminates the time-consuming recovery and recycling of crystals for repeated recrystallization. The sample goes into the dissolution vessel J, which contains a magnetic stir-bar K; J is clamped to the upper joint H of the recrystallizer body D. The empty crystallization vessel N is clamped to the lower joint M. Pure solvent is delivered to the dissolution vessel and the crystallization vessel via the head of the condenser A. The crystallization vessel is heated (P), while the dissolution reservoir is stirred and heated by the solvent vapor (F). Continuous outflow of filtrate E out of J keeps N at a stable boiling temperature. This results in efficient dissolution, evaporation, and separation of pure crystals Q. Pure solvent in the dissolution reservoir is recovered by suction, the empty dissolution and crystallization vessels are detached, the stirrer magnet is transferred to the crystallization vessel, and the roles of the vessels are then reversed. After evacuating the mother liquor from the upper vessel, the apparatus is ready for the next automatic recrystallization once the twin vessels are refilled with pure solvent. I show successive automatic recrystallization of acetaminophen from diethyl ether, obtaining, after eight automatic recrystallizations at 96% yield per stage, acetaminophen with higher melting temperatures than the USP and JP reference standards. I also demonstrate a novel approach to the determination of absolute purity, requiring no reference standards, by combining successive automatic recrystallization with differential scanning calorimetry (DSC). This involves measuring the criterial melting temperature T(0) corresponding to the 100% pure material and the quantitative ΔT in DSC, based on the van't Hoff law of melting-point depression.
    The purities of the six commercial acetaminophen samples and reference standards and of the eight-times-recrystallized product were 98.8 mol%, 97.9 mol%, 99.1 mol%, 98.3 mol%, 98.4 mol%, 98.5 mol% and 99.3 mol%, respectively. Copyright © 2010 Elsevier B.V. All rights reserved.
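
    The van't Hoff melting-point-depression relation behind the purity determination can be put into a short computation; the numeric values below are illustrative, not the paper's measurements:

```python
# Mole-fraction impurity from melting-point depression (van't Hoff):
#   x_impurity = dH_fus * (T0 - Tm) / (R * T0^2)
# where T0 is the criterial melting temperature of the 100% pure material and
# Tm is the observed melting temperature. Inputs below are illustrative only.
R = 8.314  # J/(mol*K), gas constant

def purity_mol_percent(t0_k, tm_k, dh_fus_j_mol):
    x_impurity = dh_fus_j_mol * (t0_k - tm_k) / (R * t0_k ** 2)
    return 100.0 * (1.0 - x_impurity)

# Illustrative values: T0 = 443 K, Tm depressed by 0.5 K, and an assumed
# enthalpy of fusion of 27 kJ/mol (order of magnitude for acetaminophen)
print(round(purity_mol_percent(443.0, 442.5, 27000.0), 2))  # 99.17
```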

  14. Automatic high-sensitivity control of suspended pollutants in drinking and natural water

    NASA Astrophysics Data System (ADS)

    Akopov, Edmund I.; Karabegov, M.; Ovanesyan, A.

    1993-11-01

    This article describes a new instrumental method and device for automatic measurement of water turbidity (WT) by means of a photoelectron flow ultramicroscope (PFU). The method determines WT by measuring the number concentration (the number of particles suspended in 1 cm3 of the water under study) using the PFU, and demonstrates much higher sensitivity and accuracy than the usual methods, turbidimetry and nephelometry.

  15. Automatic RGB-depth-pressure anthropometric analysis and individualised sleep solution prescription.

    PubMed

    Esquirol Caussa, Jordi; Palmero Cantariño, Cristina; Bayo Tallón, Vanessa; Cos Morera, Miquel Àngel; Escalera, Sergio; Sánchez, David; Sánchez Padilla, Maider; Serrano Domínguez, Noelia; Relats Vilageliu, Mireia

    2017-08-01

    Sleep surfaces must adapt to individual somatotypic features to maintain comfortable, convenient and healthy sleep, preventing diseases and injuries. Determining the most adequate rest surface for an individual can often be a complex and subjective question. The aim was to design and validate an automatic multimodal somatotype determination model to recommend an individually designed mattress-topper-pillow combination. Design and validation of the automated prescription model for an individualised sleep system were performed through single-image 2D-3D analysis and body pressure distribution, to objectively determine optimal individual sleep surfaces combining five different mattress densities, three different toppers and three cervical pillows. A final study (n = 151) and re-analysis (n = 117) defined and validated the model, showing high correlations between calculated and real data (>85% in height and body circumferences, 89.9% in weight, 80.4% in body mass index and more than 70% in morphotype categorisation). The somatotype determination model can accurately prescribe an individualised sleep solution. This can be useful for healthy people and for health centres that need to adapt sleep surfaces to people with special needs. Next steps will increase the model's accuracy and analyse whether the prescribed individualised sleep solution can improve sleep quantity and quality; additionally, future studies will adapt the model to mattresses with technological improvements and tailor-made production, and will define interfaces for people with special needs.

  16. Acoustics of snoring and automatic snore sound detection in children.

    PubMed

    Çavuşoğlu, M; Poets, C F; Urschitz, M S

    2017-10-31

    Acoustic analyses of snoring sounds have been used to objectively assess snoring and have been applied to various clinical problems in adult patients. Such studies require highly automated tools to analyze sound recordings of a whole night's sleep in order to extract clinically relevant snore-related statistics. The existing techniques and software used for adults are not efficiently applicable to snoring sounds in children, largely because the acoustic signal properties differ. In this paper, we present a broad range of acoustic characteristics of snoring sounds in children (N = 38) in comparison to adult (N = 30) patients. Acoustic characteristics of the signals were calculated, including frequency-domain representations, spectrogram-based characteristics, spectral envelope analysis, formant structures and loudness of the snoring sounds. We observed significant differences in the spectral features, formant structures and loudness of the snoring signals of children compared to adults, which may arise from the diversity of upper airway anatomy as the principal determinant of the snore sound generation mechanism. Furthermore, based on the specific audio features of snoring children, we propose a novel algorithm for the automatic detection of snoring sounds from ambient acoustic data specifically in a pediatric population. The respiratory sounds were recorded using a pair of microphones and a multi-channel data acquisition system simultaneously with full-night polysomnography during sleep. Brief sound chunks of 0.5 s were classified as belonging to a snoring event or not with a multi-layer perceptron, which was trained in a supervised fashion with stochastic gradient descent on a large hand-labeled dataset using frequency-domain features.
    The method proposed here has been used to extract snore-related statistics from the detected snore episodes over the whole night's sleep, including the number of snore episodes (total snoring time), the ratio of snoring time to total sleep time, the variation of the snoring rate, the regularity of snoring episodes in time and amplitude, and snore loudness. These statistics will ultimately serve as a clinical tool providing information for the objective evaluation of snoring in several clinical applications.
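
    The supervised classification step can be sketched with a single logistic unit trained by stochastic gradient descent, a deliberate simplification of the multi-layer perceptron used in the study; the two frequency-band features and all numbers are illustrative:

```python
import math
import random

# Toy snore / non-snore classifier: each 0.5 s chunk is reduced to two
# illustrative frequency-domain features (low-band and high-band energy).
def train_sgd(samples, labels, epochs=200, lr=0.5, seed=0):
    rng = random.Random(seed)
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        order = list(range(len(samples)))
        rng.shuffle(order)  # stochastic: visit samples in random order
        for i in order:
            z = sum(wj * xj for wj, xj in zip(w, samples[i])) + b
            p = 1.0 / (1.0 + math.exp(-z))   # logistic activation
            err = p - labels[i]
            w = [wj - lr * err * xj for wj, xj in zip(w, samples[i])]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z > 0 else 0

# Invented data: snores (label 1) carry more low-band energy than breathing
snores    = [[0.9, 0.2], [0.8, 0.3], [0.85, 0.25]]
breathing = [[0.2, 0.6], [0.3, 0.7], [0.25, 0.65]]
w, b = train_sgd(snores + breathing, [1, 1, 1, 0, 0, 0])
print(predict(w, b, [0.88, 0.22]))  # 1 (classified as snore)
```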

  17. Explicit attention interferes with selective emotion processing in human extrastriate cortex.

    PubMed

    Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O

    2007-02-22

    Brain imaging and event-related potential studies provide strong evidence that emotional stimuli guide selective attention in visual processing. A reflection of the emotional attention capture is the increased Early Posterior Negativity (EPN) for pleasant and unpleasant compared to neutral images (approximately 150-300 ms poststimulus). The present study explored whether this early emotion discrimination reflects an automatic phenomenon or is subject to interference by competing processing demands. Thus, emotional processing was assessed while participants performed a concurrent feature-based attention task varying in processing demands. Participants successfully performed the primary visual attention task as revealed by behavioral performance and selected event-related potential components (Selection Negativity and P3b). Replicating previous results, emotional modulation of the EPN was observed in a task condition with low processing demands. In contrast, pleasant and unpleasant pictures failed to elicit increased EPN amplitudes compared to neutral images in more difficult explicit attention task conditions. Further analyses determined that even the processing of pleasant and unpleasant pictures high in emotional arousal is subject to interference in experimental conditions with high task demand. Taken together, performing demanding feature-based counting tasks interfered with differential emotion processing indexed by the EPN. The present findings demonstrate that taxing processing resources by a competing primary visual attention task markedly attenuated the early discrimination of emotional from neutral picture contents. Thus, these results provide further empirical support for an interference account of the emotion-attention interaction under conditions of competition. Previous studies revealed the interference of selective emotion processing when attentional resources were directed to locations of explicitly task-relevant stimuli. 
    The present data suggest that interference of emotion processing by competing task demands is a more general phenomenon extending to the domain of feature-based attention. Furthermore, the results are inconsistent with the notion of effortlessness, i.e., early emotion discrimination despite concurrent task demands. These findings suggest that the presumed automatic nature of emotion processing should be assessed at the level of specific aspects rather than treating automaticity as an all-or-none phenomenon.

  18. Explicit attention interferes with selective emotion processing in human extrastriate cortex

    PubMed Central

    Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O

    2007-01-01

    Background Brain imaging and event-related potential studies provide strong evidence that emotional stimuli guide selective attention in visual processing. A reflection of the emotional attention capture is the increased Early Posterior Negativity (EPN) for pleasant and unpleasant compared to neutral images (~150–300 ms poststimulus). The present study explored whether this early emotion discrimination reflects an automatic phenomenon or is subject to interference by competing processing demands. Thus, emotional processing was assessed while participants performed a concurrent feature-based attention task varying in processing demands. Results Participants successfully performed the primary visual attention task as revealed by behavioral performance and selected event-related potential components (Selection Negativity and P3b). Replicating previous results, emotional modulation of the EPN was observed in a task condition with low processing demands. In contrast, pleasant and unpleasant pictures failed to elicit increased EPN amplitudes compared to neutral images in more difficult explicit attention task conditions. Further analyses determined that even the processing of pleasant and unpleasant pictures high in emotional arousal is subject to interference in experimental conditions with high task demand. Taken together, performing demanding feature-based counting tasks interfered with differential emotion processing indexed by the EPN. Conclusion The present findings demonstrate that taxing processing resources by a competing primary visual attention task markedly attenuated the early discrimination of emotional from neutral picture contents. Thus, these results provide further empirical support for an interference account of the emotion-attention interaction under conditions of competition. Previous studies revealed the interference of selective emotion processing when attentional resources were directed to locations of explicitly task-relevant stimuli. 
    The present data suggest that interference of emotion processing by competing task demands is a more general phenomenon extending to the domain of feature-based attention. Furthermore, the results are inconsistent with the notion of effortlessness, i.e., early emotion discrimination despite concurrent task demands. These findings suggest that the presumed automatic nature of emotion processing should be assessed at the level of specific aspects rather than treating automaticity as an all-or-none phenomenon. PMID:17316444

  19. Computer-aided diagnosis system: a Bayesian hybrid classification method.

    PubMed

    Calle-Alonso, F; Pérez, C J; Arias-Nicolás, J P; Martín, J

    2013-10-01

    A novel method to classify multi-class biomedical objects is presented. The method is based on a hybrid approach which combines pairwise comparison, Bayesian regression and the k-nearest-neighbor technique. It can be applied in a fully automatic way or within a relevance feedback framework. In the latter case, the information obtained from both an expert and the automatic classification is used iteratively to improve the results until a certain accuracy level is achieved; at that point the learning process finishes and new classifications can be performed automatically. The method has been applied in two biomedical contexts, following the same cross-validation schemes as the original studies. The first refers to cancer diagnosis, leading to an accuracy of 77.35% versus the 66.37% obtained originally. The second considers the diagnosis of pathologies of the vertebral column, where the original method achieves accuracies ranging from 76.5% to 96.7%, and from 82.3% to 97.1%, in two different cross-validation schemes. Even with no supervision, the proposed method reaches 96.71% and 97.32% in these two cases; using a supervised framework, the achieved accuracy is 97.74%. Furthermore, all abnormal cases were correctly classified. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
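
    The k-nearest-neighbor component of the hybrid can be sketched as follows; the pairwise-comparison and Bayesian-regression stages are omitted, and the feature vectors and class labels are invented for illustration:

```python
from collections import Counter

def knn_classify(query, training, k=3):
    """training: list of (feature_vector, class_label) pairs.
    Returns the majority label among the k nearest training points."""
    def sq_dist(x):
        return sum((a - b) ** 2 for a, b in zip(query, x))
    nearest = sorted(training, key=lambda pair: sq_dist(pair[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

training = [
    ([1.0, 1.1], "normal"), ([0.9, 1.0], "normal"), ([1.1, 0.9], "normal"),
    ([3.0, 3.2], "abnormal"), ([3.1, 2.9], "abnormal"), ([2.9, 3.0], "abnormal"),
]
print(knn_classify([1.05, 1.0], training))  # normal
print(knn_classify([3.0, 3.0], training))   # abnormal
```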

  20. Dentalmaps: Automatic Dental Delineation for Radiotherapy Planning in Head-and-Neck Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thariat, Juliette, E-mail: jthariat@hotmail.com; Ramus, Liliane; INRIA

    Purpose: To propose an automatic atlas-based segmentation framework for the dental structures, called Dentalmaps, and to assess its accuracy and relevance for guiding dental care in the context of intensity-modulated radiotherapy. Methods and Materials: A multi-atlas-based segmentation, less sensitive to artifacts than previously published head-and-neck segmentation methods, was used. The manual segmentations of a 21-patient database were first deformed onto the query image using nonlinear registrations with the training images and then fused to estimate the consensus segmentation of the query. Results: The framework was evaluated with a leave-one-out protocol. The maximum doses estimated using manual contours were considered ground truth and compared with the maximum doses estimated using automatic contours. The dose estimation error was within 2-Gy accuracy in 75% of cases (with a median of 0.9 Gy), whereas it was within 2-Gy accuracy in only 30% of cases with the visual estimation method without any contour, which is the routine practice procedure. Conclusions: Dose estimates using this framework were more accurate than visual estimates without dental contours. Dentalmaps represents a useful documentation and communication tool between radiation oncologists and dentists in routine practice. Prospective multicenter assessment is underway on patients extrinsic to the database.
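
    The label-fusion step of multi-atlas segmentation (after registration, which is assumed already done here) can be sketched as a per-voxel majority vote; the tiny 1D label arrays below are illustrative:

```python
from collections import Counter

def fuse_labels(propagated_atlases):
    """propagated_atlases: list of per-voxel label lists, all the same length.
    Returns the consensus label at each voxel by majority vote."""
    consensus = []
    for voxel_labels in zip(*propagated_atlases):
        consensus.append(Counter(voxel_labels).most_common(1)[0][0])
    return consensus

# Three atlases mostly agree that voxels 2-4 belong to a tooth (label 1)
atlas_a = [0, 0, 1, 1, 1, 0]
atlas_b = [0, 1, 1, 1, 1, 0]
atlas_c = [0, 0, 1, 1, 0, 0]
print(fuse_labels([atlas_a, atlas_b, atlas_c]))  # [0, 0, 1, 1, 1, 0]
```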
