Keller, Andrew; Bader, Samuel L.; Shteynberg, David; Hood, Leroy; Moritz, Robert L.
2015-01-01
Proteomics by mass spectrometry technology is widely used for identifying and quantifying peptides and proteins. The breadth and sensitivity of peptide detection have been advanced by the advent of data-independent acquisition mass spectrometry. Analysis of such data, however, is challenging due to the complexity of fragment ion spectra that have contributions from multiple co-eluting precursor ions. We present SWATHProphet software that identifies and quantifies peptide fragment ion traces in data-independent acquisition data, provides accurate probabilities to ensure results are correct, and automatically detects and removes contributions to quantitation originating from interfering precursor ions. Integration in the widely used open source Trans-Proteomic Pipeline facilitates subsequent analyses such as combining the results of multiple data sets for improved discrimination using iProphet and inferring sample proteins using ProteinProphet. This novel development should greatly help make data-independent acquisition mass spectrometry accessible to large numbers of users. PMID:25713123
Automatic physical inference with information maximizing neural networks
NASA Astrophysics Data System (ADS)
Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-04-01
Compressing large data sets to a manageable number of summaries that are informative about the underlying parameters vastly simplifies both frequentist and Bayesian inference. When only simulations are available, these summaries are typically chosen heuristically, so they may inadvertently miss important information. We introduce a simulation-based machine learning technique that trains artificial neural networks to find nonlinear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). In test cases where the posterior can be derived exactly, likelihood-free inference based on automatically derived IMNN summaries produces nearly exact posteriors, showing that these summaries are good approximations to sufficient statistics. In a series of numerical examples of increasing complexity and astrophysical relevance we show that IMNNs are robustly capable of automatically finding optimal, nonlinear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima. We anticipate that the automatic physical inference method described in this paper will be essential to obtain both accurate and precise cosmological parameter estimates from complex and large astronomical data sets, including those from LSST and Euclid.
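To make the IMNN objective concrete, the sketch below scores a set of candidate summaries by the log-determinant of the Fisher information estimated from simulations; the finite-difference setup, the toy variance-inference example, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the Fisher-information objective behind IMNNs,
# assuming summaries s = f(d) have already been computed by some
# network f. Hypothetical setup, not the paper's code.
import numpy as np

def fisher_score(s_fid, s_plus, s_minus, dtheta):
    """Return log|F| for summaries evaluated at fiducial and
    finite-difference-perturbed parameter values.

    s_fid   : (n_sims, n_summaries) summaries at fiducial parameters
    s_plus  : (n_params, n_sims, n_summaries) at theta + dtheta
    s_minus : (n_params, n_sims, n_summaries) at theta - dtheta
    """
    C = np.atleast_2d(np.cov(s_fid, rowvar=False))   # summary covariance
    # derivative of the mean summary w.r.t. parameters (central differences)
    dmu = (s_plus.mean(axis=1) - s_minus.mean(axis=1)) / (2 * dtheta[:, None])
    F = dmu @ np.linalg.solve(C, dmu.T)              # Fisher matrix
    return np.linalg.slogdet(F)[1]                   # objective to maximize

# Toy check: data are Gaussian noise with unknown variance theta, and the
# mean of squares is a near-sufficient summary, so log|F| should be large.
rng = np.random.default_rng(0)
theta, dt = 1.0, np.array([0.1])
sims = lambda t: rng.normal(0.0, np.sqrt(t), (500, 20))
summary = lambda d: (d ** 2).mean(axis=1, keepdims=True)
print(fisher_score(summary(sims(theta)),
                   summary(sims(theta + dt))[None],
                   summary(sims(theta - dt))[None],
                   dt))
```

In an IMNN this score is computed from the network outputs and maximized by gradient ascent, so the trained weights define the compression.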
Two Sources of Evidence on the Non-Automaticity of True and False Belief Ascription
ERIC Educational Resources Information Center
Back, Elisa; Apperly, Ian A.
2010-01-01
A recent study by Apperly et al. (2006) found evidence that adults do not automatically infer false beliefs while watching videos that afford such inferences. This method was extended to examine true beliefs, which are sometimes thought to be ascribed by "default" (e.g., Leslie & Thaiss, 1992). Sequences of pictures were presented in which the…
Automatic approach to deriving fuzzy slope positions
NASA Astrophysics Data System (ADS)
Zhu, Liang-Jun; Zhu, A.-Xing; Qin, Cheng-Zhi; Liu, Jun-Zhi
2018-03-01
Fuzzy characterization of slope positions is important for geographic modeling. Most of the existing fuzzy classification-based methods for fuzzy characterization require extensive user intervention in data preparation and parameter setting, which is tedious and time-consuming. This paper presents an automatic approach to overcoming these limitations in the prototype-based inference method for deriving fuzzy membership value (or similarity) to slope positions. The key contribution is a procedure for finding the typical locations and setting the fuzzy inference parameters for each slope position type. Instead of being determined entirely by users, as in the original prototype-based inference method, the typical locations and fuzzy inference parameters for each slope position type are automatically determined by a rule set based on prior domain knowledge and the frequency distributions of topographic attributes. Furthermore, the preparation of topographic attributes (e.g., slope gradient, curvature, and relative position index) is automated, so the proposed automatic approach has only one necessary input, i.e., the gridded digital elevation model of the study area. All compute-intensive algorithms in the proposed approach were accelerated by parallel computing. Two case studies demonstrate that this approach derives fuzzy slope positions properly, conveniently, and quickly.
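To illustrate the prototype-based inference step, a fuzzy membership to a slope-position type can be computed by comparing a cell's topographic attributes with those at a typical location; the attribute set, prototype values, and function widths below are invented for illustration, not values the automatic rule set would derive.

```python
# Sketch of prototype-based fuzzy membership to a slope position,
# assuming bell-shaped membership functions over topographic attributes
# (slope gradient, curvature, relative position index). Illustrative only.
import numpy as np

def membership(value, typical, width):
    """Gaussian-like similarity of an attribute value to its value at a
    typical location of the slope-position type."""
    return np.exp(-0.5 * ((value - typical) / width) ** 2)

def fuzzy_slope_position(cell, prototype, widths):
    """Overall similarity = weakest attribute similarity (fuzzy AND)."""
    return min(membership(cell[k], prototype[k], widths[k]) for k in cell)

# A grid cell described by its attributes, scored against a "ridge" prototype.
cell   = {"slope": 3.0, "curvature": 0.02, "rpi": 0.95}
ridge  = {"slope": 0.0, "curvature": 0.05, "rpi": 1.00}
widths = {"slope": 5.0, "curvature": 0.05, "rpi": 0.10}
print(f"ridge membership: {fuzzy_slope_position(cell, ridge, widths):.2f}")
```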
IMNN: Information Maximizing Neural Networks
NASA Astrophysics Data System (ADS)
Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-04-01
This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). Compressing large data sets to a manageable number of summaries vastly simplifies both frequentist and Bayesian inference, but heuristically chosen summaries may inadvertently miss important information. In test cases, likelihood-free inference based on automatically derived IMNN summaries produces nearly exact posteriors, showing that these summaries are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.
Automatic inference of multicellular regulatory networks using informative priors.
Sun, Xiaoyun; Hong, Pengyu
2009-01-01
To fully understand the mechanisms governing animal development, computational models and algorithms are needed to enable quantitative studies of the underlying regulatory networks. We developed a mathematical model based on dynamic Bayesian networks to model multicellular regulatory networks that govern cell differentiation processes. A machine-learning method was developed to automatically infer such a model from heterogeneous data. We show that the model inference procedure can be greatly improved by incorporating interaction data across species. The proposed approach was applied to C. elegans vulval induction to reconstruct a model capable of simulating C. elegans vulval induction under 73 different genetic conditions.
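The role of the informative prior can be illustrated with a toy posterior ranking of candidate edges, where cross-species interaction support adds a log-prior bonus; the gene names come from the C. elegans vulval induction literature, but the scores, prior weight, and support set are invented, not the paper's dynamic Bayesian network model.

```python
# Sketch of an informative prior in network inference: candidate
# regulatory edges supported by interaction data from another species
# get a log-prior bonus that shifts the posterior ranking. Illustrative.
import math

loglik = {("lin-3", "let-23"): -10.2,   # data fit of each edge, e.g. from a DBN
          ("lin-3", "lin-12"): -10.0,
          ("let-23", "mpk-1"): -11.5}
cross_species_support = {("lin-3", "let-23"), ("let-23", "mpk-1")}

def posterior_score(edge, prior_bonus=math.log(3.0)):
    prior = prior_bonus if edge in cross_species_support else 0.0
    return loglik[edge] + prior

ranked = sorted(loglik, key=posterior_score, reverse=True)
print(ranked)  # support promotes ('lin-3','let-23') past ('lin-3','lin-12')
```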
Automatic inference of indexing rules for MEDLINE
Névéol, Aurélie; Shooshan, Sonya E; Claveau, Vincent
2008-01-01
Background: Indexing is a crucial step in any information retrieval system. In MEDLINE, a widely used database of the biomedical literature, the indexing process involves the selection of Medical Subject Headings in order to describe the subject matter of articles. The need for automatic tools to assist MEDLINE indexers in this task is growing with the increasing number of publications being added to MEDLINE. Methods: In this paper, we describe the use and the customization of Inductive Logic Programming (ILP) to infer indexing rules that may be used to produce automatic indexing recommendations for MEDLINE indexers. Results: Our results show that this original ILP-based approach outperforms manual rules when they exist. In addition, the use of ILP rules also improves the overall performance of the Medical Text Indexer (MTI), a system producing automatic indexing recommendations for MEDLINE. Conclusion: We expect the sets of ILP rules obtained in this experiment to be integrated into MTI. PMID:19025687
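For intuition, an inferred indexing rule behaves like a conditional recommendation over features of the article; the toy rules below are invented examples in propositional form, whereas real ILP rules are first-order clauses, and none of these were actually learned in the paper.

```python
# Sketch of applying learned indexing rules of the form
# "IF the text contains X (and Y) THEN recommend MeSH heading Z".
# Rules and text are hypothetical, for illustration only.
rules = [
    ({"neoplasm", "chemotherapy"}, "Antineoplastic Agents"),
    ({"rats"}, "Rats"),
    ({"randomized", "trial"}, "Randomized Controlled Trials as Topic"),
]

def recommend(text):
    tokens = set(text.lower().split())
    return [mesh for conditions, mesh in rules if conditions <= tokens]

abstract = "a randomized controlled trial of chemotherapy for neoplasm"
print(recommend(abstract))
# ['Antineoplastic Agents', 'Randomized Controlled Trials as Topic']
```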
Automatic inference of indexing rules for MEDLINE.
Névéol, Aurélie; Shooshan, Sonya E; Claveau, Vincent
2008-11-19
Indexing is a crucial step in any information retrieval system. In MEDLINE, a widely used database of the biomedical literature, the indexing process involves the selection of Medical Subject Headings in order to describe the subject matter of articles. The need for automatic tools to assist MEDLINE indexers in this task is growing with the increasing number of publications being added to MEDLINE. In this paper, we describe the use and the customization of Inductive Logic Programming (ILP) to infer indexing rules that may be used to produce automatic indexing recommendations for MEDLINE indexers. Our results show that this original ILP-based approach outperforms manual rules when they exist. In addition, the use of ILP rules also improves the overall performance of the Medical Text Indexer (MTI), a system producing automatic indexing recommendations for MEDLINE. We expect the sets of ILP rules obtained in this experiment to be integrated into MTI.
Vlaic, Sebastian; Hoffmann, Bianca; Kupfer, Peter; Weber, Michael; Dräger, Andreas
2013-09-01
GRN2SBML automatically encodes gene regulatory networks derived from several inference tools in the Systems Biology Markup Language (SBML). Through a graphical user interface, the networks can be annotated via the simple object access protocol (SOAP)-based application programming interface of BioMart Central Portal and the Minimum Information Required In the Annotation of Models (MIRIAM) registry. Additionally, we provide an R-package, which processes the output of supported inference algorithms and automatically passes all required parameters to GRN2SBML. Therefore, GRN2SBML closes a gap in the processing pipeline between the inference of gene regulatory networks and their subsequent analysis, visualization and storage. GRN2SBML is freely available under the GNU Public License version 3 and can be downloaded from http://www.hki-jena.de/index.php/0/2/490. General information on GRN2SBML, examples and tutorials are available at the tool's web page.
Mugzach, Omri; Peleg, Mor; Bagley, Steven C; Guter, Stephen J; Cook, Edwin H; Altman, Russ B
2015-08-01
Our goal is to create an ontology that will allow data integration and reasoning with subject data to classify subjects, and based on this classification, to infer new knowledge on Autism Spectrum Disorder (ASD) and related neurodevelopmental disorders (NDD). We take a first step toward this goal by extending an existing autism ontology to allow automatic inference of ASD phenotypes and Diagnostic & Statistical Manual of Mental Disorders (DSM) criteria based on subjects' Autism Diagnostic Interview-Revised (ADI-R) assessment data. Knowledge regarding diagnostic instruments, ASD phenotypes and risk factors was added to augment an existing autism ontology via Ontology Web Language class definitions and semantic web rules. We developed a custom Protégé plugin for enumerating combinatorial OWL axioms to support the many-to-many relations of ADI-R items to diagnostic categories in the DSM. We utilized a reasoner to infer whether 2642 subjects, whose data was obtained from the Simons Foundation Autism Research Initiative, meet DSM-IV-TR (DSM-IV) and DSM-5 diagnostic criteria based on their ADI-R data. We extended the ontology by adding 443 classes and 632 rules that represent phenotypes, along with their synonyms, environmental risk factors, and frequency of comorbidities. Applying the rules on the data set showed that the method produced accurate results: the true positive and true negative rates for inferring autistic disorder diagnosis according to DSM-IV criteria were 1 and 0.065, respectively; the true positive rate for inferring ASD based on DSM-5 criteria was 0.94. The ontology allows automatic inference of subjects' disease phenotypes and diagnosis with high accuracy. The ontology may benefit future studies by serving as a knowledge base for ASD. In addition, by adding knowledge of related NDDs, commonalities and differences in manifestations and risk factors could be automatically inferred, contributing to the understanding of ASD pathophysiology. Copyright © 2015 Elsevier Inc. All rights reserved.
Wang, Hong-ping; Chen, Chang; Liu, Yan; Yang, Hong-Jun; Wu, Hong-Wei; Xiao, Hong-Bin
2015-11-01
The incomplete identification of the chemical components of traditional Chinese medicinal formulas has been one of the bottlenecks in the modernization of traditional Chinese medicine. Tandem mass spectrometry has been widely used for the identification of chemical substances. Conventional automatic tandem mass spectrometry acquisition, in which precursor ions are selected according to their signal intensity, has a drawback for chemical substance identification when samples contain many overlapping signals: compounds present in minor or trace amounts cannot be identified because most of the tandem mass spectrometry information is lost. Herein, a molecular-feature-oriented precursor ion selection and tandem mass spectrometry structure elucidation method for the analysis of complex Chinese medicine chemical constituents was developed. Precursor ions are selected according to the two-dimensional characteristics (retention times and mass-to-charge ratio ranges) of herbal compounds, so that all precursor ions from herbal compounds are included and more minor chemical constituents in Chinese medicine are identified. Compared with conventional automatic tandem mass spectrometry setups, the approach is novel and overcomes this drawback in chemical substance identification. As an example, 276 compounds from the Chinese medicine Yi-Xin-Shu capsule were identified. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Automatic Diagnosis of Fetal Heart Rate: Comparison of Different Methodological Approaches
2001-10-25
Apgar score). Each recording lasted at least 30 minutes and contained both the cardiographic series and the toco trace. We focused on four...inference rules automatically generated by the learning procedure showed that the number of rules can be manually reduced to 37 without unduly degrading the
Data fusion and classification using a hybrid intrinsic cellular inference network
NASA Astrophysics Data System (ADS)
Woodley, Robert; Walenz, Brett; Seiffertt, John; Robinette, Paul; Wunsch, Donald
2010-04-01
Hybrid Intrinsic Cellular Inference Network (HICIN) is designed for battlespace decision support applications. We developed an automatic method of generating hypotheses for an entity-attribute classifier. A domain-specific ontology was used to generate categories automatically for data classification. Heterogeneous data are clustered using an Adaptive Resonance Theory (ART) inference engine on a sample (unclassified) data set, the Lahman baseball database. The actual data are immaterial to the architecture; parallels in the data can easily be drawn (e.g., "Team" maps to organization, "Runs scored/allowed" to a measure of organization performance (positive/negative), "Payroll" to organization resources, etc.). Results show that HICIN classifiers create known inferences from the heterogeneous data. These inferences are not explicitly stated in the ontological description of the domain and are strictly data driven. HICIN uses data uncertainty handling, based on subjective logic, to reduce errors in the classification. The belief mass allows evidence from multiple sources to be mathematically combined to increase or discount an assertion. In military operations the ability to reduce uncertainty will be vital in the data fusion operation.
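The belief-mass combination mentioned above can be illustrated with the standard consensus (cumulative fusion) operator of subjective logic; this is a generic sketch of that operator, not code from HICIN.

```python
# Cumulative fusion of two subjective-logic opinions (belief,
# disbelief, uncertainty), each summing to 1. Generic sketch.
def fuse(o1, o2):
    b1, d1, u1 = o1
    b2, d2, u2 = o2
    k = u1 + u2 - u1 * u2             # normalizer; assumes u1, u2 > 0
    return ((b1 * u2 + b2 * u1) / k,  # combined belief
            (d1 * u2 + d2 * u1) / k,  # combined disbelief
            (u1 * u2) / k)            # uncertainty shrinks with evidence

# Two sources weakly supporting the same assertion reinforce it.
sensor_a = (0.6, 0.1, 0.3)
sensor_b = (0.5, 0.2, 0.3)
print(fuse(sensor_a, sensor_b))       # belief rises, uncertainty drops
```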
Shariff, Azim F; Tracy, Jessica L; Markusoff, Jeffrey L
2012-09-01
How do we decide who merits social status? According to functionalist theories of emotion, the nonverbal expressions of pride and shame play a key role, functioning as automatically perceived status signals. In this view, observers automatically make status inferences about expressers on the basis of these expressions, even when contradictory contextual information about the expressers' status is available. In four studies, the authors tested whether implicit and explicit status perceptions are influenced by pride and shame expressions even when these expressions' status-related messages are contradicted by contextual information. Results indicate that emotion expressions powerfully influence implicit and explicit status inferences, at times neutralizing or even overriding situational knowledge. These findings demonstrate the irrepressible communicative power of emotion displays and indicate that status judgments can be informed as much (and often more) by automatic responses to nonverbal expressions of emotion as by rational, contextually bound knowledge.
Fuzzy logic and image processing techniques for the interpretation of seismic data
NASA Astrophysics Data System (ADS)
Orozco-del-Castillo, M. G.; Ortiz-Alemán, C.; Urrutia-Fucugauchi, J.; Rodríguez-Castellanos, A.
2011-06-01
Since interpretation of seismic data is usually a tedious and repetitive task, the ability to do so automatically or semi-automatically has become an important objective of recent research. We believe that the vagueness and uncertainty in the interpretation process make fuzzy logic an appropriate tool for dealing with seismic data. In this work we developed a semi-automated fuzzy inference system to detect the internal architecture of a mass transport complex (MTC) in seismic images. We propose that the observed characteristics of an MTC can be expressed as fuzzy if-then rules consisting of linguistic values associated with fuzzy membership functions. The construction of the fuzzy inference system and the various image processing techniques used are presented. We conclude that this is a well-suited problem for fuzzy logic, since the application of the proposed methodology yields a semi-automatically interpreted MTC which closely resembles the MTC from expert manual interpretation.
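As an illustration of the rule form, one hypothetical fuzzy if-then rule might read "IF reflection amplitude is low AND texture is chaotic THEN MTC membership is high"; the linguistic variables, membership functions, and attribute values below are assumptions, not the paper's calibrated system.

```python
# One fuzzy if-then rule evaluated with trapezoidal membership
# functions; min() implements the fuzzy AND. Illustrative values.
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: rises on (a, b), is flat on
    [b, c], falls on (c, d), and is zero outside (a, d)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def mtc_membership(amplitude, chaos):
    low_amp = trapezoid(amplitude, -1.0, 0.0, 0.2, 0.5)  # "amplitude is low"
    chaotic = trapezoid(chaos, 0.4, 0.7, 1.0, 1.1)       # "texture is chaotic"
    return min(low_amp, chaotic)                         # fuzzy AND

print(mtc_membership(amplitude=0.3, chaos=0.8))  # degree this pixel is MTC
```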
Review of Medical Image Classification using the Adaptive Neuro-Fuzzy Inference System
Hosseini, Monireh Sheikh; Zekri, Maryam
2012-01-01
Image classification is an issue that utilizes image processing, pattern recognition and classification methods. Automatic medical image classification is a progressive area in image classification, and it is expected to develop further in the future. Automatic diagnosis can assist pathologists by providing second opinions and reducing their workload. This paper reviews the application of the adaptive neuro-fuzzy inference system (ANFIS) as a classifier in medical image classification during the past 16 years. ANFIS is a fuzzy inference system (FIS) implemented in the framework of an adaptive fuzzy neural network. It combines the explicit knowledge representation of an FIS with the learning power of artificial neural networks. The objective of ANFIS is to integrate the best features of fuzzy systems and neural networks. A brief comparison with other classifiers is given, and the main advantages and drawbacks of this classifier are examined. PMID:23493054
Fluency heuristic: a model of how the mind exploits a by-product of information retrieval.
Hertwig, Ralph; Herzog, Stefan M; Schooler, Lael J; Reimer, Torsten
2008-09-01
Boundedly rational heuristics for inference can be surprisingly accurate and frugal for several reasons. They can exploit environmental structures, co-opt complex capacities, and elude effortful search by exploiting information that automatically arrives on the mental stage. The fluency heuristic is a prime example of a heuristic that makes the most of an automatic by-product of retrieval from memory, namely, retrieval fluency. In 4 experiments, the authors show that retrieval fluency can be a proxy for real-world quantities, that people can discriminate between two objects' retrieval fluencies, and that people's inferences are in line with the fluency heuristic (in particular fast inferences) and with experimentally manipulated fluency. The authors conclude that the fluency heuristic may be one tool in the mind's repertoire of strategies that artfully probes memory for encapsulated frequency information that can veridically reflect statistical regularities in the world. (c) 2008 APA, all rights reserved.
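The decision rule itself is simple enough to sketch directly: choose the more fluently retrieved object when the fluency difference is discriminable, otherwise guess; the threshold value is an illustrative assumption.

```python
# Minimal sketch of the fluency heuristic for a two-alternative
# inference task. The 100 ms discriminability threshold is assumed
# for illustration.
def fluency_choice(rt_a, rt_b, threshold_ms=100):
    """rt_a, rt_b: retrieval times (ms) for objects A and B. Returns
    the object inferred to score higher on the criterion, or None if
    fluencies are too similar to discriminate (fall back to guessing)."""
    if abs(rt_a - rt_b) < threshold_ms:
        return None                       # not discriminable
    return "A" if rt_a < rt_b else "B"    # faster retrieval -> higher value

print(fluency_choice(480, 650))  # A retrieved faster, so infer A
```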
Investigation on V2O5 Thin Films Prepared by Spray Pyrolysis Technique
NASA Astrophysics Data System (ADS)
Anasthasiya, A. Nancy Anna; Gowtham, K.; Shruthi, R.; Pandeeswari, R.; Jeyaprakash, B. G.
The spray pyrolysis technique was employed to deposit V2O5 thin films on a glass substrate. By varying the precursor solution volume from 10 mL to 50 mL in steps of 10 mL, films of various thicknesses were prepared. Orthorhombic polycrystalline V2O5 films were inferred from the XRD pattern irrespective of precursor solution volume. The micro-Raman studies suggested that the annealed V2O5 thin film has good crystallinity. The effect of precursor solution volume on morphological and optical properties was analysed and reported.
Intracranial EEG correlates of implicit relational inference within the hippocampus.
Reber, T P; Do Lam, A T A; Axmacher, N; Elger, C E; Helmstaedter, C; Henke, K; Fell, J
2016-01-01
Drawing inferences from past experiences enables adaptive behavior in future situations. Inference has been shown to depend on hippocampal processes. Usually, inference is considered a deliberate and effortful mental act which happens during retrieval, and requires the focus of our awareness. Recent fMRI studies hint at the possibility that some forms of hippocampus-dependent inference can also occur during encoding and possibly also outside of awareness. Here, we sought to further explore the feasibility of hippocampal implicit inference, and specifically address the temporal evolution of implicit inference using intracranial EEG. Presurgical epilepsy patients with hippocampal depth electrodes viewed a sequence of word pairs, and judged the semantic fit between two words in each pair. Some of the word pairs entailed a common word (e.g., "winter-red," "red-cat") such that an indirect relation was established in following word pairs (e.g., "winter-cat"). The behavioral results suggested that drawing inference implicitly from past experience is feasible because indirect relations seemed to foster "fit" judgments while the absence of indirect relations fostered "do not fit" judgments, even though the participants were unaware of the indirect relations. An event-related potential (ERP) difference emerging 400 ms post-stimulus was evident in the hippocampus during encoding, suggesting that indirect relations were already established automatically during encoding of the overlapping word pairs. Further ERP differences emerged later post-stimulus (1,500 ms), were modulated by the participants' responses, and were evident during encoding and test. Furthermore, response-locked ERP effects were evident at test. These ERP effects could hence be a correlate of the interaction of implicit memory with decision-making. Together, the data map out a time-course in which the hippocampus automatically integrates memories from discrete but related episodes to implicitly influence future decision making. © 2015 Wiley Periodicals, Inc.
De Neys, Wim
2006-06-01
Human reasoning has been shown to overly rely on intuitive, heuristic processing instead of a more demanding analytic inference process. Four experiments tested the central claim of current dual-process theories that analytic operations involve time-consuming executive processing whereas the heuristic system would operate automatically. Participants solved conjunction fallacy problems and indicative and deontic selection tasks. Experiment 1 established that making correct analytic inferences demanded more processing time than did making heuristic inferences. Experiment 2 showed that burdening the executive resources with an attention-demanding secondary task decreased correct, analytic responding and boosted the rate of conjunction fallacies and indicative matching card selections. Results were replicated in Experiments 3 and 4 with a different secondary-task procedure. Involvement of executive resources for the deontic selection task was less clear. Findings validate basic processing assumptions of the dual-process framework and complete the correlational research programme of K. E. Stanovich and R. F. West (2000).
Inference of segmented color and texture description by tensor voting.
Jia, Jiaya; Tang, Chi-Keung
2004-06-01
A robust synthesis method is proposed to automatically infer missing color and texture information from a damaged 2D image by (N)D tensor voting (N > 3). The same approach is generalized to range and 3D data in the presence of occlusion, missing data and noise. Our method translates texture information into an adaptive (N)D tensor, followed by a voting process that infers noniteratively the optimal color values in the (N)D texture space. A two-step method is proposed. First, we perform segmentation based on insufficient geometry, color, and texture information in the input, and extrapolate partitioning boundaries by either 2D or 3D tensor voting to generate a complete segmentation for the input. Missing colors are synthesized using (N)D tensor voting in each segment. Different feature scales in the input are automatically adapted by our tensor scale analysis. Results on a variety of difficult inputs demonstrate the effectiveness of our tensor voting approach.
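For a feel of what a second-order vote looks like, the sketch below implements a heavily simplified 2D version for unoriented point tokens: each token accumulates distance-decayed outer-product votes from its neighbors, and curve saliency is read from the eigenvalue gap. This omits the stick voting fields, adaptive scale analysis, and ND machinery of the actual framework.

```python
# Simplified 2D tensor-voting flavor: ball-like votes from unoriented
# tokens; saliency lambda1 - lambda2 is high where tokens align along
# a curve. Illustrative, not the full framework.
import numpy as np

def curve_saliency(points, scale=1.0):
    pts = np.asarray(points, float)
    out = []
    for i, p in enumerate(pts):
        T = np.zeros((2, 2))
        for j, q in enumerate(pts):
            if i == j:
                continue
            v = q - p
            dist = np.linalg.norm(v)
            u = v / dist
            T += np.exp(-(dist / scale) ** 2) * np.outer(u, u)
        lam = np.linalg.eigvalsh(T)       # ascending eigenvalues
        out.append(lam[1] - lam[0])       # eigenvalue gap = curve saliency
    return out

line = [(x, 0.0) for x in np.linspace(0, 3, 7)]
outlier = [(1.5, 2.5)]
print(np.round(curve_saliency(line + outlier), 2))  # outlier scores lowest
```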
Efficient reordering of PROLOG programs
NASA Technical Reports Server (NTRS)
Gooley, Markian M.; Wah, Benjamin W.
1989-01-01
PROLOG programs are often inefficient: execution corresponds to a depth-first traversal of an AND/OR graph; traversing subgraphs in another order can be less expensive. It is shown how the reordering of clauses within PROLOG predicates, and especially of goals within clauses, can prevent unnecessary search. The characterization and detection of restrictions on reordering is discussed. A system of calling modes for PROLOG, geared to reordering, is proposed, and ways to infer them automatically are discussed. The information needed for safe reordering is summarized, and which types can be inferred automatically and which must be provided by the user are considered. An improved method for determining a good order for the goals of PROLOG clauses is presented and used as the basis for a reordering system.
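The core reordering idea can be sketched as a greedy selection over goals whose input modes are already satisfied; the cost model, mode representation, and example clause below are illustrative assumptions, not the paper's system.

```python
# Greedy goal reordering under mode constraints: run the cheapest goal
# whose input variables are bound, and bind its outputs. Illustrative.
def reorder_goals(goals, bound):
    """goals: list of (name, inputs, outputs, est_cost) tuples."""
    bound, order, pending = set(bound), [], list(goals)
    while pending:
        ready = [g for g in pending if set(g[1]) <= bound]
        if not ready:
            raise ValueError("no admissible ordering")  # mode conflict
        g = min(ready, key=lambda goal: goal[3])  # cheapest runnable goal
        order.append(g[0])
        bound |= set(g[2])                        # outputs become bound
        pending.remove(g)
    return order

# Body of: path(X, Z) :- edge(X, Y), expensive_check(Y), edge(Y, Z).
goals = [("edge(X,Y)", ["X"], ["Y"], 10),
         ("expensive_check(Y)", ["Y"], [], 500),
         ("edge(Y,Z)", ["Y"], ["Z"], 10)]
print(reorder_goals(goals, bound=["X"]))
# ['edge(X,Y)', 'edge(Y,Z)', 'expensive_check(Y)'] - costly goal deferred
```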
18O/16O ratios in cherts associated with the saline lake deposits of East Africa
O'Neil, J.R.; Hay, R.L.
1973-01-01
The cherts formed from sodium silicate precursors in East African saline, alkaline lakes have δ18O values ranging from 31.1 to 44.1. The δ18O values correlate in general with lake salinities as inferred from geologic evidence, indicating that most chert was formed from its precursor in contact with lake water trapped at the time of deposition. A few of the analyzed cherts probably formed in contact with dilute meteoric water. From the widely varying δ18O values we conclude that precursors were transformed to chert in fluids of widely varying salinity and aNa+/aH+ ratio. © 1973.
Mid-Atomic-Number Cylindrical Wire Array Precursor Plasma Studies on Zebra
Stafford, A; Safronova, A. S.; Kantsyrev, V. L.; ...
2014-12-30
The precursor plasmas from low wire number cylindrical wire arrays (CWAs) were previously shown to radiate at temperatures >300 eV for Ni-60 (94% Cu and 6% Ni) wires in experiments on the 1-MA Zebra generator. Continued research into precursor plasmas has studied additional midatomic-number materials including Cu and Alumel (95% Ni, 2% Al, 2% Mn, and 1% Si) to determine if the >300 eV temperatures are common for midatomic-number materials. Additionally, current scaling effects were observed by performing CWA precursor experiments at an increased current of 1.5 MA using a load current multiplier. Our results show an increase in linear radiation yield of ~50% (16 versus 10 kJ/cm) for the experiments at increased current. However, plasma conditions inferred through the modeling of X-ray time-gated spectra are very similar for the precursor plasma in both current conditions.
Murata, Aiko; Saito, Hisamichi; Schug, Joanna; Ogawa, Kenji; Kameda, Tatsuya
2016-01-01
A number of studies have shown that individuals often spontaneously mimic the facial expressions of others, a tendency known as facial mimicry. This tendency has generally been considered a reflex-like "automatic" response, but several recent studies have shown that the degree of mimicry may be moderated by contextual information. However, the cognitive and motivational factors underlying the contextual moderation of facial mimicry require further empirical investigation. In this study, we present evidence that the degree to which participants spontaneously mimic a target's facial expressions depends on whether participants are motivated to infer the target's emotional state. In the first study we show that facial mimicry, assessed by facial electromyography, occurs more frequently when participants are specifically instructed to infer a target's emotional state than when given no instruction. In the second study, we replicate this effect using the Facial Action Coding System to show that participants are more likely to mimic facial expressions of emotion when they are asked to infer the target's emotional state, rather than make inferences about a physical trait unrelated to emotion. These results provide convergent evidence that the explicit goal of understanding a target's emotional state affects the degree of facial mimicry shown by the perceiver, suggesting moderation of reflex-like motor activities by higher cognitive processes.
Automatic segmentation of time-lapse microscopy images depicting a live Dharma embryo.
Zacharia, Eleni; Bondesson, Maria; Riu, Anne; Ducharme, Nicole A; Gustafsson, Jan-Åke; Kakadiaris, Ioannis A
2011-01-01
Biological inferences about the toxicity of chemicals reached during experiments on the zebrafish Dharma embryo can be greatly affected by the analysis of the time-lapse microscopy images depicting the embryo. Among the stages of image analysis, automatic and accurate segmentation of the Dharma embryo is the most crucial and challenging. In this paper, an accurate and automatic segmentation approach for the segmentation of Dharma embryo data obtained by fluorescent time-lapse microscopy is proposed. Experiments performed on four stacks of 3D images over time have shown promising results.
Automatic Inference of Cryptographic Key Length Based on Analysis of Proof Tightness
2016-06-01
within an attack tree structure, then expand attack tree methodology to include cryptographic reductions. We then provide the algorithms for...maintaining and automatically reasoning about these expanded attack trees. We provide a software tool that utilizes machine-readable proof and attack metadata...and the attack tree methodology to provide rapid and precise answers regarding security parameters and effective security. This eliminates the need
2018-02-15
address the problem that probabilistic inference algorithms are difficult and tedious to implement, by expressing them in terms of a small number of...building blocks, which are automatic transformations on probabilistic programs. On one hand, our curation of these building blocks reflects the way human...reasoning with low-level computational optimization, so the speed and accuracy of the generated solvers are competitive with state-of-the-art systems.
Two sources of evidence on the non-automaticity of true and false belief ascription.
Back, Elisa; Apperly, Ian A
2010-04-01
A recent study by Apperly et al. (2006) found evidence that adults do not automatically infer false beliefs while watching videos that afford such inferences. This method was extended to examine true beliefs, which are sometimes thought to be ascribed by "default" (e.g., Leslie & Thaiss, 1992). Sequences of pictures were presented in which the location of an object and a character's belief about the location of the object often changed. During the picture sequences participants responded to an unpredictable probe picture about where the character believed the object to be located or where the object was located in reality. In Experiment 1 participants were not directly instructed to track the character's beliefs about the object. There was a significant reaction time cost for belief probes compared with matched reality probes, whether the character's belief was true or false. In Experiment 2, participants were asked to track where the character thought the object was located, responses to belief probes were faster than responses to reality probes, suggesting that the difference observed in Experiment 1 was not due to intrinsic differences between the probes, but was more likely to be due to participants inferring beliefs ad hoc in response to the probe. In both Experiments 1 and 2, responses to belief and reality probes were faster in the true belief condition than in the false belief condition. In Experiment 3 this difference was largely eliminated when participants had fewer reasons to make belief inferences spontaneously. These two lines of evidence are neatly explained by the proposition that neither true nor false beliefs are ascribed automatically, but that belief ascription may occur spontaneously in response to task demands. Copyright 2009 Elsevier B.V. All rights reserved.
Well-Being Tracking via Smartphone-Measured Activity and Sleep: Cohort Study
Feygin, Sidney; Dembo, Aluma; Aguilera, Adrian; Recht, Benjamin
2017-01-01
Background: Automatically tracking mental well-being could facilitate personalization of treatments for mood disorders such as depression and bipolar disorder. Smartphones present a novel and ubiquitous opportunity to track individuals’ behavior and may be useful for inferring and automatically monitoring mental well-being. Objective: The aim of this study was to assess the extent to which activity and sleep tracking with a smartphone can be used for monitoring individuals’ mental well-being. Methods: A cohort of 106 individuals was recruited to install an app on their smartphone that would track their well-being with daily surveys and track their behavior with activity inferences from their phone’s accelerometer data. Of the participants recruited, 53 had sufficient data to infer activity and sleep measures. For this subset of individuals, we related measures of activity and sleep to the individuals’ well-being and used these measures to predict their well-being. Results: We found that smartphone-measured approximations for daily physical activity were positively correlated with both mood (P=.004) and perceived energy level (P<.001). Sleep duration was positively correlated with mood (P=.02) but not energy. Our measure for sleep disturbance was not found to be significantly related to either mood or energy, which could imply too much noise in the measurement. Models predicting the well-being measures from the activity and sleep measures were found to be significantly better than naive baselines (P<.01), despite modest overall improvements. Conclusions: Measures of activity and sleep inferred from smartphone activity were strongly related to and somewhat predictive of participants’ well-being. Whereas the improvement over naive models was modest, it reaffirms the importance of considering physical activity and sleep for predicting mood and for making automatic mood monitoring a reality. PMID:28982643
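The modeling comparison reported in the Results can be illustrated with a toy regression against a predict-the-mean baseline; the synthetic data, coefficients, and split below are assumptions standing in for the study's measurements.

```python
# Toy version of "predict well-being from activity and sleep, compare
# to a naive baseline". Synthetic data, illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 200
activity = rng.gamma(2.0, 30.0, n)           # e.g. active minutes/day
sleep = rng.normal(7.0, 1.0, n)              # hours/night
mood = 2 + 0.01 * activity + 0.3 * sleep + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), activity, sleep])
train, test = slice(0, 150), slice(150, None)
beta, *_ = np.linalg.lstsq(X[train], mood[train], rcond=None)

mse_model = np.mean((X[test] @ beta - mood[test]) ** 2)
mse_naive = np.mean((mood[train].mean() - mood[test]) ** 2)
print(f"model MSE {mse_model:.2f} vs naive MSE {mse_naive:.2f}")
```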
Deep Learning from EEG Reports for Inferring Underspecified Information
Goodwin, Travis R.; Harabagiu, Sanda M.
2017-01-01
Secondary use of electronic health records (EHRs) often relies on the ability to automatically identify and extract information from EHRs. Unfortunately, EHRs are known to suffer from a variety of idiosyncrasies – most prevalently, they have been shown to often omit or underspecify information. Adapting traditional machine learning methods for inferring underspecified information relies on manually specifying features characterizing the specific information to recover (e.g. particular findings, test results, or physician’s impressions). By contrast, in this paper, we present a method for jointly (1) automatically extracting word- and report-level features and (2) inferring underspecified information from EHRs. Our approach accomplishes these two tasks jointly by combining recent advances in deep neural learning with access to textual data in electroencephalogram (EEG) reports. We evaluate the performance of our model on the problem of inferring the neurologist’s overall impression (normal or abnormal) from EEG reports and report an accuracy of 91.4%, precision of 94.4%, recall of 91.2%, and F1 measure of 92.8% (a 40% improvement over the performance obtained using Doc2Vec). These promising results demonstrate the power of our approach, while error analysis reveals remaining obstacles as well as areas for future improvement. PMID:28815118
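As a quick consistency check on the quoted metrics: the F1 measure is the harmonic mean of precision and recall, and the reported values agree.

```python
# F1 = harmonic mean of precision and recall, using the figures above.
precision, recall = 0.944, 0.912
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.3f}")  # ~0.928, matching the reported 92.8%
```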
Precursors to language: Social cognition and pragmatic inference in primates.
Seyfarth, Robert M; Cheney, Dorothy L
2017-02-01
Despite their differences, human language and the vocal communication of nonhuman primates share many features. Both constitute forms of coordinated activity, rely on many shared neural mechanisms, and involve discrete, combinatorial cognition that includes rich pragmatic inference. These common features suggest that during evolution the ancestors of all modern primates faced similar social problems and responded with similar systems of communication and cognition. When language later evolved from this common foundation, many of its distinctive features were already present.
Damkliang, Kasikrit; Tandayya, Pichaya; Sangket, Unitsa; Pasomsub, Ekawat
2016-11-28
At present, coding sequences (CDS) are being discovered and ever larger CDS data sets are being released. Approaches and related tools have been developed and upgraded concurrently, especially for phylogenetic tree analysis. This paper proposes an integrated automatic Taverna workflow for phylogenetic tree inference using public-access web services at the European Bioinformatics Institute (EMBL-EBI) and the Swiss Institute of Bioinformatics (SIB), together with our own locally deployed web services. The workflow input is a set of CDS in the Fasta format. The workflow supports 1,000 to 20,000 bootstrap replicates. It performs tree inference with the Parsimony (PARS), Distance Matrix - Neighbor Joining (DIST-NJ), and Maximum Likelihood (ML) algorithms of the EMBOSS PHYLIPNEW package, based on our proposed Multiple Sequence Alignment (MSA) similarity score. The local web services are implemented and deployed in two ways, using Soaplab2 and Apache Axis2; both the SOAP and Java Web Service (JWS) deployments provide WSDL endpoints to Taverna Workbench, a workflow manager. The workflow has been validated, its performance has been measured, and its results have been verified. The workflow's execution time is less than ten minutes for inferring a tree with 10,000 bootstrap replicates. This paper thus proposes a new integrated automatic workflow that will benefit bioinformaticians with an intermediate level of knowledge and experience. All local services have been deployed at our portal http://bioservices.sci.psu.ac.th.
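To make the bootstrapping step concrete, the sketch below generates replicate alignments by resampling columns with replacement; the toy alignment is an assumption, and the workflow itself delegates this step to the underlying EMBOSS/PHYLIP tools.

```python
# Bootstrap replicates of a multiple sequence alignment: sample
# alignment columns with replacement. Toy data, illustrative only.
import random

alignment = {"seqA": "ATGCCGTA", "seqB": "ATGACGTT", "seqC": "ATGCCGAT"}
length = len(next(iter(alignment.values())))

def bootstrap_replicate(aln, rng):
    cols = [rng.randrange(length) for _ in range(length)]
    return {name: "".join(seq[c] for c in cols) for name, seq in aln.items()}

rng = random.Random(42)
replicates = [bootstrap_replicate(alignment, rng) for _ in range(3)]
print(replicates[0])  # one resampled alignment; a tree is inferred per replicate
```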
Damkliang, Kasikrit; Tandayya, Pichaya; Sangket, Unitsa; Pasomsub, Ekawat
2016-03-01
At present, coding sequences (CDS) are being discovered and ever larger CDS data sets are being released. Approaches and related tools have been developed and upgraded concurrently, especially for phylogenetic tree analysis. This paper proposes an integrated automatic Taverna workflow for phylogenetic tree inference using public-access web services at the European Bioinformatics Institute (EMBL-EBI) and the Swiss Institute of Bioinformatics (SIB), together with our own locally deployed web services. The workflow input is a set of CDS in the Fasta format. The workflow supports 1,000 to 20,000 bootstrap replicates. It performs tree inference with the Parsimony (PARS), Distance Matrix - Neighbor Joining (DIST-NJ), and Maximum Likelihood (ML) algorithms of the EMBOSS PHYLIPNEW package, based on our proposed Multiple Sequence Alignment (MSA) similarity score. The local web services are implemented and deployed in two ways, using Soaplab2 and Apache Axis2; both the SOAP and Java Web Service (JWS) deployments provide WSDL endpoints to Taverna Workbench, a workflow manager. The workflow has been validated, its performance has been measured, and its results have been verified. The workflow's execution time is less than ten minutes for inferring a tree with 10,000 bootstrap replicates. This paper thus proposes a new integrated automatic workflow that will benefit bioinformaticians with an intermediate level of knowledge and experience. All local services have been deployed at our portal http://bioservices.sci.psu.ac.th.
Instance-based categorization: automatic versus intentional forms of retrieval.
Neal, A; Hesketh, B; Andrews, S
1995-03-01
Two experiments are reported which attempt to disentangle the relative contribution of intentional and automatic forms of retrieval to instance-based categorization. A financial decision-making task was used in which subjects had to decide whether a bank would approve loans for a series of applicants. Experiment 1 found that categorization was sensitive to instance-specific knowledge, even when subjects had practiced using a simple rule. L. L. Jacoby's (1991) process-dissociation procedure was adapted for use in Experiment 2 to infer the relative contribution of intentional and automatic retrieval processes to categorization decisions. The results provided (1) strong evidence that intentional retrieval processes influence categorization, and (2) some preliminary evidence suggesting that automatic retrieval processes may also contribute to categorization decisions.
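Jacoby's (1991) process-dissociation logic rests on two equations that can be solved directly for the intentional and automatic contributions; the sketch below shows that arithmetic with illustrative inclusion/exclusion scores, not data from these experiments.

```python
# Process-dissociation estimates:
#   P(inclusion) = I + A(1 - I)   (intention and automaticity agree)
#   P(exclusion) = A(1 - I)       (automaticity opposes intention)
def process_dissociation(p_inclusion, p_exclusion):
    intentional = p_inclusion - p_exclusion
    automatic = p_exclusion / (1 - intentional)
    return intentional, automatic

I, A = process_dissociation(p_inclusion=0.70, p_exclusion=0.25)
print(f"intentional: {I:.2f}, automatic: {A:.2f}")  # 0.45 and ~0.45
```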
Ozone process insights from field experiments - part I: overview
NASA Astrophysics Data System (ADS)
Hidy, G. M.
This paper gives an overview of selected approaches recently adopted to analyze observations from field experiments that characterize the tropospheric physics and chemistry of ozone and related oxidation products. Analysis of ambient oxidant and precursor concentration measurements, combined with meteorological observations, has provided important information about tropospheric processes. Projection of the response of tropospheric ozone concentrations to changes in precursor emissions is achieved through emissions-based air quality models (AQMs). These models integrate several "process" elements from source emissions to meteorological and chemical phenomena. Through field campaigns, new knowledge has become available that has enabled workers to better understand the strengths and weaknesses of AQMs and their components. Examples of insightful results include: (a) reconciliation of ambient concentrations of speciated volatile organic compounds (VOCs) with estimates from emissions models and inventories, (b) verification of chemical mechanisms for ozone formation from its precursors using approximations applicable in different chemical regimes, (c) inference of regimes of sensitivity in ozone concentration to changes in VOC and NOx precursors from ozone management practices, (d) conceptualization of important air mass transport and mixing processes on different spatial and temporal scales that affect ozone and precursor concentration distributions, and (e) application of the analysis of spatial and temporal variance to infer the origins of chemical product transport and precursor distributions. Studies from the first category have been used to improve emissions models substantially over previous forms. The remainder of the analyses have yielded valuable insight into the chemical and meteorological mechanisms at work on different spatial and temporal scales. The methods have provided an observationally based framework for effective choices to improve ozone management, notably in terms of NOx- or VOC-sensitive regimes. Investigation of meteorological processes relevant to ozone accumulation has illustrated the importance of accounting for both transport winds and the day-night vertical structure of the atmosphere in AQM analyses. Finally, variance analyses of O3 concentrations against other aerometric parameters offer significant opportunities to use air monitoring data semi-empirically as a means of determining the space and time scales of O3 variance, and of detecting precursor emissions source-ozone receptor relationships.
NASA Astrophysics Data System (ADS)
Jo, Minsang; Ku, Heesuk; Park, Sanghyuk; Song, Junho; Kwon, Kyungjung
2018-07-01
Li[Ni1/3Co1/3Mn1/3]O2 cathode active materials are synthesized from co-precipitated hydroxide precursors Lix[Ni1/3Co1/3Mn1/3]1-x(OH)2, and the effect of residual Li in the precursors on the lithium-ion battery (LIB) performance of their corresponding cathode active materials is investigated. Three kinds of precursors that contain different amounts of Li are selected depending on different conditions of the solution composition for the co-precipitation and washing process. It is confirmed that the introduction of Li to the precursors reduces the degree of structural perfection by X-ray diffraction analysis. Undesirable cation mixing occurs with the increasing Li content of the precursors, which is inferred from a decline in lattice parameters and the calculated intensity ratio of (003) and (104) peaks. In the voltage range of 3.0-4.3 V, the initial charge/discharge capacities and the rate capability of the cathode active materials are aggravated when Li exists in the precursors. Therefore, it could be concluded that the strict control of Li in a solution for co-precipitation of precursors is necessary in the resynthesis of cathode active materials from spent LIBs.
Controlled nucleation and growth of CdS nanoparticles in a polymer matrix.
Di Luccio, Tiziana; Laera, Anna Maria; Tapfer, Leander; Kempter, Susanne; Kraus, Robert; Nickel, Bert
2006-06-29
In-situ synchrotron X-ray diffraction (XRD) was used to monitor the thermal decomposition (thermolysis) of Cd thiolate precursors embedded in a polymer matrix and the nucleation of CdS nanoparticles. A thiolate precursor/polymer solid foil was heated to 300 degrees C in the X-ray diffraction setup of beamline W1.1 at Hasylab, and diffraction curves were recorded at 10 degree C intervals. At temperatures above 240 degrees C, the precursor decomposition is complete and CdS nanoparticles grow within the polymer matrix, forming a nanocomposite with interesting optical properties. The nanoparticle structural properties (size and crystal structure) depend on the annealing temperature. Transmission electron microscopy (TEM) and photoluminescence (PL) analyses were used to characterize the nanoparticles. A possible mechanism driving the structural transformation of the precursor is inferred from the diffraction features arising at the different temperatures.
Nanoscale Transforming Mineral Phases in Fresh Nacre.
DeVol, Ross T; Sun, Chang-Yu; Marcus, Matthew A; Coppersmith, Susan N; Myneni, Satish C B; Gilbert, Pupa U P A
2015-10-21
Nacre, or mother-of-pearl, the iridescent inner layer of many mollusk shells, is a biomineral lamellar composite of aragonite (CaCO3) and organic sheets. Biomineralization frequently occurs via transient amorphous precursor phases, crystallizing into the final stable biomineral. In nacre, despite extensive attempts, amorphous calcium carbonate (ACC) precursors have remained elusive. They were inferred from non-nacre-forming larval shells, or from a residue of amorphous material surrounding mature gastropod nacre tablets, and have only once been observed in bivalve nacre. Here we present the first direct observation of ACC precursors to nacre formation, obtained from the growth front of nacre in gastropod shells from red abalone (Haliotis rufescens), using synchrotron spectromicroscopy. Surprisingly, the abalone nacre data show the same ACC phases that are precursors to calcite (CaCO3) formation in sea urchin spicules, and not proto-aragonite or poorly crystalline aragonite (pAra), as expected for aragonitic nacre. In contrast, we find pAra in coral.
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeVol, Ross T.; Sun, Chang-Yu; Marcus, Matthew A.
Nacre, or mother-of-pearl, the iridescent inner layer of many mollusk shells, is a biomineral lamellar composite of aragonite (CaCO3) and organic sheets. Biomineralization frequently occurs via transient amorphous precursor phases, crystallizing into the final stable biomineral. In nacre, despite extensive attempts, amorphous calcium carbonate (ACC) precursors have remained elusive. They were inferred from non-nacre-forming larval shells, or from a residue of amorphous material surrounding mature gastropod nacre tablets, and have only once been observed in bivalve nacre. Here we present the first direct observation of ACC precursors to nacre formation, obtained from the growth front of nacre in gastropod shells from red abalone (Haliotis rufescens), using synchrotron spectromicroscopy. Surprisingly, the abalone nacre data show the same ACC phases that are precursors to calcite (CaCO3) formation in sea urchin spicules, and not proto-aragonite or poorly crystalline aragonite (pAra), as expected for aragonitic nacre. In contrast, we find pAra in coral.
NASA Astrophysics Data System (ADS)
Gutiérrez-Fragoso, K.; Acosta-Mesa, H. G.; Cruz-Ramírez, N.; Hernández-Jiménez, R.
2013-12-01
Cervical cancer has remained, until now, a serious public health problem in developing countries. The most common method of screening is the Pap test or cytology. When abnormalities are reported in the result, the patient is referred to a dysplasia clinic for colposcopy. During this test, a solution of acetic acid is applied, which produces a color change in the tissue known as the acetowhitening phenomenon. This reaction guides the collection of a tissue sample whose histological analysis allows a final diagnosis to be established. During the colposcopy test, digital images can be acquired to analyze the behavior of the acetowhitening reaction from a temporal approach. In this way, we try to identify precursor lesions of cervical cancer through a process of automatic classification of acetowhite temporal patterns. In this paper, we present a performance analysis of three classification methods: kNN, Naïve Bayes and C4.5. The results showed that there is similarity between some acetowhite temporal patterns of normal and abnormal tissues. We therefore conclude that it is not sufficient to consider only the temporal dynamics of the acetowhitening reaction to establish a diagnosis by an automatic method. Information from the cytologic, colposcopic and histopathologic disciplines should be integrated as well.
Control Algorithms For Liquid-Cooled Garments
NASA Technical Reports Server (NTRS)
Drew, B.; Harner, K.; Hodgson, E.; Homa, J.; Jennings, D.; Yanosy, J.
1988-01-01
Three algorithms developed for control of cooling in protective garments. Metabolic rate inferred from temperatures of cooling liquid outlet and inlet, suitably filtered to account for thermal lag of human body. Temperature at inlet adjusted to value giving maximum comfort at inferred metabolic rate. Applicable to space suits, used for automatic control of cooling in suits worn by workers in radioactive, polluted, or otherwise hazardous environments. More effective than manual control, subject to frequent, overcompensated adjustments as level of activity varies.
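A minimal sketch of the described control loop, assuming a simple coolant heat balance, a first-order filter standing in for the body's thermal lag, and an invented linear comfort map; all constants are illustrative, not flight values.

```python
# Infer metabolic load from coolant temperature rise, filter it, and
# map the filtered load to an inlet-temperature setpoint. Illustrative.
def step(t_in, t_out, state, mdot=0.03, cp=4186.0, alpha=0.05):
    q = mdot * cp * (t_out - t_in)           # heat picked up by coolant (W)
    state = (1 - alpha) * state + alpha * q  # first-order filter (thermal lag)
    setpoint = 25.0 - 0.02 * state           # harder work -> cooler inlet
    return state, setpoint

state = 0.0
for t_out in [26.0, 27.0, 28.0, 28.0]:       # coolant outlet warming up
    state, sp = step(t_in=24.0, t_out=t_out, state=state)
    print(f"filtered load {state:6.1f} W -> inlet setpoint {sp:5.2f} C")
```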
The effect of topography of upper-mantle discontinuities on SS precursors
NASA Astrophysics Data System (ADS)
Koroni, Maria; Trampert, Jeannot
2016-01-01
Using the spectral-element method, we explored the effect of topography of upper-mantle discontinuities on the traveltimes of SS precursors recorded on transverse component seismograms. The latter are routinely used to infer the topography of mantle transition zone discontinuities. The step from precursory traveltimes to topographic changes is mainly done using linearised ray theory, or sometimes using finite-frequency kernels. We simulated exact seismograms in 1-D and 3-D elastic models of the mantle. In a second simulation, we added topography to the discontinuities. We compared the waveforms obtained with and without topography by cross correlation of the SS precursors. Since we did not add noise, the precursors are visible in individual seismograms without the need for stacking. The resulting time anomalies were then converted into topographic variations and compared to the original topographic models. Based on the correlation between initial and inferred models, and provided that ray coverage is good, we found that linearised ray theory gives a relatively good idea of the location of the uplifts and depressions of the discontinuities. It seriously underestimates the amplitude of the topographic variations, by a factor ranging between 2 and 7. Real data depend on the 3-D elastic structure and the topography. All studies to date correct for the 3-D elastic effects assuming that the traveltimes can be linearly decomposed into a structure and a discontinuity part. We found a strong non-linearity in this decomposition which cannot be modelled without a fully non-linear inversion for elastic structure and discontinuities simultaneously.
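The waveform-comparison step can be sketched as a cross-correlation delay measurement between a precursor computed without topography and one computed with it; the Gaussian wavelets, delay, and sampling interval are illustrative.

```python
# Measure a precursor traveltime anomaly by cross-correlation.
# Synthetic wavelets stand in for the spectral-element seismograms.
import numpy as np

dt = 0.1                                    # sampling interval (s)
t = np.arange(0, 60, dt)
wavelet = lambda t0: np.exp(-((t - t0) / 1.5) ** 2)

ref = wavelet(30.0)                         # precursor, flat discontinuity
obs = wavelet(31.2)                         # precursor, with topography

lags = np.arange(-len(t) + 1, len(t)) * dt
delay = lags[np.argmax(np.correlate(obs, ref, mode="full"))]
print(f"traveltime anomaly: {delay:.1f} s")  # 1.2 s late arrival
```

Such a time anomaly is then mapped to a depth perturbation of the discontinuity via the ray geometry, which is where the linearised approximation enters.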
The effect of topography of upper mantle discontinuities on SS precursors
NASA Astrophysics Data System (ADS)
Koroni, M.; Trampert, J.
2015-12-01
We assessed the reliability of methods used to infer the topography of the mantle transition zone discontinuities. In particular, using the spectral-element method, we explored the effect of topography of the '410' and '660' mantle discontinuities on the travel times of SS precursors recorded on transverse component seismograms. The latter are routinely used to infer the topography of mantle transition zone discontinuities. The step from precursory travel times to topographic changes is mainly done using linearised ray theory, or sometimes using finite frequency kernels. We simulated exact seismograms in 1-D and 3-D elastic models of the mantle. In a second simulation, we added topography to the discontinuities. We compared the waveforms obtained with and without topography by cross-correlation of the SS precursors. Since we did not add noise, the precursors are visible in individual seismograms without the need for stacking. The resulting time anomalies were then converted into topographic variations and compared to the original models of topography. We found that linearised ray theory gives a relatively good idea of the location of the uplifts and depressions of the discontinuities, provided that the ray coverage is good, although it seriously underestimates the amplitude of the topography. The amplitude of the topographic variation is underestimated on average by a factor of 2.8 for the '660' and of 4.5 for the '410'. Additionally, we found a strong non-linearity in the measured data which cannot be modelled without a fully non-linear inversion for elastic structure and discontinuities simultaneously.
Shi, Longxiang; Li, Shijian; Yang, Xiaoran; Qi, Jiaheng; Pan, Gang; Zhou, Binbin
2017-01-01
With the explosion of healthcare information, there has been a tremendous amount of heterogeneous textual medical knowledge (TMK), which plays an essential role in healthcare information systems. Existing works for integrating and utilizing the TMK mainly focus on establishing straightforward connections and pay less attention to enabling computers to interpret and retrieve knowledge correctly and quickly. In this paper, we explore a novel model to organize and integrate the TMK into conceptual graphs. We then employ a framework to automatically retrieve knowledge in knowledge graphs with high precision. In order to perform reasonable inference on knowledge graphs, we propose a contextual inference pruning algorithm to achieve efficient chain inference. Our algorithm achieves a better inference result, with precision and recall of 92% and 96%, respectively, and avoids most of the meaningless inferences. In addition, we implement two prototypes and provide services, and the results show our approach is practical and effective.
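The flavor of chain inference with contextual pruning can be sketched as path expansion over triples with a compatibility check that discards meaningless continuations; the toy graph, semantic types, and pruning rule are invented, not the paper's algorithm or data.

```python
# Chain inference over a tiny knowledge graph with contextual pruning:
# expansions whose semantic-type pair is not contextually compatible
# are dropped. Entirely illustrative.
triples = [("cold", "has_symptom", "cough"),
           ("cough", "treated_by", "antitussive"),
           ("antitussive", "produced_in", "factory")]  # true but unhelpful
types = {"cold": "disease", "cough": "symptom",
         "antitussive": "drug", "factory": "facility"}
compatible = {("disease", "symptom"), ("symptom", "drug")}  # context rules

def chain_infer(start, max_hops=3):
    frontier, chains = [(start, [start])], []
    for _ in range(max_hops):
        next_frontier = []
        for node, path in frontier:
            for s, r, o in triples:
                if s != node:
                    continue
                if (types[s], types[o]) not in compatible:
                    continue               # contextual pruning
                next_frontier.append((o, path + [o]))
                chains.append(path + [o])
        frontier = next_frontier
    return chains

print(chain_infer("cold"))
# [['cold', 'cough'], ['cold', 'cough', 'antitussive']] - factory hop pruned
```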
Vanegas, Carlos A; Aliaga, Daniel G; Benes, Bedrich; Waddell, Paul
2009-01-01
Urban simulation models and their visualization are used to help regional planning agencies evaluate alternative transportation investments, land use regulations, and environmental protection policies. Typical urban simulations provide spatially distributed data about the number of inhabitants, land prices, traffic, and other variables. In this article, we build on a synergy of urban simulation, urban visualization, and computer graphics to automatically infer an urban layout for any time step of the simulation sequence. In addition to standard visualization tools, our method gathers data of the original street network, parcels, and aerial imagery and uses the available simulation results to infer changes to the original urban layout and produce a new and plausible layout for the simulation results. In contrast with previous work, our approach automatically updates the layout based on changes in the simulation data and thus can scale to a large simulation over many years. The method in this article offers a substantial step forward in building integrated visualization and behavioral simulation systems for use in community visioning, planning, and policy analysis. We demonstrate our method on several real cases using a 200 GB database for a 16,300 km2 area surrounding Seattle.
First order augmentation to tensor voting for boundary inference and multiscale analysis in 3D.
Tong, Wai-Shun; Tang, Chi-Keung; Mordohai, Philippos; Medioni, Gérard
2004-05-01
Most computer vision applications require the reliable detection of boundaries. In the presence of outliers, missing data, orientation discontinuities, and occlusion, this problem is particularly challenging. We propose to address it by complementing the tensor voting framework, which was limited to second order properties, with first order representation and voting. First order voting fields and a mechanism to vote for 3D surface and volume boundaries and curve endpoints in 3D are defined. Boundary inference is also useful for a second difficult problem in grouping, namely, automatic scale selection. We propose an algorithm that automatically infers the smallest scale that can preserve the finest details. Our algorithm then proceeds with progressively larger scales to ensure continuity where it has not been achieved. Therefore, the proposed approach does not oversmooth features or delay the handling of boundaries and discontinuities until model misfit occurs. The interaction of smooth features, boundaries, and outliers is accommodated by the unified representation, making possible the perceptual organization of data in curves, surfaces, volumes, and their boundaries simultaneously. We present results on a variety of data sets to show the efficacy of the improved formalism.
Well-Being Tracking via Smartphone-Measured Activity and Sleep: Cohort Study.
DeMasi, Orianna; Feygin, Sidney; Dembo, Aluma; Aguilera, Adrian; Recht, Benjamin
2017-10-05
Automatically tracking mental well-being could facilitate personalization of treatments for mood disorders such as depression and bipolar disorder. Smartphones present a novel and ubiquitous opportunity to track individuals' behavior and may be useful for inferring and automatically monitoring mental well-being. The aim of this study was to assess the extent to which activity and sleep tracking with a smartphone can be used for monitoring individuals' mental well-being. A cohort of 106 individuals was recruited to install an app on their smartphone that would track their well-being with daily surveys and track their behavior with activity inferences from their phone's accelerometer data. Of the participants recruited, 53 had sufficient data to infer activity and sleep measures. For this subset of individuals, we related measures of activity and sleep to the individuals' well-being and used these measures to predict their well-being. We found that smartphone-measured approximations for daily physical activity were positively correlated with both mood (P=.004) and perceived energy level (P<.001). Sleep duration was positively correlated with mood (P=.02) but not energy. Our measure for sleep disturbance was not found to be significantly related to either mood or energy, which could imply too much noise in the measurement. Models predicting the well-being measures from the activity and sleep measures were found to be significantly better than naive baselines (P<.01), despite modest overall improvements. Measures of activity and sleep inferred from smartphone activity were strongly related to and somewhat predictive of participants' well-being. Whereas the improvement over naive models was modest, it reaffirms the importance of considering physical activity and sleep for predicting mood and for making automatic mood monitoring a reality. ©Orianna DeMasi, Sidney Feygin, Aluma Dembo, Adrian Aguilera, Benjamin Recht. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 05.10.2017.
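A minimal sketch of the kind of correlation analysis reported, on synthetic stand-in data rather than the study's:

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic stand-in data: 60 days of smartphone-inferred activity (min/day)
# and daily self-reported mood on a 1-5 scale.
rng = np.random.default_rng(0)
daily_activity = rng.gamma(shape=2.0, scale=30.0, size=60)
mood = 3.0 + 0.01 * daily_activity + rng.normal(0.0, 0.5, size=60)

# Pearson correlation, analogous to the activity-mood association reported.
r, p = pearsonr(daily_activity, mood)
print(f"r = {r:.2f}, p = {p:.3g}")
```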
Knowledge requirements for automated inference of medical textbook markup.
Berrios, D. C.; Kehler, A.; Fagan, L. M.
1999-01-01
Indexing medical text in journals or textbooks requires a tremendous amount of resources. We tested two algorithms for automatically indexing nouns, noun-modifiers, and noun phrases, and inferring selected binary relations between UMLS concepts in a textbook of infectious disease. Sixty-six percent of nouns and noun-modifiers and 81% of noun phrases were correctly matched to UMLS concepts. Semantic relations were identified with 100% specificity and 94% sensitivity. For some medical sub-domains, these algorithms could permit expeditious generation of more complex indexing. PMID:10566445
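A toy sketch of dictionary-based concept matching of the sort evaluated here; the mini-lexicon and concept IDs are invented placeholders, not UMLS content:

```python
# Invented mini-lexicon mapping normalized phrases to UMLS-style concept IDs.
LEXICON = {
    "infective endocarditis": "C_ENDOCARDITIS",
    "endocarditis": "C_ENDOCARDITIS",
    "staphylococcus aureus": "C_S_AUREUS",
}

def match_phrase(phrase):
    """Greedy longest-prefix match: try the full normalized phrase first,
    then back off one word at a time from the right."""
    words = phrase.lower().split()
    for n in range(len(words), 0, -1):
        candidate = " ".join(words[:n])
        if candidate in LEXICON:
            return candidate, LEXICON[candidate]
    return None

print(match_phrase("Infective endocarditis of the aortic valve"))
```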
Xu, Yaoshan; Li, Yongjuan; Ding, Weidong; Lu, Fan
2014-01-01
This study explores the precursors of employees' safety behaviors based on a dual-process model, which suggests that human behaviors are determined by both controlled and automatic cognitive processes. Employees' responses to a self-reported survey on safety attitudes capture their controlled cognitive process, while the automatic association concerning safety measured by an Implicit Association Test (IAT) reflects employees' automatic cognitive processes about safety. In addition, this study investigates the moderating effects of inhibition on the relationship between self-reported safety attitude and safety behavior, and that between automatic associations towards safety and safety behavior. The results suggest significant main effects of self-reported safety attitude and automatic association on safety behaviors. Further, the interaction between self-reported safety attitude and inhibition and that between automatic association and inhibition each predict unique variances in safety behavior. Specifically, the safety behaviors of employees with lower level of inhibitory control are influenced more by automatic association, whereas those of employees with higher level of inhibitory control are guided more by self-reported safety attitudes. These results suggest that safety behavior is the joint outcome of both controlled and automatic cognitive processes, and the relative importance of these cognitive processes depends on employees' individual differences in inhibitory control. The implications of these findings for theoretical and practical issues are discussed at the end. PMID:24520338
Nanoscale Transforming Mineral Phases in Fresh Nacre
DeVol, Ross T.; Sun, Chang-Yu; Marcus, Matthew A.; ...
2015-09-24
Nacre, or mother-of-pearl, the iridescent inner layer of many mollusk shells, is a biomineral lamellar composite of aragonite (CaCO3) and organic sheets. Biomineralization frequently occurs via transient amorphous precursor phases, crystallizing into the final stable biomineral. In nacre, despite extensive attempts, amorphous calcium carbonate (ACC) precursors have remained elusive. They were inferred from non-nacre-forming larval shells, or from a residue of amorphous material surrounding mature gastropod nacre tablets, and have only once been observed in bivalve nacre. Here we present the first direct observation of ACC precursors to nacre formation, obtained from the growth front of nacre in gastropod shells from red abalone (Haliotis rufescens), using synchrotron spectromicroscopy. Surprisingly, the abalone nacre data show the same ACC phases that are precursors to calcite (CaCO3) formation in sea urchin spicules, and not proto-aragonite or poorly crystalline aragonite (pAra), as expected for aragonitic nacre. In contrast, we find pAra in coral.
Context Inference for Mobile Applications in the UPCASE Project
NASA Astrophysics Data System (ADS)
Santos, André C.; Tarrataca, Luís; Cardoso, João M. P.; Ferreira, Diogo R.; Diniz, Pedro C.; Chainho, Paulo
The growing processing capabilities of mobile devices coupled with portable and wearable sensors have enabled the development of context-aware services tailored to the user environment and its daily activities. The problem of determining the user context at each particular point in time is one of the main challenges in this area. In this paper, we describe the approach pursued in the UPCASE project, which makes use of sensors available in the mobile device as well as sensors externally connected via Bluetooth. We describe the system architecture from raw data acquisition to feature extraction and context inference. As a proof of concept, the inference of contexts is based on a decision tree to learn and identify contexts automatically and dynamically at runtime. Preliminary results suggest that this is a promising approach for context inference in several application scenarios.
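A minimal sketch of the decision-tree context inference described, with invented feature names and context labels:

```python
from sklearn.tree import DecisionTreeClassifier

# Invented features per observation window:
# [mean acceleration (g), ambient noise (dB), hour of day]
X = [[0.1, 35, 9], [0.9, 60, 13], [0.1, 40, 22], [1.2, 70, 18]]
y = ["working", "walking", "resting", "commuting"]  # invented context labels

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict([[0.8, 65, 17]]))  # classify a new sensor window
```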
NASA Technical Reports Server (NTRS)
Harrison, P. Ann
1992-01-01
The NASA VEGetation Workbench (VEG) is a knowledge based system that infers vegetation characteristics from reflectance data. The VEG subgoal PROPORTION.GROUND.COVER has been completed and a number of additional techniques that infer the proportion ground cover of a sample have been implemented. Some techniques operate on sample data at a single wavelength. The techniques previously incorporated in VEG for other subgoals operated on data at a single wavelength so implementing the additional single wavelength techniques required no changes to the structure of VEG. Two techniques which use data at multiple wavelengths to infer proportion ground cover were also implemented. This work involved modifying the structure of VEG so that multiple wavelength techniques could be incorporated. All the new techniques were tested using both the VEG 'Research Mode' and the 'Automatic Mode.'
DeepInfer: open-source deep learning deployment toolkit for image-guided therapy
NASA Astrophysics Data System (ADS)
Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang
2017-03-01
Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.
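A minimal sketch of the registry pattern described, with hypothetical registry entries and a stub loader; this is not DeepInfer's actual API:

```python
# Hypothetical model registry: task name -> location of a trained model.
MODEL_REGISTRY = {
    "prostate-segmentation": "https://example.org/models/prostate.onnx",
    "spine-target-plane": "https://example.org/models/spine.onnx",
}

class StubModel:
    """Stand-in for a deserialized trained model."""
    def __init__(self, url):
        self.url = url
    def predict(self, volume):
        return f"prediction for {volume} from model at {self.url}"

def load_pretrained(url):
    # A real toolkit would download and deserialize the model here.
    return StubModel(url)

def deploy(task, volume):
    """Select the registered model for a task and apply it to new data."""
    return load_pretrained(MODEL_REGISTRY[task]).predict(volume)

print(deploy("prostate-segmentation", "case_017_T2w.nrrd"))
```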
Before They Can Speak, They Must Know.
ERIC Educational Resources Information Center
Cromie, William J.; Edson, Lee
1984-01-01
Intelligent relationships with people are among the goals for tomorrow's computers. Knowledge-based systems used and being developed to achieve these goals are discussed. Automatic learning, producing inferences, parallelism, program languages, friendly machines, computer vision, and biomodels are among the topics considered. (JN)
Spunt, Robert P; Lieberman, Matthew D
2013-01-01
Much social-cognitive processing is believed to occur automatically; however, the relative automaticity of the brain systems underlying social cognition remains largely undetermined. We used functional MRI to test for automaticity in the functioning of two brain systems that research has indicated are important for understanding other people's behavior: the mirror neuron system and the mentalizing system. Participants remembered either easy phone numbers (low cognitive load) or difficult phone numbers (high cognitive load) while observing actions after adopting one of four comprehension goals. For all four goals, mirror neuron system activation showed relatively little evidence of modulation by load; in contrast, the association of mentalizing system activation with the goal of inferring the actor's mental state was extinguished by increased cognitive load. These results support a dual-process model of the brain systems underlying action understanding and social cognition; the mirror neuron system supports automatic behavior identification, and the mentalizing system supports controlled social causal attribution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chertkov, Michael
2012-07-24
The goal of the DTRA project is to develop a mathematical framework that will provide the fundamental understanding of network survivability, algorithms for detecting/inferring pre-cursors of abnormal network behaviors, and methods for network adaptability and self-healing from cascading failures.
To better understand human exposure to perfluorinated compounds (PFCs), a model that assesses exposure to perfluorooctane sulfonate (PFOS) and its precursors from both an intake and a body burden perspective and combines the two with a simple pharmacokinetic (PK) model is demonst...
Timing of repetition suppression of event-related potentials to unattended objects.
Stefanics, Gabor; Heinzle, Jakob; Czigler, István; Valentini, Elia; Stephan, Klaas Enno
2018-05-26
Current theories of object perception emphasize the automatic nature of perceptual inference. Repetition suppression (RS), the successive decrease of brain responses to repeated stimuli, is thought to reflect the optimization of perceptual inference through neural plasticity. While functional imaging studies revealed brain regions that show suppressed responses to the repeated presentation of an object, little is known about the intra-trial time course of repetition effects to everyday objects. Here we recorded event-related potentials (ERPs) to task-irrelevant line-drawn objects while participants engaged in a distractor task. We quantified changes in ERPs over repetitions using three general linear models (GLM) that modelled RS by an exponential, linear, or categorical "change detection" function in each subject. Our aim was to select the model with highest evidence and determine the within-trial time course and scalp distribution of repetition effects using that model. Model comparison revealed the superiority of the exponential model, indicating that repetition effects are observable for trials beyond the first repetition. Model parameter estimates revealed a sequence of RS effects in three time windows (86-140 ms, 322-360 ms, and 400-446 ms) with occipital, temporo-parietal, and fronto-temporal distributions, respectively. An interval of repetition enhancement (RE) was also observed (320-340 ms) over occipito-temporal sensors. Our results show that automatic processing of task-irrelevant objects involves multiple intervals of RS with distinct scalp topographies. These sequential intervals of RS and RE might reflect the short-term plasticity required for optimization of perceptual inference and the associated changes in prediction errors (PE) and predictions, respectively, over stimulus repetitions during automatic object processing. This article is protected by copyright. All rights reserved. © 2018 The Authors European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Krýsová, Hana; Krýsa, Josef; Kavan, Ladislav
2018-01-01
For proper function of the negative electrode of dye-sensitized and perovskite solar cells, the deposition of a nonporous blocking film is required on the surface of F-doped SnO2 (FTO) glass substrates. Such a blocking film can minimise undesirable parasitic processes; for example, the back reaction of photoinjected electrons with the oxidized form of the redox mediator or with the hole-transporting medium can be avoided. In the present work, thin, transparent, blocking TiO2 films are prepared by semi-automatic spray pyrolysis of precursors consisting of titanium diisopropoxide bis(acetylacetonate) as the main component. The variation in the layer thickness of the sprayed films is achieved by varying the number of spray cycles. The parameters investigated in this work were deposition temperature (150, 300 and 450 °C), number of spray cycles (20-200), precursor composition (with/without deliberately added acetylacetone), concentration (0.05 and 0.2 M) and subsequent post-calcination at 500 °C. The photo-electrochemical properties were evaluated in aqueous electrolyte solution under UV irradiation. The blocking properties were tested by cyclic voltammetry with a model redox probe with a simple one-electron-transfer reaction. Semi-automatic spraying resulted in the formation of transparent, homogeneous TiO2 films, and the technique allows for easy upscaling to large electrode areas. A deposition temperature of 450 °C was necessary for the fabrication of highly photoactive TiO2 films. The blocking properties of the as-deposited TiO2 films (at 450 °C) were impaired by post-calcination at 500 °C, but this problem could be addressed by increasing the number of spray cycles. The modification of the precursor by adding acetylacetone resulted in the fabrication of TiO2 films exhibiting perfect blocking properties that were not influenced by post-calcination. These results will surely find use in the fabrication of large-scale dye-sensitized and perovskite solar cells. PMID:29719764
Avendi, M R; Kheradvar, Arash; Jafarkhani, Hamid
2016-05-01
Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground truth data. Convolutional networks are employed to automatically detect the LV chamber in the MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the-art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, obtained by other methods, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
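The Dice metric quoted above is standard; as a concrete example of how such a validation metric is computed between predicted and ground-truth masks:

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# Two toy 64x64 masks whose squares overlap partially.
a = np.zeros((64, 64)); a[20:40, 20:40] = 1
b = np.zeros((64, 64)); b[22:42, 22:42] = 1
print(round(dice(a, b), 3))  # 0.81
```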
Laurence, Ted A; Bude, Jeff D; Ly, Sonny; Shen, Nan; Feit, Michael D
2012-05-07
Surface laser damage limits the lifetime of optics for systems guiding high fluence pulses, particularly damage in silica optics used for inertial confinement fusion-class lasers (nanosecond-scale high energy pulses at 355 nm/3.5 eV). The density of damage precursors at low fluence has been measured using large beams (1-3 cm); higher fluences cannot be measured easily since the high density of resulting damage initiation sites results in clustering. We developed automated experiments and analysis that allow us to damage test thousands of sites with small beams (10-30 µm), and automatically image the test sites to determine if laser damage occurred. We developed an analysis method that provides a rigorous connection between these small beam damage test results of damage probability versus laser pulse energy and the large beam damage results of damage precursor densities versus fluence. We find that for uncoated and coated fused silica samples, the distribution of precursors nearly flattens at very high fluences, up to 150 J/cm2, providing important constraints on the physical distribution and nature of these precursors.
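The connection between small-beam damage probability and precursor density is commonly modelled with Poisson statistics; a hedged sketch, with illustrative numbers:

```python
import numpy as np

def precursor_density(p_damage, beam_area_cm2):
    """Precursor density (cm^-2) from a measured damage fraction, assuming
    Poisson statistics: P = 1 - exp(-rho * A)."""
    return -np.log(1.0 - np.asarray(p_damage)) / beam_area_cm2

# Illustrative: a ~15 um diameter spot treated as a hard aperture.
beam_area = np.pi * (15e-4 / 2.0) ** 2  # cm^2
print(precursor_density([0.05, 0.50, 0.95], beam_area))
```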
A new method for automatic discontinuity traces sampling on rock mass 3D model
NASA Astrophysics Data System (ADS)
Umili, G.; Ferrero, A.; Einstein, H. H.
2013-02-01
A new automatic method for discontinuity trace mapping and sampling on a rock mass digital model is described in this work. The implemented procedure allows one to automatically identify discontinuity traces on a Digital Surface Model: traces are detected directly as surface breaklines, by means of the maximum and minimum principal curvature values of the vertices that constitute the model surface. Color influence and user errors, which usually characterize trace mapping on images, are eliminated. Trace sampling procedures based on circular windows and circular scanlines have also been implemented: they are used to infer trace data and to calculate values of mean trace length, expected discontinuity diameter and intensity of rock discontinuities. The method is tested on a case study: results obtained applying the automatic procedure on the DSM of a rock face are compared to those obtained performing manual sampling on the orthophotograph of the same rock face.
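A toy sketch of the curvature-threshold detection rule described, assuming per-vertex principal curvatures have already been estimated on the DSM; the thresholds are illustrative:

```python
import numpy as np

def trace_vertices(k_max, k_min, t_max, t_min):
    """Indices of mesh vertices flagged as lying on surface breaklines.

    k_max, k_min: per-vertex maximum/minimum principal curvatures.
    t_max, t_min: positive thresholds (site-dependent, assumed here).
    """
    k_max, k_min = np.asarray(k_max), np.asarray(k_min)
    convex = k_max > t_max    # ridge-like breaklines
    concave = k_min < -t_min  # valley-like breaklines
    return np.where(convex | concave)[0]

print(trace_vertices([0.1, 2.5, 0.2], [-0.1, -0.2, -3.0], t_max=1.0, t_min=1.0))
```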
Designing for Automatic Affect Inference in Learning Environments
ERIC Educational Resources Information Center
Afzal, Shazia; Robinson, Peter
2011-01-01
Emotions play a significant role in healthy cognitive functioning; they impact memory, attention, decision-making and attitude; and are therefore influential in learning and achievement. Consequently, affective diagnoses constitute an important aspect of human teacher-learner interactions motivating efforts to incorporate skills of affect…
Automated software system for checking the structure and format of ACM SIG documents
NASA Astrophysics Data System (ADS)
Mirza, Arsalan Rahman; Sah, Melike
2017-04-01
Microsoft (MS) Office Word is one of the most commonly used software tools for creating documents. MS Word 2007 and above uses XML to represent the structure of MS Word documents. Metadata about the documents are automatically created using Office Open XML (OOXML) syntax. We develop a new framework, called ADFCS (Automated Document Format Checking System), that takes advantage of the OOXML metadata in order to extract semantic information from MS Office Word documents. In particular, we develop a new ontology for Association for Computing Machinery (ACM) Special Interest Group (SIG) documents, representing the structure and format of these documents by using OWL (Web Ontology Language). Then, the metadata is extracted automatically in RDF (Resource Description Framework) according to this ontology using the developed software. Finally, we generate extensive rules in order to infer whether the documents are formatted according to ACM SIG standards. This paper introduces the ACM SIG ontology, the metadata extraction process, the inference engine, the ADFCS online user interface, the system evaluation and user study evaluations.
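A minimal sketch of the first step, reading paragraph style names out of a document's OOXML part with only the standard library; the ontology-based rule checking would sit on top of output like this, and the example style name is hypothetical:

```python
import zipfile
import xml.etree.ElementTree as ET

W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def paragraph_styles(docx_path):
    """Return the style name of each paragraph in word/document.xml."""
    with zipfile.ZipFile(docx_path) as z:
        root = ET.fromstring(z.read("word/document.xml"))
    styles = []
    for p in root.iter(f"{W}p"):
        s = p.find(f"{W}pPr/{W}pStyle")
        styles.append(s.get(f"{W}val") if s is not None else "Normal")
    return styles

# Example rule check (hypothetical style name for an ACM SIG title):
# assert paragraph_styles("paper.docx")[0] == "Title"
```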
Galbadrakh, Bulgan; Lee, Kyung-Eun; Park, Hyun-Seok
2012-12-01
Grammatical inference methods are expected to find grammatical structures hidden in biological sequences, and one hopes that studies of grammar can serve as an appropriate tool for theory formation. We have therefore developed JSequitur for automatically generating the grammatical structure of biological sequences within an inference framework based on string compression algorithms. Our original motivation was to find grammatical traits of several cancer genes that can be detected by string compression algorithms. We have not yet found any meaningful traits unique to the cancer genes, but we observed some interesting relationships among gene length, sequence similarity, the patterns of the generated grammar, and compression rate.
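A toy illustration of Sequitur-style grammar inference: repeatedly replace the most frequent repeated digram with a new rule. This is a simplification; real Sequitur enforces digram uniqueness and rule utility incrementally:

```python
from collections import Counter

def infer_grammar(seq):
    """Replace the most frequent repeated digram with a new rule until no
    digram repeats; returns the compressed sequence and the rule set."""
    rules, next_id = {}, 0
    seq = list(seq)
    while True:
        digrams = Counter(zip(seq, seq[1:]))
        if not digrams:
            break
        digram, count = digrams.most_common(1)[0]
        if count < 2:
            break
        name = f"R{next_id}"
        next_id += 1
        rules[name] = digram
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == digram:
                out.append(name)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

compressed, rules = infer_grammar("ATGCATGCATGC")
print(compressed, rules)  # grammar size vs. original length ~ compression rate
```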
Cellular compartmentalization of secondary metabolism
Kistler, H. Corby; Broz, Karen
2015-01-01
Fungal secondary metabolism is often considered apart from the essential housekeeping functions of the cell. However, there are clear links between fundamental cellular metabolism and the biochemical pathways leading to secondary metabolite synthesis. Besides utilizing key biochemical precursors shared with the most essential processes of the cell (e.g., amino acids, acetyl CoA, NADPH), enzymes for secondary metabolite synthesis are compartmentalized at conserved subcellular sites that position pathway enzymes to use these common biochemical precursors. Co-compartmentalization of secondary metabolism pathway enzymes also may function to channel precursors, promote pathway efficiency and sequester pathway intermediates and products from the rest of the cell. In this review we discuss the compartmentalization of three well-studied fungal secondary metabolite biosynthetic pathways for penicillin G, aflatoxin and deoxynivalenol, and summarize evidence used to infer subcellular localization. We also discuss how these metabolites potentially are trafficked within the cell and may be exported. PMID:25709603
Using ADOPT Algorithm and Operational Data to Discover Precursors to Aviation Adverse Events
NASA Technical Reports Server (NTRS)
Janakiraman, Vijay; Matthews, Bryan; Oza, Nikunj
2018-01-01
The US National Airspace System (NAS) is making its transition to the NextGen system, and assuring safety is one of the top priorities in NextGen. At present, safety is managed reactively (corrected after the occurrence of an unsafe event). While this strategy works for current operations, it may soon become ineffective for future airspace designs and high-density operations. There is a need for proactive management of safety risks by identifying hidden and "unknown" risks and evaluating their impacts on future operations. To this end, NASA Ames has developed data mining algorithms that find anomalies and precursors (high-risk states) to safety issues in the NAS. In this paper, we describe a recently developed algorithm called ADOPT that analyzes large volumes of real-world operational data and automatically identifies precursors. Precursors help in detecting safety risks early so that the operator can mitigate the risk in time; they also help identify causal factors and predict the safety incident. The ADOPT algorithm scales well to large data sets and to multidimensional time series, reduces analyst time significantly, and quantifies multiple safety risks, giving a holistic view of safety, among other benefits. This paper details the algorithm and includes several case studies to demonstrate its application to discovering the "known" and "unknown" safety precursors in aviation operations.
English Complex Verb Constructions: Identification and Inference
ERIC Educational Resources Information Center
Tu, Yuancheng
2012-01-01
The fundamental problem faced by automatic text understanding in Natural Language Processing (NLP) is to identify semantically related pieces of text and integrate them together to compute the meaning of the whole text. However, the principle of compositionality runs into trouble very quickly when real language is examined with its frequent…
Personalized professional content recommendation
Xu, Songhua
2015-10-27
A personalized content recommendation system includes a client interface configured to automatically monitor a user's information data stream transmitted on the Internet. A hybrid contextual behavioral and collaborative personal interest inference engine resident on non-transient media generates automatic predictions about the interests of individual users of the system. A database server retains the user's personal interest profile based on a plurality of monitored information. The system also includes a server programmed to filter items in an incoming information stream with the personal interest profile and further programmed to identify only those items of the incoming information stream that substantially match the personal interest profile.
A knowledge-base generating hierarchical fuzzy-neural controller.
Kandadai, R M; Tien, J M
1997-01-01
We present an innovative fuzzy-neural architecture that is able to automatically generate a knowledge base, in an extractable form, for use in hierarchical knowledge-based controllers. The knowledge base is in the form of a linguistic rule base appropriate for a fuzzy inference system. First, we modify Berenji and Khedkar's (1992) GARIC architecture to enable it to automatically generate a knowledge base; a pseudosupervised learning scheme using reinforcement learning and error backpropagation is employed. Next, we further extend this architecture to a hierarchical controller that is able to generate its own knowledge base. Example applications are provided to underscore its viability.
``Carbon Credits'' for Resource-Bounded Computations Using Amortised Analysis
NASA Astrophysics Data System (ADS)
Jost, Steffen; Loidl, Hans-Wolfgang; Hammond, Kevin; Scaife, Norman; Hofmann, Martin
Bounding resource usage is important for a number of areas, notably real-time embedded systems and safety-critical systems. In this paper, we present a fully automatic static type-based analysis for inferring upper bounds on resource usage for programs involving general algebraic datatypes and full recursion. Our method can easily be used to bound any countable resource, without needing to revisit proofs. We apply the analysis to the important metrics of worst-case execution time, stack- and heap-space usage. Our results from several realistic embedded control applications demonstrate good matches between our inferred bounds and measured worst-case costs for heap and stack usage. For time usage we infer good bounds for one application. Where we obtain less tight bounds, this is due to the use of software floating-point libraries.
Inference Engine in an Intelligent Ship Course-Keeping System
2017-01-01
The article presents an original design of an expert system whose function is to automatically stabilize a ship's course. The focus is put on the inference engine, a mechanism that consists of two functional components. One is responsible for the construction of state space regions, implemented on the basis of properly processed signals recorded by sensors from the input and output of an object. The other component is responsible for generating a control decision based on the knowledge obtained in the first module. The computational experiments described herein demonstrate the effective and correct operation of the proposed system. PMID:29317859
A Not-So-Fundamental Limitation on Studying Complex Systems with Statistics: Comment on Rabin (2011)
NASA Astrophysics Data System (ADS)
Thomas, Drew M.
2012-12-01
Although living organisms are affected by many interrelated and unidentified variables, this complexity does not automatically impose a fundamental limitation on statistical inference. Nor need one invoke such complexity as an explanation of the "Truth Wears Off" or "decline" effect; similar "decline" effects occur with far simpler systems studied in physics. Selective reporting and publication bias, and scientists' biases in favor of reporting eye-catching results (in general) or conforming to others' results (in physics) better explain this feature of the "Truth Wears Off" effect than Rabin's suggested limitation on statistical inference.
Sensitivity analysis of seismic waveforms to upper-mantle discontinuities using the adjoint method
NASA Astrophysics Data System (ADS)
Koroni, Maria; Bozdağ, Ebru; Paulssen, Hanneke; Trampert, Jeannot
2017-09-01
Using spectral-element simulations of wave propagation, we investigated the sensitivity of seismic waveforms, recorded on transverse components, to upper-mantle discontinuities in 1-D and 3-D background models. These sensitivity kernels, or Fréchet derivatives, illustrate the spatial sensitivity to model parameters, of which those for shear wave speed and the surface topography of internal boundaries are discussed in this paper. We focus on the boundaries at 400 and 670 km depth of the mantle transition zone. SS precursors have frequently been used to infer the topography of upper-mantle discontinuities. These seismic phases are underside reflections off these boundaries and are usually analysed in the distance range of 110°-160°. This distance range is chosen to minimize the interference from other waves. We show sensitivity kernels for consecutive time windows at three characteristic epicentral distances within the 110°-160° range. The sensitivity kernels are computed with the adjoint method using synthetic data. From our simulations we can draw three main conclusions: (i) The exact Fréchet derivatives show that in all time windows, and also in those centred on the SS precursors, there is interference from other waves. This explains the difficulty reported in the literature to correct for 3-D shear wave speed perturbations, even if the 3-D structure is perfectly known. (ii) All studies attempting to map the topography of the 400 and 670 km discontinuities to date assume that the traveltimes of SS precursors can be linearly decomposed into a 3-D elastic structure and a topography part. We recently showed that such a linear decomposition is not possible for SS precursors, and the sensitivity kernels presented in this paper explain why. (iii) In agreement with previous work, we show that other parts of the seismograms have greater sensitivity to upper-mantle discontinuities than SS precursors, especially multiply bouncing S waves exploiting the S-wave triplications due to the mantle transition zone. These phases can potentially improve the inference of global topographic variations of the upper-mantle discontinuities in the context of full waveform inversion in a joint inversion for (an)elastic parameters and topography.
Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S
2015-02-01
With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.
Gas-phase kinetics modifies the CCN activity of a biogenic SOA.
Vizenor, A E; Asa-Awuku, A A
2018-02-28
Our current knowledge of cloud condensation nuclei (CCN) activity and the hygroscopicity of secondary organic aerosol (SOA) depends on the particle size and composition, explicitly, the thermodynamic properties of the aerosol solute and subsequent interactions with water. Here, we examine the CCN activation of 3 SOA systems (2 biogenic single-precursor and 1 mixed-precursor SOA system) in relation to gas-phase decay. Specifically, the relationship between time, gas-phase precursor decay and CCN activity of 100 nm SOA is studied. The studied SOA systems exhibit a time-dependent growth of CCN activity at an instrument supersaturation of ∼0.2%. As such, we define a critical activation time, t50, above which a 100 nm SOA particle will activate. The critical activation time for isoprene, longifolene and a mixture of the two precursor SOA is 2.01 hours, 2.53 hours and 3.17 hours, respectively. The activation times are then predicted with gas-phase kinetic data inferred from measurements of precursor decay. The gas-phase prediction of t50 agrees well with the CCN-measured t50 (within 0.05 hours of the actual critical times) and suggests that gas-to-particle phase partitioning may be more significant for SOA CCN prediction than previously thought.
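A hedged sketch of how such a gas-phase prediction of t50 could work, assuming first-order precursor decay and a critical reacted fraction; the rate constants and the 0.75 threshold are illustrative choices, not the paper's fitted values:

```python
import math

def t50_from_decay(k, f_crit=0.75):
    """Time (hours) at which a fraction f_crit of the precursor has reacted,
    assuming first-order decay with rate constant k (1/h)."""
    return -math.log(1.0 - f_crit) / k

# Illustrative rate constants chosen so the outputs land near the reported
# critical times (2.01 h and 2.53 h).
for name, k in [("isoprene", 0.69), ("longifolene", 0.55)]:
    print(name, round(t50_from_decay(k), 2), "h")
```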
Affective Behavior and Nonverbal Interaction in Collaborative Virtual Environments
ERIC Educational Resources Information Center
Peña, Adriana; Rangel, Nora; Muñoz, Mirna; Mejia, Jezreel; Lara, Graciela
2016-01-01
While a person's internal state might not be easily inferred by an automatic computer system, within a group, people express themselves through their interaction with others. The group members' interaction can then be helpful for understanding, to a certain extent, their affective behavior toward the task at hand. In this…
ERIC Educational Resources Information Center
Towne, Douglas M.; And Others
Simulation-based software tools that can infer system behaviors from a deep model of the system have the potential for automatically building the semantic representations required to support intelligent tutoring in fault diagnosis. The Intelligent Maintenance Training System (IMTS) is such a resource, designed for use in training troubleshooting…
NASA Astrophysics Data System (ADS)
Ban, Sang-Woo; Lee, Minho
2008-04-01
Knowledge-based clustering and autonomous mental development remain a high-priority research topic, among which the learning techniques of neural networks are used to achieve optimal performance. In this paper, we present a new framework that can automatically generate a relevance map from sensory data that can represent knowledge regarding objects and infer new knowledge about novel objects. The proposed model is based on an understanding of the visual 'what' pathway in our brain. A stereo saliency map model can selectively identify salient object areas by additionally considering a local symmetry feature. The incremental object perception model makes clusters for the construction of an ontology map in the color and form domains in order to perceive an arbitrary object, which is implemented by the growing fuzzy topology adaptive resonance theory (GFTART) network. Log-polar transformed color and form features for a selected object are used as inputs of the GFTART. The clustered information is relevant for describing specific objects, and the proposed model can automatically infer an unknown object by using the learned information. Experimental results with real data have demonstrated the validity of this approach.
Disentangling Complexity in Bayesian Automatic Adaptive Quadrature
NASA Astrophysics Data System (ADS)
Adam, Gheorghe; Adam, Sanda
2018-02-01
The paper describes a Bayesian automatic adaptive quadrature (BAAQ) solution for numerical integration which is simultaneously robust, reliable, and efficient. Detailed discussion is provided of three main factors which contribute to the enhancement of these features: (1) refinement of the m-panel automatic adaptive scheme through the use of integration-domain-length-scale-adapted quadrature sums; (2) fast early problem complexity assessment - enables the non-transitive choice among three execution paths: (i) immediate termination (exceptional cases); (ii) pessimistic - involves time and resource consuming Bayesian inference resulting in radical reformulation of the problem to be solved; (iii) optimistic - asks exclusively for subrange subdivision by bisection; (3) use of the weaker accuracy target from the two possible ones (the input accuracy specifications and the intrinsic integrand properties respectively) - results in maximum possible solution accuracy under minimum possible computing time.
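A generic sketch of the optimistic execution path, adaptive bisection driven by a local error estimate; the Bayesian-inference path of BAAQ is not reproduced here:

```python
import math

def adaptive_quad(f, a, b, tol=1e-8):
    """Recursive adaptive Simpson quadrature with bisection."""
    def simpson(lo, hi):
        mid = 0.5 * (lo + hi)
        return (hi - lo) / 6.0 * (f(lo) + 4.0 * f(mid) + f(hi))
    whole = simpson(a, b)
    m = 0.5 * (a + b)
    left, right = simpson(a, m), simpson(m, b)
    if abs(left + right - whole) < 15.0 * tol:  # classical Simpson error factor
        return left + right
    return adaptive_quad(f, a, m, tol / 2.0) + adaptive_quad(f, m, b, tol / 2.0)

print(adaptive_quad(math.sin, 0.0, math.pi))  # ~2.0
```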
Dorrough, Angela R; Glöckner, Andreas; Betsch, Tilmann; Wille, Anika
2017-10-01
To make decisions in probabilistic inference tasks, individuals integrate relevant information partly in an automatic manner. Thereby, potentially irrelevant stimuli that are additionally presented can intrude on the decision process (e.g., Söllner, Bröder, Glöckner, & Betsch, 2014). We investigate whether such an intrusion effect can also be caused by potentially irrelevant or even misleading knowledge activated from memory. In four studies that combine a standard information board paradigm from decision research with a standard manipulation from social psychology, we investigate the case of stereotypes and demonstrate that stereotype knowledge can yield intrusion biases in probabilistic inferences from description. The magnitude of these biases increases with stereotype accessibility and decreases with a clarification of the rational solution. Copyright © 2017 Elsevier B.V. All rights reserved.
Automatic Estimation of Volcanic Ash Plume Height using WorldView-2 Imagery
NASA Technical Reports Server (NTRS)
McLaren, David; Thompson, David R.; Davies, Ashley G.; Gudmundsson, Magnus T.; Chien, Steve
2012-01-01
We explore the use of machine learning, computer vision, and pattern recognition techniques to automatically identify volcanic ash plumes and plume shadows in WorldView-2 imagery. Using information on the relative positions of the sun and spacecraft, terrain information in the form of a digital elevation map, and the classification results, the height of the ash plume can also be inferred. We present the results from applying this approach to six scenes acquired on two separate days in April and May of 2010 of the Eyjafjallajokull eruption in Iceland. These results show rough agreement with ash plume height estimates from visual and radar-based measurements.
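The geometric step can be illustrated simply: once the plume top and its shadow are classified, height follows from the plume-to-shadow offset and the solar elevation. A sketch with illustrative numbers, ignoring the DEM refinement:

```python
import math

def plume_height(shadow_offset_m, sun_elevation_deg):
    """Plume-top height (m) from the horizontal plume-to-shadow offset,
    assuming locally flat terrain; a DEM refines this in practice."""
    return shadow_offset_m * math.tan(math.radians(sun_elevation_deg))

print(plume_height(shadow_offset_m=12000.0, sun_elevation_deg=25.0))  # ~5600 m
```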
Research and applications: Artificial intelligence
NASA Technical Reports Server (NTRS)
Raphael, B.; Fikes, R. E.; Chaitin, L. J.; Hart, P. E.; Duda, R. O.; Nilsson, N. J.
1971-01-01
A program of research in the field of artificial intelligence is presented. The research areas discussed include automatic theorem proving, representations of real-world environments, problem-solving methods, the design of a programming system for problem-solving research, techniques for general scene analysis based upon television data, and the problems of assembling an integrated robot system. Major accomplishments include the development of a new problem-solving system that uses both formal logical inference and informal heuristic methods, the development of a method of automatic learning by generalization, and the design of the overall structure of a new complete robot system. Eight appendices to the report contain extensive technical details of the work described.
Akashi, A; Yoshida, Y; Nakagoshi, H; Kuroki, K; Hashimoto, T; Tagawa, K; Imamoto, F
1988-10-01
Stabilizing factor, a 9 kDa protein, stabilizes and facilitates formation of the complex between mitochondrial ATP synthase and its intrinsic inhibitor protein. A clone containing the gene encoding the 9 kDa protein was selected from a yeast genomic library to determine the structure of its precursor protein. As deduced from the nucleotide sequence, the precursor of the yeast 9 kDa stabilizing factor contains 86 amino acid residues and has a molecular weight of 10,062. From the predicted sequence we infer that the stabilizing factor precursor contains a presequence of 23 amino acid residues at its amino terminus. We also used S1 mapping to determine the initiation site of transcription under glucose-repressed or derepressed conditions. These experiments suggest that transcription of this gene starts at three different sites and that only one of them is not affected by the presence of glucose.
Deductive Evaluation: Formal Code Analysis With Low User Burden
NASA Technical Reports Server (NTRS)
Di Vito, Ben L.
2016-01-01
We describe a framework for symbolically evaluating iterative C code using a deductive approach that automatically discovers and proves program properties. Although verification is not performed, the method can infer detailed program behavior. Software engineering workflows could be enhanced by this type of analysis. Floyd-Hoare verification principles are applied to synthesize loop invariants, using a library of iteration-specific deductive knowledge. When needed, theorem proving is interleaved with evaluation and performed on the fly. Evaluation results take the form of inferred expressions and type constraints for values of program variables. An implementation using PVS (Prototype Verification System) is presented along with results for sample C functions.
Automatic and strategic effects in the guidance of attention by working memory representations
Carlisle, Nancy B.; Woodman, Geoffrey F.
2010-01-01
Theories of visual attention suggest that working memory representations automatically guide attention toward memory-matching objects. Some empirical tests of this prediction have produced results consistent with working memory automatically guiding attention. However, others have shown that individuals can strategically control whether working memory representations guide visual attention. Previous studies have not independently measured automatic and strategic contributions to the interactions between working memory and attention. In this study, we used a classic manipulation of the probability of valid, neutral, and invalid cues to tease apart the nature of such interactions. This framework utilizes measures of reaction time (RT) to quantify the costs and benefits of attending to memory-matching items and infer the relative magnitudes of automatic and strategic effects. We found both costs and benefits even when the memory-matching item was no more likely to be the target than other items, indicating an automatic component of attentional guidance. However, the costs and benefits essentially doubled as the probability of a trial with a valid cue increased from 20% to 80%, demonstrating a potent strategic effect. We also show that the instructions given to participants led to a significant change in guidance distinct from the actual probability of events during the experiment. Together, these findings demonstrate that the influence of working memory representations on attention is driven by both automatic and strategic interactions. PMID:20643386
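The cost/benefit logic in concrete form, with illustrative mean RTs:

```python
def guidance_effects(rt_valid, rt_neutral, rt_invalid):
    """Benefit and cost of memory-matching cues from mean RTs (ms)."""
    return {"benefit": rt_neutral - rt_valid, "cost": rt_invalid - rt_neutral}

# Illustrative means: nonzero effects at 20% validity index automatic
# guidance; their growth at 80% validity indexes the strategic component.
print(guidance_effects(rt_valid=520, rt_neutral=550, rt_invalid=585))
```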
Sobol-Shikler, Tal; Robinson, Peter
2010-07-01
We present a classification algorithm for inferring affective states (emotions, mental states, attitudes, and the like) from their nonverbal expressions in speech. It is based on the observations that affective states can occur simultaneously and different sets of vocal features, such as intonation and speech rate, distinguish between nonverbal expressions of different affective states. The input to the inference system was a large set of vocal features and metrics that were extracted from each utterance. The classification algorithm conducted independent pairwise comparisons between nine affective-state groups. The classifier used various subsets of metrics of the vocal features and various classification algorithms for different pairs of affective-state groups. Average classification accuracy of the 36 pairwise machines was 75 percent, using 10-fold cross validation. The comparison results were consolidated into a single ranked list of the nine affective-state groups. This list was the output of the system and represented the inferred combination of co-occurring affective states for the analyzed utterance. The inference accuracy of the combined machine was 83 percent. The system automatically characterized over 500 affective state concepts from the Mind Reading database. The inference of co-occurring affective states was validated by comparing the inferred combinations to the lexical definitions of the labels of the analyzed sentences. The distinguishing capabilities of the system were comparable to human performance.
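A sketch of how 36 pairwise machines can be consolidated into a single ranked list by voting; the group names are placeholders:

```python
from collections import Counter
from itertools import combinations

GROUPS = [f"group_{i}" for i in range(9)]  # placeholder group names

def rank_groups(pairwise_winner):
    """Consolidate pairwise decisions into one ranked list by vote count.
    pairwise_winner(a, b) returns the preferred group of the pair."""
    votes = Counter({g: 0 for g in GROUPS})
    for a, b in combinations(GROUPS, 2):  # 9 choose 2 = 36 pairwise machines
        votes[pairwise_winner(a, b)] += 1
    return [g for g, _ in votes.most_common()]

# Toy winner function: lexicographically smaller name wins each comparison.
print(rank_groups(lambda a, b: min(a, b)))
```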
Algorithm Optimally Orders Forward-Chaining Inference Rules
NASA Technical Reports Server (NTRS)
James, Mark
2008-01-01
People typically develop knowledge bases in a somewhat ad hoc manner by incrementally adding rules with no specific organization. This often results in very inefficient execution of those rules, since they are so often order sensitive. This is relevant to tasks like the Deep Space Network in that it allows the knowledge base to be incrementally developed and then automatically ordered for efficiency. Although data flow analysis was first developed for use in compilers for producing optimal code sequences, its usefulness is now recognized in many software systems, including knowledge-based systems. However, this approach for exhaustively computing data-flow information cannot be applied directly to inference systems because of the ubiquitous execution of the rules. An algorithm is presented that efficiently performs a complete producer/consumer analysis for each antecedent and consequent clause in a knowledge base to order the rules optimally and minimize inference cycles. The algorithm orders a knowledge base composed of forward-chaining inference rules such that independent inference cycle executions are minimized, resulting in significantly faster execution. It was integrated into the JPL tool Spacecraft Health Inference Engine (SHINE) for verification, where it produced a significant reduction in inference cycles for what was previously considered an ordered knowledge base. For a knowledge base that is completely unordered, the improvement is much greater.
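The producer/consumer idea can be illustrated with a toy ordering pass: rule B should fire after rule A whenever A's consequent produces a fact that B's antecedent consumes. A minimal sketch (not the SHINE implementation; cyclic rule dependencies would need extra handling):

```python
from graphlib import TopologicalSorter

def order_rules(rules):
    """Order forward-chaining rules so producers come before consumers.

    `rules` maps rule name -> (antecedent_facts, consequent_facts).
    A rule that produces a fact another rule consumes should fire first,
    reducing wasted inference cycles.
    """
    deps = {name: set() for name in rules}
    for a, (_, cons_a) in rules.items():
        for b, (ants_b, _) in rules.items():
            if a != b and cons_a & ants_b:
                deps[b].add(a)  # b consumes what a produces, so b depends on a
    return list(TopologicalSorter(deps).static_order())

rules = {
    "r1": ({"telemetry"}, {"fault_flag"}),
    "r2": ({"fault_flag"}, {"alarm"}),
    "r3": ({"telemetry"}, {"summary"}),
}
print(order_rules(rules))  # r1 is ordered before r2
```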
Process Mining for Individualized Behavior Modeling Using Wireless Tracking in Nursing Homes
Fernández-Llatas, Carlos; Benedi, José-Miguel; García-Gómez, Juan M.; Traver, Vicente
2013-01-01
The analysis of human behavior patterns is increasingly used in several research fields. The individualized modeling of behavior using classical techniques requires too much time and too many resources to be effective. A possible solution is the use of pattern recognition techniques to automatically infer models that allow experts to understand individual behavior. However, traditional pattern recognition algorithms infer models that are not readily understood by human experts. This limits the capacity to benefit from the inferred models. Process mining technologies can infer models as workflows, specifically designed to be understood by experts, enabling them to detect specific behavior patterns in users. In this paper, the eMotiva process mining algorithms are presented. These algorithms filter, infer and visualize workflows. The workflows are inferred from the samples produced by an indoor location system that stores the location of a resident in a nursing home. The visualization tool is able to compare and highlight behavior patterns in order to facilitate expert understanding of human behavior. This tool was tested with nine real users who were monitored for a 25-week period. The results achieved suggest that the behavior of users is continuously evolving and changing and that this change can be measured, allowing for behavioral change detection. PMID:24225907
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-26
... confirmed that a portion of their fleet is equipped with automatic nacelle and wing anti-ice systems, and... nacelle and wing anti-ice systems on during descent. From these statements, we infer that UPS is... Conditions Delta Airlines (Delta) requested that we revise the proposed AFM procedure to add the qualifier...
Towards Automatically Detecting Whether Student Learning Is Shallow
ERIC Educational Resources Information Center
Gowda, Sujith M.; Baker, Ryan S.; Corbett, Albert T.; Rossi, Lisa M.
2013-01-01
Recent research has extended student modeling to infer not just whether a student knows a skill or set of skills, but also whether the student has achieved robust learning--learning that enables the student to transfer their knowledge and prepares them for future learning (PFL). However, a student may fail to have robust learning in two fashions:…
Fluency Heuristic: A Model of How the Mind Exploits a By-Product of Information Retrieval
ERIC Educational Resources Information Center
Hertwig, Ralph; Herzog, Stefan M.; Schooler, Lael J.; Reimer, Torsten
2008-01-01
Boundedly rational heuristics for inference can be surprisingly accurate and frugal for several reasons. They can exploit environmental structures, co-opt complex capacities, and elude effortful search by exploiting information that automatically arrives on the mental stage. The fluency heuristic is a prime example of a heuristic that makes the…
Summarization as the base for text assessment
NASA Astrophysics Data System (ADS)
Karanikolas, Nikitas N.
2015-02-01
We present a model that applies shallow text summarization as a cheap (in resources needed) process for Automatic (machine-based) free-text answer Assessment (AA). Evaluation of the proposed method supports the inference that Conventional Assessment (CA, human assessment of free-text answers) has no obvious mechanical replacement. However, this remains a research challenge.
Automatic Facial Expression Recognition and Operator Functional State
NASA Technical Reports Server (NTRS)
Blanson, Nina
2011-01-01
The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.
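The basic detection step described above, Haar classifiers plus OpenCV on a live video stream, looks roughly like the following sketch; the cascade file and camera index are assumptions, and real landmark localization would go beyond whole-face detection:

```python
import cv2

# Haar cascade shipped with OpenCV; the exact file is an assumption
# and may differ across installations.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default camera; index is an assumption
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect faces; tuning parameters are illustrative defaults.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```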
DeepInfer: Open-Source Deep Learning Deployment Toolkit for Image-Guided Therapy
Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang
2017-01-01
Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose “DeepInfer” – an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections. PMID:28615794
POPPER, a simple programming language for probabilistic semantic inference in medicine.
Robson, Barry
2015-01-01
Our previous reports described the use of the Hyperbolic Dirac Net (HDN) as a method for probabilistic inference from medical data, and a proposed probabilistic medical Semantic Web (SW) language Q-UEL to provide that data. Rather like a traditional Bayes Net, that HDN provided estimates of joint and conditional probabilities, and was static, with no need for evolution due to "reasoning". Use of the SW will require, however, (a) at least the semantic triple with more elaborate relations than conditional ones, as seen in use of most verbs and prepositions, and (b) rules for logical, grammatical, and definitional manipulation that can generate changes in the inference net. Here is described the simple POPPER language for medical inference. It can be automatically written by Q-UEL, or by hand. Based on studies with our medical students, it is believed that a tool like this may help in medical education and that a physician unfamiliar with SW science can understand it. It is here used to explore the considerable challenges of assigning probabilities, and not least what the meaning and utility of inference net evolution would be for a physician. Copyright © 2014 Elsevier Ltd. All rights reserved.
Where do spontaneous first impressions of faces come from?
Over, Harriet; Cook, Richard
2018-01-01
Humans spontaneously attribute a wide range of traits to strangers based solely on their facial features. These first impressions are known to exert striking effects on our choices and behaviours. In this paper, we provide a theoretical account of the origins of these spontaneous trait inferences. We describe a novel framework ('Trait Inference Mapping') in which trait inferences are products of mappings between locations in 'face space' and 'trait space'. These mappings are acquired during ontogeny and allow excitation of face representations to propagate automatically to associated trait representations. This conceptualization provides a framework within which the relative contribution of ontogenetic experience and genetic inheritance can be considered. Contrary to many existing ideas about the origins of trait inferences, we propose only a limited role for innate mechanisms and natural selection. Instead, our model explains inter-observer consistency by appealing to cultural learning and physiological responses that facilitate or 'canalise' particular face-trait mappings. Our TIM framework has both theoretical and substantive implications, and can be extended to trait inferences from non-facial cues to provide a unified account of first impressions. Copyright © 2017 Elsevier B.V. All rights reserved.
Van den Eede, Sofie; Baetens, Kris; Vandekerckhove, Marie
2009-01-01
This study measured event-related potentials (ERPs) during multiple goal and trait inferences, under spontaneous or intentional instructions. Participants read sentences describing several goal-implying behaviors of a target person from which a strong trait could also be inferred or not. The last word of each sentence determined the consistency with the inference induced during preceding sentences. In comparison with behaviors that implied only a goal, stronger waveforms beginning at ∼150 ms were obtained when the behaviors additionally implied a trait. These ERPs showed considerable parallels between spontaneous and intentional inferences. This suggests that traits embedded in a stream of goal-directed behaviors were detected more rapidly and automatically than mere goals, irrespective of the participants’ spontaneous or intentional instructions. In line with this, source localization (LORETA) of the ERPs shows predominant activation in the temporo-parietal junction (TPJ) during 150–200 ms, suggesting that goals were detected at that time interval. During 200–300 ms, activation was stronger at the medial prefrontal cortex (mPFC) for multiple goals and traits as opposed to goals only, suggesting that traits were inferred during this time window. A cued recall measure taken after the presentation of the stimulus material supports the occurrence of goal and trait inferences and shows significant correlations with the neural components, indicating that these components are valid neural indices of spontaneous and intentional social inferences. The early detection of multiple goal and trait inferences is explained in terms of their greater social relevance, leading to privileged attention allocation and processing in the brain. PMID:19270041
Responsiveness to Intervention in Children with Dyslexia.
Tilanus, Elisabeth A T; Segers, Eliane; Verhoeven, Ludo
2016-08-01
We examined the responsiveness to a 12-week phonics intervention in 54 second-grade Dutch children with dyslexia, and compared their reading and spelling gains to a control group of 61 typical readers. The intervention aimed to train grapheme-phoneme correspondences (GPCs), and word reading and spelling by using phonics instruction. We examined the accuracy and efficiency of grapheme-phoneme correspondences, decoding words and pseudowords, as well as the accuracy of spelling words before and after the intervention. Moreover, responsiveness to intervention was examined by studying to what extent scores at posttest could directly or indirectly be predicted from precursor measures. Results showed that the children with dyslexia were significantly behind in all reading and spelling measures at pretest. During the intervention, the children with dyslexia made more progress on GPC, (pseudo)word decoding accuracy and efficiency, and spelling accuracy than the typical reading group. Furthermore, we found a direct effect of the precursor measures rapid automatized naming, verbal working memory and phoneme deletion on the dyslexic children's progress in GPC speed, and indirect effects of rapid automatized naming and phoneme deletion on word and pseudoword efficiency and word decoding accuracy via the scores at pretest. Copyright © 2016 John Wiley & Sons, Ltd.
Refining Automatically Extracted Knowledge Bases Using Crowdsourcing.
Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S; Xian, Xuefeng; Wu, Jian; Cui, Zhiming
2017-01-01
Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.
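A rank-based selection pass of the kind described can be sketched as scoring candidate facts by how many semantic-constraint conflicts they participate in and sending the top scorers to the crowd. An illustrative sketch with a toy constraint and data, not the paper's algorithm:

```python
def select_for_crowdsourcing(candidates, constraints, budget):
    """Rank candidate facts by semantic-constraint violations and pick
    the most conflict-prone ones for crowdsourced verification.

    `candidates`: list of (subject, relation, object) facts.
    `constraints`: functions returning True if two facts conflict.
    """
    violations = {fact: 0 for fact in candidates}
    for i, f1 in enumerate(candidates):
        for f2 in candidates[i + 1:]:
            if any(c(f1, f2) for c in constraints):
                violations[f1] += 1
                violations[f2] += 1
    ranked = sorted(candidates, key=lambda f: violations[f], reverse=True)
    return ranked[:budget]

# Toy functional constraint: a subject has only one birthplace.
conflict = lambda f1, f2: (f1[0], f1[1]) == (f2[0], f2[1]) and f1[2] != f2[2]
facts = [("obama", "born_in", "hawaii"), ("obama", "born_in", "kenya"),
         ("paris", "capital_of", "france")]
print(select_for_crowdsourcing(facts, [conflict], budget=2))
```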
Diagnosis - Using automatic test equipment and artificial intelligence expert systems
NASA Astrophysics Data System (ADS)
Ramsey, J. E., Jr.
Three expert systems (ATEOPS, ATEFEXPERS, and ATEFATLAS), which were created to direct automatic test equipment (ATE), are reviewed. The purpose of the project was to develop an expert system to troubleshoot the converter-programmer power supply card for the F-15 aircraft and have that expert system direct the automatic test equipment. Each expert system uses a different knowledge base or inference engine, basing the testing on the circuit schematic, test requirements document, or ATLAS code. Implementing generalized modules allows the expert systems to be used for any different unit under test. Using converted ATLAS to LISP code allows the expert system to direct any ATE using ATLAS. The constraint propagated frame system allows for the expansion of control by creating the ATLAS code, checking the code for good software engineering techniques, directing the ATE, and changing the test sequence as needed (planning).
Yang, Huanjia; Chew, David A S; Wu, Weiwei; Zhou, Zhipeng; Li, Qiming
2012-09-01
Identifying accident precursors using real-time identity information has great potential to improve safety performance in the construction industry, which still suffers from day-to-day accident fatalities and injuries. Based on a requirements analysis for identifying precursors and a discussion of enabling technology solutions for acquiring and sharing real-time automatic identification information on construction sites, this paper proposes an identification system design for proactive accident prevention to improve construction site safety. Firstly, a case study is conducted to analyze the automatic identification requirements for identifying accident precursors on a construction site. Results show that these requirements mainly consist of three aspects, namely access control, training and inspection information, and operation authority. The system is then designed to fulfill these requirements based on a ZigBee-enabled wireless sensor network (WSN), radio frequency identification (RFID) technology and an integrated ZigBee RFID sensor network structure. At the same time, an information database is also designed and implemented, which includes 15 tables, 54 queries and several reports and forms. In the end, a demonstration system based on the proposed system design is developed as a proof-of-concept prototype. The contributions of this study include the requirements analysis and technical design of a real-time identity information tracking solution for proactive accident prevention on construction sites. The technical solution proposed in this paper is of significant importance in improving safety performance on construction sites. Moreover, this study can serve as a reference design for future system integrations where more functions, such as environment monitoring and location tracking, can be added. Copyright © 2011 Elsevier Ltd. All rights reserved.
Localization of the lumbar discs using machine learning and exact probabilistic inference.
Oktay, Ayse Betul; Akgul, Yusuf Sinan
2011-01-01
We propose a novel fully automatic approach to localize the lumbar intervertebral discs in MR images with a PHOG-based SVM and a probabilistic graphical model. At the local level, our method assigns a score to each pixel in the target image that indicates whether it is a disc center or not. At the global level, we define a chain-like graphical model that represents the lumbar intervertebral discs, and we use an exact inference algorithm to localize the discs. Our main contributions are the employment of the SVM with the PHOG-based descriptor, which is robust against variations of the discs, and a graphical model that reflects the linear nature of the vertebral column. Our inference algorithm runs in polynomial time and produces globally optimal results. The developed system is validated on a real spine MRI dataset and the final localization results compare favorably to those reported in the literature.
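Exact inference on such a chain-like model reduces to Viterbi-style dynamic programming over candidate disc locations, which is what makes polynomial-time, globally optimal localization possible. A minimal sketch with toy scores and hypothetical function names:

```python
def best_disc_chain(unary, pairwise):
    """Exact inference on a chain model by dynamic programming (Viterbi).

    `unary[i][k]`: SVM-style score of candidate k for disc i.
    `pairwise(a, b)`: compatibility of adjacent candidate choices,
    e.g. favoring plausible inter-disc spacing. Returns the globally
    optimal labeling in polynomial time.
    """
    n = len(unary)
    score = [dict(enumerate(unary[0]))]
    back = [{}]
    for i in range(1, n):
        score.append({})
        back.append({})
        for k, u in enumerate(unary[i]):
            prev = max(score[i - 1],
                       key=lambda j: score[i - 1][j] + pairwise(j, k))
            score[i][k] = score[i - 1][prev] + pairwise(prev, k) + u
            back[i][k] = prev
    k = max(score[-1], key=score[-1].get)
    path = [k]
    for i in range(n - 1, 0, -1):   # trace back the optimal chain
        k = back[i][k]
        path.append(k)
    return list(reversed(path))

# Toy example: 3 discs, 2 candidate positions each.
unary = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]]
print(best_disc_chain(unary, lambda a, b: 0.5 if a == b else 0.0))
```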
U.S. stock market interaction network as learned by the Boltzmann machine
Borysov, Stanislav S.; Roudi, Yasser; Balatsky, Alexander V.
2015-12-07
Here, we study historical dynamics of the joint equilibrium distribution of stock returns in the U.S. stock market using the Boltzmann distribution model, parametrized by external fields and pairwise couplings. Within the Boltzmann learning framework for statistical inference, we analyze the historical behavior of the parameters inferred using exact and approximate learning algorithms. Since the model and inference methods require the use of binary variables, the effect of this mapping of continuous returns to the discrete domain is studied. The presented results show that binarization preserves the correlation structure of the market. Properties of distributions of external fields and couplings, as well as the market interaction network and industry sector clustering structure, are studied for different historical dates and moving window sizes. We demonstrate that the observed positive heavy tail in the distribution of couplings is related to the sparse clustering structure of the market. We also show that discrepancies between the model’s parameters might be used as a precursor of financial instabilities.
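The binarization step and the check that it preserves correlation structure are easy to reproduce in miniature (synthetic returns, not the market data used in the paper):

```python
import numpy as np

def binarize_returns(returns):
    """Map continuous daily returns to ±1 spins as the Boltzmann-machine
    formulation requires: +1 for a non-negative return, -1 otherwise."""
    return np.where(returns >= 0, 1, -1)

# Hypothetical returns for 3 stocks over 250 days.
rng = np.random.default_rng(0)
r = rng.normal(0, 0.01, size=(250, 3))
s = binarize_returns(r)

# Compare correlation structure before and after binarization.
print(np.corrcoef(r.T).round(2))
print(np.corrcoef(s.T).round(2))
```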
Deep convolutional neural network for prostate MR segmentation
NASA Astrophysics Data System (ADS)
Tian, Zhiqiang; Liu, Lizhi; Fei, Baowei
2017-03-01
Automatic segmentation of the prostate in magnetic resonance imaging (MRI) has many applications in prostate cancer diagnosis and therapy. We propose a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage based on prostate MR images and the corresponding ground truths, and learns to make inference for pixel-wise segmentation. Experiments were performed on our in-house data set, which contains prostate MR images of 20 patients. The proposed CNN model obtained a mean Dice similarity coefficient of 85.3%+/-3.2% as compared to the manual segmentation. Experimental results show that our deep CNN model could yield satisfactory segmentation of the prostate.
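The reported evaluation metric, the Dice similarity coefficient DSC = 2|A∩B|/(|A|+|B|), can be computed for binary masks as follows (a generic sketch, not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks:
    DSC = 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# Toy 4x4 masks: prediction overlaps most of the ground truth.
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1
print(dice_coefficient(a, b))  # 0.8
```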
Word Learning and Attention Allocation Based on Word Class and Category Knowledge
ERIC Educational Resources Information Center
Hupp, Julie M.
2015-01-01
Attention allocation in word learning may vary developmentally based on the novelty of the object. It has been suggested that children differentially learn verbs based on the novelty of the agent, but adults do not because they automatically infer the object's category and thus treat it like a familiar object. The current research examined…
ERIC Educational Resources Information Center
Tian, Wei; Cai, Li; Thissen, David; Xin, Tao
2013-01-01
In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…
Generating Customized Verifiers for Automatically Generated Code
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2008-01-01
Program verification using Hoare-style techniques requires many logical annotations. We have previously developed a generic annotation inference algorithm that weaves in all annotations required to certify safety properties for automatically generated code. It uses patterns to capture generator- and property-specific code idioms and property-specific meta-program fragments to construct the annotations. The algorithm is customized by specifying the code patterns and integrating them with the meta-program fragments for annotation construction. However, this is difficult since it involves tedious and error-prone low-level term manipulations. Here, we describe an annotation schema compiler that largely automates this customization task using generative techniques. It takes a collection of high-level declarative annotation schemas tailored towards a specific code generator and safety property, and generates all customized analysis functions and glue code required for interfacing with the generic algorithm core, thus effectively creating a customized annotation inference algorithm. The compiler raises the level of abstraction and simplifies schema development and maintenance. It also takes care of some more routine aspects of formulating patterns and schemas, in particular handling of irrelevant program fragments and irrelevant variance in the program structure, which reduces the size, complexity, and number of different patterns and annotation schemas that are required. The improvements described here make it easier and faster to customize the system to a new safety property or a new generator, and we demonstrate this by customizing it to certify frame safety of space flight navigation code that was automatically generated from Simulink models by MathWorks' Real-Time Workshop.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sasahara, M; Arimura, H; Hirose, T
Purpose: The current image-guided radiotherapy (IGRT) procedure is bone-based patient positioning, followed by subjective manual correction using cone beam computed tomography (CBCT). This procedure might cause misalignment of the patient positioning. Automatic target-based patient positioning systems achieve better reproducibility of patient setup. The aim of this study was to develop an automatic target-based patient positioning framework for IGRT with CBCT images in prostate cancer treatment. Methods: Seventy-three CBCT images of 10 patients and 24 planning CT images with digital imaging and communications in medicine for radiotherapy (DICOM-RT) structures were used for this study. Our proposed framework started from the generation of probabilistic atlases of bone and prostate from 24 planning CT images and prostate contours, which were made in the treatment planning. Next, the gray-scale histograms of CBCT values within CTV regions in the planning CT images were obtained as the occurrence probability of the CBCT values. Then, CBCT images were registered to the atlases using a rigid registration with mutual information. Finally, prostate regions were estimated by applying Bayesian inference to CBCT images with the probabilistic atlases and CBCT value occurrence probability. The proposed framework was evaluated by calculating the Euclidean distance of errors between two centroids of prostate regions determined by our method and ground truths of manual delineations by a radiation oncologist and a medical physicist on CBCT images for 10 patients. Results: The average Euclidean distance between the centroids of extracted prostate regions determined by our proposed method and ground truths was 4.4 mm. The average errors for each direction were 1.8 mm in the anteroposterior direction, 0.6 mm in the lateral direction and 2.1 mm in the craniocaudal direction. Conclusion: Our proposed framework based on probabilistic atlases and Bayesian inference might be feasible to automatically determine prostate regions on CBCT images.
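The Bayes step sketched in the abstract combines the atlas as a spatial prior with the CBCT-value histogram as a likelihood. A schematic voxelwise version on a toy 1-D "image" (registration and atlas construction omitted; all names hypothetical):

```python
import numpy as np

def prostate_posterior(cbct, atlas_prior, value_likelihood):
    """Voxelwise Bayes rule sketch: P(prostate | value) is proportional to
    P(value | prostate) * P(prostate), with the probabilistic atlas as the
    spatial prior and a histogram of CBCT values inside the CTV as the
    likelihood."""
    like = value_likelihood[cbct]          # look up P(value | prostate)
    post = like * atlas_prior
    return post / (post.max() + 1e-12)     # normalized for thresholding

# Toy 1-D "image" with 4 possible CBCT values.
cbct = np.array([0, 1, 2, 3, 2, 1])
atlas = np.array([0.05, 0.2, 0.9, 0.9, 0.2, 0.05])
likelihood = np.array([0.1, 0.3, 0.5, 0.1])  # indexed by CBCT value
print(prostate_posterior(cbct, atlas, likelihood).round(2))
```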
A large atomic chlorine source inferred from mid-continental reactive nitrogen chemistry.
Thornton, Joel A; Kercher, James P; Riedel, Theran P; Wagner, Nicholas L; Cozic, Julie; Holloway, John S; Dubé, William P; Wolfe, Glenn M; Quinn, Patricia K; Middlebrook, Ann M; Alexander, Becky; Brown, Steven S
2010-03-11
Halogen atoms and oxides are highly reactive and can profoundly affect atmospheric composition. Chlorine atoms can decrease the lifetimes of gaseous elemental mercury and hydrocarbons such as the greenhouse gas methane. Chlorine atoms also influence cycles that catalytically destroy or produce tropospheric ozone, a greenhouse gas potentially toxic to plant and animal life. Conversion of inorganic chloride into gaseous chlorine atom precursors within the troposphere is generally considered a coastal or marine air phenomenon. Here we report mid-continental observations of the chlorine atom precursor nitryl chloride at a distance of 1,400 km from the nearest coastline. We observe persistent and significant nitryl chloride production relative to the consumption of its nitrogen oxide precursors. Comparison of these findings to model predictions based on aerosol and precipitation composition data from long-term monitoring networks suggests nitryl chloride production in the contiguous USA alone is at a level similar to previous global estimates for coastal and marine regions. We also suggest that a significant fraction of tropospheric chlorine atoms may arise directly from anthropogenic pollutants.
Integrating Information in Biological Ontologies and Molecular Networks to Infer Novel Terms.
Li, Le; Yip, Kevin Y
2016-12-15
Currently most terms and term-term relationships in Gene Ontology (GO) are defined manually, which creates cost, consistency and completeness issues. Recent studies have demonstrated the feasibility of inferring GO automatically from biological networks, which represents an important complementary approach to GO construction. These methods (NeXO and CliXO) are unsupervised, which means 1) they cannot use the information contained in existing GO, 2) the way they integrate biological networks may not optimize the accuracy, and 3) they are not customized to infer the three different sub-ontologies of GO. Here we present a semi-supervised method called Unicorn that extends these previous methods to tackle the three problems. Unicorn uses a sub-tree of an existing GO sub-ontology as a training set to learn parameters for integrating multiple networks. Cross-validation results show that Unicorn reliably inferred the left-out parts of each specific GO sub-ontology. In addition, by training Unicorn with an old version of GO together with biological networks, it successfully re-discovered some terms and term-term relationships present only in a new version of GO. Unicorn also successfully inferred some novel terms that were not contained in GO but have biological meanings well-supported by the literature. Source code of Unicorn is available at http://yiplab.cse.cuhk.edu.hk/unicorn/.
Inference of the sparse kinetic Ising model using the decimation method
NASA Astrophysics Data System (ADS)
Decelle, Aurélien; Zhang, Pan
2015-05-01
In this paper we study the inference of the kinetic Ising model on sparse graphs by the decimation method. The decimation method, which was first proposed in Decelle and Ricci-Tersenghi [Phys. Rev. Lett. 112, 070603 (2014), 10.1103/PhysRevLett.112.070603] for the static inverse Ising problem, tries to recover the topology of the inferred system by setting the weakest couplings to zero iteratively. During the decimation process the likelihood function is maximized over the remaining couplings. Unlike the ℓ1-optimization-based methods, the decimation method does not use the Laplace distribution as a heuristic choice of prior to select a sparse solution. In our case, the whole process can be done automatically without fixing any parameters by hand. We show that in the dynamical inference problem, where the task is to reconstruct the couplings of an Ising model given the data, the decimation process can be applied naturally into a maximum-likelihood optimization algorithm, as opposed to the static case where pseudolikelihood method needs to be adopted. We also use extensive numerical studies to validate the accuracy of our methods in dynamical inference problems. Our results illustrate that, on various topologies and with different distribution of couplings, the decimation method outperforms the widely used ℓ1-optimization-based methods.
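The decimation loop itself is compact: repeatedly zero the weakest remaining couplings, then re-maximize the likelihood over the survivors. A schematic sketch in which the refit step is a placeholder for the paper's likelihood maximization:

```python
import numpy as np

def decimate(couplings, refit, steps, frac=0.1):
    """Decimation sketch for sparse coupling inference: at each step,
    prune the weakest fraction of nonzero couplings, then re-optimize
    the likelihood over the survivors (`refit` stands in for that)."""
    J = couplings.copy()
    for _ in range(steps):
        nz = np.flatnonzero(J)
        if nz.size == 0:
            break
        k = max(1, int(frac * nz.size))
        weakest = nz[np.argsort(np.abs(J.flat[nz]))[:k]]
        J.flat[weakest] = 0.0          # set the weakest couplings to zero
        J = refit(J)                   # re-maximize likelihood over the rest
    return J

# Toy coupling matrix and a refit that leaves survivors unchanged.
J0 = np.array([[0.0, 0.8, 0.05], [0.8, 0.0, -0.02], [0.05, -0.02, 0.0]])
print(decimate(J0, refit=lambda J: J, steps=2))
```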
Kraus, T.E.C.; Bergamaschi, B.A.; Hernes, P.J.; Spencer, R.G.M.; Stepanauskas, R.; Kendall, C.; Losee, R.F.; Fujii, R.
2008-01-01
This study assesses how rivers, wetlands, island drains and open water habitats within the Sacramento-San Joaquin River Delta affect dissolved organic matter (DOM) content and composition, and disinfection byproduct (DBP) formation. Eleven sites representative of these habitats were sampled on six dates to encompass seasonal variability. Using a suite of qualitative analyses, including specific DBP formation potential, absorbance, fluorescence, lignin content and composition, C and N stable isotopic compositions, and structural groupings determined using CPMAS (cross polarization, magic angle spinning) 13C NMR, we applied a geochemical fingerprinting approach to characterize the DOM from different Delta habitats, and infer DOM and DBP precursor sources and estimate the relative contribution from different sources. Although river input was the predominant source of dissolved organic carbon (DOC), we observed that 13-49% of the DOC exported from the Delta originated from sources within the Delta, depending on season. Interaction with shallow wetlands and subsided islands significantly increased DOC and DBP precursor concentrations and affected DOM composition, while deep open water habitats had little discernable effect. Shallow wetlands contributed the greatest amounts of DOM and DBP precursors in the spring and summer, in contrast to island drains which appeared to be an important source during winter months. The DOM derived from wetlands and island drains had greater haloacetic acid precursor content relative to incoming river water, while two wetlands contributed DOM with greater propensity to form trihalomethanes. These results are pertinent to restoration of the Delta. Large scale introduction of shallow wetlands, a proposed restoration strategy, could alter existing DOC and DBP precursor concentrations, depending on their hydrologic connection to Delta channels. © 2008 Elsevier Ltd.
Monitoring groundwater and river interaction along the Hanford reach of the Columbia River
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, M.D.
1994-04-01
As an adjunct to efficient Hanford Site characterization and remediation of groundwater contamination, an automatic monitor network has been used to measure Columbia River and adjacent groundwater levels in several areas of the Hanford Site since 1991. Water levels, temperatures, and electrical conductivity measured by the automatic monitor network provided an initial database with which to calibrate models and from which to infer ground and river water interactions for site characterization and remediation activities. Measurements of the dynamic river/aquifer system have been simultaneous at 1-hr intervals, with a quality suitable for hydrologic modeling and for computer model calibration and testing. This report describes the equipment, procedures, and results from measurements done in 1993.
Autoclass: An automatic classification system
NASA Technical Reports Server (NTRS)
Stutz, John; Cheeseman, Peter; Hanson, Robin
1991-01-01
The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework, and using various mathematical and algorithmic approximations, the AutoClass System searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit, or share, model parameters through a class hierarchy. The mathematical foundations of AutoClass are summarized.
BALANCE: Towards a Usable Pervasive Wellness Application with Accurate Activity Inference
Denning, Tamara; Andrew, Adrienne; Chaudhri, Rohit; Hartung, Carl; Lester, Jonathan; Borriello, Gaetano; Duncan, Glen
2010-01-01
Technology offers the potential to objectively monitor people’s eating and activity behaviors and encourage healthier lifestyles. BALANCE is a mobile phone-based system for long term wellness management. The BALANCE system automatically detects the user’s caloric expenditure via sensor data from a Mobile Sensing Platform unit worn on the hip. Users manually enter information on foods eaten via an interface on an N95 mobile phone. Initial validation experiments measuring oxygen consumption during treadmill walking and jogging show that the system’s estimate of caloric output is within 87% of the actual value. Future work will refine and continue to evaluate the system’s efficacy and develop more robust data input and activity inference methods. PMID:20445819
Exaggerated, mispredicted, and misplaced: when "it's the thought that counts" in gift exchanges.
Zhang, Yan; Epley, Nicholas
2012-11-01
Gift-giving involves both the objective value of a gift and the symbolic meaning of the exchange. The objective value is sometimes considered of secondary importance as when people claim, "It's the thought that counts." We evaluated when and how mental state inferences count in gift exchanges. Because considering another's thoughts requires motivation and deliberation, we predicted gift givers' thoughts would increase receivers' appreciation only when triggered to consider a giver's thoughts, such as when a friend gives a bad gift. Because gift givers do not experience this trigger, we expected they would mispredict when their thoughts count and when they do not. Three experiments support these predictions. A final experiment demonstrated that thoughts "count" for givers by increasing social connection to the receiver. These results suggest that mental state inferences are not automatic in social interactions and that inferences about how much thoughts count are systematically miscalibrated. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
Stewart, Suzanne L K; Schepman, Astrid; Haigh, Matthew; McHugh, Rhian; Stewart, Andrew J
2018-03-14
The recognition of emotional facial expressions is often subject to contextual influence, particularly when the face and the context convey similar emotions. We investigated whether spontaneous, incidental affective theory of mind inferences made while reading vignettes describing social situations would produce context effects on the identification of same-valenced emotions (Experiment 1) as well as differently-valenced emotions (Experiment 2) conveyed by subsequently presented faces. Crucially, we found an effect of context on reaction times in both experiments while, in line with previous work, we found evidence for a context effect on accuracy only in Experiment 1. This demonstrates that affective theory of mind inferences made at the pragmatic level of a text can automatically, contextually influence the perceptual processing of emotional facial expressions in a separate task even when those emotions are of a distinctive valence. Thus, our novel findings suggest that language acts as a contextual influence to the recognition of emotional facial expressions for both same and different valences.
Campbell, Kieran R; Yau, Christopher
2017-03-15
Modeling bifurcations in single-cell transcriptomics data has become an increasingly popular field of research. Several methods have been proposed to infer bifurcation structure from such data, but all rely on heuristic non-probabilistic inference. Here we propose the first generative, fully probabilistic model for such inference, based on a Bayesian hierarchical mixture of factor analyzers. Our model exhibits competitive performance on large datasets despite implementing full Markov-Chain Monte Carlo sampling, and its unique hierarchical prior structure enables automatic determination of genes driving the bifurcation process. We additionally propose an Empirical-Bayes-like extension that deals with the high levels of zero-inflation in single-cell RNA-seq data and quantify when such models are useful. We apply our model to both real and simulated single-cell gene expression data and compare the results to existing pseudotime methods. Finally, we discuss both the merits and weaknesses of such a unified, probabilistic approach in the context of practical bioinformatics analyses.
An Automatic Phase-Change Detection Technique for Colloidal Hard Sphere Suspensions
NASA Technical Reports Server (NTRS)
McDowell, Mark; Gray, Elizabeth; Rogers, Richard B.
2005-01-01
Colloidal suspensions of monodisperse spheres are used as physical models of thermodynamic phase transitions and as precursors to photonic band gap materials. However, current image analysis techniques are not able to distinguish between densely packed phases within conventional microscope images, which are mainly characterized by degrees of randomness or order with similar grayscale value properties. Current techniques for identifying the phase boundaries involve manually identifying the phase transitions, which is very tedious and time consuming. We have developed an intelligent machine vision technique that automatically identifies colloidal phase boundaries. The algorithm utilizes intelligent image processing techniques that accurately identify and track phase changes vertically or horizontally for a sequence of colloidal hard sphere suspension images. This technique is readily adaptable to any imaging application where regions of interest are distinguished from the background by differing patterns of motion over time.
Kaneko, Takaaki; Tomonaga, Masaki
2014-06-01
Humans are often unaware of how they control their limb motor movements. People pay attention to their own motor movements only when their usual motor routines encounter errors. Yet little is known about the extent to which voluntary actions rely on automatic control and when automatic control shifts to deliberate control in nonhuman primates. In this study, we demonstrate that chimpanzees and humans showed similar limb motor adjustment in response to feedback error during reaching actions, whereas attentional allocation inferred from gaze behavior differed. We found that humans shifted attention to their own motor kinematics as errors were induced in motor trajectory feedback regardless of whether the errors actually disrupted their reaching their action goals. In contrast, chimpanzees shifted attention to motor execution only when errors actually interfered with their achieving a planned action goal. These results indicate that the species differed in their criteria for shifting from automatic to deliberate control of motor actions. It is widely accepted that sophisticated motor repertoires have evolved in humans. Our results suggest that the deliberate monitoring of one's own motor kinematics may have evolved in the human lineage. Copyright © 2014 Elsevier B.V. All rights reserved.
Blom, Mozes P K
2015-08-05
Recently developed molecular methods enable geneticists to target and sequence thousands of orthologous loci and infer evolutionary relationships across the tree of life. Large numbers of genetic markers benefit species tree inference but visual inspection of alignment quality, as traditionally conducted, is challenging with thousands of loci. Furthermore, due to the impracticality of repeated visual inspection with alternative filtering criteria, the potential consequences of using datasets with different degrees of missing data remain nominally explored in most empirical phylogenomic studies. In this short communication, I describe a flexible high-throughput pipeline designed to assess alignment quality and filter exonic sequence data for subsequent inference. The stringency criteria for alignment quality and missing data can be adapted based on the expected level of sequence divergence. Each alignment is automatically evaluated based on the stringency criteria specified, significantly reducing the number of alignments that require visual inspection. By developing a rapid method for alignment filtering and quality assessment, the consistency of phylogenetic estimation based on exonic sequence alignments can be further explored across distinct inference methods, while accounting for different degrees of missing data.
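A stringency filter of the kind described can be as simple as thresholding alignment length and the fraction of gap/ambiguity characters; alignments failing the check are flagged for visual inspection. An illustrative sketch (thresholds are assumptions, not the pipeline's defaults):

```python
def passes_filter(alignment, max_missing=0.3, min_length=150):
    """Filter an exon alignment by missing data and length, mimicking a
    stringency criterion that can be tightened or relaxed per dataset.

    `alignment`: list of equal-length sequences, with gaps and
    ambiguities written as '-' or 'N'.
    """
    length = len(alignment[0])
    if length < min_length:
        return False
    missing = sum(seq.count("-") + seq.count("N") for seq in alignment)
    return missing / (length * len(alignment)) <= max_missing

aln = ["ATGC" * 50, "ATGC" * 49 + "N---"]
print(passes_filter(aln))  # True: 1% missing data, length 200
```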
Automatic Traffic-Based Internet Control Message Protocol (ICMP) Model Generation for ns-3
2015-12-01
…through visiting the inferred automata; fuzzing of an implementation by generating altered message formats. We tested with 3 versions of Netzob. First…relationships. Afterwards, we used the Automata module to generate state machines using different functions: “generateChainedStateAutomata”…The “generatePTAAutomata” function takes as input several communication sessions and then identifies common paths and merges these into a single automaton. The…
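A prefix tree acceptor (PTA), the structure behind a function like generatePTAAutomata, can be built from communication sessions by sharing common prefixes; the subsequent merging of equivalent states is the part omitted here. A minimal sketch with invented message labels:

```python
def build_pta(sessions):
    """Build a prefix tree acceptor (PTA) from message sessions: sessions
    sharing a prefix share states (nested dicts here). Merging equivalent
    states into a smaller automaton would follow this step."""
    root = {}
    for session in sessions:
        node = root
        for symbol in session:
            node = node.setdefault(symbol, {})
    return root

# Hypothetical ICMP-like message sequences.
sessions = [["ECHO_REQ", "ECHO_REP"],
            ["ECHO_REQ", "DEST_UNREACH"],
            ["TIMESTAMP_REQ", "TIMESTAMP_REP"]]
print(build_pta(sessions))  # the two ECHO_REQ sessions share a prefix state
```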
Design of fuzzy cognitive maps using neural networks for predicting chaotic time series.
Song, H J; Miao, C Y; Shen, Z Q; Roel, W; Maja, D H; Francky, C
2010-12-01
As a powerful paradigm for knowledge representation and a simulation mechanism applicable to numerous research and application fields, Fuzzy Cognitive Maps (FCMs) have attracted a great deal of attention from various research communities. However, the traditional FCMs do not provide efficient methods to determine the states of the investigated system and to quantify causalities which are the very foundation of the FCM theory. Therefore in many cases, constructing FCMs for complex causal systems greatly depends on expert knowledge. The manually developed models have a substantial shortcoming due to model subjectivity and difficulties with accessing its reliability. In this paper, we propose a fuzzy neural network to enhance the learning ability of FCMs so that the automatic determination of membership functions and quantification of causalities can be incorporated with the inference mechanism of conventional FCMs. In this manner, FCM models of the investigated systems can be automatically constructed from data, and therefore are independent of the experts. Furthermore, we employ mutual subsethood to define and describe the causalities in FCMs. It provides more explicit interpretation for causalities in FCMs and makes the inference process easier to understand. To validate the performance, the proposed approach is tested in predicting chaotic time series. The simulation studies show the effectiveness of the proposed approach. Copyright © 2010 Elsevier Ltd. All rights reserved.
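The FCM inference mechanism referred to here is the iterated update x_{t+1} = f(W x_t), with f a squashing function and W the causal weight matrix. A minimal sketch with hand-set toy weights rather than a learned map:

```python
import numpy as np

def fcm_step(state, W, steepness=1.0):
    """One inference step of a fuzzy cognitive map: each concept's new
    activation is a sigmoid-squashed weighted sum of all activations."""
    return 1.0 / (1.0 + np.exp(-steepness * (W @ state)))

# Toy 3-concept map with hand-set causal weights.
W = np.array([[0.0, 0.6, -0.4],
              [0.5, 0.0, 0.3],
              [-0.2, 0.7, 0.0]])
x = np.array([0.8, 0.2, 0.5])
for _ in range(5):                 # iterate toward a fixed point
    x = fcm_step(x, W)
print(x.round(3))
```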
Ashkenazy, Haim; Abadi, Shiran; Martz, Eric; Chay, Ofer; Mayrose, Itay; Pupko, Tal; Ben-Tal, Nir
2016-01-01
The degree of evolutionary conservation of an amino acid in a protein or a nucleic acid in DNA/RNA reflects a balance between its natural tendency to mutate and the overall need to retain the structural integrity and function of the macromolecule. The ConSurf web server (http://consurf.tau.ac.il), established over 15 years ago, analyses the evolutionary pattern of the amino/nucleic acids of the macromolecule to reveal regions that are important for structure and/or function. Starting from a query sequence or structure, the server automatically collects homologues, infers their multiple sequence alignment and reconstructs a phylogenetic tree that reflects their evolutionary relations. These data are then used, within a probabilistic framework, to estimate the evolutionary rates of each sequence position. Here we introduce several new features into ConSurf, including automatic selection of the best evolutionary model used to infer the rates, the ability to homology-model query proteins, prediction of the secondary structure of query RNA molecules from sequence, the ability to view the biological assembly of a query (in addition to the single chain), mapping of the conservation grades onto 2D RNA models and an advanced view of the phylogenetic tree that enables interactively rerunning ConSurf with the taxa of a sub-tree. PMID:27166375
NASA Astrophysics Data System (ADS)
den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.
2015-10-01
Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
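At the core of such intra-camera estimation is the pinhole relation that an object of height H at depth Z projects to f·H/Z pixels; pedestrian detections with an assumed average height then constrain the camera geometry. A simplified sketch that ignores camera tilt (all numbers illustrative, not the paper's method):

```python
def pedestrian_depth(pixel_height, focal_px, true_height_m=1.7):
    """Pinhole relation used in pedestrian-based autocalibration:
    a person of height H at depth Z projects to f*H/Z pixels, so
    Z = f*H / h_img. Assumes an average pedestrian height and a
    level camera; tilt handling is omitted."""
    return focal_px * true_height_m / pixel_height

# A 100-pixel-tall detection with an assumed 800 px focal length.
print(pedestrian_depth(100, 800))  # about 13.6 m from the camera
```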
Martin, Daniel B; Holzman, Ted; May, Damon; Peterson, Amelia; Eastham, Ashley; Eng, Jimmy; McIntosh, Martin
2008-11-01
Multiple reaction monitoring (MRM) mass spectrometry identifies and quantifies specific peptides in a complex mixture with very high sensitivity and speed and thus has promise for the high-throughput screening of clinical samples for candidate biomarkers. We have developed an interactive software platform, called MRMer, for managing highly complex MRM-MS experiments, including quantitative analyses using heavy/light isotopic peptide pairs. MRMer parses and extracts information from MS files encoded in the platform-independent mzXML data format. It extracts and infers precursor-product ion transition pairings, computes integrated ion intensities, and permits rapid visual curation for analyses exceeding 1000 precursor-product pairs. Results can be easily output for quantitative comparison of consecutive runs. Additionally, MRMer incorporates features that permit quantitative analysis of experiments including heavy and light isotopic peptide pairs. MRMer is open source and provided under the Apache 2.0 license.
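The transition bookkeeping MRMer performs, pairing precursor and product m/z values and integrating intensities, can be sketched on simplified records (not MRMer's actual parsing code, which reads mzXML):

```python
def integrate_transitions(scans, decimals=2):
    """Sketch of MRM bookkeeping: group scans by (precursor, product)
    m/z pair and sum intensity over time for each transition.

    `scans` is a simplified stand-in for parsed mzXML records:
    (precursor_mz, product_mz, intensity) tuples.
    """
    totals = {}
    for prec, prod, inten in scans:
        key = (round(prec, decimals), round(prod, decimals))  # pair nearby m/z
        totals[key] = totals.get(key, 0.0) + inten
    return totals

# Hypothetical scans: two observations of one transition, one of another.
scans = [(500.25, 628.33, 1200.0), (500.25, 628.33, 1350.0),
         (502.27, 630.34, 800.0)]
print(integrate_transitions(scans))
```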
PSNet: prostate segmentation on MRI based on a convolutional neural network.
Tian, Zhiqiang; Liu, Lizhi; Zhang, Zhenfeng; Fei, Baowei
2018-04-01
Automatic segmentation of the prostate on magnetic resonance images (MRI) has many applications in prostate cancer diagnosis and therapy. We proposed a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage, which uses prostate MRI and the corresponding ground truths as inputs. The learned CNN model can be used to make an inference for pixel-wise segmentation. Experiments were performed on three data sets, which contain prostate MRI of 140 patients. The proposed CNN model of prostate segmentation (PSNet) obtained a mean Dice similarity coefficient of [Formula: see text] as compared to the manually labeled ground truth. Experimental results show that the proposed model could yield satisfactory segmentation of the prostate on MRI.
Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.
Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula
2016-03-01
When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Mannen, Kazutaka; Yukutake, Yohei; Kikugawa, George; Harada, Masatake; Itadera, Kazuhiro; Takenaka, Jun
2018-04-01
The 2015 eruption of Hakone volcano was a very small phreatic eruption, with total erupted ash estimated to be on the order of only 10² m³ and ballistic blocks reaching less than 30 m from the vent. Precursors, however, had been recognized at least 2 months before the eruption, and mitigation measures were taken by the local governments well in advance. In this paper, the course of the precursors, the eruption and the post-eruptive volcanic activity are reviewed, and a preliminary model for the magma-hydrothermal process that caused the unrest and eruption is proposed. Mitigation measures taken during the unrest and eruption are also summarized and discussed. The first precursors observed were an inflation of the deep source and deep low-frequency earthquakes in early April 2015; an earthquake swarm then started in late April. On May 3, steam wells in Owakudani, the largest fumarolic area on the volcano, started to blow out. Seismicity reached its maximum in mid-May and gradually decreased; however, at 7:32 local time on June 29, a shallow open crack formed just beneath Owakudani, as inferred from a sudden tilt change and InSAR analysis. The same day, mud flows and/or debris flows likely started before 11:00 and ash emission began at about 12:30. The volcanic unrest and the eruption of 2015 can be interpreted as a pressure increase in the hydrothermal system, triggered by magma replenishment to a deep magma chamber. Such a pressure increase was also inferred from the 2001 unrest and other minor unrest episodes of Hakone volcano during the twenty-first century. In fact, monitoring of repeated periods of unrest enabled alerting prior to the 2015 eruption. However, since open crack formation seems to occur haphazardly, eruption prediction remains impossible and evacuation in the early phase of volcanic unrest is the only way to mitigate volcanic hazard.
An expert system shell for inferring vegetation characteristics: Prototype help system (Task 1)
NASA Technical Reports Server (NTRS)
1993-01-01
The NASA Vegetation Workbench (VEG) is a knowledge based system that infers vegetation characteristics from reflectance data. A prototype of the VEG subgoal HELP.SYSTEM has been completed and the Help System has been added to the VEG system. It is loaded when the user first clicks on the HELP.SYSTEM option in the Tool Box Menu. The Help System provides users with tools for accessing needed information. It also provides interactive tools the scientist may use to develop new help messages and to modify existing help messages that are attached to VEG screens. The system automatically manages the system and file operations needed to preserve new or modified help messages. The Help System was tested both as a help system development tool and as a help system user tool.
GAMBIT: the global and modular beyond-the-standard-model inference tool
NASA Astrophysics Data System (ADS)
Athron, Peter; Balazs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Dickinson, Hugh; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Lundberg, Johan; McKay, James; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Ripken, Joachim; Rogan, Christopher; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Seo, Seon-Hee; Serra, Nicola; Weniger, Christoph; White, Martin; Wild, Sebastian
2017-11-01
We describe the open-source global fitting package GAMBIT: the Global And Modular Beyond-the-Standard-Model Inference Tool. GAMBIT combines extensive calculations of observables and likelihoods in particle and astroparticle physics with a hierarchical model database, advanced tools for automatically building analyses of essentially any model, a flexible and powerful system for interfacing to external codes, a suite of different statistical methods and parameter scanning algorithms, and a host of other utilities designed to make scans faster, safer and more easily extensible than in the past. Here we give a detailed description of the framework, its design and motivation, and the current models and other specific components presently implemented in GAMBIT. Accompanying papers deal with individual modules and present first GAMBIT results. GAMBIT can be downloaded from gambit.hepforge.org.
NASA Astrophysics Data System (ADS)
Coppi, B.
2018-05-01
The presence of well organized plasma structures around binary systems of collapsed objects [1,2] (black holes and neutron stars) is proposed, in which processes can develop [3] leading to high energy electromagnetic radiation emission immediately before the binary collapse. The formulated theoretical model supporting this argument shows that resonating plasma collective modes can be excited in the relevant magnetized plasma structure. Accordingly, as the collapse of the binary approaches, with angular momentum lost through the emission of gravitational waves [2], the resonance conditions with vertically standing plasma density and magnetic field oscillations are met. Then, secondary plasma modes propagating along the magnetic field are envisioned to be sustained, with mode-particle interactions producing the particle populations responsible for the observable electromagnetic radiation emission. Weak evidence for a precursor to the binary collapse reported in Ref. [2] has been offered by the Agile X-γ-ray observatory [4], while the August 17 (2017) event, identified first by the LIGO-Virgo detection of gravitational waves and featuring the inferred collapse of a neutron star binary, strengthens the evidence for such a precursor. A new set of experimental observations is needed to reassess the presented theory.
Temporal Imagery. An Approach to Reasoning about Time for Planning and Problem Solving.
1985-10-01
[Fragmentary OCR excerpt; only section headings and sentence fragments are recoverable: 3.6 Hypothesis generation and abductive inference; 3.7 Facilities for automatic projection. The surviving text concerns reasoning about events and simultaneous actions, noting that, without care, considerable effort can be wasted just determining whether two time points coincide, and that although the planner may construct some plan, it may ignore opportunities for merging tasks and consolidating effort.]
Semi-Supervised Multi-View Learning for Gene Network Reconstruction
Ceci, Michelangelo; Pio, Gianvito; Kuzmanovski, Vladimir; Džeroski, Sašo
2015-01-01
The task of gene regulatory network reconstruction from high-throughput data is receiving increasing attention in recent years. As a consequence, many inference methods for solving this task have been proposed in the literature. It has been recently observed, however, that no single inference method performs optimally across all datasets. It has also been shown that the integration of predictions from multiple inference methods is more robust and shows high performance across diverse datasets. Inspired by this research, in this paper, we propose a machine learning solution which learns to combine predictions from multiple inference methods. While this approach adds complexity to the inference process, we expect it to carry substantial benefits. These would come from the automatic adaptation to patterns in the outputs of individual inference methods, so that regulatory interactions can be identified more reliably when these patterns occur. This article demonstrates the benefits (in terms of accuracy of the reconstructed networks) of the proposed method, which exploits an iterative, semi-supervised ensemble-based algorithm. The algorithm learns to combine the interactions predicted by many different inference methods in the multi-view learning setting. The empirical evaluation of the proposed algorithm on a prokaryotic model organism (E. coli) and on a eukaryotic model organism (S. cerevisiae) clearly shows improved performance over state-of-the-art methods. The results indicate that gene regulatory network reconstruction for the real datasets is more difficult for S. cerevisiae than for E. coli. The software, all the datasets used in the experiments and all the results are available for download at the following link: http://figshare.com/articles/Semi_supervised_Multi_View_Learning_for_Gene_Network_Reconstruction/1604827. PMID:26641091
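One simple way to combine predictions from multiple inference methods, shown below, is rank averaging of edge confidence scores. The paper's algorithm is an iterative, semi-supervised multi-view ensemble, so this is only a stand-in for the general idea, with made-up scores.

```python
import numpy as np
from scipy.stats import rankdata

# Rows: candidate regulator->target edges; columns: confidence scores
# produced by three different inference methods on the same expression data.
scores = np.array([
    [0.90, 0.40, 0.75],
    [0.10, 0.80, 0.20],
    [0.60, 0.70, 0.65],
])

# Rank within each method so differently scaled scores become comparable,
# then average ranks across methods (higher = more ensemble support).
ranks = np.apply_along_axis(rankdata, 0, scores)
consensus = ranks.mean(axis=1)
print(consensus)
```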
Automatic Carbon Dioxide-Methane Gas Sensor Based on the Solubility of Gases in Water
Cadena-Pereda, Raúl O.; Rivera-Muñoz, Eric M.; Herrera-Ruiz, Gilberto; Gomez-Melendez, Domingo J.; Anaya-Rivera, Ely K.
2012-01-01
Biogas methane content is a relevant variable in anaerobic digestion processing where knowledge of process kinetics or an early indicator of digester failure is needed. The contribution of this work is the development of a novel, simple and low cost automatic carbon dioxide-methane gas sensor based on the solubility of gases in water as the precursor of a sensor for biogas quality monitoring. The device described in this work was used for determining the composition of binary mixtures, such as carbon dioxide-methane, in the range of 0–100%. The design and implementation of a digital signal processor and control system into a low-cost Field Programmable Gate Array (FPGA) platform has permitted the successful application of data acquisition, data distribution and digital data processing, making the construction of a standalone carbon dioxide-methane gas sensor possible. PMID:23112626
The precursors of double dissociation between reading and spelling in a transparent orthography.
Torppa, Minna; Georgiou, George K; Niemi, Pekka; Lerkkanen, Marja-Kristiina; Poikkeus, Anna-Maija
2017-04-01
Researchers and clinical practitioners hold mixed views on whether reading and spelling difficulties should be treated as a combined construct or as separate ones. This study examined (a) whether a double dissociation between reading and spelling can be identified in a transparent orthography (Finnish) and (b) the cognitive and noncognitive precursors of this phenomenon. Finnish-speaking children (n = 1963) were assessed on reading fluency and spelling in grades 1, 2, 3, and 4. Dissociation groups in reading and spelling were formed based on stable difficulties in grades 1-4. The groups were compared in kindergarten phonological awareness, rapid automatized naming, letter knowledge, home literacy environment, and task-avoidant behavior. The results indicated that the double dissociation groups could be identified even in the context of a highly transparent orthography: 41 children were unexpected poor spellers (SD), 36 were unexpected poor readers (RD), and 59 were poor in both reading and spelling (RSD). The RSD group performed poorest on all cognitive skills and showed the most task-avoidant behavior, the RD group performed poorly particularly on rapid automatized naming and letter knowledge, and the SD group had difficulties on phonological awareness and letter knowledge. Fathers' shared book reading was less frequent in the RD and RSD groups than in the other groups. The findings suggest that there are discernible double dissociation groups with distinct cognitive profiles. This further suggests that the identification of difficulties in Finnish and the planning of teaching and remediation practices should include both reading and spelling assessments.
Wright, Adam; Pang, Justine; Feblowitz, Joshua C; Maloney, Francine L; Wilcox, Allison R; Ramelson, Harley Z; Schneider, Louise I; Bates, David W
2011-01-01
Accurate knowledge of a patient's medical problems is critical for clinical decision making, quality measurement, research, billing and clinical decision support. Common structured sources of problem information include the patient problem list and billing data; however, these sources are often inaccurate or incomplete. To develop and validate methods of automatically inferring patient problems from clinical and billing data, and to provide a knowledge base for inferring problems. We identified 17 target conditions and designed and validated a set of rules for identifying patient problems based on medications, laboratory results, billing codes, and vital signs. A panel of physicians provided input on a preliminary set of rules. Based on this input, we tested candidate rules on a sample of 100,000 patient records to assess their performance compared to gold standard manual chart review. The physician panel selected a final rule for each condition, which was validated on an independent sample of 100,000 records to assess its accuracy. Seventeen rules were developed for inferring patient problems. Analysis using a validation set of 100,000 randomly selected patients showed high sensitivity (range: 62.8-100.0%) and positive predictive value (range: 79.8-99.6%) for most rules. Overall, the inference rules performed better than using either the problem list or billing data alone. We developed and validated a set of rules for inferring patient problems. These rules have a variety of applications, including clinical decision support, care improvement, augmentation of the problem list, and identification of patients for research cohorts.
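A minimal sketch of rule-based problem inference from medications, laboratory results, vital signs, and billing codes. The thresholds, drug names, and codes below are hypothetical placeholders, not the validated rules from the study.

```python
def infer_problems(patient):
    problems = set()
    # Diabetes: suggestive medication or an elevated HbA1c result.
    if "metformin" in patient["medications"] or patient["labs"].get("hba1c", 0) >= 6.5:
        problems.add("diabetes mellitus")
    # Hypertension: antihypertensive plus repeatedly elevated blood pressure.
    bps = patient["vitals"].get("systolic_bp", [])
    if "lisinopril" in patient["medications"] and sum(bp >= 140 for bp in bps) >= 2:
        problems.add("hypertension")
    # Billing evidence can corroborate (here, a made-up ICD-9 401.x check).
    if any(code.startswith("401") for code in patient["billing_codes"]):
        problems.add("hypertension")
    return problems

patient = {
    "medications": {"metformin", "lisinopril"},
    "labs": {"hba1c": 7.1},
    "vitals": {"systolic_bp": [150, 144, 138]},
    "billing_codes": ["401.9"],
}
print(infer_problems(patient))  # both conditions inferred
```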
Learning From Others and Spontaneous Exploration: A Cross-Cultural Investigation.
Shneidman, Laura; Gweon, Hyowon; Schulz, Laura E; Woodward, Amanda L
2016-05-01
How does early social experience affect children's inferences and exploration? Following prior work on children's reasoning in pedagogical contexts, this study examined U.S. children with less experience in formal schooling and Yucatec Mayan children whose early social input is predominantly observational. In Experiment 1, U.S. 2-year-olds (n = 77) showed more restricted exploration of a toy following a pedagogical demonstration than an interrupted, accidental, or no demonstration (baseline). In Experiment 2, Yucatec Mayan and U.S. 2-year-olds (n = 66) showed more restricted exploration following a pedagogical than an observational demonstration, while only Mayan children showed more restriction with age. These results suggest that although schooling is not a necessary precursor for sensitivity to pedagogy, early social experience may influence children's inferences and exploration in pedagogical contexts. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.
Instinctive analytics for coalition operations (Conference Presentation)
NASA Astrophysics Data System (ADS)
de Mel, Geeth R.; La Porta, Thomas; Pham, Tien; Pearson, Gavin
2017-05-01
The success of future military coalition operations—be they combat or humanitarian—will increasingly depend on a system's ability to share data and processing services (e.g. aggregation, summarization, fusion), and automatically compose services in support of complex tasks at the network edge. We call such an infrastructure instinctive—i.e., an infrastructure that reacts instinctively to address the analytics task at hand. However, developing such an infrastructure is made complex for the coalition environment due to its dynamism both in terms of user requirements and service availability. In order to address the above challenge, in this paper, we highlight our research vision and sketch some initial solutions in the problem domain. Specifically, we propose means to (1) automatically infer formal task requirements from mission specifications; (2) discover data, services, and their features automatically to satisfy the identified requirements; (3) create and augment shared domain models automatically; (4) efficiently offload services to the network edge and across coalition boundaries adhering to their computational properties and costs; and (5) optimally allocate and adjust services while respecting the constraints of the operating environment and service fit. We envision that the research will result in a framework which enables self-description, discovery, and assembly capabilities for both data and services in support of coalition mission goals.
Double ErrP Detection for Automatic Error Correction in an ERP-Based BCI Speller.
Cruz, Aniana; Pires, Gabriel; Nunes, Urbano J
2018-01-01
Brain-computer interface (BCI) is a useful device for people with severe motor disabilities. However, due to its low speed and low reliability, BCI still has a very limited application in daily real-world tasks. This paper proposes a P300-based BCI speller combined with a double error-related potential (ErrP) detection to automatically correct erroneous decisions. This novel approach introduces a second error detection to infer whether a wrong automatic correction also elicits a second ErrP. Thus, two single-trial responses, instead of one, contribute to the final selection, improving the reliability of error detection. Moreover, to increase error detection, the evoked potential detected as target by the P300 classifier is combined with the evoked error potential at the feature level. Discriminable error and positive potentials (responses to correct feedback) were clearly identified. The proposed approach was tested on nine healthy participants and one tetraplegic participant. The online average accuracies for the first and second ErrPs were 88.4% and 84.8%, respectively. With automatic correction, we achieved an improvement of around 5%, reaching 89.9% spelling accuracy at an effective rate of 2.92 symbols/min. The proposed approach revealed that double ErrP detection can improve the reliability and speed of BCI systems.
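The decision flow can be sketched as follows, with classifier internals abstracted behind a hypothetical detect_errp callable: a first ErrP check gates the speller's selection, an automatic correction substitutes the next-best candidate, and a second ErrP check decides whether the correction stands (the revert-on-second-ErrP behavior is an assumption of this sketch).

```python
def finalize_selection(selection, second_best, detect_errp):
    """detect_errp(event) -> True if an error-related potential is detected."""
    if not detect_errp("feedback:" + selection):
        return selection                    # first feedback accepted
    corrected = second_best                 # automatic correction attempt
    if not detect_errp("feedback:" + corrected):
        return corrected                    # correction accepted (no 2nd ErrP)
    return selection                        # 2nd ErrP: correction rejected

# Toy usage: the first feedback elicits an ErrP but the correction does not.
responses = {"feedback:A": True, "feedback:B": False}
print(finalize_selection("A", "B", lambda e: responses[e]))  # -> "B"
```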
NIFTY - Numerical Information Field Theory. A versatile PYTHON library for signal inference
NASA Astrophysics Data System (ADS)
Selig, M.; Bell, M. R.; Junklewitz, H.; Oppermann, N.; Reinecke, M.; Greiner, M.; Pachajoa, C.; Enßlin, T. A.
2013-06-01
NIFTy (Numerical Information Field Theory) is a software package designed to enable the development of signal inference algorithms that operate regardless of the underlying spatial grid and its resolution. Its object-oriented framework is written in Python, although it accesses libraries written in Cython, C++, and C for efficiency. NIFTy offers a toolkit that abstracts discretized representations of continuous spaces, fields in these spaces, and operators acting on fields into classes. Thereby, the correct normalization of operations on fields is taken care of automatically without concerning the user. This allows for an abstract formulation and programming of inference algorithms, including those derived within information field theory. Thus, NIFTy permits its user to rapidly prototype algorithms in 1D, and then apply the developed code in higher-dimensional settings of real world problems. The set of spaces on which NIFTy operates comprises point sets, n-dimensional regular grids, spherical spaces, their harmonic counterparts, and product spaces constructed as combinations of those. The functionality and diversity of the package is demonstrated by a Wiener filter code example that successfully runs without modification regardless of the space on which the inference problem is defined. NIFTy homepage http://www.mpa-garching.mpg.de/ift/nifty/; Excerpts of this paper are part of the NIFTy source code and documentation.
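For reference, the Wiener filter that the package demonstrates can be written in a few lines of plain NumPy for one fixed grid: assuming d = Rs + n with Gaussian signal and noise, the posterior mean is m = Dj with D = (S^-1 + R^T N^-1 R)^-1 and information source j = R^T N^-1 d. NIFTy's contribution is making such code independent of the grid; this hand-rolled version is deliberately tied to one.

```python
import numpy as np

rng = np.random.default_rng(1)
npix = 64
S = np.diag(1.0 / (1.0 + np.arange(npix)))      # toy signal covariance
N = 0.1 * np.eye(npix)                           # noise covariance
R = np.eye(npix)                                 # trivial instrument response

s = rng.multivariate_normal(np.zeros(npix), S)   # draw a signal realization
d = R @ s + rng.multivariate_normal(np.zeros(npix), N)

j = R.T @ np.linalg.solve(N, d)                                 # information source
D = np.linalg.inv(np.linalg.inv(S) + R.T @ np.linalg.solve(N, R))
m = D @ j                                                        # posterior mean
print("residual rms:", np.sqrt(np.mean((m - s) ** 2)))
```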
Robson, B; Boray, S
2018-04-01
Theoretical and methodological principles are presented for the construction of very large inference nets for odds calculations, composed of hundreds, thousands, or more elements, in this paper generated by structured data mining. It is argued that the usual small inference nets can sometimes represent rather simple, arbitrary estimates. Examples of applications in clinical and public health data analysis, medical claims data and detection of irregular entries, and bioinformatics data are presented. Construction of large nets benefits from application of a theory of expected information for sparse data and the Dirac notation and algebra. The extent to which these are important here is briefly discussed. Purposes of the study include (a) exploration of the properties of large inference nets and perturbation and tacit conditionality models, (b) using these to propose simpler models, including one that a physician could use routinely, analogous to a "risk score", (c) examination of the merit of describing optimal performance in a single measure that combines accuracy, specificity, and sensitivity in place of a ROC curve, and (d) relationship to methods for detecting anomalous and potentially fraudulent data. Copyright © 2018 Elsevier Ltd. All rights reserved.
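A sketch of the kind of simplified additive model the paper proposes as a physician-usable "risk score": evidence items contribute log-odds weights that are summed onto a prior. The weights and prevalence below are hypothetical; in the paper they would derive from expected-information estimates over sparse data.

```python
import math

log_odds_weights = {           # evidence item -> weight in natural-log odds
    "age_over_65": 0.9,
    "smoker": 0.7,
    "elevated_marker": 1.2,
}
prior_log_odds = math.log(0.05 / 0.95)   # 5% baseline prevalence

def risk(findings):
    z = prior_log_odds + sum(log_odds_weights[f] for f in findings)
    return 1.0 / (1.0 + math.exp(-z))    # convert log odds back to probability

print(round(risk({"age_over_65", "elevated_marker"}), 3))
```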
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sterling, Alphonse C.; Moore, Ronald L.; Harra, Louise K., E-mail: alphonse.sterling@nasa.gov, E-mail: ron.moore@nasa.gov, E-mail: lkh@mssl.ucl.ac.uk
2011-12-10
Two GOES sub-C-class precursor eruptions occurred within ~10 hr prior to and from the same active region as the 2006 December 13 X4.3-class flare. Each eruption generated a coronal mass ejection (CME) with center laterally far offset (≳45°) from the co-produced bright flare. Explaining such CME-to-flare lateral offsets in terms of the standard model for solar eruptions has been controversial. Using Hinode/X-Ray Telescope (XRT) and EUV Imaging Spectrometer (EIS) data, and Solar and Heliospheric Observatory (SOHO)/Large Angle and Spectrometric Coronagraph (LASCO) and Michelson Doppler Imager (MDI) data, we find or infer the following. (1) The first precursor was a 'magnetic-arch-blowout' event, where an initial standard-model eruption of the active region's core field blew out a lobe on one side of the active region's field. (2) The second precursor began similarly, but the core-field eruption stalled in the side-lobe field, with the side-lobe field erupting ~1 hr later to make the CME either by finally being blown out or by destabilizing and undergoing a standard-model eruption. (3) The third eruption, the X-flare event, blew out side lobes on both sides of the active region and clearly displayed characteristics of the standard model. (4) The two precursors were offset due in part to the CME originating from a side-lobe coronal arcade that was offset from the active region's core. The main eruption (and to some extent probably the precursor eruptions) was offset primarily because it pushed against the field of the large sunspot as it escaped outward. (5) All three CMEs were plausibly produced by a suitable version of the standard model.
Integrated Approach to Reconstruction of Microbial Regulatory Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodionov, Dmitry A; Novichkov, Pavel S
2013-11-04
This project had the goal of developing an integrated bioinformatics platform for genome-scale inference and visualization of transcriptional regulatory networks (TRNs) in bacterial genomes. The work was done in the Sanford-Burnham Medical Research Institute (SBMRI, P.I. D.A. Rodionov) and Lawrence Berkeley National Laboratory (LBNL, co-P.I. P.S. Novichkov). The developed computational resources include: (1) the RegPredict web platform for TRN inference and regulon reconstruction in microbial genomes, and (2) the RegPrecise database for collection, visualization and comparative analysis of transcriptional regulons reconstructed by comparative genomics. These analytical resources were selected as key components in the DOE Systems Biology KnowledgeBase (SBKB). The high-quality data accumulated in RegPrecise will provide essential datasets of reference regulons in diverse microbes to enable automatic reconstruction of draft TRNs in newly sequenced genomes. We outline our progress toward the three aims of this grant proposal, which were: develop an integrated platform for genome-scale regulon reconstruction; infer regulatory annotations in several groups of bacteria and build reference collections of microbial regulons; and develop a KnowledgeBase on microbial transcriptional regulation.
Engineered biofiltration for the removal of disinfection by-product precursors and genotoxicity.
McKie, Michael J; Taylor-Edmonds, Liz; Andrews, Susan A; Andrews, Robert C
2015-09-15
Disinfection by-products (DBPs) are formed when naturally occurring organic matter reacts with chlorine used in drinking water treatment, and DBPs formed in chlorinated drinking water samples have been shown to cause a genotoxic response. The objective of the current study was to further understand the principles of biofiltration and the resulting impacts on the formation of DBPs and genotoxicity. Pilot-scale systems were utilized to assess the performance of engineered biofilters enhanced with hydrogen peroxide, in-line coagulants, and nutrients when compared to passively operated biofilters and conventional treatment (coagulation, flocculation, sedimentation, non-biological filtration). Organic fractionation was completed using liquid chromatography-organic carbon detection (LC-OCD). Water samples were chlorinated after collection and examined for the removal of trihalomethane (THM), haloacetic acid (HAA), and adsorbable organic halide (AOX) precursors. Additionally, the formation potential of two halogenated furanones, 3-chloro-4(dichloromethyl)-2(5H)-furanone (MX) and mucochloric acid (MCA), and genotoxicity was determined. Biofiltration was shown to preferentially remove more DBP precursors than dissolved organic carbon (DOC). Formation potential of the unregulated DBPs, including MX and MCA, and genotoxic response were shown to be correlated with THM formation. These results imply that monitoring for THMs and HAAs provides insight into the formation of more mutagenic DBPs such as halogenated furanones, and that biofiltration may preferentially remove DBP precursors at a rate exceeding the removal of DOC. Copyright © 2015 Elsevier Ltd. All rights reserved.
Using Historical Data to Automatically Identify Air-Traffic Control Behavior
NASA Technical Reports Server (NTRS)
Lauderdale, Todd A.; Wu, Yuefeng; Tretto, Celeste
2014-01-01
This project seeks to develop statistics-based machine learning models to characterize the types of errors present when using current systems to predict future aircraft states. These models will be data-driven, based on large quantities of historical data. Once these models are developed, they will be used to infer situations in the historical data where an air-traffic controller intervened on an aircraft's route, even when there is no direct recording of this action.
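One plausible realization of this inference, sketched below under assumed data: fit a Gaussian to nominal trajectory-prediction residuals and flag statistically outlying errors as moments where a controller likely intervened. The thresholds, units, and data are illustrative, not the project's actual models.

```python
import numpy as np

rng = np.random.default_rng(2)
residuals = rng.normal(0.0, 0.5, 1000)            # nominal cross-track errors (nmi)
mu, sigma = residuals.mean(), residuals.std()

observed = np.array([0.2, -0.4, 3.1, 0.1, 2.8])   # prediction errors on one flight
z = np.abs(observed - mu) / sigma
intervention_suspected = z > 3.0                  # e.g., an unrecorded reroute
print(intervention_suspected)
```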
Bayesian CP Factorization of Incomplete Tensors with Automatic Rank Determination.
Zhao, Qibin; Zhang, Liqing; Cichocki, Andrzej
2015-09-01
CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful technique for tensor completion through explicitly capturing the multilinear latent factors. The existing CP algorithms require the tensor rank to be manually specified; however, the determination of tensor rank remains a challenging problem, especially for CP rank. In addition, existing approaches do not take into account uncertainty information of latent factors, as well as missing entries. To address these issues, we formulate CP factorization using a hierarchical probabilistic model and employ a fully Bayesian treatment by incorporating a sparsity-inducing prior over multiple latent factors and the appropriate hyperpriors over all hyperparameters, resulting in automatic rank determination. To learn the model, we develop an efficient deterministic Bayesian inference algorithm, which scales linearly with data size. Our method is characterized as a tuning parameter-free approach, which can effectively infer underlying multilinear factors with a low-rank constraint, while also providing predictive distributions over missing entries. Extensive simulations on synthetic data illustrate the intrinsic capability of our method to recover the ground-truth of CP rank and prevent the overfitting problem, even when a large amount of entries are missing. Moreover, the results from real-world applications, including image inpainting and facial image synthesis, demonstrate that our method outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.
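To convey the rank-determination intuition, here is a deterministic caricature in NumPy: alternating least squares started from an overestimated rank, with mild ridge regularization, pruning components whose energy collapses. The paper's method is fully Bayesian (sparsity-inducing priors, predictive distributions over missing entries); this sketch handles neither missing data nor uncertainty.

```python
import numpy as np

def khatri_rao(B, C):
    # Column-wise Kronecker product: row (j, k) holds B[j, :] * C[k, :].
    return (B[:, None, :] * C[None, :, :]).reshape(-1, B.shape[1])

def cp_ard(X, rank=8, iters=100, lam=1e-3, prune_tol=1e-3):
    I, J, K = X.shape
    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((n, rank)) for n in (I, J, K))
    for _ in range(iters):
        r = A.shape[1]
        A = X.reshape(I, -1) @ khatri_rao(B, C) @ np.linalg.inv((B.T @ B) * (C.T @ C) + lam * np.eye(r))
        B = np.moveaxis(X, 1, 0).reshape(J, -1) @ khatri_rao(A, C) @ np.linalg.inv((A.T @ A) * (C.T @ C) + lam * np.eye(r))
        C = np.moveaxis(X, 2, 0).reshape(K, -1) @ khatri_rao(A, B) @ np.linalg.inv((A.T @ A) * (B.T @ B) + lam * np.eye(r))
        # Crude stand-in for ARD: drop components whose energy has collapsed.
        w = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=0) * np.linalg.norm(C, axis=0)
        keep = w > prune_tol * w.max()
        A, B, C = A[:, keep], B[:, keep], C[:, keep]
    return A, B, C

rng = np.random.default_rng(1)
U, V, W = (rng.standard_normal((n, 3)) for n in (10, 11, 12))
X = np.einsum('ir,jr,kr->ijk', U, V, W)        # noiseless rank-3 tensor
A, B, C = cp_ard(X)
print("retained components:", A.shape[1])
```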
Quakefinder: A scalable data mining system for detecting earthquakes from space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stolorz, P.; Dean, C.
1996-12-31
We present an application of novel massively parallel data mining techniques to highly precise inference of important physical processes from remote sensing imagery. Specifically, we have developed and applied a system, Quakefinder, that automatically detects and measures tectonic activity in the earth's crust by examination of satellite data. We have used Quakefinder to automatically map the direction and magnitude of ground displacements due to the 1992 Landers earthquake in Southern California, over a spatial region of several hundred square kilometers, at a resolution of 10 meters, to a (sub-pixel) precision of 1 meter. This is the first calculation that has ever been able to extract area-mapped information about 2D tectonic processes at this level of detail. We outline the architecture of the Quakefinder system, based upon a combination of techniques drawn from the fields of statistical inference, massively parallel computing and global optimization. We confirm the overall correctness of the procedure by comparison of our results with known locations of targeted faults obtained by careful and time-consuming field measurements. The system also performs knowledge discovery by indicating novel unexplained tectonic activity away from the primary faults that has never before been observed. We conclude by discussing the future potential of this data mining system in the broad context of studying subtle spatio-temporal processes within massive image streams.
Larson, James H.; Richardson, William B.; Vallazza, Jon; Bartsch, Lynn; Bartsch, Michelle
2017-01-01
Inferences about ecological structure and function are often made using elemental or macromolecular tracers of food web structure. For example, inferences about food chain length are often made using stable isotope ratios of top predators and consumer food sources are often inferred from both stable isotopes and fatty acid (FA) content in consumer tissues. The use of FAs as tracers implies some degree of macromolecular conservation across trophic interactions, but many FAs are subject to physiological alteration and animals may produce those FAs from precursors in response to food deficiencies. We measured 41 individual FAs and several aggregate FA metrics in two filter-feeding taxa to (1) assess ecological variation in food availability and (2) identify potential drivers of among-site variation in FA content. These taxa were filter feeding caddisflies (Family Hydropyschidae) and dreissenid mussels (Genus Dreissena), which both consume seston. Stable isotopic composition (C and N) in these taxa co-varied across 13 sites in the Great Lakes region of North America, indicating they fed on very similar food resources. However, co-variation in FA content was very limited, with only one common FA co-varying across this gradient (α-linolenic acid; ALA), suggesting these taxa accumulate FAs very differently even when exposed to the same foods. Based on these results, among-site variation in ALA content in both consumers does appear to be driven by food resources, along with several other FAs in dreissenid mussels. We conclude that single-taxa measurements of FA content cannot be used to infer FA availability in food resources.
Long-Branch Attraction Bias and Inconsistency in Bayesian Phylogenetics
Kolaczkowski, Bryan; Thornton, Joseph W.
2009-01-01
Bayesian inference (BI) of phylogenetic relationships uses the same probabilistic models of evolution as its precursor maximum likelihood (ML), so BI has generally been assumed to share ML's desirable statistical properties, such as largely unbiased inference of topology given an accurate model and increasingly reliable inferences as the amount of data increases. Here we show that BI, unlike ML, is biased in favor of topologies that group long branches together, even when the true model and prior distributions of evolutionary parameters over a group of phylogenies are known. Using experimental simulation studies and numerical and mathematical analyses, we show that this bias becomes more severe as more data are analyzed, causing BI to infer an incorrect tree as the maximum a posteriori phylogeny with asymptotically high support as sequence length approaches infinity. BI's long branch attraction bias is relatively weak when the true model is simple but becomes pronounced when sequence sites evolve heterogeneously, even when this complexity is incorporated in the model. This bias—which is apparent under both controlled simulation conditions and in analyses of empirical sequence data—also makes BI less efficient and less robust to the use of an incorrect evolutionary model than ML. Surprisingly, BI's bias is caused by one of the method's stated advantages—that it incorporates uncertainty about branch lengths by integrating over a distribution of possible values instead of estimating them from the data, as ML does. Our findings suggest that trees inferred using BI should be interpreted with caution and that ML may be a more reliable framework for modern phylogenetic analysis. PMID:20011052
Development of a parameter optimization technique for the design of automatic control systems
NASA Technical Reports Server (NTRS)
Whitaker, P. H.
1977-01-01
Parameter optimization techniques for the design of linear automatic control systems that are applicable to both continuous and digital systems are described. The model performance index is used as the optimization criterion because of the physical insight that can be attached to it. The design emphasis is to start with the simplest system configuration that experience indicates would be practical. Design parameters are specified, and a digital computer program is used to select that set of parameter values which minimizes the performance index. The resulting design is examined, and complexity, through the use of more complex information processing or more feedback paths, is added only if performance fails to meet operational specifications. System performance specifications are assumed to be such that the desired step function time response of the system can be inferred.
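A modern rendering of the same procedure, assuming a toy plant and desired model response: a numerical optimizer selects the gains that minimize an integrated-squared-error performance index between the closed-loop step response and the model step response.

```python
import numpy as np
from scipy.optimize import minimize

dt, T = 0.01, 5.0
t = np.arange(0.0, T, dt)

def step_response(kp, kd):
    """Closed-loop unit-step response of a double integrator under PD control."""
    x = v = 0.0
    out = []
    for _ in t:
        u = kp * (1.0 - x) - kd * v     # drive the output toward the step command
        v += u * dt
        x += v * dt
        out.append(x)
    return np.array(out)

desired = 1.0 - np.exp(-2.0 * t)        # desired model step response

def performance_index(params):
    kp, kd = params
    return np.sum((step_response(kp, kd) - desired) ** 2) * dt

result = minimize(performance_index, x0=[1.0, 1.0], method="Nelder-Mead")
print("optimal gains:", result.x)
```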
Iterative refinement of implicit boundary models for improved geological feature reproduction
NASA Astrophysics Data System (ADS)
Martin, Ryan; Boisvert, Jeff B.
2017-12-01
Geological domains contain non-stationary features that cannot be described by a single direction of continuity. Non-stationary estimation frameworks generate more realistic curvilinear interpretations of subsurface geometries. A radial basis function (RBF) based implicit modeling framework using domain decomposition is developed that permits introduction of locally varying orientations and magnitudes of anisotropy for boundary models to better account for the local variability of complex geological deposits. The interpolation framework is paired with a method to automatically infer the locally predominant orientations, which results in a rapid and robust iterative non-stationary boundary modeling technique that can refine locally anisotropic geological shapes automatically from the sample data. The method also permits quantification of the volumetric uncertainty associated with the boundary modeling. The methodology is demonstrated on a porphyry dataset and shows improved local geological features.
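A minimal sketch of implicit boundary modeling with RBFs, using SciPy's RBFInterpolator on assumed signed-distance samples: the boundary is the zero level set of the interpolant. This shows only the globally isotropic baseline; the paper's contribution is iteratively inferring locally varying anisotropy within decomposed subdomains.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
pts = rng.uniform(-1, 1, (200, 2))                         # sample locations
# Signed distance to a made-up elliptical domain: + outside, - inside.
signed_dist = np.hypot(pts[:, 0], 0.5 * pts[:, 1]) - 0.6

model = RBFInterpolator(pts, signed_dist, kernel="thin_plate_spline")

# Evaluate the implicit function on a grid; the boundary is its zero crossing.
gx, gy = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
values = model(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
inside = values < 0
print("estimated domain volume fraction:", inside.mean())
```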
Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach
Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei
2015-01-01
Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies. PMID:26705505
NASA Astrophysics Data System (ADS)
Chang, C.; Wang, J.; Liu, S.; Shao, M.; Zhang, Y.; Zhu, T.; Shiu, C.; Lai, C.
2010-12-01
Two on-site continuous measurements of ozone and its precursors in two megacities of China were carried out, at an urban site in Beijing and a suburban site near Guangzhou in the Pearl River Delta (PRD), to estimate precursor consumption and to assess its relationship with the oxidant (O3+NO2) formation level. An observation-based method (OBM) with the precursor consumption concept was adopted to assess the relationship between oxidant production and the amounts of photochemically consumed non-methane hydrocarbons (NMHCs). In this approach, the ratio of ethylbenzene to m,p-xylenes was used to estimate the degree of photochemical processing, as well as the amounts of NMHCs photochemically consumed by reaction with OH. When correlating the observed oxidant with the observed NMHC concentration, both areas revealed nearly no to low correlation. However, fair to good correlations (R2=0.68 for Beijing, 0.53 for PRD) existed between the observed oxidant level and the degree of photochemical processing (ethylbenzene/m,p-xylenes). Furthermore, after taking the consumption approach to estimate the consumed amounts of NMHCs, an interesting finding is that a definite correlation existed between the observed oxidant level and the total consumed NMHCs. The good correlations (R2=0.83 for Beijing, 0.81 for PRD) imply that the ambient oxidant level is correlated with the amount of consumed NMHCs. The results for the two megacities in China obtained by using the OBM with the precursor consumption concept provide another pathway to explore the relationship between photochemically produced oxidant and consumed precursors, and will be helpful for validating model results and reducing the uncertainty of model predictions. However, the method leaves some room for uncertainty, since factors such as injection of fresh precursor emissions and additional boundary ozone could affect the estimation of consumed NMHCs and observed oxidant levels. Approaches for assessing the influence of these interfering factors would help to acquire more reliable inferences about the relationship between oxidant formation and precursor consumption.
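The photochemical-clock calculation at the heart of this approach can be sketched as follows: because m,p-xylenes react with OH faster than ethylbenzene, growth of the ethylbenzene/m,p-xylenes ratio measures OH exposure, from which the consumed amount of any NMHC can be back-calculated. The rate constants below are literature-typical values; the initial (emission) ratio is a hypothetical placeholder.

```python
import math

k_eb, k_xy = 7.0e-12, 1.9e-11   # OH rate constants, cm^3 molecule^-1 s^-1

def oh_exposure(ratio_obs, ratio_init):
    """OH exposure (molecule cm^-3 s) from ethylbenzene/m,p-xylenes ratios."""
    return math.log(ratio_obs / ratio_init) / (k_xy - k_eb)

def consumed(conc_obs, k, exposure):
    """Photochemically consumed amount implied by the observed concentration."""
    return conc_obs * (math.exp(k * exposure) - 1.0)

exp_oh = oh_exposure(ratio_obs=3.0, ratio_init=1.5)   # ratio grows with aging
print("consumed toluene (ppb):", consumed(2.0, 5.6e-12, exp_oh))
```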
Robotham, Scott A.; Horton, Andrew P.; Cannon, Joe R.; Cotham, Victoria C.; Marcotte, Edward M.; Brodbelt, Jennifer S.
2016-01-01
De novo peptide sequencing by mass spectrometry represents an important strategy for characterizing novel peptides and proteins, in which a peptide’s amino acid sequence is inferred directly from the precursor peptide mass and tandem mass spectrum (MS/MS or MS3) fragment ions, without comparison to a reference proteome. This method is ideal for organisms or samples lacking a complete or well-annotated reference sequence set. One of the major barriers to de novo spectral interpretation arises from confusion of N- and C-terminal ion series due to the symmetry between b and y ion pairs created by collisional activation methods (or c, z ions for electron-based activation methods). This is known as the ‘antisymmetric path problem’ and leads to inverted amino acid subsequences within a de novo reconstruction. Here, we combine several key strategies for de novo peptide sequencing into a single high-throughput pipeline: high efficiency carbamylation blocks lysine side chains, and subsequent tryptic digestion and N-terminal peptide derivatization with the ultraviolet chromophore AMCA yields peptides susceptible to 351 nm ultraviolet photodissociation (UVPD). UVPD-MS/MS of the AMCA-modified peptides then predominantly produces y ions in the MS/MS spectra, specifically addressing the antisymmetric path problem. Finally, the program UVnovo applies a random forest algorithm to automatically learn from and then interpret UVPD mass spectra, passing results to a hidden Markov model for de novo sequence prediction and scoring. We show this combined strategy provides high performance de novo peptide sequencing, enabling the de novo sequencing of thousands of peptides from an E. coli lysate at high confidence. PMID:26938041
How social cognition can inform social decision making.
Lee, Victoria K; Harris, Lasana T
2013-12-25
Social decision-making is often complex, requiring the decision-maker to make inferences of others' mental states in addition to engaging traditional decision-making processes like valuation and reward processing. A growing body of research in neuroeconomics has examined decision-making involving social and non-social stimuli to explore activity in brain regions such as the striatum and prefrontal cortex, largely ignoring the power of the social context. Perhaps more complex processes may influence decision-making in social vs. non-social contexts. Years of social psychology and social neuroscience research have documented a multitude of processes (e.g., mental state inferences, impression formation, spontaneous trait inferences) that occur upon viewing another person. These processes rely on a network of brain regions including medial prefrontal cortex (MPFC), superior temporal sulcus (STS), temporal parietal junction, and precuneus among others. Undoubtedly, these social cognition processes affect social decision-making since mental state inferences occur spontaneously and automatically. Few studies have looked at how these social inference processes affect decision-making in a social context despite the capability of these inferences to serve as predictions that can guide future decision-making. Here we review and integrate the person perception and decision-making literatures to understand how social cognition can inform the study of social decision-making in a way that is consistent with both literatures. We identify gaps in both literatures-while behavioral economics largely ignores social processes that spontaneously occur upon viewing another person, social psychology has largely failed to talk about the implications of social cognition processes in an economic decision-making context-and examine the benefits of integrating social psychological theory with behavioral economic theory.
Bayesian truthing as experimental verification of C4ISR sensors
NASA Astrophysics Data System (ADS)
Jannson, Tomasz; Forrester, Thomas; Romanov, Volodymyr; Wang, Wenjian; Nielsen, Thomas; Kostrzewski, Andrew
2015-05-01
In this paper, the general methodology for experimental verification/validation of C4ISR and other sensors' performance is presented, based on Bayesian inference in general and binary sensors in particular. This methodology, called Bayesian Truthing, defines performance metrics for binary sensors in physics, optics, electronics, medicine, law enforcement, C3ISR, QC, ATR (Automatic Target Recognition), terrorism-related events, and many others. For Bayesian Truthing, the sensing medium itself is not what is truly important; it is how the decision process is affected.
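For a binary sensor, the core Bayesian update is small enough to show in full: given a prior prevalence, sensitivity, and false-alarm rate, the posterior probability of a target after an alarm follows from Bayes' rule.

```python
def posterior_given_alarm(prior, sensitivity, false_alarm_rate):
    """P(target | alarm) from P(target), P(alarm|target), P(alarm|no target)."""
    p_alarm = sensitivity * prior + false_alarm_rate * (1.0 - prior)
    return sensitivity * prior / p_alarm

# Example: rare targets punish even good sensors with many false positives.
print(posterior_given_alarm(prior=0.01, sensitivity=0.95, false_alarm_rate=0.05))
# ~0.16: most alarms are false despite 95% sensitivity.
```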
DOE Office of Scientific and Technical Information (OSTI.GOV)
2003-05-29
AUTOGEN computes collision-free sequences of robot motion instructions to permit traversal of three-dimensional space curves. Order and direction of curve traversal and orientation of the end effector are constrained by a set of manufacturing rules. Input can be provided as a collection of solid models or in terms of wireframe objects and structural cross-section definitions. Entity juxtaposition can be inferred, with appropriate structural features automatically provided. Process control is asserted as a function of position and orientation along each space curve, and is currently implemented for welding processes.
Computer-based prediction of mitochondria-targeting peptides.
Martelli, Pier Luigi; Savojardo, Castrense; Fariselli, Piero; Tasco, Gianluca; Casadio, Rita
2015-01-01
Computational methods are invaluable when protein sequences, directly derived from genomic data, need functional and structural annotation. Subcellular localization is a feature necessary for understanding the protein role and the compartment where the mature protein is active, and it is very difficult to characterize experimentally. Mitochondrial proteins encoded on the cytosolic ribosomes carry specific patterns in the precursor sequence from which it is possible to recognize a peptide targeting the protein to its final destination. Here we discuss to what extent it is feasible to develop computational methods for detecting mitochondrial targeting peptides in precursor sequences, and we benchmark our and other methods on the human mitochondrial proteins endowed with experimentally characterized targeting peptides. Furthermore, we illustrate our newly implemented web server and its usage on the whole human proteome in order to infer mitochondrial targeting peptides, their cleavage sites, and whether the targeting peptide regions contain arginine-rich recurrent motifs or not. By this, we add some other 2,800 human proteins to the 124 ones already experimentally annotated with a mitochondrial targeting peptide.
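As a toy illustration of the sequence patterns involved (not the predictor described in the paper), a crude heuristic can flag an N-terminus as presequence-like when it is arginine-rich and depleted of acidic residues:

```python
def looks_like_mts(sequence, window=30):
    """Rule-of-thumb check on the first `window` residues: many R, few D/E."""
    nterm = sequence[:window]
    arg = nterm.count("R")
    acidic = nterm.count("D") + nterm.count("E")
    return arg >= 3 and acidic <= 1

candidate = "MLRTSSLFTRRVQPSLFRNILRLQST"  # hypothetical presequence-like N-terminus
print(looks_like_mts(candidate))          # True
```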
Effect of argon ion activity on the properties of Y2O3 thin films deposited by low pressure PACVD
NASA Astrophysics Data System (ADS)
Barve, S. A.; Jagannath; Deo, M. N.; Kishore, R.; Biswas, A.; Gantayet, L. M.; Patil, D. S.
2010-10-01
Yttrium oxide thin films are deposited by a microwave electron cyclotron resonance (ECR) plasma assisted metal organic chemical vapour deposition process using an indigenously developed Y(thd)3 {(2,2,6,6-tetramethyl-3,5-heptanedionate)yttrium} precursor. Depositions were carried out at two different argon gas flow rates, keeping the precursor and oxygen gas flow rates constant. The deposited coatings are characterized by X-ray photoelectron spectroscopy (XPS), glancing angle X-ray diffraction (GIXRD) and infrared spectroscopy. Optical properties of the films are studied by spectroscopic ellipsometry. Hardness and elastic modulus of the films are measured by the load-depth sensing nanoindentation technique. Stability of the film and its adhesion to the substrate are inferred from the nanoscratch test. It is shown here that the change in argon gas flow rate changes the ionization of the gas in the microwave ECR plasma and imposes a drastic change in characteristics such as the composition, structure and mechanical properties of the deposited film.
Modeling the evolution of protein domain architectures using maximum parsimony.
Fong, Jessica H; Geer, Lewis Y; Panchenko, Anna R; Bryant, Stephen H
2007-02-09
Domains are basic evolutionary units of proteins and most proteins have more than one domain. Advances in domain modeling and collection are making it possible to annotate a large fraction of known protein sequences by a linear ordering of their domains, yielding their architecture. Protein domain architectures link evolutionarily related proteins and underscore their shared functions. Here, we attempt to better understand this association by identifying the evolutionary pathways by which extant architectures may have evolved. We propose a model of evolution in which architectures arise through rearrangements of inferred precursor architectures and acquisition of new domains. These pathways are ranked using a parsimony principle, whereby scenarios requiring the fewest number of independent recombination events, namely fission and fusion operations, are assumed to be more likely. Using a data set of domain architectures present in 159 proteomes that represent all three major branches of the tree of life allows us to estimate the history of over 85% of all architectures in the sequence database. We find that the distribution of rearrangement classes is robust with respect to alternative parsimony rules for inferring the presence of precursor architectures in ancestral species. Analyzing the most parsimonious pathways, we find 87% of architectures to gain complexity over time through simple changes, among which fusion events account for 5.6 times as many architectures as fission. Our results may be used to compute domain architecture similarities, for example, based on the number of historical recombination events separating them. Domain architecture "neighbors" identified in this way may lead to new insights about the evolution of protein function.
Chen, Gengbo; Walmsley, Scott; Cheung, Gemmy C M; Chen, Liyan; Cheng, Ching-Yu; Beuerman, Roger W; Wong, Tien Yin; Zhou, Lei; Choi, Hyungwon
2017-05-02
Data independent acquisition-mass spectrometry (DIA-MS) coupled with liquid chromatography is a promising approach for rapid, automatic sampling of MS/MS data in untargeted metabolomics. However, wide isolation windows in DIA-MS generate MS/MS spectra containing a mixed population of fragment ions together with their precursor ions. A precursor-fragment ion map in a comprehensive MS/MS spectral library is therefore crucial for relative quantification of the fragment ions uniquely representative of each precursor ion. Existing reference libraries are not sufficient for this purpose, since the fragmentation patterns of small molecules can vary across instrument setups. Here we developed a bioinformatics workflow called MetaboDIA to build customized MS/MS spectral libraries from a user's own data dependent acquisition (DDA) data and to perform MS/MS-based quantification with DIA data, thus complementing conventional MS1-based quantification. MetaboDIA also allows users to build a spectral library directly from DIA data in studies of a large sample size. Using a marine algae data set, we show that quantification of fragment ions extracted with a customized MS/MS library can provide quantitative data as reliable as direct quantification of precursor ions based on MS1 data. To test its applicability to complex samples, we applied MetaboDIA to a clinical serum metabolomics data set, building a DDA-based spectral library containing consensus spectra for 1829 compounds. Fragment ion quantification with this library on the DIA data yielded sensitive differential expression analysis.
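One core step in building such a library is merging replicate MS/MS spectra into a consensus spectrum. The sketch below shows a minimal version based on m/z binning and a majority-vote peak filter; the bin width, filtering rule and all names are assumptions for illustration, not MetaboDIA's actual procedure.

```python
import numpy as np
from collections import defaultdict

def consensus_spectrum(spectra, mz_tol=0.01):
    """Merge replicate MS/MS spectra (lists of (mz, intensity) pairs) by
    grouping peaks into m/z bins of width `mz_tol` and averaging."""
    bins = defaultdict(list)
    for spec in spectra:
        for mz, inten in spec:
            bins[round(mz / mz_tol)].append((mz, inten))
    consensus = []
    for peaks in bins.values():
        # keep a peak only if it appears in more than half of the replicates
        if len(peaks) > len(spectra) / 2:
            mzs, intens = zip(*peaks)
            consensus.append((float(np.mean(mzs)), float(np.mean(intens))))
    return sorted(consensus)
```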
Sensitivity to value-driven attention is predicted by how we learn from value.
Jahfari, Sara; Theeuwes, Jan
2017-04-01
Reward learning is known to influence the automatic capture of attention. This study examined how the rate of learning, after high- or low-value reward outcomes, can influence future transfers into value-driven attentional capture. Participants performed an instrumental learning task that was directly followed by an attentional capture task. A hierarchical Bayesian reinforcement model was used to infer individual differences in learning from high or low reward. Results showed a strong relationship between high-reward learning rates (or the weight that is put on learning after a high reward) and the magnitude of attentional capture with high-reward colors. Individual differences in learning from high or low rewards were further related to performance differences when high- or low-value distractors were present. These findings provide novel insight into the development of value-driven attentional capture by showing how information updating after desired or undesired outcomes can influence future deployments of automatic attention.
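The idea of separate learning rates for high- and low-value outcomes can be made concrete with an outcome-dependent delta-rule update. This is an assumed, minimal stand-in for the paper's hierarchical Bayesian reinforcement model; parameter names and the threshold are illustrative.

```python
def update_value(q, reward, eta_high, eta_low, high_threshold=1.0):
    """One Rescorla-Wagner-style update whose learning rate depends on
    whether the outcome counts as high- or low-value."""
    eta = eta_high if reward >= high_threshold else eta_low
    return q + eta * (reward - q)

# A learner that weights high-value outcomes heavily (eta_high >> eta_low)
# quickly builds strong value estimates for reward-predictive stimuli.
q = 0.0
for r in [1.0, 0.0, 1.0, 1.0]:
    q = update_value(q, r, eta_high=0.4, eta_low=0.1)
print(round(q, 3))
```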
The Automatic Integration of Folksonomies with Taxonomies Using Non-axiomatic Logic
NASA Astrophysics Data System (ADS)
Geldart, Joe; Cummins, Stephen
Cooperative tagging systems such as folksonomies are powerful tools when used to annotate information resources. The inherent power of folksonomies lies in their ability to allow casual users to easily contribute ad hoc, yet meaningful, resource metadata without any specialist training. Older folksonomies have begun to degrade due to their lack of internal structure and the use of many low-quality tags. This chapter describes a remedy for some of the problems associated with folksonomies. We introduce a method for the automatic integration and inference of the relationships between tags and resources in a folksonomy using non-axiomatic logic. We test this method on the CiteULike corpus of tags by comparing precision and recall between it and standard keyword search. Our results show that non-axiomatic reasoning is a promising technique for integrating tagging systems with more structured knowledge representations.
Accelerometry-based classification of human activities using Markov modeling.
Mannini, Andrea; Sabatini, Angelo Maria
2011-01-01
Accelerometers are a popular choice as body-motion sensors, partly because of their capability of extracting information that is useful for automatically inferring the physical activity in which the human subject is involved, besides their role in feeding biomechanical parameter estimators. Automatic classification of human physical activities is highly attractive for pervasive computing systems, where contextual awareness may ease human-machine interaction, and in biomedicine, where wearable sensor systems are proposed for long-term monitoring. This paper is concerned with the machine learning algorithms needed to perform the classification task. Hidden Markov Model (HMM) classifiers are studied by contrasting them with Gaussian Mixture Model (GMM) classifiers. HMMs incorporate the statistical information available on movement dynamics into the classification process, without discarding the time history of previous outcomes as GMMs do. An example of the benefits of the obtained statistical leverage is illustrated and discussed by analyzing two datasets of accelerometer time series.
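The contrast can be made concrete by looking at decoding. A Viterbi decoder, sketched below in NumPy, propagates the time history through a transition matrix, whereas a GMM-style classifier would simply take the per-frame argmax of the emission scores; this is a generic illustration, not the paper's implementation.

```python
import numpy as np

def viterbi(log_emissions, log_trans, log_init):
    """Most likely state path given per-frame emission log-likelihoods
    (T x S), an S x S transition log-probability matrix and S initial
    log-probabilities. A GMM classifier would instead label each frame
    independently as log_emissions.argmax(axis=1)."""
    T, S = log_emissions.shape
    delta = log_init + log_emissions[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans           # S x S candidate scores
        back[t] = scores.argmax(axis=0)               # best predecessor per state
        delta = scores.max(axis=0) + log_emissions[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```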
Automating usability of ATLAS Distributed Computing resources
NASA Astrophysics Data System (ADS)
Tupputi, S. A.; Di Girolamo, A.; Kouba, T.; Schovancová, J.; Atlas Collaboration
2014-06-01
The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and to allow performance-enhancing actions that improve the reliability of the system. In this perspective a crucial case is the automatic handling of outages of ATLAS computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool provides a suitable solution, by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both the task of providing global monitoring and that of taking automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of storage-area monitoring and central management at all levels. This review involved the reordering and optimization of SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board, with both dedicated metrics and views. The resulting structure allows the status of storage resources to be monitored with fine time granularity and automatic actions to be taken in foreseen cases, such as automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up problems, where and when needed. In this work we show SAAB's working principles and features. We also present the decrease in human interactions achieved within the ATLAS Computing Operations team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
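The decision logic of such a blacklisting tool can be pictured as a small state machine driven by the test history. The sketch below is an assumed, simplified illustration of the idea; the thresholds, names and exact rule are not SAAB's actual implementation.

```python
def infer_status(history, n_fail=3, n_ok=5):
    """Infer a storage-area status from a chronological list of test
    outcomes (True = passed): blacklist after n_fail consecutive
    failures, whitelist again after n_ok consecutive successes."""
    status, streak_fail, streak_ok = "online", 0, 0
    for passed in history:
        streak_fail = 0 if passed else streak_fail + 1
        streak_ok = streak_ok + 1 if passed else 0
        if status == "online" and streak_fail >= n_fail:
            status, streak_ok = "blacklisted", 0
        elif status == "blacklisted" and streak_ok >= n_ok:
            status, streak_fail = "online", 0
    return status

print(infer_status([True, False, False, False, True]))  # "blacklisted"
```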
NASA Astrophysics Data System (ADS)
Schmidt, S.; Heyns, P. S.; de Villiers, J. P.
2018-02-01
In this paper, a fault diagnostic methodology is developed which is able to detect, locate and trend gear faults under fluctuating operating conditions when only vibration data from a single transducer, measured on a healthy gearbox, are available. A two-phase feature extraction and modelling process is proposed to infer the operating condition and, based on the operating condition, to detect changes in the machine condition. Information from optimised machine and operating condition hidden Markov models is statistically combined to generate a discrepancy signal, which is post-processed to infer the condition of the gearbox. The discrepancy signal is processed and combined with statistical methods for automatic fault detection and localisation and to perform fault trending over time. The proposed methodology is validated on experimental data, and a tacholess order tracking methodology is used to enhance the cost-effectiveness of the diagnostic methodology.
Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study
Thomaz, Edison; Zhang, Cheng; Essa, Irfan; Abowd, Gregory D.
2015-01-01
Dietary self-monitoring has been shown to be an effective method for weight-loss, but it remains an onerous task despite recent advances in food journaling systems. Semi-automated food journaling can reduce the effort of logging, but often requires that eating activities be detected automatically. In this work we describe results from a feasibility study conducted in-the-wild where eating activities were inferred from ambient sounds captured with a wrist-mounted device; twenty participants wore the device during one day for an average of 5 hours while performing normal everyday activities. Our system was able to identify meal eating with an F-score of 79.8% in a person-dependent evaluation, and with 86.6% accuracy in a person-independent evaluation. Our approach is intended to be practical, leveraging off-the-shelf devices with audio sensing capabilities in contrast to systems for automated dietary assessment based on specialized sensors. PMID:25859566
Automated adaptive inference of phenomenological dynamical models.
Daniels, Bryan C; Nemenman, Ilya
2015-08-21
Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved.
Inferring Group Processes from Computer-Mediated Affective Text Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schryver, Jack C; Begoli, Edmon; Jose, Ajith
2011-02-01
Political communications in the form of unstructured text convey rich connotative meaning that can reveal underlying group social processes. Previous research has focused on sentiment analysis at the document level, but we extend this analysis to sub-document levels through a detailed analysis of affective relationships between entities extracted from a document. Instead of pure sentiment analysis, which is just positive or negative, we explore nuances of affective meaning in 22 affect categories. Our affect propagation algorithm automatically calculates and displays extracted affective relationships among entities in graphical form in our prototype (TEAMSTER), starting with seed lists of affect terms. Several useful metrics are defined to infer underlying group processes by aggregating affective relationships discovered in a text. Our approach has been validated with annotated documents from the MPQA corpus, achieving a performance gain of 74% over comparable random guessers.
Valeri, Linda; VanderWeele, Tyler J.
2012-01-01
Mediation analysis is a useful and widely employed approach to studies in the field of psychology and in the social and biomedical sciences. The contributions of this paper are several-fold. First we seek to bring the developments in mediation analysis for nonlinear models within the counterfactual framework to the psychology audience in an accessible format and compare the sorts of inferences about mediation that are possible in the presence of exposure-mediator interaction when using a counterfactual versus the standard statistical approach. Second, the work by VanderWeele and Vansteelandt (2009, 2010) is extended here to allow for dichotomous mediators and count outcomes. Third, we provide SAS and SPSS macros to implement all of these mediation analysis techniques automatically and we compare the types of inferences about mediation that are allowed by a variety of software macros. PMID:23379553
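For concreteness, the standard counterfactual results this literature builds on can be stated for a continuous mediator and outcome with exposure-mediator interaction; the notation below follows the usual regression parameterization and is illustrative rather than copied from the macros.

```latex
% Mediator and outcome regressions, with exposure-mediator interaction:
\begin{align}
  E[M \mid A=a, C=c] &= \beta_0 + \beta_1 a + \beta_2' c \\
  E[Y \mid A=a, M=m, C=c] &= \theta_0 + \theta_1 a + \theta_2 m
      + \theta_3 a m + \theta_4' c
\end{align}
% Natural direct and indirect effects for a change from a* to a:
\begin{align}
  \mathrm{NDE} &= \bigl(\theta_1 + \theta_3(\beta_0 + \beta_1 a^* + \beta_2' c)\bigr)(a - a^*) \\
  \mathrm{NIE} &= (\theta_2 \beta_1 + \theta_3 \beta_1 a)(a - a^*)
\end{align}
% With no interaction (theta_3 = 0) these reduce to the familiar
% product-of-coefficients forms theta_1 (a - a*) and theta_2 beta_1 (a - a*).
```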
RAIN: RNA–protein Association and Interaction Networks
Junge, Alexander; Refsgaard, Jan C.; Garde, Christian; Pan, Xiaoyong; Santos, Alberto; Alkan, Ferhat; Anthon, Christian; von Mering, Christian; Workman, Christopher T.; Jensen, Lars Juhl; Gorodkin, Jan
2017-01-01
Protein association networks can be inferred from a range of resources including experimental data, literature mining and computational predictions. These types of evidence are emerging for non-coding RNAs (ncRNAs) as well. However, integration of ncRNAs into protein association networks is challenging due to data heterogeneity. Here, we present a database of ncRNA–RNA and ncRNA–protein interactions and its integration with the STRING database of protein–protein interactions. These ncRNA associations cover four organisms and have been established from curated examples, experimental data, interaction predictions and automatic literature mining. RAIN uses an integrative scoring scheme to assign a confidence score to each interaction. We demonstrate that RAIN outperforms the underlying microRNA-target predictions in inferring ncRNA interactions. RAIN can be operated through an easily accessible web interface and all interaction data can be downloaded. Database URL: http://rth.dk/resources/rain PMID:28077569
Fast Bayesian Inference of Copy Number Variants using Hidden Markov Models with Wavelet Compression
Wiedenhoeft, John; Brugel, Eric; Schliep, Alexander
2016-01-01
By integrating Haar wavelets with Hidden Markov Models, we achieve drastically reduced running times for Bayesian inference using Forward-Backward Gibbs sampling. We show that this improves detection of genomic copy number variants (CNV) in array CGH experiments compared to the state-of-the-art, including standard Gibbs sampling. The method concentrates computational effort on chromosomal segments which are difficult to call, by dynamically and adaptively recomputing consecutive blocks of observations likely to share a copy number. This makes routine diagnostic use and re-analysis of legacy data collections feasible; to this end, we also propose an effective automatic prior. An open source software implementation of our method is available at http://schlieplab.org/Software/HaMMLET/ (DOI: 10.5281/zenodo.46262). This paper was selected for oral presentation at RECOMB 2016, and an abstract is published in the conference proceedings. PMID:27177143
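The reason wavelet compression pays off here is that stretches of observations sharing a copy number are nearly constant, so their Haar detail coefficients are near zero and whole blocks can be treated jointly. The generic Haar transform below illustrates this property; it is not the HaMMLET code.

```python
import numpy as np

def haar_transform(x):
    """Full Haar DWT of a length-2^k signal: repeatedly split into pairwise
    trends (averages) and details (differences). Constant stretches give
    zero details, which is what enables block-wise compression."""
    trend, details = np.asarray(x, dtype=float), []
    while len(trend) > 1:
        a, b = trend[0::2], trend[1::2]
        details.append((a - b) / np.sqrt(2))
        trend = (a + b) / np.sqrt(2)
    return trend, details[::-1]

# Two constant blocks: every detail coefficient is zero except the one
# spanning the breakpoint between the blocks.
trend, details = haar_transform([2, 2, 2, 2, 5, 5, 5, 5])
print([d.round(3).tolist() for d in details])
```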
Encoding probabilistic brain atlases using Bayesian inference.
Van Leemput, Koen
2009-06-01
This paper addresses the problem of creating probabilistic brain atlases from manually labeled training data. Probabilistic atlases are typically constructed by counting the relative frequency of occurrence of labels in corresponding locations across the training images. However, such an "averaging" approach generalizes poorly to unseen cases when the number of training images is limited, and provides no principled way of aligning the training datasets using deformable registration. In this paper, we generalize the generative image model implicitly underlying standard "average" atlases, using mesh-based representations endowed with an explicit deformation model. Bayesian inference is used to infer the optimal model parameters from the training data, leading to a simultaneous group-wise registration and atlas estimation scheme that encompasses standard averaging as a special case. We also use Bayesian inference to compare alternative atlas models in light of the training data, and show how this leads to a data compression problem that is intuitive to interpret and computationally feasible. Using this technique, we automatically determine the optimal amount of spatial blurring, the best deformation field flexibility, and the most compact mesh representation. We demonstrate, using 2-D training datasets, that the resulting models are better at capturing the structure in the training data than conventional probabilistic atlases. We also present experiments of the proposed atlas construction technique in 3-D, and show the resulting atlases' potential in fully-automated, pulse sequence-adaptive segmentation of 36 neuroanatomical structures in brain MRI scans.
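For reference, the "averaging" construction that this work generalizes fits in a few lines: the atlas probability of a label at a voxel is its relative frequency across co-registered training segmentations. The sketch below assumes integer label maps and serves only to make that baseline concrete.

```python
import numpy as np

def frequency_atlas(label_maps, n_labels):
    """Probabilistic atlas by counting: entry (l, v) is the fraction of
    training segmentations assigning label l to voxel v."""
    stack = np.stack(label_maps)  # (n_subjects, ...) integer array
    return np.stack([(stack == l).mean(axis=0) for l in range(n_labels)])

maps = [np.array([[0, 1], [1, 1]]), np.array([[0, 1], [0, 1]])]
print(frequency_atlas(maps, 2))  # per-voxel label frequencies, summing to 1
```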
NASA Astrophysics Data System (ADS)
Calderisi, Marco; Ulrici, Alessandro; Pigani, Laura; Secchi, Alberto; Seeber, Renato
2012-09-01
The EU FP7 project CUSTOM (Drugs and Precursor Sensing by Complementing Low Cost Multiple Techniques) aims at developing a new sensing system for the detection of drug precursors in gaseous samples, which includes an External Cavity-Quantum Cascade Laser Photo-Acoustic Sensor (EC-QCLPAS) that is in the final step of realisation. A simulation based on FT-IR literature spectra has therefore been carried out, for which it is of key importance to develop a proper strategy for designing the composition of the environment, as realistic and representative of different scenarios as possible. To this aim, an approach based on the combination of signal processing and experimental design techniques has been developed. The gaseous mixtures were built by adding the four drug precursor (target) species considered to the gases typically found in the atmosphere, taking also into account possible interfering species. These latter chemicals (20 interfering chemical species) were selected with customs environments in mind, and their concentrations were inferred from literature data. The spectra were first denoised by means of a Fast Wavelet Transform-based algorithm; then, a procedure based on a sigmoidal transfer function was developed to multiply the pure-component spectra by the respective concentration values, so as to correctly preserve background intensity and shape and to operate only on the absorption bands. The noise structure of the EC-QCLPAS was studied using sample spectra measured with a prototype instrument, and added to the simulated mixtures. Finally, a matrix containing 5000 simulated spectra of gaseous mixtures was built up.
Nho, Kwangsik; Shen, Li; Kim, Sungeun; Risacher, Shannon L.; West, John D.; Foroud, Tatiana; Jack, Clifford R.; Weiner, Michael W.; Saykin, Andrew J.
2010-01-01
Mild Cognitive Impairment (MCI) is thought to be a precursor to the development of early Alzheimer’s disease (AD). For early diagnosis of AD, the development of a model that is able to predict the conversion of amnestic MCI to AD is challenging. Using automatic whole-brain MRI analysis techniques and pattern classification methods, we developed a model to differentiate AD from healthy controls (HC), and then applied it to the prediction of MCI conversion to AD. Classification was performed using support vector machines (SVMs) together with a SVM-based feature selection method, which selected a set of most discriminating predictors for optimizing prediction accuracy. We obtained 90.5% cross-validation accuracy for classifying AD and HC, and 72.3% accuracy for predicting MCI conversion to AD. These analyses suggest that a classifier trained to separate HC vs. AD has substantial potential for predicting MCI conversion to AD. PMID:21347037
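A minimal sketch of this classification scheme, using scikit-learn's linear SVM with recursive feature elimination standing in for the SVM-based feature selection; the data are synthetic and every dimension and parameter here is an assumption, not the study's pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 50))                  # toy stand-in for MRI features
y = (X[:, :3].sum(axis=1) > 0).astype(int)     # synthetic AD-vs-HC labels

# Select the most discriminating features with a linear SVM, then classify.
X_sel = RFE(SVC(kernel="linear"), n_features_to_select=10).fit_transform(X, y)
print(cross_val_score(SVC(kernel="linear"), X_sel, y, cv=5).mean())
```

In a real evaluation the selection step should be nested inside the cross-validation loop to avoid optimistic bias.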
Conversion of KEGG metabolic pathways to SBGN maps including automatic layout
2013-01-01
Background Biologists make frequent use of databases containing large and complex biological networks. One popular database is the Kyoto Encyclopedia of Genes and Genomes (KEGG) which uses its own graphical representation and manual layout for pathways. While some general drawing conventions exist for biological networks, arbitrary graphical representations are very common. Recently, a new standard has been established for displaying biological processes, the Systems Biology Graphical Notation (SBGN), which aims to unify the look of such maps. Ideally, online repositories such as KEGG would automatically provide networks in a variety of notations including SBGN. Unfortunately, this is non‐trivial, since converting between notations may add, remove or otherwise alter map elements so that the existing layout cannot be simply reused. Results Here we describe a methodology for automatic translation of KEGG metabolic pathways into the SBGN format. We infer important properties of the KEGG layout and treat these as layout constraints that are maintained during the conversion to SBGN maps. Conclusions This allows for the drawing and layout conventions of SBGN to be followed while creating maps that are still recognizably the original KEGG pathways. This article details the steps in this process and provides examples of the final result. PMID:23953132
Finding geospatial pattern of unstructured data by clustering routes
NASA Astrophysics Data System (ADS)
Boustani, M.; Mattmann, C. A.; Ramirez, P.; Burke, W.
2016-12-01
Today the majority of generated data has a geospatial context, whether as explicit attributes such as a latitude or longitude or the name of a location, or as content that can be cross-referenced using other means such as an external gazetteer or location service. Our research is interested in exploiting geospatial location and context in unstructured data such as that found on the web in HTML pages, images, videos, documents, and other areas, and in structured information repositories found on intranets, in scientific environments, and otherwise. We are working together on the DARPA MEMEX project to exploit open source software tools such as the Lucene Geo Gazetteer, Apache Tika, Apache Lucene, and Apache OpenNLP to automatically extract, and make meaning out of, geospatial information. In particular, we are interested in unstructured descriptors, e.g., a phone number or a named entity, and the ability to automatically learn geospatial paths related to these descriptors. For example, a particular phone number may represent an entity that travels on a monthly basis, according to patterns that are sometimes easily identifiable and sometimes more difficult to track. We will present a set of automatic techniques to extract descriptors and then to geospatially infer their paths across unstructured data.
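As a toy illustration of the last step, the sketch below groups extracted (descriptor, time, location) tuples into a time-ordered path per descriptor; all names are hypothetical and the real pipeline rests on the extraction tools listed above.

```python
from collections import defaultdict

def infer_paths(records):
    """records: iterable of (descriptor, timestamp, lat, lon) tuples.
    Returns each descriptor's sightings sorted by timestamp."""
    paths = defaultdict(list)
    for descriptor, ts, lat, lon in records:
        paths[descriptor].append((ts, lat, lon))
    return {d: sorted(pts) for d, pts in paths.items()}

records = [("555-0100", "2016-02-01", 34.05, -118.24),
           ("555-0100", "2016-01-01", 37.77, -122.42)]
print(infer_paths(records))  # the January sighting precedes the February one
```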
Automatic integration of social information in emotion recognition.
Mumenthaler, Christian; Sander, David
2015-04-01
This study investigated the automaticity of the influence of social inference on emotion recognition. Participants were asked to recognize dynamic facial expressions of emotion (fear or anger in Experiment 1 and blends of fear and surprise or of anger and disgust in Experiment 2) in a target face presented at the center of a screen while a subliminal contextual face appearing in the periphery expressed an emotion (fear or anger) or not (neutral) and either looked at the target face or not. Results of Experiment 1 revealed that recognition of the target emotion of fear was improved when a subliminal angry contextual face gazed toward, rather than away from, the fearful face. We replicated this effect in Experiment 2, in which facial expression blends of fear and surprise were more often and more rapidly categorized as expressing fear when the subliminal contextual face expressed anger and gazed toward, rather than away from, the target face. With the contextual face appearing for 30 ms in total, including only 10 ms of emotion expression, and being immediately masked, our data provide the first evidence that social influence on emotion recognition can occur automatically.
Method and system for real-time analysis of biosensor data
Greenbaum, Elias; Rodriguez, Jr., Miguel
2014-08-19
A method of biosensor-based detection of toxins includes the steps of providing a fluid to be analyzed having a plurality of photosynthetic organisms therein, wherein chemical, biological or radiological agents alter a nominal photosynthetic activity of the photosynthetic organisms. At a first time a measured photosynthetic activity curve is obtained from the photosynthetic organisms. The measured curve is automatically compared to a reference photosynthetic activity curve to determine differences therebetween. The presence of the chemical, biological or radiological agents, or precursors thereof, are then identified if present in the fluid using the differences.
Design and implementation of the tree-based fuzzy logic controller.
Liu, B D; Huang, C Y
1997-01-01
In this paper, a tree-based approach is proposed for designing the fuzzy logic controller. Based on the proposed methodology, the fuzzy logic controller has the following merits: the fuzzy control rules can be extracted automatically from the input-output data of the system, and the extraction can be done in one pass; owing to the fuzzy tree inference structure, the search spaces of the fuzzy inference process are greatly reduced; the inference process can be simplified to a one-dimensional matrix operation thanks to the fuzzy tree approach; and the controller has regular and modular properties, so it is easy to implement in hardware. Furthermore, the proposed fuzzy tree approach has been applied to the design of a color reproduction system to verify the methodology. The color reproduction system is mainly used to obtain, through the printer, a color image identical to the original one. In addition to software simulation, an FPGA is used to implement a prototype hardware system for real-time application. Experimental results show that the effect of color correction is quite good and that the prototype hardware system operates correctly at a 30 MHz clock rate.
PREMER: a Tool to Infer Biological Networks.
Villaverde, Alejandro F; Becker, Kolja; Banga, Julio R
2017-10-04
Inferring the structure of unknown cellular networks is a main challenge in computational biology. Data-driven approaches based on information theory can determine the existence of interactions among network nodes automatically. However, the elucidation of certain features, such as distinguishing between direct and indirect interactions or determining the direction of a causal link, requires estimating information-theoretic quantities in a multidimensional space. This can be a computationally demanding task, which acts as a bottleneck for the application of elaborate algorithms to large-scale network inference problems. The computational cost of such calculations can be alleviated by the use of compiled programs and parallelization. To this end we have developed PREMER (Parallel Reverse Engineering with Mutual information & Entropy Reduction), a software toolbox that can run in parallel and sequential environments. It uses information theoretic criteria to recover network topology and determine the strength and causality of interactions, and allows incorporating prior knowledge, imputing missing data, and correcting outliers. PREMER is a free, open source software tool that does not require any commercial software. Its core algorithms are programmed in FORTRAN 90 and implement OpenMP directives. It has user interfaces in Python and MATLAB/Octave, and runs on Windows, Linux and OSX (https://sites.google.com/site/premertoolbox/).
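The baseline criterion such tools start from, thresholded pairwise mutual information, fits in a few lines. The sketch below uses a simple histogram estimator and is only that baseline; PREMER's entropy-reduction, causality and parallelization machinery are not reproduced here.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of the mutual information (in nats) between
    two 1-D variables."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

def mi_network(data, threshold=0.1):
    """data: variables x samples. Connect pairs whose MI exceeds the
    threshold; direct vs. indirect links are not distinguished here."""
    n = data.shape[0]
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            adj[i, j] = adj[j, i] = mutual_information(data[i], data[j]) > threshold
    return adj
```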
Active learning of cortical connectivity from two-photon imaging data.
Bertrán, Martín A; Martínez, Natalia L; Wang, Ye; Dunson, David; Sapiro, Guillermo; Ringach, Dario
2018-01-01
Understanding how groups of neurons interact within a network is a fundamental question in system neuroscience. Instead of passively observing the ongoing activity of a network, we can typically perturb its activity, either by external sensory stimulation or directly via techniques such as two-photon optogenetics. A natural question is how to use such perturbations to identify the connectivity of the network efficiently. Here we introduce a method to infer sparse connectivity graphs from in-vivo, two-photon imaging of population activity in response to external stimuli. A novel aspect of the work is the introduction of a recommended distribution, incrementally learned from the data, to optimally refine the inferred network. Unlike existing system identification techniques, this "active learning" method automatically focuses its attention on key undiscovered areas of the network, instead of targeting global uncertainty indicators like parameter variance. We show how active learning leads to faster inference while, at the same time, providing confidence intervals for the network parameters. We present simulations on artificial small-world networks to validate the methods and apply the method to real data. Analysis of the frequency of recovered motifs shows that cortical networks are consistent with a small-world topology model.
Shelly, David R.
2009-01-01
Earthquake predictability depends, in part, on the degree to which sudden slip is preceded by slow aseismic slip. Recently, observations of deep tremor have enabled inferences of deep slow slip even when detection by other means is not possible, but these data are limited to certain areas and mostly the last decade. The region near Parkfield, California, provides a unique convergence of several years of high-quality tremor data bracketing a moderate earthquake, the 2004 magnitude 6.0 event. Here, I present detailed observations of tectonic tremor from mid-2001 through 2008 that indicate deep fault slip both before and after the Parkfield earthquake that cannot be detected with surface geodetic instruments. While there is no obvious short-term precursor, I find unidirectional tremor migration accompanied by elevated tremor rates in the 3 months prior to the earthquake, which suggests accelerated creep on the fault ∼16 km beneath the eventual earthquake hypocenter.
Relevance and Reason Relations.
Skovgaard-Olsen, Niels; Singmann, Henrik; Klauer, Karl Christoph
2017-05-01
This paper examines precursors and consequents of perceived relevance of a proposition A for a proposition C. In Experiment 1, we test Spohn's (2012) assumption that ∆P = P(C|A) - P(C|~A) is a good predictor of ratings of perceived relevance and reason relations, and we examine whether it is a better predictor than the difference measure (P(C|A) - P(C)). In Experiment 2, we examine the effects of relevance on probabilistic coherence in Cruz, Baratgin, Oaksford, and Over's (2015) uncertain "and-to-if" inferences. The results suggest that ∆P predicts perceived relevance and reason relations better than the difference measure and that participants are either less probabilistically coherent in "and-to-if" inferences than initially assumed or that they do not follow P(if A, then C) = P(C|A) ("the Equation"). Results are discussed in light of recent results suggesting that the Equation may not hold under conditions of irrelevance or negative relevance. Copyright © 2016 Cognitive Science Society, Inc.
Study of lanthanum aluminate for cost effective electrolyte material for SOFC
NASA Astrophysics Data System (ADS)
Verma, O. N.; Shahi, A. K.; Singh, P.
2018-05-01
The perovskite-type electrolyte material LaAlO3 (abbreviated LAO) has been prepared by a simple auto-combustion synthesis using lanthanum nitrate and aluminium nitrate salts as precursors and citric acid as the fuel. XRD analysis reveals that the as-synthesized material exhibits a single phase with rhombohedral structure. The measured and theoretical densities are compared and discussed. The electrical conductivity of LAO increases with temperature, which reflects the increased mobility of oxide ions. The major contribution to the significant ionic conductivity of LAO can be attributed to the grain-boundary resistance.
NASA Technical Reports Server (NTRS)
Butera, M. K.
1981-01-01
An automatic technique has been developed to measure marsh plant production by inference from a species classification derived from Landsat MSS data. A separate computer technique has been developed to calculate the transport path length of detritus and nutrients from their point of origin in the marsh to the shoreline from Landsat data. A nutrient availability indicator, the ratio of production to transport path length, was derived for each marsh-identified Landsat cell. The use of a data base compatible with the Landsat format facilitated data handling and computations.
NASA Technical Reports Server (NTRS)
Mohapatra, R. K.; Murty, S. V. S.
2002-01-01
Chemical and (oxygen) isotopic compositions of SNC meteorites have been used by a number of workers to infer the nature of precursor materials for the accretion of Mars. The idea that chondritic materials played a key role in the formation of Mars has been the central assumption in these works. Wanke and Dreibus have proposed a mixture of two types of chondritic materials, differing in oxygen fugacity but having CI type bulk chemical composition for the nonvolatile elements, for Mars' precursor. But a number of studies based on high pressure and temperature melting experiments do not favor a CI type bulk planet composition for Mars, as it predicts a bulk planet Fe/Si ratio much higher than that reported from the recent Pathfinder data. Oxygen forms the bulk of Mars (approximately 40% by wt.) and might provide clues to the type of materials that formed Mars. But models based on the oxygen isotopic compositions of SNC meteorites predict three different mixtures of precursor materials for Mars: 90% H + 10% CM, 85% H + 11% CV + 4% CI and 45% EH + 55% H. As each of these models has been shown to be consistent with the bulk geophysical properties (such as mean density, and moment of inertia factor) of Mars, the nature of the material that accreted to form Mars remains ambiguous.
NASA Astrophysics Data System (ADS)
Akhoondzadeh, M.
2013-09-01
Anomaly detection is extremely important for forecasting the date, location and magnitude of an impending earthquake. In this paper, an Adaptive Network-based Fuzzy Inference System (ANFIS) is proposed to detect the thermal and Total Electron Content (TEC) anomalies around the time of the Varzeghan, Iran, (Mw = 6.4) earthquake that struck NW Iran on 11 August 2012. ANFIS is a well-known hybrid neuro-fuzzy network for modeling non-linear complex systems. The thermal and TEC anomalies detected using the proposed method are also compared with those obtained by classical and intelligent methods, including Interquartile, Auto-Regressive Integrated Moving Average (ARIMA), Artificial Neural Network (ANN) and Support Vector Machine (SVM) methods. The dataset, comprising Aqua-MODIS Land Surface Temperature (LST) night-time snapshot images and Global Ionospheric Maps (GIM), spans 62 days. If the difference between the value predicted by the ANFIS method and the observed value exceeds a pre-defined threshold, then, in the absence of non-seismic effective parameters, the observed value can be regarded as a precursory anomaly. For both precursors, LST and TEC, the ANFIS method agrees very well with the other implemented classical and intelligent methods, indicating that ANFIS is capable of detecting earthquake anomalies. The applied methods detected anomalous occurrences 1 and 2 days before the earthquake. The detection of the thermal and TEC anomalies thus derives its credibility from the overall efficiency and potential of the five integrated methods.
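The thresholding rule described can be written generically: flag an observation as a precursory anomaly when it deviates from the model prediction by more than a chosen margin. The sketch below uses a residual-standard-deviation margin as an assumed, illustrative choice; it applies to any of the predictors mentioned (ANFIS, ARIMA, ANN, SVM).

```python
import numpy as np

def precursor_anomalies(observed, predicted, k=2.0):
    """Indices of days whose |observed - predicted| residual exceeds
    k standard deviations of the residuals."""
    residuals = np.asarray(observed, float) - np.asarray(predicted, float)
    return np.flatnonzero(np.abs(residuals) > k * residuals.std())

obs = [10.0, 10.2, 9.9, 14.5, 10.1]
pred = [10.0, 10.1, 10.0, 10.2, 10.0]
print(precursor_anomalies(obs, pred))  # flags the fourth day
```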
Constraints for the Progenitor Masses of Historic Core-collapse Supernovae
NASA Astrophysics Data System (ADS)
Williams, Benjamin F.; Hillis, Tristan J.; Murphy, Jeremiah W.; Gilbert, Karoline; Dalcanton, Julianne J.; Dolphin, Andrew E.
2018-06-01
We age-date the stellar populations associated with 12 historic nearby core-collapse supernovae (CCSNe) and two supernova impostors; from these ages, we infer their initial masses and associated uncertainties. To do this, we have obtained new Hubble Space Telescope imaging covering these CCSNe. Using these images, we measure resolved stellar photometry for the stars surrounding the locations of the SNe. We then fit the color–magnitude distributions of this photometry with stellar evolution models to determine the ages of any young existing populations present. From these age distributions, we infer the most likely progenitor masses for all of the SNe in our sample. We find ages between 4 and 50 Myr, corresponding to masses from 7.5 to 59 solar masses. There were no SNe that lacked a local young population. Our sample contains four SNe Ib/c; their masses have a wide range of values, suggesting that the progenitors of stripped-envelope SNe are binary systems. Both impostors have masses constrained to be ≲7.5 solar masses. In cases with precursor imaging measurements, we find that age-dating and precursor imaging give consistent progenitor masses. This consistency implies that, although the uncertainties for each technique are significantly different, the results of both are reliable to the measured uncertainties. We combine these new measurements with those from our previous work and find that the distribution of 25 core-collapse SNe progenitor masses is consistent with a standard Salpeter power-law mass function, no upper mass cutoff, and an assumed minimum mass for core-collapse of 7.5 M⊙. The distribution is consistent with a minimum mass <9.5 M⊙.
Past and present cosmic structure in the SDSS DR7 main sample
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jasche, J.; Leclercq, F.; Wandelt, B.D., E-mail: jasche@iap.fr, E-mail: florent.leclercq@polytechnique.org, E-mail: wandelt@iap.fr
2015-01-01
We present a chrono-cosmography project, aiming at the inference of the four dimensional formation history of the observed large scale structure from its origin to the present epoch. To do so, we perform a full-scale Bayesian analysis of the northern galactic cap of the Sloan Digital Sky Survey (SDSS) Data Release 7 main galaxy sample, relying on a fully probabilistic, physical model of the non-linearly evolved density field. Besides inferring initial conditions from observations, our methodology naturally and accurately reconstructs non-linear features at the present epoch, such as walls and filaments, corresponding to high-order correlation functions generated by late-time structure formation. Our inference framework self-consistently accounts for typical observational systematic and statistical uncertainties such as noise, survey geometry and selection effects. We further account for luminosity dependent galaxy biases and automatic noise calibration within a fully Bayesian approach. As a result, this analysis provides highly-detailed and accurate reconstructions of the present density field on scales larger than ∼ 3 Mpc/h, constrained by SDSS observations. This approach also leads to the first quantitative inference of plausible formation histories of the dynamic large scale structure underlying the observed galaxy distribution. The results described in this work constitute the first full Bayesian non-linear analysis of the cosmic large scale structure with the demonstrated capability of uncertainty quantification. Some of these results will be made publicly available along with this work. The level of detail of inferred results and the high degree of control on observational uncertainties pave the path towards high precision chrono-cosmography, the subject of simultaneously studying the dynamics and the morphology of the inhomogeneous Universe.
Kernel learning at the first level of inference.
Cawley, Gavin C; Talbot, Nicola L C
2014-05-01
Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
Yang, Sichao; Jiang, Yun; Xu, Liqing; Shiratake, Katsuhiro; Luo, Zhengrong; Zhang, Qinglin
2016-11-01
Persimmon fruits accumulate a large amount of proanthocyanidins (PAs) in "tannin cells" during development, which cause the sensation of astringency due to coagulation of oral proteins. Pollination-constant non-astringent (PCNA) is a spontaneous mutant persimmon phenotype that loses its astringency naturally on the tree at maturity, while the more common non-PCNA fruits remain rich in PAs until they are fully ripened. Here, we isolated a DkMATE1 gene encoding a Multidrug And Toxic Compound Extrusion (MATE) family protein from the Chinese PCNA (C-PCNA) 'Eshi 1'. Expression patterns of DkMATE1 were positively correlated with the accumulation of PAs in different types of persimmon fruits during fruit development. An analysis of the inferred amino acid sequence and phylogenetic relationships indicated that DkMATE1 is a putative PA precursor transporter, and subcellular localization assays revealed that DkMATE1 is localized in the vacuolar membrane. Ectopic expression of DkMATE1 in the Arabidopsis tt12 mutant supported that DkMATE1 could complement its biological function of transporting epicatechin 3'-O-glucoside, a PA precursor, from the cytoplasm to the vacuole. Furthermore, transient over-expression and silencing of DkMATE1 in 'Mopanshi' persimmon leaves resulted in a significant increase and decrease in PA content, respectively. The analysis of cis-elements in the DkMATE1 promoter region indicated that DkMATE1 might be regulated by DkMYB4, a well-known regulator of PA biosynthesis in persimmon. Overall, our results show that DkMATE1 may be an essential PA precursor membrane transporter that plays an important role in PA biosynthesis in persimmon. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Learning abstract visual concepts via probabilistic program induction in a Language of Thought.
Overlan, Matthew C; Jacobs, Robert A; Piantadosi, Steven T
2017-11-01
The ability to learn abstract concepts is a powerful component of human cognition. It has been argued that variable binding is the key element enabling this ability, but the computational aspects of variable binding remain poorly understood. Here, we address this shortcoming by formalizing the Hierarchical Language of Thought (HLOT) model of rule learning. Given a set of data items, the model uses Bayesian inference to infer a probability distribution over stochastic programs that implement variable binding. Because the model makes use of symbolic variables as well as Bayesian inference and programs with stochastic primitives, it combines many of the advantages of both symbolic and statistical approaches to cognitive modeling. To evaluate the model, we conducted an experiment in which human subjects viewed training items and then judged which test items belong to the same concept as the training items. We found that the HLOT model provides a close match to human generalization patterns, significantly outperforming two variants of the Generalized Context Model, one variant based on string similarity and the other based on visual similarity using features from a deep convolutional neural network. Additional results suggest that variable binding happens automatically, implying that binding operations do not add complexity to people's hypothesized rules. Overall, this work demonstrates that a cognitive model combining symbolic variables with Bayesian inference and stochastic program primitives provides a new perspective for understanding people's patterns of generalization. Copyright © 2017 Elsevier B.V. All rights reserved.
Precursors of Reading Difficulties in Czech and Slovak Children At-Risk of Dyslexia.
Moll, Kristina; Thompson, Paul A; Mikulajova, Marina; Jagercikova, Zuzana; Kucharska, Anna; Franke, Helena; Hulme, Charles; Snowling, Margaret J
2016-05-01
Children with preschool language difficulties are at high risk of literacy problems; however, the nature of the relationship between delayed language development and dyslexia is not understood. Three hundred eight Slovak and Czech children were recruited into three groups: family risk of dyslexia, speech/language difficulties and controls, and were assessed three times from kindergarten until Grade 1. There was a twofold increase in probability of reading problems in each risk group. Precursors of 'dyslexia' included difficulties in oral language and code-related skills (phoneme awareness, letter-knowledge and rapid automatized naming); poor performance in phonological memory and vocabulary was observed in both affected and unaffected high-risk peers. A two-group latent variable path model shows that early language skills predict code-related skills, which in turn predict literacy skills. Findings suggest that dyslexia in Slavic languages has its origins in early language deficits, and children who succumb to reading problems show impaired code-related skills before the onset of formal reading instruction. Copyright © 2016 John Wiley & Sons, Ltd.
Handfield, Louis-François; Chong, Yolanda T.; Simmons, Jibril; Andrews, Brenda J.; Moses, Alan M.
2013-01-01
Protein subcellular localization has been systematically characterized in budding yeast using fluorescently tagged proteins. Based on the fluorescence microscopy images, subcellular localization of many proteins can be classified automatically using supervised machine learning approaches that have been trained to recognize predefined image classes based on statistical features. Here, we present an unsupervised analysis of protein expression patterns in a set of high-resolution, high-throughput microscope images. Our analysis is based on 7 biologically interpretable features which are evaluated on automatically identified cells, and whose cell-stage dependency is captured by a continuous model for cell growth. We show that it is possible to identify most previously identified localization patterns in a cluster analysis based on these features and that similarities between the inferred expression patterns contain more information about protein function than can be explained by a previous manual categorization of subcellular localization. Furthermore, the inferred cell stage associated with each fluorescence measurement allows us to visualize large groups of proteins entering the bud at specific stages of bud growth. These correspond to proteins localized to organelles, revealing that the organelles must be entering the bud in a stereotypical order. We also identify and organize a smaller group of proteins that show subtle differences in the way they move around the bud during growth. Our results suggest that biologically interpretable features based on explicit models of cell morphology will yield unprecedented power for pattern discovery in high-resolution, high-throughput microscopy images. PMID:23785265
Improving Visibility of Stereo-Radiographic Spine Reconstruction with Geometric Inferences.
Kumar, Sampath; Nayak, K Prabhakar; Hareesha, K S
2016-04-01
Complex deformities of the spine, like scoliosis, are evaluated more precisely using stereo-radiographic 3D reconstruction techniques. Primarily, these use six stereo-corresponding points available on the vertebral body for the 3D reconstruction of each vertebra. The wireframe structure obtained in this process visualizes poorly and is hence difficult to use for diagnosis. In this paper, a novel method is proposed to improve the visibility of this wireframe structure using a deformation of a generic spine model in accordance with the 3D-reconstructed corresponding points. Geometric inferences, such as vertebral orientations, are then automatically extracted from the radiographs to improve the visibility of the 3D model. Biplanar radiographs were acquired from five scoliotic subjects on a specifically designed calibration bench. The stereo-corresponding point reconstruction method is used to build six-point wireframe vertebral structures and thus the entire spine model. Using the 3D spine midline and automatically extracted vertebral orientation features, a more realistic 3D spine model is generated. To validate the method, the 3D spine model is back-projected onto the biplanar radiographs and the error difference is computed. This difference is within the error limits reported in the literature, while the proposed approach is simple and economical. The proposed method does not require additional corresponding points or image features to improve the visibility of the model, and hence reduces the computational complexity. Expensive 3D digitizers and vertebral CT scan models are also excluded from this study. Thus, the visibility of stereo-corresponding point reconstruction is improved to obtain a low-cost spine model for a better diagnosis of spinal deformities.
Automatic Detection of Landslides at Stromboli Volcano
NASA Astrophysics Data System (ADS)
Giudicepietro, F.; Esposito, A. M.; D'Auria, L.; Peluso, R.; Martini, M.
2011-12-01
In this work we present an automatic system for landslide detection at Stromboli volcano that has proved effective both during the 2007 effusive eruption and in the recent (2 August 2011) volcanic activity. The study of landslides at Stromboli is important because they can be considered short-term precursors of effusive eruptions, as seen during the 2007 eruption, and because they make it possible to improve the monitoring of the gravitational instabilities of the Sciara del Fuoco flank. The proposed system uses a two-class MLP (Multi Layer Perceptron) neural network to discriminate landslides from the other seismic signals usually recorded at Stromboli, such as explosion-quakes and volcanic tremor. To train and test the network we used a dataset of 537 signals, including 267 landslides and 270 other events (130 explosion-quakes and 140 tremor signals). The network performance is 98.7% when averaged over different network configurations, and 99.5% for the best configuration. Based on the neural network response, the automatic system calculates a Landslide Percentage Index (LPI), defined from the number of signals identified as landslides by the network in a given time interval, in order to recognize anomalies in the landslide rate. This system was sensitive to the signals produced by the flow of the lava front during a recent mild effusive episode on the Sciara del Fuoco slope.
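A two-class MLP of this kind can be set up in a few lines; the sketch below uses scikit-learn with synthetic stand-in features, and the architecture and feature dimensionality are assumptions rather than the system's actual configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(537, 20))            # toy stand-in for 537 event features
y = np.r_[np.ones(267), np.zeros(270)]    # 267 landslides, 270 other events

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
net.fit(X_tr, y_tr)
print(net.score(X_te, y_te))  # chance-level here, since the features are random
```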
van Velsen, Evert F S; Niessen, Wiro J; de Weert, Thomas T; de Monyé, Cécile; van der Lugt, Aad; Meijering, Erik; Stokking, Rik
2007-07-01
Vessel image analysis is crucial when considering therapeutic options for (cardio-)vascular diseases. Our method, VAMPIRE (Vascular Analysis using Multiscale Paths Inferred from Ridges and Edges), involves two parts: automatic definition of a lumen path from a user-supplied start- and endpoint, which is used for initialization, and automatic segmentation of the vessel lumen in computed tomographic angiography (CTA) images. Both parts are based on the detection of vessel-like structures by analyzing intensity, edge, and ridge information. A multi-observer evaluation study was performed to compare VAMPIRE with a conventional method on the CTA data of 15 patients with carotid artery stenosis. In addition to the start- and endpoint, the two radiologists required on average 2.5 (SD: 1.9) additional points to define a lumen path when using the conventional method, and 0.1 (SD: 0.3) when using VAMPIRE. The segmentation results were quantitatively evaluated using Similarity Indices, which were slightly lower between VAMPIRE and the two radiologists (0.90 and 0.88, respectively) compared with the Similarity Index between the radiologists (0.92). The evaluation shows that the improved definition of a lumen path requires minimal user interaction, and that using this path as initialization leads to good automatic lumen segmentation results.
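Similarity Indices of this kind are commonly the Dice coefficient, SI = 2|A∩B|/(|A|+|B|); assuming that definition, a minimal sketch:

    import numpy as np

    def similarity_index(seg_a, seg_b):
        """Dice similarity between two binary segmentation masks:
        SI = 2|A ∩ B| / (|A| + |B|)."""
        a = np.asarray(seg_a, dtype=bool)
        b = np.asarray(seg_b, dtype=bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    # Two largely overlapping toy lumen masks.
    a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
    b = np.zeros((10, 10), dtype=bool); b[3:9, 2:8] = True
    print(round(similarity_index(a, b), 2))  # 0.83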
Extending gene ontology with gene association networks.
Peng, Jiajie; Wang, Tao; Wang, Jixuan; Wang, Yadong; Chen, Jin
2016-04-15
Gene ontology (GO) is a widely used resource to describe the attributes of gene products. However, automatic GO maintenance remains difficult because of the complex logical reasoning and the biological knowledge required, which are not explicitly represented in the GO. Existing studies either construct the whole GO based on network data or only infer relations between existing GO terms. None is designed to add new terms automatically to the existing GO. We propose a new algorithm, 'GOExtender', to efficiently identify all the connected gene pairs labeled by the same parent GO terms. GOExtender is used to predict new GO terms from biological network data and connect them to the existing GO. Evaluation tests on the biological process and cellular component categories of different GO releases showed that GOExtender can extend new GO terms automatically based on the biological network. Furthermore, we applied GOExtender to a recent release of GO and discovered new GO terms with strong support from the literature. Software and a supplementary document are available at www.msu.edu/%7Ejinchen/GOExtender. Contact: jinchen@msu.edu or ydwang@hit.edu.cn. Supplementary data are available at Bioinformatics online.
Seo, Joo-Hyun; Park, Jihyang; Kim, Eun-Mi; Kim, Juhan; Joo, Keehyoung; Lee, Jooyoung; Kim, Byung-Gee
2014-02-01
Sequence subgrouping for a given sequence set can enable various informative tasks such as the functional discrimination of sequence subsets and the functional inference of unknown sequences. Because an identity threshold for sequence subgrouping may vary according to the given sequence set, it is highly desirable to construct a robust subgrouping algorithm that automatically identifies an optimal identity threshold and generates subgroups for a given sequence set. To this end, an automatic sequence subgrouping method, named 'Subgrouping Automata' (SA), was constructed. First, a tree analysis module analyzes the structure of the tree and calculates all possible subgroups in each node. A sequence similarity analysis module then calculates the average sequence similarity for all subgroups in each node. A representative sequence generation module finds a representative sequence for each subgroup using profile analysis and self-scoring. For all nodes, average sequence similarities are calculated, and SA searches for the node showing the statistically maximal increase in sequence similarity using Student's t-value. The node showing the maximum t-value, which gives the most significant difference in average sequence similarity between two adjacent nodes, is determined as the optimum subgrouping node in the phylogenetic tree. Further analysis showed that the optimum subgrouping node from SA prevents under-subgrouping and over-subgrouping.
Detection of buried magnetic objects by a SQUID gradiometer system
NASA Astrophysics Data System (ADS)
Meyer, Hans-Georg; Hartung, Konrad; Linzen, Sven; Schneider, Michael; Stolz, Ronny; Fried, Wolfgang; Hauspurg, Sebastian
2009-05-01
We present a magnetic detection system based on superconducting gradiometric sensors (SQUID gradiometers). The system provides uniquely fast mapping of large areas with high resolution of both the magnetic field gradient and the local position. A main part of this work is the localization and classification of magnetic objects in the ground by automatic interpretation of the geomagnetic field gradients measured by the SQUID system. According to specific features, the field is decomposed into segments, which allow inferences about possible objects in the ground. The global consideration of object-describing properties and their optimization using error minimization methods allows the reconstruction of superimposed features and the detection of buried objects. The analysis system for the measured geomagnetic fields works fully automatically. Given a surface map of measured gradients, the algorithm determines, within numerical limits, the absolute position of objects, including their depth, with sub-pixel accuracy, and allows arbitrary position and attitude of the sources. Several SQUID gradiometer data sets were used to show the applicability of the analysis algorithm.
Automatic detection of echolocation clicks based on a Gabor model of their waveform.
Madhusudhana, Shyam; Gavrilov, Alexander; Erbe, Christine
2015-06-01
Prior research has shown that echolocation clicks of several species of terrestrial and marine fauna can be modelled as Gabor-like functions. Here, a system is proposed for the automatic detection of a variety of such signals. By means of mathematical formulation, it is shown that the output of the Teager-Kaiser Energy Operator (TKEO) applied to Gabor-like signals can be approximated by a Gaussian function. Based on these inferences, a detection algorithm involving post-processing of the TKEO outputs is presented. The ratio of the outputs of two moving-average filters, a Gaussian and a rectangular filter, is shown to be an effective detection parameter. Detector performance is assessed using synthetic and real recordings (taken from the MobySound database). The detection method is shown to work readily with a variety of echolocation clicks and in various recording scenarios. The system exhibits low computational complexity and operates several times faster than real time. Performance comparisons are made to other publicly available detectors, including PAMGuard.
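The TKEO and the two-filter ratio at the heart of the detector are compact enough to sketch directly; filter lengths below are illustrative assumptions rather than the paper's tuned values:

    import numpy as np

    def tkeo(x):
        """Discrete Teager-Kaiser Energy Operator:
        psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
        x = np.asarray(x, dtype=float)
        psi = np.zeros_like(x)
        psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
        return psi

    def detection_parameter(x, n_gauss=31, n_rect=255):
        """Ratio of a Gaussian-weighted to a rectangular moving average
        of the TKEO output; it peaks where a Gabor-like click occurs."""
        e = tkeo(x)
        t = np.arange(n_gauss) - n_gauss // 2
        g = np.exp(-0.5 * (t / (n_gauss / 6.0)) ** 2)
        g /= g.sum()
        num = np.convolve(e, g, mode="same")
        den = np.convolve(e, np.ones(n_rect) / n_rect, mode="same") + 1e-12
        return num / den

    # Synthetic Gabor-like click embedded in noise.
    rng = np.random.default_rng(0)
    t = np.linspace(-1, 1, 2000)
    click = np.exp(-(t / 0.01) ** 2) * np.cos(2 * np.pi * 300 * t)
    x = 0.05 * rng.standard_normal(t.size) + click
    print(detection_parameter(x).argmax())  # near sample 1000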
Hoehndorf, Robert; Dumontier, Michel; Oellrich, Anika; Rebholz-Schuhmann, Dietrich; Schofield, Paul N; Gkoutos, Georgios V
2011-01-01
Researchers design ontologies as a means to accurately annotate and integrate experimental data across heterogeneous and disparate databases and knowledge bases. Formal ontologies make the semantics of terms and relations explicit such that automated reasoning can be used to verify the consistency of knowledge. However, many biomedical ontologies do not sufficiently formalize the semantics of their relations and are therefore limited with respect to automated reasoning for large-scale data integration and knowledge discovery. We describe a method to improve automated reasoning over biomedical ontologies and identify several thousand contradictory class definitions. Our approach aligns terms in biomedical ontologies with foundational classes in a top-level ontology and formalizes composite relations as class expressions. We describe the semi-automated repair of contradictions and demonstrate expressive queries over interoperable ontologies. Our work forms an important cornerstone for data integration, automatic inference and knowledge discovery based on formal representations of knowledge. Our results and analysis software are available at http://bioonto.de/pmwiki.php/Main/ReasonableOntologies.
NASA Astrophysics Data System (ADS)
Larour, E. Y.; Khazendar, A.; Seroussi, H. L.; Schlegel, N.; Csatho, B. M.; Schenk, A. F.; Rignot, E. J.; Morlighem, M.
2014-12-01
Altimetry signals from missions such as ICESat-1, CryoSat and EnviSat, as well as from altimeters onboard Operation IceBridge, provide vital insights into processes such as surface mass balance, mass transport and ice-flow dynamics. Historically, however, ice-flow models have focused on assimilating surface velocities from satellite-based radar observations to infer properties such as basal friction or the position of the bedrock. Here, we leverage a new methodology based on automatic differentiation of the Ice Sheet System Model to assimilate surface altimetry data into a reconstruction of the past decade of ice flow in the North Greenland area. We infer corrections to boundary conditions such as basal friction and surface mass balance, as well as corrections to the ice hardness, to best match the observed altimetry record. We compare these corrections between glaciers such as Petermann Glacier, 79 North and Zachariae Isstrom. The altimetry signals exhibit very different patterns between East and West, which translate into very different signatures for the inverted boundary conditions. This study gives us greater insight into what differentiates these basins, both in terms of mass transport and ice-flow dynamics, and what could be the controlling mechanisms behind their very different evolutions.
Software design as a problem in learning theory (a research overview)
NASA Technical Reports Server (NTRS)
Fass, Leona F.
1992-01-01
Our interest in automating software design has come out of our research in automated reasoning, inductive inference, learnability, and algebraic machine theory. We have investigated these areas extensively, in connection with specific problems of language representation, acquisition, processing, and design. In the case of formal context-free (CF) languages we established the existence of finite learnable models ('behavioral realizations') and procedures for constructing them effectively. We also determined techniques for automatic construction of the models, inductively inferring them from finite examples of how they should 'behave'. These results were obtainable due to appropriate representation of domain knowledge, and constraints on the domain that the representation defined. It was when we sought to generalize our results, and adapt or apply them, that we began investigating the possibility of determining similar procedures for constructing correct software. Discussions with other researchers led us to examine testing and verification processes, as they are related to inference, and motivating papers by other researchers led us to examine these processes in some depth, given their considerable importance in correct software design. Here we present our approach to the software design issues raised by other researchers, within our own theoretical context. We describe our results relative to those of the other researchers, and conclude that they do not compare unfavorably.
NASA Astrophysics Data System (ADS)
Lobo, Daniel; Lobikin, Maria; Levin, Michael
2017-01-01
Progress in regenerative medicine requires reverse-engineering cellular control networks to infer perturbations with desired systems-level outcomes. Such dynamic models allow phenotypic predictions for novel perturbations to be rapidly assessed in silico. Here, we analyzed a Xenopus model of conversion of melanocytes to a metastatic-like phenotype only previously observed in an all-or-none manner. Prior in vivo genetic and pharmacological experiments showed that individual animals either fully convert or remain normal, at some characteristic frequency after a given perturbation. We developed a Machine Learning method which inferred a model explaining this complex, stochastic all-or-none dataset. We then used this model to ask how a new phenotype could be generated: animals in which only some of the melanocytes converted. Systematically performing in silico perturbations, the model predicted that a combination of altanserin (5HTR2 inhibitor), reserpine (VMAT inhibitor), and VP16-XlCreb1 (constitutively active CREB) would break the all-or-none concordance. Remarkably, applying the predicted combination of three reagents in vivo revealed precisely the expected novel outcome, resulting in partial conversion of melanocytes within individuals. This work demonstrates the capability of automated analysis of dynamic models of signaling networks to discover novel phenotypes and predictively identify specific manipulations that can reach them.
Receptive Field Inference with Localized Priors
Park, Mijung; Pillow, Jonathan W.
2011-01-01
The linear receptive field describes a mapping from sensory stimuli to a one-dimensional variable governing a neuron's spike response. However, traditional receptive field estimators such as the spike-triggered average converge slowly and often require large amounts of data. Bayesian methods seek to overcome this problem by biasing estimates towards solutions that are more likely a priori, typically those with small, smooth, or sparse coefficients. Here we introduce a novel Bayesian receptive field estimator designed to incorporate locality, a powerful form of prior information about receptive field structure. The key to our approach is a hierarchical receptive field model that flexibly adapts to localized structure in both spacetime and spatiotemporal frequency, using an inference method known as empirical Bayes. We refer to our method as automatic locality determination (ALD), and show that it can accurately recover various types of smooth, sparse, and localized receptive fields. We apply ALD to neural data from retinal ganglion cells and V1 simple cells, and find it achieves error rates several times lower than standard estimators. Thus, estimates of comparable accuracy can be achieved with substantially less data. Finally, we introduce a computationally efficient Markov Chain Monte Carlo (MCMC) algorithm for fully Bayesian inference under the ALD prior, yielding accurate Bayesian confidence intervals for small or noisy datasets. PMID:22046110
Efficient Moment-Based Inference of Admixture Parameters and Sources of Gene Flow
Levin, Alex; Reich, David; Patterson, Nick; Berger, Bonnie
2013-01-01
The recent explosion in available genetic data has led to significant advances in understanding the demographic histories of and relationships among human populations. It is still a challenge, however, to infer reliable parameter values for complicated models involving many populations. Here, we present MixMapper, an efficient, interactive method for constructing phylogenetic trees including admixture events using single nucleotide polymorphism (SNP) genotype data. MixMapper implements a novel two-phase approach to admixture inference using moment statistics, first building an unadmixed scaffold tree and then adding admixed populations by solving systems of equations that express allele frequency divergences in terms of mixture parameters. Importantly, all features of the model, including topology, sources of gene flow, branch lengths, and mixture proportions, are optimized automatically from the data and include estimates of statistical uncertainty. MixMapper also uses a new method to express branch lengths in easily interpretable drift units. We apply MixMapper to recently published data for Human Genome Diversity Cell Line Panel individuals genotyped on a SNP array designed especially for use in population genetics studies, obtaining confident results for 30 populations, 20 of them admixed. Notably, we confirm a signal of ancient admixture in European populations—including previously undetected admixture in Sardinians and Basques—involving a proportion of 20–40% ancient northern Eurasian ancestry. PMID:23709261
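MixMapper's actual inference solves systems of equations over f-statistics on a scaffold tree; the moment-matching idea behind it can be illustrated with a toy least-squares estimate of a single mixture proportion from allele frequencies (an illustration only, not the published method):

    import numpy as np

    def mixture_proportion(p_mix, p_a, p_b):
        """Least-squares alpha for p_mix ≈ alpha*p_a + (1-alpha)*p_b,
        estimated across SNPs from population allele frequencies."""
        d = p_a - p_b
        return float(np.dot(p_mix - p_b, d) / np.dot(d, d))

    rng = np.random.default_rng(0)
    p_a = rng.uniform(0.05, 0.95, 5000)   # source population A
    p_b = rng.uniform(0.05, 0.95, 5000)   # source population B
    true_alpha = 0.3
    p_mix = true_alpha * p_a + (1 - true_alpha) * p_b
    print(round(mixture_proportion(p_mix, p_a, p_b), 3))  # ~0.3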
Liao, J. G.; McMurry, Timothy; Berg, Arthur
2014-01-01
Empirical Bayes methods have been extensively used for microarray data analysis by modeling the large number of unknown parameters as random effects. Empirical Bayes allows borrowing information across genes and can automatically adjust for multiple testing and selection bias. However, the standard empirical Bayes model can perform poorly if the assumed working prior deviates from the true prior. This paper proposes a new rank-conditioned inference in which the shrinkage and confidence intervals are based on the distribution of the error conditioned on rank of the data. Our approach is in contrast to a Bayesian posterior, which conditions on the data themselves. The new method is almost as efficient as standard Bayesian methods when the working prior is close to the true prior, and it is much more robust when the working prior is not close. In addition, it allows a more accurate (but also more complex) non-parametric estimate of the prior to be easily incorporated, resulting in improved inference. The new method’s prior robustness is demonstrated via simulation experiments. Application to a breast cancer gene expression microarray dataset is presented. Our R package rank.Shrinkage provides a ready-to-use implementation of the proposed methodology. PMID:23934072
Inferring the Limit Behavior of Some Elementary Cellular Automata
NASA Astrophysics Data System (ADS)
Ruivo, Eurico L. P.; de Oliveira, Pedro P. B.
Cellular automata locally define dynamical systems, discrete in space, time and in the state variables, capable of displaying arbitrarily complex global emergent behavior. One core question in the study of cellular automata refers to their limit behavior, that is, to the global dynamical features in an infinite time evolution. Previous works have shown that for finite time evolutions, the dynamics of one-dimensional cellular automata can be described by regular languages and, therefore, by finite automata. Such studies have shown the existence of growth patterns in the evolution of such finite automata for some elementary cellular automata rules and also inferred the limit behavior of such rules based upon the growth patterns; however, the results on the limit behavior were obtained manually, by direct inspection of the structures that arise during the time evolution. Here we present the formalization of an automatic method to compute such structures. Based on this, the rules of the elementary cellular automata space were classified according to the existence of a growth pattern in their finite automata. Also, we present a method to infer the limit graph of some elementary cellular automata rules, derived from the analysis of the regular expressions that describe their behavior in finite time. Finally, we analyze some attractors of two rules for which we could not compute the whole limit set.
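The finite-time evolutions such analyses operate on are easy to reproduce. A minimal sketch of one synchronous update step of an elementary cellular automaton, using the standard Wolfram rule numbering:

    def eca_step(cells, rule):
        """One synchronous update of an elementary cellular automaton.
        cells: list of 0/1 with periodic boundary; rule: integer 0..255
        whose bit n gives the output for neighborhood value n."""
        n = len(cells)
        table = [(rule >> k) & 1 for k in range(8)]
        return [table[(cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n]] for i in range(n)]

    # Rule 110 from a single seed, a few time steps.
    cfg = [0] * 15 + [1] + [0] * 15
    for _ in range(8):
        print("".join(".#"[c] for c in cfg))
        cfg = eca_step(cfg, 110)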
NASA Astrophysics Data System (ADS)
Statella, T.; Pina, P.; Silva, E. A.; Nervis Frigeri, Ary Vinicius; Neto, Frederico Gallon
2016-10-01
We have calculated the prevailing dust devil track direction as a means of verifying the accuracy of the wind directions predicted by the Mars Climate Database (MCD). For that purpose we applied an automatic method based on morphological openings to infer the prevailing track direction in a dataset comprising 200 Mars Orbiter Camera (MOC) Narrow Angle (NA) and High Resolution Imaging Science Experiment (HiRISE) images of the Martian surface, depicting regions in the Aeolis, Eridania, Noachis, Argyre and Hellas quadrangles. The prevailing local wind directions were calculated from the MCD predicted speeds for the WE and SN wind components. The results showed that the MCD may not be able to predict accurately the locally dominant wind direction near the surface. In addition, we confirm that the surface wind stress alone cannot produce dust lifting in the studied sites, since it never exceeds the threshold value of 0.0225 N m^-2 in the MCD.
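One way to realize a direction estimate from morphological openings (a minimal sketch of the idea, not the authors' calibrated pipeline) is to open the inverted image with line segments at a range of orientations and keep the angle that best preserves the elongated tracks; the assumption that tracks are darker than the background, and the segment length, are illustrative:

    import numpy as np
    from scipy.ndimage import grey_opening

    def line_footprint(length, angle_deg):
        """Binary footprint approximating a line segment at a given angle."""
        fp = np.zeros((length, length), dtype=bool)
        c = length // 2
        t = np.radians(angle_deg)
        for r in np.linspace(-c, c, 2 * length):
            fp[int(round(c + r * np.sin(t))), int(round(c + r * np.cos(t)))] = True
        return fp

    def prevailing_track_direction(img, length=15, angles=range(0, 180, 15)):
        """On the inverted image the dark tracks become bright lines; the
        line opening that preserves the most brightness is aligned with them."""
        inv = img.max() - img.astype(float)
        responses = [grey_opening(inv, footprint=line_footprint(length, a)).sum()
                     for a in angles]
        return list(angles)[int(np.argmax(responses))]

    # Toy image: dark diagonal streaks on a bright plain.
    img = np.full((100, 100), 200.0)
    for k in range(-60, 61, 20):
        j = np.arange(100)
        i = j + k
        m = (i >= 0) & (i < 100)
        img[i[m], j[m]] = 50.0
    print(prevailing_track_direction(img))  # 45 (degrees)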
Quality Assurance of NCI Thesaurus by Mining Structural-Lexical Patterns
Abeysinghe, Rashmie; Brooks, Michael A.; Talbert, Jeffery; Cui, Licong
2017-01-01
Quality assurance of biomedical terminologies such as the National Cancer Institute (NCI) Thesaurus is an essential part of the terminology management lifecycle. We investigate a structural-lexical approach based on non-lattice subgraphs to automatically identify missing hierarchical relations and missing concepts in the NCI Thesaurus. We mine six structural-lexical patterns exhibited in non-lattice subgraphs: containment, union, intersection, union-intersection, inference-contradiction, and inference-union. Each pattern indicates a potential specific type of error and suggests a potential type of remediation. We found 809 non-lattice subgraphs with these patterns in the NCI Thesaurus (version 16.12d). Domain experts evaluated a random sample of 50 small non-lattice subgraphs, of which 33 were confirmed to contain errors and make correct suggestions (33/50 = 66%). Of the 25 evaluated subgraphs revealing multiple patterns, 22 were verified correct (22/25 = 88%). This shows the effectiveness of our structural-lexical-pattern-based approach in detecting errors and suggesting remediations in the NCI Thesaurus. PMID:29854100
Ontology-Based High-Level Context Inference for Human Behavior Identification
Villalonga, Claudia; Razzaq, Muhammad Asif; Khan, Wajahat Ali; Pomares, Hector; Rojas, Ignacio; Lee, Sungyoung; Banos, Oresti
2016-01-01
Recent years have witnessed huge progress in the automatic identification of individual primitives of human behavior, such as activities or locations. However, the complex nature of human behavior demands more abstract contextual information for its analysis. This work presents an ontology-based method that combines low-level primitives of behavior, namely activity, locations and emotions, a combination unprecedented to date, to intelligently derive more meaningful high-level context information. The paper contributes a new open ontology describing both low-level and high-level context information, as well as their relationships. Furthermore, a framework building on the developed ontology and reasoning models is presented and evaluated. The proposed method proves to be robust while identifying high-level contexts even in the event of erroneously detected low-level contexts. Despite reasonable inference times being obtained for a relevant set of users and instances, additional work is required to scale to long-term scenarios with a large number of users. PMID:27690050
Data Analysis with Graphical Models: Software Tools
NASA Technical Reports Server (NTRS)
Buntine, Wray L.
1994-01-01
Probabilistic graphical models (directed graphs, undirected Markov fields, and their combination in chain graphs) are used widely in expert systems, image processing and other areas as a framework for representing and reasoning with probabilities. They come with corresponding algorithms for performing probabilistic inference. This paper discusses an extension to these models by Spiegelhalter and Gilks, plates, used to graphically model the notion of a sample. This offers a graphical specification language for representing data analysis problems. When combined with general methods for statistical inference, it also offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper outlines the framework and then presents some basic tools for the task: a graphical version of the Pitman-Koopman theorem for the exponential family, problem decomposition, and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.
Early Astronomical Sequential Photography, 1873-1923
NASA Astrophysics Data System (ADS)
Bonifácio, Vitor
2011-11-01
In 1873 Jules Janssen conceived the first automatic sequential photographic apparatus to observe the eagerly anticipated 1874 transit of Venus. This device, the 'photographic revolver', is commonly considered today as the earliest cinema precursor. In the following years, in order to study the variability or the motion of celestial objects, several instruments, either manually or automatically actuated, were devised to obtain as many photographs as possible of astronomical events in a short time interval. In this paper we strive to identify from the available documents the attempts made between 1873 and 1923, and discuss the motivations behind them and the results obtained. During the period studied, astronomical sequential photography was employed to determine the instants of contact in transits and occultations, and to study total solar eclipses. The technique was seldom used, but apparently the invention of the modern film camera played no role in this situation. Astronomical sequential photographs were obtained both before and after 1895. We conclude that the development of astronomical sequential photography was constrained by the reduced number of subjects to which the technique could be applied.
Seismo-Geochemical Variations in SW Taiwan: Multi-Parameter Automatic Gas Monitoring Results
NASA Astrophysics Data System (ADS)
Yang, T. F.; Fu, C.-C.; Walia, V.; Chen, C.-H.; Chyi, L. L.; Liu, T.-K.; Song, S.-R.; Lee, M.; Lin, C.-W.; Lin, C.-C.
2006-04-01
Gas variations at many mud volcanoes and hot springs distributed along the tectonic sutures in southwestern Taiwan are considered to be sensitive to earthquake activity. Therefore, a multi-parameter automatic gas station was built on the bank of one of the largest mud pools in an active fault zone of southwestern Taiwan for continuous monitoring of CO2, CH4, N2 and H2O, the major constituents of its bubbling gases. During year-round monitoring from October 2001 to October 2002, the gas composition of the mud pool, especially CH4 and CO2, showed significant variations. Taking the CO2/CH4 ratio as the main indicator, anomalous variations can be recognized from a few days to a few weeks before earthquakes and correlate well with events of local magnitude >4.0 and local intensity >2. It is concluded that the gas composition in the area is sensitive to the local crustal stress/strain and is worth monitoring in real time for seismo-geochemical precursors.
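One simple way such ratio anomalies could be flagged is a rolling z-score on the CO2/CH4 series; the window and threshold below are illustrative assumptions, not the station's actual criteria:

    import numpy as np

    def ratio_anomalies(co2, ch4, window=240, z_thresh=2.5):
        """Flag times where the CO2/CH4 ratio deviates from its recent
        rolling mean by more than z_thresh standard deviations."""
        r = np.asarray(co2, float) / np.asarray(ch4, float)
        flags = np.zeros(r.size, dtype=bool)
        for i in range(window, r.size):
            past = r[i - window:i]
            mu, sd = past.mean(), past.std()
            if sd > 0 and abs(r[i] - mu) > z_thresh * sd:
                flags[i] = True
        return flags

    rng = np.random.default_rng(1)
    co2 = 10.0 + 0.1 * rng.standard_normal(1000)
    ch4 = 80.0 + 0.5 * rng.standard_normal(1000)
    co2[800:] += 1.5  # simulated pre-seismic CO2 enhancement
    print(ratio_anomalies(co2, ch4).nonzero()[0][:3])  # first flags near 800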
Time-dependent influence of sensorimotor set on automatic responses in perturbed stance
NASA Technical Reports Server (NTRS)
Chong, R. K.; Horak, F. B.; Woollacott, M. H.; Peterson, B. W. (Principal Investigator)
1999-01-01
These experiments tested the hypothesis that the ability to change sensorimotor set quickly for automatic responses depends on the time interval between successive surface perturbations. Sensorimotor set refers to the influence of prior experience or context on the state of the sensorimotor system. Sensorimotor set for postural responses was influenced by first giving subjects a block of identical backward translations of the support surface, causing forward sway and automatic gastrocnemius responses. The ability to change set quickly was inferred by measuring the suppression of the stretched antagonist gastrocnemius responses to toes-up rotations causing backward sway, following the translations. Responses were examined under short (10-14 s) and long (19-24 s) inter-trial intervals in young healthy subjects. The results showed that subjects in the long-interval group changed set immediately by suppressing gastrocnemius to 51% of translation responses within the first rotation and continued to suppress them over succeeding rotations. In contrast, subjects in the short-interval group did not change set immediately, but required two or more rotations to suppress gastrocnemius responses. By the last rotation, the short-interval group suppressed gastrocnemius responses to 33%, similar to the long-interval group's 29%. Associated surface plantarflexor torque resulting from these responses showed similar results. When rotation and translation perturbations alternated, however, the short-interval group was not able to suppress gastrocnemius responses to rotations as much as the long-interval group, although they did suppress more than in the first rotation trial after a series of translations. Set for automatic responses appears to linger from one trial to the next. Specifically, sensorimotor set is more difficult to change when surface perturbations are given in close succession, making it appear as if set has become progressively stronger. A strong set does not mean that responses become larger over consecutive trials. Rather, it is inferred by the extent of difficulty in changing a response when it is appropriate to do so. These results suggest that the ability to change sensorimotor set quickly is sensitive to whether the change is required after a long or a short series of a prior different response, which in turn depends on the time interval between successive trials. Different rates of gastrocnemius suppression to toes-up rotation of the support surface have been reported in previous studies. This may be partially explained by the different inter-trial time intervals demonstrated in this study.
Einstein SSS+MPC observations of Seyfert type galaxies
NASA Technical Reports Server (NTRS)
Holt, S. S.; Turner, T. J.; Mushotzky, R. F.; Weaver, K.
1989-01-01
The X-ray spectra of 27 Seyfert galaxies measured with the Solid State Spectrometer (SSS) onboard the Einstein Observatory are investigated. This new investigation features the utilization of simultaneous data from the Monitor Proportional Counter (MPC) and automatic correction for systematic effects in the SSS. The new results are that the best-fit single power-law indices agree with those previously reported, but that soft excesses are inferred for at least 20 percent of the measured spectra. The soft excesses are consistent with either an approximately 0.25 keV black body or Fe-L line emission.
NASA Astrophysics Data System (ADS)
Gu, Y. J.; Schultz, R.
2013-12-01
Knowledge of upper mantle transition zone stratification and composition is highly dependent on our ability to efficiently extract and properly interpret small seismic arrivals. A promising high-frequency seismic phase group particularly suitable for a global analysis is P'P' precursors, which are capable of resolving mantle structures at vertical and lateral resolutions of approximately 5 and 200 km, respectively, owing to their shallow incidence angle and small, quasi-symmetric Fresnel zones. This study presents a simultaneous analysis of SS and P'P' precursors based on deconvolution, Radon transform and depth migration. Our multi-resolution survey of the mantle near the Nazca-South America subduction zone reveals both olivine- and garnet-related transitions at depths below 400 km. We attribute a depressed 660 to thermal variations, whereas compositional variations atop the upper-mantle transition zone are needed to explain the diminished or highly complex reflected/scattered signals from the 410 km discontinuity. We also observe prominent P'P' reflections within the transition zone, especially near the plate boundary zone, where anomalously high reflection amplitudes result from a sharp (~10 km thick) mineral phase change resonant with the dominant frequency of the P'P' precursors. Near the base of the upper mantle, the migration of SS precursors shows no evidence of split reflections near the 660-km discontinuity, but potential majorite-ilmenite (590-640 km) and ilmenite-perovskite transitions (740-750 km) are identified based on similarly processed high-frequency P'P' precursors. At nominal mantle temperatures these two phase changes may be seismically indistinguishable, but colder mantle conditions from the descending Nazca plate, the presence of water and variable Fe contents may cause sufficient separation for a reliable analysis. In addition, our preliminary results provide compelling evidence for multiple shallow lower-mantle reflections (at ~800 km) along the elongated plate boundary zones of South America. Slab stagnation at the base of the transition zone could play a key role, though a proper interpretation of this finding would likely entail compositional (rather than strictly thermal) variations in the vicinity of the descending oceanic crust and lithosphere. Overall, the resolution and sensitivity differences between low/intermediate-frequency S and high-frequency P wave reflections are key considerations toward reconciling seismic and mineralogical models of transition zone structure, both at the study location and worldwide.
Single-crystalline BaTiO3 films grown by gas-source molecular beam epitaxy
NASA Astrophysics Data System (ADS)
Matsubara, Yuya; Takahashi, Kei S.; Tokura, Yoshinori; Kawasaki, Masashi
2014-12-01
Thin BaTiO3 films were grown on GdScO3 (110) substrates by metalorganic gas-source molecular beam epitaxy. Titanium tetra-isopropoxide (TTIP) was used as a volatile precursor that provides a wide growth window of the supplied TTIP/Ba ratio for automatic adjustment of the film composition. Within the growth window, compressively strained films can be grown with excellent crystalline quality, whereas films grown outside of the growth window are relaxed with inferior crystallinity. This growth method will provide a way to study the intrinsic properties of ferroelectric BaTiO3 films and their heterostructures by precise control of the stoichiometry, structure, and purity.
Criaud, Marion; Longcamp, Marieke; Anton, Jean-Luc; Nazarian, Bruno; Roth, Muriel; Sescousse, Guillaume; Strafella, Antonio P; Ballanger, Bénédicte; Boulinguez, Philippe
2017-08-30
The neural mechanisms underlying response inhibition and related disorders are unclear and controversial for several reasons. First, it is a major challenge to assess the psychological bases of behaviour, and ultimately brain-behaviour relationships, of a function which is precisely intended to suppress overt measurable behaviours. Second, response inhibition is difficult to disentangle from other parallel processes involved in more general aspects of cognitive control. Consequently, different psychological and anatomo-functional models coexist, which often appear in conflict with each other even though they are not necessarily mutually exclusive. The standard model of response inhibition in go/no-go tasks assumes that inhibitory processes are reactively and selectively triggered by the stimulus that participants must refrain from reacting to. Recent alternative models suggest that action restraint could instead rely on reactive but non-selective mechanisms (all automatic responses are automatically inhibited in uncertain contexts) or on proactive and non-selective mechanisms (a gating function by which reaction to any stimulus is prevented in anticipation of stimulation when the situation is unpredictable). Here, we assessed the physiological plausibility of these different models by testing their respective predictions regarding event-related BOLD modulations (forward inference using fMRI). We set up a single fMRI design which allowed us to record simultaneously the different possible forms of inhibition while limiting confounds between response inhibition and parallel cognitive processes. We found BOLD dynamics consistent with non-selective models. These results provide new theoretical and methodological lines of inquiry for the study of basic functions involved in behavioural control and related disorders.
Toward synthesizing executable models in biology.
Fisher, Jasmin; Piterman, Nir; Bodik, Rastislav
2014-01-01
Over the last decade, executable models of biological behaviors have repeatedly provided new scientific discoveries, uncovered novel insights, and directed new experimental avenues. These models are computer programs whose execution mechanistically simulates aspects of the cell's behaviors. If the observed behavior of the program agrees with the observed biological behavior, then the program explains the phenomena. One advantage of this approach is that techniques for the analysis of computer programs can be applied to the analysis of executable models. For example, one can confirm that a model agrees with experiments for all possible executions of the model (corresponding to all environmental conditions), even if there is a huge number of executions. Various formal methods have been adapted for this context, for example, model checking or symbolic analysis of state spaces. To avoid manual construction of executable models, one can apply synthesis, a method to produce programs automatically from high-level specifications. In the context of biological modeling, synthesis corresponds to extracting executable models from experimental data. We survey recent results on using the techniques underlying program synthesis for the inference of biological models from experimental data. We describe synthesis of biological models from curated mutation experiment data, inference of network connectivity models from phosphoproteomic data, and synthesis of Boolean networks from gene expression data. While much work has been done on automated analysis of similar datasets using machine learning and artificial intelligence, synthesis techniques provide new opportunities such as efficient computation of disambiguating experiments, as well as the ability to produce different kinds of models automatically from biological data.
SOMBI: Bayesian identification of parameter relations in unstructured cosmological data
NASA Astrophysics Data System (ADS)
Frank, Philipp; Jasche, Jens; Enßlin, Torsten A.
2016-11-01
This work describes the implementation and application of a correlation determination method based on self-organizing maps and Bayesian inference (SOMBI). SOMBI aims to automatically identify relations between different observed parameters in unstructured cosmological or astrophysical surveys by identifying data clusters in high-dimensional datasets via the self-organizing map neural network algorithm. Parameter relations are then revealed by means of Bayesian inference within the respective identified data clusters. Specifically, such relations are assumed to be parametrized as a polynomial of unknown order. The Bayesian approach results in a posterior probability distribution function for the respective polynomial coefficients. To decide which polynomial order suffices to describe the correlation structures in the data, we include a model selection method, the Bayesian information criterion (BIC), in the analysis. The performance of the SOMBI algorithm is tested with mock data. As an illustration we also provide applications of our method to cosmological data. In particular, we present results of a correlation analysis between galaxy and active galactic nucleus (AGN) properties provided by the SDSS catalog and the cosmic large-scale structure (LSS). The results indicate that the combined galaxy and LSS dataset is indeed clustered into several sub-samples of data with different average properties (for example, different stellar masses or web-type classifications). The majority of data clusters appear to have a similar correlation structure between galaxy properties and the LSS. In particular, we reveal a positive and linear dependency of the stellar mass, the absolute magnitude and the color of a galaxy on the corresponding cosmic density field. A remaining subset of the data shows inverted correlations, which might be an artifact of non-linear redshift distortions.
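Polynomial-order selection by the Bayesian information criterion reduces, under a Gaussian-noise assumption, to a short computation; a minimal sketch (a simplification of SOMBI's full posterior treatment):

    import numpy as np

    def best_polynomial_order(x, y, max_order=5):
        """Fit polynomials of increasing order and pick the one minimizing
        BIC = n*ln(RSS/n) + k*ln(n), valid for Gaussian errors."""
        n = len(x)
        bics = []
        for deg in range(max_order + 1):
            coeffs = np.polyfit(x, y, deg)
            rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
            bics.append(n * np.log(rss / n) + (deg + 1) * np.log(n))
        return int(np.argmin(bics))

    rng = np.random.default_rng(2)
    x = np.linspace(-1, 1, 200)
    y = 1.0 + 2.0 * x - 3.0 * x ** 2 + 0.1 * rng.standard_normal(x.size)
    print(best_polynomial_order(x, y))  # 2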
Rueda-Ayala, Victor; Weis, Martin; Keller, Martina; Andújar, Dionisio; Gerhards, Roland
2013-01-01
Harrowing is often used to reduce weed competition, generally using a constant intensity across a whole field. The efficacy of weed harrowing in wheat and barley can be optimized if site-specific conditions of soil, weed infestation and crop growth stage are taken into account. This study aimed to develop and test an algorithm to automatically adjust the harrowing intensity by varying the tine angle and number of passes. The field variability of crop leaf cover, weed density and soil density was acquired with geo-referenced sensors to investigate harrowing selectivity and crop recovery. Crop leaf cover and weed density were assessed using bispectral cameras through differential image analysis. The draught force of the soil, opposite to the direction of travel, was measured with an electronic load cell sensor connected to a rigid tine mounted in front of the harrow. Optimal harrowing intensity levels were derived from previously implemented experiments, based on weed control efficacy and yield gain. The assessments of crop leaf cover, weed density and soil density were combined via rules with the aforementioned optimal intensities in a linguistic fuzzy inference system (LFIS). The system was evaluated in two field experiments that compared constant intensities with variable intensities inferred by the system. A higher weed density reduction could be achieved when the harrowing intensity was not kept constant along the cultivated plot. Varying the intensity tended to reduce the crop leaf cover, though slightly improving crop yield. A real-time intensity adjustment with this system is achievable if the cameras are attached at the front and at the rear or sides of the harrow. PMID:23669712
Theoretical results which strengthen the hypothesis of electroweak bioenantioselection
NASA Astrophysics Data System (ADS)
Zanasi, R.; Lazzeretti, P.; Ligabue, A.; Soncini, A.
1999-03-01
It is shown via a large series of numerical tests on two fundamental organic molecules, the L-α-amino acid L-valine and the sugar precursor hydrated D-glyceraldehyde, that the ab initio calculation of the parity-violating energy shift, at the random-phase approximation level of accuracy, provides results that are about one order of magnitude larger than those obtained by means of the less accurate methods employed previously. These findings would make the hypothesis of electroweak selection of natural enantiomers via the Kondepudi-Nelson scenario more plausible, or could imply that the Salam phase-transition temperature is higher than previously inferred; accordingly, the hypothesis of a terrestrial origin of life would become more realistic.
Industrial solutions trends for the control of HiRes spectrograph@E-ELT
NASA Astrophysics Data System (ADS)
Di Marcantonio, P.; Baldini, V.; Calderone, G.; Cirami, R.; Coretti, I.; Cristiani, S.
Starting a few years ago, ESO initiated a number of projects to explore the possible adoption of industrial standards and commercial off-the-shelf (COTS) components for the control of future VLT and E-ELT instrumentation. In this context ESPRESSO, the next-generation high-stability spectrograph for the VLT and, to a certain extent, a precursor of HiRes, has adopted these solutions since the preliminary design phase. Based on the ESPRESSO experience, and taking into account the requirements inferred from the preliminary HiRes studies in terms of both high-level operations and low-level control, I present in this paper the current proposal for the HiRes hardware architecture.
Annotation-based inference of transporter function.
Lee, Thomas J; Paulsen, Ian; Karp, Peter
2008-07-01
We present a method for inferring and constructing transport reactions for transporter proteins based primarily on the analysis of the names of individual proteins in the genome annotation of an organism. Transport reactions are declarative descriptions of transporter activities and thus, unlike free-text protein names, can be manipulated computationally. Once transporter activities are encoded as transport reactions, a number of computational analyses become possible, including database queries by transporter activity; inclusion of transporters in an automatically generated metabolic-map diagram that can be painted with omics data to aid in their interpretation; detection of anomalies in the metabolic and transport networks, such as substrates that are transported into the cell but are not inputs to any metabolic reaction or pathway; and comparative analyses of the transport capabilities of different organisms. On randomly selected organisms, the method achieves precision and recall rates of 0.93 and 0.90, respectively, in identifying transporter proteins by name within the complete genome. The method obtains 67.5% accuracy in predicting complete transport reactions; if allowance is made for predictions that are overly general yet not incorrect, reaction prediction accuracy is 82.5%. The method is implemented as part of PathoLogic, the inference component of the Pathway Tools software. Pathway Tools, including source code, is freely available to researchers at non-commercial institutions; a fee applies to commercial institutions. Supplementary data are available at Bioinformatics online.
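The core step, recognizing transporter activity from free-text protein names and emitting a declarative reaction, can be sketched with a toy pattern matcher; the patterns and compartment tags below are hypothetical and far cruder than PathoLogic's actual rules:

    import re

    # Toy patterns: a substrate name followed by a transporter keyword.
    PATTERN = re.compile(
        r"(?P<substrate>\w[\w-]*)[ -](?:ABC )?"
        r"(?P<kind>transporter|permease|symporter|antiporter|uptake system)",
        re.IGNORECASE)

    def infer_transport_reaction(protein_name):
        """Return a declarative transport 'reaction' inferred from a
        free-text protein name, or None if no pattern matches."""
        m = PATTERN.search(protein_name)
        if not m:
            return None
        s = m.group("substrate").lower()
        return f"{s}[periplasm] -> {s}[cytosol]"  # hypothetical compartments

    for name in ["maltose ABC transporter", "proline permease",
                 "DNA polymerase III subunit"]:
        print(name, "=>", infer_transport_reaction(name))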
Phylogenomics of plant genomes: a methodology for genome-wide searches for orthologs in plants
Conte, Matthieu G; Gaillard, Sylvain; Droc, Gaetan; Perin, Christophe
2008-01-01
Background: Gene ortholog identification is now a major objective for mining the increasing amount of sequence data generated by complete or partial genome sequencing projects. Comparative and functional genomics urgently need a method for ortholog detection to reduce gene function inference and to aid in the identification of conserved or divergent genetic pathways between several species. As gene functions change during evolution, reconstructing the evolutionary history of genes should be a more accurate way to differentiate orthologs from paralogs. Phylogenomics takes into account phylogenetic information from high-throughput genome annotation and is the most straightforward way to infer orthologs. However, procedures for automatic detection of orthologs are still scarce and suffer from several limitations. Results: We developed a procedure for ortholog prediction between Oryza sativa and Arabidopsis thaliana. Firstly, we established an efficient method to cluster A. thaliana and O. sativa full proteomes into gene families. Then, we developed an optimized phylogenomics pipeline for ortholog inference. We validated the full procedure using test sets of orthologs and paralogs to demonstrate that our method outperforms pairwise methods for ortholog predictions. Conclusion: Our procedure achieved a high level of accuracy in predicting ortholog and paralog relationships. Phylogenomic predictions for all validated gene families in both species were easily achieved and we can conclude that our methodology outperforms similarly based methods. PMID:18426584
A comparison of algorithms for inference and learning in probabilistic graphical models.
Frey, Brendan J; Jojic, Nebojsa
2005-09-01
Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.
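One of the reviewed techniques, iterated conditional modes, is compact enough to sketch for binary image denoising under an Ising-style prior (parameter values are illustrative):

    import numpy as np

    def icm_denoise(noisy, beta=1.5, lam=2.0, sweeps=5):
        """Iterated conditional modes for +/-1 images: greedily set each
        pixel to the value maximizing its local conditional probability,
        trading data fidelity (lam) against neighbor agreement (beta)."""
        x = noisy.copy()
        h, w = x.shape
        for _ in range(sweeps):
            for i in range(h):
                for j in range(w):
                    nb = sum(x[a, b] for a, b in
                             ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                             if 0 <= a < h and 0 <= b < w)
                    # The sign of the local "field" decides the pixel value.
                    x[i, j] = 1 if beta * nb + lam * noisy[i, j] >= 0 else -1
        return x

    rng = np.random.default_rng(3)
    clean = np.ones((32, 32), dtype=int); clean[:, 16:] = -1
    noisy = np.where(rng.random(clean.shape) < 0.15, -clean, clean)
    restored = icm_denoise(noisy)
    print((restored != clean).mean())  # fraction of residual errors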
French, William R; Zimmerman, Lisa J; Schilling, Birgit; Gibson, Bradford W; Miller, Christine A; Townsend, R Reid; Sherrod, Stacy D; Goodwin, Cody R; McLean, John A; Tabb, David L
2015-02-06
We report the implementation of high-quality signal processing algorithms into ProteoWizard, an efficient, open-source software package designed for analyzing proteomics tandem mass spectrometry data. Specifically, a new wavelet-based peak-picker (CantWaiT) and a precursor charge determination algorithm (Turbocharger) have been implemented. These additions into ProteoWizard provide universal tools that are independent of vendor platform for tandem mass spectrometry analyses and have particular utility for intralaboratory studies requiring the advantages of different platforms convergent on a particular workflow or for interlaboratory investigations spanning multiple platforms. We compared results from these tools to those obtained using vendor and commercial software, finding that in all cases our algorithms resulted in a comparable number of identified peptides for simple and complex samples measured on Waters, Agilent, and AB SCIEX quadrupole time-of-flight and Thermo Q-Exactive mass spectrometers. The mass accuracy of matched precursor ions also compared favorably with vendor and commercial tools. Additionally, typical analysis runtimes (∼1-100 ms per MS/MS spectrum) were short enough to enable the practical use of these high-quality signal processing tools for large clinical and research data sets.
Morton, L L; Siegel, L S
1991-02-01
Twenty reading comprehension-disabled (CD) and 20 reading comprehension and word recognition-disabled (CWRD), right-handed male children were matched with 20 normal-achieving age-matched controls and 20 normal-achieving reading level-matched controls and tested for left ear report on dichotic listening tasks using digits and consonant-vowel combinations (CVs). Left ear report for CVs and digits did not correlate for any of the groups. Both reading-disabled groups showed lower left ear report on digits. On CVs the CD group showed a high left ear report but only when there were no priming precursors, such as directions to attend right first and to process digits first. Priming effects interfered with the processing of both digits and CVs. Theoretically, the CWRD group seems to be characterized by a depressed right hemisphere, whereas the CD group may have a more labile right hemisphere, perhaps tending to overengagement for CV tasks but vulnerable to situational precursors in the form of priming effects. Implications extend to (1) subtyping practices in research with the learning-disabled, (2) inferences drawn from studies using different dichotic stimuli, and (3) the neuropsychology of reading disorders.
NASA Technical Reports Server (NTRS)
Trainer, Melissa G.; Sebree, Joshua A.; Anderson, Carrie M.; Loeffler, Mark J.
2012-01-01
Since Cassini's arrival at Titan, ppm levels of benzene (C6H6), as well as large positive ions which may be polycyclic aromatic hydrocarbons (PAHs), have been detected in the atmosphere. Aromatic molecules, photolytically active in the ultraviolet, may be important in the formation of the organic aerosol comprising the Titan haze layer even when present at low mixing ratios. Yet there have not been laboratory simulations exploring the impact of these molecules as precursors to Titan's organic aerosol. Observations of Titan by the Cassini Composite Infrared Spectrometer (CIRS) in the far-infrared (far-IR) between 560 and 20/cm (approx. 18 to 500 microns) and in the mid-infrared (mid-IR) between 1500 and 600/cm (approx. 7 to 17 microns) have been used to infer the vertical variations of Titan's aerosol from the surface to an altitude of 300 km in the far-IR and between 150 and 350 km in the mid-IR. Titan's aerosol has several observed emission features which cannot be reproduced using currently available optical constants from laboratory-generated Titan aerosol analogs, including a broad far-IR feature centered approximately at 140/cm (71 microns).
A statistical approach for inferring the 3D structure of the genome.
Varoquaux, Nelle; Ay, Ferhat; Noble, William Stafford; Vert, Jean-Philippe
2014-06-15
Recent technological advances allow the measurement, in a single Hi-C experiment, of the frequencies of physical contacts among pairs of genomic loci at a genome-wide scale. The next challenge is to infer, from the resulting DNA-DNA contact maps, accurate 3D models of how chromosomes fold and fit into the nucleus. Many existing inference methods rely on multidimensional scaling (MDS), in which the pairwise distances of the inferred model are optimized to resemble pairwise distances derived directly from the contact counts. These approaches, however, often optimize a heuristic objective function and require strong assumptions about the biophysics of DNA to transform interaction frequencies into spatial distances, and thereby may lead to incorrect structure reconstruction. We propose a novel approach to infer a consensus 3D structure of a genome from Hi-C data. The method incorporates a statistical model of the contact counts, assuming that the counts between two loci follow a Poisson distribution whose intensity decreases with the physical distance between the loci. The method can automatically adjust the transfer function relating spatial distance to Poisson intensity and infer a genome structure that best explains the observed data. We compare two variants of our Poisson method, with or without optimization of the transfer function, to four different MDS-based algorithms (two metric MDS methods using different stress functions, a non-metric version of MDS, and ChromSDE, a recently described, advanced MDS method) on a wide range of simulated datasets. We demonstrate that the Poisson models reconstruct better structures than all MDS-based methods, particularly at low coverage and high resolution, and we highlight the importance of optimizing the transfer function. On publicly available Hi-C data from mouse embryonic stem cells, we show that the Poisson methods lead to more reproducible structures than MDS-based methods when we use data generated using different restriction enzymes, and when we reconstruct structures at different resolutions. A Python implementation of the proposed method is available at http://cbio.ensmp.fr/pastis.
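The Poisson model's log-likelihood for a candidate structure follows directly from the description above; a minimal sketch with a fixed power-law transfer function (the published method can additionally optimize this function):

    import numpy as np

    def poisson_loglik(coords, counts, alpha=-3.0, beta=1.0):
        """Log-likelihood of Hi-C contact counts given 3D loci coordinates.
        Counts c_ij ~ Poisson(beta * d_ij^alpha), with d_ij the Euclidean
        distance between loci i and j (alpha < 0: closer loci contact more)."""
        n = coords.shape[0]
        ll = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                d = np.linalg.norm(coords[i] - coords[j])
                lam = beta * d ** alpha
                ll += counts[i, j] * np.log(lam) - lam  # up to the log(c!) term
        return ll

    # Toy data: sample counts from the model itself, then score the truth.
    rng = np.random.default_rng(4)
    coords = rng.standard_normal((20, 3))
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    counts = rng.poisson(np.where(d > 0, d, 1.0) ** -3.0)
    print(poisson_loglik(coords, counts))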
Real time validation of GPS TEC precursor mask for Greece
NASA Astrophysics Data System (ADS)
Pulinets, Sergey; Davidenko, Dmitry
2013-04-01
Earlier studies of pre-earthquake ionospheric variations established that, for any specific site, these variations show definite stability in their temporal behavior within a time interval of a few days before the seismic shock. This self-similarity (characteristic of phenomena observed close to the critical point of a system) permits us to consider these variations as a good candidate for a short-term precursor. A physical mechanism of GPS TEC variations before earthquakes has been developed within the framework of the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model. Taking into account the different tectonic structures and different source mechanisms of earthquakes in different regions of the globe, every site has its individual pre-earthquake behavior, which creates an individual "imprint" on the ionosphere at every given point. It is this so-called "mask" of ionospheric variability before an earthquake at a given point that creates the opportunity to detect anomalous behavior of the ionospheric electron concentration based not only on statistical processing but also on pattern recognition techniques, which facilitates the automatic recognition of short-term ionospheric precursors of earthquakes. Such a precursor mask was created using the GPS TEC variations around the times of 9 earthquakes with magnitudes from M6.0 to M6.9 that took place in Greece within the interval 2006-2011. The major anomaly revealed in the relative deviation of the vertical TEC was a positive anomaly appearing at ~04 PM UT one day before the seismic shock and lasting nearly 12 hours, until ~04 AM UT. To validate this approach, it was decided to check the mask in real-time monitoring of earthquakes in Greece with magnitude greater than 4.5, starting from 1 December 2012. During this period (until 9 January 2013), 4 seismic shocks were registered, including the largest one, M5.7, on 8 January. For all of them the mask confirmed its validity, and the 6 December event was predicted in advance.
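The mask-matching logic lends itself to a compact illustration. The sketch below assumes hourly vertical TEC values; the 16 UT onset and 12-hour duration follow the mask described above, but the background window and the 20% anomaly threshold are assumptions, not the authors' calibrated values.

```python
import numpy as np

def relative_tec_deviation(tec_hourly, window_days=15):
    """Relative deviation (%) of TEC from a moving-average background."""
    tec = np.asarray(tec_hourly, dtype=float)
    w = window_days * 24
    background = np.convolve(tec, np.ones(w) / w, mode="same")
    return 100.0 * (tec - background) / background

def mask_matches(dtec, onset_hour=16, duration=12, threshold=20.0):
    """True if dTEC stays positive above threshold for `duration` hours.

    dtec: hourly dTEC (%) starting at 00 UT on the day preceding a candidate
    shock, long enough to cover the 16 UT -> 04 UT window of the mask.
    """
    segment = np.asarray(dtec)[onset_hour:onset_hour + duration]
    return bool(np.all(segment > threshold))
```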
NASA Astrophysics Data System (ADS)
Albert, Carlo; Ulzega, Simone; Stoop, Ruedi
2016-04-01
Measured time-series of both precipitation and runoff are known to exhibit highly non-trivial statistical properties. For making reliable probabilistic predictions in hydrology, it is therefore desirable to have stochastic models with output distributions that share these properties. When parameters of such models have to be inferred from data, we also need to quantify the associated parametric uncertainty. For non-trivial stochastic models, however, this latter step is typically very demanding, both conceptually and numerically, and is almost never done in hydrology. Here, we demonstrate that methods developed in statistical physics make a large class of stochastic differential equation (SDE) models amenable to full-fledged Bayesian parameter inference. For concreteness, we demonstrate these methods by means of a simple yet non-trivial toy SDE model. We consider a natural catchment that, at the scale of observation, can be described by a linear reservoir. All the neglected processes are assumed to happen at much shorter time-scales and are therefore modeled with a Gaussian white noise term, the standard deviation of which is assumed to scale linearly with the system state (the water volume in the catchment). Even for constant input, the output of this simple non-linear SDE model shows a wealth of desirable statistical properties, such as fat-tailed distributions and long-range correlations. Standard algorithms for Bayesian inference fail for models of this kind because their likelihood functions are extremely high-dimensional intractable integrals over all possible model realizations. Kalman filters are not applicable due to the non-linearity of the model. Particle filters could be used, but they become increasingly inefficient with a growing number of data points. Hamiltonian Monte Carlo algorithms allow us to translate this inference problem into the problem of simulating the dynamics of a statistical mechanics system, giving us access to the most sophisticated methods developed in the statistical physics community over the last few decades. We demonstrate that such methods, along with automatic differentiation algorithms, allow us to perform full-fledged Bayesian inference, for a large class of SDE models, in a highly efficient and largely automated manner. Furthermore, our algorithm is highly parallelizable. For our toy model, discretized with a few hundred points, a full Bayesian inference can be performed in a matter of seconds on a standard PC.
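The toy model itself is easy to write down. Below is a minimal Euler-Maruyama simulation of the linear reservoir with state-proportional noise; the parameter values are illustrative assumptions, not those calibrated in the study.

```python
import numpy as np

def simulate_reservoir(v0=1.0, rain=0.5, k=2.0, sigma=0.3, dt=0.01,
                       n_steps=10_000, seed=1):
    """dV = (rain - V/k) dt + sigma * V dW, integrated with Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    v = np.empty(n_steps)
    v[0] = v0
    for i in range(1, n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))                   # Brownian increment
        v[i] = v[i-1] + (rain - v[i-1] / k) * dt + sigma * v[i-1] * dw
        v[i] = max(v[i], 0.0)                               # volume stays non-negative
    return v

runoff = simulate_reservoir() / 2.0   # outflow Q = V/k for k = 2
```

Even with constant rain, the multiplicative noise term produces the fat-tailed runoff distributions mentioned above; the inference problem is then to recover rain, k, and sigma with uncertainty from observed runoff.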
Weinstein, Nathan; Mendoza, Luis
2013-01-01
The vulva of Caenorhabditis elegans has long been used as an experimental model of cell differentiation and organogenesis. While it is known that the signaling cascades of Wnt, Ras/MAPK, and NOTCH interact to form a molecular network, there is no consensus regarding its precise topology and dynamical properties. We inferred the molecular network and developed a multivalued synchronous discrete dynamic model to study its behavior. The model reproduces the patterns of activation reported for the following types of cell: vulval precursor, first fate, second fate, second fate with reversed polarity, third fate, and fusion fate. We simulated the fusion of cells, the determination of the first, second, and third fates, as well as the transition from the second to the first fate. We also used the model to simulate all possible single loss- and gain-of-function mutants, as well as some relevant double and triple mutants. Importantly, we associated most of these simulated mutants with multivulva, vulvaless, egg-laying defective, or defective polarity phenotypes. The model shows that it is necessary for RAL-1 to activate NOTCH signaling, since the repression of LIN-45 by RAL-1 would not suffice for a proper second fate determination in an environment lacking DSL ligands. We also found that the model requires the complex formed by LAG-1, LIN-12, and SEL-8 to inhibit the transcription of eff-1 in second fate cells. Our model is the largest reconstruction to date of the molecular network controlling the specification of vulval precursor cells and cell fusion control in C. elegans. According to our model, the process of fate determination in the vulval precursor cells is reversible, at least until either the cells fuse with the ventral hypoderm or divide, and therefore the cell fates must be maintained by the presence of extracellular signals. PMID:23785384
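To make the modeling formalism concrete, here is a minimal sketch of a multivalued synchronous discrete dynamic model. The three nodes and update rules are hypothetical stand-ins, not the published vulval network.

```python
def step(state):
    """Synchronous update: every rule reads the previous state only."""
    wnt, ras, notch = state["WNT"], state["RAS"], state["NOTCH"]
    return {
        "WNT":   wnt,                                     # external input, held fixed
        "RAS":   min(2, wnt + (1 if notch == 0 else 0)),  # multivalued: levels 0..2
        "NOTCH": 1 if ras == 0 else 0,                    # mutual inhibition with RAS
    }

state = {"WNT": 1, "RAS": 0, "NOTCH": 1}
trajectory = [state]
for _ in range(10):
    state = step(state)
    trajectory.append(state)
# The fixed point or cycle the trajectory settles into plays the role of a
# cell-fate attractor; mutants are simulated by clamping a node's value.
```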
NASA Astrophysics Data System (ADS)
Mølgaard, Lasse L.; Buus, Ole T.; Larsen, Jan; Babamoradi, Hamid; Thygesen, Ida L.; Laustsen, Milan; Munk, Jens Kristian; Dossi, Eleftheria; O'Keeffe, Caroline; Lässig, Lina; Tatlow, Sol; Sandström, Lars; Jakobsen, Mogens H.
2017-05-01
We present a data-driven machine learning approach to detect drug and explosives precursors using colorimetric sensor technology for air sampling. The sensing technology has been developed in the context of the CRIM-TRACK project. At present, a fully integrated portable prototype for air sampling with disposable sensing chips and automated data acquisition has been developed. The prototype allows for fast, user-friendly sampling, which has made it possible to produce large datasets of colorimetric data for different target analytes in laboratory and simulated real-world application scenarios. To make use of the highly multivariate data produced by the colorimetric chip, a number of machine learning techniques are employed to provide reliable classification of target analytes against confounders found in the air streams. We demonstrate that a data-driven machine learning method using dimensionality reduction in combination with a probabilistic classifier makes it possible to produce informative features and a high detection rate of analytes. Furthermore, the probabilistic machine learning approach provides a means of automatically identifying unreliable measurements that could produce false predictions. The robustness of the colorimetric sensor has been evaluated in a series of experiments focusing on the amphetamine precursor phenylacetone as well as the improvised-explosives precursor hydrogen peroxide. The analysis demonstrates that the system is able to detect analytes in clean air and mixed with substances that occur naturally in real-world sampling scenarios. The technology under development in CRIM-TRACK has the potential to become an effective tool for controlling the trafficking of illegal drugs, for explosives detection, and for other law enforcement applications.
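The described pipeline can be sketched generically. The snippet below uses PCA for dimensionality reduction and logistic regression as the probabilistic classifier, with a hypothetical 0.9 posterior threshold for flagging unreliable measurements; the actual CRIM-TRACK feature extraction and model choices may differ.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 300))    # stand-in for per-spot color-change features
y = rng.integers(0, 2, size=200)   # analyte vs. confounder labels

pipe = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
pipe.fit(X, y)

proba = pipe.predict_proba(X)
predictions = pipe.predict(X)
reliable = proba.max(axis=1) > 0.9   # low-confidence samples flagged as unreliable
```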
Content-aware automatic cropping for consumer photos
NASA Astrophysics Data System (ADS)
Tang, Hao; Tretter, Daniel; Lin, Qian
2013-03-01
Consumer photos are typically authored once, but need to be retargeted for reuse in various situations. These include printing a photo on different size paper, changing the size and aspect ratio of an embedded photo to accommodate the dynamic content layout of web pages or documents, adapting a large photo for browsing on small displays such as mobile phone screens, and improving the aesthetic quality of a photo that was badly composed at capture time. In this paper, we propose a novel, effective, and comprehensive content-aware automatic cropping (hereafter referred to as "autocrop") method for consumer photos to achieve the above purposes. Our autocrop method combines the state-of-the-art context-aware saliency detection algorithm, which aims to infer the likely intent of the photographer, and the "branch-and-bound" efficient subwindow search optimization technique, which seeks to locate the globally optimal cropping rectangle quickly. Unlike most current autocrop methods, which can only crop a photo into a rectangle of uncontrolled dimensions, our autocrop method can automatically crop a photo into either a rectangle of arbitrary dimensions or a rectangle of the desired aspect ratio specified by the user. The aggressiveness of the cropping operation may be either automatically determined by the method or manually indicated by the user with ease. In addition, our autocrop method is extended to support the cropping of a photo into non-rectangular shapes such as polygons of any number of sides. It may also be potentially extended to return multiple cropping suggestions, which would enable the creation of new photos to enrich the original photo collections. Our experimental results show that the proposed autocrop method can generate high-quality crops for consumer photos of various types.
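While the paper uses branch-and-bound efficient subwindow search, the underlying objective is easy to illustrate with a brute-force variant: maximize the saliency enclosed by a rectangle of the requested aspect ratio, evaluated in constant time per candidate via an integral image. This is a simplified sketch, not the paper's algorithm.

```python
import numpy as np

def best_crop(saliency, aspect=4/3, scale=0.6):
    """Return (top, left, height, width) of the max-saliency crop rectangle."""
    h, w = saliency.shape
    ch = int(h * scale)                     # crop height; scale sets aggressiveness
    cw = min(w, int(ch * aspect))           # width follows the desired aspect ratio
    ii = np.pad(saliency, ((1, 0), (1, 0))).cumsum(0).cumsum(1)  # integral image
    best, best_score = None, -np.inf
    for top in range(h - ch + 1):
        for left in range(w - cw + 1):
            s = (ii[top + ch, left + cw] - ii[top, left + cw]
                 - ii[top + ch, left] + ii[top, left])
            if s > best_score:
                best, best_score = (top, left, ch, cw), s
    return best
```

Branch-and-bound reaches the same global optimum while pruning most candidate rectangles.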
Software for Data Analysis with Graphical Models
NASA Technical Reports Server (NTRS)
Buntine, Wray L.; Roy, H. Scott
1994-01-01
Probabilistic graphical models are being used widely in artificial intelligence and statistics, for instance, in diagnosis and expert systems, as a framework for representing and reasoning with probabilities and independencies. They come with corresponding algorithms for performing statistical inference. This offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper illustrates the framework with an example and then presents some basic techniques for the task: problem decomposition and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.
Bayesian Covariate Selection in Mixed-Effects Models For Longitudinal Shape Analysis
Muralidharan, Prasanna; Fishbaugh, James; Kim, Eun Young; Johnson, Hans J.; Paulsen, Jane S.; Gerig, Guido; Fletcher, P. Thomas
2016-01-01
The goal of longitudinal shape analysis is to understand how anatomical shape changes over time, in response to biological processes, including growth, aging, or disease. In many imaging studies, it is also critical to understand how these shape changes are affected by other factors, such as sex, disease diagnosis, IQ, etc. Current approaches to longitudinal shape analysis have focused on modeling age-related shape changes, but have not included the ability to handle covariates. In this paper, we present a novel Bayesian mixed-effects shape model that incorporates simultaneous relationships between longitudinal shape data and multiple predictors or covariates into the model. Moreover, we place an Automatic Relevance Determination (ARD) prior on the parameters, which lets us automatically select which covariates are most relevant to the model based on the observed data. We evaluate our proposed model and inference procedure on a longitudinal study of Huntington's disease from PREDICT-HD. We first show the utility of the ARD prior for model selection in a univariate modeling of striatal volume, and next we apply the full high-dimensional longitudinal shape model to putamen shapes. PMID:28090246
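The covariate-selection mechanism can be illustrated with a generic ARD regression, here via scikit-learn rather than the paper's mixed-effects shape model; the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))                       # e.g., age, sex, diagnosis, IQ, ...
w_true = np.array([1.5, 0.0, -2.0, 0.0, 0.0, 0.8])  # only three covariates matter
y = X @ w_true + rng.normal(scale=0.1, size=100)

model = ARDRegression().fit(X, y)
# The ARD prior gives each covariate its own precision (model.lambda_);
# irrelevant covariates get large precision and their coefficients shrink to ~0.
relevant = np.abs(model.coef_) > 1e-2
```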
Automatic detection of unattended changes in room acoustics.
Frey, Johannes Daniel; Wendt, Mike; Jacobsen, Thomas
2015-01-01
Previous research has shown that the human auditory system continuously monitors its acoustic environment, detecting a variety of irregularities (e.g., deviance from prior stimulation regularity in pitch, loudness, duration, and (perceived) sound source location). Detection of irregularities can be inferred from a component of the event-related brain potential (ERP) referred to as the mismatch negativity (MMN), even in conditions in which participants are instructed to ignore the auditory stimulation. The current study extends previous findings by demonstrating that auditory irregularities brought about by a change in room acoustics elicit an MMN in a passive oddball protocol (standard and deviant stimuli were acoustically identical except for their differing room acoustics), in which participants watched a fiction movie (silent with subtitles). Only one out of 14 participants reported having become aware of changing room acoustics or sound source location; the majority reported no awareness of any changes in the auditory stimulation. Together, these findings suggest automatic monitoring of room acoustics. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Algorithms for database-dependent search of MS/MS data.
Matthiesen, Rune
2013-01-01
The frequently used bottom-up strategy for identification of proteins and their associated modifications nowadays typically generates thousands of MS/MS spectra that are normally matched automatically against a protein sequence database. Search engines that take as input MS/MS spectra and a protein sequence database are referred to as database-dependent search engines. Many programs, both commercial and freely available, exist for database-dependent search of MS/MS spectra, and most of them have excellent user documentation. The aim here is therefore to outline the algorithmic strategies behind different search engines rather than to provide software user manuals. The process of database-dependent search can be divided into search strategy, peptide scoring, protein scoring, and finally protein inference. Most efforts in the literature have gone into comparing results from different software rather than discussing the underlying algorithms. Such practical comparisons can be cluttered by suboptimal implementations, and the observed differences are frequently caused by software parameter settings that have not been set properly to allow an even comparison. In other words, an algorithmic idea can still be worth considering even if its software implementation has been demonstrated to be suboptimal. The aim in this chapter is therefore to split the algorithms for database-dependent searching of MS/MS data into the above steps so that the different algorithmic ideas become more transparent and comparable. Most search engines provide good implementations of the first three data analysis steps mentioned above, whereas the final step of protein inference is much less developed for most search engines and is in many cases performed by external software. The final part of this chapter illustrates how protein inference is built into the VEMS search engine and discusses a stand-alone program, SIR, for protein inference that can import a Mascot search result.
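At the heart of the peptide-scoring step, most engines share one primitive: matching theoretical fragment masses against observed peaks within a tolerance. A minimal sketch (counting matches only; real engines add intensity weighting, charge states, and probabilistic scoring):

```python
import numpy as np

def match_score(theoretical_mz, observed_mz, tol=0.02):
    """Number of theoretical fragment m/z values with an observed peak within tol (Da)."""
    observed = np.sort(np.asarray(observed_mz, dtype=float))
    idx = np.searchsorted(observed, theoretical_mz)
    score = 0
    for mz, i in zip(theoretical_mz, idx):
        neighbors = observed[max(i - 1, 0):i + 1]   # closest peaks on either side
        if neighbors.size and np.min(np.abs(neighbors - mz)) <= tol:
            score += 1
    return score
```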
Koehne, Svenja; Behrends, Andrea; Fairhurst, Merle T; Dziobek, Isabel
2016-01-01
Since social cognition is impaired in individuals with autism spectrum disorder (ASD), this study aimed at establishing the efficacy of a newly developed imitation- and synchronization-based dance/movement intervention (SI-DMI) in fostering emotion inference and empathic feelings (emotional reaction to feelings of others) in adults with high-functioning ASD. Fifty-five adults with ASD (IQ ≥85) who were blinded to the aim of the study were assigned to receive either 10 weeks of a dance/movement intervention focusing on interpersonal movement imitation and synchronization (SI-DMI, n = 27) or a control movement intervention (CMI, n = 24) focusing on individual motor coordination (2 participants from each group declined before baseline testing). The primary outcome measure was the objective Multifaceted Empathy Test targeting emotion inference and empathic feelings. Secondary outcomes were scores on the self-rated Interpersonal Reactivity Index. The well-established automatic imitation task and synchronization finger-tapping task were used to quantify effects on imitation and synchronization functions, complemented by the more naturalistic Assessment of Spontaneous Interaction in Movement. Intention-to-treat analyses revealed that from baseline to 3 months, patients treated with SI-DMI showed a significantly larger improvement in emotion inference (d = 0.58), but not empathic feelings, than those treated with CMI (d = -0.04). On the close generalization level, SI-DMI increased synchronization skills and imitation tendencies, as well as whole-body imitation/synchronization and movement reciprocity/dialogue, compared to CMI. SI-DMI can be successful in promoting emotion inference in adults with ASD and warrants further investigation. © 2015 S. Karger AG, Basel.
Inferring transposons activity chronology by TRANScendence - TEs database and de-novo mining tool.
Startek, Michał Piotr; Nogły, Jakub; Gromadka, Agnieszka; Grzebelus, Dariusz; Gambin, Anna
2017-10-16
The constant progress in sequencing technology leads to ever increasing amounts of genomic data. In the light of current evidence, transposable elements (TEs for short) are becoming useful tools for learning about the evolution of the host genome. Therefore, software for genome-wide detection and analysis of TEs is of great interest. Here we describe a computational tool for mining, classifying, and storing TEs from newly sequenced genomes. This is an online, web-based, user-friendly service enabling users to upload their own genomic data and perform de-novo searches for TEs. The detected TEs are automatically analyzed, compared to reference databases, annotated, clustered into families, and stored in a TE repository. In addition, the genome-wide nesting structure of the found elements is detected and analyzed by a new method for inferring the evolutionary history of TEs. We illustrate the functionality of our tool by performing a full-scale analysis of the TE landscape in the Medicago truncatula genome. TRANScendence is an effective tool for the de-novo annotation and classification of transposable elements in newly-acquired genomes. Its streamlined interface makes it well-suited for evolutionary studies.
The expert surgical assistant. An intelligent virtual environment with multimodal input.
Billinghurst, M; Savage, J; Oppenheimer, P; Edmond, C
1996-01-01
Virtual Reality has made computer interfaces more intuitive but not more intelligent. This paper shows how an expert system can be coupled with multimodal input in a virtual environment to provide an intelligent simulation tool or surgical assistant. This is accomplished in three steps. First, voice and gestural input is interpreted and represented in a common semantic form. Second, a rule-based expert system is used to infer context and user actions from this semantic representation. Finally, the inferred user actions are matched against steps in a surgical procedure to monitor the user's progress and provide automatic feedback. In addition, the system can respond immediately to multimodal commands for navigational assistance and/or identification of critical anatomical structures. To show how these methods are used we present a prototype sinus surgery interface. The approach described here may easily be extended to a wide variety of medical and non-medical training applications by making simple changes to the expert system database and virtual environment models. Successful implementation of an expert system in both simulated and real surgery has enormous potential for the surgeon both in training and clinical practice.
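The third step, matching inferred actions against a known procedure, can be sketched simply; the step names below are hypothetical placeholders, not the actual sinus-surgery procedure.

```python
PROCEDURE = ["identify_landmark", "incise_mucosa", "remove_uncinate", "open_ethmoid"]

def track_progress(inferred_actions, procedure=PROCEDURE):
    """Return the completed prefix of the procedure and the next expected step."""
    completed = []
    for step in procedure:
        if step in inferred_actions:
            completed.append(step)
        else:
            break                      # the first unmet step is where the user is
    next_step = procedure[len(completed)] if len(completed) < len(procedure) else None
    return completed, next_step

done, nxt = track_progress({"identify_landmark", "incise_mucosa"})
# feedback would announce: next expected step is "remove_uncinate"
```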
New challenges for text mining: mapping between text and manually curated pathways
Oda, Kanae; Kim, Jin-Dong; Ohta, Tomoko; Okanohara, Daisuke; Matsuzaki, Takuya; Tateisi, Yuka; Tsujii, Jun'ichi
2008-01-01
Background: Associating literature with pathways poses new challenges to the Text Mining (TM) community. There are three main challenges to this task: (1) the identification of the mapping position of a specific entity or reaction in a given pathway, (2) the recognition of the causal relationships among multiple reactions, and (3) the formulation and implementation of required inferences based on biological domain knowledge. Results: To address these challenges, we constructed new resources to link the text with a model pathway: the GENIA pathway corpus with event annotation and the NF-kB pathway. Through their detailed analysis, we address the untapped resource, ‘bio-inference,’ as well as the differences between text and pathway representation. Here, we show precise comparisons of their representations and the nine classes of ‘bio-inference’ schemes observed in the pathway corpus. Conclusions: We believe that the creation of such rich resources and their detailed analysis is a significant first step toward accelerating research on the automatic construction of pathways from text. PMID:18426550
Flame analysis using image processing techniques
NASA Astrophysics Data System (ADS)
Her Jie, Albert Chang; Zamli, Ahmad Faizal Ahmad; Zulazlan Shah Zulkifli, Ahmad; Yee, Joanne Lim Mun; Lim, Mooktzeng
2018-04-01
This paper presents image processing techniques with the use of fuzzy logic and a neural network approach to perform flame analysis. Flame diagnostics are important in industry for extracting relevant information from flame images. Experimental tests were carried out in a model industrial burner at different flow rates. Flame features such as luminous and spectral parameters are extracted using image processing and the Fast Fourier Transform (FFT). Flame images are acquired using a FLIR infrared camera. Non-linearities such as thermoacoustic oscillations and background noise affect flame stability. Flame velocity is one of the important characteristics that determine flame stability. In this paper, an image processing method is proposed to determine flame velocity. A power spectral density (PSD) graph is a good tool for vibration analysis, from which flame stability can be approximated. However, a more intelligent diagnostic system is needed to determine flame stability automatically. In this paper, flame features at different flow rates are compared and analyzed. The selected flame features are used as inputs to the proposed fuzzy inference system to determine flame stability. A neural network is used to test the performance of the fuzzy inference system.
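The fuzzy stage can be illustrated without any toolbox. Below is a minimal hand-rolled Mamdani-style inference over two normalized flame features; the membership functions and rules are illustrative assumptions, not the tuned system from the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

def flame_stability(psd_peak, velocity):
    """Fuzzy stability score in [0, 1] from two features normalized to [0, 1]."""
    low_osc  = tri(psd_peak, -0.5, 0.0, 0.5)   # 'low oscillation' membership
    high_osc = tri(psd_peak,  0.5, 1.0, 1.5)
    slow     = tri(velocity, -0.5, 0.0, 0.5)
    fast     = tri(velocity,  0.5, 1.0, 1.5)
    stable   = min(low_osc, slow)      # rule 1: low oscillation AND slow velocity
    unstable = max(high_osc, fast)     # rule 2: high oscillation OR fast velocity
    return stable / (stable + unstable + 1e-9)   # defuzzified stability score

score = flame_stability(psd_peak=0.2, velocity=0.3)   # -> close to 1 (stable)
```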
NASA Astrophysics Data System (ADS)
Aucan, J.; Merrifield, M. A.; Pouvreau, N.
2017-10-01
Automatic sea-level measurements in Nouméa, South Pacific, started in 1957 for the International Geophysical Year. Data from this location exist as a paper record for the 1957-1967 period and as two distinct electronic records for the 1967-2005 and 2005-2015 periods. In this study, we digitized the early record and established a link between the two electronic records to create a unique, nearly 60-year-long instrumental sea-level record, one of the longest in the Pacific Islands. These data are critical for the study of regional and interannual variations of sea level. The new data set is then used to infer rates of vertical land movement by comparing it to (1) the entire satellite altimetric record (1993-2013) and (2) a global sea-level reconstruction (1957-2010). The inferred rates show an uplift of 1.3-1.4 mm/year, opposite to the currently accepted values of subsidence found in the geological and geodetic literature, and underline the importance of systematic geodetic measurements at, or very near, tide gauges.
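The vertical-motion estimate reduces to a difference of linear trends: relative sea level from the tide gauge versus absolute sea level from altimetry or the reconstruction. A minimal sketch, assuming annual series already aligned in time:

```python
import numpy as np

def linear_trend(t_years, series_mm):
    """Least-squares trend in mm/year."""
    return np.polyfit(t_years, series_mm, 1)[0]

def vertical_land_motion(t_years, tide_gauge_mm, absolute_sl_mm):
    """Positive value = uplift: the gauge sees less rise than the open ocean."""
    return linear_trend(t_years, absolute_sl_mm) - linear_trend(t_years, tide_gauge_mm)
```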
Automatic detection of diabetic foot complications with infrared thermography by asymmetric analysis
NASA Astrophysics Data System (ADS)
Liu, Chanjuan; van Netten, Jaap J.; van Baal, Jeff G.; Bus, Sicco A.; van der Heijden, Ferdi
2015-02-01
Early identification of diabetic foot complications and their precursors is essential in preventing their devastating consequences, such as foot infection and amputation. Frequent, automatic risk assessment by an intelligent telemedicine system might be feasible and cost effective. Infrared thermography is a promising modality for such a system. The temperature differences between corresponding areas on contralateral feet are the clinically significant parameters. This asymmetric analysis is hindered by (1) foot segmentation errors, especially when the foot temperature and the ambient temperature are comparable, and by (2) different shapes and sizes between contralateral feet due to deformities or minor amputations. To circumvent the first problem, we used a color image and a thermal image acquired synchronously. Foot regions, detected in the color image, were rigidly registered to the thermal image. This resulted in 97.8%±1.1% sensitivity and 98.4%±0.5% specificity over 76 high-risk diabetic patients with manual annotation as a reference. Nonrigid landmark-based registration with B-splines solved the second problem. Corresponding points in the two feet could be found regardless of the shapes and sizes of the feet. With that, the temperature difference of the left and right feet could be obtained.
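After registration, the clinically significant quantity is the per-pixel contralateral temperature difference. A minimal sketch of that final step (the 2.2 °C cutoff is a commonly cited screening threshold, assumed here rather than taken from this paper):

```python
import numpy as np

def asymmetry_map(left_temp, right_temp_registered, hotspot_dC=2.2):
    """Pixelwise temperature difference and a binary map of suspect regions."""
    diff = left_temp - right_temp_registered   # both in degrees Celsius, same grid
    return diff, np.abs(diff) > hotspot_dC
```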
Detecting primary precursors of January surface air temperature anomalies in China
NASA Astrophysics Data System (ADS)
Tan, Guirong; Ren, Hong-Li; Chen, Haishan; You, Qinglong
2017-12-01
This study aims to detect the primary precursors and impact mechanisms of January surface temperature anomaly (JSTA) events in China against the background of global warming, by comparing the causes of two extreme JSTA events occurring in 2008 and 2011 with the common mechanisms inferred from all typical episodes during 1979-2008. The results show that these two extreme events exhibit atmospheric circulation patterns in the mid-high latitudes of Eurasia, with a positive anomaly center over the Ural Mountains and a negative one to the south of Lake Baikal (UMLB), a pattern quite similar to that for all the typical events. However, the Eurasian teleconnection patterns in the 2011 event, which are accompanied by a negative phase of the North Atlantic Oscillation, differ from those of the typical events and the 2008 event. We further find that a common anomalous signal appearing in early summer over the tropical Indian Ocean may be responsible for the following late-winter Eurasian teleconnections and the associated JSTA events in China. We show that sea surface temperature anomalies (SSTAs) in the preceding summer over the western Indian Ocean (WIO) are intimately related to the UMLB-like circulation pattern in the following January. Positive WIO SSTAs in early summer tend to induce strong UMLB-like circulation anomalies in January, which may result in anomalous or even extreme cold events in China; this relationship can also be successfully reproduced in model experiments. Our results suggest that WIO SSTAs may be a useful precursor for predicting JSTA events in China.
NASA Astrophysics Data System (ADS)
Schrader, Devin L.; Nagashima, Kazuhide; Waitukaitis, Scott R.; Davidson, Jemma; McCoy, Timothy J.; Connolly, Harold C.; Lauretta, Dante S.
2018-02-01
By investigating the in situ chemical and O-isotope compositions of olivine in lightly sintered dust agglomerates from the early Solar System, we constrain their origins and the retention of dust in the protoplanetary disk. The grain sizes of silicates in these agglomeratic olivine (AO) chondrules indicate that the grain sizes of chondrule precursors in the Renazzo-like carbonaceous (CR) chondrites ranged from <1 to 80 μm. We infer this grain size range to be equivalent to the size range for dust in the early Solar System. AO chondrules may contain, but are not solely composed of, recycled fragments of earlier formed chondrules. They also contain 16O-rich olivine related to amoeboid olivine aggregates and represent the best record of chondrule-precursor materials. AO chondrules contain one or more large grains, sometimes similar to FeO-poor (type I) and/or FeO-rich (type II) chondrules, while others contain a type II chondrule core. These morphologies are consistent with particle agglomeration by electrostatic charging of grains during collision, a process that may explain solid agglomeration in the protoplanetary disk in the micrometer size regime. The petrographic, isotopic, and chemical compositions of AO chondrules are consistent with chondrule formation by large-scale shocks, bow shocks, and current sheets. The petrographic, isotopic, and chemical similarities between AO chondrules in CR chondrites and chondrule-like objects from comet 81P/Wild 2 indicate that comets contain AO chondrules. We infer that these AO chondrules likely formed in the inner Solar System and migrated to the comet forming region at least 3 Ma after the formation of the first Solar System solids. Observations made in this study imply that the protoplanetary disk retained a dusty disk at least ∼3.7 Ma after the formation of the first Solar System solids, longer than half of the dusty accretion disks observed around other stars.
NASA Technical Reports Server (NTRS)
Potemra, T. A. (Principal Investigator); Sugiura, M.; Zanettic, L. J.
1982-01-01
Disturbances in the MAGSAT magnetometer data set due to high latitude phenomena were evaluated. Much of the categorization of disturbances due to Birkeland currents, ionospheric Hall currents, fine structure, and wave phenomena was done with the MAGSAT data catalog. A color graphics technique was developed for the display of disturbances from multiple orbits, from which one can infer a 'global image' of the current systems of the auroral zone. The MAGSAT 4/81 magnetic field model appears to represent the Earth's main field at high latitudes very well for the epoch 1980. MAGSAT's low altitude allows analysis of disturbances in the magnetometer data due to ionospheric electrojet currents. These current distributions were modeled properly for single events as a precursor to the inference of the Birkeland current system. MAGSAT's orbit was approximately shared with that of the Navy/APL TRIAD satellite. This allowed space-time studies of the magnetic disturbance signatures to be performed, resulting in approximately 75% agreement in, as well as a high frequency of, signatures due to Birkeland currents. Thus the field-aligned currents are a steady-state participant in the Earth's magnetospheric current system.
Certifying Auto-Generated Flight Code
NASA Technical Reports Server (NTRS)
Denney, Ewen
2008-01-01
Model-based design and automated code generation are being used increasingly at NASA. Many NASA projects now use MathWorks Simulink and Real-Time Workshop for at least some of their modeling and code development. However, there are substantial obstacles to more widespread adoption of code generators in safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. Moreover, the regeneration of code can require complete recertification, which offsets many of the advantages of using a generator. Indeed, manual review of autocode can be more challenging than for hand-written code. Since the direct V&V of code generators is too laborious and complicated due to their complex (and often proprietary) nature, we have developed a generator plug-in to support the certification of the auto-generated code. Specifically, the AutoCert tool supports certification by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews. The generated documentation also contains substantial tracing information, allowing users to trace between model, code, documentation, and V&V artifacts. This enables missions to obtain assurance about the safety and reliability of the code without excessive manual V&V effort and, as a consequence, eases the acceptance of code generators in safety-critical contexts. The generation of explicit certificates and textual reports is particularly well-suited to supporting independent V&V. The primary contribution of this approach is the combination of human-friendly documentation with formal analysis. The key technical idea is to exploit the idiomatic nature of auto-generated code in order to automatically infer logical annotations. The annotation inference algorithm itself is generic, and parametrized with respect to a library of coding patterns that depend on the safety policies and the code generator. The patterns characterize the notions of definitions and uses that are specific to the given safety property. For example, for initialization safety, definitions correspond to variable initializations while uses are statements which read a variable, whereas for array bounds safety, definitions are the array declarations, while uses are statements which access an array variable. The inferred annotations are thus highly dependent on the actual program and the properties being proven. The annotations, themselves, need not be trusted, but are crucial to obtain the automatic formal verification of the safety properties without requiring access to the internals of the code generator. The approach has been applied to both in-house and commercial code generators, but is independent of the particular generator used. It is currently being adapted to flight code generated using MathWorks Real-Time Workshop, an automatic code generator that translates from Simulink/Stateflow models into embedded C code.
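The annotation-inference idea can be miniaturized as follows. AutoCert works on generated code (e.g., C) with a proper pattern library; this Python sketch uses a single illustrative regex pattern for the initialization-safety idiom and an invented annotation syntax.

```python
import re

# Illustrative definition idiom for initialization safety, e.g. "x = 0.0;"
INIT_PATTERN = re.compile(r"^\s*(?P<var>\w+)\s*=\s*[-\d.]+\s*;")

def infer_init_annotations(code_lines):
    """Attach 'init(var)' facts where the definition idiom matches."""
    annotated, initialized = [], set()
    for line in code_lines:
        m = INIT_PATTERN.match(line)
        if m:
            initialized.add(m.group("var"))
            line += f"   /* @annot: init({m.group('var')}) */"
        annotated.append(line)
    return annotated, initialized

code, facts = infer_init_annotations(["x = 0.0;", "y = x + 1;"])
# A verifier would then prove that every use (like 'y = x + 1;') reads only
# variables in `facts`, without trusting the annotations or the generator.
```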
Data base manipulation for assessment of multiresource suitability and land change
NASA Technical Reports Server (NTRS)
Colwell, J.; Sanders, P.; Davis, G.; Thomson, F. (Principal Investigator)
1981-01-01
Progress is reported in three tasks which support the overall objectives of the renewable resources inventory task of the AgRISTARS program. In the first task, the geometric correction algorithms of the Master Data Processor were investigated to determine the utility of data corrected by this processor for U.S. Forest Service uses. The second task involved investigation of logic to form blobs as a precursor step to automatic change detection involving two dates of LANDSAT data. Some routine procedures for selecting BLOB (spatial averaging) parameters were developed. In the third task, a major effort was made to develop land suitability modeling approaches for timber, grazing, and wildlife habitat in support of resource planning efforts on the San Juan National Forest.
Meibom; Desch; Krot; Cuzzi; Petaev; Wilson; Keil
2000-05-05
Chemical zoning patterns in some iron, nickel metal grains from CH carbonaceous chondrites imply formation at temperatures from 1370 to 1270 kelvin by condensation from a solar nebular gas cooling at a rate of approximately 0.2 kelvin per hour. This cooling rate requires a large-scale thermal event in the nebula, in contrast to the localized, transient heating events inferred for chondrule formation. In our model, mass accretion through the protoplanetary disk caused large-scale evaporation of precursor dust near its midplane inside of a few astronomical units. Gas convectively moved from the midplane to cooler regions above it, and the metal grains condensed in these parcels of rising gas.
Amino acid precursors in lunar fines - Limits to the contribution of jet exhaust
NASA Technical Reports Server (NTRS)
Fox, S. W.; Harada, K.; Hare, P. E.
1976-01-01
A sample of lunar fines collected at a maximum distance (6.5 km) from the descent engine on Apollo 17 has been analyzed for total amino acids obtainable by hydrolysis of aqueous extracts. The minimum amounts of amino acids, calculated for a disk of 6 km radius, are 10,000 to 100,000 times those which could be contributed by the lunar module jet exhaust, on the basis of conservatively limiting assumptions. The amino acids thus obtained are not explainable as due to chemical or biological contamination; their source is accordingly inferred to be lunar. Under the conditions of hydrolysis of lunar extracts, cyanide is found to be converted almost exclusively to glycine, to an extent of 0.0001.
Path Models of Vocal Emotion Communication
Bänziger, Tanja; Hosoya, Georg; Scherer, Klaus R.
2015-01-01
We propose to use a comprehensive path model of vocal emotion communication, encompassing encoding, transmission, and decoding processes, to empirically model data sets on emotion expression and recognition. The utility of the approach is demonstrated for two data sets from two different cultures and languages, based on corpora of vocal emotion enactment by professional actors and emotion inference by naïve listeners. Lens model equations, hierarchical regression, and multivariate path analysis are used to compare the relative contributions of objectively measured acoustic cues in the enacted expressions and subjective voice cues as perceived by listeners to the variance in emotion inference from vocal expressions for four emotion families (fear, anger, happiness, and sadness). While the results confirm the central role of arousal in vocal emotion communication, the utility of applying an extended path modeling framework is demonstrated by the identification of unique combinations of distal cues and proximal percepts carrying information about specific emotion families, independent of arousal. The statistical models generated show that more sophisticated acoustic parameters need to be developed to explain the distal underpinnings of subjective voice quality percepts that account for much of the variance in emotion inference, in particular voice instability and roughness. The general approach advocated here, as well as the specific results, open up new research strategies for work in psychology (specifically emotion and social perception research) and engineering and computer science (specifically research and development in the domain of affective computing, particularly on automatic emotion detection and synthetic emotion expression in avatars). PMID:26325076
NASA Astrophysics Data System (ADS)
Zhang, Lili; Merényi, Erzsébet; Grundy, William M.; Young, Eliot F.
2010-07-01
The near-infrared spectra of icy volatiles collected from planetary surfaces can be used to infer surface parameters, which in turn may depend on the recent geologic history. The high dimensionality and complexity of the spectral data, the subtle differences between the spectra, and the highly nonlinear interplay between surface parameters often make it difficult to accurately derive these surface parameters. We use a neural machine, with a Self-Organizing Map (SOM) as its hidden layer, to infer the latent physical parameters, temperature and grain size, from near-infrared spectra of crystalline H2O ice. The output layer of the SOM-hybrid machine is customarily trained with only the output from the SOM winner. We show that this scheme prevents simultaneous achievement of high prediction accuracies for both parameters. We propose an innovative neural architecture we call Conjoined Twins that allows multiple (k) SOM winners to participate in the training of the output layer and in which the customization of k can be limited automatically to a small range. With this novel machine we achieve scientifically useful accuracies, 83.0 ± 2.7% and 100.0 ± 0.0%, for temperature and grain size, respectively, from simulated noiseless spectra. We also show that the performance of the neural model is robust under various noisy conditions. A primary application of this prediction capability is planned for spectra returned from the Pluto-Charon system by New Horizons.
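The k-winner idea can be sketched at prediction time: instead of reading the output layer through the single best-matching unit, the k closest prototypes vote, weighted by spectral similarity. A minimal sketch assuming a trained SOM codebook W and per-unit parameter estimates P (both hypothetical inputs):

```python
import numpy as np

def predict_k_winners(spectrum, W, P, k=3):
    """Distance-weighted average of the parameters attached to the k best units.

    W: (n_units, n_bands) SOM prototype spectra.
    P: (n_units, 2) learned [temperature, grain size] per unit.
    """
    d = np.linalg.norm(W - spectrum, axis=1)   # distance to every prototype
    winners = np.argsort(d)[:k]                # k best-matching units
    weights = 1.0 / (d[winners] + 1e-9)        # closer prototypes count more
    return (P[winners] * weights[:, None]).sum(axis=0) / weights.sum()
```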
Zheng, Guangyong; Xu, Yaochen; Zhang, Xiujun; Liu, Zhi-Ping; Wang, Zhuo; Chen, Luonan; Zhu, Xin-Guang
2016-12-23
A gene regulatory network (GRN) represents interactions of genes inside a cell or tissue, in which vertexes and edges stand for genes and their regulatory interactions, respectively. Reconstruction of gene regulatory networks, in particular genome-scale networks, is essential for comparative exploration of different species and mechanistic investigation of biological processes. Currently, most network inference methods are computationally intensive; they are usually effective for small-scale tasks (e.g., networks with a few hundred genes), but have difficulty constructing GRNs at genome scale. Here, we present a software package for gene regulatory network reconstruction at a genomic level, in which gene interaction is measured by a conditional mutual information measure computed within a parallel computing framework (hence the package is named CMIP). The package is a greatly improved implementation of our previous PCA-CMI algorithm. In CMIP, we provide not only an automatic threshold determination method but also an effective parallel computing framework for network inference. Performance tests on benchmark datasets show that the accuracy of CMIP is comparable to most current network inference methods. Moreover, running tests on synthetic datasets demonstrate that CMIP can handle large datasets, especially genome-wide datasets, within an acceptable time period. In addition, successful application to a real genomic dataset confirms the practical applicability of the package. This new software package provides a powerful tool for genomic network reconstruction for the biological community. The software can be accessed at http://www.picb.ac.cn/CMIP/ .
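The conditional mutual information at the core of PCA-CMI-style pruning has a closed form under a Gaussian assumption, computable from covariance determinants. A minimal single-threaded sketch (CMIP adds the parallel framework and automatic thresholding on top of this):

```python
import numpy as np

def gaussian_cmi(x, y, z):
    """CMI(X;Y|Z) = 0.5 * log(|C(x,z)||C(y,z)| / (|C(z)||C(x,y,z)|)).

    x, y: (n_samples,) expression profiles; z: (n_cond, n_samples) profiles of
    the conditioning genes. Requires n_samples > number of variables involved.
    """
    def cov_det(*rows):
        c = np.cov(np.vstack(rows))
        return float(c) if c.ndim == 0 else np.linalg.det(c)
    num = cov_det(x, *z) * cov_det(y, *z)
    den = cov_det(*z) * cov_det(x, y, *z)
    return 0.5 * np.log(num / den)

# Edge pruning: drop the edge (x, y) when gaussian_cmi(x, y, z) falls below
# a threshold for some set z of shared neighbor genes.
```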
Automatic face naming by learning discriminative affinity matrices from weakly labeled images.
Xiao, Shijie; Xu, Dong; Wu, Jianxin
2015-10-01
Given a collection of images, where each image contains several faces and is associated with a few names in the corresponding caption, the goal of face naming is to infer the correct name for each face. In this paper, we propose two new methods to effectively solve this problem by learning two discriminative affinity matrices from these weakly labeled images. We first propose a new method called regularized low-rank representation by effectively utilizing weakly supervised information to learn a low-rank reconstruction coefficient matrix while exploring multiple subspace structures of the data. Specifically, by introducing a specially designed regularizer to the low-rank representation method, we penalize the corresponding reconstruction coefficients related to the situations where a face is reconstructed by using face images from other subjects or by using itself. With the inferred reconstruction coefficient matrix, a discriminative affinity matrix can be obtained. Moreover, we also develop a new distance metric learning method called ambiguously supervised structural metric learning by using weakly supervised information to seek a discriminative distance metric. Hence, another discriminative affinity matrix can be obtained using the similarity matrix (i.e., the kernel matrix) based on the Mahalanobis distances of the data. Observing that these two affinity matrices contain complementary information, we further combine them to obtain a fused affinity matrix, based on which we develop a new iterative scheme to infer the name of each face. Comprehensive experiments demonstrate the effectiveness of our approach.
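The final iterative naming stage can be sketched compactly: fuse the two affinity matrices, then repeatedly reassign each face the candidate name with the largest accumulated affinity to faces currently holding it. This is a simplified stand-in for the paper's scheme; the fusion weight and update rule are assumptions.

```python
import numpy as np

def name_faces(A1, A2, candidates, alpha=0.5, n_iter=10, seed=0):
    """A1, A2: (n, n) affinity matrices; candidates[i]: non-empty set of name ids."""
    A = alpha * A1 + (1.0 - alpha) * A2          # fuse complementary affinities
    rng = np.random.default_rng(seed)
    names = [rng.choice(sorted(c)) for c in candidates]   # random initial guess
    for _ in range(n_iter):
        for i, allowed in enumerate(candidates):
            scores = {m: sum(A[i, j] for j, nm in enumerate(names)
                             if j != i and nm == m) for m in allowed}
            names[i] = max(scores, key=scores.get)        # best-supported name
    return names
```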
Bogialli, Sara; Bortolini, Claudio; Di Gangi, Iole Maria; Di Gregorio, Federica Nigro; Lucentini, Luca; Favaro, Gabriella; Pastore, Paolo
2017-08-01
A comprehensive risk management of human exposure to cyanotoxins, whose production is currently unpredictable, is limited by the availability of reliable analytical tools for monitoring as many toxic algal metabolites as possible. Two analytical approaches based on an LC-QTOF system for target analysis and suspect screening of cyanotoxins in freshwater are presented. A database with 369 compounds belonging to cyanobacterial metabolites was developed and used for a retrospective data analysis based on high-resolution mass spectrometry (HRMS). HRMS fragmentation of the suspect cyanotoxin precursor ions was subsequently performed to correctly identify the specific variants. Alternatively, an automatic tandem HRMS analysis tailored for cyanotoxins was performed in a single chromatographic run, using the developed database as a preferred precursor-ion list. Twenty-five extracts of surface and drinking waters contaminated by cyanobacteria were processed. The identification of seven uncommon microcystins (M(O)R, MC-FR, MSer7-YR, D-Asp3MSer7-LR, MSer7-LR, dmAdda-LR, and dmAdda-YR) and 6 anabaenopeptins (A, B, F, MM850, MM864, oscyllamide Y) is reported. Several isobaric variants, fully separated by chromatography, were pointed out. The developed methods are proposed for use by environmental and health agencies to strengthen the surveillance monitoring of cyanotoxins in water. Copyright © 2017 Elsevier B.V. All rights reserved.
Bayesian classification theory
NASA Technical Reports Server (NTRS)
Hanson, Robin; Stutz, John; Cheeseman, Peter
1991-01-01
The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework and using various mathematical and algorithmic approximations, the AutoClass system searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit or share model parameters though a class hierarchy. We summarize the mathematical foundations of AutoClass.
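The "automatically choosing the number of classes" step can be imitated with a scored model search. The sketch below uses BIC over Gaussian mixtures as a stand-in for AutoClass's Bayesian posterior search; the data are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.5, size=(100, 2)) for m in (0.0, 3.0, 6.0)])

models = [GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in range(1, 8)]
best = min(models, key=lambda m: m.bic(X))   # lowest BIC ~ most probable model
labels = best.predict(X)                     # discovered class assignments
print(best.n_components)                     # recovers 3 classes here
```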
Spatial and temporal variations in lagoon and coastal processes of the southern Brazilian coast
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Herz, R.
1980-01-01
From a collection of information gathered over a long period by the SKYLAB and LANDSAT orbital platforms, it was possible to establish a method for the systematic study of the dynamical regime of lagoon and marine surface waters on the coastal plain of Rio Grande do Sul. The series of multispectral images, analyzed by visual and automatic techniques, revealed spatial and temporal variations reflected in the optical properties of the waters, which carry different loads of suspended material. The identified patterns offer a synoptic picture of phenomena of great amplitude, from which circulation trends can be inferred by correlating the atmospheric and hydrologic variables measured simultaneously with the overflights of the orbital vehicles.
GRIL: genome rearrangement and inversion locator.
Darling, Aaron E; Mau, Bob; Blattner, Frederick R; Perna, Nicole T
2004-01-01
GRIL is a tool to automatically identify collinear regions in a set of bacterial-size genome sequences. GRIL uses three basic steps. First, regions of high sequence identity are located. Second, some of these regions are filtered based on user-specified criteria. Finally, the remaining regions of sequence identity are used to define significant collinear regions among the sequences. By locating collinear regions of sequence, GRIL provides a basis for multiple genome alignment using current alignment systems. GRIL also provides a basis for using current inversion distance tools to infer phylogeny. GRIL is implemented in C++ and runs on any x86-based Linux or Windows platform. It is available from http://asap.ahabs.wisc.edu/gril
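GRIL's filter-then-chain logic can be sketched abstractly; the match representation and criteria below are illustrative assumptions, not GRIL's actual data structures.

```python
def collinear_regions(matches, min_len=100, min_identity=0.9):
    """matches: (posA, posB, length, identity) anchor hits between two genomes."""
    # Step 2: filter matches on user-specified criteria.
    kept = sorted((m for m in matches if m[2] >= min_len and m[3] >= min_identity),
                  key=lambda m: m[0])          # order along genome A
    # Step 3: chain runs of anchors whose order agrees in both genomes.
    regions, run = [], []
    for m in kept:
        if run and m[1] <= run[-1][1]:         # order breaks: rearrangement boundary
            regions.append(run)
            run = []
        run.append(m)
    if run:
        regions.append(run)
    return regions        # input for multiple aligners or inversion-distance tools
```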
Recognition and inference of crevice processing on digitized paintings
NASA Astrophysics Data System (ADS)
Karuppiah, S. P.; Srivatsa, S. K.
2013-03-01
This paper addresses the detection and removal of cracks on digitized paintings. Cracks are detected by thresholding. Afterwards, thin dark brush strokes that have been misidentified as cracks are removed using a median radial basis function neural network on hue and saturation data and a semi-automatic procedure based on region growing. Finally, cracks are filled using a Wiener filter. The method identifies and removes most of the cracks on digitized paintings, with an improvement rate of about 90%. The approach applies not only to digitized paintings but also to medical images and BMP images. The method is implemented in MATLAB.
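The detection step can be sketched with a standard morphological operation; a black top-hat responds strongly to thin dark structures like cracks (the structuring-element size and threshold here are illustrative, not the paper's values).

```python
import numpy as np
from scipy import ndimage

def detect_cracks(gray, size=5, thresh=0.15):
    """Binary crack map from a grayscale painting image scaled to [0, 1]."""
    closed = ndimage.grey_closing(gray, size=(size, size))
    black_tophat = closed - gray        # thin dark cracks give a large response
    return black_tophat > thresh
```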
Decoding cell signalling and regulation of oligodendrocyte differentiation.
Santos, A K; Vieira, M S; Vasconcellos, R; Goulart, V A M; Kihara, A H; Resende, R R
2018-05-22
Oligodendrocytes are fundamental for the functioning of the nervous system; they participate in several cellular processes, including axonal myelination and metabolic maintenance for astrocytes and neurons. In the mammalian nervous system, they are produced through waves of proliferation and differentiation, which occur during embryogenesis. However, oligodendrocytes and their precursors continue to be generated during adulthood from specific niches of stem cells that were not recruited during development. Deficiencies in the formation and maturation of these cells can generate pathologies mainly related to myelination. Understanding the mechanisms involved in oligodendrocyte development, from the precursor to the mature cell, will allow researchers to devise therapies and treatments for the associated pathologies and disorders. Such mechanisms include cell signalling pathways that involve many growth factors, small metabolic molecules, non-coding RNAs, and transcription factors, as well as specific elements of the extracellular matrix, which act in a coordinated temporal and spatial manner according to a given stimulus. Deciphering those aspects will allow researchers to replicate them in vitro in a controlled environment and thus mimic oligodendrocyte maturation, in order to understand the role of oligodendrocytes in myelination in pathologies and under normal conditions. In this study, we review these aspects, based on the most recent in vivo and in vitro data on oligodendrocyte generation and differentiation. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Okamoto, S.; Tanimoto, H.; Hirota, N.; Ikeda, K.; Akimoto, H.
2017-12-01
During the past decades, springtime ozone concentrations in the downwind regions of East Asia have rapidly increased along with anthropogenic emissions. However, several recent studies based on the analysis of satellite tropospheric nitrogen dioxide data have suggested that nitrogen oxide emissions in China may have peaked. In addition to precursor emissions, climate plays an important role in controlling the variations and distributions of tropospheric ozone. Here we revisited and updated the long-term trend of tropospheric ozone at Mt. Happo, Japan, for the period from 1998 to 2016. Since 1998 the springtime ozone concentration showed a large increase until 2007, very likely caused by the increase in emissions of ozone precursors associated with economic growth in eastern China, as evidenced by satellite observations of nitrogen dioxide. After the monotonic increase until 2007, the ozone level flattened, following a substantial drop in 2008. Recent low ozone levels were largely influenced by the decrease of anthropogenic emissions from eastern China. We also found that the efficiency of long-range transport from central eastern China, driven by North Pacific climate, plays a role in modulating the year-to-year variations of ozone at Mt. Happo.
NASA Astrophysics Data System (ADS)
Zuanetti, Bryan; McGrane, Shawn D.; Bolme, Cynthia A.; Prakash, Vikas
2018-05-01
This article presents results from laser-driven shock compression experiments performed on pre-heated pure aluminum films at temperatures ranging from 23 to 400 °C. The samples were vapor deposited on the surface of a 500 μm thick sapphire substrate and mounted onto a custom holder with an integrated ring-heater to enable variable initial temperature conditions. A chirped pulse amplified laser was used to generate a pulse for both shocking the films and for probing the free surface velocity using Ultrafast Dynamic Ellipsometry. The particle velocity traces measured at the free surface clearly show elastic and plastic wave separation, which was used to estimate the decay of the elastic precursor amplitude over propagation distances ranging from 0.278 to 4.595 μm. Elastic precursors (which also correspond to dynamic material strength under uniaxial strain) of increasing amplitudes were observed with increasing initial sample temperatures for all propagation distances, which is consistent with expectations for aluminum in a deformation regime where phonon drag limits the mobility of dislocations. The experimental results show peak elastic amplitudes corresponding to axial stresses of over 7.5 GPa; estimates for plastic strain-rates in the samples are of the order of 10⁹/s. The measured elastic amplitudes at the micron length scales are compared with those at the millimeter length-scales using a two-parameter model and used to correlate the rate sensitivity of the dynamic strength at strain-rates ranging from 10³ to 10⁹/s and elevated temperature conditions. The overall trend, as inferred from the experimental data, indicates that the temperature-strengthening effect decreases with increasing plastic strain-rates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuanetti, Bryan; McGrane, Shawn David; Bolme, Cynthia Anne
Here, this article presents results from laser-driven shock compression experiments performed on pre-heated pure aluminum films at temperatures ranging from 23 to 400 °C. The samples were vapor deposited on the surface of a 500 μm thick sapphire substrate and mounted onto a custom holder with an integrated ring-heater to enable variable initial temperature conditions. A chirped pulse amplified laser was used to generate a pulse for both shocking the films and for probing the free surface velocity using Ultrafast Dynamic Ellipsometry. The particle velocity traces measured at the free surface clearly show elastic and plastic wave separation, which was used to estimate the decay of the elastic precursor amplitude over propagation distances ranging from 0.278 to 4.595 μm. Elastic precursors (which also correspond to dynamic material strength under uniaxial strain) of increasing amplitudes were observed with increasing initial sample temperatures for all propagation distances, which is consistent with expectations for aluminum in a deformation regime where phonon drag limits the mobility of dislocations. The experimental results show peak elastic amplitudes corresponding to axial stresses of over 7.5 GPa; estimates for plastic strain-rates in the samples are of the order 10⁹/s. The measured elastic amplitudes at the micron length scales are compared with those at the millimeter length scales using a two-parameter model and used to correlate the rate sensitivity of the dynamic strength at strain-rates ranging from 10³ to 10⁹/s and elevated temperature conditions. The overall trend, as inferred from the experimental data, indicates that the temperature-strengthening effect decreases with increasing plastic strain-rates.
Diagnostic accuracy of automatic normalization of CBV in glioma grading using T1-weighted DCE-MRI.
Sahoo, Prativa; Gupta, Rakesh K; Gupta, Pradeep K; Awasthi, Ashish; Pandey, Chandra M; Gupta, Mudit; Patir, Rana; Vaishya, Sandeep; Ahlawat, Sunita; Saha, Indrajit
2017-12-01
The aim of this retrospective study was to compare the diagnostic accuracy of the proposed automatic normalization method for quantifying relative cerebral blood volume (rCBV) with the existing contralateral region-of-interest (ROI) based CBV normalization method for glioma grading using T1-weighted dynamic contrast-enhanced MRI (DCE-MRI). Sixty patients with histologically confirmed gliomas were included. CBV maps were generated using T1-weighted DCE-MRI and normalized by the contralateral-ROI method (rCBV_contra), by unaffected white matter (rCBV_WM), and by unaffected gray matter (rCBV_GM); the latter two were generated automatically. An expert radiologist with more than 10 years of experience in DCE-MRI and a non-expert user with one year of experience measured rCBVs independently. Cutoff values for glioma grading were determined from ROC analysis. Agreement of histology with rCBV_WM, rCBV_GM and rCBV_contra was studied using kappa statistics and the intra-class correlation coefficient (ICC). The diagnostic accuracy of glioma grading using rCBV_contra measured by the expert radiologist was high (sensitivity = 1.00, specificity = 0.96, p < 0.001) compared to the non-expert user (sensitivity = 0.65, specificity = 0.78, p < 0.001). On the other hand, both the expert and non-expert user showed similar diagnostic accuracy for the automatic rCBV_WM (sensitivity = 0.89, specificity = 0.87, p = 0.001) and rCBV_GM (sensitivity = 0.81, specificity = 0.78, p = 0.001) measures. Further, the contralateral-based method used by the expert showed the highest agreement with histological tumor grading (kappa = 0.96, agreement 98.33%, p < 0.001), whereas the automatic normalization method showed the same percentage of agreement for both expert and non-expert users. rCBV_WM showed an agreement of 88.33% (kappa = 0.76, p < 0.001) with histopathological grading. It was inferred from this study that, in the absence of an expert user, automated normalization of CBV using the proposed method could provide better diagnostic accuracy than the manual contralateral-based approach. Copyright © 2017 Elsevier Inc. All rights reserved.
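The normalization at the heart of both approaches is a ratio of tumour CBV to a reference CBV. A minimal sketch of the automatic white-matter variant on toy arrays; the masks, map values, and the use of the tumour maximum here are illustrative assumptions, not taken from the paper:

```python
# Toy illustration of automatic rCBV normalization; all values are synthetic.
import numpy as np

rng = np.random.default_rng(0)
cbv = rng.random((64, 64)) * 4.0                     # toy CBV map
tumour_mask = np.zeros((64, 64), bool)
tumour_mask[20:30, 20:30] = True                     # hypothetical tumour ROI
wm_mask = np.zeros((64, 64), bool)
wm_mask[40:60, 40:60] = True                         # hypothetical unaffected WM

rcbv_wm = cbv[tumour_mask].max() / cbv[wm_mask].mean()  # automatic rCBV_WM
print(round(float(rcbv_wm), 2))
```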
Object-oriented approach to the automatic segmentation of bones from pediatric hand radiographs
NASA Astrophysics Data System (ADS)
Shim, Hyeonjoon; Liu, Brent J.; Taira, Ricky K.; Hall, Theodore R.
1997-04-01
The purpose of this paper is to develop a robust and accurate method that automatically segments phalangeal and epiphyseal bones from digital pediatric hand radiographs exhibiting various stages of growth. The development of this system draws principles from object-oriented design, model-guided analysis, and feedback control. A system architecture called 'the object segmentation machine' was implemented incorporating these design philosophies. The system is aided by a knowledge base where all model contours and other information, such as age, race, and sex, are stored. These models include object structure models, shape models, 1-D wrist profiles, and gray-level histogram models. Shape analysis is performed first by using an arc-length orientation transform to break down a given contour into elementary segments and curves. Then an interpretation tree is used as an inference engine to map known model contour segments to data contour segments obtained from the transform. Spatial and anatomical relationships among contour segments act as constraints from the shape model. These constraints aid in generating a list of candidate matches. The candidate match with the highest confidence is chosen as the current intermediate result. Verification of intermediate results is performed by a feedback control loop.
Evaluation of arterial propagation velocity based on the automated analysis of the Pulse Wave Shape
NASA Astrophysics Data System (ADS)
Clara, F. M.; Scandurra, A. G.; Meschino, G. J.; Passoni, L. I.
2011-12-01
This paper proposes the automatic estimation of the arterial propagation velocity from raw pulse wave records measured in the region of the radial artery. A fully automatic process is proposed to select and analyze typical pulse cycles from the raw data. An adaptive neuro-fuzzy inference system, together with a heuristic search, is used to find a functional approximation of the pulse wave. The estimation of the propagation velocity is carried out via analysis of the functional approximation obtained with the fuzzy model. Analysis of the pulse wave records with the proposed methodology showed only small differences compared with the previously used method, which relied on strong interaction with the user. To evaluate the proposed methodology, we estimated the propagation velocity in a population of healthy men spanning a wide range of ages. These studies found that propagation velocity increases linearly with age and shows considerable dispersion among healthy individuals. We conclude that this process could be used to indirectly evaluate the propagation velocity of the aorta, which is related to physiological age in healthy individuals and to life expectancy in cardiovascular patients.
NASA Technical Reports Server (NTRS)
Haley, Paul
1991-01-01
The C Language Integrated Production System (CLIPS) cannot effectively perform sound and complete logical inference in most real-world contexts. The problem facing CLIPS is its lack of goal generation. Without automatic goal generation and maintenance, forward chaining can only deduce all instances of a relationship. Backward chaining, which requires goal generation, allows deduction of only that subset of what is logically true which is also relevant to ongoing problem solving. Goal generation can be mimicked in simple cases using forward chaining. However, such mimicry requires manual coding of additional rules, which assert an inadequate goal representation, for every condition in every rule that can have corresponding facts derived by backward chaining. In general, for N rules with an average of M conditions per rule, the number of goal-generation rules required is on the order of N*M. This is clearly intractable from a program maintenance perspective. We describe Eclipse's support for backward chaining over goals, which it asserts automatically as it checks rule conditions. Important characteristics of this extension are that it does not assert goals which cannot match any rule conditions, that two equivalent goals are never asserted, and that goals persist as long as, but no longer than, they remain relevant.
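A minimal sketch, in Python rather than CLIPS or Eclipse, of the automatic goal-assertion behavior described above: a goal is asserted only when a condition could be derived by backward chaining, and never twice. The Engine class, rule names, and facts are illustrative inventions, not the Eclipse API.

```python
# Illustrative forward-chaining matcher that asserts backward-chaining goals
# while testing rule conditions; not the Eclipse implementation.

class Engine:
    def __init__(self):
        self.facts = set()
        self.goals = set()
        self.backward_rules = {}      # goal pattern -> derivation function

    def check_condition(self, pattern):
        if pattern in self.facts:
            return True
        if pattern in self.backward_rules and pattern not in self.goals:
            self.goals.add(pattern)   # equivalent goals are asserted only once
            if self.backward_rules[pattern]():
                self.facts.add(pattern)
                return True
        return False

eng = Engine()
eng.facts |= {"parent(tom, bob)", "parent(bob, ann)"}
eng.backward_rules["grandparent(tom, ann)"] = lambda: (
    "parent(tom, bob)" in eng.facts and "parent(bob, ann)" in eng.facts)
print(eng.check_condition("grandparent(tom, ann)"))  # True: goal asserted, then derived
```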
Automated multi-day tracking of marked mice for the analysis of social behaviour.
Ohayon, Shay; Avni, Ofer; Taylor, Adam L; Perona, Pietro; Roian Egnor, S E
2013-09-30
A quantitative description of animal social behaviour is informative for behavioural biologists and clinicians developing drugs to treat social disorders. Social interaction in a group of animals has been difficult to measure because behaviour develops over long periods of time and requires tedious manual scoring, which is subjective and often non-reproducible. Computer-vision systems with the ability to measure complex social behaviour automatically would have a transformative impact on biology. Here, we present a method for tracking group-housed mice individually as they freely interact over multiple days. Each mouse is bleach-marked with a unique fur pattern. The patterns are automatically learned by the tracking software and used to infer identities. Trajectories are analysed to measure behaviour as it develops over days, beyond the range of acute experiments. We demonstrate how our system may be used to study the development of place preferences, associations and social relationships by tracking four mice continuously for five days. Our system enables accurate and reproducible characterisation of wild-type mouse social behaviour and paves the way for high-throughput long-term observation of the effects of genetic, pharmacological and environmental manipulations. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Mallast, U.; Gloaguen, R.; Geyer, S.; Rödiger, T.; Siebert, C.
2011-08-01
In this paper we present a semi-automatic method to infer groundwater flow-paths based on the extraction of lineaments from digital elevation models. This method is especially suitable in remote and inaccessible areas where in-situ data are scarce. The combined method of linear filtering and object-based classification provides a lineament map with a high degree of accuracy. Subsequently, lineaments are differentiated into geological and morphological lineaments using auxiliary information and finally evaluated in terms of hydro-geological significance. Using the example of the western catchment of the Dead Sea (Israel/Palestine), the orientation and location of the differentiated lineaments are compared to characteristics of known structural features. We demonstrate that a strong correlation between lineaments and structural features exists. Using Euclidean distances between lineaments and wells provides an assessment criterion to evaluate the hydraulic significance of detected lineaments. Based on this analysis, we suggest that the statistical analysis of lineaments allows a delineation of flow-paths and thus significant information on groundwater movements. To validate the flow-paths we compare them to existing results of groundwater models that are based on well data.
Data Provenance as a Tool for Debugging Hydrological Models based on Python
NASA Astrophysics Data System (ADS)
Wombacher, A.; Huq, M.; Wada, Y.; Van Beek, R.
2012-12-01
There is an increase in the data volume used in hydrological modeling. The increasing data volume requires additional effort in debugging models, since a single output value is influenced by a multitude of input values; it is thus difficult to keep an overview of the data dependencies. Further, even when these dependencies are known, inferring all the relevant data values is a tedious job. These data dependencies are also known as data provenance, i.e. the determination of how a particular value has been created and processed. The proposed tool infers the data provenance automatically from a Python script and visualizes the dependencies as a graph without executing the script. To debug the model, the user specifies the value of interest in space and time; the tool infers all related data values and displays them in the graph. The tool has been evaluated by hydrologists developing a model for estimating the global water demand [1]. The model uses multiple different data sources. The script we analysed has 120 lines of code and used more than 3000 individual files, each of them representing a raster map of 360*720 cells. After importing the data of the files into a SQLite database, the data consume around 40 GB of storage. Using the proposed tool, a modeler is able to select individual values and infer which values have been used to calculate them. Especially in cases of outliers or missing values, it is a beneficial tool that provides the modeler with efficient information to investigate the unexpected behavior of the model. The proposed tool can be applied to many Python scripts and has been tested with other scripts in different contexts. In case a Python script contains an unknown function or class, the tool requests additional information about that function or class to enable the inference. This information has to be entered only once and can be shared with colleagues or in the community. Reference [1] Y. Wada, L. P. H. van Beek, D. Viviroli, H. H. Dürr, R. Weingartner, and M. F. P. Bierkens, "Global monthly water stress: II. water demand and severity of water," Water Resources Research, vol. 47, 2011.
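Provenance extraction of this kind can be prototyped with Python's own ast module, which parses a script without executing it. A minimal sketch assuming plain name-to-name assignments; the actual tool handles far more of the language:

```python
# Infer variable-level data dependencies from source code, without execution.
import ast

def dependencies(source):
    """Map each assigned variable to the names its expression reads."""
    deps = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            reads = {n.id for n in ast.walk(node.value) if isinstance(n, ast.Name)}
            for target in node.targets:
                if isinstance(target, ast.Name):
                    deps[target.id] = reads
    return deps

script = "demand = population * per_capita\nstress = demand / availability\n"
print(dependencies(script))
# e.g. {'demand': {'population', 'per_capita'}, 'stress': {'demand', 'availability'}}
```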
Final results of the PERSEE experiment
NASA Astrophysics Data System (ADS)
Le Duigou, J. M.; Lozi, J.; Cassaing, F.; Houairi, K.; Sorrente, B.; Montri, J.; Jacquinod, S.; Reess, J.-M.; Pham, L.; Lhome, E.; Buey, T.; Hénault, F.; Marcotto, A.; Girard, P.; Mauclert, N.; Barillot, M.; Coudé du Foresto, V.; Ollivier, M.
2012-07-01
The PERSEE breadboard, developed by a consortium including CNES, IAS, LESIA, OCA, ONERA and TAS since 2005, is a nulling demonstrator that couples an infrared nulling interferometer with a formation flying simulator able to introduce realistic disturbances in the set-up. The general idea is to prove that an adequate optical design can considerably relax the constraints applying at the spacecraft level of a future interferometric space mission like Darwin/TPF or one of its precursors. The breadboard is now fully operational and the measurement sequences are managed from a remote control room using automatic procedures. A set of excellent results was obtained in 2011. The measured polychromatic nulling depth with non-polarized light is 8.8 × 10⁻⁶, stabilized at 9 × 10⁻⁸, in the 1.65-2.45 μm spectral band (37% bandwidth) during 100 s. This result was extended to a 7 h duration thanks to an automatic calibration process. The various contributors are identified and the nulling budget is now well mastered. We also proved that harmonic disturbances in the 1-100 Hz range of up to several tens of nm rms can be very efficiently corrected by Linear Quadratic Gaussian (LQG) control if a sufficient flux is available. These results are important contributions to the feasibility of a future space-based nulling interferometer.
Final results of the PERSEE experiment
NASA Astrophysics Data System (ADS)
Le Duigou, J.-M.; Lozi, J.; Cassaing, F.; Houairi, K.; Sorrente, B.; Montri, J.; Jacquinod, S.; Réess, J.-M.; Pham, L.; Lhomé, E.; Buey, T.; Hénault, F.; Marcotto, A.; Girard, P.; Mauclert, N.; Barillot, M.; Coudé du Foresto, V.; Ollivier, M.
2017-11-01
The PERSEE breadboard, developed by a consortium including CNES, IAS, LESIA, OCA, ONERA and TAS since 2006, is a nulling demonstrator that couples an infrared nulling interferometer with a formation flying simulator able to introduce realistic disturbances in the set-up. The general idea is to prove that an adequate optical design can considerably relax the constraints applied at the spacecraft level of a future interferometric space mission like Darwin/TPF or one of its precursors. The breadboard is now fully operational and the measurement sequences are managed from a remote control room using automatic procedures. A set of excellent results was obtained in 2011: the measured polychromatic nulling depth with non-polarized light is 8.8 × 10⁻⁶, stabilized at 9 × 10⁻⁸, in the [1.65-2.45] μm spectral band (37% bandwidth) during 100 s. This result was extended to a 7 h duration thanks to an automatic calibration process. The various contributors are identified and the nulling budget is now well mastered. We also proved that harmonic disturbances in the 1-100 Hz range of up to several tens of nm rms can be very efficiently corrected by Linear Quadratic Gaussian (LQG) control if a sufficient flux is available. These results are important contributions to the feasibility of a future space-based nulling interferometer.
Liu, Chanjuan; van Netten, Jaap J; van Baal, Jeff G; Bus, Sicco A; van der Heijden, Ferdi
2015-02-01
Early identification of diabetic foot complications and their precursors is essential in preventing their devastating consequences, such as foot infection and amputation. Frequent, automatic risk assessment by an intelligent telemedicine system might be feasible and cost effective. Infrared thermography is a promising modality for such a system. The temperature differences between corresponding areas on contralateral feet are the clinically significant parameters. This asymmetric analysis is hindered by (1) foot segmentation errors, especially when the foot temperature and the ambient temperature are comparable, and by (2) different shapes and sizes between contralateral feet due to deformities or minor amputations. To circumvent the first problem, we used a color image and a thermal image acquired synchronously. Foot regions, detected in the color image, were rigidly registered to the thermal image. This resulted in 97.8% ± 1.1% sensitivity and 98.4% ± 0.5% specificity over 76 high-risk diabetic patients with manual annotation as a reference. Nonrigid landmark-based registration with B-splines solved the second problem. Corresponding points in the two feet could be found regardless of the shapes and sizes of the feet. With that, the temperature difference between the left and right feet could be obtained. © 2015 Society of Photo-Optical Instrumentation Engineers (SPIE)
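The rigid step of such a pipeline can be illustrated with a least-squares (Kabsch-style) landmark alignment. This is a generic sketch on synthetic landmarks, not the registration code used in the study:

```python
# Rigid (rotation + translation) landmark registration via SVD (Kabsch).
import numpy as np

def rigid_register(src, dst):
    """Least-squares R and t such that dst ~ src @ R.T + t."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:                       # avoid reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = dst.mean(0) - src.mean(0) @ R.T
    return R, t

src = np.array([[0, 0], [2, 0], [2, 1], [0, 1]], float)   # synthetic landmarks
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
dst = src @ R_true.T + np.array([1.0, -2.0])
R, t = rigid_register(src, dst)
print(np.allclose(R, R_true), np.allclose(src @ R.T + t, dst))  # True True
```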
A Bayesian state-space approach for damage detection and classification
NASA Astrophysics Data System (ADS)
Dzunic, Zoran; Chen, Justin G.; Mobahi, Hossein; Büyüköztürk, Oral; Fisher, John W.
2017-11-01
The problem of automatic damage detection in civil structures is complex and requires a system that can interpret collected sensor data into meaningful information. We apply our recently developed switching Bayesian model for dependency analysis to the problems of damage detection and classification. The model relies on a state-space approach that accounts for noisy measurement processes and missing data, and also infers the statistical temporal dependency between measurement locations, signifying the potential flow of information within the structure. A Gibbs sampling algorithm is used to simultaneously infer the latent states, the parameters of the state dynamics, the dependence graph, and any changes in behavior. By employing a fully Bayesian approach, we are able to characterize uncertainty in these variables via their posterior distribution and provide probabilistic estimates of the occurrence of damage or a specific damage scenario. We also implement a single-class classification method, which is more realistic for most real-world situations, where training data for a damaged structure are not available. We demonstrate the methodology with experimental test data from a laboratory model structure and accelerometer data from a real-world structure under different environmental and excitation conditions.
An ERP investigation of conditional reasoning with emotional and neutral contents.
Blanchette, Isabelle; El-Deredy, Wael
2014-11-01
In two experiments we investigate conditional reasoning using event-related potentials (ERPs). Our goal was to examine the time course of inference making in two conditional forms, one logically valid (Modus Ponens, MP) and one logically invalid (Affirming the Consequent, AC). We focus particularly on the involvement of semantically-based inferential processes potentially marked by modulations of the N400. We also compared reasoning about emotional and neutral contents with separate sets of stimuli of differing linguistic complexity across the two experiments. Both MP and AC modulated the N400 component, suggesting the involvement of a semantically-based inferential mechanism common across different logical forms, content types, and linguistic features of the problems. Emotion did not have an effect on early components, and did not interact with components related to inference making. There was a main effect of emotion in the 800-1050 ms time window, consistent with an effect on sustained attention. The results suggest that conditional reasoning is not a purely formal process but that it importantly implicates semantic processing, and that the effect of emotion on reasoning does not primarily operate through a modulation of early automatic stages of information processing. Copyright © 2014 Elsevier Inc. All rights reserved.
Phylo.io: Interactive Viewing and Comparison of Large Phylogenetic Trees on the Web.
Robinson, Oscar; Dylus, David; Dessimoz, Christophe
2016-08-01
Phylogenetic trees are pervasively used to depict evolutionary relationships. Increasingly, researchers need to visualize large trees and compare multiple large trees inferred for the same set of taxa (reflecting uncertainty in the tree inference or genuine discordance among the loci analyzed). Existing tree visualization tools are however not well suited to these tasks. In particular, side-by-side comparison of trees can prove challenging beyond a few dozen taxa. Here, we introduce Phylo.io, a web application to visualize and compare phylogenetic trees side-by-side. Its distinctive features are: highlighting of similarities and differences between two trees, automatic identification of the best matching rooting and leaf order, scalability to large trees, high usability, multiplatform support via standard HTML5 implementation, and possibility to store and share visualizations. The tool can be freely accessed at http://phylo.io and can easily be embedded in other web servers. The code for the associated JavaScript library is available at https://github.com/DessimozLab/phylo-io under an MIT open source license. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Event-Based Media Enrichment Using an Adaptive Probabilistic Hypergraph Model.
Liu, Xueliang; Wang, Meng; Yin, Bao-Cai; Huet, Benoit; Li, Xuelong
2015-11-01
Nowadays, with the continual development of digital capture technologies and social media services, a vast number of media documents are captured and shared online to help attendees record their experience during events. In this paper, we present a method combining semantic inference and multimodal analysis for automatically finding media content to illustrate events using an adaptive probabilistic hypergraph model. In this model, media items are taken as vertices in the weighted hypergraph and the task of enriching media to illustrate events is formulated as a ranking problem. In our method, each hyperedge is constructed using the K-nearest neighbors of a given media document. We also employ a probabilistic representation, which assigns each vertex to a hyperedge in a probabilistic way, to further exploit the correlation among media data. Furthermore, we optimize the hypergraph weights in a regularization framework, which is solved as a second-order cone problem. The approach is initiated by seed media and then used to rank the media documents using a transductive inference process. The results obtained from validating the approach on an event dataset collected from EventMedia demonstrate the effectiveness of the proposed approach.
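The first step of the construction, K-nearest-neighbour hyperedges with probabilistic vertex membership, can be sketched directly. A toy illustration on random features; the regularized weight learning (the second-order cone problem) and the transductive ranking are not reproduced:

```python
# Build a probabilistic hypergraph incidence matrix from KNN neighbourhoods.
import numpy as np

def probabilistic_incidence(features, k=3, sigma=1.0):
    n = len(features)
    d = np.linalg.norm(features[:, None] - features[None, :], axis=2)
    H = np.zeros((n, n))                     # rows: vertices, cols: hyperedges
    for e in range(n):
        nbrs = np.argsort(d[e])[:k + 1]      # item e plus its K nearest neighbours
        H[nbrs, e] = np.exp(-d[e, nbrs] ** 2 / sigma ** 2)  # soft membership
    return H

X = np.random.rand(6, 4)                     # 6 toy media items, 4-dim features
print(probabilistic_incidence(X).round(2))
```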
Supervised Gamma Process Poisson Factorization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dylan Zachary
This thesis develops the supervised gamma process Poisson factorization (S-GPPF) framework, a novel supervised topic model for joint modeling of count matrices and document labels. S-GPPF is fully generative and nonparametric: document labels and count matrices are modeled under a unified probabilistic framework and the number of latent topics is controlled automatically via a gamma process prior. The framework provides for multi-class classification of documents using a generative max-margin classifier. Several recent data augmentation techniques are leveraged to provide for exact inference using a Gibbs sampling scheme. The first portion of this thesis reviews supervised topic modeling and several key mathematical devices used in the formulation of S-GPPF. The thesis then introduces the S-GPPF generative model and derives the conditional posterior distributions of the latent variables for posterior inference via Gibbs sampling. The S-GPPF is shown to exhibit state-of-the-art performance for joint topic modeling and document classification on a dataset of conference abstracts, beating out competing supervised topic models. The unique properties of S-GPPF along with its competitive performance make it a novel contribution to supervised topic modeling.
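The unsupervised backbone of the model, gamma-Poisson matrix factorization, is compact to write down generatively. A minimal sketch with illustrative shape and scale parameters; the gamma-process prior and the max-margin classifier are omitted:

```python
# Generative sketch of gamma-Poisson factorization of a document-word matrix.
import numpy as np

rng = np.random.default_rng(0)
docs, words, topics = 8, 20, 3
theta = rng.gamma(shape=0.5, scale=1.0, size=(docs, topics))   # doc-topic rates
phi = rng.gamma(shape=0.5, scale=1.0, size=(topics, words))    # topic-word rates
counts = rng.poisson(theta @ phi)            # observed count matrix
print(counts.shape, counts.sum())
```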
Automatic evaluation of skin histopathological images for melanocytic features
NASA Astrophysics Data System (ADS)
Koosha, Mohaddeseh; Hoseini Alinodehi, S. Pourya; Nicolescu, Mircea; Safaei Naraghi, Zahra
2017-03-01
Successfully detecting melanocyte cells in the skin epidermis has great significance in skin histopathology. Because of the existence of cells with an appearance similar to melanocytes in hematoxylin and eosin (HE) images of the epidermis, detecting melanocytes is a challenging task. This paper proposes a novel technique for the detection of melanocytes in HE images of the epidermis, based on melanocyte color features in the HSI color domain. Initially, an effective soft morphological filter is applied to the HE images in the HSI color domain to remove noise. Then a novel threshold-based technique is applied to distinguish the candidate melanocyte nuclei. Similarly, the method is applied to find the candidate surrounding halos of the melanocytes. The candidate nuclei are associated with their surrounding halos using the suggested logical and statistical inferences. Finally, a fuzzy inference system is proposed, based on the HSI color information of a typical melanocyte in the epidermis, to calculate the similarity ratio of each candidate cell to a melanocyte. As our review of the literature shows, this is the first method to evaluate epidermis cells for a melanocyte similarity ratio. Experimental results on various images with different zooming factors show that the proposed method improves on the results of previous works.
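A minimal sketch of an HSI-domain test of the kind described: a textbook RGB-to-HSI conversion followed by a threshold on hue, saturation, and intensity. The cutoff values below are hypothetical placeholders, not the paper's calibrated thresholds.

```python
# Standard RGB -> HSI conversion and a toy candidate-nucleus test.
import numpy as np

def rgb_to_hsi(rgb):
    r, g, b = rgb.astype(float) / 255.0
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    h = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    if b > g:
        h = 360.0 - h
    return h, s, i

h, s, i = rgb_to_hsi(np.array([120, 60, 150]))
is_candidate = (200 <= h <= 300) and s > 0.2 and i < 0.6   # hypothetical cutoffs
print(round(float(h), 1), round(float(s), 2), round(float(i), 2), is_candidate)
```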
Artistic image analysis using graph-based learning approaches.
Carneiro, Gustavo
2013-08-01
We introduce a new methodology for the problem of artistic image analysis, which, among other tasks, involves the automatic identification of visual classes present in an art work. In this paper, we advocate the idea that artistic image analysis must explore a graph that captures the network of artistic influences by computing similarities in terms of appearance and manual annotation. One of the novelties of our methodology is the proposed formulation, which is a principled way of combining these two similarities in a single graph. Using this graph, we show that an efficient random walk algorithm based on an inverted label propagation formulation produces more accurate annotation and retrieval results compared with the following baseline algorithms: bag of visual words, label propagation, matrix completion, and structural learning. We also show that the proposed approach leads to more efficient inference and training procedures. The experiments are run on a database containing 988 artistic images (with 49 visual classification problems divided into a multiclass problem with 27 classes and 48 binary problems), where we report the inference and training running times and quantitative comparisons with respect to several retrieval and annotation performance measures.
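For orientation, plain label propagation on a similarity graph (one of the baselines above) looks like the following sketch; the paper's inverted formulation and its combined appearance/annotation graph are not reproduced, and the matrices are toy data.

```python
# Label propagation: diffuse seed labels through a row-normalized walk matrix.
import numpy as np

def propagate(W, Y, labeled, alpha=0.85, iters=100):
    P = W / W.sum(axis=1, keepdims=True)           # random-walk transition matrix
    F = Y.copy()
    for _ in range(iters):
        F = alpha * P @ F + (1 - alpha) * Y        # diffuse, then pull to seeds
        F[labeled] = Y[labeled]                    # clamp known annotations
    return F

W = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
Y = np.array([[1, 0], [0, 0], [0, 0], [0, 1]], float)  # two visual classes
print(propagate(W, Y, labeled=[0, 3]).round(2))
```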
An expert system shell for inferring vegetation characteristics: The learning system (tasks C and D)
NASA Technical Reports Server (NTRS)
Harrison, P. Ann; Harrison, Patrick R.
1992-01-01
This report describes the implementation of a learning system that uses a data base of historical cover type reflectance data taken at different solar zenith angles and wavelengths to learn class descriptions of classes of cover types. It has been integrated with the VEG system and requires that the VEG system be loaded to operate. VEG is the NASA VEGetation workbench - an expert system for inferring vegetation characteristics from reflectance data. The learning system provides three basic options. Using option one, the system learns class descriptions of one or more classes. Using option two, the system learns class descriptions of one or more classes and then uses the learned classes to classify an unknown sample. Using option three, the user can test the system's classification performance. The learning system can also be run in an automatic mode. In this mode, options two and three are executed on each sample from an input file. The system was developed using KEE. It is menu driven and contains a sophisticated window and mouse driven interface which guides the user through various computations. Input and output file management and data formatting facilities are also provided.
Software Analyzes Complex Systems in Real Time
NASA Technical Reports Server (NTRS)
2008-01-01
Expert system software programs, also known as knowledge-based systems, are computer programs that emulate the knowledge and analytical skills of one or more human experts related to a specific subject. SHINE (Spacecraft Health Inference Engine) is one such program, a software inference engine (expert system) designed by NASA for the purpose of monitoring, analyzing, and diagnosing both real-time and non-real-time systems. It was developed to meet many of the Agency's demanding and rigorous artificial intelligence goals for current and future needs. NASA developed the sophisticated and reusable software based on the experience and requirements of its Jet Propulsion Laboratory's (JPL) Artificial Intelligence Research Group in developing expert systems for space flight operations, specifically the diagnosis of spacecraft health. It was designed to be efficient enough to operate in demanding real-time and limited hardware environments, and to be utilized by non-expert systems applications written in conventional programming languages. The technology is currently used in several ongoing NASA applications, including the Mars Exploration Rovers and the Spacecraft Health Automatic Reasoning Pilot (SHARP) program for the diagnosis of telecommunication anomalies during the Neptune Voyager Encounter. It is also finding applications outside of the Space Agency.
NASA Astrophysics Data System (ADS)
Ajay Kumar, M.; Srikanth, N. V.
2014-03-01
In HVDC Light transmission systems, converter control is one of the major fields of present-day research. In this paper, a fuzzy logic controller is used for controlling both converters of a space vector pulse width modulation (SVPWM) based HVDC Light transmission system. Because of the complexity of forming its rule base, an intelligent controller known as the adaptive neuro-fuzzy inference system (ANFIS) controller is also introduced in this paper. The proposed ANFIS controller changes the PI gains automatically for different operating conditions. A hybrid learning method, which combines and exploits the best features of both the back-propagation algorithm and the least-squares estimation method, is used to train the five-layer ANFIS controller. The performance of the proposed ANFIS controller is compared and validated against the fuzzy logic controller and also against a fixed-gain conventional PI controller. The simulations are carried out in the MATLAB/SIMULINK environment. The results reveal that the proposed ANFIS controller reduces power fluctuations at both converters. It also effectively improves the dynamic performance of the test power system when tested under various AC fault conditions.
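The underlying idea of scheduling PI gains from fuzzy membership grades fits in a few lines. A minimal sketch with illustrative triangular memberships and gain values; the five-layer ANFIS trained by hybrid learning is far richer than this:

```python
# Toy fuzzy adaptation of PI gains from the magnitude of a control error.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with feet at a and c, peak at b."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def adapt_gains(error):
    small = tri(abs(error), -0.5, 0.0, 0.5)   # |error| is small
    large = tri(abs(error), 0.3, 1.0, 1.7)    # |error| is large
    w = small + large + 1e-12
    kp = (small * 0.8 + large * 2.0) / w      # weighted-average defuzzification
    ki = (small * 0.1 + large * 0.5) / w
    return kp, ki

for e in (0.05, 0.4, 0.9):
    print(e, tuple(round(g, 3) for g in adapt_gains(e)))
```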
MicroScope: a platform for microbial genome annotation and comparative genomics
Vallenet, D.; Engelen, S.; Mornico, D.; Cruveiller, S.; Fleury, L.; Lajus, A.; Rouy, Z.; Roche, D.; Salvignol, G.; Scarpelli, C.; Médigue, C.
2009-01-01
The initial outcome of genome sequencing is the creation of long text strings written in a four letter alphabet. The role of in silico sequence analysis is to assist biologists in the act of associating biological knowledge with these sequences, allowing investigators to make inferences and predictions that can be tested experimentally. A wide variety of software is available to the scientific community, and can be used to identify genomic objects, before predicting their biological functions. However, only a limited number of biologically interesting features can be revealed from an isolated sequence. Comparative genomics tools, on the other hand, by bringing together the information contained in numerous genomes simultaneously, allow annotators to make inferences based on the idea that evolution and natural selection are central to the definition of all biological processes. We have developed the MicroScope platform in order to offer a web-based framework for the systematic and efficient revision of microbial genome annotation and comparative analysis (http://www.genoscope.cns.fr/agc/microscope). Starting with the description of the flow chart of the annotation processes implemented in the MicroScope pipeline, and the development of traditional and novel microbial annotation and comparative analysis tools, this article emphasizes the essential role of expert annotation as a complement of automatic annotation. Several examples illustrate the use of implemented tools for the review and curation of annotations of both new and publicly available microbial genomes within MicroScope’s rich integrated genome framework. The platform is used as a viewer in order to browse updated annotation information of available microbial genomes (more than 440 organisms to date), and in the context of new annotation projects (117 bacterial genomes). The human expertise gathered in the MicroScope database (about 280,000 independent annotations) contributes to improve the quality of microbial genome annotation, especially for genomes initially analyzed by automatic procedures alone. Database URLs: http://www.genoscope.cns.fr/agc/mage and http://www.genoscope.cns.fr/agc/microcyc PMID:20157493
NASA Astrophysics Data System (ADS)
Barandun, Martina; Huss, Matthias; Usubaliev, Ryskul; Azisov, Erlan; Berthier, Etienne; Kääb, Andreas; Bolch, Tobias; Hoelzle, Martin
2018-06-01
Glacier surface mass balance observations in the Tien Shan and Pamir are relatively sparse and often discontinuous. Nevertheless, glaciers are one of the most important components of the high-mountain cryosphere in the region as they strongly influence water availability in the arid, continental and intensely populated downstream areas. This study provides reliable and continuous surface mass balance series for selected glaciers located in the Tien Shan and Pamir-Alay. By cross-validating the results of three independent methods, we reconstructed the mass balance of the three benchmark glaciers, Abramov, Golubin and Glacier no. 354 for the past 2 decades. By applying different approaches, it was possible to compensate for the limitations and shortcomings of each individual method. This study proposes the use of transient snow line observations throughout the melt season obtained from satellite optical imagery and terrestrial automatic cameras. By combining modelling with remotely acquired information on summer snow depletion, it was possible to infer glacier mass changes for unmeasured years. The model is initialized with daily temperature and precipitation data collected at automatic weather stations in the vicinity of the glacier or with adjusted data from climate reanalysis products. Multi-annual mass changes based on high-resolution digital elevation models and in situ glaciological surveys were used to validate the results for the investigated glaciers. Substantial surface mass loss was confirmed for the three studied glaciers by all three methods, ranging from -0.30 ± 0.19 to -0.41 ± 0.33 m w.e. yr-1 over the 2004-2016 period. Our results indicate that integration of snow line observations into mass balance modelling significantly narrows the uncertainty ranges of the estimates. Hence, this highlights the potential of the methodology for application to unmonitored glaciers at larger scales for which no direct measurements are available.
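The model component of such reconstructions is commonly a temperature-index (degree-day) mass balance model driven by daily temperature and precipitation. A minimal sketch under that assumption, with illustrative parameter values rather than those calibrated in the study, and without the snow-line constraint:

```python
# Toy degree-day mass balance: snow accumulation minus temperature-driven melt.
import numpy as np

def mass_balance(temp_c, precip_mwe, ddf=0.005, t_snow=1.5, t_melt=0.0):
    """Annual balance (m w.e.) from daily temperature (°C) and precipitation."""
    acc = np.where(temp_c < t_snow, precip_mwe, 0.0)   # solid precipitation
    melt = ddf * np.maximum(temp_c - t_melt, 0.0)      # degree-day melt
    return float((acc - melt).sum())

rng = np.random.default_rng(1)
temp = rng.normal(2.0, 5.0, 365)                  # synthetic daily temperature
precip = rng.gamma(0.4, 0.004, 365)               # synthetic daily precipitation
print(round(mass_balance(temp, precip), 2))       # annual balance in m w.e.
```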
Automatic Road Gap Detection Using Fuzzy Inference System
NASA Astrophysics Data System (ADS)
Hashemi, S.; Valadan Zoej, M. J.; Mokhtarzadeh, M.
2011-09-01
Automatic feature extraction from aerial and satellite images is a high-level data processing task which is still one of the most important research topics of the field. In this area, most research focuses on the early step of road detection, where road tracking methods, morphological analysis, dynamic programming and snakes, multi-scale and multi-resolution methods, stereoscopic and multi-temporal analysis, and hyperspectral experiments are some of the mature methods in this field. Although most research focuses on detection algorithms, none of them can extract road networks perfectly. On the other hand, post-processing algorithms, which focus on refining road detection results, are not well developed either. In this article, the main aim is to design an intelligent method to detect and compensate road gaps remaining in the early results of road detection algorithms. The proposed algorithm consists of the following main steps. 1) Short gap coverage: a multi-scale morphological algorithm is designed that covers short gaps in a hierarchical scheme. 2) Long gap detection: the long gaps that could not be covered in the previous stage are detected using a fuzzy inference system; for this purpose, a knowledge base consisting of expert rules is designed, and the rules are fired on gap candidates from the road detection results. 3) Long gap coverage: detected long gaps are compensated by two strategies, linear and polynomial; shorter gaps are filled by line fitting, while longer ones are compensated by polynomials (see the sketch after this abstract). 4) Accuracy assessment: to evaluate the obtained results, some accuracy assessment criteria are proposed. These criteria are obtained by comparing the obtained results with correctly compensated ones produced by a human expert. The complete evaluation of the obtained results, with technical discussion, is the material of the full paper.
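Step 3 reduces to curve fitting across the detected gap. A minimal sketch using numpy.polyfit; the length threshold and polynomial degree are illustrative choices, not the paper's calibrated values.

```python
# Bridge a road gap by fitting surviving centreline points: line for short
# gaps, cubic polynomial for long ones.
import numpy as np

def bridge_gap(x, y, gap_x, short_gap=10.0):
    gap_len = gap_x[-1] - gap_x[0]
    degree = 1 if gap_len <= short_gap else 3      # line vs. polynomial
    coeffs = np.polyfit(x, y, degree)
    return np.polyval(coeffs, gap_x)

x = np.array([0, 1, 2, 3, 14, 15, 16], float)      # road points around a gap
y = np.array([0.0, 0.4, 0.9, 1.3, 5.8, 6.3, 6.7])
print(bridge_gap(x, y, np.linspace(4, 13, 5)))     # interpolated centreline
```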
MicroScope: a platform for microbial genome annotation and comparative genomics.
Vallenet, D; Engelen, S; Mornico, D; Cruveiller, S; Fleury, L; Lajus, A; Rouy, Z; Roche, D; Salvignol, G; Scarpelli, C; Médigue, C
2009-01-01
The initial outcome of genome sequencing is the creation of long text strings written in a four letter alphabet. The role of in silico sequence analysis is to assist biologists in the act of associating biological knowledge with these sequences, allowing investigators to make inferences and predictions that can be tested experimentally. A wide variety of software is available to the scientific community, and can be used to identify genomic objects, before predicting their biological functions. However, only a limited number of biologically interesting features can be revealed from an isolated sequence. Comparative genomics tools, on the other hand, by bringing together the information contained in numerous genomes simultaneously, allow annotators to make inferences based on the idea that evolution and natural selection are central to the definition of all biological processes. We have developed the MicroScope platform in order to offer a web-based framework for the systematic and efficient revision of microbial genome annotation and comparative analysis (http://www.genoscope.cns.fr/agc/microscope). Starting with the description of the flow chart of the annotation processes implemented in the MicroScope pipeline, and the development of traditional and novel microbial annotation and comparative analysis tools, this article emphasizes the essential role of expert annotation as a complement of automatic annotation. Several examples illustrate the use of implemented tools for the review and curation of annotations of both new and publicly available microbial genomes within MicroScope's rich integrated genome framework. The platform is used as a viewer in order to browse updated annotation information of available microbial genomes (more than 440 organisms to date), and in the context of new annotation projects (117 bacterial genomes). The human expertise gathered in the MicroScope database (about 280,000 independent annotations) contributes to improve the quality of microbial genome annotation, especially for genomes initially analyzed by automatic procedures alone. Database URLs: http://www.genoscope.cns.fr/agc/mage and http://www.genoscope.cns.fr/agc/microcyc.
Indium sulfide microflowers: Fabrication and optical properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu Hui; Wang Xiaolei; Yang Wen
2009-10-15
With the assistance of urea, uniform 3D In{sub 2}S{sub 3} microflowers assembled from 2D nanoflakes were synthesized via a facile hydrothermal method at relatively low temperature. The properties of the as-obtained In{sub 2}S{sub 3} flowers were characterized by various techniques. In this work, the use of urea and L-cysteine, as well as their amounts, played important roles in the formation of In{sub 2}S{sub 3} with different nanostructures. As inferred from their morphology evolution, a urea-induced precursor decomposition associated with an Ostwald-ripening mechanism was proposed to interpret the formation of these hierarchical structures. Furthermore, the optical properties of these In{sub 2}S{sub 3} microflowers were investigated in detail via UV-vis absorption and photoluminescence (PL) spectroscopies.
Early formation of planetary building blocks inferred from Pb isotopic ages of chondrules
Bollard, Jean; Connelly, James N.; Whitehouse, Martin J.; Pringle, Emily A.; Bonal, Lydie; Jørgensen, Jes K.; Nordlund, Åke; Moynier, Frédéric; Bizzarro, Martin
2017-01-01
The most abundant components of primitive meteorites (chondrites) are millimeter-sized glassy spherical chondrules formed by transient melting events in the solar protoplanetary disk. Using Pb-Pb dates of 22 individual chondrules, we show that primary production of chondrules in the early solar system was restricted to the first million years after the formation of the Sun and that these existing chondrules were recycled for the remaining lifetime of the protoplanetary disk. This finding is consistent with a primary chondrule formation episode during the early high-mass accretion phase of the protoplanetary disk that transitions into a longer period of chondrule reworking. An abundance of chondrules at early times provides the precursor material required to drive the efficient and rapid formation of planetary objects via chondrule accretion. PMID:28808680
Why is Interstellar Object 1I/2017 U1 (`Oumuamua) Rocky, Tumbling and Possibly Very Prolate?
NASA Astrophysics Data System (ADS)
Katz, J. I.
2018-05-01
The recently discovered first interstellar object 1I/2017 U1 (`Oumuamua) has a brightness that varies by a factor of 10, a range greater than that of any Solar System asteroid, a spectrum characteristic of Type D asteroids, and no evidence of evaporating volatiles, contrary to expectation for exo-Oort clouds. `Oumuamua is possibly the first example of the proposed "Jurads", objects depleted in volatiles and ejected from planetary systems during the post-main-sequence evolution of their parent stars. I suggest that heating by the star's giant stage fluidized a precursor object, as well as driving off any volatiles, causing it to assume the Jacobi ellipsoidal shape of a self-gravitating incompressible liquid. The collision that produced the inferred tumbling motion may have occurred thousands of years after the formation of 1I/2017 U1 (`Oumuamua). Jacobi ellipsoids have a unique relation among rotation rate, density and axial ratio. The inferred axial ratio ⪆ 5 suggests a lower bound on the density of 1.6 g/cm³, apparently excluding an icy interior unless it is almost entirely frozen CO2. `Oumuamua may be related to accreting objects that pollute white dwarf atmospheres and that may make Soft Gamma Repeaters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacPherson, G. J.; Bullock, E. S.; Janney, P. E.
2010-03-10
The short-lived radionuclide {sup 26}Al existed throughout the solar nebula 4.57 Ga ago, and the initial abundance ratio ({sup 26}Al/{sup 27}Al){sub 0}, as inferred from magnesium isotopic compositions of calcium-aluminum-rich inclusions (CAIs) in chondritic meteorites, has become a benchmark for understanding early solar system chronology. Internal mineral isochrons in most CAIs measured by secondary ion mass spectrometry (SIMS) give ({sup 26}Al/{sup 27}Al){sub 0} {approx} (4-5) x 10{sup -5}, called 'canonical'. Some recent high-precision analyses of (1) bulk CAIs measured by multicollector inductively coupled plasma mass spectrometry (MC-ICPMS), (2) individual CAI minerals and their mixtures measured by laser-ablation MC-ICPMS, and (3) internal isochrons measured by multicollector (MC)-SIMS indicated a somewhat higher 'supracanonical' ({sup 26}Al/{sup 27}Al){sub 0} ranging from (5.85 {+-} 0.05) x 10{sup -5} to >7 x 10{sup -5}. These measurements were done on coarse-grained Type B and Type A CAIs that probably formed by recrystallization and/or melting of fine-grained condensate precursors. Thus the supracanonical ratios might record an earlier event, the actual nebular condensation of the CAI precursors. We tested this idea by performing in situ high-precision magnesium isotope measurements of individual minerals in a fine-grained CAI whose structures and volatility-fractionated trace element abundances mark it as a primary solar nebula condensate. Such CAIs are ideal candidates for the fine-grained precursors to the coarse-grained CAIs, and thus should best preserve a supracanonical ratio. Yet, our measured internal isochron yields ({sup 26}Al/{sup 27}Al){sub 0} = (5.27 {+-} 0.17) x 10{sup -5}. Thus our data do not support the existence of supracanonical ({sup 26}Al/{sup 27}Al){sub 0} = (5.85-7) x 10{sup -5}. There may not have been a significant time interval between condensation of the CAI precursors and their subsequent melting into coarse-grained CAIs.
Vertically Integrated Seismological Analysis II : Inference
NASA Astrophysics Data System (ADS)
Arora, N. S.; Russell, S.; Sudderth, E.
2009-12-01
Methods for automatically associating detected waveform features with hypothesized seismic events, and localizing those events, are a critical component of efforts to verify the Comprehensive Test Ban Treaty (CTBT). As outlined in our companion abstract, we have developed a hierarchical model which views detection, association, and localization as an integrated probabilistic inference problem. In this abstract, we provide more details on the Markov chain Monte Carlo (MCMC) methods used to solve this inference task. MCMC generates samples from a posterior distribution π(x) over possible worlds x by defining a Markov chain whose states are the worlds x, and whose stationary distribution is π(x). In the Metropolis-Hastings (M-H) method, transitions in the Markov chain are constructed in two steps. First, given the current state x, a candidate next state x′ is generated from a proposal distribution q(x′ | x), which may be (more or less) arbitrary. Second, the transition to x′ is not automatic, but occurs with an acceptance probability α(x′ | x) = min(1, π(x′) q(x | x′) / (π(x) q(x′ | x))). The seismic event model outlined in our companion abstract is quite similar to those used in multitarget tracking, for which MCMC has proved very effective. In this model, each world x is defined by a collection of events, a list of properties characterizing those events (times, locations, magnitudes, and types), and the association of each event to a set of observed detections. The target distribution is π(x) = P(x | y), the posterior distribution over worlds x given the observed waveform data y at all stations. Proposal distributions then implement several types of moves between worlds. For example, birth moves create new events; death moves delete existing events; split moves partition the detections for an event into two new events; merge moves combine event pairs; swap moves modify the properties and associations for pairs of events. Importantly, the rules for accepting such complex moves need not be hand-designed. Instead, they are automatically determined by the underlying probabilistic model, which is in turn calibrated via historical data and scientific knowledge. Consider a small seismic event which generates weak signals at several different stations, which might independently be mistaken for noise. A birth move may nevertheless hypothesize an event jointly explaining these detections. If the corresponding waveform data then aligns with the seismological knowledge encoded in the probabilistic model, the event may be detected even though no single station observes it unambiguously. Alternatively, if a large outlier reading is produced at a single station, moves which instantiate a corresponding (false) event would be rejected because of the absence of plausible detections at other sensors. More broadly, one of the main advantages of our MCMC approach is its consistent handling of the relative uncertainties in different information sources. By avoiding low-level thresholds, we expect to improve accuracy and robustness. At the conference, we will present results quantitatively validating our approach, using ground-truth associations and locations provided either by simulation or human analysts.
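The acceptance rule quoted above is easy to exercise on a toy target. A minimal Metropolis-Hastings sketch for a one-dimensional standard normal with a symmetric Gaussian proposal (so the q terms cancel); the seismological sampler applies the same rule over far richer world states and move types:

```python
# Minimal Metropolis-Hastings sampler for a 1-D target given by log_pi.
import math, random

def metropolis_hastings(log_pi, x0, n_steps=20000, step=0.5):
    x, samples = x0, []
    for _ in range(n_steps):
        xp = x + random.gauss(0.0, step)          # symmetric proposal: q cancels
        # accept with probability min(1, pi(x')/pi(x))
        if random.random() < math.exp(min(0.0, log_pi(xp) - log_pi(x))):
            x = xp
        samples.append(x)
    return samples

chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=3.0)  # standard normal
print(sum(chain[2000:]) / len(chain[2000:]))      # near 0 after burn-in
```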
Oxytocin administration enhances controlled social cognition in patients with schizophrenia
Woolley, J.D.; Chuang, B.; Lam, O.; Lai, W.; O’Donovan, A.; Rankin, K.P.; Mathalon, D.H.; Vinogradov, S.
2014-01-01
Background: Individuals with schizophrenia have functionally significant deficits in automatic and controlled social cognition, but no currently available pharmacologic treatments reduce these deficits. The neuropeptide oxytocin has multiple prosocial effects when administered intranasally in humans and there is growing interest in its therapeutic potential in schizophrenia. Methods: We administered 40 IU of oxytocin and saline placebo intranasally to 29 male subjects with schizophrenia and 31 age-matched, healthy controls in a randomized, double-blind, placebo-controlled, cross-over study. Social cognition was assessed with The Awareness of Social Inference Test (TASIT) and the Reading the Mind in the Eyes Test (RMET). We examined the effects of oxytocin administration on automatic social cognition (the ability to rapidly interpret and understand emotional cues from the voice, face, and body); controlled social cognition (the ability to comprehend indirectly expressed emotions, thoughts, and intentions through complex deliberations over longer time periods); and a control task (the ability to comprehend truthful dialog and perform general task procedures) in individuals with and without schizophrenia using mixed factorial analysis of variance models. Results: Patients with schizophrenia showed significant impairments in automatic and controlled social cognition compared to healthy controls, and administration of oxytocin significantly improved their controlled, but not automatic, social cognition, F(1, 58) = 8.75; p = 0.004. Conversely, oxytocin administration had limited effects on social cognition in healthy participants. Patients and controls performed equally well and there were no effects of oxytocin administration on the control task. Discussion: Intact social cognitive abilities are associated with better functional outcomes in individuals with schizophrenia. Our data highlight the potentially complex effects of oxytocin on some but not all aspects of social cognition, and support the exploration of intranasal oxytocin as a potential adjunct treatment to improve controlled social cognition in schizophrenia. Published by Elsevier Ltd. PMID:25001961
SciFlo: Semantically-Enabled Grid Workflow for Collaborative Science
NASA Astrophysics Data System (ADS)
Yunck, T.; Wilson, B. D.; Raskin, R.; Manipon, G.
2005-12-01
SciFlo is a system for Scientific Knowledge Creation on the Grid using a Semantically-Enabled Dataflow Execution Environment. SciFlo leverages Simple Object Access Protocol (SOAP) Web Services and the Grid Computing standards (WS-* standards and the Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable SOAP Services, native executables, local command-line scripts, and python codes into a distributed computing flow (a graph of operators). SciFlo's XML dataflow documents can be a mixture of concrete operators (fully bound operations) and abstract template operators (late binding via semantic lookup). All data objects and operators can be both simply typed (simple and complex types in XML schema) and semantically typed using controlled vocabularies (linked to OWL ontologies such as SWEET). By exploiting ontology-enhanced search and inference, one can discover (and automatically invoke) Web Services and operators that have been semantically labeled as performing the desired transformation, and adapt a particular invocation to the proper interface (number, types, and meaning of inputs and outputs). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The scientist injects a distributed computation into the Grid by simply filling out an HTML form or directly authoring the underlying XML dataflow document, and results are returned directly to the scientist's desktop. A Visual Programming tool is also being developed, but it is not required. Once an analysis has been specified for a granule or day of data, it can be easily repeated with different control parameters and over months or years of data. SciFlo uses and preserves semantics, and also generates and infers new semantic annotations. Specifically, the SciFlo engine uses semantic metadata to understand (infer) what it is doing and potentially improve the data flow; preserves semantics by saving links to the semantics of (metadata describing) the input datasets, related datasets, and the data transformations (algorithms) used to generate downstream products; generates new metadata by allowing the user to add semantic annotations to the generated data products (or simply accept automatically generated provenance annotations); and infers new semantic metadata by understanding and applying logic to the semantics of the data and the transformations performed. Much ontology development still needs to be done but, nevertheless, SciFlo documents provide a substrate for using and preserving more semantics as ontologies develop. We will give a live demonstration of the growing SciFlo network using an example dataflow in which atmospheric temperature and water vapor profiles from three Earth Observing System (EOS) instruments are retrieved using SOAP (geo-location query & data access) services, co-registered, and visually & statistically compared on demand (see http://sciflo.jpl.nasa.gov for more information).
Cantón, Rafael; Alós, Juan Ignacio; Baquero, Fernando; Calvo, Jorge; Campos, José; Castillo, Javier; Cercenado, Emilia; Domínguez, M Angeles; Liñares, Josefina; López-Cerezo, Lorena; Marco, Francesc; Mirelis, Beatriz; Morosini, María-Isabel; Navarro, Ferran; Oliver, Antonio; Pérez-Trallero, Emilio; Torres, Carmen; Martínez-Martínez, Luis
2007-01-01
The number of clinical microbiology laboratories that have incorporated automatic susceptibility testing devices has increased in recent years. The majority of these systems determine MIC values using microdilution panels or specific cards, group isolates into clinical categories (susceptible, intermediate or resistant), and incorporate expert systems to infer resistance mechanisms. This document presents the recommendations of a group of experts designated by Grupo de Estudio de los Mecanismos de Acción y Resistencia a los Antimicrobianos (GEMARA, Study group on mechanisms of action and resistance to antimicrobial agents) and Mesa Española de Normalización de la Sensibilidad y Resistencia a los Antimicrobianos (MENSURA, Spanish Group for Normalizing Antimicrobial Susceptibility and Antimicrobial Resistance), with the aim of selecting the antimicrobial agents and concentrations to include in the susceptibility testing panels of automatic systems. The following have been defined: various antimicrobial categories (A: must be included in the study panel; B: inclusion is recommended; and C: inclusion is secondary, but may facilitate interpretative reading of the antibiogram) and groups (0: not used in therapeutics but may facilitate the detection of resistance mechanisms; 1: must be studied and always reported; 2: must be studied and selectively reported; 3: must be studied and reported at a second level; and 4: should be studied in urinary tract pathogens isolated in urine and other specimens). Recommended antimicrobial concentrations are adapted from the breakpoints established by EUCAST, CLSI and MENSURA. This approach will lead to more accurate susceptibility testing results, better detection of resistance mechanisms, and achievement of the clinical goal of the antibiogram.
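The category and group scheme defined above is essentially a lookup structure plus reporting rules. A toy encoding follows; the agent name and the suppression rule are illustrative assumptions, not part of the GEMARA/MENSURA recommendations.

```python
# Illustrative sketch only: encoding the categories (A/B/C) and reporting
# groups (0-4) so an expert system could drive selective reporting.
CATEGORY = {"A": "must be included", "B": "inclusion recommended",
            "C": "secondary; aids interpretative reading"}
GROUP = {0: "not therapeutic; detects resistance mechanisms",
         1: "study and always report",
         2: "study and selectively report",
         3: "study and report at a second level",
         4: "study in urinary tract pathogens"}

def report(agent, group, first_line_susceptible):
    """Toy selective-reporting rule (assumed): suppress a group-2 agent
    when a related group-1 (first-line) agent already tests susceptible."""
    if group == 2 and first_line_susceptible:
        return f"{agent}: suppressed (first-line agent susceptible)"
    return f"{agent}: reported"

print(report("cefepime", 2, first_line_susceptible=True))
```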
Khelassi, Abdeldjalil
2014-01-01
Active research is being conducted to determine the prognosis for breast cancer. However, uncertainty is a major obstacle in this domain of medical research. In that context, explanation-aware computing has the potential to provide meaningful interactions between complex medical applications and users, ensuring a significant reduction of uncertainty and risk. This paper presents an explanation-aware agent, supported by the Intensive Knowledge-Distributed Case-Based Reasoning Classifier (IK-DCBRC), to reduce the uncertainty and risks associated with the diagnosis of breast cancer. A meaningful explanation is generated by inferring from a rule-based system according to the level of abstraction and the reasoning traces. The computer-aided detection is conducted by IK-DCBRC, a multi-agent system that applies the case-based reasoning paradigm and a fuzzy similarity function for automatic prognosis of the class of breast tumors, i.e. malignant or benign, from patterns in cytological images. A meaningful interaction between the physician and the computer-aided diagnosis system, IK-DCBRC, is achieved via an intelligent agent. The physician can observe the trace of reasoning, terms, justifications, and the strategy used, decreasing the risks and doubts associated with the automatic diagnosis. The capability of the system was demonstrated by an example in which conflicts were clarified and transparency was ensured. The explanation agent ensures the transparency of the automatic diagnosis of breast cancer supported by IK-DCBRC, which decreases uncertainty and risks and detects some conflicts.
Jordan, Desmond; Rose, Sydney E
2010-04-01
Medical errors arising from communication failures are common during the perioperative period of cardiac surgical patients. As caregivers change shifts or surgical patients change location within the hospital, key information is lost or misconstrued. After a baseline cognitive study of information need and caregiver workflow, we implemented an advanced clinical decision support tool of intelligent agents, medical logic modules, and text generators called the "Inference Engine" to summarize individual patients' raw medical data elements into procedural milestones, illness severity, and care therapies. The system generates two displays: 1) the continuum of care, multimedia abstract generation of intensive care data (MAGIC)-an expert system that automatically generates a physician briefing of a cardiac patient's operative course in a multimodal format; and 2) the isolated point in time, "Inference Engine"-a system that provides a real-time, high-level, summarized depiction of a patient's clinical status. In our studies, system accuracy and efficacy were judged against clinician performance in the workplace. To test the automated physician briefing, MAGIC, the patient's intraoperative course was reviewed in the intensive care unit before patient arrival; it was then judged against the actual physician briefing and against briefings in a cohort of patients where the system was not used. To test the real-time representation of the patient's clinical status, system inferences were judged against clinician decisions. Changes in workflow and situational awareness were assessed by questionnaires and process evaluation. MAGIC provides 200% more information, twice the accuracy, and enhances situational awareness. This study demonstrates that the automation of clinical processes through AI methodologies yields positive results.
ANUBIS: artificial neuromodulation using a Bayesian inference system.
Smith, Benjamin J H; Saaj, Chakravarthini M; Allouis, Elie
2013-01-01
Gain tuning is a crucial part of controller design and depends not only on an accurate understanding of the system in question, but also on the designer's ability to predict what disturbances and other perturbations the system will encounter throughout its operation. This letter presents ANUBIS (artificial neuromodulation using a Bayesian inference system), a novel biologically inspired technique for automatically tuning controller parameters in real time. ANUBIS is based on the Bayesian brain concept and modifies it by incorporating a model of the neuromodulatory system comprising four artificial neuromodulators. It has been applied to the controller of EchinoBot, a prototype walking rover for Martian exploration. ANUBIS has been implemented at three levels of the controller: gait generation, foot trajectory planning using Bézier curves, and foot trajectory tracking using a terminal sliding mode controller. We compare the results to a similar system that has been tuned using a multilayer perceptron (MLP). The use of Bayesian inference means that the system retains mathematical interpretability, unlike other intelligent tuning techniques, which use neural networks, fuzzy logic, or evolutionary algorithms. The simulation results show that ANUBIS provides significant improvements in efficiency and adaptability of the three controller components; it allows the robot to react to obstacles and uncertainties faster than the system tuned with the MLP, while maintaining stability and accuracy. As well as advancing rover autonomy, ANUBIS could also be applied to other situations where operating conditions are likely to change or cannot be accurately modeled in advance, such as process control. In addition, it demonstrates one way in which neuromodulation could fit into the Bayesian brain framework.
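The abstract does not give the update equations, but the flavor of Bayesian real-time gain tuning can be conveyed with a minimal sketch: maintain a Gaussian belief over a single controller gain and refine it from noisy per-stride observations. This is a generic conjugate-Gaussian update under assumed numbers, not the published ANUBIS algorithm.

```python
# Minimal sketch of Bayesian gain tuning (illustrative, not ANUBIS itself):
# keep a Gaussian belief over the controller gain and update it from noisy
# observations of the gain that would have cancelled the last tracking error.
def update(mu, var, obs, obs_var):
    """Conjugate Gaussian update of the gain belief."""
    k = var / (var + obs_var)            # Kalman-style blending weight
    return mu + k * (obs - mu), (1 - k) * var

mu, var = 1.0, 0.5                        # prior belief over the gain (assumed)
for observed_gain in (1.4, 1.3, 1.5):     # e.g. inferred after each stride
    mu, var = update(mu, var, observed_gain, obs_var=0.2)
print(round(mu, 3), round(var, 3))        # belief tightens around ~1.35
```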
Divided Attention and Processes Underlying Sense of Agency
Wen, Wen; Yamashita, Atsushi; Asama, Hajime
2016-01-01
Sense of agency refers to the subjective feeling of controlling events through one’s behavior or will. Sense of agency results from matching predictions of one’s own actions with actual feedback regarding the action. Furthermore, when an action involves a cued goal, performance-based inference contributes to sense of agency. That is, if people achieve their goal, they would believe themselves to be in control. Previous studies have shown that both action-effect comparison and performance-based inference contribute to sense of agency; however, the dominance of one process over the other may shift based on task conditions such as the presence or absence of specific goals. In this study, we examined the influence of divided attention on these two processes underlying sense of agency in two conditions. In the experimental task, participants continuously controlled a moving dot for 10 s while maintaining a string of three or seven digits in working memory. We found that when there was no cued goal (no-cued-goal condition), sense of agency was impaired by high cognitive load. Contrastingly, when participants controlled the dot based on a cued goal (cued-goal-directed condition), their sense of agency was lower than in the no-cued-goal condition and was not affected by cognitive load. The results suggest that the action-effect comparison process underlying sense of agency requires attention. On the other hand, the weaker influence of divided attention in the cued-goal-directed condition could be attributed to the dominance of performance-based inference, which is probably automatic. PMID:26858680
Akhtar, Naveed; Mian, Ajmal
2017-10-03
We present a principled approach to learn a discriminative dictionary along with a linear classifier for hyperspectral classification. Our approach places Gaussian Process priors over the dictionary to account for the relative smoothness of the natural spectra, whereas the classifier parameters are sampled from multivariate Gaussians. We employ two Beta-Bernoulli processes to jointly infer the dictionary and the classifier. These processes are coupled under the same sets of Bernoulli distributions. In our approach, these distributions signify the frequency of dictionary atom usage in representing class-specific training spectra, which also makes the dictionary discriminative. Due to the coupling between the dictionary and the classifier, the popularity of the atoms for representing different classes gets encoded into the classifier. This helps in predicting the class labels of test spectra, which are first represented over the dictionary by solving a simultaneous sparse optimization problem; the labels are then predicted by feeding the resulting representations to the classifier. Our approach exploits the nonparametric Bayesian framework to automatically infer the dictionary size--the key parameter in discriminative dictionary learning. Moreover, it also has the desirable property of adaptively learning the association between the dictionary atoms and the class labels by itself. We use Gibbs sampling to infer the posterior probability distributions over the dictionary and the classifier under the proposed model, for which we derive analytical expressions. To establish the effectiveness of our approach, we test it on benchmark hyperspectral images. The classification performance is compared with state-of-the-art dictionary learning-based classification methods.
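A minimal sketch of the Beta-Bernoulli mechanism named above, under heavy simplifying assumptions (a single class, fixed usage counts, no Gaussian Process prior and no coupling to the classifier): each atom's usage probability gets a Beta posterior, and Bernoulli draws select the atoms active for the class.

```python
# Sketch of the Beta-Bernoulli idea (simplified; not the paper's Gibbs sampler).
import numpy as np

rng = np.random.default_rng(0)
K, a, b = 8, 1.0, 1.0                              # atoms; Beta(a, b) prior
usage_counts = np.array([5, 0, 3, 1, 0, 4, 0, 2])  # assumed per-atom usage for a class
n_samples = 6                                      # training spectra in that class

# Posterior over each atom's usage probability: Beta(a + used, b + unused).
pi = rng.beta(a + usage_counts, b + (n_samples - usage_counts))
z = rng.random(K) < pi                             # Bernoulli draw: active atoms
print(np.flatnonzero(z))                           # frequently used atoms dominate
```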
A recurrent self-organizing neural fuzzy inference network.
Juang, C F; Lin, C T
1999-01-01
A recurrent self-organizing neural fuzzy inference network (RSONFIN) is proposed in this paper. The RSONFIN is inherently a recurrent multilayered connectionist network for realizing the basic elements and functions of dynamic fuzzy inference, and may be considered to be constructed from a series of dynamic fuzzy rules. The temporal relations embedded in the network are built by adding feedback connections representing memory elements to a feedforward neural fuzzy network. Each weight as well as each node in the RSONFIN has its own meaning and represents a special element in a fuzzy rule. There are no hidden nodes (i.e., no membership functions and fuzzy rules) initially in the RSONFIN. They are created on-line via concurrent structure identification (the construction of dynamic fuzzy if-then rules) and parameter identification (the tuning of the free parameters of membership functions). The structure learning together with the parameter learning forms a fast learning algorithm for building a small, yet powerful, dynamic neural fuzzy network. Two major characteristics of the RSONFIN can thus be seen: 1) the recurrent property of the RSONFIN makes it suitable for dealing with temporal problems, and 2) no predetermined structure, such as the number of hidden nodes, needs to be given, since the RSONFIN can find its optimal structure and parameters automatically and quickly. Moreover, to reduce the number of fuzzy rules generated, a flexible input partition method, the aligned clustering-based algorithm, is proposed. Various simulations on temporal problems are performed, and performance comparisons with some existing recurrent networks are made; these results verify the efficiency of the RSONFIN.
Bayesian paradox in homeland security and homeland defense
NASA Astrophysics Data System (ADS)
Jannson, Tomasz; Forrester, Thomas; Wang, Wenjian
2011-06-01
In this paper we discuss a rather surprising result of Bayesian inference analysis: the performance of a broad variety of sensors depends not only on the sensor system itself, but also on CONOPS parameters, in such a way that even an excellent sensor system can perform poorly if the absolute probability of a threat (target) is lower than the false alarm probability. This result, which we call the Bayesian paradox, holds not only for binary sensors, as discussed in the lead author's previous papers, but also for the more general class of multi-target sensors also discussed in this paper. Examples include: ATR (automatic target recognition), luggage X-ray inspection for explosives, medical diagnostics, car engine diagnostics, judicial decisions, and many other issues.
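The paradox follows directly from Bayes' rule, as a short worked example shows (the detection and false-alarm rates below are illustrative, not taken from the paper):

```python
# Worked example of the "Bayesian paradox": even an excellent sensor yields
# a low posterior threat probability when the prior threat probability is
# below the false-alarm rate.
def posterior(prior, p_detect, p_false_alarm):
    evidence = prior * p_detect + (1 - prior) * p_false_alarm
    return prior * p_detect / evidence

# 99% detection, 1% false alarms, but only 1 in 10,000 items is a threat:
print(posterior(1e-4, 0.99, 0.01))   # ~0.0098: fewer than 1% of alarms are real
```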
Maps of Structured Aerosol Activity During the MY 25 Planet-encircling Dust Storm on Mars
NASA Astrophysics Data System (ADS)
Noble, J.; Wilson, R. J.; Cantor, B. A.; Kahre, M. A.; Hollingsworth, J. L.; Bridger, A. F. C.; Haberle, R. M.; Barnes, J.
2016-12-01
We have produced a sequence of 42 global maps from Ls=165.1-187.7° that delimit the areal extent of structured aerosol activity based on a synthesis of Mars Global Surveyor (MGS) data, including Mars Orbiter Camera (MOC) daily global maps (DGMs) and wide angle imagery, Thermal Emission Spectrometer (TES) dust and H2O ice opacity, and Mars general circulation model (MGCM) derived dust opacity. The primary motivation of this work is to examine the temporal and spatial relationship between dust storms observed by MOC and baroclinic eddies inferred from Fast Fourier Synoptic Mapping (FFSM) of TES temperatures in order to study the initiation and evolution of the Mars year (MY) 25 planet-encircling dust storm (PDS) precursor-phase dust storms. A secondary motivation is to provide improved input to MGCM simulations. Assuming that structured dust storms indicate active dust lifting, these maps allow us to define potential dust lifting regions. This work has two implications for martian atmospheric science. First, integration of MGS data has enabled us to develop improved quantitative and qualitative descriptions of storm evolution that may be used to constrain estimates of dust lifting regions and horizontal dust distribution, and to infer associated circulations. Second, we believe that these maps provide better bases and constraints for modeling storm initiation. Based on our analysis of these MGS data, we propose the following working hypothesis to explain the dynamical processes responsible for PDS initiation and expansion. Six eastward-traveling transient baroclinic eddies triggered the MY 25 precursor storms in Hellas during Ls=176.2-184.6° through the enhanced dust lifting associated with their low-level wind and stress fields. This was followed by a seventh eddy that contributed to expansion at Ls=186.3°. Increased opacity and temperatures from dust lifting associated with the first three eddies enhanced thermal tides, which supported further storm initiation and expansion out of Hellas. Constructive interference of eddies and other circulation components, including sublimation flow, anabatic (daytime upslope) winds, and diurnal tides, may have contributed to storm onset in, and expansion out of, Hellas.
Yeh, Hsiang-Yuan; Cheng, Shih-Wu; Lin, Yu-Chun; Yeh, Cheng-Yu; Lin, Shih-Fang; Soo, Von-Wun
2009-12-21
Prostate cancer is a leading cancer worldwide and is characterized by its aggressive metastasis. Reflecting this clinical heterogeneity, prostate cancer displays different stages and grades related to aggressive metastatic disease. Although numerous studies have used microarray analysis and traditional clustering methods to identify individual genes during the disease processes, the important gene regulations remain unclear. We present a computational method for automatically inferring genetic regulatory networks from microarray data, using transcription factor analysis and conditional independence testing to explore the potentially significant gene regulatory networks that are correlated with cancer, tumor grade and stage in prostate cancer. To deal with missing values in microarray data, we used a K-nearest-neighbors (KNN) algorithm to estimate the missing expression values. We applied web services technology to wrap the bioinformatics toolkits and databases, automatically extract the promoter regions of DNA sequences, and predict the transcription factors that regulate the gene expressions. To evaluate our method, we adopted a microarray dataset consisting of 62 primary tumors and 41 normal prostate tissues from the Stanford Microarray Database (SMD). The predicted results identified possible biomarker genes related to cancer and suggested that androgen-related functions and processes may act in the development of prostate cancer and promote cell death in the cell cycle. Our predicted results showed that sub-networks of the genes SREBF1, STAT6 and PBX1 are related to a high extent, while those of the ETS transcription factors ELK1, JUN and EGR2 are related to a low extent. The gene SLC22A3 may clinically explain the differentiation associated with high-grade cancer compared with low-grade cancer. Enhancer of Zeste Homolog 2 (EZH2), regulated by RUNX1 and STAT3, is correlated with pathological stage. We provide a computational framework to reconstruct the genetic regulatory network from microarray data using biological knowledge and constraint-based inferences. Our method is helpful in verifying possible interaction relations in gene regulatory networks and filtering out incorrect relations inferred by imperfect methods. We predicted not only individual genes related to cancer but also discovered significant gene regulatory networks. Our method is also validated against several enriched published papers and databases, and the significant gene regulatory networks perform critical biological functions and processes, including cell adhesion molecules, androgen and estrogen metabolism, smooth muscle contraction, and GO-annotated processes. These significant gene regulations and the critical concept of tumor progression are useful for understanding cancer biology and disease treatment.
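For the missing-value step, a plain KNN imputation over gene expression rows can be sketched as follows; the distance measure and k are assumptions for illustration, not necessarily the authors' exact choices.

```python
# Hedged sketch of KNN missing-value imputation for a genes-by-samples matrix.
import numpy as np

def knn_impute(X, k=3):
    """Fill np.nan entries of X with the mean value of the k most similar
    genes (RMS distance computed over the samples both genes observe)."""
    X = X.copy()
    for i, j in zip(*np.where(np.isnan(X))):
        candidates = []
        for g in range(X.shape[0]):
            if g == i or np.isnan(X[g, j]):
                continue
            shared = ~np.isnan(X[i]) & ~np.isnan(X[g])
            if shared.any():
                d = np.sqrt(np.mean((X[i, shared] - X[g, shared]) ** 2))
                candidates.append((d, X[g, j]))
        X[i, j] = np.mean([v for _, v in sorted(candidates)[:k]])
    return X

X = np.array([[1.0, 2.0, 3.0],
              [1.1, np.nan, 3.1],
              [0.9, 2.1, 2.9],
              [5.0, 9.0, 1.0]])
print(knn_impute(X, k=2)[1, 1])   # ~2.05, taken from the two similar genes
```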
Delen, Guusje; Ristanović, Zoran; Mandemaker, Laurens D. B.
2017-01-01
Control over assembly, orientation, and defect-free growth of metal-organic framework (MOF) films is crucial for their future applications. A layer-by-layer (LbL) approach is considered a suitable method to synthesize highly oriented films of numerous MOF topologies, but the initial stages of the film growth remain poorly understood. Here we use a combination of infrared (IR) reflection absorption spectroscopy and atomic force microscopy (AFM)-IR imaging to investigate the assembly and growth of a surface mounted MOF (SURMOF) film, specifically HKUST-1. IR spectra of the films were measured with monolayer sensitivity and <10 nm spatial resolution. In contrast to the common understanding of LbL SURMOF synthesis, we find evidence for surface-hindered growth and an abundance of copper acetate precursor species in the produced MOF thin-films. The growth proceeds via a solution-mediated mechanism in which the presence of weakly adsorbed copper acetate species leads to the formation of crystalline agglomerates with a size that largely exceeds theoretical growth limits. We report the spectroscopic characterization of physisorbed copper acetate surface species and find evidence for a large population of unexchanged and mixed copper paddle-wheels. Based on these insights, we were able to optimize and automate synthesis methods, produce (100) oriented HKUST-1 thin-films with significantly shorter synthesis times, and additionally use copper nitrate as an effective synthesis precursor. PMID:29164720
NASA Astrophysics Data System (ADS)
Park, Changkun; Nagashima, Kazuhide; Krot, Alexander N.; Huss, Gary R.; Davis, Andrew M.; Bizzarro, Martin
2017-03-01
Calcium-aluminum-rich inclusions with isotopic mass fractionation effects and unidentified nuclear isotopic anomalies (FUN CAIs) have been studied for more than 40 years, but their origins remain enigmatic. Here we report in situ high precision measurements of aluminum-magnesium isotope systematics of FUN CAIs by secondary ion mass spectrometry (SIMS). Individual minerals were analyzed in six FUN CAIs from the oxidized CV3 carbonaceous chondrites Axtell (compact Type A CAI Axtell 2271) and Allende (Type B CAIs C1 and EK1-4-1, and forsterite-bearing Type B CAIs BG82DH8, CG-14, and TE). Most of these CAIs show evidence for excess 26Mg due to the decay of 26Al. The inferred initial 26Al/27Al ratios [(26Al/27Al)0] and the initial magnesium isotopic compositions (δ26Mg0) calculated using an exponential law with an exponent β of 0.5128 are (3.1 ± 1.6) × 10-6 and 0.60 ± 0.10‰ (Axtell 2271), (3.7 ± 1.5) × 10-6 and -0.20 ± 0.05‰ (BG82DH8), (2.2 ± 1.1) × 10-6 and -0.18 ± 0.05‰ (C1), (2.3 ± 2.4) × 10-5 and -2.23 ± 0.37‰ (EK1-4-1), (1.5 ± 1.1) × 10-5 and -0.42 ± 0.08‰ (CG-14), and (5.3 ± 0.9) × 10-5 and -0.05 ± 0.08‰ (TE) with 2σ uncertainties. We infer that FUN CAIs recorded heterogeneities of magnesium isotopes and 26Al in the CAI-forming region(s). Comparison of 26Al-26Mg systematics, stable isotope (oxygen, magnesium, calcium, and titanium) and trace element studies of FUN and non-FUN igneous CAIs indicates that there is a continuum among these CAI types. Based on these observations and evaporation experiments on CAI-like melts, we propose a generic scenario for the origin of igneous (FUN and non-FUN) CAIs: (i) condensation of isotopically normal solids in an 16O-rich gas of approximately solar composition; (ii) formation of CAI precursors by aggregation of these solids together with variable abundances of isotopically anomalous grains-possible carriers of unidentified nuclear (UN) effects; and (iii) melt evaporation of these precursors accompanied by crystallization under different temperatures and gas pressures, leading to the observed variations in mass-dependent isotopic fractionation (F) effects.
Meinecke, Jena; Tzeferacos, Petros; Bell, Anthony; Bingham, Robert; Clarke, Robert; Churazov, Eugene; Crowston, Robert; Doyle, Hugo; Drake, R Paul; Heathcote, Robert; Koenig, Michel; Kuramitsu, Yasuhiro; Kuranz, Carolyn; Lee, Dongwook; MacDonald, Michael; Murphy, Christopher; Notley, Margaret; Park, Hye-Sook; Pelka, Alexander; Ravasio, Alessandra; Reville, Brian; Sakawa, Youichi; Wan, Willow; Woolsey, Nigel; Yurchak, Roman; Miniati, Francesco; Schekochihin, Alexander; Lamb, Don; Gregori, Gianluca
2015-07-07
The visible matter in the universe is turbulent and magnetized. Turbulence in galaxy clusters is produced by mergers and by jets of the central galaxies, and is believed to be responsible for the amplification of magnetic fields. We report on experiments looking at the collision of two laser-produced plasma clouds, mimicking, in the laboratory, a cluster merger event. By measuring the spectrum of the density fluctuations, we infer developed, Kolmogorov-like turbulence. From spectral line broadening, we estimate a level of turbulence consistent with turbulent heating balancing radiative cooling, as it likely does in galaxy clusters. We show that the magnetic field is amplified by turbulent motions, reaching a nonlinear regime that is a precursor to turbulent dynamo. Thus, our experiment provides a promising platform for understanding the structure of turbulence and the amplification of magnetic fields in the universe.
Wang, Zhuoran; Opembe, Naftali; Kobayashi, Takeshi; ...
2018-02-03
In this study, solid-state (SS)NMR techniques were applied to characterize the atomic-scale structures of ordered mesoporous carbon (OMC) materials prepared using Pluronic F127 as template with resorcinol and formaldehyde as polymerizing precursors. A rigorous quantitative analysis was developed using a combination of 13C SSNMR spectra acquired with direct polarization and cross polarization on naturally abundant and selectively 13C-enriched series of samples pyrolyzed at various temperatures. These experiments identified and counted the key functional groups present in the OMCs at various stages of preparation and thermal treatment. Finally, the chemical evolution of molecular networks, the average sizes of aromatic clusters, and the extended molecular structures of OMCs were inferred by coupling this information with the elemental analysis results.
Observation of fast expansion velocity with insulating tungsten wires on ∼80 kA facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, M.; Li, Y.; State Key Laboratory of Intense Pulsed Radiation Simulation and Effect, Northwest Institute of Nuclear Technology, Xi'an 710024
2016-07-15
This paper presents experimental results on the effects of insulating coatings on tungsten planar wire array Z-pinches on an 80 kA, 100 ns current facility. The expansion velocity is markedly increased, from ∼0.25 km/s to ∼3.5 km/s, by using the insulating coatings. It can be inferred that the wire cores are in a gaseous state at this fast expansion velocity. Optical framing camera and laser probing images show that the standard wire arrays have a typical ablation process similar to their behavior on mega-ampere facilities. The ablation process and precursor plasma are suppressed for the dielectric tungsten wires. The wire array implosion might be improved if these phenomena can be reproduced on mega-ampere facilities.
New model for Jurassic microcontinent movement and Gondwana breakup in the Weddell Sea region
NASA Astrophysics Data System (ADS)
Jordan, Tom; Ferraccioli, Fausto; Leat, Philip
2017-04-01
The breakup of the Gondwana supercontinent changed the face of our planet. Precursors of supercontinental breakup are widely recognised in the Weddell Sea region in the Jurassic. These include the Karoo/Ferrar Large Igneous Province, which extends from South Africa to East Antarctica, and significant continental rifting and associated translation of microcontinental blocks in the Weddell Sea Embayment region. However, significant controversy surrounds the pre-breakup position, extent, timing and driving mechanism of inferred microcontinental movement. In particular, geological and paleomagnetic data suggest >1000 km of translation and 90 degrees of rotation of the Haag-Ellsworth Whitmore block (HEW) away from East Antarctica. In contrast, some geophysical interpretations suggest little or no Jurassic or subsequent HEW block movement. Here we present a simpler tectonic model for the Weddell Sea Rift System and HEW movement, derived from our new compilation of airborne geophysical data, satellite magnetic data and potential field modelling (Jordan et al., 2016, Gondwana Res.). Based on the amount of inferred Jurassic crustal extension and the pattern of magnetic anomalies, we propose that the HEW was translated 500 km towards the Paleo-Pacific margin of Gondwana, possibly in response to a process of slab roll-back that led to distributed back-arc extension in the Weddell Sea Rift System. Widespread magmatism in the region was likely influenced by the presence of one or more mantle plumes impinging beneath the stretching lithosphere. A second phase of continental extension is inferred to have occurred between 180 and 165 Ma (prior to seafloor spreading) and is more closely associated with Gondwana breakup. This second phase over-printed the northern part of the older back-arc system. We find no geophysical evidence indicating more than 30 degrees of syn-extensional HEW rotation during Jurassic rifting in the southern Weddell Sea Rift System. Instead, we propose that the majority (∼60 degrees) of the inferred block rotation of the HEW sedimentary sequences occurred prior to Jurassic rifting, likely during the Permian-age Gondwanide orogeny as a phase of oroclinal bending in an overall transpressional intraplate orogenic setting.
Automated image analysis of uterine cervical images
NASA Astrophysics Data System (ADS)
Li, Wenjing; Gu, Jia; Ferris, Daron; Poirson, Allen
2007-03-01
Cervical cancer is the second most common cancer among women worldwide and the leading cause of cancer mortality of women in developing countries. If detected early and treated adequately, cervical cancer can be virtually prevented. Cervical precursor lesions and invasive cancer exhibit certain morphologic features that can be identified during a visual inspection exam. Digital imaging technologies allow us to assist the physician with a Computer-Aided Diagnosis (CAD) system. In colposcopy, epithelium that turns white after application of acetic acid is called acetowhite epithelium. Acetowhite epithelium is one of the major diagnostic features observed in detecting cancer and pre-cancerous regions. Automatic extraction of acetowhite regions from cervical images has been a challenging task due to specular reflection, various illumination conditions, and, most importantly, large intra-patient variation. This paper presents a multi-step acetowhite region detection system that analyzes the acetowhite lesions in cervical images automatically. First, the system calibrates the color of the cervical images to be independent of screening devices. Second, the anatomy of the uterine cervix is analyzed in terms of the cervix region, external os region, columnar region, and squamous region. Third, the squamous region is further analyzed and subregions based on three levels of acetowhite are identified. The extracted acetowhite regions are accompanied by color scores to indicate the different levels of acetowhite. The system has been evaluated on data from 40 human subjects and demonstrates high correlation with experts' annotations.
Logic programming reveals alteration of key transcription factors in multiple myeloma.
Miannay, Bertrand; Minvielle, Stéphane; Roux, Olivier; Drouin, Pierre; Avet-Loiseau, Hervé; Guérin-Charbonnel, Catherine; Gouraud, Wilfried; Attal, Michel; Facon, Thierry; Munshi, Nikhil C; Moreau, Philippe; Campion, Loïc; Magrangeas, Florence; Guziolowski, Carito
2017-08-23
Innovative approaches combining regulatory networks (RN) and genomic data are needed to extract biological information for a better understanding of diseases such as cancer, by improving the identification of entities and thereby leading to potential new therapeutic avenues. In this study, we confronted an automatically generated RN with gene expression profiles (GEP) from a cohort of multiple myeloma (MM) patients and normal individuals, using global reasoning on the RN causality to identify key nodes. We modeled each patient by his or her GEP, the RN, and the automatically detected repairs needed to establish a coherent flow of the information that explains the logic of the GEP. These repairs could represent cancer mutations leading to GEP variability. With this reasoning, unmeasured protein states can be inferred, and we can simulate the impact of a protein perturbation on the RN behavior to identify therapeutic targets. We showed that JUN/FOS and FOXM1 activities are altered in almost all MM patients and identified two survival markers for MM patients. Our results suggest that JUN/FOS activation has a strong impact on the RN in view of the whole GEP, whereas FOXM1 activation could be an interesting way to perturb an MM subgroup identified by our method.
Nentjes, Lieke; Bernstein, David; Arntz, Arnoud; van Breukelen, Gerard; Slaats, Mariëtte
2015-01-01
Theory of Mind (ToM) is a social perceptual skill that refers to the ability to take someone else's perspective and infer what others think. The current study examined the effect of potential hostility biases, as well as controlled (slow) versus automatic (fast) processing on ToM performance in psychopathy. ToM abilities (as assessed with the Reading the Mind in the Eyes Test; RMET; Baron-Cohen, Wheelwright, Hill, Raste, & Plumb, 2001), was compared between 39 PCL-R diagnosed psychopathic offenders, 37 non-psychopathic offenders, and 26 nonoffender controls. Contrary to our hypothesis, psychopathic individuals presented with intact overall RMET performance when restrictions were imposed on how long task stimuli could be processed. In addition, psychopaths did not over-ascribe hostility to task stimuli (i.e., lack of hostility bias). However, there was a significant three-way interaction between hostility, processing speed, and psychopathy: when there was no time limit on stimulus presentation, psychopathic offenders made fewer errors in identifying more hostile eye stimuli compared to nonoffender controls, who seemed to be less accurate in detecting hostility. Psychopaths' more realistic appraisal of others' malevolent mental states is discussed in the light of theories that stress its potential adaptive function. Copyright © 2015 Elsevier Ltd. All rights reserved.
Procedural error monitoring and smart checklists
NASA Technical Reports Server (NTRS)
Palmer, Everett
1990-01-01
Human beings make and usually detect errors routinely. The same mental processes that allow humans to cope with novel problems can also lead to error. Bill Rouse has argued that errors are not inherently bad but their consequences may be. He proposes the development of error-tolerant systems that detect errors and take steps to prevent the consequences of the error from occurring. Research should be done on self-detection and automatic detection of random and unanticipated errors. For self-detection, displays should be developed that make the consequences of errors immediately apparent. For example, electronic map displays graphically show the consequences of horizontal flight plan entry errors. Vertical profile displays should be developed to make vertical flight planning errors apparent. Other concepts, such as energy circles, could also help the crew detect gross flight planning errors. For automatic detection, systems should be developed that can track pilot activity, infer pilot intent, and inform the crew of potential errors before their consequences are realized. Systems that perform a reasonableness check on flight plan modifications by checking route length and magnitude of course changes are simple examples. Another example would be a system that checked the aircraft's planned altitude against a database of world terrain elevations. Information is given in viewgraph form.
Earle, Paul S.; Wald, David J.; Jaiswal, Kishor S.; Allen, Trevor I.; Hearne, Michael G.; Marano, Kristin D.; Hotovec, Alicia J.; Fee, Jeremy
2009-01-01
Within minutes of a significant earthquake anywhere on the globe, the U.S. Geological Survey (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system assesses its potential societal impact. PAGER automatically estimates the number of people exposed to severe ground shaking and the shaking intensity at affected cities. Accompanying maps of the epicentral region show the population distribution and estimated ground-shaking intensity. A regionally specific comment describes the inferred vulnerability of the regional building inventory and, when available, lists recent nearby earthquakes and their effects. PAGER's results are posted on the USGS Earthquake Program Web site (http://earthquake.usgs.gov/), consolidated in a concise one-page report, and sent in near real-time to emergency responders, government agencies, and the media. Both rapid and accurate results are obtained through manual and automatic updates of PAGER's content in the hours following significant earthquakes. These updates incorporate the most recent estimates of earthquake location, magnitude, faulting geometry, and first-hand accounts of shaking. PAGER relies on a rich set of earthquake analysis and assessment tools operated by the USGS and contributing Advanced National Seismic System (ANSS) regional networks. A focused research effort is underway to extend PAGER's near real-time capabilities beyond population exposure to quantitative estimates of fatalities, injuries, and displaced population.
A novel architecture for information retrieval system based on semantic web
NASA Astrophysics Data System (ADS)
Zhang, Hui
2011-12-01
Nowadays, the web has enabled an explosive growth of information sharing (there are currently over 4 billion pages covering most areas of human endeavor), so that the web now faces a new challenge of information overload. The challenge before us is not only to help people locate relevant information precisely but also to access and aggregate a variety of information from different resources automatically. Current web documents are in human-oriented formats, suitable for presentation, but machines cannot understand their meaning. To address this issue, Berners-Lee proposed the concept of the semantic web. With semantic web technology, web information can be understood and processed by machines. It provides new possibilities for automatic web information processing. A main problem of semantic web information retrieval is that, when there is not enough knowledge in such an information retrieval system, the system returns a large number of meaningless results to users because of the huge amount of information retrieved. In this paper, we present the architecture of an information retrieval system based on the semantic web. In addition, our system employs an inference engine to check whether a query should be posed to the keyword-based search engine or to the semantic search engine.
Automatic Multi-sensor Data Quality Checking and Event Detection for Environmental Sensing
NASA Astrophysics Data System (ADS)
LIU, Q.; Zhang, Y.; Zhao, Y.; Gao, D.; Gallaher, D. W.; Lv, Q.; Shang, L.
2017-12-01
With the advances in sensing technologies, large-scale environmental sensing infrastructures are pervasively deployed to continuously collect data for various research and application fields, such as air quality study and weather condition monitoring. In such infrastructures, many sensor nodes are distributed in a specific area, and each individual sensor node is capable of measuring several parameters (e.g., humidity, temperature, and pressure), providing massive data for natural event detection and analysis. However, due to the dynamics of the ambient environment, sensor data can be contaminated by errors or noise. Thus, data quality is still a primary concern for scientists before drawing any reliable scientific conclusions. To help researchers identify potential data quality issues and detect meaningful natural events, this work proposes a novel algorithm to automatically identify and rank anomalous time windows from multiple sensor data streams. More specifically, (1) the algorithm adaptively learns the characteristics of normal evolving time series and (2) models the spatial-temporal relationship among multiple sensor nodes to infer the anomaly likelihood of a time series window for a particular parameter in a sensor node. Case studies using different data sets are presented, and the experimental results demonstrate that the proposed algorithm can effectively identify anomalous time windows, which may result from data quality issues or natural events.
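A much-simplified stand-in for such an algorithm — not the authors' method — scores each window of a target sensor's stream by its deviation from the median of neighboring sensors and ranks windows by that score. The window size and synthetic data are assumptions.

```python
# Simplified sketch: rank anomalous windows of one sensor against neighbors.
import numpy as np

def rank_anomalous_windows(streams, target=0, win=24):
    """streams: sensors x time array for one parameter (e.g. temperature)."""
    neighbours = np.delete(streams, target, axis=0)
    residual = streams[target] - np.median(neighbours, axis=0)
    n_win = streams.shape[1] // win
    scores = [np.abs(residual[w*win:(w+1)*win]).mean() for w in range(n_win)]
    return np.argsort(scores)[::-1]               # most anomalous windows first

rng = np.random.default_rng(1)
data = rng.normal(size=(5, 240))
data[0, 96:120] += 4.0                             # inject an anomaly
print(rank_anomalous_windows(data)[0])             # -> 4, the injected window
```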
Ontology design patterns to disambiguate relations between genes and gene products in GENIA
2011-01-01
Motivation Annotated reference corpora play an important role in biomedical information extraction. A semantic annotation of the natural language texts in these reference corpora using formal ontologies is challenging due to the inherent ambiguity of natural language. The provision of formal definitions and axioms for semantic annotations offers the means for ensuring consistency as well as enabling the development of verifiable annotation guidelines. Consistent semantic annotations facilitate the automatic discovery of new information through deductive inferences. Results We provide a formal characterization of the relations used in the recent GENIA corpus annotations. For this purpose, we both select existing axiom systems based on the desired properties of the relations within the domain and develop new axioms for several relations. To apply this ontology of relations to the semantic annotation of text corpora, we implement two ontology design patterns. In addition, we provide a software application to convert annotated GENIA abstracts into OWL ontologies by combining both the ontology of relations and the design patterns. As a result, the GENIA abstracts become available as OWL ontologies and are amenable to automated verification, deductive inferences and other knowledge-based applications. Availability Documentation, implementation and examples are available from http://www-tsujii.is.s.u-tokyo.ac.jp/GENIA/. PMID:22166341
2014-01-01
Background Non-small cell lung cancer (NSCLC) remains lethal despite the development of numerous drug therapy technologies. About 85% to 90% of lung cancers are NSCLC, and the 5-year survival rate is at best still below 50%. Thus, it is important to find druggable target genes for NSCLC in order to develop an effective therapy. Results Integrated analysis of publicly available gene expression and promoter methylation patterns of two highly aggressive NSCLC cell lines generated by in vivo selection was performed. We selected eleven critical genes that may mediate metastasis using recently proposed principal component analysis based unsupervised feature extraction. The eleven selected genes were significantly related to cancer diagnosis. The tertiary protein structures of the selected genes were inferred by the Full Automatic Modeling System, a profile-based protein structure inference program, to determine protein functions and to identify genes that could be potential drug targets. Conclusions We identified eleven potentially critical genes that may mediate NSCLC metastasis using bioinformatic analysis of publicly available data sets. These genes are potential target genes for the therapy of NSCLC. Among the eleven genes, TINAGL1 and B3GALNT1 are possible candidates for drug compounds that inhibit their gene expression. PMID:25521548
Skinner, Brian; Guy, Stephen J.
2015-01-01
Player tracking data represents a revolutionary new data source for basketball analysis, in which essentially every aspect of a player’s performance is tracked and can be analyzed numerically. We suggest a way by which this data set, when coupled with a network-style model of the offense that relates players’ skills to the team’s success at running different plays, can be used to automatically learn players’ skills and predict the performance of untested 5-man lineups in a way that accounts for the interaction between players’ respective skill sets. After developing a general analysis procedure, we present as an example a specific implementation of our method using a simplified network model. While player tracking data is not yet available in the public domain, we evaluate our model using simulated data and show that player skills can be accurately inferred by a simple statistical inference scheme. Finally, we use the model to analyze games from the 2011 playoff series between the Memphis Grizzlies and the Oklahoma City Thunder and we show that, even with a very limited data set, the model can consistently describe a player’s interactions with a given lineup based only on his performance with a different lineup. PMID:26351846
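The flavor of the network-style model can be conveyed with a toy version (the model form, numbers, and simulated data are illustrative assumptions, not the paper's model): a play succeeds with probability given by a logistic function of the involved players' summed skills, and skills are recovered from observed outcomes by maximum likelihood.

```python
# Toy skill-inference sketch in the spirit of the paper (all values assumed).
import numpy as np

rng = np.random.default_rng(2)
n_players, n_plays = 5, 2000
true_skill = rng.normal(0.0, 1.0, n_players)
involved = (rng.random((n_plays, n_players)) < 0.4).astype(float)  # who ran each play
p_true = 1.0 / (1.0 + np.exp(-involved @ true_skill))
outcome = (rng.random(n_plays) < p_true).astype(float)             # play succeeded?

skill = np.zeros(n_players)
for _ in range(2000):                      # gradient ascent on the log-likelihood
    p_hat = 1.0 / (1.0 + np.exp(-involved @ skill))
    skill += 0.5 * involved.T @ (outcome - p_hat) / n_plays
print(np.round(np.corrcoef(true_skill, skill)[0, 1], 2))  # close to 1
```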
Quantum-Like Bayesian Networks for Modeling Decision Making
Moreira, Catarina; Wichert, Andreas
2016-01-01
In this work, we explore an alternative quantum structure to perform quantum probabilistic inferences to accommodate the paradoxical findings of the Sure Thing Principle. We propose a Quantum-Like Bayesian Network, which consists of replacing classical probabilities by quantum probability amplitudes. However, since this approach suffers from the problem of exponential growth of quantum parameters, we also propose a similarity heuristic that automatically fits the quantum parameters through vector similarities. This makes the proposed model general and predictive, in contrast to the current state-of-the-art models, which cannot be generalized to more complex decision scenarios and only provide an explanatory account of the observed paradoxes. In the end, the model that we propose is a nonparametric method for estimating inference effects from a statistical point of view. It is a statistical model that is simpler than the previous quantum dynamic and quantum-like models proposed in the literature. We tested the proposed network with several empirical data sets from the literature, mainly from the Prisoner's Dilemma game and the Two Stage Gambling game. The results obtained show that the proposed quantum Bayesian Network is a general method that can accommodate violations of the laws of classical probability theory and make accurate predictions regarding human decision-making in these scenarios. PMID:26858669
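The core departure from classical probability can be shown in one formula: with amplitudes, the total probability of an event reachable along two paths acquires an interference term 2·sqrt(p1·p2)·cos(θ), where θ is the kind of free quantum parameter the similarity heuristic fits. A quick numerical illustration (all values assumed):

```python
# Classical total probability would give P = p1 + p2; amplitudes add an
# interference term that can suppress (or boost) the total probability.
import math

p1, p2 = 0.3, 0.4
for theta in (math.pi / 2, 2.0, math.pi):
    p_quantum = p1 + p2 + 2 * math.sqrt(p1 * p2) * math.cos(theta)
    print(round(p_quantum, 3))   # 0.7 at pi/2 (classical); below 0.7 beyond pi/2
```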
NASA Astrophysics Data System (ADS)
Vasheghani Farahani, Jamileh; Zare, Mehdi; Lucas, Caro
2012-04-01
This article presents an adaptive neuro-fuzzy inference system (ANFIS) for classification of low magnitude seismic events reported in Iran by the network of the Tehran Disaster Mitigation and Management Organization (TDMMO). ANFIS classifiers were used to detect seismic events using six inputs that defined the events. Neuro-fuzzy coding was applied using the six extracted features as ANFIS inputs. Two types of events were defined: weak earthquakes and mining blasts. The data comprised 748 events (6289 signals) ranging from magnitude 1.1 to 4.6 recorded at 13 seismic stations between 2004 and 2009; the database includes some 223 earthquakes with M ≤ 2.2. Data sets from the south, east, and southeast of the city of Tehran were used to evaluate the best short-period seismic discriminants, and features such as event origin time, source-to-station distance, epicenter latitude and longitude, magnitude, and spectral content (fc of the Pg wave) were used as inputs, increasing the rate of correct classification and decreasing the confusion rate between weak earthquakes and quarry blasts. The performance of the ANFIS model was evaluated for training and classification accuracy. The results confirmed that the proposed ANFIS model has good potential for classifying seismic events.
Two-dimensional thermal video analysis of offshore bird and bat flight
Matzner, Shari; Cullinan, Valerie I.; Duberstein, Corey A.
2015-09-11
Thermal infrared video can provide essential information about bird and bat presence and activity for risk assessment studies, but the analysis of recorded video can be time-consuming and may not extract all of the available information. Automated processing makes continuous monitoring over extended periods of time feasible, and maximizes the information provided by video. This is especially important for collecting data in remote locations that are difficult for human observers to access, such as proposed offshore wind turbine sites. We present guidelines for selecting an appropriate thermal camera based on environmental conditions and the physical characteristics of the target animals. We developed new video image processing algorithms that automate the extraction of bird and bat flight tracks from thermal video, and that characterize the extracted tracks to support animal identification and behavior inference. The algorithms use a video peak store process followed by background masking and perceptual grouping to extract flight tracks. The extracted tracks are automatically quantified in terms that could then be used to infer animal type and possibly behavior. The developed automated processing generates results that are reproducible and verifiable, and reduces the total amount of video data that must be retained and reviewed by human experts. Finally, we suggest models for interpreting thermal imaging information.
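The first two stages of the described pipeline are simple to sketch: a video peak store keeps each pixel's maximum over the clip, so a warm moving animal leaves a bright track, and subtracting a background estimate isolates candidate track pixels. The thresholds and synthetic data below are assumptions, not the published parameters.

```python
# Sketch of a video peak store plus background masking (parameters assumed).
import numpy as np

def extract_track_mask(frames, bg_percentile=50, delta=8.0):
    """frames: time x height x width thermal intensity array."""
    peak = frames.max(axis=0)                        # video peak store
    background = np.percentile(frames, bg_percentile, axis=0)
    return (peak - background) > delta               # candidate track pixels

rng = np.random.default_rng(3)
clip = rng.normal(20.0, 1.0, size=(60, 32, 32))      # synthetic thermal clip
for t in range(32):                                  # a diagonal "flight track"
    clip[t + 14, t, t] = 35.0
mask = extract_track_mask(clip)
print(mask.sum())                                    # ~32 track pixels recovered
```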
Supervised dictionary learning for inferring concurrent brain networks.
Zhao, Shijie; Han, Junwei; Lv, Jinglei; Jiang, Xi; Hu, Xintao; Zhao, Yu; Ge, Bao; Guo, Lei; Liu, Tianming
2015-10-01
Task-based fMRI (tfMRI) has been widely used to explore functional brain networks via predefined stimulus paradigms in the fMRI scan. Traditionally, the general linear model (GLM) has been the dominant approach to detect task-evoked networks. However, GLM focuses on task-evoked or event-evoked brain responses and possibly ignores intrinsic brain functions. In comparison, dictionary learning and sparse coding methods have attracted much attention recently, and these methods have shown the promise of automatically and systematically decomposing fMRI signals into meaningful task-evoked and intrinsic concurrent networks. Nevertheless, two notable limitations of current data-driven dictionary learning methods are that the prior knowledge of the task paradigm is not sufficiently utilized and that establishing correspondences among dictionary atoms in different brains has been challenging. In this paper, we propose a novel supervised dictionary learning and sparse coding method for inferring functional networks from tfMRI data, which combines the advantages of model-driven and data-driven methods. The basic idea is to fix the task stimulus curves as predefined model-driven dictionary atoms and only optimize the remaining data-driven dictionary atoms. Application of this novel methodology on the publicly available Human Connectome Project (HCP) tfMRI datasets has achieved promising results.
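The basic idea — fix the task-paradigm atoms and learn only the data-driven remainder — can be sketched with an alternating scheme. The sketch below uses ridge-regularized coding instead of a true sparse solver and random data, so it illustrates the structure of the method, not the authors' optimizer.

```python
# Simplified supervised dictionary learning: fixed atoms + learned atoms.
import numpy as np

rng = np.random.default_rng(4)
T, V, K_fixed, K_free = 120, 500, 2, 8        # timepoints, voxels, atom counts
X = rng.normal(size=(T, V))                   # tfMRI signal matrix (synthetic)
D_fixed = rng.normal(size=(T, K_fixed))       # task stimulus curves (given, fixed)
D_free = rng.normal(size=(T, K_free))

for _ in range(20):
    D = np.hstack([D_fixed, D_free])
    # coding step (ridge-regularized stand-in for sparse coding):
    A = np.linalg.solve(D.T @ D + 0.1 * np.eye(D.shape[1]), D.T @ X)
    # dictionary update: refit only the free atoms to the residual signal
    residual = X - D_fixed @ A[:K_fixed]
    A_free = A[K_fixed:]
    D_free = residual @ A_free.T @ np.linalg.pinv(A_free @ A_free.T)
    D_free /= np.linalg.norm(D_free, axis=0, keepdims=True)
print(D_free.shape)   # (120, 8): learned atoms alongside the fixed task atoms
```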
Rizzo, Gaia; Tonietto, Matteo; Castellaro, Marco; Raffeiner, Bernd; Coran, Alessandro; Fiocco, Ugo; Stramare, Roberto; Grisan, Enrico
2017-04-01
Contrast Enhanced Ultrasound (CEUS) is a sensitive imaging technique for assessing tissue vascularity and can be particularly useful in early detection and grading of arthritis. In a recent study we showed that a Gamma-variate can accurately quantify synovial perfusion and is flexible enough to describe many heterogeneous patterns. However, in some cases the heterogeneity of the kinetics is such that even the Gamma model does not properly describe the curve, yielding a high number of outliers. In this work we apply to CEUS data the single compartment recirculation model (SCR), which explicitly takes into account the trapping of the microbubble contrast agent by adding to the single Gamma-variate model its integral. The SCR model, originally proposed for dynamic-susceptibility magnetic resonance imaging, is solved here at the pixel level within a Bayesian framework using Variational Bayes (VB). We also include the automatic relevance determination (ARD) algorithm to automatically infer the model complexity (SCR vs. Gamma model) from the data. We demonstrate that the inclusion of trapping best describes the CEUS patterns in 50% of the pixels, with the other 50% best fitted by a single Gamma-variate. These results highlight the necessity of using ARD to automatically exclude the irreversible component where it is not supported by the data. VB with ARD returns precise estimates for the majority of the kinetics (88% of pixels) in a limited computational time (on average, 3.6 min per subject). Moreover, the impact of the additional trapping component was evaluated for the differentiation of rheumatoid and non-rheumatoid patients by means of a support vector machine classifier with backward feature selection. The results show that the trapping parameter is always present in the selected feature set and improves the classification.
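The two model forms at issue can be written compactly. The notation below is a reconstruction for illustration (symbols assumed, not copied from the paper): the SCR curve equals a Gamma-variate bolus plus a scaled integral term representing irreversible microbubble trapping.

```latex
% Sketch of the model forms (notation assumed): Gamma-variate vs. SCR.
\begin{align}
  C_\gamma(t) &= A\,(t - t_0)^{\alpha}\, e^{-(t - t_0)/\beta}, \qquad t > t_0 \\
  C_{\mathrm{SCR}}(t) &= C_\gamma(t) + \kappa \int_{t_0}^{t} C_\gamma(\tau)\, d\tau
\end{align}
```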
NASA Astrophysics Data System (ADS)
Bakhous, Christine; Aubert, Benjamin; Vazquez, Carlos; Cresson, Thierry; Parent, Stefan; De Guise, Jacques
2018-02-01
The 3D analysis of spine deformities (scoliosis) has high potential in clinical diagnosis and treatment. In the context of biplanar radiographs, a 3D analysis requires a 3D reconstruction from a pair of 2D X-rays. Whether fully automatic, semi-automatic, or manual, this task is complex because of noise, structure superimposition, and partial information due to a limited number of projections. Being involved in the axial vertebral rotation (AVR), a fundamental clinical parameter for scoliosis diagnosis, pedicles are important landmarks for 3D spine modeling and pre-operative planning. In this paper, we focus on the extension of a fully automatic 3D spine reconstruction method in which the Vertebral Body Centers (VBCs) are automatically detected using a Convolutional Neural Network (CNN) and then regularized using a Statistical Shape Model (SSM) framework. In this global process, pedicles are inferred statistically during the SSM regularization. Our contribution is to add a CNN-based regression model for pedicle detection, allowing better pedicle localization and improving the estimation of clinical parameters (e.g., AVR, Cobb angle). Having 476 datasets including healthy patients and Adolescent Idiopathic Scoliosis (AIS) cases with different scoliosis grades (Cobb angles up to 116°), we used 380 for training, 48 for testing and 48 for validation. Adding the local CNN-based pedicle detection decreases the mean absolute error of the AVR by 10%. The 3D mean Euclidean distance error between detected pedicles and ground truth decreases by 17% and the maximum error by 19%. Moreover, a general improvement is observed in the 3D spine reconstruction and is reflected in lower errors on the Cobb angle estimation.
NASA Astrophysics Data System (ADS)
Filippatos, Konstantinos; Boehler, Tobias; Geisler, Benjamin; Zachmann, Harald; Twellmann, Thorsten
2010-02-01
With its high sensitivity, dynamic contrast-enhanced MR imaging (DCE-MRI) of the breast is today one of the first-line tools for early detection and diagnosis of breast cancer, particularly in the dense breasts of young women. However, many relevant findings are very small or occult on targeted ultrasound images or mammography, so that MRI-guided biopsy is the only option for a precise histological work-up [1]. State-of-the-art software tools for computer-aided diagnosis of breast cancer in DCE-MRI data also offer means for image-based planning of biopsy interventions. One step in the MRI-guided biopsy workflow is the alignment of the patient position with the preoperative MR images. In these images, the location and orientation of the coil localization unit can be inferred from a number of fiducial markers, which for this purpose have to be detected manually or semi-automatically by the user. In this study, we propose a method for precise, fully automatic localization of fiducial markers, on the basis of which a virtual localization unit can subsequently be placed in the image volume to determine the parameters for needle navigation. The method is based on adaptive thresholding for separating breast tissue from background, followed by rigid registration of marker templates. In an evaluation of 25 clinical cases comprising 4 different commercial coil array models and 3 different MR imaging protocols, the method yielded a sensitivity of 0.96 at a false positive rate of 0.44 markers per case. The mean distance between detected fiducial centers and ground truth annotated by a radiologist was 0.94 mm.
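A compact sketch of the two-stage idea (illustrative assumptions: Otsu thresholding stands in for the adaptive thresholding, and normalized cross-correlation against a marker template stands in for the rigid template registration):

```python
# Sketch: mask breast tissue, then score marker-template correlation inside it.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.feature import match_template

def find_fiducials(image, template, corr_thresh=0.8):
    mask = image > threshold_otsu(image)                  # tissue vs. background
    corr = match_template(image, template, pad_input=True)
    corr[~mask] = 0.0                                     # search only in the mask
    return np.argwhere(corr > corr_thresh)                # candidate marker centers

rng = np.random.default_rng(0)
image = rng.random((128, 128))                            # dummy slice
template = np.ones((5, 5))                                # dummy marker template
centers = find_fiducials(image, template)
```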
A segmentation approach for a delineation of terrestrial ecoregions
NASA Astrophysics Data System (ADS)
Nowosad, J.; Stepinski, T.
2017-12-01
Terrestrial ecoregions are the result of regionalizing land into homogeneous units of similar ecological and physiographic features. Terrestrial Ecoregions of the World (TEW) is a commonly used global ecoregionalization based on expert knowledge and in situ observations. Ecological Land Units (ELUs) is a global classification of 250 m cells into 4000 types on the basis of the categorical values of four environmental variables. ELUs are automatically calculated and reproducible, but they are not a regionalization, which makes them impractical for GIS-based spatial analysis and for comparison with TEW. We have regionalized terrestrial ecosystems on the basis of patterns of the same variables (land cover, soils, landform, and bioclimate) previously used in ELUs. Considering patterns of categorical variables makes segmentation, and thus regionalization, possible. The original raster datasets of the four variables are first transformed into regular grids of square blocks of cells called eco-sites. Eco-sites are elementary land units containing local patterns of physiographic characteristics and are thus assumed to contain a single ecosystem. Next, eco-sites are locally aggregated using a procedure analogous to image segmentation; see the sketch after this abstract. The procedure optimizes the pattern homogeneity of all four environmental variables within each segment. The result is a regionalization of the landmass into land units characterized by uniform patterns of land cover, soils, landforms, and climate, and, by inference, by a uniform ecosystem. Because several disjoint segments may have very similar characteristics, we cluster the segments to obtain a smaller set of segment types, which we identify with ecoregions. Our approach is automatic, reproducible, updatable, and customizable. It yields the first automatic delineation of ecoregions on the global scale. In the resulting vector database each ecoregion/segment is described by numerous attributes, which makes it a valuable GIS resource for global ecological and conservation studies.
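As a sketch of what the pattern-homogeneity test during aggregation could look like (the Jensen-Shannon criterion and tolerance below are illustrative assumptions, not the published procedure):

```python
# Sketch: merge two adjacent eco-sites only if, for each categorical variable,
# their category histograms are close in Jensen-Shannon distance.
import numpy as np
from scipy.spatial.distance import jensenshannon

def histogram(block, n_categories):
    h = np.bincount(block.ravel(), minlength=n_categories).astype(float)
    return h / h.sum()

def similar(block_a, block_b, n_categories, tol=0.2):
    return jensenshannon(histogram(block_a, n_categories),
                         histogram(block_b, n_categories)) < tol

a = np.random.randint(0, 4, (10, 10))   # dummy eco-site, 4 land-cover classes
b = np.random.randint(0, 4, (10, 10))
print(similar(a, b, 4))
```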
Inference and quantification of peptidoforms in large sample cohorts by SWATH-MS
Röst, Hannes L; Ludwig, Christina; Buil, Alfonso; Bensimon, Ariel; Soste, Martin; Spector, Tim D; Dermitzakis, Emmanouil T; Collins, Ben C; Malmström, Lars; Aebersold, Ruedi
2017-01-01
The consistent detection and quantification of protein post-translational modifications (PTMs) across sample cohorts is an essential prerequisite for the functional analysis of biological processes. Data-independent acquisition (DIA), a bottom-up mass spectrometry based proteomic strategy exemplified by SWATH-MS, provides complete precursor and fragment ion information for a sample and thus, in principle, the information to identify peptidoforms, the modified variants of a peptide. However, due to the convoluted structure of DIA data sets, the confident and systematic identification and quantification of peptidoforms has remained challenging. Here we present IPF (Inference of PeptidoForms), a fully automated algorithm that uses spectral libraries to query, validate and quantify peptidoforms in DIA data sets. The method was developed on data acquired by SWATH-MS and benchmarked using a synthetic phosphopeptide reference data set and phosphopeptide-enriched samples. The data indicate that IPF reduced false site-localization by more than 7-fold in comparison to previous approaches, while recovering 85.4% of the true signals. IPF was applied to detect and quantify peptidoforms carrying ten different types of PTMs in DIA data acquired from more than 200 samples of undepleted blood plasma of a human twin cohort. The data apportioned, for the first time, the contribution of heritable, environmental and longitudinal effects to the observed quantitative variability of specific modifications in the blood plasma of a human population. PMID:28604659
NASA Technical Reports Server (NTRS)
Turner, T. J.; Weaver, K. A.; Mushotzky, R. F.; Holt, S. S.; Madejski, G. M.
1991-01-01
The X-ray spectra of 25 Seyfert galaxies measured with the Solid State Spectrometer on the Einstein Observatory have been investigated. This new investigation utilizes simultaneous data from the Monitor Proportional Counter, and automatic correction for systematic effects in the Solid State Spectrometer which were previously handled subjectively. It is found that the best-fit single-power-law indices generally agree with those previously reported, but that soft excesses of some form are inferred for about 48 percent of the sources. One possible explanation of the soft excess emission is a blend of soft X-ray lines, centered around 0.8 keV. The implications of these results for accretion disk models are discussed.
Self-organizing maps for learning the edit costs in graph matching.
Neuhaus, Michel; Bunke, Horst
2005-06-01
Although graph matching and graph edit distance computation have become areas of intensive research recently, the automatic inference of the cost of edit operations has remained an open problem. In the present paper, we address the issue of learning graph edit distance cost functions for numerically labeled graphs from a corpus of sample graphs. We propose a system of self-organizing maps (SOMs) that represent the distance measuring spaces of node and edge labels. Our learning process is based on the concept of self-organization. It adapts the edit costs in such a way that the similarity of graphs from the same class is increased, whereas the similarity of graphs from different classes decreases. The learning procedure is demonstrated on two different applications involving line drawing graphs and graphs representing diatoms, respectively.
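A minimal sketch of the underlying self-organizing map update (standard SOM dynamics, assumed here; the system described above couples such maps for node and edge labels and derives edit costs from distances in the learned spaces):

```python
# Sketch: move the best-matching unit and its neighbors toward each sample.
import numpy as np

def som_step(weights, x, lr=0.1, sigma=1.0):
    # weights: (n_units, dim) prototypes of the label space; x: (dim,) sample
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
    idx = np.arange(len(weights))
    h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))     # 1-D neighborhood kernel
    return weights + lr * h[:, None] * (x - weights)

w = np.random.rand(10, 2)                 # toy 1-D map over 2-D node labels
w = som_step(w, np.array([0.5, 0.5]))
```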
Video denoising using low rank tensor decomposition
NASA Astrophysics Data System (ADS)
Gui, Lihua; Cui, Gaochao; Zhao, Qibin; Wang, Dongsheng; Cichocki, Andrzej; Cao, Jianting
2017-03-01
Reducing noise in a video sequence is of vital importance in many real-world applications. One popular method is block-matching collaborative filtering. However, its main drawback is that the noise standard deviation for the whole video sequence must be known in advance. In this paper, we present a tensor-based denoising framework that considers 3D patches instead of 2D patches. After collecting similar 3D patches non-locally, we employ low-rank tensor decomposition for collaborative filtering. Since we specify a non-informative prior over the noise precision parameter, the noise variance can be inferred automatically from the observed video data. Our method is therefore more practical, as it does not require knowing the noise variance. Experiments on video denoising demonstrate the effectiveness of the proposed method.
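A sketch of the core low-rank step on a stack of similar 3D patches (deterministic SVD truncation as a stand-in; the Bayesian decomposition described above instead infers the rank and the noise precision automatically):

```python
# Sketch: matricize a group of similar 3D patches and keep the leading
# singular components as the denoised estimate.
import numpy as np

def low_rank_denoise(patch_stack, keep=0.9):
    # patch_stack: (n_patches, patch_voxels), one flattened 3D patch per row
    U, s, Vt = np.linalg.svd(patch_stack, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, keep)) + 1    # crude rank selection
    return (U[:, :r] * s[:r]) @ Vt[:r]

noisy_group = np.random.rand(20, 4 * 4 * 4)        # 20 similar 4x4x4 patches
denoised = low_rank_denoise(noisy_group)
```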
Visual exploration of parameter influence on phylogenetic trees.
Hess, Martin; Bremm, Sebastian; Weissgraeber, Stephanie; Hamacher, Kay; Goesele, Michael; Wiemeyer, Josef; von Landesberger, Tatiana
2014-01-01
Evolutionary relationships between organisms are frequently derived as phylogenetic trees inferred from multiple sequence alignments (MSAs). The MSA parameter space is exponentially large, so tens of thousands of potential trees can emerge for each dataset. A proposed visual-analytics approach can reveal the parameters' impact on the trees. Given input trees created with different parameter settings, it hierarchically clusters the trees according to their structural similarity. The most important clusters of similar trees are shown together with their parameters. This view offers interactive parameter exploration and automatic identification of relevant parameters. Biologists applied this approach to real data of 16S ribosomal RNA and protein sequences of ion channels. It revealed which parameters affected the tree structures. This led to a more reliable selection of the best trees.
CAD system for automatic analysis of CT perfusion maps
NASA Astrophysics Data System (ADS)
Hachaj, T.; Ogiela, M. R.
2011-03-01
In this article, the authors present novel algorithms developed for a computer-assisted diagnosis (CAD) system for the analysis of dynamic brain perfusion computed tomography (CT) maps: cerebral blood flow (CBF) and cerebral blood volume (CBV). These methods perform both quantitative analysis (detection, measurement, and description with a brain anatomy atlas (AA) of potential asymmetries/lesions) and qualitative analysis (semantic interpretation of visualized symptoms). The semantic interpretation of visualized symptoms (deciding the type of lesion, ischemic or hemorrhagic, and whether the brain tissue is at risk of infarction) is done by so-called cognitive inference processes, allowing reasoning on the character of pathological regions based on specialist image knowledge. The whole system is implemented on the .NET platform (C# programming language) and can be used on any standard PC with the .NET Framework installed.
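A sketch of one plausible asymmetry test of the kind the quantitative analysis performs (the mirror-image comparison and threshold are illustrative assumptions):

```python
# Sketch: compare each pixel of a perfusion map with its mirror across the
# midline and flag large relative asymmetries as candidate lesions.
import numpy as np

def asymmetry_map(cbf, thresh=0.3):
    mirrored = cbf[:, ::-1]                   # reflect across the midline
    denom = (cbf + mirrored) / 2 + 1e-6
    index = (cbf - mirrored) / denom          # relative left/right asymmetry
    return np.abs(index) > thresh             # candidate lesion pixels

cbf = np.random.rand(64, 64) * 60             # dummy CBF map (ml/100 g/min)
lesions = asymmetry_map(cbf)
```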
Probabilistic Low-Rank Multitask Learning.
Kong, Yu; Shao, Ming; Li, Kang; Fu, Yun
2018-03-01
In this paper, we consider the problem of learning multiple related tasks simultaneously with the goal of improving the generalization performance of individual tasks. The key challenge is to effectively exploit the shared information across multiple tasks as well as preserve the discriminative information for each individual task. To address this, we propose a novel probabilistic model for multitask learning (MTL) that can automatically balance between low-rank and sparsity constraints. The former assumes a low-rank structure of the underlying predictive hypothesis space to explicitly capture the relationship of different tasks and the latter learns the incoherent sparse patterns private to each task. We derive and perform inference via variational Bayesian methods. Experimental results on both regression and classification tasks on real-world applications demonstrate the effectiveness of the proposed method in dealing with the MTL problems.
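One common way to write the structure being balanced, assumed here for illustration (in the probabilistic model, these enter as priors whose strengths are inferred variationally rather than as hard constraints):

```latex
W = \underbrace{L}_{\text{shared, low-rank}} \; + \; \underbrace{S}_{\text{task-private, sparse}},
\qquad \operatorname{rank}(L) \ll \min(d, T),
```

where the columns of W are the d-dimensional predictors of the T tasks.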
Bayesian analysis of the dynamic cosmic web in the SDSS galaxy survey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leclercq, Florent; Wandelt, Benjamin; Jasche, Jens
Recent application of the Bayesian algorithm BORG to the Sloan Digital Sky Survey (SDSS) main sample galaxies resulted in the physical inference of the formation history of the observed large-scale structure from its origin to the present epoch. In this work, we use these inferences as inputs for a detailed probabilistic cosmic web-type analysis. To do so, we generate a large set of data-constrained realizations of the large-scale structure using a fast, fully non-linear gravitational model. We then perform a dynamic classification of the cosmic web into four distinct components (voids, sheets, filaments, and clusters) on the basis of the tidal field. Our inference framework automatically and self-consistently propagates typical observational uncertainties to web-type classification. As a result, this study produces accurate cosmographic classification of large-scale structure elements in the SDSS volume. By also providing the history of these structure maps, the approach allows an analysis of the origin and growth of the early traces of the cosmic web present in the initial density field and of the evolution of global quantities such as the volume and mass filling fractions of different structures. For the problem of web-type classification, the results described in this work constitute the first connection between theory and observations at non-linear scales including a physical model of structure formation and the demonstrated capability of uncertainty quantification. A connection between cosmology and information theory using real data also naturally emerges from our probabilistic approach. Our results constitute quantitative chrono-cosmography of the complex web-like patterns underlying the observed galaxy distribution.
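A sketch of an eigenvalue-counting web classifier (a common T-web convention, assumed here for illustration; the dynamic classification above differs in detail):

```python
# Sketch: classify a point by the number of tidal-tensor eigenvalues above a
# threshold, i.e. the number of collapsing directions.
import numpy as np

def web_type(tidal_tensor, lam_th=0.0):
    eigvals = np.linalg.eigvalsh(tidal_tensor)    # three real eigenvalues
    n_collapse = int(np.sum(eigvals > lam_th))
    return ["void", "sheet", "filament", "cluster"][n_collapse]

T = np.diag([0.5, 0.2, -0.1])                     # toy tidal tensor
print(web_type(T))                                # two collapsing axes -> "filament"
```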
Three-dimensional motion of the uncovertebral joint during head rotation.
Nagamoto, Yukitaka; Ishii, Takahiro; Iwasaki, Motoki; Sakaura, Hironobu; Moritomo, Hisao; Fujimori, Takahito; Kashii, Masafumi; Murase, Tsuyoshi; Yoshikawa, Hideki; Sugamoto, Kazuomi
2012-10-01
The uncovertebral joints are peculiar but clinically important anatomical structures of the cervical vertebrae. In the aged or degenerative cervical spine, osteophytes arising from an uncovertebral joint can cause cervical radiculopathy, often necessitating decompression surgery. Although these joints are believed to bear some relationship to head rotation, how the uncovertebral joints work during head rotation remains unclear. The purpose of this study is to elucidate 3D motion of the uncovertebral joints during head rotation. Study participants were 10 healthy volunteers who underwent 3D MRI of the cervical spine in 11 positions during head rotation: neutral (0°) and 15° increments to maximal head rotation on each side (left and right). Relative motions of the cervical spine were calculated by automatically superimposing a segmented 3D MR image of the vertebra in the neutral position over images of each position using the volume registration method. The 3D intervertebral motions of all 10 volunteers were standardized, and the 3D motion of uncovertebral joints was visualized on animations using data for the standardized motion. Inferred contact areas of uncovertebral joints were also calculated using a proximity mapping technique. The 3D animation of uncovertebral joints during head rotation showed that the joints alternate between contact and separation. Inferred contact areas of uncovertebral joints were situated directly lateral at the middle cervical spine and dorsolateral at the lower cervical spine. With increasing angle of rotation, inferred contact areas increased in the middle cervical spine, whereas areas in the lower cervical spine slightly decreased. In this study, the 3D motions of uncovertebral joints during head rotation were depicted precisely for the first time.
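The superimposition relies on rigid registration; as a sketch of its core computation (an illustrative stand-in: the study registers segmented image volumes, whereas the Kabsch solution below aligns matched landmark points):

```python
# Sketch: least-squares rigid alignment (Kabsch) of two matched 3D point sets.
import numpy as np

def kabsch(P, Q):
    # P, Q: (n, 3) matched points; returns R, t with Q ~ P @ R + t
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return R, Q.mean(0) - P.mean(0) @ R

P = np.random.rand(10, 3)
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
R, t = kabsch(P, P @ Rz)                        # recovers Rz, with t ~ 0
```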
The VLF Scattering Pattern of Lightning-Induced Ionospheric Disturbances
NASA Astrophysics Data System (ADS)
Cohen, M.; Golkowski, M.
2016-12-01
Very Low Frequency (VLF) transmitter remote sensing is a well-established technique to diagnose the impact of lightning on the D-region ionosphere via the EMP, quasi-static charge, and radiation belt electron precipitation. When lightning disturbs the ionosphere, VLF (3-30 kHz) narrowband signals propagating through that region are scattered, which can be detected as transient changes in amplitude and phase at distant receivers. In principle it is possible to then infer the ionospheric disturbance, but in practice this is difficult to do reliably. One of the challenges is that VLF perturbations are like snowflakes - no two events are the same. The transmitter-receiver geometry, the lightning properties, and the ionospheric condition before the event all impact the VLF scattering. This makes it very difficult, based on case studies which observe only one or two slivers at a time, to infer the scattering pattern of VLF events and therefore what happened to the ionosphere. Our aim is to get around this by looking at a huge database of lightning-induced ionospheric disturbances, taken over several years of recordings. We utilize an automatic extraction algorithm to find, identify, and characterize VLF perturbations on a massive scale. From there, we can investigate how the VLF perturbations change as a function of the parameters of the event. If it turns out that there exists a "canonical" lightning-induced disturbance as a function of geometry and lightning parameters, it will go a long way toward identifying the causative mechanisms and being able to accurately simulate and reproduce any lightning-induced ionospheric disturbance. We present results of our efforts to do just that.
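A minimal sketch of what the automatic extraction step might look like (the sliding-median baseline and threshold are illustrative assumptions, not the authors' algorithm):

```python
# Sketch: flag samples where narrowband VLF amplitude departs from a sliding
# median baseline by several (scaled) median absolute deviations.
import numpy as np

def detect_perturbations(amplitude, win=301, k=5.0):
    pad = win // 2
    padded = np.pad(amplitude, pad, mode="edge")
    baseline = np.array([np.median(padded[i:i + win]) for i in range(len(amplitude))])
    resid = amplitude - baseline
    mad = np.median(np.abs(resid)) + 1e-12
    return np.where(np.abs(resid) > k * 1.4826 * mad)[0]   # candidate events

sig = 0.1 * np.random.randn(5000)
sig[2500:2520] += 2.0                                      # synthetic perturbation
print(detect_perturbations(sig)[:5])
```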
Variational Bayesian Learning for Wavelet Independent Component Analysis
NASA Astrophysics Data System (ADS)
Roussos, E.; Roberts, S.; Daubechies, I.
2005-11-01
In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, for the neuro-scientific goal of extracting relevant "maps" from the data. This can be stated as a `blind' source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model is intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.
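In symbols, the generative model being inverted has the familiar noisy linear form, written here in the wavelet domain (a structural sketch; the paper's hierarchical priors on the mixing matrix and noise are omitted):

```latex
w = A s + \epsilon, \qquad \epsilon \sim \mathcal{N}\!\left(0, \Lambda^{-1}\right), \qquad
p(s) = \prod_i p_i(s_i) \quad \text{(sparsity-favoring priors)},
```

where w collects the wavelet coefficients of the observations, A is the mixing matrix, and s the sources; variational Bayes then yields posteriors over A, s, and the model complexity.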
Li, Guo-Zhong; Vissers, Johannes P C; Silva, Jeffrey C; Golick, Dan; Gorenstein, Marc V; Geromanos, Scott J
2009-03-01
A novel database search algorithm is presented for the qualitative identification of proteins over a wide dynamic range, in both simple and complex biological samples. The algorithm has been designed for the analysis of data originating from data-independent acquisitions, whereby multiple precursor ions are fragmented simultaneously. Measurements used by the algorithm include retention time, ion intensities, charge state, and accurate masses of both precursor and product ions from LC-MS data. The search algorithm uses an iterative process whereby each iteration incrementally increases the selectivity, specificity, and sensitivity of the overall strategy. Increased specificity is obtained by utilizing a subset database search approach, whereby for each subsequent stage of the search, only those peptides from securely identified proteins are queried. Tentative peptide and protein identifications are ranked and scored by their relative correlation to a number of models of known and empirically derived physicochemical attributes of proteins and peptides. In addition, the algorithm utilizes decoy database techniques to automatically determine the false positive identification rates. The search algorithm has been tested by comparing the search results from a four-protein mixture, the same four-protein mixture spiked into a complex biological background, and a variety of other "system"-type protein digest mixtures. The method was validated independently by data-dependent methods, while concurrently relying on replication and selectivity. Comparisons were also performed with other commercially and publicly available peptide fragmentation search algorithms. The presented results demonstrate the ability to correctly identify peptides and proteins from data-independent acquisition strategies with high sensitivity and specificity. They also illustrate a more comprehensive analysis of the samples studied, providing approximately 20% more protein identifications compared to a more conventional data-directed approach using the same identification criteria, with a concurrent increase in both sequence coverage and the number of modified peptides.
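The decoy-database step reduces, in its simplest form, to target-decoy counting; a sketch (the algorithm's actual iterative scoring is more involved):

```python
# Sketch: estimate the false discovery rate among accepted target identifications
# as the ratio of accepted decoy hits to accepted target hits.
def decoy_fdr(scores, is_decoy, threshold):
    targets = sum(1 for s, d in zip(scores, is_decoy) if s >= threshold and not d)
    decoys = sum(1 for s, d in zip(scores, is_decoy) if s >= threshold and d)
    return decoys / max(targets, 1)

scores = [9.1, 8.7, 7.2, 6.9, 6.5, 5.0]
is_decoy = [False, False, False, True, False, True]
print(decoy_fdr(scores, is_decoy, threshold=6.0))   # 1 decoy / 4 targets = 0.25
```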
NASA Astrophysics Data System (ADS)
Knežević, Sladjana; Läsker, Ronald; van de Ven, Glenn; Font, Joan; Raymond, John C.; Bailer-Jones, Coryn A. L.; Beckman, John; Morlino, Giovanni; Ghavamian, Parviz; Hughes, John P.; Heng, Kevin
2017-09-01
We present Hα spectroscopic observations and detailed modeling of the Balmer filaments in the supernova remnant (SNR) Tycho (SN 1572). We used GHαFaS (Galaxy Hα Fabry-Pérot Spectrometer) on the William Herschel Telescope with a 3.4' × 3.4' field of view, 0.2" pixel scale, and σ_instr = 8.1 km s-1 resolution at 1" seeing for ~10 hr, resulting in 82 spatial-spectral bins that resolve the narrow Hα line in the entire SN 1572 northeastern rim. For the first time, we can therefore mitigate artificial line broadening from unresolved differential motion and probe Hα emission parameters under varying shock and ambient medium conditions. The broad Hα line remains unresolved within the spectral coverage of 392 km s-1. We employed Bayesian inference to obtain reliable parameter confidence intervals and to quantify the evidence for models with multiple line components. The median Hα narrow-line (NL) FWHM of all bins and models is W_NL = (54.8 ± 1.8) km s-1 at the 95% confidence level, varying within [35, 72] km s-1 between bins and clearly broadened compared to the intrinsic (thermal) ≈20 km s-1. Possible line splits are accounted for, significant in ≈18% of the filament and presumably due to remaining projection effects. We also find widespread evidence for intermediate-line emission of a broad-neutral precursor, with a median W_IL = (180 ± 14) km s-1 (95% confidence). Finally, we present a measurement of the remnant's systemic velocity, V_LSR = -34 km s-1, and map differential line-of-sight motions. Our results confirm the existence and interplay of shock precursors in Tycho's remnant. In particular, we show that suprathermal NL emission is near-universal in SN 1572 and that, in the absence of an alternative explanation, collisionless SNR shocks constitute a viable acceleration source for Galactic TeV cosmic-ray protons.
NASA Astrophysics Data System (ADS)
Priyono, S.; Lubis, B. M.; Humaidi, S.; Prihandoko, B.
2018-05-01
The synthesis of Li4Ti5O12 (LTO) and the effect of heating during LTO sheet manufacturing on electrochemical performance have been investigated. The LTO anode material was synthesized from LiOH.H2O and TiO2 raw materials by a solid-state process. All raw materials were stoichiometrically mixed and milled in a planetary ball mill for 4 h to form the LTO precursor. The precursor was characterized with a Simultaneous Thermal Analyzer (STA) to determine the sintering temperature; the STA analysis revealed that the minimum temperature to sinter the precursor was 600 °C. The precursor was then sintered in a high-temperature furnace at 900 °C for 2 h in an air atmosphere. The final product was ground and sieved through a screen to obtain finer and more homogeneous particles, and was characterized by X-ray Diffraction (XRD) to determine its crystal structure and phases. LTO sheets were prepared by mixing LTO powders with PTFE and AB in a ratio of 85:10:5 wt% into a slurry, varying the heating temperature (40 °C, 50 °C, and 70 °C). The slurry was coated on Cu foil with the doctor blade method and dried at 80 °C for 1 h. The LTO sheet was characterized by FTIR to analyze functional groups, cut into circular discs 16 mm in diameter, and assembled with a separator, metallic lithium, and electrolyte into a coin cell in a glove box. An automatic battery cycler was used to measure the electrochemical performance and specific capacity of the cell. XRD analysis showed that a single LTO phase with a cubic crystal structure was formed. FTIR testing showed stretching vibrations of Ti-O and H-F from octahedral TiO6 and PTFE, respectively. Increasing the temperature during LTO sheet manufacturing does not change the structure of LTO. Cyclic voltammetry showed that the sample heated at 40 °C exhibited a better redox process than the others, and charge-discharge tests showed that it also has a higher specific capacity than the other samples, at 53 mAh·g-1.
Passive Infrared Thermographic Imaging for Mobile Robot Object Identification
NASA Astrophysics Data System (ADS)
Hinders, M. K.; Fehlman, W. L.
2010-02-01
The usefulness of thermal infrared imaging as a mobile robot sensing modality is explored, and a set of thermal-physical features used to characterize passive thermal objects in outdoor environments is described. Objects that extend laterally beyond the thermal camera's field of view, such as brick walls, hedges, picket fences, and wood walls as well as compact objects that are laterally within the thermal camera's field of view, such as metal poles and tree trunks, are considered. Classification of passive thermal objects is a subtle process since they are not a source for their own emission of thermal energy. A detailed analysis is included of the acquisition and preprocessing of thermal images, as well as the generation and selection of thermal-physical features from these objects within thermal images. Classification performance using these features is discussed, as a precursor to the design of a physics-based model to automatically classify these objects.
An Analysis of Eruptions Detected by the LMSAL Eruption Patrol
NASA Astrophysics Data System (ADS)
Hurlburt, N. E.; Higgins, P. A.; Jaffey, S.
2014-12-01
Observations of the solar atmosphere reveal a wide range of real and apparent motions, from small-scale jets and spicules to global-scale coronal mass ejections. Identifying and characterizing these motions is essential to advance our understanding of the drivers of space weather. Both automated and visual identifications are used for CMEs. To date, their precursors — eruptions near the solar surface — have been identified primarily by visual inspection. Here we report on an analysis of the eruptions detected by the Eruption Patrol, a data-mining module designed to automatically identify eruptions in data collected by the Solar Dynamics Observatory's Atmospheric Imaging Assembly (SDO/AIA). We describe the module and use it both to explore relations with other solar events recorded in the Heliophysics Event Knowledgebase and to identify and access data collected by the Interface Region Imaging Spectrograph (IRIS) and the Solar Optical Telescope (SOT) on Hinode for further analysis.
NASA Astrophysics Data System (ADS)
Borsdorff, Tobias; Andrasec, Josip; aan de Brugh, Joost; Hu, Haili; Aben, Ilse; Landgraf, Jochen
2018-05-01
In the perspective of the upcoming TROPOMI Sentinel-5 Precursor carbon monoxide data product, we discuss the benefit of using CO total column retrievals from cloud-contaminated SCIAMACHY 2.3 µm shortwave infrared spectra to detect atmospheric CO enhancements on regional and urban scales due to emissions from cities and wildfires. The study uses the operational Sentinel-5 Precursor algorithm SICOR, which infers the vertically integrated CO column together with effective cloud parameters. We investigate its capability to detect localized CO enhancements distinguishing between clear-sky observations and observations with low (< 1.5 km) and medium-high clouds (1.5-5 km). As an example, we analyse CO enhancements over the cities Paris, Los Angeles and Tehran as well as the wildfire events in Mexico-Guatemala 2005 and Alaska-Canada 2004. The CO average of the SCIAMACHY full-mission data set of clear-sky observations can detect weak CO enhancements of less than 10 ppb due to air pollution in these cities. For low-cloud conditions, the CO data product performs similarly well. For medium-high clouds, the observations show a reduced CO signal both over Tehran and Los Angeles, while for Paris no significant CO enhancement can be detected. This indicates that information about the vertical distribution of CO can be obtained from the SCIAMACHY measurements. Moreover, for the Mexico-Guatemala fires, the low-cloud CO data captures a strong outflow of CO over the Gulf of Mexico and the Pacific Ocean and so provides complementary information to clear-sky retrievals, which can only be obtained over land. For both burning events, enhanced CO values are even detectable with medium-high-cloud retrievals, confirming a distinct vertical extension of the pollution. The larger number of additional measurements, and hence the better spatial coverage, significantly improve the detection of wildfire pollution using both the clear-sky and cloudy CO retrievals. Due to the improved instrument performance of the TROPOMI instrument with respect to its precursor SCIAMACHY, the upcoming Sentinel-5 Precursor CO data product will allow improved detection of CO emissions and their vertical extension over cities and fires, making new research applications possible.
Quantum Bose-Hubbard model with an evolving graph as a toy model for emergent spacetime
NASA Astrophysics Data System (ADS)
Hamma, Alioscia; Markopoulou, Fotini; Lloyd, Seth; Caravelli, Francesco; Severini, Simone; Markström, Klas
2010-05-01
We present a toy model for interacting matter and geometry that explores quantum dynamics in a spin system as a precursor to a quantum theory of gravity. The model has no a priori geometric properties; instead, locality is inferred from the more fundamental notion of interaction between the matter degrees of freedom. The interaction terms are themselves quantum degrees of freedom so that the structure of interactions and hence the resulting local and causal structures are dynamical. The system is a Hubbard model where the graph of the interactions is a set of quantum evolving variables. We show entanglement between spatial and matter degrees of freedom. We study numerically the quantum system and analyze its entanglement dynamics. We analyze the asymptotic behavior of the classical model. Finally, we discuss analogues of trapped surfaces and gravitational attraction in this simple model.
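For reference, the fixed-graph Bose-Hubbard Hamiltonian is given below on the left; the toy model schematically promotes the hopping graph itself to quantum degrees of freedom, so each edge term acquires its own dynamical two-level variable ê_ij (a structural sketch, not the paper's exact Hamiltonian):

```latex
H_{\mathrm{BH}} = -t \sum_{\langle i,j\rangle} \left( b_i^{\dagger} b_j + \mathrm{h.c.} \right)
+ \frac{U}{2} \sum_i n_i (n_i - 1)
\;\;\longrightarrow\;\;
H = -t \sum_{i<j} \hat{e}_{ij} \left( b_i^{\dagger} b_j + \mathrm{h.c.} \right)
+ \frac{U}{2} \sum_i n_i (n_i - 1).
```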
NASA Astrophysics Data System (ADS)
Adamkovics, M.; Boering, K. A.
2003-12-01
The presence of photochemically-generated hazes has a significant impact on radiative transfer in planetary atmospheres. While the rates of particle formation have been inferred from photochemical or microphysical models constrained to match observations, these rates have not been determined experimentally. Thus, the fundamental kinetics of particle formation are not known and remain highly parameterized in planetary atmospheric models. We have developed instrumentation for measuring the formation rates and optical properties of organic aerosols produced by irradiating mixtures of precursor gases via in situ optical (633 nm) scattering and online quadrupole mass spectrometry (1-200 amu). Results for the generation of particulate hydrocarbons from the irradiation of pure, gas-phase CH4 as well as CH4/CO2 mixtures with vacuum ultraviolet (120-160 nm) light, along with simultaneous measurements of the evolution of higher gas-phase hydrocarbons, will be presented.
Clinopyroxene precursors to amphibole sponge in arc crust
Smith, Daniel J.
2014-01-01
The formation of amphibole cumulates beneath arc volcanoes is a key control on magma geochemistry, and generates a hydrous lower crust. Despite being widely inferred from trace element geochemistry as a major lower crustal phase, amphibole is neither abundant nor common as a phenocryst phase in arc lavas and erupted pyroclasts, prompting some authors to refer to it as a ‘cryptic’ fractionating phase. This study provides evidence that amphibole develops by evolved melts overprinting earlier clinopyroxene—a near-ubiquitous mineral in arc magmas. Reaction-replacement of clinopyroxene ultimately forms granoblastic amphibole lithologies. Reaction-replacement amphiboles have more primitive trace element chemistry (for example, lower concentrations of incompatible Pb) than amphibole phenocrysts, but still have chemistries suitable for producing La/Yb and Dy/Yb ‘amphibole sponge’ signatures. Amphibole can fractionate cryptically as reactions between melt and mush in lower crustal ‘hot zones’ produce amphibole-rich assemblages, without significant nucleation and growth of amphibole phenocrysts. PMID:25002269
Influence of heating procedures on the surface structure of stabilized polyacrylonitrile fibers
NASA Astrophysics Data System (ADS)
Zhao, Rui-Xue; Sun, Peng-fei; Liu, Rui-jian; Ding, Zhan-hui; Li, Xiang-shan; Liu, Xiao-yang; Zhao, Xu-dong; Gao, Zhong-min
2018-03-01
Stabilized polyacrylonitrile (PAN) fibers were obtained by heating precursor PAN fibers under an air atmosphere with different procedures. The surface structures and compositions of the as-prepared stabilized PAN fibers were investigated by SEM, SSNMR, XPS and Raman spectroscopy. The results show that 200 °C, 220 °C, 250 °C, and 280 °C are key temperatures for the preparation of stabilized PAN fibers. The effect of the heating gradient on the structure of stabilized PAN fibers was studied. Possible chemical structural formulas for the PAN fibers are provided, comprising stable and unstable structures: the stable structure (α-type) can endure strong chemical reactions, while the unstable structures (β- or γ-type) can mitigate drastic oxidation reactions. These inferences about the chemical formula of stabilized PAN fibers benefit the design of appropriate surface structures for the production of high-quality carbon fibers.
The Sensitivity of U.S. Surface Ozone Formation to NOx, and VOCs as Viewed from Space
NASA Technical Reports Server (NTRS)
Duncan, Bryan N.; Yoshida, Yasuko; Sillman, Sanford; Retscher, Christian; Pickering, Kenneth E.; Martin, Randall V.; Celarier, Edward A.
2009-01-01
We investigated variations in the sensitivity of surface ozone formation in summer to precursor species concentrations of volatile organic compounds (VOCs) and nitrogen oxides (NOx) as inferred from the ratio of tropospheric columns of formaldehyde and nitrogen dioxide from the Aura Ozone Monitoring Instrument (OMI). The data indicate that ozone formation became: 1. more sensitive to NOx over most of the U.S. from 2005 to 2007 because of substantial decreases in NOx emissions, primarily from stationary sources, and 2. more sensitive to NOx with increasing temperature, in part because emissions of highly reactive biogenic isoprene increase with temperature, increasing the total VOC reactivity. Based on our interpretation of the data, current strategies implemented to reduce unhealthy levels of surface ozone should focus more on reducing NOx emissions, except in some downtown areas which have historically benefited from reductions in VOC emissions.
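A sketch of how such a column-ratio indicator is applied (the transition values below are illustrative assumptions; the study's interpretation rests on its own chemical modeling):

```python
# Sketch: classify the local ozone-production regime from the HCHO/NO2
# tropospheric column ratio.
def ozone_regime(hcho_column, no2_column, low=1.0, high=2.0):
    ratio = hcho_column / no2_column
    if ratio < low:
        return "VOC-sensitive"
    if ratio > high:
        return "NOx-sensitive"
    return "transitional"

print(ozone_regime(2.4e16, 0.8e16))   # ratio 3.0 -> "NOx-sensitive"
```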
Simultaneous VLA observations of a flare at 6 and 20 centimeter wavelengths
NASA Technical Reports Server (NTRS)
Velusamy, T.; Kundu, M. R.; Schmahl, E. J.; Mccabe, M.
1987-01-01
Using the Very Large Array at 6 and 20 cm wavelengths, a May 15, 1980 solar flare was mapped. The 1B flare, as observed in H alpha at Mees Solar Observatory, Maui, Hawaii, appeared as two sequential flares occurring at different neutral lines. The peaks of the time profile at 20 cm were delayed with respect to the 6 cm counterparts, but they were related to each other and to the H alpha activity. At 20 cm, precursor activity occurred, and an oppositely polarized source an arcminute away from the main burst may have triggered the flare. The main 20 cm emission appeared to be displaced limbward from the 6 cm burst. If both the 6 and 20 cm emission originated in the same system of loops, it is inferred that the apparent lateral displacement was caused by a height difference of 33,000 km in the sources of emission.
Meinecke, Jena; Tzeferacos, Petros; Bell, Anthony; Bingham, Robert; Clarke, Robert; Churazov, Eugene; Crowston, Robert; Doyle, Hugo; Drake, R. Paul; Heathcote, Robert; Koenig, Michel; Kuramitsu, Yasuhiro; Kuranz, Carolyn; Lee, Dongwook; MacDonald, Michael; Murphy, Christopher; Notley, Margaret; Park, Hye-Sook; Pelka, Alexander; Ravasio, Alessandra; Reville, Brian; Sakawa, Youichi; Wan, Willow; Woolsey, Nigel; Yurchak, Roman; Miniati, Francesco; Schekochihin, Alexander; Lamb, Don; Gregori, Gianluca
2015-01-01
The visible matter in the universe is turbulent and magnetized. Turbulence in galaxy clusters is produced by mergers and by jets of the central galaxies and believed responsible for the amplification of magnetic fields. We report on experiments looking at the collision of two laser-produced plasma clouds, mimicking, in the laboratory, a cluster merger event. By measuring the spectrum of the density fluctuations, we infer developed, Kolmogorov-like turbulence. From spectral line broadening, we estimate a level of turbulence consistent with turbulent heating balancing radiative cooling, as it likely does in galaxy clusters. We show that the magnetic field is amplified by turbulent motions, reaching a nonlinear regime that is a precursor to turbulent dynamo. Thus, our experiment provides a promising platform for understanding the structure of turbulence and the amplification of magnetic fields in the universe. PMID:26100873
Hu, Qi-Hou; Xie, Zhou-Qing; Wang, Xin-Ming; Kang, Hui; He, Quan-Fu; Zhang, Pengfei
2013-01-01
Isoprene and monoterpenes are important precursors of secondary organic aerosols (SOA) over continents. However, their contributions to aerosols over oceans are still inconclusive. Here we analyzed SOA tracers from isoprene and monoterpenes in aerosol samples collected over oceans during the Chinese Arctic and Antarctic Research Expeditions. Combined with literature reports elsewhere, we found that the dominant tracers are the oxidation products of isoprene. The concentrations of tracers varied considerably, with mean values approximately one order of magnitude higher in the Northern Hemisphere than in the Southern Hemisphere. High values were generally observed in coastal regions, a phenomenon ascribed to outflow from continental sources. High levels of isoprene can be emitted from oceans and consequently have a significant impact on marine SOA, as inferred from isoprene SOA during phytoplankton blooms, which may abruptly increase to 95 ng/m³ in the boundary layer over remote oceans.
Earthquake forecasting studies using radon time series data in Taiwan
NASA Astrophysics Data System (ADS)
Walia, Vivek; Kumar, Arvind; Fu, Ching-Chou; Lin, Shih-Jung; Chou, Kuang-Wu; Wen, Kuo-Liang; Chen, Cheng-Hong
2017-04-01
For a few decades, a growing number of studies have shown the usefulness of seismogeochemical data interpreted as geochemical precursory signals for impending earthquakes, and radon has been identified as one of the most reliable geochemical precursors. Radon is recognized as a short-term precursor and is being monitored in many countries. This study is aimed at developing an effective earthquake forecasting system by inspecting long-term radon time series data obtained from a network of radon monitoring stations established along different faults of Taiwan. Continuous radon time series have been recorded for earthquake studies, and some significant variations associated with strong earthquakes have been observed. The data are also examined to evaluate earthquake precursory signals against environmental factors. An automated real-time database operating system has been developed recently to improve data processing for earthquake precursory studies. In addition, the study is aimed at the appraisal and filtering of these environmental parameters, in order to create a real-time database that supports our earthquake precursory study. In recent years, an automatically operating real-time database has been developed using R, an open-source programming language, to carry out statistical computation on the data. To integrate the data with our working procedures, we use the popular open-source web application stack AMP (Apache, MySQL, and PHP), creating a website that effectively presents and helps us manage the real-time database.
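A sketch of one simple anomaly screen on such a series (the rolling window and threshold are illustrative assumptions; the project's actual R-based processing also corrects for environmental covariates):

```python
# Sketch: flag radon readings that depart from a centered rolling band.
import numpy as np
import pandas as pd

def radon_anomalies(series, window=30, k=2.0):
    mu = series.rolling(window, center=True, min_periods=1).mean()
    sd = series.rolling(window, center=True, min_periods=1).std().fillna(0.0)
    return series[(series - mu).abs() > k * sd]

t = pd.date_range("2016-01-01", periods=365)
radon = pd.Series(10 + np.random.randn(365), index=t)   # dummy daily radon levels
radon.iloc[200] += 10                                   # synthetic precursory spike
print(radon_anomalies(radon))
```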
Real-time eruption forecasting using the material Failure Forecast Method with a Bayesian approach
NASA Astrophysics Data System (ADS)
Boué, A.; Lesage, P.; Cortés, G.; Valette, B.; Reyes-Dávila, G.
2015-04-01
Many attempts at deterministic forecasting of eruptions and landslides have been made using the material Failure Forecast Method (FFM). This method consists of fitting an empirical power law to precursory patterns of seismicity or deformation. Until now, most studies have presented hindsight forecasts based on complete time series of precursors and have not evaluated the ability of the method to carry out real-time forecasting with partial precursory sequences. In this study, we present a rigorous approach to the FFM designed for real-time applications on volcano-seismic precursors. We use a Bayesian approach based on FFM theory and an automatic classification of seismic events. The probability distributions of the data, deduced from the performance of this classification, are used as input. As output, it provides the probability of the forecast time at each observation time before the eruption. The spread of the a posteriori probability density function of the prediction time and its stability with respect to the observation time are used as criteria to evaluate the reliability of the forecast. We test the method on precursory accelerations of long-period seismicity prior to vulcanian explosions at Volcán de Colima (Mexico). For explosions preceded by a single phase of seismic acceleration, we obtain accurate and reliable forecasts using approximately 80% of the whole precursory sequence. It is, however, more difficult to apply the method to multiple acceleration patterns.
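The empirical law underlying the FFM is Voight's relation for the rate of a precursory observable Ω:

```latex
\frac{d\dot{\Omega}}{dt} = A\,\dot{\Omega}^{\alpha}, \qquad \alpha > 1,
```

so the inverse rate 1/Ω̇ decreases toward zero (linearly when α = 2) at the failure time; the Bayesian formulation above yields a posterior over the failure time, A, and α that sharpens as the precursory sequence accumulates.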
Balloon-borne air traffic management (ATM) as a precursor to space-based ATM
NASA Astrophysics Data System (ADS)
Brodsky, Yuval; Rieber, Richard; Nordheim, Tom
2012-01-01
The International Space University—Balloon Air traffic control Technology Experiment (I-BATE) has flown on board two stratospheric balloons and has tracked nearby aircraft by receiving their Automatic Dependent Surveillance-Broadcast (ADS-B) transmissions. Air traffic worldwide is facing increasing congestion; daily European flight volumes are predicted to more than double by 2030 compared to 2009. ADS-B is an air traffic management system being used to mitigate this congestion. Each aircraft is equipped with both a GPS receiver and an ADS-B transponder. The transponder transmits the aircraft's unique identifier, position, heading, and velocity once per second. These transmissions can be received by ground stations for use in traditional air traffic management; airspace not monitored by such ground stations or other traditional means remains uncontrolled and poorly monitored. A constellation of space-based ADS-B receivers could close these gaps and provide global air traffic monitoring. By flying an ADS-B receiver on a stratospheric balloon, I-BATE has served as a precursor to a constellation of ADS-B-equipped Earth-orbiting satellites. From the ~30 km balloon altitude, I-BATE tracked aircraft at ranges of up to 850 km. The experiment has served as a proof of concept for space-based air traffic management and supports a technology readiness level of 6 for space-based ADS-B reception.
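A rough line-of-sight check makes the quoted range plausible (round numbers: R⊕ ≈ 6371 km, balloon at 30 km, aircraft at ~10 km):

```latex
d \approx \sqrt{2 R_\oplus h_{\mathrm{balloon}}} + \sqrt{2 R_\oplus h_{\mathrm{aircraft}}}
\approx \sqrt{2 \cdot 6371 \cdot 30} + \sqrt{2 \cdot 6371 \cdot 10}
\approx 618 + 357 \approx 975~\mathrm{km},
```

so an 850 km detection sits comfortably inside the geometric radio horizon.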
Static analysis of class invariants in Java programs
NASA Astrophysics Data System (ADS)
Bonilla-Quintero, Lidia Dionisia
2011-12-01
This paper presents a technique for the automatic inference of class invariants from Java bytecode. Class invariants are very important both for compiler optimization and as an aid to programmers in their efforts to reduce the number of software defects. We present the original DC-invariant analysis of Adam Webber, discuss its shortcomings, and suggest several different ways to improve it. To apply the DC-invariant analysis and identify DC-invariant assertions, all one needs is a monotonic method analysis function and a suitable assertion domain. The DC-invariant algorithm is very general; the method analysis, however, can be highly tuned to the problem at hand. For example, one could choose shape analysis as the method analysis function and use the DC-invariant analysis to extend it to an analysis that yields class-wide invariants describing the shapes of linked data structures. We have a prototype implementation: a system we refer to as "the analyzer" that infers DC-invariant unary and binary relations and reports them to the user in a human-readable format. The analyzer uses those relations to identify unnecessary array bounds checks in Java programs and to perform null-reference analysis. It uses Adam Webber's relational constraint technique for the class-invariant binary relations. Early results with the analyzer were very imprecise in the presence of "dirty-called" methods. A dirty-called method is one that is called, either directly or transitively, from any constructor of the class, or from any method of the class at a point at which a disciplined field has been altered. This result was unexpected and forced an extensive search for improved techniques. An important contribution of this paper is the suggestion of several ways to improve the results by changing the way dirty-called methods are handled. The new techniques expand the set of class invariants that can be inferred beyond Webber's original results; the technique that produces the best results uses in-line analysis. Final results are promising: we can infer sound class invariants for full-scale applications, not just toy ones.
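A toy illustration of the payoff, in Python for consistency with the other sketches here even though the analyzer targets Java bytecode (all names are hypothetical): a class invariant 0 <= pos < len(buf), established by the constructor and preserved by every method, which is exactly the kind of fact that licenses removing the bounds check in read.

```python
# Toy class: the invariant 0 <= pos < len(buf) holds on entry to every public
# method, so the indexing in read() can never fail.
class Cursor:
    def __init__(self, data):
        self.buf = list(data) or [0]       # invariant established: len(buf) >= 1
        self.pos = 0                       # ... and 0 <= pos < len(buf)

    def advance(self):
        self.pos = (self.pos + 1) % len(self.buf)   # preserves the invariant

    def read(self):
        return self.buf[self.pos]          # bounds check provably redundant

c = Cursor("abc")
c.advance()
print(c.read())                            # 'b'
```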
2009-01-01
Background: Prostate cancer is a leading cancer worldwide, characterized by aggressive metastasis. Reflecting this clinical heterogeneity, prostate cancer displays different stages and grades related to aggressive metastatic disease. Although numerous studies have used microarray analysis and traditional clustering methods to identify individual genes during disease processes, the important gene regulations remain unclear. We present a computational method for automatically inferring genetic regulatory networks from microarray data, with transcription factor analysis and conditional independence testing, to explore the potentially significant gene regulatory networks that are correlated with cancer, tumor grade, and stage in prostate cancer. Results: To deal with missing values in microarray data, we used a K-nearest-neighbors (KNN) algorithm to determine the precise expression values. We applied web services technology to wrap the bioinformatics toolkits and databases, to automatically extract the promoter regions of DNA sequences, and to predict the transcription factors that regulate the gene expressions. We adopted a microarray dataset consisting of 62 primary tumors and 41 normal prostate tissues from the Stanford Microarray Database (SMD) as the target dataset to evaluate our method. The predicted results identified possible biomarker genes related to cancer and indicated that androgen-related functions and processes may be involved in the development of prostate cancer and promote cell death in the cell cycle. Our predictions showed that sub-networks of the genes SREBF1, STAT6 and PBX1 are strongly related to a high extent, while the ETS transcription factors ELK1, JUN and EGR2 are related to a low extent. The gene SLC22A3 may explain clinically the differentiation associated with high-grade cancer compared with low-grade cancer. Enhancer of Zeste Homolog 2 (EZH2), regulated by RUNX1 and STAT3, is correlated with the pathological stage. Conclusions: We provide a computational framework to reconstruct the genetic regulatory network from microarray data using biological knowledge and constraint-based inference. Our method is helpful in verifying possible interaction relations in gene regulatory networks and in filtering out incorrect relations inferred by imperfect methods. We predicted not only individual genes related to cancer but also discovered significant gene regulatory networks. Our method is validated against several published papers and databases, and the significant gene regulatory networks perform critical biological functions and processes including cell adhesion molecules, androgen and estrogen metabolism, smooth muscle contraction, and GO-annotated processes. These significant gene regulations and the critical concept of tumor progression are useful for understanding cancer biology and disease treatment. PMID:20025723
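A sketch of the KNN missing-value step, using scikit-learn's imputer as a stand-in for the paper's own KNN routine:

```python
# Sketch: each missing expression value is replaced by a mean over the
# k most similar samples (rows), measured on the observed entries.
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[1.0, 2.0, np.nan],
              [1.1, np.nan, 3.0],
              [0.9, 2.1, 2.9],
              [1.2, 1.9, 3.1]])
X_filled = KNNImputer(n_neighbors=2).fit_transform(X)
print(X_filled)
```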
Zhang, Hongzhi R; Huynh, Lam K; Kungwan, Nawee; Yang, Zhiwei; Zhang, Shaowen
2007-05-17
The Utah Surrogate Mechanism was extended in order to model a stoichiometric premixed cyclohexane flame (P = 30 Torr). Generic rates were assigned to the reaction classes of hydrogen abstraction, beta scission, and isomerization, and the resulting mechanism was found to be adequate in describing the combustion chemistry of cyclohexane. Satisfactory results were obtained in comparison with the experimental data for oxygen, major products and important intermediates, which include the major soot precursors of C2-C5 unsaturated species. Measured concentrations of the immediate products of fuel decomposition were also successfully reproduced. For example, the maximum concentrations of benzene and 1,3-butadiene, two major fuel decomposition products formed via competing pathways, were predicted within 10% of the measured values. Ring-opening reactions compete with cascading dehydrogenation for the decomposition of the conjugate cyclohexyl radical. The major ring-opening pathways produce the 1-buten-4-yl radical, molecular ethylene, and 1,3-butadiene; the butadiene is formed via beta scission after a 1-4 internal hydrogen migration of the 1-hexen-6-yl radical. Cascading dehydrogenation also makes an important contribution to fuel decomposition and provides the exclusive formation pathway of benzene. Benzene formation routes via combination of C2-C4 hydrocarbon fragments were found to be insignificant under the present flame conditions, as inferred from the concentration peak of fulvene occurring later than that of benzene, because the analogous species series for benzene formation via dehydrogenation was found to be a precursor to the parent species of fulvene.
MILLER, WARREN B.; BARD, DAVID E.; PASTA, DAVID J.; RODGERS, JOSEPH LEE
2010-01-01
In spite of long-held beliefs that traits related to reproductive success tend to become fixed by evolution with little or no genetic variation, there is now considerable evidence that the natural variation of fertility within populations is genetically influenced and that a portion of that influence is related to the motivational precursors to fertility. We conduct a two-stage analysis to examine these inferences in a time-ordered multivariate context. First, using data from the National Longitudinal Survey of Youth, 1979, and LISREL analysis, we develop a structural equation model in which five hypothesized motivational precursors to fertility, measured in 1979–1982, predict both a child-timing and a child-number outcome, measured in 2002. Second, having chosen two time-ordered sequences of six variables from the SEM to represent our phenotypic models, we use Mx to conduct both univariate and multivariate behavioral genetic analyses with the selected variables. Our results indicate that one or more genes acting within a gene network have additive effects that operate through child-number desires to affect both the timing of the next child born and the final number of children born, that one or more genes acting through a separate network may have additive effects operating through gender role attitudes to produce downstream effects on the two fertility outcomes, and that no genetic variance is associated with either child-timing intentions or educational intentions. PMID:20608103
The fidelity of Kepler eclipsing binary parameters inferred by the neural network
NASA Astrophysics Data System (ADS)
Holanda, N.; da Silva, J. R. P.
2018-04-01
This work aims to test the fidelity and efficiency of automatically obtaining the orbital elements of eclipsing binary systems from light curves using neural network models. We selected a random sample of 78 systems from over 1400 detached eclipsing binaries in the Kepler Eclipsing Binary Catalog, processed using the neural network approach. The orbital parameters of the sample systems were measured by applying the traditional method of light-curve fitting, with uncertainties calculated by the bootstrap method, employing the JKTEBOP code. These estimated parameters were compared with those obtained by the neural network approach for the same systems. The results reveal good agreement between the techniques for the sum of the fractional radii and moderate agreement for e cos ω and e sin ω, but the orbital inclination is clearly underestimated by the neural network.
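A sketch of bootstrap uncertainty estimation of the kind used with the light-curve fits (the toy Gaussian dip below stands in for JKTEBOP's full eclipsing-binary model):

```python
# Sketch: refit resampled light curves and take the spread of the estimates.
import numpy as np
from scipy.optimize import curve_fit

def model(t, depth, t0, width):
    return 1.0 - depth * np.exp(-0.5 * ((t - t0) / width) ** 2)  # toy eclipse dip

rng = np.random.default_rng(1)
t = np.linspace(-1.0, 1.0, 200)
flux = model(t, 0.02, 0.0, 0.1) + rng.normal(0.0, 0.002, t.size)

estimates = []
for _ in range(200):
    idx = rng.integers(0, t.size, t.size)          # resample observations
    p, _ = curve_fit(model, t[idx], flux[idx], p0=[0.01, 0.0, 0.2])
    estimates.append(p)
errors = np.std(estimates, axis=0)                 # bootstrap 1-sigma uncertainties
```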
Unsupervised learning of natural languages
Solan, Zach; Horn, David; Ruppin, Eytan; Edelman, Shimon
2005-01-01
We address the problem, fundamental to linguistics, bioinformatics, and certain other disciplines, of using corpora of raw symbolic sequential data to infer underlying rules that govern their production. Given a corpus of strings (such as text, transcribed speech, chromosome or protein sequence data, sheet music, etc.), our unsupervised algorithm recursively distills from it hierarchically structured patterns. The adios (automatic distillation of structure) algorithm relies on a statistical method for pattern extraction and on structured generalization, two processes that have been implicated in language acquisition. It has been evaluated on artificial context-free grammars with thousands of rules, on natural languages as diverse as English and Chinese, and on protein data correlating sequence with function. This unsupervised algorithm is capable of learning complex syntax, generating grammatical novel sentences, and proving useful in other fields that call for structure discovery from raw data, such as bioinformatics. PMID:16087885
Using ProMED-Mail and MedWorm blogs for cross-domain pattern analysis in epidemic intelligence.
Stewart, Avaré; Denecke, Kerstin
2010-01-01
In this work we motivate the use of medical blog user-generated content for gathering facts about disease-reporting events to support biosurveillance investigation. Given the characteristics of blogs, the extraction of such events is made more difficult by noise and data abundance. We address the problem of automatically inferring disease-reporting event extraction patterns in this noisier setting. The sublanguage used in outbreak reports is exploited to align with sequences of disease-reporting sentences in blogs. Based on our Cross Domain Pattern Analysis Framework, experimental results show that phrase-level sequences tend to produce more overlap across the domains than word-level sequences. The cross-domain alignment process is effective at filtering noisy sequences from blogs and extracting good candidate sequence patterns from an abundance of text.
From seconds to months: an overview of multi-scale dynamics of mobile telephone calls
NASA Astrophysics Data System (ADS)
Saramäki, Jari; Moro, Esteban
2015-06-01
Big Data on electronic records of social interactions allow approaching human behaviour and sociality from a quantitative point of view with unforeseen statistical power. Mobile telephone Call Detail Records (CDRs), automatically collected by telecom operators for billing purposes, have proven especially fruitful for understanding one-to-one communication patterns as well as the dynamics of social networks that are reflected in such patterns. We present an overview of empirical results on the multi-scale dynamics of social behaviour and networks inferred from mobile telephone calls. We begin with the shortest timescales and fastest dynamics, such as the burstiness of call sequences between individuals, and "zoom out" towards longer temporal and larger structural scales, from temporal motifs formed by correlated calls between multiple individuals to the long-term dynamics of social groups. We conclude this overview with a future outlook.
Discrete mathematics for spatial data classification and understanding
NASA Astrophysics Data System (ADS)
Mussio, Luigi; Nocera, Rossella; Poli, Daniela
1998-12-01
Data processing in information technology requires new tools involving discrete mathematics, such as data compression, signal enhancement, data classification and understanding, and hypertext and multimedia (including their educational aspects), because the sheer mass of data demands automatic data management and permits no a priori knowledge. The methodologies and procedures used in this class of problems concern different kinds of segmentation techniques and relational strategies, such as clustering, parsing, vectorization, formalization, fitting and matching. At the same time, the complexity of this approach requires optimal sampling and outlier detection to be performed at the outset, in order to define the set of data to be processed: raw data alone supply very poor information. For these reasons, no hypotheses about the distributional behavior of the data can generally be made, and judgments must be obtained by distribution-free inference alone.
Video repairing under variable illumination using cyclic motions.
Jia, Jiaya; Tai, Yu-Wing; Wu, Tai-Pang; Tang, Chi-Keung
2006-05-01
This paper presents a complete system capable of synthesizing a large number of pixels that are missing due to occlusion or damage in an uncalibrated input video. These missing pixels may correspond to the static background or cyclic motions of the captured scene. Our system employs user-assisted video layer segmentation, while the main processing in video repair is fully automatic. The input video is first decomposed into the color and illumination videos. The necessary temporal consistency is maintained by tensor voting in the spatio-temporal domain. Missing colors and illumination of the background are synthesized by applying image repairing. Finally, the occluded motions are inferred by spatio-temporal alignment of collected samples at multiple scales. We tested our system on difficult examples with variable illumination, where the capturing camera can be stationary or in motion.
Efforts to integrate CMIP metadata and standards into NOAA-GFDL's climate model workflow
NASA Astrophysics Data System (ADS)
Blanton, C.; Lee, M.; Mason, E. E.; Radhakrishnan, A.
2017-12-01
Modeling centers participating in CMIP6 run model simulations, publish requested model output (conforming to community data standards), and document models and simulations using ES-DOC. GFDL developed workflow software implementing some best practices to meet these metadata and documentation requirements. The CMIP6 Data Request defines the variables that should be archived for each experiment and specifies their spatial and temporal structure. We used the Data Request's dreqPy Python library to write GFDL model configuration files as an alternative to hand-crafted tables. There was also a largely successful effort to standardize variable names within the model to reduce the additional overhead of translating "GFDL to CMOR" variables at a later stage in the pipeline. The ES-DOC ecosystem provides tools and standards to create, publish, and view various types of community-defined CIM documents, most notably model and simulation documents. Although ES-DOC will automatically create simulation documents during publishing by harvesting NetCDF global attributes, the information must be collected, stored, and placed in the NetCDF files by the workflow. We propose to develop a GUI to collect the simulation document precursors. In addition, a new MIP for CMIP6, CPMIP (a comparison of the computational performance of climate models), is documented using machine and performance CIM documents. We used ES-DOC's pyesdoc Python library to automatically create these machine and performance documents. We hope that these and similar efforts will become permanent features of the GFDL workflow to facilitate future participation in CMIP-like activities.
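For orientation, a minimal sketch of the kind of Data Request query dreqPy enables; the collection key 'var' and the item attributes follow published dreqPy examples, but names can differ between Data Request versions, so treat them as assumptions rather than a fixed API.

    # Hedged sketch: load the CMIP6 Data Request with dreqPy and list a few
    # requested variables, the sort of query that can drive model
    # configuration files instead of hand-crafted tables.
    from dreqPy import dreq

    dq = dreq.loadDreq()
    for v in dq.coll['var'].items[:10]:
        print(v.label, getattr(v, 'units', '?'))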
NASA Astrophysics Data System (ADS)
Moebius, B.; Pfennigbauer, M.; Pereira do Carmo, J.
2017-11-01
During the previous 15 years, Rendezvous and Docking Sensors (RVS) were developed, manufactured and qualified. In the meantime they have been successfully applied in several space missions: for automatic docking of the European ATV "Jules Verne" on the International Space Station in 2008; for automatic berthing of the first Japanese HTV in 2009; and, as the precursor model ARP-RVS, for measurements during Shuttle Atlantis flights STS-84 and STS-86 to the MIR station. Up to now, about twenty RVS Flight Models for application on ATV, HTV and the American Cygnus spacecraft have been manufactured and delivered to the respective customers. RVS is designed for tracking of customer-specific, cooperative targets (i.e. retro-reflectors arranged in specific geometries). Once RVS has acquired the target, the sensor measures the distance to the target by time-of-flight determination of a pulsed laser beam. Any echo return provokes an interrupt signal and thus the readout of the corresponding encoder positions of the two scan mirrors, which represent the azimuth and elevation measurement directions to the target [2], [3]. The capability of the RVS for 3D mapping of the scene makes the fully space-qualified RVS a true 3D lidar sensor; it thus forms a sound technical base for the compact 3D lidar breadboard that was developed in the course of the Imaging Lidar Technology (ILT) project.
Fully automatic segmentation of white matter hyperintensities in MR images of the elderly.
Admiraal-Behloul, F; van den Heuvel, D M J; Olofsen, H; van Osch, M J P; van der Grond, J; van Buchem, M A; Reiber, J H C
2005-11-15
The role of quantitative image analysis in large clinical trials is continuously increasing. Several methods are available for performing white matter hyperintensity (WMH) volume quantification. They vary in the amount of human interaction involved. In this paper, we describe a fully automatic segmentation that was used to quantify WMHs in a large clinical trial on elderly subjects. Our segmentation method combines information from 3 different MR images: proton density (PD), T2-weighted and fluid-attenuated inversion recovery (FLAIR) images; our method uses an established artificial intelligence technique (a fuzzy inference system) and does not require extensive computations. The reproducibility of the segmentation was evaluated in 9 patients who underwent scan-rescan with repositioning; an intraclass correlation coefficient (ICC) of 0.91 was obtained. The effect of differences in image resolution was tested in 44 patients, scanned with 6- and 3-mm slice thickness FLAIR images; we obtained an ICC value of 0.99. The accuracy of the segmentation was evaluated on 100 patients for whom manual delineation of WMHs was available; the obtained ICC was 0.98 and the similarity index was 0.75. Besides the fact that the approach demonstrated very high volumetric and spatial agreement with expert delineation, the software did not require more than 2 min per patient (from loading the images to saving the results) on a Pentium-4 processor (512 MB RAM).
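A minimal sketch of a Mamdani-style fuzzy rule over the three co-registered contrasts. The membership breakpoints and the single rule are invented for illustration; the published system's features, rules and defuzzification are more elaborate.

    # Hedged sketch: fuzzy AND (minimum) of "bright" memberships on FLAIR,
    # T2 and PD images, thresholded into a WMH candidate mask.
    import numpy as np

    def bright(x, lo, hi):
        """Piecewise-linear membership in [0, 1] for 'bright' intensity."""
        return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

    def wmh_score(pd_img, t2_img, flair_img):
        # Rule: WMH if FLAIR bright AND T2 bright AND PD bright.
        return np.minimum.reduce([bright(flair_img, 0.6, 0.9),
                                  bright(t2_img, 0.5, 0.8),
                                  bright(pd_img, 0.4, 0.7)])

    rng = np.random.default_rng(0)
    pd_i, t2_i, fl_i = rng.random((3, 64, 64))   # stand-in normalized images
    mask = wmh_score(pd_i, t2_i, fl_i) > 0.5
    print("WMH voxels:", int(mask.sum()))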
An ontology-driven tool for structured data acquisition using Web forms.
Gonçalves, Rafael S; Tu, Samson W; Nyulas, Csongor I; Tierney, Michael J; Musen, Mark A
2017-08-01
Structured data acquisition is a common task that is widely performed in biomedicine. However, current solutions for this task are far from providing a means to structure data in such a way that it can be automatically employed in decision making (e.g., in our example application domain of clinical functional assessment, for determining eligibility for disability benefits) based on conclusions derived from acquired data (e.g., assessment of impaired motor function). To use data in these settings, we need it structured in a way that can be exploited by automated reasoning systems, for instance, in the Web Ontology Language (OWL), the de facto ontology language for the Web. We tackle the problem of generating Web-based assessment forms from OWL ontologies, and aggregating input gathered through these forms as an ontology of "semantically-enriched" form data that can be queried using an RDF query language, such as SPARQL. We developed an ontology-based structured data acquisition system, which we present through its specific application to the clinical functional assessment domain. We found that data gathered through our system is highly amenable to automatic analysis using queries. We demonstrated how ontologies can be used to help structure Web-based forms and to semantically enrich the data elements of the acquired structured data. The ontologies associated with the enriched data elements enable automated inferences and provide a rich vocabulary for performing queries.
Nierat, Marie-Cécile; Demiri, Suela; Dupuis-Lozeron, Elise; Allali, Gilles; Morélot-Panzini, Capucine; Similowski, Thomas; Adler, Dan
2016-01-01
Human breathing stems from automatic brainstem neural processes. It can also be operated by cortico-subcortical networks, especially when breathing becomes uncomfortable because of external or internal inspiratory loads. How the "irruption of breathing into consciousness" interacts with cognition remains unclear, but a case report in a patient with defective automatic breathing (Ondine's curse syndrome) has shown that there was a cognitive cost of breathing when the respiratory cortical networks were engaged. In a pilot study of putative breathing-cognition interactions, the present study relied on a randomized design to test the hypothesis that experimentally loaded breathing in 28 young healthy subjects would have a negative impact on cognition as tested by the "timed up-and-go" test (TUG) and its imagery version (iTUG). Progressive inspiratory threshold loading resulted in slower TUG and iTUG performance. Participants consistently imagined themselves faster than they actually were. However, progressive inspiratory loading slowed iTUG more than TUG, a finding that is unexpected with regard to the known effects of dual tasking on TUG and iTUG (slower TUG but stable iTUG). Insofar as the cortical networks engaged in response to inspiratory loading are also activated during complex locomotor tasks requiring cognitive inputs, we infer that competition for cortical resources may account for the breathing-cognition interference that is evidenced here.
Towards a Certified Lightweight Array Bound Checker for Java Bytecode
NASA Technical Reports Server (NTRS)
Pichardie, David
2009-01-01
Dynamic array bound checks are crucial for the security of a Java Virtual Machine. These dynamic checks are, however, expensive, and several static analysis techniques have been proposed to eliminate explicit bounds checks. Such analyses require advanced numerical and symbolic manipulations that 1) penalize bytecode loading or dynamic compilation and 2) complicate the trusted computing base. Following the Foundational Proof Carrying Code methodology, our goal is to provide a lightweight bytecode verifier for eliminating array bound checks that is both efficient and trustworthy. In this work, we define a generic relational program analysis for an imperative, stack-oriented bytecode language with procedures, arrays and global variables, and instantiate it with polyhedra as the relational abstract domain. The analysis automatically infers loop invariants and method pre-/post-conditions, and its results can be checked efficiently by a simple checker. Invariants, which can be large, can be specialized for proving a safety policy using an automatic pruning technique that reduces their size. The result of the analysis can be checked efficiently by annotating the program with parts of the invariant together with certificates of polyhedral inclusions. The resulting checker is sufficiently simple to be entirely certified within the Coq proof assistant for a simple fragment of the Java bytecode language. During the talk, we will also report on our ongoing effort to scale this approach to the full sequential JVM.
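The flavor of bound-check elimination can be conveyed with a toy interval domain (the analysis above uses the strictly richer polyhedral domain): infer a range for the loop index and discharge the check when that range fits inside the array bounds.

    # Toy sketch of static bound-check elimination with intervals: an access
    # a[i], where the analysis proved i in [lo, hi], needs a runtime check
    # only if the interval may leave [0, length).
    def bounds_check_needed(lo, hi, length):
        """True if an access with index in [lo, hi] may be out of bounds."""
        return lo < 0 or hi >= length

    # 'for (i = 0; i < a.length; i++) a[i]' yields invariant 0 <= i <= n-1.
    n = 10
    print(bounds_check_needed(0, n - 1, n))   # False: check can be removed
    print(bounds_check_needed(-1, n, n))      # True: check must stay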
Márquez Neila, Pablo; Baumela, Luis; González-Soriano, Juncal; Rodríguez, Jose-Rodrigo; DeFelipe, Javier; Merchán-Pérez, Ángel
2016-04-01
Recent electron microscopy (EM) imaging techniques permit the automatic acquisition of a large number of serial sections from brain samples. Manual segmentation of these images is tedious, time-consuming and requires a high degree of user expertise. Therefore, there is considerable interest in developing automatic segmentation methods. However, currently available methods are computationally demanding in terms of computer time and memory usage, and to work properly many of them require image stacks to be isotropic, that is, voxels must have the same size in the X, Y and Z axes. We present a method that works with anisotropic voxels and that is computationally efficient allowing the segmentation of large image stacks. Our approach involves anisotropy-aware regularization via conditional random field inference and surface smoothing techniques to improve the segmentation and visualization. We have focused on the segmentation of mitochondria and synaptic junctions in EM stacks from the cerebral cortex, and have compared the results to those obtained by other methods. Our method is faster than other methods with similar segmentation results. Our image regularization procedure introduces high-level knowledge about the structure of labels. We have also reduced memory requirements with the introduction of energy optimization in overlapping partitions, which permits the regularization of very large image stacks. Finally, the surface smoothing step improves the appearance of three-dimensional renderings of the segmented volumes.
Shi, Qi; Abdel-Aty, Mohamed; Yu, Rongjie
2016-03-01
In traffic safety studies, crash frequency modeling of total crashes is the cornerstone before proceeding to more detailed safety evaluation. The relationship between crash occurrence and factors such as traffic flow and roadway geometric characteristics has been extensively explored for a better understanding of crash mechanisms. In this study, a multi-level Bayesian framework has been developed in an effort to identify the crash contributing factors on an urban expressway in the Central Florida area. Two types of traffic data from the Automatic Vehicle Identification system, the processed data capped at the speed limit and the unprocessed data retaining the original speed, were incorporated in the analysis along with road geometric information. The model framework was proposed to account for the hierarchical data structure and the heterogeneity among the traffic and roadway geometric data. Multi-level and random parameters models were constructed and compared with the Negative Binomial model under the Bayesian inference framework. Results showed that the unprocessed traffic data was superior. Both multi-level models and random parameters models outperformed the Negative Binomial model, and the models with random parameters achieved the best model fit. The contributing factors identified imply that on the urban expressway lower speed and higher speed variation could significantly increase the crash likelihood. Other geometric factors were significant, including auxiliary lanes and horizontal curvature. Copyright © 2015 Elsevier Ltd. All rights reserved.
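For orientation, a sketch of the Negative Binomial baseline such studies compare against, fit here with statsmodels as a frequentist stand-in (the study above works in a Bayesian framework); the predictors and coefficients are invented placeholders.

    # Hedged sketch: Negative Binomial crash-frequency regression on two
    # placeholder predictors (speed variation, horizontal curvature).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 200
    speed_var = rng.gamma(2.0, 1.5, n)            # speed variation (toy)
    curvature = rng.random(n)                     # curvature (toy)
    X = sm.add_constant(np.column_stack([speed_var, curvature]))

    mu = np.exp(0.2 + 0.3 * speed_var + 0.8 * curvature)
    y = rng.poisson(mu * rng.gamma(2.0, 0.5, n))  # overdispersed counts

    res = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
    print(res.params)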
Automating approximate Bayesian computation by local linear regression.
Thornton, Kevin R
2009-07-07
In several biological contexts, parameter inference often relies on computationally intensive techniques. "Approximate Bayesian Computation", or ABC, methods based on summary statistics have become increasingly popular. A particular flavor of ABC based on using a linear regression to approximate the posterior distribution of the parameters, conditional on the summary statistics, is computationally appealing, yet no standalone tool exists to automate the procedure. Here, I describe a program to implement the method. The software package ABCreg implements the local linear-regression approach to ABC. The advantages are: 1. The code is standalone and fully documented. 2. The program will automatically process multiple data sets and create unique output files for each (which may be processed immediately in R), facilitating the testing of inference procedures on simulated data or the analysis of multiple data sets. 3. The program implements two different transformation methods for the regression step. 4. Analysis options are controlled on the command line by the user, and the program is designed to output warnings for cases where the regression fails. 5. The program does not depend on any particular simulation machinery (coalescent, forward-time, etc.), and is therefore a general tool for processing the results of any simulation. 6. The code is open-source and modular. Examples of applying the software to empirical data from Drosophila melanogaster, and of testing the procedure on simulated data, are shown. In practice, ABCreg simplifies implementing ABC based on local-linear regression.
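A minimal numpy sketch of the regression-adjustment step behind this approach (in the style of Beaumont et al.), assuming scalar parameters and pre-computed summaries; ABCreg itself is a standalone tool with many more options, and the toy model below is invented.

    # Sketch of ABC with local linear-regression adjustment: accept the
    # simulations whose summaries fall closest to the observed summary,
    # regress parameters on summaries among the accepted set, and shift the
    # accepted parameters to the observed summary.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 20000
    theta = rng.uniform(0, 10, n)                       # prior draws
    S = np.column_stack([theta + rng.normal(0, 1, n),   # toy summaries
                         0.5 * theta + rng.normal(0, 1, n)])
    s_obs = np.array([4.0, 2.0])                        # observed summaries

    d = np.linalg.norm((S - s_obs) / S.std(axis=0), axis=1)
    keep = d <= np.quantile(d, 0.01)                    # tolerance: best 1%

    # Local linear regression theta ~ s, then adjust to s = s_obs.
    X = np.column_stack([np.ones(keep.sum()), S[keep] - s_obs])
    coef, *_ = np.linalg.lstsq(X, theta[keep], rcond=None)
    theta_adj = theta[keep] - (S[keep] - s_obs) @ coef[1:]
    print("posterior mean ~", theta_adj.mean().round(2))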
Do dogs follow behavioral cues from an unreliable human?
Takaoka, Akiko; Maeda, Tomomi; Hori, Yusuke; Fujita, Kazuo
2015-03-01
Dogs are known to consistently follow human pointing gestures. In this study, we asked whether dogs "automatically" do this or whether they flexibly adjust their behavior depending upon the reliability of the pointer, demonstrated in an immediately preceding event. We tested pet dogs in a version of the object-choice task in which a piece of food was hidden in one of two containers. In Experiment 1, Phase 1, an experimenter pointed at the baited container; the second container was empty. In Phase 2, after showing the contents of both containers to the dogs, the experimenter pointed at the empty container. In Phase 3, the procedure was exactly as in Phase 1. We compared the dogs' responses to the experimenter's pointing gestures in Phases 1 and 3. Most dogs followed pointing in Phase 1, but many fewer did so in Phase 3. In Experiment 2, dogs followed a new experimenter's pointing in Phase 3, following replication of the procedures of Phases 1 and 2 of Experiment 1. This ruled out the possibility that dogs simply lost motivation to participate in the task in later phases. These results suggest that dogs are not only highly skilled at understanding human pointing gestures but also make inferences about the reliability of a human who presents cues, and consequently modify their behavior flexibly depending on that inference.
Zhang, Guo-Qiang; Luo, Lingyun; Ogbuji, Chime; Joslyn, Cliff; Mejino, Jose; Sahoo, Satya S
2012-01-01
The interaction of multiple types of relationships among anatomical classes in the Foundational Model of Anatomy (FMA) can provide inferred information valuable for quality assurance. This paper introduces a method called Motif Checking (MOCH) to study the effects of such multi-relation type interactions for detecting logical inconsistencies as well as other anomalies represented by the motifs. MOCH represents patterns of multi-type interaction as small labeled (with multiple types of edges) sub-graph motifs, whose nodes represent class variables, and labeled edges represent relational types. By representing FMA as an RDF graph and motifs as SPARQL queries, fragments of FMA are automatically obtained as auditing candidates. Leveraging the scalability and reconfigurability of Semantic Web Technology, we performed exhaustive analyses of a variety of labeled sub-graph motifs. The quality assurance feature of MOCH comes from the distinct use of a subset of the edges of the graph motifs as constraints for disjointness, thereby bringing a rule-based flavor to the approach as well. With possible disjointness implied by antonyms, we performed manual inspection of the resulting FMA fragments and tracked down sources of abnormal inferred conclusions (logical inconsistencies), which are amenable to programmatic revision of the FMA. Our results demonstrate that MOCH provides a unique source of valuable information for quality assurance. Since our approach is general, it is applicable to any ontological system with an OWL representation. PMID:23304382
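A sketch of the motif-as-query idea using rdflib; the namespace, class names and relation types below are hypothetical stand-ins for the FMA's, and real MOCH motifs carry additional disjointness constraints on a subset of the edges.

    # Hedged sketch: express a small labeled sub-graph motif as a SPARQL
    # query over an RDF graph and print the matching fragments for audit.
    from rdflib import Graph, Namespace

    EX = Namespace("http://example.org/fma#")
    g = Graph()
    # Tiny stand-in fragment with two interacting relation types.
    g.add((EX.ulnar_artery, EX.branch_of, EX.brachial_artery))
    g.add((EX.ulnar_artery, EX.part_of, EX.upper_limb))
    g.add((EX.brachial_artery, EX.part_of, EX.upper_limb))

    motif = """
    PREFIX ex: <http://example.org/fma#>
    SELECT ?a ?b ?c WHERE {
      ?a ex:branch_of ?b .
      ?a ex:part_of ?c .
      ?b ex:part_of ?c .      # the multi-relation pattern under audit
    }"""
    for row in g.query(motif):
        print([str(x) for x in row])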
Observations of the marine environment from spaceborne side-looking real aperture radars
NASA Technical Reports Server (NTRS)
Kalmykov, A. I.; Velichko, S. A.; Tsymbal, V. N.; Kuleshov, Yu. A.; Weinman, J. A.; Jurkevich, I.
1993-01-01
Real aperture, side-looking X-band radars have been operated from the Soviet Cosmos-1500, -1602, -1766 and Ocean satellites since 1984. Wind velocities were inferred from sea surface radar scattering for speeds ranging from approximately 2 m/s to those of hurricane proportions. The wind speeds were within 10-20 percent of the measured in situ values, and the direction of the wind velocity agreed with in situ direction measurements within 20-50 deg. Various atmospheric mesoscale eddies and tropical cyclones were thus located, and their strengths were inferred from sea surface reflectivity measurements. Rain cells were observed over both land and sea with these spaceborne radars. Algorithms to retrieve rainfall rates from spaceborne radar measurements were also developed. Spaceborne radars have been used to monitor various marine hazards. For example, information derived from those radars was used to plan rescue operations of distressed ships trapped in sea ice. Icebergs have also been monitored, and oil spills were mapped. Tsunamis produced by underwater earthquakes were also observed from space by the radars on the Cosmos-1500 series of satellites. The Cosmos-1500 satellite series have provided all-weather radar imagery of the Earth's surface to a user community in real time by means of a 137.4 MHz Automatic Picture Transmission channel. This feature enabled the radar information to be used in direct support of Soviet polar maritime activities.
Semantator: semantic annotator for converting biomedical text to linked data.
Tao, Cui; Song, Dezhao; Sharma, Deepak; Chute, Christopher G
2013-10-01
More than 80% of biomedical data is embedded in plain text. The unstructured nature of these text-based documents makes it challenging to easily browse and query the data of interest in them. One approach to facilitate browsing and querying biomedical text is to convert the plain text to a linked web of data, i.e., converting data originally in free text to structured formats with defined meta-level semantics. In this paper, we introduce Semantator (Semantic Annotator), a semantic-web-based environment for annotating data of interest in biomedical documents, browsing and querying the annotated data, and interactively refining annotation results if needed. Through Semantator, information of interest can be either annotated manually or semi-automatically using plug-in information extraction tools. The annotated results will be stored in RDF and can be queried using the SPARQL query language. In addition, semantic reasoners can be directly applied to the annotated data for consistency checking and knowledge inference. Semantator has been released online and was used by the biomedical ontology community, which provided positive feedback. Our evaluation results indicated that (1) Semantator can perform the annotation functionalities as designed; (2) Semantator can be adopted in real applications in clinical and translational research; and (3) the annotated results using Semantator can be easily used in Semantic-web-based reasoning tools for further inference. Copyright © 2013 Elsevier Inc. All rights reserved.
AlexSys: a knowledge-based expert system for multiple sequence alignment construction and analysis
Aniba, Mohamed Radhouene; Poch, Olivier; Marchler-Bauer, Aron; Thompson, Julie Dawn
2010-01-01
Multiple sequence alignment (MSA) is a cornerstone of modern molecular biology and represents a unique means of investigating the patterns of conservation and diversity in complex biological systems. Many different algorithms have been developed to construct MSAs, but previous studies have shown that no single aligner consistently outperforms the rest. This has led to the development of a number of ‘meta-methods’ that systematically run several aligners and merge the output into one single solution. Although these methods generally produce more accurate alignments, they are inefficient because all the aligners need to be run first and the choice of the best solution is made a posteriori. Here, we describe the development of a new expert system, AlexSys, for the multiple alignment of protein sequences. AlexSys incorporates an intelligent inference engine to automatically select an appropriate aligner a priori, depending only on the nature of the input sequences. The inference engine was trained on a large set of reference multiple alignments, using a novel machine learning approach. Applying AlexSys to a test set of 178 alignments, we show that the expert system represents a good compromise between alignment quality and running time, making it suitable for high throughput projects. AlexSys is freely available from http://alnitak.u-strasbg.fr/∼aniba/alexsys. PMID:20530533
Short lived 36Cl and its decay products 36Ar and 36S in the early solar system
NASA Astrophysics Data System (ADS)
Turner, G.; Crowther, S. A.; Burgess, R.; Gilmour, J. D.; Kelley, S. P.; Wasserburg, G. J.
2013-12-01
Variable excesses of 36S have previously been reported in sodalite in the Allende and Ningqiang meteorites and used to infer the presence of 36Cl in the early solar system. Until now no unambiguous evidence of the major decay product, 36Ar (98%), has been found. Using low fluence fast neutron activation we have measured small amounts of 36Ar in the Allende sodalite Pink Angel, corresponding to 36Cl/35Cl = (1.9 ± 0.5) × 10-8. This is a factor of 200 lower than the highest value inferred from 36S excesses in sodalite. High resolution I-Xe analyses confirm that the sodalite formed between 4561 and 4558 Ma ago. The core of Pink Angel sodalite yielded a precise formation age of 4559.4 ± 0.6 Ma. Deposition of sodalite containing live 36Cl, seven million years or so after the formation of the CAI, appears to require a local production mechanism involving intense neutron irradiation within the solar nebula. The constraint imposed by the near absence of neutron-induced 128Xe is most easily satisfied if the 36Cl were produced in a fluid precursor of the sodalite. The low level of 36Ar could be accounted for as a result of residual in situ 36Cl decay, up to 1-2 Ma after formation of the sodalite, and/or later diffusive loss, in line with the low activation energy for Ar diffusion in sodalite.
Archean geochemistry of formaldehyde and cyanide and the oligomerization of cyanohydrin
NASA Technical Reports Server (NTRS)
Arrhenius, T.; Arrhenius, G.; Paplawsky, W.
1994-01-01
The sources and speciation of reduced carbon and nitrogen inferred for the early Archean are reviewed in terms of current observations and models, and known chemical reactions. Within this framework hydrogen cyanide and cyanide ion in significant concentration would have been eliminated by reaction with excess formaldehyde to form cyanohydrin (glycolonitrile), and with ferrous ion to form ferrocyanide. Natural reactions of these molecules would under such conditions deserve special consideration in modeling of primordial organochemical processes. As a step in this direction, transformation reactions have been investigated involving glycolonitrile in the presence of water. We find that glycolonitrile, formed from formaldehyde and hydrogen cyanide or cyanide ion, spontaneously cyclodimerizes to 4-amino-2-hydroxymethyloxazole. The crystalline dimer is the major product at low temperature (approximately 0 °C); the yield diminishes with increasing temperature at the expense of polymerization and hydrolysis products. Hydrolysis of glycolamide and of the oxazole yields a number of simpler organic molecules, including ammonia and glycolamide. The spontaneous polymerization of glycolonitrile and its dimer gives rise to soluble, cationic oligomers of as yet unknown structure, and, unless arrested, to a viscous liquid, insoluble in water. A loss of cyanide by reaction with formaldehyde, inferred for the early terrestrial hydrosphere and cryosphere, would present a dilemma for hypotheses invoking cyanide and related compounds as concentrated reactants capable of forming biomolecular precursor species. Attempts to escape from its horns may take advantage of the efficient concentration and separation of cyanide as solid ferriferrocyanide, and most directly of reactions of glycolonitrile and its derivatives.
Estimating the Size and Timing of Maximum Amplitude for Cycle 23 from Its Early Cycle Behavior
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.
1998-01-01
On the basis of the lowest observed smoothed monthly mean sunspot number, cycle 23 appears to have conventionally begun in May 1996, in conjunction with the first appearance of a new cycle, high-latitude spot-group. Such behavior, however, is considered rather unusual, since, previously (based upon the data-available cycles 12-22), the first appearance of a new cycle, high-latitude spot-group has always preceded conventional onset by at least 3 months. Furthermore, accepting May 1996 as the official start for cycle 23 poses a dilemma regarding its projected size and timing of maximum amplitude. Specifically, from the max-min and amplitude-period relationships we infer that cycle 23 should be above average in size and a fast riser, with maximum amplitude occurring before May 2000 (being in agreement with projections for cycle 23 based on precursor information), yet from its initial languid rate of rise (during the first 6 months of the cycle) we infer that it should be below average in size and a slow riser, with maximum amplitude occurring after May 2000. The dilemma vanishes, however, when we use a slightly later-occurring onset. For example, using August 1996, a date associated with a local secondary minimum prior to the rapid rise that began shortly thereafter (in early 1997), we infer that cycle 23's rate of rise is above that for the mean of cycles 1-22, the mean of cycles 10-22 (the modern era cycles), the mean of the modern era 'fast risers,' and the largest of the modern era 'slow risers' (i.e., cycle 20), thereby suggesting that cycle 23 will be both fast-rising and above average in size, peaking before August 2000. Additionally, presuming cycle 23 to be a well-behaved fast-rising cycle (regardless of whichever onset date is used), we also infer that its maximum amplitude likely will measure about 144.0 +/- 28.8 (from the general behavior found for the bulk of modern era fast risers; i.e., 5 of 7 have had their maximum amplitude lie within 20% of the mean curve for modern era fast risers). It is apparent, then, that sunspot number growth during 1998 will prove crucial for correctly establishing the size and shape of cycle 23.
Ramirez-Ambrosi, M; Abad-Garcia, B; Viloria-Bernal, M; Garmon-Lobato, S; Berrueta, L A; Gallo, B
2013-11-05
A new, rapid, selective and sensitive ultrahigh performance liquid chromatography with diode array detection coupled to electrospray ionization and quadrupole time-of-flight mass spectrometry (UHPLC-DAD-ESI-Q-ToF-MS) strategy using automatic and simultaneous acquisition of exact mass at high and low collision energy, MS(E), has been developed to obtain the polyphenolic profile of apples, apple pomace and apple juice from Asturian cider apples in a single injection run of 22 min. MS(E) spectral data acquisition overcomes chromatographic co-elution problems, performing simultaneous collection of precursor ions as well as other ions produced as a result of their fragmentation, which allows resolving complex spectra from mixtures of precursor ions in an unsupervised way and eases their interpretation. Using this technique, 52 phenolic compounds of five different classes were readily characterized in these apple extracts in both positive and negative ionization modes. The spectral data for phenolic compounds obtained using this acquisition mode are comparable to those obtained by conventional LC-MS/MS, as exemplified in this work. Among the 52 phenolic compounds identified in this work, 2 dihydrochalcones and 3 flavonols have been tentatively identified for the first time in apple products. Moreover, 2 flavanols, 4 dihydrochalcones, 9 hydroxycinnamic acids and 4 flavonols had not been previously reported in apple by ToF analysis to our knowledge. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Önnerud, Hans; Wallin, Sara; Östmark, Henric; Menning, Dennis; Ek, Stefan; Ellis, Hanna; Kölhed, Malin
2011-06-01
Results of dispersion experiments and dispersion modelling of explosives, drugs, and their precursors will be presented. The dispersion of chemicals evolving during preparation of homemade explosives and a drug produced in an improvised manner in an ordinary kitchen has been measured. Experiments with concentration of hydrogen peroxide were performed during the spring and summer of 2009 and 2010, and further experiments with concentration of hydrogen peroxide and with synthesis and drying of TATP and methamphetamine are planned for the spring and summer of 2011. Results from the experiments are compared to dispersion modelling to achieve a better understanding of the dispersion processes and of the resulting substances and amounts available for detection outside the kitchen at distances of 10-30 m and longer. Typical concentration levels have been determined as a function of environmental conditions. The experiments and modelling are made as a part of the LOTUS project, aimed at detecting and locating the illicit production of explosives and drugs in an urban environment. It can be concluded that the proposed LOTUS system concept, using mobile automatic sensors, data transfer, and location via GSM/GPS for on-line detection of illicit production of explosives or precursors to explosives and drugs, is a viable approach and is consistent with both historical and present-day illicit bomb manufacturing. The overall objective and approach of the LOTUS project will also be presented, together with two further projects, PREVAIL and EMPHASIS, both aiming at hindering or finding illicit production of homemade explosives.
What's skill got to do with it? Vehicle automation and driver mental workload.
Young, M S; Stanton, N A
2007-08-01
Previous research has found that vehicle automation systems can reduce driver mental workload, with implications for attentional resources that can be detrimental to performance. The present paper considers how the development of automaticity within the driving task may influence performance in underload situations. Driver skill and vehicle automation were manipulated in a driving simulator, with four levels of each variable. Mental workload was assessed using a secondary task measure and eye movements were recorded to infer attentional capacity. The effects of automation on driver mental workload were quite robust across skill levels, but the most intriguing findings were from the eye movement data. It was found that, with little exception, attentional capacity and mental workload were directly related at all levels of driver skill, consistent with earlier studies. The results are discussed with reference to applied theories of cognition and the design of automation.
Real Time 3D Facial Movement Tracking Using a Monocular Camera
Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng
2016-01-01
The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to fit different persons automatically. Experiments show that the proposed framework can track the 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714
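A minimal sketch of the Extended Kalman Filter measurement update that this kind of 2D-to-3D fusion relies on, shown for a single landmark under a pinhole projection; the paper's state (pose plus eyelid and mouth animation) and measurement models are richer, and all numbers below are placeholders.

    # Hedged sketch: one EKF update fusing a 2D image measurement with a
    # 3D state, here the (X, Y, Z) position of a single landmark.
    import numpy as np

    def ekf_update(x, P, z, h, H, R):
        """EKF measurement update: state x, covariance P, measurement z,
        measurement function h with Jacobian H, noise covariance R."""
        y = z - h(x)                              # innovation
        S = H @ P @ H.T + R                       # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        return x + K @ y, (np.eye(len(x)) - K @ H) @ P

    f = 500.0                                     # focal length (toy)
    x = np.array([0.1, -0.05, 2.0]); P = np.eye(3) * 0.01
    h = lambda s: f * s[:2] / s[2]                # pinhole projection
    X, Y, Z = x
    H = np.array([[f / Z, 0.0, -f * X / Z**2],    # Jacobian of projection
                  [0.0, f / Z, -f * Y / Z**2]])
    z = np.array([26.0, -12.0])                   # observed pixel offsets
    x, P = ekf_update(x, P, z, h, H, np.eye(2) * 4.0)
    print(x)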
Using Bayesian Networks for Candidate Generation in Consistency-based Diagnosis
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Mengshoel, Ole
2008-01-01
Consistency-based diagnosis relies heavily on the assumption that discrepancies between model predictions and sensor observations can be detected accurately. When sources of uncertainty such as sensor noise and model abstraction exist, robust schemes have to be designed to make a binary decision on whether predictions are consistent with observations. This risks false alarms and missed alarms when an erroneous decision is made. Moreover, when multiple sensors (with differing sensing properties) are available, the degree of match between predictions and observations can be used to guide the search for fault candidates. In this paper we propose a novel approach to this problem using Bayesian networks. In the consistency-based diagnosis formulation, automatically generated Bayesian networks are used to encode a probabilistic measure of fit between predictions and observations. A Bayesian network inference algorithm is used to compute the most probable fault candidates.
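A toy illustration of ranking fault candidates by a probabilistic measure of fit. The fault set, priors and Gaussian likelihoods below are invented; the paper compiles such models into automatically generated Bayesian networks rather than the flat enumeration shown here.

    # Hedged sketch: posterior over fault candidates given the residual
    # (observed minus predicted sensor value), via Bayes' rule.
    import numpy as np

    faults = ["nominal", "valve_stuck", "sensor_bias"]
    prior = np.array([0.90, 0.05, 0.05])

    def likelihood(residual, fault):
        """Gaussian measure of fit between prediction and observation."""
        mu = {"nominal": 0.0, "valve_stuck": 3.0, "sensor_bias": 1.5}[fault]
        sigma = {"nominal": 0.5, "valve_stuck": 2.0, "sensor_bias": 1.0}[fault]
        return np.exp(-0.5 * ((residual - mu) / sigma) ** 2) / sigma

    residual = 2.4
    post = prior * np.array([likelihood(residual, f) for f in faults])
    post /= post.sum()
    for f, p in zip(faults, post):
        print(f"{f}: {p:.3f}")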
Head Pose Estimation Using Multilinear Subspace Analysis for Robot Human Awareness
NASA Technical Reports Server (NTRS)
Ivanov, Tonislav; Matthies, Larry; Vasilescu, M. Alex O.
2009-01-01
Mobile robots, operating in unconstrained indoor and outdoor environments, would benefit in many ways from perception of the human awareness around them. Knowledge of people's head pose and gaze directions would enable the robot to deduce which people are aware of its presence, and to predict the future motions of those people for better path planning. Making such inferences requires estimating head pose from facial images that combine multiple varying factors, such as identity, appearance, head pose, and illumination. By applying multilinear algebra, the algebra of higher-order tensors, we can separate these factors and estimate head pose regardless of the subject's identity or the image conditions. Furthermore, we can automatically handle uncertainty in the size of the face and its location. We demonstrate a pipeline of on-the-move detection of pedestrians with a robot stereo vision system, segmentation of the head, and head pose estimation in cluttered urban street scenes.
Advances in multi-sensor data fusion: algorithms and applications.
Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying
2009-01-01
With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. The paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection and maneuvering-target tracking, are described. Both advantages and limitations of those applications are then discussed. Recommendations are given, including: (1) improvement of fusion algorithms; (2) development of "algorithm fusion" methods; (3) establishment of an automatic quality assessment scheme.
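As a concrete, deliberately simple instance of pixel-level fusion, a variance-weighted average of co-registered images, so the locally more informative sensor dominates; the surveyed algorithms (PCA-, wavelet- or pan-sharpening-based) are considerably richer.

    # Hedged sketch: pixel-level fusion by local-variance weighting.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def fuse(images, win=5):
        """Variance-weighted fusion of co-registered single-band images."""
        weights = []
        for img in images:
            mean = uniform_filter(img, win)
            var = uniform_filter(img * img, win) - mean * mean
            weights.append(var + 1e-6)       # avoid division by zero
        w = np.stack(weights)
        w /= w.sum(axis=0)
        return (w * np.stack(images)).sum(axis=0)

    rng = np.random.default_rng(2)
    a, b = rng.random((2, 128, 128))         # stand-ins for two sensors
    print(fuse([a, b]).shape)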
Utecht, Joseph; Brochhausen, Mathias; Judkins, John; Schneider, Jodi; Boyce, Richard D
2017-01-01
In this research we aim to demonstrate that an ontology-based system can categorize potential drug-drug interaction (PDDI) evidence items into complex types based on a small set of simple questions. Such a method could increase the transparency and reliability of PDDI evidence evaluation, while also reducing the variations in content and seriousness ratings present in PDDI knowledge bases. We extended the DIDEO ontology with 44 formal evidence type definitions. We then manually annotated the evidence types of 30 evidence items. We tested an RDF/OWL representation of answers to a small number of simple questions about each of these 30 evidence items and showed that automatic inference can determine the detailed evidence types based on this small number of simpler questions. These results show proof-of-concept for a decision support infrastructure that frees the evidence evaluator from mastering relatively complex written evidence type definitions.
How the deployment of attention determines what we see
Treisman, Anne
2007-01-01
Attention is a tool to adapt what we see to our current needs. It can be focused narrowly on a single object or spread over several or distributed over the scene as a whole. In addition to increasing or decreasing the number of attended objects, these different deployments may have different effects on what we see. This chapter describes some research both on focused attention and its use in binding features, and on distributed attention and the kinds of information we gain and lose with the attention window opened wide. One kind of processing that we suggest occurs automatically with distributed attention results in a statistical description of sets of similar objects. Another gives the gist of the scene, which may be inferred from sets of features registered in parallel. Flexible use of these different modes of attention allows us to reconcile sharp capacity limits with a richer understanding of the visual scene. PMID:17387378
Disadvantageous associations: Reversible spatial cueing effects in a discrimination task
Nico, Daniele; Daprati, Elena
2015-01-01
Current theories describe learning in terms of cognitive or associative mechanisms. To assess whether cognitive mechanisms interact with the automaticity of associative processes we devised a shape-discrimination task in which participants received both explicit instructions and implicit information. Instructions further allowed for the inference that a first event would precede the target. Although irrelevant to the response, this event acted as a response prime and an implicit spatial cue (i.e. it predicted target location). To modulate cognitive involvement, in three experiments we manipulated the modality and salience of the spatial cue. Results always showed evidence for a priming effect, confirming that the first stimulus was never ignored. More importantly, although participants failed to consciously recognize the association, responses to spatially cued trials became either slower or faster depending on the salience of the first event. These findings provide an empirical demonstration that cognitive and associative learning mechanisms functionally co-exist and interact to regulate behaviour. PMID:26534830
Corruption and air pollution in Europe.
Ivanova, Kate
2011-01-01
This paper examines how the effectiveness of the regulatory framework influences levels of sulphur emissions in a scenario where, to reduce its (emission-) tax payments, a polluting firm may under-report its emissions level at the risk of being audited and fined. First, a model is developed to explain how changes in the regulatory framework (e.g., audit effectiveness) and transboundary spillovers affect both actual and reported emissions. The theoretical predictions are then tested using data for 39 European countries from 1999 to 2003, and inferences about true emission levels are made. The empirical analysis supports the theoretical predictions, with significant implications for the interpretation of pollution data reported to international monitoring agencies. Countries with effective regulation are likely to have relatively high reported emissions of sulphur. But this should not automatically be interpreted as weak environmental performance, because their actual pollution levels are likely to be lower than in nations with less effective regulation.
Advanced, Analytic, Automated (AAA) Measurement of Engagement During Learning
D’Mello, Sidney; Dieterle, Ed; Duckworth, Angela
2017-01-01
It is generally acknowledged that engagement plays a critical role in learning. Unfortunately, the study of engagement has been stymied by a lack of valid and efficient measures. We introduce the advanced, analytic, and automated (AAA) approach to measure engagement at fine-grained temporal resolutions. The AAA measurement approach is grounded in embodied theories of cognition and affect, which advocate a close coupling between thought and action. It uses machine-learned computational models to automatically infer mental states associated with engagement (e.g., interest, flow) from machine-readable behavioral and physiological signals (e.g., facial expressions, eye tracking, click-stream data) and from aspects of the environmental context. We present 15 case studies that illustrate the potential of the AAA approach for measuring engagement in digital learning environments. We discuss strengths and weaknesses of the AAA approach, concluding that it has significant promise to catalyze engagement research. PMID:29038607
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pichara, Karim; Protopapas, Pavlos
We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing-data catalogs is included, how our method compares to traditional missing-data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection, while keeping the computational cost the same.
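A toy sketch of the iterative fill-and-refit idea behind EM-style handling of missing values, using a single bivariate Gaussian in place of the paper's Bayesian-network structure learning and sampling machinery.

    # Hedged sketch: iteratively impute a missing feature from the
    # conditional mean under the current Gaussian fit, then refit.
    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.multivariate_normal([0, 0], [[1, .8], [.8, 1]], size=500)
    miss = rng.random(500) < 0.3
    X_obs = X.copy(); X_obs[miss, 1] = np.nan   # feature 1 missing sometimes

    X_imp = X_obs.copy()
    X_imp[miss, 1] = np.nanmean(X_obs[:, 1])    # crude initial fill
    for _ in range(20):
        mu = X_imp.mean(axis=0); C = np.cov(X_imp.T)
        # conditional mean of x1 given x0 under the current fit
        X_imp[miss, 1] = mu[1] + C[0, 1] / C[0, 0] * (X_imp[miss, 0] - mu[0])
    print("true corr 0.8, estimated:", np.corrcoef(X_imp.T)[0, 1].round(3))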
Morphologic dating of fault scarps using airborne laser swath mapping (ALSM) data
Hilley, G.E.; Delong, S.; Prentice, C.; Blisniuk, K.; Arrowsmith, J.R.
2010-01-01
Models of fault scarp morphology have previously been used to infer the relative ages of different fault scarps in a fault zone using labor-intensive ground surveying. We present a method for automatically extracting scarp morphologic ages from high-resolution digital topography. Scarp degradation is modeled as a diffusive mass transport process in the across-scarp direction. The second derivative of the modeled degraded fault scarp was normalized to yield the best-fitting (in a least-squares sense) scarp height at each point, and the signal-to-noise ratio identified those areas containing scarp-like topography. We applied this method to three areas along the San Andreas Fault and found correspondence between the mapped geometry of the fault and that extracted by our analysis. This suggests that the spatial distribution of scarp ages may be revealed by such an analysis, allowing the recent temporal development of a fault zone to be imaged along its length.
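The underlying forward model is compact enough to sketch: an initially step-like scarp of height H relaxed by linear diffusion has the profile h(x) = (H/2) erf(x / sqrt(4 k t)), so the morphologic age kt can be estimated by least squares. Far-field slope and the paper's curvature-normalization and signal-to-noise machinery are omitted, and all numbers below are invented.

    # Hedged sketch: fit the morphologic age k*t of a diffused scarp profile
    # by grid-searching the least-squares misfit to the erf solution.
    import numpy as np
    from scipy.special import erf

    def profile(x, H, kt):
        return 0.5 * H * erf(x / np.sqrt(4.0 * kt))

    x = np.linspace(-20, 20, 200)                 # across-scarp distance (m)
    truth = profile(x, H=3.0, kt=25.0)            # synthetic "observed" scarp
    obs = truth + np.random.default_rng(4).normal(0, 0.05, x.size)

    kts = np.linspace(1, 100, 500)
    misfit = [np.sum((obs - profile(x, 3.0, kt)) ** 2) for kt in kts]
    print("best-fit morphologic age k*t ~", kts[int(np.argmin(misfit))])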
Auditing SNOMED Relationships Using a Converse Abstraction Network
Wei, Duo; Halper, Michael; Elhanan, Gai; Chen, Yan; Perl, Yehoshua; Geller, James; Spackman, Kent A.
2009-01-01
In SNOMED CT, a given kind of attribute relationship is defined between two hierarchies, a source and a target. Certain hierarchies (or subhierarchies) serve only as targets, with no outgoing relationships of their own. However, converse relationships—those pointing in a direction opposite to the defined relationships—while not explicitly represented in SNOMED’s inferred view, can be utilized in forming an alternative view of a source. In particular, they can help shed light on a source hierarchy’s overall relationship structure. Toward this end, an abstraction network, called the converse abstraction network (CAN), derived automatically from a given SNOMED hierarchy is presented. An auditing methodology based on the CAN is formulated. The methodology is applied to SNOMED’s Device subhierarchy and the related device relationships of the Procedure hierarchy. The results indicate that the CAN is useful in finding opportunities for refining and improving SNOMED. PMID:20351941
Probabilistic graphlet transfer for photo cropping.
Zhang, Luming; Song, Mingli; Zhao, Qi; Liu, Xiao; Bu, Jiajun; Chen, Chun
2013-02-01
As one of the most basic photo manipulation processes, photo cropping is widely used in the printing, graphic design, and photography industries. In this paper, we introduce graphlets (i.e., small connected subgraphs) to represent a photo's aesthetic features, and propose a probabilistic model to transfer aesthetic features from the training photo onto the cropped photo. In particular, by segmenting each photo into a set of regions, we construct a region adjacency graph (RAG) to represent the global aesthetic feature of each photo. Graphlets are then extracted from the RAGs, and these graphlets capture the local aesthetic features of the photos. Finally, we cast photo cropping as a candidate-searching procedure on the basis of a probabilistic model, and infer the parameters of the cropped photos using Gibbs sampling. The proposed method is fully automatic. Subjective evaluations have shown that it is preferred over a number of existing approaches.
A Bayesian approach to traffic light detection and mapping
NASA Astrophysics Data System (ADS)
Hosseinyalamdary, Siavash; Yilmaz, Alper
2017-03-01
Automatic traffic light detection and mapping is an open research problem. Traffic lights vary in color, shape, geolocation, activation pattern, and installation, which complicates their automated detection. In addition, the image of a traffic light may be noisy, overexposed, underexposed, or occluded. In order to address this problem, we propose a Bayesian inference framework to detect and map traffic lights. In addition to the spatio-temporal consistency constraint, traffic light characteristics such as color, shape and height are shown to further improve the accuracy of the proposed approach. The proposed approach has been evaluated on two benchmark datasets and shown to outperform earlier studies. The results show that the precision and recall rates for the KITTI benchmark are 95.78% and 92.95%, respectively, and the precision and recall rates for the LARA benchmark are 98.66% and 94.65%.
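The evidence fusion in such a Bayesian framework can be sketched in a few lines: multiply a map-based prior by per-cue likelihoods and normalise. The numbers below are invented for illustration and are not taken from the paper.

    import numpy as np

    def traffic_light_posterior(prior, *cue_likelihoods):
        """Naive-Bayes-style fusion over the two hypotheses
        [traffic light, background]: multiply prior by independent
        per-cue likelihood terms, then normalise."""
        post = prior.astype(float)
        for lik in cue_likelihoods:
            post = post * lik
        return post / post.sum()

    # hypothetical per-cue likelihoods for [light, background]
    p = traffic_light_posterior(
        np.array([0.02, 0.98]),   # map prior
        np.array([0.80, 0.10]),   # color
        np.array([0.70, 0.20]),   # shape
        np.array([0.90, 0.30]),   # height
        np.array([0.85, 0.40]))   # spatio-temporal consistency
    print(p)  # posterior over [light, background]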
NASA Technical Reports Server (NTRS)
Bemra, R. S.; Rastogi, P. K.; Balsley, B. B.
1986-01-01
An analysis of frequency spectra at periods of about 5 days to 5 min from two 20-day sets of velocity measurements in the stratosphere and troposphere region obtained with the Poker Flat mesosphere-stratosphere-troposphere (MST) radar during January and June 1984 is presented. A technique based on median filtering and averaged order statistics for automatic editing, smoothing and spectral analysis of velocity time series contaminated with spurious data points or outliers is outlined. The validity of this technique and its effects on the inferred spectral index were tested through simulation. Spectra obtained with this technique are discussed. The measured spectral indices show variability with season and height, especially across the tropopause. The discussion briefly outlines the need for obtaining better climatologies of velocity spectra and for refinements of existing theories to explain their behavior.
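A minimal Python sketch of median-filter outlier editing followed by spectral-index estimation; this is our own simplification, not the authors' averaged-order-statistics technique, and the kernel and threshold are illustrative.

    import numpy as np
    from scipy.signal import medfilt, periodogram

    def despiked_spectrum(v, dt, kernel=5, nsigma=4.0):
        """Replace outliers with the running median, then estimate the
        power spectrum and its log-log slope (spectral index)."""
        med = medfilt(v, kernel_size=kernel)
        resid = v - med
        bad = np.abs(resid) > nsigma * np.std(resid)
        v_clean = np.where(bad, med, v)          # edit spurious points
        f, Pxx = periodogram(v_clean, fs=1.0 / dt, detrend='linear')
        sel = (f > 0) & (Pxx > 0)
        slope = np.polyfit(np.log(f[sel]), np.log(Pxx[sel]), 1)[0]
        return f, Pxx, slope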
Advanced, Analytic, Automated (AAA) Measurement of Engagement During Learning.
D'Mello, Sidney; Dieterle, Ed; Duckworth, Angela
2017-01-01
It is generally acknowledged that engagement plays a critical role in learning. Unfortunately, the study of engagement has been stymied by a lack of valid and efficient measures. We introduce the advanced, analytic, and automated (AAA) approach to measure engagement at fine-grained temporal resolutions. The AAA measurement approach is grounded in embodied theories of cognition and affect, which advocate a close coupling between thought and action. It uses machine-learned computational models to automatically infer mental states associated with engagement (e.g., interest, flow) from machine-readable behavioral and physiological signals (e.g., facial expressions, eye tracking, click-stream data) and from aspects of the environmental context. We present 15 case studies that illustrate the potential of the AAA approach for measuring engagement in digital learning environments. We discuss strengths and weaknesses of the AAA approach, concluding that it has significant promise to catalyze engagement research.
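In practice, the machine-learned inference step amounts to supervised classification of labelled time windows. A minimal sketch with a synthetic stand-in for the behavioural/physiological feature matrix; the feature dimensions and labels are invented.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Rows: time windows of features (facial action units, gaze
    # dispersion, click rate, ...); labels: engaged vs. disengaged,
    # e.g. from retrospective self-report. Synthetic here.
    X, y = make_classification(n_samples=500, n_features=12,
                               n_informative=5, random_state=0)
    model = make_pipeline(StandardScaler(),
                          LogisticRegression(max_iter=1000))
    print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())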
NASA Astrophysics Data System (ADS)
Singh, Chandralekha
2009-07-01
One finding of cognitive research is that people do not automatically acquire usable knowledge by spending lots of time on task. Because students' knowledge hierarchy is more fragmented than that of experts, their "knowledge chunks" are smaller. The limited capacity of short-term memory makes the cognitive load high during problem-solving tasks, leaving few cognitive resources available for meta-cognition. The abstract nature of the laws of physics and the chain of reasoning required to draw meaningful inferences make these issues critical. In order to help students, it is crucial to consider the difficulty of a problem from the perspective of students. We are developing and evaluating interactive problem-solving tutorials to help students in introductory physics courses learn effective problem-solving strategies while solidifying physics concepts. The self-paced tutorials can provide guidance and support for a variety of problem-solving techniques, as well as opportunities for knowledge and skill acquisition.
Optical Generation of Fuzzy-Based Rules
NASA Astrophysics Data System (ADS)
Gur, Eran; Mendlovic, David; Zalevsky, Zeev
2002-08-01
In the last third of the 20th century, fuzzy logic rose from a mathematical concept to an applicable approach in soft computing. Today, fuzzy logic is used in control systems for various applications, such as washing machines, train-brake systems, automatic gearboxes in automobiles, and so forth. The approach of optically implementing fuzzy inferencing was presented by the authors in previous papers, with extra emphasis on applications with two dominant inputs. In this paper the authors introduce a real-time optical rule generator for the dual-input fuzzy-inference engine. The paper briefly reviews the dual-input optical implementation of fuzzy-logic inferencing. Then, the concept of constructing a set of rules from given data is discussed. Next, the authors show ways to implement this procedure optically. The discussion is accompanied by an example that illustrates the transformation from raw data into fuzzy set rules.
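Constructing rules from data, the step this paper implements optically, is classically done in software with the Wang-Mendel procedure: each sample votes for the rule joining its highest-membership fuzzy sets, and conflicts are resolved by vote strength. A minimal sketch; the triangular sets and the dual-input restriction are our assumptions.

    import numpy as np

    def tri(x, centers):
        """Triangular membership degrees of x in sets centred at `centers`."""
        width = centers[1] - centers[0]
        return np.clip(1 - np.abs(x - centers) / width, 0, 1)

    def generate_rules(X1, X2, Y, c1, c2, cy):
        """Wang-Mendel style rule generation for two inputs and one output:
        keep, for each antecedent pair, the output set with the strongest
        supporting sample."""
        rules = {}
        for x1, x2, y in zip(X1, X2, Y):
            m1, m2, my = tri(x1, c1), tri(x2, c2), tri(y, cy)
            key = (int(m1.argmax()), int(m2.argmax()))
            strength = m1.max() * m2.max() * my.max()
            if strength > rules.get(key, (0.0, None))[0]:
                rules[key] = (strength, int(my.argmax()))
        return {k: out for k, (_, out) in rules.items()}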
Ioannidis, Vassilios; van Nimwegen, Erik; Stockinger, Heinz
2016-01-01
ISMARA (ismara.unibas.ch) automatically infers the key regulators and regulatory interactions from high-throughput gene expression or chromatin state data. However, given the large sizes of current next generation sequencing (NGS) datasets, data uploading times are a major bottleneck. Additionally, for proprietary data, users may be uncomfortable with uploading entire raw datasets to an external server. Both these problems could be alleviated by providing a means by which users could pre-process their raw data locally, transferring only a small summary file to the ISMARA server. We developed a stand-alone client application that pre-processes large input files (RNA-seq or ChIP-seq data) on the user's computer, performing ISMARA analysis in a completely automated manner, including the upload of small processed summary files to the ISMARA server. This reduces file sizes by up to a factor of 1000, and upload times from many hours to mere seconds. The client application is available from ismara.unibas.ch/ISMARA/client. PMID:28232860
NASA Astrophysics Data System (ADS)
Umam, F.; Budiarto, H.
2018-01-01
Shrimp farming is the main commodity of communities on Madura Island, East Java, Indonesia. Because Madura Island has very extreme weather, farmers have difficulty keeping pond water balanced, and as a consequence some farmers have experienced losses. In this study an adaptive control system was developed using the ANFIS method to control pH (7.5-8.5), temperature (25-31°C), water level (70-120 cm) and dissolved oxygen (4-7.5 ppm). Each parameter (pH, temperature, level and DO) is controlled separately, but the controllers can work together. The output of the control system is the activation of pumps that counteract imbalances in the pond water. The system is built with two modes at once: an automatic mode and a manual mode. The manual control interface is Android-based and easy to use.
Semantic Analysis of Email Using Domain Ontologies and WordNet
NASA Technical Reports Server (NTRS)
Berrios, Daniel C.; Keller, Richard M.
2005-01-01
The problem of capturing and accessing knowledge in paper form has been supplanted by the problem of providing structure to vast amounts of electronic information. Systems that can automatically construct semantic links for natural language documents like email messages will be a crucial element of semantic email tools. We have designed an information extraction process that can leverage the knowledge already contained in an existing semantic web, recognizing references in email to existing nodes in a network of ontology instances by using linguistic knowledge and knowledge of the structure of the semantic web. We developed a heuristic score that uses several forms of evidence to detect references in email to existing nodes in the SemanticOrganizer repository's network. While these scores cannot directly support automated probabilistic inference, they can be used to rank nodes by relevance and link those deemed most relevant to email messages.
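Such a heuristic score can be sketched as a weighted sum of evidence types; the weights and node fields below are illustrative guesses, not the authors' actual features.

    def node_relevance(msg_tokens, node,
                       w_name=2.0, w_syn=1.0, w_neighbor=0.5):
        """Combine several forms of evidence into one heuristic score:
        exact name matches, synonym matches, and matches to tokens of
        neighbouring nodes in the ontology-instance network."""
        score = 0.0
        score += w_name * sum(t in msg_tokens for t in node["name_tokens"])
        score += w_syn * sum(t in msg_tokens for t in node.get("synonyms", []))
        for nbr_tokens in node.get("neighbors", []):
            score += w_neighbor * sum(t in msg_tokens for t in nbr_tokens)
        return score

    # rank candidate nodes by relevance to one email message
    # ranked = sorted(nodes, key=lambda n: node_relevance(tokens, n), reverse=True)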
Investigations on optimizing the energy transmission of ultrafast optical pulses in pure water
NASA Astrophysics Data System (ADS)
Lukofsky, David
Many of today's communication and imaging technologies share the common challenge of signal deterioration due to water's large absorption coefficient. As an example, it is water molecules that contaminate the fused silica of optical fibers and account for most of the absorption they exhibit at communication wavelengths. It is also water (in the form of vapor) that makes it challenging to devise practical THz spectroscopic systems. As such, this thesis examines how the transmission of electromagnetic radiation through water could be improved as a stepping stone towards bettering a wide array of communication and imaging applications. Recent time-domain approaches have noted the connection between pulse rise-time and precursor waveform absorption. This thesis represents the first in-depth analysis of precursors using an intuitive frequency-domain approach. It was shown with well-known physical metrics that precursors are a linear effect resulting from the temporal representation of a Beer's law of absorption for broadband pulses. Experimental validation was achieved with a spatial light modulator used in conjunction with Frequency-Resolved Optical Gating (FROG) to obtain the first measurement of the amplitude and phase of an optical precursor. The semi-classical two-level atom model was used to infer the transition dipole moments of the 1447 nm and 2.94 μm vibrational resonances of the medium. These values supported finite-difference time-domain simulations suggesting how 52 fs sech² pulses of 220 GW/cm² peak intensity could propagate with negligible attenuation over 15 absorption lengths when tuned to the 2.94 μm transition of water. Extensive use of 1550 nm lasers in communication systems and the presence of the second vibrational overtone resonance of water at 1447 nm were the motivation for transmission experiments completed at the Naval Research Laboratory (Washington, DC) at this transition. As much as a 500% increase in absolute transmission was observed in a 5 mm sample of distilled water when compared to steady-state transmission. Different causes for this increase in transmission were examined, including coherent and incoherent bleaching effects. Overall, this study reveals that efficient propagation of optical pulses in water requires pulses of near single-cycle duration and large intensities and/or fluence. While these large intensities would make it difficult to apply this work to medical imaging applications, there remains a window of opportunity for efficient underwater communication. Indeed, assuming a channel of water with few physical obstructions, the advent of sufficiently intense, robust, and high repetition-rate laser technology might one day lead to the implementation of a practical underwater communication link at optical wavelengths.
Saygin, Z M; Kliemann, D; Iglesias, J E; van der Kouwe, A J W; Boyd, E; Reuter, M; Stevens, A; Van Leemput, K; McKee, A; Frosch, M P; Fischl, B; Augustinack, J C
2017-07-15
The amygdala is composed of multiple nuclei with unique functions and connections in the limbic system and to the rest of the brain. However, standard in vivo neuroimaging tools to automatically delineate the amygdala into its multiple nuclei are still rare. By scanning postmortem specimens at high resolution (100-150 µm) at 7T field strength (n = 10), we were able to visualize and label nine amygdala nuclei (anterior amygdaloid, cortico-amygdaloid transition area, basal, lateral, accessory basal, central, cortical, medial, and paralaminar nuclei). We created an atlas from these labels using a recently developed atlas-building algorithm based on Bayesian inference. This atlas, which will be released as part of FreeSurfer, can be used to automatically segment nine amygdala nuclei from a standard-resolution structural MR image. We applied this atlas to two publicly available datasets (ADNI and ABIDE) with standard-resolution T1 data, used individual volumetric data of the amygdala nuclei as the measure, and found that our atlas i) discriminates between Alzheimer's disease participants and age-matched control participants with 84% accuracy (AUC=0.915), and ii) discriminates between individuals with autism and age-, sex- and IQ-matched neurotypically developed control participants with 59.5% accuracy (AUC=0.59). For both datasets, the new ex vivo atlas significantly outperformed (all p < .05) estimations of the whole amygdala derived from the segmentation in FreeSurfer 5.1 (ADNI: 75%, ABIDE: 54% accuracy), as well as classification based on whole amygdala volume (using the sum of all amygdala nuclei volumes; ADNI: 81%, ABIDE: 55% accuracy). This new atlas and the segmentation tools that utilize it will provide neuroimaging researchers with the ability to explore the function and connectivity of the human amygdala nuclei with unprecedented detail in healthy adults as well as those with neurodevelopmental and neurodegenerative disorders.
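The group-discrimination step reported above boils down to classification on per-subject nuclei volumes with an ROC/AUC readout. A self-contained sketch with synthetic volumes; the effect size and group sizes are invented.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import cross_val_predict

    # X: per-subject volumes of the nine segmented nuclei (n x 9),
    # y: diagnosis (1 = patient, 0 = control). Synthetic stand-ins.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(1.0, 0.15, (100, 9)),     # controls
                   rng.normal(0.9, 0.15, (100, 9))])    # mild atrophy
    y = np.r_[np.zeros(100), np.ones(100)]

    probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                              cv=5, method='predict_proba')[:, 1]
    print("AUC:", roc_auc_score(y, probs))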
Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals’ Behaviour
Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola
2016-01-01
Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs’ behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals’ quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog’s shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non-human animal behaviour science. Further improvements and validation are needed, and future applications and limitations are discussed. PMID:27415814
Automatic Identification of Web-Based Risk Markers for Health Events
Borsa, Diana; Hayward, Andrew C; McKendry, Rachel A; Cox, Ingemar J
2015-01-01
Background: The escalating cost of global health care is driving the development of new technologies to identify early indicators of an individual’s risk of disease. Traditionally, epidemiologists have identified such risk factors using medical databases and lengthy clinical studies, but these are often limited in size and cost, can fail to take full account of diseases where there are social stigmas, and can miss transient acute risk factors. Objective: Here we report that Web search engine queries coupled with information on Wikipedia access patterns can be used to infer health events associated with an individual user and automatically generate Web-based risk markers for some of the common medical conditions worldwide, from cardiovascular disease to sexually transmitted infections and mental health conditions, as well as pregnancy. Methods: Using anonymized datasets, we present methods to first distinguish individuals likely to have experienced specific health events and classify them into distinct categories. We then use the self-controlled case series method to find the incidence of health events in risk periods directly following a user’s search for a query category, and compare this to the incidence during other periods for the same individuals. Results: Searches for pet stores were risk markers for allergy. We also identified some possible new risk markers; for example, searching for fast food and theme restaurants was associated with a transient increase in risk of myocardial infarction, suggesting this exposure goes beyond a long-term risk factor and may also act as an acute trigger of myocardial infarction. Dating and adult content websites were risk markers for sexually transmitted infections, such as human immunodeficiency virus (HIV). Conclusions: Web-based methods provide a powerful, low-cost approach to automatically identify risk factors, and support more timely and personalized public health efforts to bring human and economic benefits. PMID:25626480
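The self-controlled case series comparison can be reduced to a rate ratio between post-exposure risk windows and the rest of each individual's observation time. A deliberately minimal sketch: it ignores overlapping windows and the age and seasonality effects a real SCCS analysis would model.

    def sccs_incidence_ratio(events, exposures, risk_window, total_time):
        """Minimal self-controlled case series: event rate in the
        `risk_window` days after each exposure vs. the rate over the
        rest of each individual's observation period."""
        risk_days = at_risk = baseline = 0
        for ev, ex, T in zip(events, exposures, total_time):
            windows = [(t, t + risk_window) for t in ex]
            in_risk = lambda e: any(a <= e < b for a, b in windows)
            at_risk += sum(in_risk(e) for e in ev)
            baseline += sum(not in_risk(e) for e in ev)
            risk_days += min(len(windows) * risk_window, T)  # crude
        base_days = sum(total_time) - risk_days
        return (at_risk / risk_days) / (baseline / base_days)

    # events/exposures: per-person lists of day indices; total_time: days observed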
Changes in O3 and NO2 due to emissions from Fracking in the UK.
NASA Astrophysics Data System (ADS)
Archibald, Alexander; Ordonez, Carlos
2016-04-01
Poor air quality is a problem that affects millions of people around the world. Understanding the driving forces behind air pollution is complicated, as the precursor gases which combine to produce air pollutants react in a highly non-linear manner and are subject to a range of atmospheric transport mechanisms compounded by the weather. A great deal of money has been spent on mitigating air pollution, so it is important to assess the impacts that new technologies that emit air pollutant precursors may have on local and regional air pollution. One of the most widely discussed new technologies that could impact air quality is the adoption of wide-scale hydraulic fracturing, or "fracking", for natural gas. Indeed, in regions of the USA where fracking is commonplace, large levels of ozone (O3 - a key air pollutant) have been observed and attributed directly to the fracking process. In this study, a numerical modelling framework was used to assess possible impacts of fracking in the UK, where at present no large-scale fracking facilities are in operation. A number of emissions scenarios were developed for the principal gas-phase air pollution precursors: the oxides of nitrogen (NOx) and volatile organic compounds (VOCs). These emissions scenarios were then used in a state-of-the-art numerical air quality model (the UK Met Office operational air quality forecasting model AQUM) to determine potential impacts of fracking on UK air quality. Comparison of base model results for the year 2013 with observations of NOx, O3 and VOCs from the UK Automatic Urban and Rural Network (AURN) showed that AQUM has good skill at simulating these gas-phase air pollutants (O3 r=0.64, NMGE=0.3; NO2 r=0.62, NMGE=0.51). Analysis of the simulations with fracking emissions demonstrates that there are large changes in 1-h max NO2 (11.6±6.6 ppb) with modest increases in monthly mean NO2 throughout the British Isles (150±100 ppt). These results highlight that stringent measures should be applied to prevent deleterious impacts on air quality from emissions related to fracking in the UK.
Revising the embryonic origin of thyroid C cells in mice and humans
Johansson, Ellen; Andersson, Louise; Örnros, Jessica; Carlsson, Therese; Ingeson-Carlsson, Camilla; Liang, Shawn; Dahlberg, Jakob; Jansson, Svante; Parrillo, Luca; Zoppoli, Pietro; Barila, Guillermo O.; Altschuler, Daniel L.; Padula, Daniela; Lickert, Heiko; Fagman, Henrik; Nilsson, Mikael
2015-01-01
Current understanding infers a neural crest origin of thyroid C cells, the major source of calcitonin in mammals and ancestors to neuroendocrine thyroid tumors. The concept is primarily based on investigations in quail–chick chimeras involving fate mapping of neural crest cells to the ultimobranchial glands that regulate Ca2+ homeostasis in birds, reptiles, amphibians and fishes, but whether mammalian C cell development involves a homologous ontogenetic trajectory has not been experimentally verified. With lineage tracing, we now provide direct evidence that Sox17+ anterior endoderm is the only source of differentiated C cells and their progenitors in mice. Like many gut endoderm derivatives, embryonic C cells were found to coexpress pioneer factors forkhead box (Fox) a1 and Foxa2 before neuroendocrine differentiation takes place. In the ultimobranchial body epithelium emerging from pharyngeal pouch endoderm in early organogenesis, differential Foxa1/Foxa2 expression distinguished two spatially separated pools of C cell precursors with different growth properties. A similar expression pattern was recapitulated in medullary thyroid carcinoma cells in vivo, consistent with a growth-promoting role of Foxa1. In contrast to embryonic precursor cells, C cell-derived tumor cells invading the stromal compartment downregulated Foxa2, foregoing epithelial-to-mesenchymal transition designated by loss of E-cadherin; both Foxa2 and E-cadherin were re-expressed at metastatic sites. These findings revise mammalian C cell ontogeny, expand the neuroendocrine repertoire of endoderm and redefine the boundaries of neural crest diversification. The data further underpin distinct functions of Foxa1 and Foxa2 in both embryonic and tumor development. PMID:26395490
Science verification of operational aerosol and cloud products for TROPOMI on Sentinel-5 precursor
NASA Astrophysics Data System (ADS)
Lelli, Luca; Gimeno-Garcia, Sebastian; Sanders, Abram; Sneep, Maarten; Rozanov, Vladimir V.; Kokhanvosky, Alexander A.; Loyola, Diego; Burrows, John P.
2016-04-01
With the approaching launch of the Sentinel-5 Precursor (S-5P) satellite, scheduled for mid-2016, one preparatory task of the L2 working group (composed of the Institute of Environmental Physics IUP Bremen, the Royal Netherlands Meteorological Institute KNMI De Bilt, and the German Aerospace Center DLR Oberpfaffenhofen) has been the assessment of biases among the aerosol and cloud products that are to be inferred by the respective algorithms from measurements of the platform's payload, the TROPOspheric Monitoring Instrument (TROPOMI). The instrument will measure terrestrial radiance at varying moderate spectral resolutions from the ultraviolet through the shortwave infrared. Specifically, all the operational and verification algorithms involved in this comparison exploit the sensitivity of molecular oxygen absorption (the A-band, 755-775 nm, with a resolution of 0.54 nm) to changes in optical and geometrical parameters of tropospheric scattering layers. Therefore, aerosol layer height (ALH) and thickness (AOT), cloud top height (CTH), thickness (COT) and albedo (CA) are the targeted properties. First, the verification of these properties was accomplished upon synchronisation of the respective forward radiative transfer models for a variety of atmospheric scenarios. Then, biases against independent techniques were evaluated with real measurements of selected GOME-2 orbits. Global seasonal bias assessment has been carried out for CTH, CA and COT, whereas the verification of ALH and AOT is based on the analysis of the ash plume emitted by the Icelandic volcanic eruption of Eyjafjallajökull in May 2010 and selected dust scenes off the Saharan west coast sensed by SCIAMACHY in 2009.
Evaluation of an artificial intelligence guided inverse planning system: clinical case study.
Yan, Hui; Yin, Fang-Fang; Willett, Christopher
2007-04-01
An artificial intelligence (AI) guided method for parameter adjustment in inverse planning was implemented on a commercial inverse treatment planning system. For evaluation purposes, four typical clinical cases were tested and the results of plans achieved by the automated and manual methods were compared. The procedure of parameter adjustment consists of three major loops, each in charge of modifying parameters of one category, which is carried out by a specially customized fuzzy inference system. Multiple physician-prescribed constraints for a selected volume were adopted to account for the tradeoff between the prescription dose to the PTV and dose-volume constraints for critical organs. The search for an optimal parameter combination began with the first constraint and proceeded to the next until a plan with acceptable dose was achieved. The initial setup of the plan parameters was the same for each case and was adjusted independently by both the manual and automated methods. After the parameters of one category were updated, the intensity maps of all fields were re-optimized and the plan dose was subsequently re-calculated. When the final plan was reached, dose statistics were calculated from both plans and compared. For the planned target volume (PTV), the dose to 95% of the volume is up to 10% higher in plans using the automated method than in those using the manual method. For critical organs, an average decrease in plan dose was achieved. However, the automated method cannot improve the plan dose for some critical organs due to limitations of the inference rules currently employed. For normal tissue, there was no significant difference between plan doses achieved by either the automated or the manual method. With the AI-guided method, the basic parameter adjustment task can be accomplished automatically, and a plan dose comparable to that of the manual method was achieved. Future improvements to incorporate case-specific inference rules are essential to fully automate the inverse planning process.
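The loop structure described above can be summarized as a skeleton; the `plan` object and all of its methods are hypothetical stand-ins for a planning system's API, not actual calls.

    def auto_adjust(plan, categories, acceptable, max_rounds=10):
        """Skeleton of the AI-guided adjustment: for each parameter
        category in turn, let a fuzzy-inference step propose new values,
        re-optimise the intensity maps, recompute the dose, and stop
        once all dose constraints are met. `plan` is hypothetical."""
        for _ in range(max_rounds):
            for cat in categories:               # one loop per category
                plan.update_parameters(cat)      # fuzzy inference proposes values
                plan.reoptimize_intensity_maps()
                plan.recompute_dose()
                if acceptable(plan.dose_statistics()):
                    return plan
        return plan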
A Proposal to Develop Interactive Classification Technology
NASA Technical Reports Server (NTRS)
deBessonet, Cary
1998-01-01
Research for the first year was oriented towards: 1) the design of an interactive classification tool (ICT); and 2) the development of an appropriate theory of inference for use in ICT technology. The general objective was to develop a theory of classification that could accommodate a diverse array of objects, including events and their constituent objects. Throughout this report, the term "object" is to be interpreted in a broad sense to cover any kind of object, including living beings, non-living physical things, events, even ideas and concepts. The idea was to produce a theory that could serve as the uniting fabric of a base technology capable of being implemented in a variety of automated systems. The decision was made to employ two technologies under development by the principal investigator, namely, SMS (Symbolic Manipulation System) and SL (Symbolic Language) [see deBessonet, 1991, for detailed descriptions of SMS and SL]. The plan was to enhance and modify these technologies for use in an ICT environment. As a means of giving focus and direction to the proposed research, the investigators decided to design an interactive, classificatory tool for use in building accessible knowledge bases for selected domains. Accordingly, the proposed research was divisible into tasks that included: 1) the design of technology for classifying domain objects and for building knowledge bases from the results automatically; 2) the development of a scheme of inference capable of drawing upon previously processed classificatory schemes and knowledge bases; and 3) the design of a query/search module for accessing the knowledge bases built by the inclusive system. The interactive tool for classifying domain objects was to be designed initially for textual corpora, with a view to having the technology eventually be used in robots to build sentential knowledge bases supported by inference engines specially designed for the natural or man-made environments in which the robots would be called upon to operate.
NASA Astrophysics Data System (ADS)
Dura-Gomez, I.; Addison, A.; Knapp, C. C.; Talwani, P.; Chapman, A.
2005-12-01
During the 1886 Charleston earthquake, two parallel tabby walls of Fort Dorchester broke left-laterally, and a strike of ~N25°W was inferred for the causative Sawmill Branch fault. To better define this fault, which has no surface expression, we planned to cut trenches across it. However, as Fort Dorchester is a protected archeological site, we were required to locate the fault accurately away from the fort before permission could be obtained to cut short trenches. The present GPR investigations were planned as a preliminary step to determine locations for trenching. A pulseEKKO 100 GPR was used to collect data along eight profiles (varying in length from 10 m to 30 m) that were run across the projected strike of the fault, and one 50 m long profile that was run parallel to it. The locations of the profiles were obtained using a total station. To capture the signature of the fault, sixteen common-offset (COS) lines were acquired using different antennas (50, 100 and 200 MHz) and stacking 64 times to increase the signal-to-noise ratio. The locations of trees and stumps were recorded. In addition, two common-midpoint (CMP) tests were carried out and gave an average velocity of about 0.097 m/ns. Processing included the subtraction of the low-frequency "wow" on the trace (dewow), automatic gain control (AGC) and the application of bandpass filters. The signals from the 50 MHz, 100 MHz and 200 MHz antennas were found to penetrate to about 30 meters, 20 meters and 12 meters, respectively. Vertically offset reflectors and disruptions of the electrical signal were used to infer the location of the fault(s). Comparisons of the locations of these disruptions on various lines were used to infer the presence of a N30°W fault zone. We plan to confirm these locations by cutting shallow trenches.
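Two of the processing steps named above, dewow and AGC, are simple to state in code. A sketch with illustrative cutoff and window values; the survey's actual parameters are not given in the abstract.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def dewow(trace, fs, fc=2e6):
        """Remove the low-frequency 'wow' with a high-pass Butterworth
        filter; fc is an illustrative cutoff, fs the sampling rate."""
        b, a = butter(2, fc / (fs / 2), btype='highpass')
        return filtfilt(b, a, trace)

    def agc(trace, win=50):
        """Automatic gain control: divide by a running RMS amplitude."""
        power = np.convolve(trace**2, np.ones(win) / win, mode='same')
        return trace / (np.sqrt(power) + 1e-12)

    # e.g. for a trace sampled at 1 GS/s: clean = agc(dewow(trace, fs=1e9))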
Challenges in Characterizing and Controlling Complex Cellular Systems
NASA Astrophysics Data System (ADS)
Wikswo, John
2011-03-01
Multicellular dynamic biological processes such as developmental differentiation, wound repair, disease, aging, and even homeostasis can be represented by trajectories through a phase space whose extent reflects the genetic, post-translational, and metabolic complexity of the process - easily extending to tens of thousands of dimensions. Intra- and inter-cellular sensing and regulatory systems and their nested, redundant, and non-linear feed-forward and feed-back controls create high-dimensioned attractors in this phase space. Metabolism provides free energy to drive non-equilibrium processes and dynamically reconfigure attractors. Studies of single molecules and cells provide only minimalist projections onto a small number of axes. It may be difficult to infer larger-scale emergent behavior from linearized experiments that perform only small amplitude perturbations on a limited number of the dimensions. Complete characterization may succeed for bounded component problems, such as an individual cell cycle or signaling cascade, but larger systems problems will require a coarse-grained approach. Hence a new experimental and analytical framework is needed. Possibly one could utilize high-amplitude, multi-variable driving of the system to infer coarse-grained, effective models, which in turn can be tested by their ability to control systems behavior. Navigation at will between attractors in a high-dimensioned dynamical system will provide not only detailed knowledge of the shape of attractor basins, but also measures of underlying stochastic events such as noise in gene expression or receptor binding and how both affect system stability and robustness. Needed for this are wide-bandwidth methods to sense and actuate large numbers of intracellular and extracellular variables and automatically and rapidly infer dynamic control models. The success of this approach may be determined by how broadly the sensors and actuators can span the full dimensionality of the phase space. Supported by the Defense Threat Reduction Agency HDTRA-09-1-0013, NIH National Institute on Drug Abuse RC2DA028981, the National Academies Keck Futures Initiative, and the Vanderbilt Institute for Integrative Biosystems Research and Education.
Bayesian Monitoring Systems for the CTBT: Historical Development and New Results
NASA Astrophysics Data System (ADS)
Russell, S.; Arora, N. S.; Moore, D.
2016-12-01
A project at Berkeley, begun in 2009 in collaboration with CTBTO and more recently with LLNL, has reformulated the global seismic monitoring problem in a Bayesian framework. A first-generation system, NETVISA, has been built comprising a spatial event prior and generative models of event transmission and detection, as well as a Monte Carlo inference algorithm. The probabilistic model allows for seamless integration of various disparate sources of information, including negative information (the absence of detections). Working from arrivals extracted by traditional station processing from International Monitoring System (IMS) data, NETVISA achieves a reduction of around 60% in the number of missed events compared with the currently deployed network processing system. It also finds many events that are missed by the human analysts who postprocess the IMS output. Recent improvements include the integration of models for infrasound and hydroacoustic detections and a global depth model for natural seismicity trained from ISC data. NETVISA is now fully compatible with the CTBTO operating environment. A second-generation model called SIGVISA extends NETVISA's generative model all the way from events to raw signal data, avoiding the error-prone bottom-up detection phase of station processing. SIGVISA's model automatically captures the phenomena underlying existing detection and location techniques such as multilateration, waveform correlation matching, and double-differencing, and integrates them into a global inference process that also (like NETVISA) handles de novo events. Initial results for the Western US in early 2008 (when the transportable US Array was operating) show that SIGVISA finds, from IMS data only, more than twice the number of events recorded in the CTBTO Late Event Bulletin (LEB). For mb 1.0-2.5, the ratio is more than 10; put another way, for this data set, SIGVISA lowers the detection threshold by roughly one magnitude compared to LEB. The broader message of this work is that probabilistic inference based on a vertically integrated generative model that directly expresses geophysical knowledge can be a much more effective approach for interpreting scientific data than the traditional bottom-up processing pipeline.
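The role of negative information in such a Bayesian system can be illustrated with a per-station log-odds sum: detecting stations contribute detection-versus-noise likelihood ratios, and silent stations contribute miss probabilities. All numbers below are invented for illustration.

    import numpy as np

    def event_log_odds(log_prior, station_terms):
        """Sum per-station evidence for a candidate event. Each station
        contributes log P(data | event) - log P(data | noise); a silent
        station contributes the (often negative) log-ratio of miss
        probabilities, so the absence of detections is evidence too."""
        return log_prior + sum(station_terms)

    # hypothetical: three detecting stations, two silent ones
    terms = [np.log(0.9 / 0.01), np.log(0.8 / 0.02), np.log(0.7 / 0.05),
             np.log(0.2 / 0.95), np.log(0.3 / 0.90)]
    print(event_log_odds(np.log(1e-4), terms))  # > 0 favors a real event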
Accurate airway centerline extraction based on topological thinning using graph-theoretic analysis.
Bian, Zijian; Tan, Wenjun; Yang, Jinzhu; Liu, Jiren; Zhao, Dazhe
2014-01-01
The quantitative analysis of the airway tree is of critical importance in the CT-based diagnosis and treatment of prevalent pulmonary diseases. The extraction of the airway centerline is a precursor to identifying the airway's hierarchical structure, measuring geometrical parameters, and guiding visualized detection. Traditional methods suffer from extra branches and circles due to incomplete segmentation results, which induce false analysis in applications. This paper proposes an automatic and robust centerline extraction method for the airway tree. First, the centerline is located based on the topological thinning method: border voxels are deleted symmetrically and iteratively so as to preserve topological and geometrical properties. Second, the structural information is generated using graph-theoretic analysis. Then inaccurate circles are removed with a distance-weighting strategy, and extra branches are pruned according to clinical anatomic knowledge. The centerline region without false appendices is eventually determined after the described phases. Experimental results show that the proposed method identifies more than 96% of branches, keeps consistency across different cases, and achieves a superior circle-free structure and centrality.
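A rough analogue of the thinning/graph/pruning pipeline can be assembled from standard tools. This sketch uses scikit-image skeletonization and a spanning tree to break circles, which is our simplification rather than the paper's distance-weighting strategy; for 3D volumes, use the 3D-capable skeletonization in recent scikit-image.

    import numpy as np
    import networkx as nx
    from skimage.morphology import skeletonize

    def centerline_tree(mask, min_spur=5):
        """Thin a binary airway mask, link adjacent skeleton voxels into
        a graph, keep a spanning tree (removes circles), prune short spurs."""
        skel = skeletonize(mask)                   # topological thinning
        pts = [tuple(p) for p in np.argwhere(skel)]
        G = nx.Graph()
        G.add_nodes_from(pts)
        for p in pts:                              # O(n^2): fine for a sketch
            for q in pts:
                if p < q and max(abs(a - b) for a, b in zip(p, q)) == 1:
                    G.add_edge(p, q)
        T = nx.minimum_spanning_tree(G)            # breaks inaccurate circles
        for leaf in [n for n in T if T.degree(n) == 1]:
            if leaf not in T:
                continue
            path = [leaf]
            while T.degree(path[-1]) <= 2:
                nxt = [n for n in T.neighbors(path[-1]) if n not in path]
                if not nxt:
                    break
                path.append(nxt[0])
            if len(path) < min_spur:               # prune extra branches
                T.remove_nodes_from(path[:-1])
        return T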
Kaddi, Chanchala D.; Bennett, Rachel V.; Paine, Martin R. L.; Banks, Mitchel D.; Weber, Arthur L.; Fernández, Facundo M.; Wang, May D.
2016-01-01
Full characterization of complex reaction mixtures is necessary to understand mechanisms, optimize yields, and elucidate secondary reaction pathways. Molecular-level information for species in such mixtures can be readily obtained by coupling mass spectrometry imaging (MSI) with thin layer chromatography (TLC) separations. User-guided investigation of imaging data for mixture components with known m/z values is generally straightforward; however, spot detection for unknowns is highly tedious, and limits the applicability of MSI in conjunction with TLC. To accelerate imaging data mining, we developed DetectTLC, an approach that automatically identifies m/z values exhibiting TLC spot-like regions in MS molecular images. Furthermore, DetectTLC can also spatially match m/z values for spots acquired during alternating high and low collision-energy scans, pairing product ions with precursors to enhance structural identification. As an example, DetectTLC is applied to the identification and structural confirmation of unknown, yet significant, products of abiotic pyrazinone and aminopyrazine nucleoside analog synthesis. PMID:26508443
NASA Astrophysics Data System (ADS)
Nygren, David
2015-10-01
To proceed toward effective "discovery class" ton-scale detectors in the search for neutrinoless double beta decay, a robust technique for rejection of all radioactivity-induced backgrounds is urgently needed. An efficient technique for detection of the barium daughter in the decay 136Xe → 136Ba + 2e- would provide a long-sought pathway toward this goal. Single-molecule fluorescent imaging appears to offer a new way to detect the barium daughter atom, which emerges naturally in an ionized state in pure xenon. A doubly charged barium ion can initiate a chelation process with a non-fluorescent precursor molecule, leading to a highly fluorescent complex. Repeated photo-excitation of the complex can reveal both the presence and the location of a single ionized atom with high precision and selectivity. Detection within the active volume of a xenon gas Time Projection Chamber operating at high pressure would be automatic, and with a capability for redundant confirmation.
Array magnetics modal analysis for the DIII-D tokamak based on localized time-series modelling
Olofsson, K. Erik J.; Hanson, Jeremy M.; Shiraki, Daisuke; ...
2014-07-14
Here, time-series analysis of magnetics data in tokamaks is typically done using block-based fast Fourier transform methods. This work presents the development and deployment of a new set of algorithms for magnetic probe array analysis. The method is based on an estimation technique known as stochastic subspace identification (SSI). Compared with the standard coherence approach or the direct singular value decomposition approach, the new technique exhibits several beneficial properties. For example, the SSI method does not require that frequencies be orthogonal with respect to the timeframe used in the analysis. Frequencies are obtained directly as parameters of localized time-series models. The parameters are extracted by solving small-scale eigenvalue problems. Applications include maximum-likelihood regularized eigenmode pattern estimation; detection of neoclassical tearing modes, including locked mode precursors; automatic clustering of modes; and magnetics-pattern characterization of sawtooth pre- and postcursors, edge harmonic oscillations and fishbones.
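The core of such subspace methods, frequencies as parameters of a localized model extracted via a small eigenvalue problem, can be shown with an ESPRIT-flavoured sketch (a relative of SSI, not the authors' exact algorithm). Note that the recovered frequencies need not be orthogonal to the window length, unlike block FFT bins.

    import numpy as np

    def subspace_frequencies(y, dt, order=4, rows=40):
        """Hankel matrix -> dominant left singular vectors -> shift
        equation -> eigenvalues of a small state matrix -> frequencies."""
        cols = len(y) - rows + 1
        H = np.column_stack([y[i:i + rows] for i in range(cols)])
        U = np.linalg.svd(H, full_matrices=False)[0][:, :order]
        A = np.linalg.pinv(U[:-1]) @ U[1:]      # small-scale eigenvalue problem
        poles = np.linalg.eigvals(A)
        return np.sort(np.abs(np.angle(poles))) / (2 * np.pi * dt)

    # e.g. two modes at 3.1 and 7.4 kHz sampled at 100 kHz for 2 ms:
    t = np.arange(200) * 1e-5
    y = np.sin(2*np.pi*3100*t) + 0.5*np.sin(2*np.pi*7400*t)
    print(subspace_frequencies(y, 1e-5))  # ~ [3100, 3100, 7400, 7400]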
NASA Astrophysics Data System (ADS)
Hong, Pengyu; Sun, Hui; Sha, Long; Pu, Yi; Khatri, Kshitij; Yu, Xiang; Tang, Yang; Lin, Cheng
2017-08-01
A major challenge in glycomics is the characterization of complex glycan structures that are essential for understanding their diverse roles in many biological processes. We present a novel, efficient computational approach, named GlycoDeNovo, for accurate elucidation of glycan topologies from their tandem mass spectra. Given a spectrum, GlycoDeNovo first builds an interpretation-graph specifying how to interpret each peak using preceding interpreted peaks. It then reconstructs the topologies of peaks that contribute to interpreting the precursor ion. We theoretically prove that GlycoDeNovo is highly efficient. A major innovative feature added to GlycoDeNovo is a data-driven IonClassifier, which can be used to effectively rank candidate topologies. IonClassifier is automatically learned from experimental spectra of known glycans to distinguish B- and C-type ions from all other ion types. Our results showed that GlycoDeNovo is robust and accurate for topology reconstruction of glycans from their tandem mass spectra.
Extracting semantically enriched events from biomedical literature
Miwa, Makoto; Thompson, Paul; McNaught, John; Kell, Douglas B; Ananiadou, Sophia
2012-05-23
Background: Research into event-based text mining from the biomedical literature has been growing in popularity to facilitate the development of advanced biomedical text mining systems. Such technology permits advanced search, which goes beyond document or sentence-based retrieval. However, existing event-based systems typically ignore additional information within the textual context of events that can determine, amongst other things, whether an event represents a fact, hypothesis, experimental result or analysis of results, whether it describes new or previously reported knowledge, and whether it is speculated or negated. We refer to such contextual information as meta-knowledge. The automatic recognition of such information can permit the training of systems allowing finer-grained searching of events according to the meta-knowledge that is associated with them. Results: Based on a corpus of 1,000 MEDLINE abstracts, fully manually annotated with both events and associated meta-knowledge, we have constructed a machine learning-based system that automatically assigns meta-knowledge information to events. This system has been integrated into EventMine, a state-of-the-art event extraction system, in order to create a more advanced system (EventMine-MK) that not only extracts events from text automatically, but also assigns five different types of meta-knowledge to these events. The meta-knowledge assignment module of EventMine-MK performs with macro-averaged F-scores in the range of 57-87% on the BioNLP’09 Shared Task corpus. EventMine-MK has been evaluated on the BioNLP’09 Shared Task subtask of detecting negated and speculated events. Our results show that EventMine-MK can outperform other state-of-the-art systems that participated in this task. Conclusions: We have constructed the first practical system that extracts both events and associated, detailed meta-knowledge information from biomedical literature. The automatically assigned meta-knowledge information can be used to refine search systems, in order to provide an extra search layer beyond entities and assertions, dealing with phenomena such as rhetorical intent, speculations, contradictions and negations. This finer-grained search functionality can assist in several important tasks, e.g., database curation (by locating new experimental knowledge) and pathway enrichment (by providing information for inference). To allow easy integration into text mining systems, EventMine-MK is provided as a UIMA component that can be used in the interoperable text mining infrastructure, U-Compare. PMID:22621266
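Assigning meta-knowledge such as speculation to event contexts is, at its core, text classification. A toy sketch; the real system uses far richer features and five meta-knowledge dimensions, and the example sentences and labels below are invented.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy event contexts with a speculation label.
    texts = ["these results suggest that X may activate Y",
             "X phosphorylates Y",
             "it is possible that X inhibits Y",
             "X binds Y in vitro"]
    labels = ["speculated", "asserted", "speculated", "asserted"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression())
    clf.fit(texts, labels)
    print(clf.predict(["X might regulate Y"]))  # expected: ['speculated']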
Chen, Longfei; Li, Yingying; Zhang, Qian; Wang, Dan; Akhberdi, Oren; Wei, Dongsheng; Pan, Jiao; Zhu, Xudong
2017-02-01
Pestalotiollide B, an analog of the dibenzodioxocinones, which are inhibitors of cholesterol ester transfer proteins, is produced by Pestalotiopsis microspora NK17. To increase the production of pestalotiollide B, we attempted to eliminate competing polyketide products by deleting the genes responsible for their biosynthesis. We successfully deleted 41 of 48 putative polyketide synthases (PKSs) in the genome of NK17. Nine of the 41 PKS-deleted strains showed significantly increased production of pestalotiollide B (P < 0.05). For instance, deletion of pks35 led to an 887% increase in pestalotiollide B. We infer that these nine PKSs likely feed branch pathways that compete with pestalotiollide B for precursors, or that convert the product. Deletion of some other PKS genes, such as pks8, led to a significant decrease in pestalotiollide B, suggesting they are involved in its biosynthesis. Our data demonstrate that pestalotiollide B production can be improved by eliminating competing polyketides.
Wilson, Rachel L; Simion, Cristian Eugen; Blackman, Christopher S; Carmalt, Claire J; Stanoiu, Adelina; Di Maggio, Francesco; Covington, James A
2018-03-01
Analyte sensitivity for gas sensors based on semiconducting metal oxides should be highly dependent on film thickness, particularly when that thickness is on the order of the Debye length. This thickness dependence has previously been demonstrated for SnO₂ and inferred for TiO₂. In this paper, TiO₂ thin films were prepared by Atomic Layer Deposition (ALD) using titanium isopropoxide and water as precursors. The deposition process was performed on standard alumina gas sensor platforms and microscope slides (for analysis purposes) at a temperature of 200 °C. The TiO₂ films were exposed to different concentrations of CO, CH₄, NO₂, NH₃ and SO₂ to evaluate their gas sensitivities. These experiments showed that the TiO₂ film thickness plays a dominant role in the conduction mechanism, and the pattern of the electrical resistance response to CH₄ and NH₃ exposure indicated typical n-type semiconducting behavior. The effect of relative humidity on the gas sensitivity has also been demonstrated.
NASA Technical Reports Server (NTRS)
Armstrong, J. T.; El Goresy, A.; Wasserburg, G. J.
1985-01-01
The structure and composition of Willy, a 150-micron-diameter Fremdling in CAI 5241 from the Allende meteorite, are investigated using optical, secondary-electron, and electron-backscatter microscopy and electron-microprobe analysis. The results are presented in diagrams, maps, tables, graphs, and micrographs and compared with those for other Allende Fremdlinge. Willy is found to have a concentric-zone structure comprising a complex porous core of magnetite, metal, sulfide, scheelite, and other minor phases; a compact magnetite-apatite mantle; a thin (20 microns or less) reaction-assemblage zone; and a dense outer rim of fassaite with minor spinel. A multistage formation sequence involving changes in T and fO2 and preceding the introduction of Willy into the CAI (which itself preceded CAI spinel and silicate formation) is postulated, and it is inferred from the apparent lack of post-capture recrystallization that Willy has not been subjected to temperatures in excess of 600 C and may represent the precursor material for many other Fremdlinge.
McCarthy, Ryan C; Park, Yun-Hee; Kosman, Daniel J
2014-01-01
A sequence within the E2 domain of soluble amyloid precursor protein (sAPP) stimulates iron efflux. This activity has been attributed to a ferroxidase activity suggested for this motif. We demonstrate that the stimulation of efflux supported by this peptide and by sAPPα is due to their stabilization of the ferrous iron exporter, ferroportin (Fpn), in the plasma membrane of human brain microvascular endothelial cells (hBMVEC). The peptide does not bind ferric iron explaining why it does not and thermodynamically cannot promote ferrous iron autoxidation. This peptide specifically pulls Fpn down from the plasma membrane of hBMVEC; based on these results, FTP, for ferroportin-targeting peptide, correctly identifies the function of this peptide. The data suggest that in stabilizing Fpn via the targeting due to the FTP sequence, sAPP will increase the flux of iron into the cerebral interstitium. This inference correlates with the observation of significant iron deposition in the amyloid plaques characteristic of Alzheimer’s disease. PMID:24867889
Thermal Conductivity and Thermopower near the 2D Metal-Insulator transition, Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarachik, Myriam P.
2015-02-20
STUDIES OF STRONGLY-INTERACTING 2D ELECTRON SYSTEMS – There is a great deal of current interest in the properties of systems in which the interaction between electrons (their potential energy) is large compared to their kinetic energy. We have investigated an apparent, unexpected metal-insulator transition inferred from the behavior of the temperature dependence of the resistivity; moreover, detailed analysis of the behavior of the magnetoresistance suggests that the electrons’ effective mass diverges, supporting this scenario. Whether this is a true phase transition or crossover behavior has been strenuously debated over the past 20 years. Our measurements have now shown that the thermoelectric power of these 2D materials diverges at a finite density, providing clear evidence that this is, in fact, a phase transition to a new low-density phase which may be a precursor of, or a direct transition to, the long sought-after electronic crystal predicted by Eugene Wigner in 1934.
Hu, Qi-Hou; Xie, Zhou-Qing; Wang, Xin-Ming; Kang, Hui; He, Quan-Fu; Zhang, Pengfei
2013-01-01
Isoprene and monoterpenes are important precursors of secondary organic aerosols (SOA) over the continents. However, their contributions to aerosols over the oceans are still inconclusive. Here we analyzed SOA tracers from isoprene and monoterpenes in aerosol samples collected over oceans during the Chinese Arctic and Antarctic Research Expeditions. Combined with literature reports elsewhere, we found that the dominant tracers are the oxidation products of isoprene. The concentrations of tracers varied considerably; the mean values were approximately one order of magnitude higher in the Northern Hemisphere than in the Southern Hemisphere. High values were generally observed in coastal regions, a phenomenon ascribed to outflow from continental sources. High levels of isoprene can be emitted from the oceans and consequently have a significant impact on marine SOA, as inferred from isoprene SOA during phytoplankton blooms, which may abruptly increase up to 95 ng/m3 in the boundary layer over remote oceans. PMID:23880782
Visualizing heavy fermion confinement and Pauli-limited superconductivity in layered CeCoIn5
Gyenis, András; Feldman, Benjamin E.; Randeria, Mallika T.; ...
2018-02-07
Layered material structures play a key role in enhancing electron–electron interactions to create correlated metallic phases that can transform into unconventional superconducting states. The quasi-two-dimensional electronic properties of such compounds are often inferred indirectly through examination of bulk properties. Here we use scanning tunneling microscopy to directly probe in cross-section the quasi-two-dimensional electronic states of the heavy fermion superconductor CeCoIn5. Our measurements reveal the strongly confined nature of the quasiparticles, the anisotropy of the tunneling characteristics, and the layer-by-layer modulated behavior of the precursor pseudogap phase. In the interlayer-coupled superconducting state, the orientation of line defects relative to the d-wave order parameter determines whether in-gap states form due to scattering. Spectroscopic imaging of the anisotropic magnetic vortex cores directly characterizes the short interlayer superconducting coherence length and shows an electronic phase separation near the upper critical in-plane magnetic field, consistent with a Pauli-limited first-order phase transition into a pseudogap phase.
NASA Astrophysics Data System (ADS)
Frederickson, Kraig; Musci, Ben; Rich, J. William; Adamovich, Igor
2015-09-01
Recent results demonstrating the formation of vibrationally excited carbon monoxide from carbon vapor and molecular oxygen will be presented. Previous reaction dynamics simulations and crossed molecular beam experiments have shown that the gas-phase reaction of carbon atoms and molecular oxygen produces vibrationally excited carbon monoxide. The present work examines the product distribution of this reaction in a collision-dominated environment, at a pressure of several Torr. Carbon vapor is produced in an AC arc discharge in an argon buffer, operated at a voltage of approximately 1 kV and a current of 10 A, and mixed with molecular oxygen, which may also be excited by an auxiliary RF discharge, in a flowing chemical reactor. Identification of the chemical reaction products and inference of their vibrational populations are performed by comparing infrared emission spectra of the flow in the reactor, taken by a Fourier transform IR spectrometer, with synthetic spectra. Estimates of the vibrationally excited carbon monoxide concentration and relative vibrational level populations will be presented.
Lin, Tao; Zhou, Dongju; Yu, Shilin; Chen, Wei
2016-09-01
The removal of 2,2-dichloroacetamide (DCAcAm), a new disinfection by-product (DBP), in a conventional drinking water treatment plant (C-DWTP) and an advanced DWTP (ADWTP) was studied with a new maximum formation potential (MFP) procedure. The advanced treatment displayed greater removal efficiency towards DCAcAm formation potential than the conventional treatment. Hydrophilic natural organic matter, and natural organic matter with molecular weight <1 kDa or >10 kDa, led to more DCAcAm formation, and aromatic proteins were inferred to be one component of the DCAcAm precursors. DCAcAm was found to cause delayed development and malformation in zebrafish embryos at the embryonic growth stage. Compared with cardiac toxicity, it caused significant neuronal toxicity. It could also cause acute DNA damage in adult zebrafish, which warrants particular caution.
Sieve-based relation extraction of gene regulatory networks from biological literature
Žitnik, Slavko; Žitnik, Marinka; Zupan, Blaž; Bajec, Marko
2015-01-01
Background Relation extraction is an essential procedure in literature mining. It focuses on extracting semantic relations between parts of text, called mentions. Biomedical literature includes an enormous amount of textual descriptions of biological entities, their interactions and results of related experiments. To extract them in an explicit, computer-readable format, these relations were at first extracted manually from databases. Manual curation was later replaced with automatic or semi-automatic tools with natural language processing capabilities. The current challenge is the development of information extraction procedures that can directly infer more complex relational structures, such as gene regulatory networks. Results We develop a computational approach for extraction of gene regulatory networks from textual data. Our method is designed as a sieve-based system and uses linear-chain conditional random fields and rules for relation extraction. With this method we successfully extracted the sporulation gene regulation network in the bacterium Bacillus subtilis for the information extraction challenge at the BioNLP 2013 conference. To enable extraction of distant relations using first-order models, we transform the data into skip-mention sequences. We infer multiple models, each of which is able to extract a different relationship type. Following the shared task, we conducted additional analysis using different system settings that reduced the reconstruction error of the bacterial sporulation network from 0.73 to 0.68, measured as the slot error rate between the predicted and the reference network. We observe that all relation extraction sieves contribute to the predictive performance of the proposed approach. Also, features constructed by considering mention words and their prefixes and suffixes are the most important features for higher extraction accuracy. Analysis of distances between different mention types in the text shows that our choice of transforming data into skip-mention sequences is appropriate for detecting relations between distant mentions. Conclusions Linear-chain conditional random fields, along with appropriate data transformations, can be efficiently used to extract relations. The sieve-based architecture simplifies the system, as new sieves can be easily added or removed and each sieve can utilize the results of previous ones. Furthermore, sieves with conditional random fields can be trained on arbitrary text data and hence are applicable to a broad range of relation extraction tasks and data domains. PMID:26551454
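To make the skip-mention idea concrete, a minimal sketch follows, assuming a hypothetical list of mention tokens; the authors' exact preprocessing may differ. For a skip distance s, the mention sequence is split into s interleaved subsequences, so mentions that are s positions apart become adjacent and therefore visible to a first-order (linear-chain) model.

# Minimal sketch of a skip-mention transformation (illustrative only).
def skip_mention_sequences(mentions, max_skip=3):
    sequences = []
    for s in range(1, max_skip + 1):
        for offset in range(s):
            seq = mentions[offset::s]  # every s-th mention, shifted by offset
            if len(seq) > 1:
                sequences.append((s, seq))
    return sequences

mentions = ["sigF", "regulates", "spoIIGA", "and", "represses", "kinA"]
for skip, seq in skip_mention_sequences(mentions):
    print(skip, seq)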
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Guo Qiang; Luo, Lingyun; Ogbuji, Chime
The interaction of multiple types of relationships among anatomical classes in the Foundational Model of Anatomy (FMA) can provide inferred information valuable for quality assurance. This paper introduces a method called Motif Checking (MOCH) to study the effects of such multi-relation type interactions. MOCH represents patterns of multi-type interaction as small labeled sub-graph motifs, whose nodes represent class variables and whose labeled edges represent relational types. By representing the FMA as an RDF graph and motifs as SPARQL queries, fragments of the FMA are automatically obtained as auditing candidates. Leveraging the scalability and reconfigurability of Semantic Web technology (OWL, RDF and SPARQL) and Virtuoso, we performed exhaustive analyses of three 2-node motifs, resulting in 638 matching FMA configurations, and twelve 3-node motifs, resulting in 202,960 configurations. Using the Principal Ideal Explorer (PIE) methodology as an extension of MOCH, we were able to identify 755 root nodes, with 4,100 respective descendants, having opposing antonyms in their class names for arbitrary-length motifs. With possible disjointness implied by antonyms, we performed manual inspection of a subset of the resulting FMA fragments and tracked down a source of abnormal inferred conclusions (captured by the motifs): a gender-neutral class modeled as a part of a gender-specific class, such as "Urinary system" being a part of "Female human body." Our results demonstrate that MOCH and PIE provide a unique source of valuable information for quality assurance. Since our approach is general, it is applicable to any ontological system with an OWL representation.
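A minimal sketch of motif checking as a SPARQL query over an RDF graph (using rdflib in Python) is shown below. The file name and the FMA predicate URIs (fma:part_of, fma:branch_of) are placeholders; the motifs actually audited by MOCH are defined in the paper.

# Illustrative sketch: a 2-node motif as a SPARQL query (rdflib).
from rdflib import Graph

g = Graph()
g.parse("fma.owl")  # hypothetical local copy of the FMA in RDF

# 2-node motif: classes X and Y connected by two relation types at once,
# a pattern returned as an auditing candidate.
motif = """
PREFIX fma: <http://purl.org/sig/ont/fma/>
SELECT ?x ?y WHERE {
    ?x fma:part_of ?y .
    ?x fma:branch_of ?y .
}
"""
for x, y in g.query(motif):
    print(x, y)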
Empirical likelihood inference in randomized clinical trials.
Zhang, Biao
2017-01-01
In individually randomized controlled trials, in addition to the primary outcome, information is often available on a number of covariates prior to randomization. This information is frequently utilized to adjust for baseline characteristics in order to increase the precision of estimated average treatment effects; such adjustment is usually performed via covariate adjustment in outcome regression models. Although covariate adjustment is widely seen as desirable for making treatment effect estimates more precise and the corresponding hypothesis tests more powerful, there are considerable concerns that objective inference in randomized clinical trials can potentially be compromised. In this paper, we study an empirical likelihood approach to covariate adjustment and propose two unbiased estimating functions that automatically decouple evaluation of average treatment effects from regression modeling of covariate-outcome relationships. The resulting empirical likelihood estimator of the average treatment effect is as efficient as existing efficient adjusted estimators when separate treatment-specific working regression models are correctly specified, and it remains at least as efficient for any given treatment-specific working regression models, whether or not they coincide with the true treatment-specific covariate-outcome relationships. We present a simulation study comparing the finite sample performance of various methods, along with results from the analysis of a data set from an HIV clinical trial. The simulation results indicate that the proposed empirical likelihood approach is more efficient and powerful than its competitors when the working covariate-outcome relationships by treatment status are misspecified.
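As a point of reference for how an estimating function can decouple treatment-effect evaluation from outcome regression, the standard augmented (doubly robust) form is sketched below; this illustrates the decoupling property and is not necessarily the paper's exact construction. With treatment indicator T, known randomization probability π, outcome Y, covariates X, working regressions h1 and h0, and average treatment effect Δ:

\[
g(Y, T, X; \Delta) = \frac{T\{Y - h_1(X)\}}{\pi} \;-\; \frac{(1 - T)\{Y - h_0(X)\}}{1 - \pi} \;+\; h_1(X) - h_0(X) - \Delta
\]

Because π is fixed by the trial design, E[g] = 0 at the true Δ for any choice of h_1 and h_0, which is exactly the "unbiased whether or not the working models are correct" property described above.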
Bayesian soft X-ray tomography using non-stationary Gaussian Processes
NASA Astrophysics Data System (ADS)
Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.
2013-08-01
In this study, a Bayesian non-stationary Gaussian Process (GP) method for inferring the soft X-ray emissivity distribution, along with its associated uncertainties, has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behavior in nuclear fusion plasmas, it is important to infer spatially resolved soft X-ray profiles, especially in the plasma center, from a limited number of noisy line-integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to conventional methods, the prior regularization is expressed in probabilistic form, which enhances the capability for uncertainty analysis; consequently, scientists concerned with the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumptions can be optimized through a Bayesian Occam's razor formalism, thereby automatically adjusting the model complexity. This method is shown to produce convincing reconstructions, in good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
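For the linear-Gaussian setting described above, the analytic posterior takes the textbook Gaussian-conditioning form. Writing the discretized line-integral forward operator as a matrix R, the (non-stationary) GP prior covariance as K, the noise covariance as Σ, and the measurements as d = Rf + n (notation here is generic rather than taken from the paper):

\[
\mu_{\mathrm{post}} = K R^{\top} \left(R K R^{\top} + \Sigma\right)^{-1} d, \qquad
C_{\mathrm{post}} = K - K R^{\top} \left(R K R^{\top} + \Sigma\right)^{-1} R K
\]

This is consistent with the abstract's statement that the posterior mean and covariance are analytically available, making inversion and uncertainty calculation fast.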
Bishop, Christopher M.
2013-01-01
Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications. PMID:23277612
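A toy illustration of the model-based idea follows: the model is written once as a generative specification, and a generic inference routine consumes it. Infer.NET itself is a C# framework, so this Python sketch mimics the spirit of probabilistic programming without using any real framework API; all names here are illustrative.

# Toy model-based ML: a model (likelihood) plus a generic grid-based
# inference engine, instead of a bespoke algorithm per application.
import math

def likelihood(theta, data):
    # Bernoulli model: theta is the success probability.
    return math.prod(theta if x else (1.0 - theta) for x in data)

def grid_posterior(likelihood, data, grid_size=1000):
    # Uniform prior on [0, 1]; posterior by brute-force normalization.
    grid = [(i + 0.5) / grid_size for i in range(grid_size)]
    weights = [likelihood(t, data) for t in grid]
    z = sum(weights)
    return grid, [w / z for w in weights]

data = [1, 1, 0, 1, 0, 1, 1]
grid, post = grid_posterior(likelihood, data)
mean = sum(t * p for t, p in zip(grid, post))
print(f"posterior mean of theta: {mean:.3f}")  # ~0.667 for 5/7 successes

Swapping in a different likelihood changes the application without touching the inference routine, which is the separation of model and inference that the abstract advocates.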
Wright, Adam; Laxmisan, Archana; Ottosen, Madelene J; McCoy, Jacob A; Butten, David; Sittig, Dean F
2012-01-01
Objective We describe a novel crowdsourcing method for generating a knowledge base of problem–medication pairs that takes advantage of manually asserted links between medications and problems. Methods Through iterative review, we developed metrics to estimate the appropriateness of manually entered problem–medication links for inclusion in a knowledge base that can be used to infer previously unasserted links between problems and medications. Results Clinicians manually linked 231 223 medications (55.30% of prescribed medications) to problems within the electronic health record, generating 41 203 distinct problem–medication pairs, although not all were accurate. We developed methods to evaluate the accuracy of the pairs, and after limiting the pairs to those meeting an estimated 95% appropriateness threshold, 11 166 pairs remained. The pairs in the knowledge base accounted for 183 127 total links asserted (76.47% of all links). Retrospective application of the knowledge base linked 68 316 medications not previously linked by a clinician to an indicated problem (36.53% of unlinked medications). Expert review of the combined knowledge base, including inferred and manually linked problem–medication pairs, found a sensitivity of 65.8% and a specificity of 97.9%. Conclusion Crowdsourcing is an effective, inexpensive method for generating a knowledge base of problem–medication pairs that is automatically mapped to local terminologies, up-to-date, and reflective of local prescribing practices and trends. PMID:22582202
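A sketch of the knowledge-base assembly step follows. The appropriateness metric and the 95% threshold estimation procedure are defined in the paper; the score and threshold below are placeholders used only to show the filtering mechanics.

# Sketch: assemble a problem-medication knowledge base from crowdsourced
# links by keeping only pairs above an appropriateness cutoff.
from collections import Counter

links = [
    ("lisinopril", "hypertension"), ("lisinopril", "hypertension"),
    ("lisinopril", "cough"),        ("metformin", "type 2 diabetes"),
    ("metformin", "type 2 diabetes"),
]

pair_counts = Counter(links)
med_counts = Counter(med for med, _ in links)

def appropriateness(pair):
    # Placeholder score: fraction of a medication's links asserting this pair.
    med, _ = pair
    return pair_counts[pair] / med_counts[med]

THRESHOLD = 0.60  # stand-in for the paper's estimated cutoff
knowledge_base = {p for p in pair_counts if appropriateness(p) >= THRESHOLD}
print(knowledge_base)  # keeps the frequently co-asserted pairs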
Lightning Mapping Observations: What we are learning.
NASA Astrophysics Data System (ADS)
Krehbiel, P.
2001-12-01
The use of radio frequency time-of-arrival techniques for accurately mapping lightning discharges is revolutionizing our ability to study lightning discharge processes and to investigate thunderstorms. Different types of discharges are being observed that we have not been able to study before or did not know existed. Included are a variety of inverted and normal polarity intracloud and cloud-to-ground discharges, frequent short-duration discharges at high altitude in storms and in overshooting convective tops, highly energetic impulsive discharge events, and horizontally extensive 'spider' lightning discharges in large mesoscale convective systems. High-time-resolution measurements valuably complement interferometric observations and are starting to exceed the ability of interferometers to provide detailed pictures of flash development. Mapping observations can be used to infer the polarity of the breakdown channels and hence the location and sign of charge regions in the storm. The lightning activity in large, severe storms is found to be essentially continuous and volume-filling, with substantially more lightning inside the storm than between the cloud and ground. Spectacular dendritic structures are observed in many flashes. The lightning observations can be used to infer the electrical structure of a storm and therefore to study the electrification processes. The results are raising fundamental questions about how storms become electrified and how the electrification evolves with time. Supercell storms are commonly observed to electrify in an inverted or anomalous manner, raising questions about how these storms differ from normal storms, and even about what is 'normal'. The high lightning rates in severe storms raise the distinct possibility that the discharges themselves might be sustaining or enhancing the electrification. Correlated observations with radar, instrumented balloons and aircraft, and ground-based measurements are leading to greatly improved understanding of the electrical processes in storms. The mapping observations also provide possible diagnostics of storm type and severity. Lightning 'holes' are observed as storms intensify and are robust indicators of strong updrafts and precursors of tornadic activity. Lightning in overshooting convective tops provides another indicator of strong convective surges and a valuable precursor of severity. The lightning observations show the locations of convective cores in storms and can be obtained in real time to monitor and track convective activity, much like meteorological radar. Mapping systems are able to passively detect and track aircraft flying through ice crystal clouds, as well as airborne or ground-based instruments or vehicles carrying active transmitters. Finally, the mapping techniques could readily be adapted to monitor noise and detect faults on power transmission lines.
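The time-of-arrival principle behind such maps reduces to a small nonlinear least-squares problem: each station i at position x_i records an arrival time t_i = t_0 + |x - x_i|/c, and the source position x and emission time t_0 are the unknowns. A minimal sketch with a synthetic five-station network follows (illustrative geometry only; operational arrays use many stations and nanosecond-level timing):

# Time-of-arrival source retrieval on synthetic data. The time unknown
# is carried as r0 = c * t0 (meters) so all parameters share one scale.
import numpy as np
from scipy.optimize import least_squares

C = 299792458.0  # speed of light, m/s
stations = np.array([[0, 0, 0], [20e3, 0, 0], [0, 20e3, 0],
                     [20e3, 20e3, 0], [10e3, 10e3, 100]])
source_true = np.array([7e3, 12e3, 6e3])
t0_true = 1.0e-3
t_obs = t0_true + np.linalg.norm(stations - source_true, axis=1) / C

def residuals(p):
    xyz, r0 = p[:3], p[3]
    return r0 + np.linalg.norm(stations - xyz, axis=1) - C * t_obs

fit = least_squares(residuals, x0=[10e3, 10e3, 5e3, 0.0])
print(fit.x[:3], fit.x[3] / C)  # recovers source position and t0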
Process to make structured particles
Knapp, Angela Michelle; Richard, Monique N; Luhrs, Claudia; Blada, Timothy; Phillips, Jonathan
2014-02-04
Disclosed is a process for making a composite material that contains structured particles. The process includes providing a first precursor in the form of a dry precursor powder, a precursor liquid, a precursor vapor of a liquid and/or a precursor gas. The process also includes providing a plasma that has a high field zone and passing the first precursor through the high field zone of the plasma. As the first precursor passes through the high field zone of the plasma, at least part of the first precursor is decomposed. An aerosol having a second precursor is provided downstream of the high field zone of the plasma, and the decomposed first precursor is allowed to condense onto the second precursor to form structured particles.
Williams, Sean; Hume, Patria A; Kara, Stephen
2011-11-01
Football codes (rugby union, soccer, American football) train and play matches on natural and artificial turfs. A review of injuries on different turfs was needed to inform practitioners and sporting bodies about turf-related injury mechanisms and risk factors. Therefore, the aim of this review was to compare the incidence, nature and mechanisms of injuries sustained on newer-generation artificial turfs and natural turfs. Electronic databases were searched using the keywords 'artificial turf', 'natural turf', 'grass' and 'inj*'. Delimiting the 120 articles sourced to those addressing injuries in football codes and those using third- and fourth-generation artificial turfs or natural turfs resulted in 11 experimental papers. These 11 papers provided 20 cohorts that could be assessed using magnitude-based inferences for incidence rate ratio calculations pertaining to differences between surfaces. Analysis showed that 16 of the 20 cohorts had trivial effects for overall incidence rate ratios between surfaces. There was an increased risk of ankle injury when playing on artificial turf in eight cohorts, with incidence rate ratios from 0.7 to 5.2. Evidence concerning the risk of knee injuries on the two surfaces was inconsistent, with incidence rate ratios from 0.4 to 2.8. Two cohorts showed beneficial inferences over the 90% likelihood value for effects of artificial surfaces on muscle injuries for soccer players; however, there were also two harmful, four unclear and five trivial inferences across the three football codes. Inferences relating to injury severity were inconsistent, with the exception that artificial turf was very likely to have harmful effects for minor injuries in rugby union training and severe injuries in young female soccer players. No clear differences between surfaces were evident in relation to training versus match injuries. Potential mechanisms for differing injury patterns on artificial turf compared with natural turf include increased peak torque and rotational stiffness properties of shoe-surface interfaces, decreased impact attenuation properties of surfaces, differing foot loading patterns and detrimental physiological responses. Changing between surfaces may also be a precursor of injury in soccer. In conclusion, studies have provided strong evidence for comparable rates of injury between new-generation artificial turfs and natural turfs. An exception is the likely increased risk of ankle injury on third- and fourth-generation artificial turfs; therefore, ankle injury prevention strategies must be a priority for athletes who play on artificial turf regularly. Clarification of the effects of artificial surfaces on muscle and knee injuries is required, given the inconsistency of incidence rate ratios across football code, athlete, gender and match versus training settings.
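For readers unfamiliar with the incidence rate ratio statistic that the review's comparisons rest on, a minimal sketch follows, using the standard log-normal confidence interval; the event and exposure numbers below are illustrative, not data from the review.

# Incidence rate ratio between two surfaces with a log-normal CI.
import math

def rate_ratio_ci(events_a, exposure_a, events_b, exposure_b, z=1.645):
    # z = 1.645 gives a 90% interval, matching magnitude-based inference use.
    irr = (events_a / exposure_a) / (events_b / exposure_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)
    return irr, irr * math.exp(-z * se_log), irr * math.exp(z * se_log)

# e.g., 52 ankle injuries per 1000 player-hours on artificial turf
# vs. 30 per 1000 player-hours on natural turf (made-up numbers):
print(rate_ratio_ci(52, 1000, 30, 1000))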
NASA Astrophysics Data System (ADS)
Soto-Pinto, C. A.; Arellano-Baeza, A. A.; Ouzounov, D. P.
2012-12-01
Among the variety of processes involved in seismic activity, the principal process is the accumulation and relaxation of stress in the crust, which takes place at depths of tens of kilometers. While the Earth's surface bears at most indirect signs of the accumulation and relaxation of crustal stress, it has long been understood that there is a strong correspondence between the structure of the underlying crust and the landscape. We assume that the structure of lineaments reflects the internal structure of the Earth's crust, and that variation in lineament number and arrangement reflects changes in the stress patterns related to seismic activity. Contrary to the existing assumption that lineament structure changes only on geological timescales, we have found that much faster seismic activity strongly affects the system of lineaments extracted from high-resolution multispectral satellite images. Previous studies have shown that the accumulation of stress in the crust prior to a strong earthquake is directly related to an increase in the number and preferential orientation of lineaments present in satellite images of epicenter zones. This effect increases with earthquake magnitude and can be observed from approximately one month before the event. To study this effect in detail we have developed software based on a series of algorithms for automatic lineament detection. We found that the Hough transform, applied after discontinuity-detection mechanisms such as the Canny edge detector or directional filters, is the most robust technique for detecting and characterizing changes in lineament patterns related to strong earthquakes, which can be used as a robust long-term earthquake precursor indicating regions of strong stress accumulation.
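A minimal sketch of that detection core, Canny edges followed by a probabilistic Hough transform (OpenCV), is given below; the file name and thresholds are illustrative, as the software described above tunes them per scene.

# Lineament detection sketch: Canny + probabilistic Hough (OpenCV).
import cv2
import numpy as np

img = cv2.imread("scene.tif", cv2.IMREAD_GRAYSCALE)  # hypothetical image
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=5)

# Orientation histogram of detected lineaments (degrees, 0-180); shifts
# in this distribution are the candidate precursor signal.
angles = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angles.append(np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180)
hist, _ = np.histogram(angles, bins=18, range=(0, 180))
print(hist)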
A novel assembly used for hot-shock consolidation
NASA Astrophysics Data System (ADS)
Chen, Pengwan; Zhou, Qiang; State Key Laboratory of Explosion Science and Technique Team
2013-06-01
A novel assembly characterized by an automatic set-up was developed for the hot-shock consolidation of powders. The under-water shock wave and high-temperature preheating, which are considered two effective ways to eliminate cracks, were combined in the system. In this work, an SHS reaction mixture was used as a chemical furnace to preheat the precursor powder, and the water column, together with the explosive attached to it, was detached from the furnace by a solenoid valve fixed on the slide guide. When the precursor powders were preheated to the designed temperature, the solenoid valve was switched on, and the water column and the explosive slid down along the slide guide under gravity. At the moment the water container contacted the lower part, the explosive was initiated, and the generated shock wave propagated through the water column to compact the powders. The explosive and water column could thus be kept cool during the preheating process. The intensity of the shock wave loading can be adjusted by changing the height of the water column, and the preheating temperature is controlled in the range of 700–1300 °C by changing the mass of the SHS mixture. In this work, pure tungsten powders and a tungsten-copper mixture were separately compacted using this new assembly. The pure tungsten powder, with a grain size of 2 μm, was compacted to high density (96% T.D.) at 1300 °C, and the 90W-10Cu (wt pct) mixture was compacted to nearly theoretical density at 1000 °C. Both samples were free of cracks. The consolidated specimens were then characterized by SEM analysis and micro-hardness testing.
Oyler, Benjamin L; Khan, Mohd M; Smith, Donald F; Harberts, Erin M; Kilgour, David P A; Ernst, Robert K; Cross, Alan S; Goodlett, David R
2018-06-01
Recent advances in lipopolysaccharide (LPS) biology have led to its use in drug discovery pipelines, including vaccine and vaccine adjuvant discovery. Desirable characteristics for LPS vaccine candidates include both the ability to produce a specific antibody titer in patients and a minimal host inflammatory response directed by the innate immune system. However, in-depth chemical characterization of most LPS extracts has not been performed; hence, the biological activities of these extracts are unpredictable. Additionally, the most widely adopted workflow for LPS structure elucidation includes nonspecific chemical decomposition steps before analysis, making structures inferred and not necessarily biologically relevant. In this work, several different mass spectrometry workflows that have not been previously explored were employed to show proof-of-principle for top-down LPS primary structure elucidation, specifically for a rough-type mutant (J5) E. coli-derived LPS component of a vaccine candidate. First, ion-mobility-filtered precursor ions were subjected to collision-induced dissociation (CID) to define differences between native J5 LPS and chemically detoxified J5 LPS (dLPS). Next, ultra-high mass resolving power, accurate mass spectrometry was employed for unequivocal precursor and product ion empirical formula generation. Finally, MS3 analyses in an ion trap instrument showed that previous knowledge about the dissociation of LPS components can be used to reconstruct and sequence LPS in a top-down fashion. A structural rationale is also given for the differential inflammatory dose-response curves observed in vitro when HEK-Blue hTLR4 cells were administered increasing concentrations of native J5 LPS versus dLPS, which will be useful in future drug discovery efforts.
A Molecular Clock Infers Heterogeneous Tissue Age Among Patients with Barrett’s Esophagus
Wong, Chao-Jen; Hazelton, William D.; Kaz, Andrew M.; Willis, Joseph E.; Grady, William M.; Luebeck, E. Georg
2016-01-01
Biomarkers that drift differentially with age between normal and premalignant tissues, such as Barrett’s esophagus (BE), have the potential to improve the assessment of a patient’s cancer risk by providing quantitative information about how long a patient has lived with the precursor (i.e., dwell time). In the case of BE, which is a metaplastic precursor to esophageal adenocarcinoma (EAC), such biomarkers would be particularly useful because EAC risk may change with BE dwell time and it is generally not known how long a patient has lived with BE when a patient is first diagnosed with this condition. In this study we first describe a statistical analysis of DNA methylation data (both cross-sectional and longitudinal) derived from tissue samples from 50 BE patients to identify and validate a set of 67 CpG dinucleotides in 51 CpG islands that undergo age-related methylomic drift. Next, we describe how this information can be used to estimate a patient’s BE dwell time. We introduce a Bayesian model that incorporates longitudinal methylomic drift rates, patient age, and methylation data from individually paired BE and normal squamous tissue samples to estimate patient-specific BE onset times. Our application of the model to 30 sporadic BE patients’ methylomic profiles first exposes a wide heterogeneity in patient-specific BE onset times. Furthermore, independent application of this method to a cohort of 22 familial BE (FBE) patients reveals significantly earlier mean BE onset times. Our analysis supports the conjecture that differential methylomic drift occurs in BE (relative to normal squamous tissue) and hence allows quantitative estimation of the time that a BE patient has lived with BE. PMID:27168458
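A skeletal version of the drift model that underlies such onset estimates (the paper's hierarchical Bayesian model is richer, pairing BE with normal squamous samples): if a CpG site drifts approximately linearly with tissue age at rate r, the methylation level of BE tissue sampled at patient age t satisfies

\[
m_{\mathrm{BE}}(t) \approx m_0 + r\,(t - t_{\mathrm{onset}}) + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2),
\]

so with drift rates learned from longitudinal data and a prior on t_onset, Bayes' rule yields a posterior over the onset time and hence over the dwell time t - t_onset.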
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerber, Simone; Burkhardt, Christoph; Budde, Gerrit
2017-05-20
Chondrules formed by the melting of dust aggregates in the solar protoplanetary disk, and as such provide unique insights into how solid material was transported and mixed within the disk. Here, we show that chondrules from enstatite and ordinary chondrites show only small 50Ti variations and scatter closely around the 50Ti composition of their host chondrites. By contrast, chondrules from carbonaceous chondrites have highly variable 50Ti compositions, which, relative to the terrestrial standard, range from the small 50Ti deficits measured for enstatite and ordinary chondrite chondrules to the large 50Ti excesses known from Ca–Al-rich inclusions (CAIs). These 50Ti variations can be attributed to the addition of isotopically heterogeneous CAI-like material to enstatite and ordinary chondrite-like chondrule precursors. The new Ti isotopic data demonstrate that isotopic variations among carbonaceous chondrite chondrules do not require formation over a wide range of orbital distances, but can instead be fully accounted for by the incorporation of isotopically anomalous "nuggets" into chondrule precursors. As such, these data obviate the need for disk-wide transport of chondrules prior to chondrite parent body accretion and are consistent with the formation of chondrules from a given chondrite group in localized regions of the disk. Finally, the ubiquitous presence of 50Ti-enriched material in carbonaceous chondrites, and the lack of this material in non-carbonaceous chondrites, support the idea that these two meteorite groups derive from areas of the disk that remained isolated from each other, probably through the formation of Jupiter.
Richardson, Keith; Denny, Richard; Hughes, Chris; Skilling, John; Sikora, Jacek; Dadlez, Michał; Manteca, Angel; Jung, Hye Ryung; Jensen, Ole Nørregaard; Redeker, Virginie; Melki, Ronald; Langridge, James I.; Vissers, Johannes P.C.
2013-01-01
A probability-based quantification framework is presented for the calculation of relative peptide and protein abundance in label-free and label-dependent LC-MS proteomics data. The results are accompanied by credible intervals and regulation probabilities. The algorithm takes into account data uncertainties via Poisson statistics modified by a noise contribution that is determined automatically during an initial normalization stage. Protein quantification relies on assignments of component peptides to the acquired data. These assignments are generally of variable reliability and may not be present across all of the experiments comprising an analysis. It is also possible for a peptide to be identified to more than one protein in a given mixture. For these reasons the algorithm accepts a prior probability of peptide assignment for each intensity measurement. The model is constructed in such a way that outliers of any type can be automatically reweighted. Two discrete normalization methods can be employed. The first method is based on a user-defined subset of peptides, while the second method relies on the presence of a dominant background of endogenous peptides for which the concentration is assumed to be unaffected. Normalization is performed using the same computational and statistical procedures employed by the main quantification algorithm. The performance of the algorithm will be illustrated on example data sets, and its utility demonstrated for typical proteomics applications. The quantification algorithm supports relative protein quantification based on precursor and product ion intensities acquired by means of data-dependent methods, originating from all common isotopically-labeled approaches, as well as label-free ion intensity-based data-independent methods. PMID:22871168
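To illustrate the flavor of Poisson-based quantification with credible intervals, a deliberately simplified sketch follows; the paper's model additionally includes an automatically determined noise contribution, peptide-assignment priors, and outlier reweighting, none of which are reproduced here.

# Simplified Poisson quantification: credible interval for a ratio.
import numpy as np

rng = np.random.default_rng(0)
counts_a, counts_b = 480, 400  # summed ion counts in two conditions (made up)

# With conjugate Gamma priors, Poisson rates have Gamma posteriors;
# sample them to get a credible interval for the relative abundance.
lam_a = rng.gamma(counts_a + 1, 1.0, size=100_000)
lam_b = rng.gamma(counts_b + 1, 1.0, size=100_000)
ratio = lam_a / lam_b
print(np.percentile(ratio, [2.5, 50.0, 97.5]))  # median ~1.2, 95% interval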
Assessment of the GECKO-A Modeling Tool and Simplified 3D Model Parameterizations for SOA Formation
NASA Astrophysics Data System (ADS)
Aumont, B.; Hodzic, A.; La, S.; Camredon, M.; Lannuque, V.; Lee-Taylor, J. M.; Madronich, S.
2014-12-01
Explicit chemical mechanisms aim to embody the current knowledge of the transformations occurring in the atmosphere during the oxidation of organic matter. These explicit mechanisms are therefore useful tools to explore the fate of organic matter during its tropospheric oxidation and to examine how these chemical processes shape the composition and properties of the gaseous and condensed phases. Furthermore, explicit mechanisms provide powerful benchmarks for designing and assessing the simplified parameterizations to be included in 3D models. Nevertheless, an explicit mechanism describing the oxidation of hydrocarbons with backbones larger than a few carbon atoms involves millions of secondary organic compounds, far exceeding the size of chemical mechanisms that can be written manually. Data processing tools can, however, be designed to overcome these difficulties and automatically generate consistent and comprehensive chemical mechanisms on a systematic basis. The Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A) has been developed for the automatic writing of explicit chemical schemes for organic species and their partitioning between the gas and condensed phases. GECKO-A can be viewed as an expert system that mimics the steps by which chemists might develop chemical schemes. GECKO-A generates chemical schemes according to a prescribed protocol, assigning reaction pathways and kinetic data on the basis of experimental data and structure-activity relationships. In its current version, GECKO-A can generate the full atmospheric oxidation scheme for most linear, branched and cyclic precursors, including alkanes and alkenes up to C25. Assessments of the GECKO-A modeling tool based on chamber SOA observations will be presented. GECKO-A was recently used to design a parameterization for SOA formation based on a Volatility Basis Set (VBS) approach; first results will be presented.
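A toy illustration of the generation loop is sketched below: reaction rules are applied repeatedly to every newly produced species until closure or a species cap. Real GECKO-A rules encode structure-activity relationships and kinetics; the string "chemistry" here is purely schematic.

# Toy rule-based mechanism generator (schematic species and rules).
def generate_mechanism(precursors, rules, max_species=50):
    species, reactions, frontier = set(precursors), [], list(precursors)
    while frontier and len(species) < max_species:
        s = frontier.pop()
        for rule in rules:
            for product in rule(s):
                reactions.append((s, product))
                if product not in species:
                    species.add(product)
                    frontier.append(product)
    return species, reactions

# Schematic rules: OH abstraction appends a peroxy group; peroxy
# radicals decompose to a carbonyl.
rules = [
    lambda s: [s + "-OO"] if not s.endswith("-OO") else [],
    lambda s: [s[:-3] + "=O"] if s.endswith("-OO") else [],
]
species, reactions = generate_mechanism(["C5H12"], rules)
print(len(species), "species,", len(reactions), "reactions")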
Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V; Robles, Montserrat; Aparici, F; Martí-Bonmatí, L; García-Gómez, Juan M
2015-01-01
Automatic brain tumour segmentation has become a key component of the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. Unsupervised approaches, on the other hand, avoid these limitations but often do not reach results comparable to those of supervised methods. Accordingly, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. As non-structured algorithms we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as a structured classification algorithm we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach, supported by tissue probability maps, is proposed to automatically identify the tumour classes after segmentation. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM improves on the results obtained by most of the supervised methods evaluated on the Leaderboard set and reaches second position in the ranking. Our variant based on the GHMRF achieves first position in the Test ranking of the unsupervised approaches and seventh position in the general Test ranking, confirming the method as a viable alternative for brain tumour segmentation.
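A minimal sketch of the unsupervised clustering step (a GMM over voxel intensities, via scikit-learn) follows; the pipeline above additionally performs MR preprocessing and a statistical postprocess with tissue probability maps to label the tumour classes, which this sketch omits.

# Unsupervised voxel clustering with a Gaussian Mixture Model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Stand-in for multi-sequence MR intensities: N voxels x 4 channels
# (e.g., T1, T1c, T2, FLAIR), drawn here from three synthetic tissues.
voxels = np.vstack([
    rng.normal(loc=m, scale=0.1, size=(500, 4))
    for m in ([0.2, 0.2, 0.3, 0.3], [0.5, 0.6, 0.4, 0.5], [0.8, 0.9, 0.9, 0.9])
])

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(voxels)
print(np.bincount(labels))  # ~500 voxels recovered per class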
Sridharan, Ramesh; Vul, Edward; Hsieh, Po-Jang; Kanwisher, Nancy; Golland, Polina
2012-01-01
Functional MRI studies have uncovered a number of brain areas that demonstrate highly specific functional patterns. In the case of visual object recognition, small, focal regions have been characterized with selectivity for visual categories such as human faces. In this paper, we develop an algorithm that automatically learns patterns of functional specificity from fMRI data in a group of subjects. The method does not require spatial alignment of functional images from different subjects. The algorithm is based on a generative model that comprises two main layers. At the lower level, we express the functional brain response to each stimulus as a binary activation variable. At the next level, we define a prior over sets of activation variables in all subjects. We use a Hierarchical Dirichlet Process as the prior in order to learn the patterns of functional specificity shared across the group, which we call functional systems, and to estimate the number of these systems. Inference based on our model enables automatic discovery and characterization of dominant and consistent functional systems. We apply the method to data from a visual fMRI study comprising 69 distinct stimulus images. The discovered system activation profiles correspond to selectivity for a number of image categories such as faces, bodies, and scenes. Among the systems found by our method, we identify new areas that are deactivated by face stimuli. In empirical comparisons with previously proposed exploratory methods, our results appear superior in capturing the structure in the space of visual categories of stimuli. PMID:21884803
Peng, Jinye; Babaguchi, Noboru; Luo, Hangzai; Gao, Yuli; Fan, Jianping
2010-07-01
Digital video now plays an important role in supporting more profitable online patient training and counseling, and the integration of patient training videos from multiple competitive organizations in the health care network will result in better offerings for patients. However, privacy concerns often prevent multiple competitive organizations from sharing and integrating their patient training videos. In addition, patients with infectious or chronic diseases may not want the online patient training organizations to identify who they are or even which video clips they are interested in. Thus, there is an urgent need to develop more effective techniques to protect both video content privacy and access privacy. In this paper, we have developed a new approach to constructing a distributed Hippocratic video database system for supporting more profitable online patient training and counseling. First, a new database modeling approach is developed to support concept-oriented video database organization and to automatically assign a degree of privacy to the video content at each database level. Second, a new algorithm is developed to protect video content privacy at the level of individual video clips by automatically filtering out privacy-sensitive human objects. To integrate the patient training videos from multiple competitive organizations into a centralized video database indexing structure, a privacy-preserving video sharing scheme is developed to support privacy-preserving distributed classifier training and to prevent statistical inferences from the videos that are shared for cross-validation of video classifiers. Our experiments on large-scale video databases have provided very convincing results.
NASA Astrophysics Data System (ADS)
Gaudin, Damien; Moroni, Monica; Taddeucci, Jacopo; Scarlato, Piergiorgio; Shindler, Luca
2014-07-01
Image-based techniques enable high-resolution observation of the pyroclasts ejected during Strombolian explosions and allow inferences to be drawn on the dynamics of volcanic activity. However, data extraction from high-resolution videos is time consuming and operator dependent, while automatic analysis is often challenging due to the highly variable quality of images collected in the field. Here we present a new set of algorithms to automatically analyze image sequences of explosive eruptions: the pyroclast tracking velocimetry (PyTV) toolbox. First, significant preprocessing is used to remove the image background and to detect the pyroclasts. Then, pyroclast tracking is achieved with a new particle tracking velocimetry algorithm, featuring an original velocity predictor based on the optical flow equation. Finally, postprocessing corrects the systematic errors of the measurements. Four high-speed videos of Strombolian explosions from Yasur and Stromboli volcanoes, representing various observation conditions, were used to test the efficiency of PyTV against manual analysis. In all cases, >10^6 pyroclasts were successfully detected and tracked by PyTV, with a precision of 1 m/s for velocity and 20% for pyroclast size. In each video, more than 1000 tracks are several meters long, enabling us to study pyroclast properties and trajectories. Compared to manual tracking, 3 to 100 times more pyroclasts are analyzed. PyTV, by providing time-constrained information, links the physical properties and motion of individual pyroclasts. It is a powerful tool for the study of explosive volcanic activity, as well as an ideal complement to other geological and geophysical volcano observation systems.
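A sketch of predictive particle tracking in the spirit described above is given below; this is not the PyTV implementation. Each track predicts its next position with a constant-velocity model (a crude stand-in for PyTV's optical-flow-based predictor) and is matched to the nearest detection.

# Predictive nearest-neighbor particle tracker (illustrative only).
import numpy as np

def track(frames_detections, max_dist=15.0):
    tracks = [[p] for p in frames_detections[0]]
    for detections in frames_detections[1:]:
        detections = list(detections)
        for tr in tracks:
            # Constant-velocity prediction; falls back to last position.
            pred = (2 * np.asarray(tr[-1]) - np.asarray(tr[-2])
                    if len(tr) > 1 else np.asarray(tr[-1]))
            if not detections:
                continue
            d = [np.linalg.norm(pred - np.asarray(p)) for p in detections]
            j = int(np.argmin(d))
            if d[j] < max_dist:
                tr.append(detections.pop(j))
        tracks += [[p] for p in detections]  # unmatched detections start tracks
    return tracks

frames = [[(10, 10)], [(14, 12)], [(18, 14)], [(22, 16)]]
print(track(frames))  # one track following the moving pyroclast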
dipIQ: Blind Image Quality Assessment by Learning-to-Rank Discriminable Image Pairs.
Ma, Kede; Liu, Wentao; Liu, Tongliang; Wang, Zhou; Tao, Dacheng
2017-05-26
Objective assessment of image quality is fundamentally important in many image processing tasks. In this work, we focus on learning blind image quality assessment (BIQA) models, which predict the quality of a digital image with no access to its original pristine-quality counterpart as reference. One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (whose dimension is the number of image pixels) and the extremely limited reliable ground truth data for training. Such data are typically collected via subjective testing, which is cumbersome, slow, and expensive. Here we first show that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIPs) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content. We then learn an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model using RankNet, a pairwise learning-to-rank (L2R) algorithm, from millions of DIPs, each associated with a perceptual uncertainty level, leading to a DIP inferred quality (dipIQ) index. Extensive experiments on four benchmark IQA databases demonstrate that dipIQ outperforms state-of-the-art OU-BIQA models. The robustness of dipIQ is also significantly improved, as confirmed by the group MAximum Differentiation (gMAD) competition method. Furthermore, we extend the proposed framework by learning models with ListNet (a listwise L2R algorithm) on quality-discriminable image lists (DILs). The resulting DIL inferred quality (dilIQ) index achieves an additional performance gain.
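The core of the pairwise learning-to-rank idea is small enough to sketch; the snippet below applies a RankNet-style update to a linear scorer on synthetic features (the paper trains a neural network on millions of DIPs with per-pair uncertainty weights, which this sketch omits).

# Minimal RankNet-style pairwise update on a linear scorer.
import numpy as np

def ranknet_step(w, x_better, x_worse, lr=0.1):
    # P(better > worse) = sigmoid(s_b - s_w); minimize -log P.
    s_diff = w @ x_better - w @ x_worse
    p = 1.0 / (1.0 + np.exp(-s_diff))
    grad = -(1.0 - p) * (x_better - x_worse)  # gradient of -log p w.r.t. w
    return w - lr * grad

rng = np.random.default_rng(0)
w = np.zeros(5)
x_hi = rng.normal(1.0, 0.3, size=(200, 5))   # higher-quality image features
x_lo = rng.normal(-1.0, 0.3, size=(200, 5))  # lower-quality image features
for a, b in zip(x_hi, x_lo):
    w = ranknet_step(w, a, b)
print(w @ x_hi.mean(0) > w @ x_lo.mean(0))  # True: ordering is learned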
Folk-Economic Beliefs: An Evolutionary Cognitive Model.
Boyer, Pascal; Petersen, Michael Bang
2017-10-12
The domain of "folk-economics" consists of explicit beliefs about the economy held by laypeople untrained in economics, on such topics as the causes of the wealth of nations, the benefits or drawbacks of markets and international trade, the effects of regulation, the origins of inequality, the connection between work and wages, the economic consequences of immigration, or the possible causes of unemployment. These beliefs are crucial in forming people's political views and in shaping their reception of different policies. Yet they often conflict with elementary principles of economic theory and are often described as the consequence of ignorance, irrationality or specific biases. As we will argue, these past perspectives fail to predict the particular contents of popular folk-economic beliefs, and, as a result, there has been no systematic study of the cognitive factors involved in their emergence and cultural success. Here we propose that the cultural success of particular beliefs about the economy is predictable if we consider the influence of specialized, largely automatic inference systems that evolved as adaptations to ancestral human small-scale sociality. These systems, for which there is independent evidence, include free-rider detection, fairness-based partner choice, ownership intuitions, coalitional psychology, and more. Information about modern mass-market conditions activates these specific inference systems, resulting in particular intuitions, e.g., that impersonal transactions are dangerous or that international trade is a zero-sum game. These intuitions in turn make specific policy proposals more likely than others to become intuitively compelling, and as a consequence exert a crucial influence on political choices.
NASA Astrophysics Data System (ADS)
Krumholz, Mark R.; Fumagalli, Michele; da Silva, Robert L.; Rendahl, Theodore; Parra, Jonathan
2015-09-01
Stellar population synthesis techniques for predicting the observable light emitted by a stellar population have extensive applications in numerous areas of astronomy. However, accurate predictions for small populations of young stars, such as those found in individual star clusters, star-forming dwarf galaxies, and small segments of spiral galaxies, require that the population be treated stochastically. Conversely, accurate deductions of the properties of such objects also require consideration of stochasticity. Here we describe a comprehensive suite of modular, open-source software tools for tackling these related problems. These include the following: a greatly enhanced version of the SLUG code introduced by da Silva et al., which computes spectra and photometry for stochastically or deterministically sampled stellar populations with nearly arbitrary star formation histories, clustering properties, and initial mass functions; CLOUDY_SLUG, a tool that automatically couples SLUG-computed spectra with the CLOUDY radiative transfer code in order to predict stochastic nebular emission; BAYESPHOT, a general-purpose tool for performing Bayesian inference on the physical properties of stellar systems based on unresolved photometry; and CLUSTER_SLUG and SFR_SLUG, a pair of tools that use BAYESPHOT on a library of SLUG models to compute the mass, age, and extinction of mono-age star clusters, and the star formation rate of galaxies, respectively. The latter two tools make use of an extensive library of pre-computed stellar population models, which is included in the software. The complete package is available at http://www.slugsps.com.
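The stochastic sampling at the heart of such codes can be sketched in a few lines: draw stars from a power-law IMF until a target cluster mass is reached. The Salpeter slope, mass limits, and the simple stop-after-overshoot rule below are assumptions for illustration; SLUG implements several IMFs and sampling policies.

# Stochastic IMF sampling sketch (inverse-CDF draw from a power law).
import numpy as np

def sample_cluster(m_target, alpha=2.35, m_min=0.08, m_max=120.0, seed=0):
    rng = np.random.default_rng(seed)
    a = 1.0 - alpha  # exponent for the power-law CDF inversion
    stars, total = [], 0.0
    while total < m_target:
        u = rng.uniform()
        m = (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)
        stars.append(m)
        total += m
    return np.array(stars)

stars = sample_cluster(500.0)  # a ~500 Msun cluster
print(len(stars), stars.max())  # small clusters rarely draw very massive stars

Repeating the draw with different seeds shows the run-to-run scatter in luminosity-dominating massive stars, which is precisely why small populations must be treated stochastically.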