An information-theoretical perspective on weighted ensemble forecasts
NASA Astrophysics Data System (ADS)
Weijs, Steven V.; van de Giesen, Nick
2013-08-01
This paper presents an information-theoretical method for weighting ensemble forecasts with new information. Weighted ensemble forecasts can be used to adjust the distribution that an existing ensemble of time series represents, without modifying the values in the ensemble itself. The weighting can, for example, add new seasonal forecast information into an existing ensemble of historically measured time series that represents climatic uncertainty. A recent article in this journal compared several methods to determine the weights for the ensemble members and introduced the pdf-ratio method. In this article, a new method, the minimum relative entropy update (MRE-update), is presented. Based on the principle of minimum discrimination information, an extension of the principle of maximum entropy (POME), the method ensures that no more information is added to the ensemble than is present in the forecast. This is achieved by minimizing relative entropy, with the forecast information imposed as constraints. From this same perspective, an information-theoretical view on the various weighting methods is presented. The MRE-update is compared with the existing methods and the parallels with the pdf-ratio method are analysed. The paper provides a new, information-theoretical justification for one version of the pdf-ratio method that turns out to be equivalent to the MRE-update. All other methods result in sets of ensemble weights that, seen from the information-theoretical perspective, add either too little or too much (i.e. fictitious) information to the ensemble.
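To make the minimum-relative-entropy idea concrete, here is a minimal Python sketch of an MRE-style reweighting under a single mean constraint. It illustrates the principle rather than the paper's exact algorithm: the names are ours, and the paper's forecast information may enter as richer constraints than one mean.

    import numpy as np
    from scipy.optimize import brentq

    def mre_update(members, forecast_mean, prior=None):
        """Minimum-relative-entropy reweighting under one mean constraint:
        minimize sum(w * ln(w / q)) subject to sum(w) = 1 and
        sum(w * x) = forecast_mean. The minimizer is an exponential
        tilting of the prior weights, w_i ~ q_i * exp(lam * x_i)."""
        x = np.asarray(members, dtype=float)
        q = np.full(x.size, 1.0 / x.size) if prior is None else np.asarray(prior, dtype=float)
        z = (x - x.mean()) / x.std()      # standardized copy, for numerical safety

        def weights(lam):
            logw = np.log(q) + lam * z
            logw -= logw.max()            # softmax trick to avoid overflow
            w = np.exp(logw)
            return w / w.sum()

        # One-dimensional root search for the Lagrange multiplier;
        # feasibility requires min(x) < forecast_mean < max(x).
        lam = brentq(lambda l: weights(l) @ x - forecast_mean, -50.0, 50.0)
        return weights(lam)

    # e.g. tilt a climatological ensemble towards a wetter seasonal forecast:
    # w = mre_update(historical_totals, forecast_mean=1.2 * historical_totals.mean())

Because only expectation constraints are imposed, the reweighted ensemble stays as close as possible to the climatological distribution in the Kullback-Leibler sense, which is exactly the "no fictitious information" property the abstract describes.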
ERIC Educational Resources Information Center
Pettersson, Rune
2014-01-01
Information design has practical and theoretical components. As an academic discipline, information design may be viewed as a combined discipline, a practical theory, or a theoretical practice. So far information design has incorporated facts, influences, methods, practices, principles, processes, strategies, and tools from a large number of…
Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.
2015-01-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
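For intuition, the "single-spike information" of one candidate stimulus dimension can be estimated with plain histograms. The sketch below is illustrative (our names and binning choices, not the authors' implementation):

    import numpy as np

    def single_spike_info(projections, spike_counts, n_bins=25):
        """Histogram estimate of I = sum_v P(v|spike) log2(P(v|spike) / P(v)),
        where v indexes bins of the stimulus projected onto one candidate
        dimension. projections: stimulus projections per time bin;
        spike_counts: spikes observed in each time bin."""
        edges = np.histogram_bin_edges(projections, bins=n_bins)
        p_raw, _ = np.histogram(projections, bins=edges)
        p_spk, _ = np.histogram(projections, bins=edges, weights=spike_counts)
        p_raw = p_raw / p_raw.sum()
        p_spk = p_spk / p_spk.sum()
        ok = (p_spk > 0) & (p_raw > 0)
        return float(np.sum(p_spk[ok] * np.log2(p_spk[ok] / p_raw[ok])))

    # MID maximizes this quantity over the filter k defining the projection
    # (projections = stimuli @ k); the paper's result says that doing so is
    # maximum-likelihood estimation of an LNP model in disguise.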
Researching Society and Culture.
ERIC Educational Resources Information Center
Seale, Clive, Ed.
This book provides theoretically informed guidance to practicing the key research methods for investigating society and culture. It is a text in both methods and methodology, in which the importance of understanding the historical, theoretical and institutional context in which particular methods have developed is stressed. The contributors of the…
Adaptive selection and validation of models of complex systems in the presence of uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell-Maupin, Kathryn; Oden, J. T.
2017-08-01
This study describes versions of OPAL, the Occam-Plausibility Algorithm, in which the use of Bayesian model plausibilities is replaced with information-theoretic methods such as the Akaike Information Criterion and the Bayes Information Criterion. Applications to complex systems of coarse-grained molecular models approximating atomistic models of polyethylene materials are described. All of these model selection methods take into account uncertainties in the model, the observational data, the model parameters, and the predicted quantities of interest. A comparison of the models chosen by Bayesian model selection criteria and those chosen by the information-theoretic criteria is given.
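For reference, the two information-theoretic criteria named above have simple closed forms; the sketch below shows only these formulas, not the OPAL machinery of iterative selection, calibration, and validation.

    import numpy as np

    def aic(log_likelihood, k):
        """Akaike Information Criterion; k = number of free parameters."""
        return 2.0 * k - 2.0 * log_likelihood

    def bic(log_likelihood, k, n):
        """Bayes (Schwarz) Information Criterion; n = number of observations."""
        return k * np.log(n) - 2.0 * log_likelihood

    # Among candidate coarse-grained models, the one with the smallest
    # criterion value is retained; BIC penalizes parameters more heavily
    # as the number of observations grows.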
Meta-Synthesis of Research on Information Seeking Behaviour
ERIC Educational Resources Information Center
Urquhart, Christine
2011-01-01
Introduction: Meta-synthesis methods may help to make more sense of information behaviour research evidence. Aims and objectives: The objectives are to: 1) identify and examine the theoretical research strategies commonly used in information behaviour research; 2) discuss meta-synthesis methods that might be appropriate to the type of research…
An Information-Theoretic-Cluster Visualization for Self-Organizing Maps.
Brito da Silva, Leonardo Enzo; Wunsch, Donald C
2018-06-01
Improved data visualization is a significant tool for enhancing cluster analysis. In this paper, an information-theoretic-based method for cluster visualization using self-organizing maps (SOMs) is presented. The information-theoretic visualization (IT-vis) has the same structure as the unified distance matrix, but instead of depicting Euclidean distances between adjacent neurons, it displays the similarity between the distributions associated with adjacent neurons. Each SOM neuron has an associated subset of the data set whose cardinality controls the granularity of the IT-vis and with which the first- and second-order statistics are computed and used to estimate their probability density functions. These are used to calculate the similarity measure, based on Rényi's quadratic cross entropy and cross information potential (CIP). The introduced visualizations combine the low computational cost and kernel estimation properties of the representative CIP and the data structure representation of a single-linkage-based grouping algorithm to generate an enhanced SOM-based visualization. The visual quality of the IT-vis is assessed by comparing it with other visualization methods for several real-world and synthetic benchmark data sets. Thus, this paper also contains a significant literature survey. The experiments demonstrate the IT-vis cluster revealing capabilities, in which cluster boundaries are sharply captured. Additionally, the information-theoretic visualizations are used to perform clustering of the SOM. Compared with other methods, IT-vis of large SOMs yielded the best results in this paper, for which the quality of the final partitions was evaluated using external validity indices.
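The similarity measure at the heart of IT-vis can be illustrated with a generic Parzen-window estimate of the cross information potential between the data subsets of two adjacent neurons. This is a sketch under a fixed-bandwidth Gaussian-kernel assumption; the paper estimates the densities from each neuron's first- and second-order statistics.

    import numpy as np

    def cross_information_potential(X, Y, sigma=1.0):
        """CIP(X, Y) = (1 / (N * M)) * sum_i sum_j G(x_i - y_j; 2 * sigma^2),
        i.e. the mean Gaussian kernel over all cross pairs; Renyi's quadratic
        cross entropy is -log(CIP). X: (n, d), Y: (m, d) arrays."""
        sq = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
        d = X.shape[1]
        norm = (4.0 * np.pi * sigma ** 2) ** (d / 2.0)
        return float(np.mean(np.exp(-sq / (4.0 * sigma ** 2))) / norm)

    # Larger CIP (smaller cross entropy) means the two neighboring neurons
    # model similar distributions, i.e. no cluster boundary between them.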
Suggestions for presenting the results of data analyses
Anderson, David R.; Link, William A.; Johnson, Douglas H.; Burnham, Kenneth P.
2001-01-01
We give suggestions for the presentation of research results from frequentist, information-theoretic, and Bayesian analysis paradigms, followed by several general suggestions. The information-theoretic and Bayesian methods offer alternative approaches to data analysis and inference compared to traditionally used methods. Guidance is lacking on the presentation of results under these alternative procedures and on nontesting aspects of classical frequentist methods of statistical analysis. Null hypothesis testing has come under intense criticism. We recommend less reporting of the results of statistical tests of null hypotheses in cases where the null is surely false anyway, or where the null hypothesis is of little interest to science or management.
Connectionist Interaction Information Retrieval.
ERIC Educational Resources Information Center
Dominich, Sandor
2003-01-01
Discussion of connectionist views for adaptive clustering in information retrieval focuses on a connectionist clustering technique and activation spreading-based information retrieval model using the interaction information retrieval method. Presents theoretical as well as simulation results as regards computational complexity and includes…
Information-theoretical noninvasive damage detection in bridge structures
NASA Astrophysics Data System (ADS)
Sudu Ambegedara, Amila; Sun, Jie; Janoyan, Kerop; Bollt, Erik
2016-11-01
Damage detection of mechanical structures such as bridges is an important research problem in civil engineering. Using spatially distributed sensor time series data collected from a recent experiment on a local bridge in Upstate New York, we study noninvasive damage detection using information-theoretical methods. Several findings emerged. First, the time series data, which represent accelerations measured at the sensors, more closely follow a Laplace distribution than a normal distribution, allowing us to develop parameter estimators for various information-theoretic measures such as entropy and mutual information. Second, as damage is introduced by the removal of bolts of the first diaphragm connection, the interaction between spatially nearby sensors as measured by mutual information becomes weaker, suggesting that the bridge is "loosened." Finally, using a proposed optimal mutual information interaction procedure to prune away indirect interactions, we found that the primary direction of interaction or influence aligns with the traffic direction on the bridge even after damaging the bridge.
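A minimal sketch of the parametric step described above, assuming Laplace-distributed accelerations (illustrative only; the paper develops estimators for entropy and mutual information on top of such fits):

    import numpy as np

    def laplace_entropy(x):
        """Differential entropy (nats) of a Laplace fit to a sensor series x.
        MLE: location = median(x), scale b = mean(|x - median|);
        closed form: h = 1 + ln(2 * b)."""
        b = np.mean(np.abs(x - np.median(x)))
        return 1.0 + np.log(2.0 * b)

Because the Laplace maximum-likelihood fit uses the median and the mean absolute deviation, it is far less sensitive to the heavy tails of acceleration records than a Gaussian fit would be.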
Lizier, Joseph T; Heinzle, Jakob; Horstmann, Annette; Haynes, John-Dylan; Prokopenko, Mikhail
2011-02-01
The human brain undertakes highly sophisticated information processing facilitated by the interaction between its sub-regions. We present a novel method for interregional connectivity analysis, using multivariate extensions to the mutual information and transfer entropy. The method allows us to identify the underlying directed information structure between brain regions, and how that structure changes according to behavioral conditions. This method is distinguished by its use of asymmetric, multivariate, information-theoretical analysis, which captures not only directional and non-linear relationships, but also collective interactions. Importantly, the method is able to estimate multivariate information measures with only relatively little data. We demonstrate the method by analyzing functional magnetic resonance imaging time series to establish the directed information structure between brain regions involved in a visuo-motor tracking task. Importantly, this results in a tiered structure, with known movement planning regions driving visual and motor control regions. Also, we examine the changes in this structure as the difficulty of the tracking task is increased. We find that task difficulty modulates the coupling strength between regions of a cortical network involved in movement planning and between motor cortex and the cerebellum, which is involved in the fine-tuning of motor control. It is likely these methods will find utility in identifying interregional structure (and experimentally induced changes in this structure) in other cognitive tasks and data modalities.
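As a toy illustration of the directed measure involved, here is a plug-in transfer entropy estimator for a pair of scalar series. History length 1 and quantile binning are simplifying assumptions; the paper's contribution is multivariate extensions that remain estimable from relatively little data.

    import itertools
    import numpy as np

    def transfer_entropy(source, target, n_bins=4):
        """TE(source -> target) in nats, history length 1:
        TE = sum p(y', y, x) * log[ p(y', y, x) * p(y) / (p(y, x) * p(y', y)) ]."""
        def discretize(v):
            cuts = np.quantile(v, np.linspace(0, 1, n_bins + 1)[1:-1])
            return np.searchsorted(cuts, v)
        x = discretize(np.asarray(source, float))[:-1]
        y_all = discretize(np.asarray(target, float))
        y, y_next = y_all[:-1], y_all[1:]
        te = 0.0
        for a, b, c in itertools.product(range(n_bins), repeat=3):
            p_abc = np.mean((y_next == a) & (y == b) & (x == c))
            if p_abc == 0.0:
                continue
            p_b = np.mean(y == b)
            p_bc = np.mean((y == b) & (x == c))
            p_ab = np.mean((y_next == a) & (y == b))
            te += p_abc * np.log(p_abc * p_b / (p_bc * p_ab))
        return te

    # Asymmetry is the point: TE(x -> y) != TE(y -> x) in general, which is
    # what allows a direction to be assigned to interregional influence.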
The Philosophy of Information as an Underlying and Unifying Theory of Information Science
ERIC Educational Resources Information Center
Tomic, Taeda
2010-01-01
Introduction: Philosophical analyses of theoretical principles underlying these sub-domains reveal philosophy of information as underlying meta-theory of information science. Method: Conceptual research on the knowledge sub-domains in information science and philosophy and analysis of their mutual connection. Analysis: Similarities between…
Towards a Definition of Serendipity in Information Behaviour
ERIC Educational Resources Information Center
Agarwal, Naresh Kumar
2015-01-01
Introduction: Serendipitous or accidental discovery of information has often been neglected in information behaviour models, which tend to focus on information seeking, a more goal-directed behaviour. Method: This theoretical paper seeks to map the conceptual space of serendipity in information behaviour and to arrive at a definition. This is done…
MIDER: network inference with mutual information distance and entropy reduction.
Villaverde, Alejandro F; Ross, John; Morán, Federico; Banga, Julio R
2014-01-01
The prediction of links among variables from a given dataset is a task referred to as network inference or reverse engineering. It is an open problem in bioinformatics and systems biology, as well as in other areas of science. Information theory, which uses concepts such as mutual information, provides a rigorous framework for addressing it. While a number of information-theoretic methods are already available, most of them focus on a particular type of problem, introducing assumptions that limit their generality. Furthermore, many of these methods lack a publicly available implementation. Here we present MIDER, a method for inferring network structures with information-theoretic concepts. It consists of two steps: first, it provides a representation of the network in which the distance among nodes indicates their statistical closeness. Second, it refines the prediction of the existing links to distinguish between direct and indirect interactions and to assign directionality. The method accepts as input time-series data related to some quantitative features of the network nodes (e.g. concentrations, if the nodes are chemical species). It takes into account time delays between variables, and allows choosing among several definitions and normalizations of mutual information. It is general purpose: it may be applied to any type of network, cellular or otherwise. A Matlab implementation including source code and data is freely available (http://www.iim.csic.es/~gingproc/mider.html). The performance of MIDER has been evaluated on seven different benchmark problems that cover the main types of cellular networks, including metabolic, gene regulatory, and signaling. Comparisons with state-of-the-art information-theoretic methods have demonstrated the competitive performance of MIDER, as well as its versatility. Its use does not demand any a priori knowledge from the user; the default settings and the adaptive nature of the method provide good results for a wide range of problems without requiring tuning.
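MIDER's first step, placing nodes by statistical closeness, can be sketched with an entropy-based distance derived from pairwise mutual information. This is a deliberately simplified sketch: no time delays, a single MI definition, and fixed binning, all of which the actual tool lets the user vary.

    import numpy as np

    def mutual_info(x, y, n_bins=10):
        """Histogram MI in nats: I = sum p(i, j) * log[p(i, j) / (p(i) * p(j))]."""
        pxy, _, _ = np.histogram2d(x, y, bins=n_bins)
        pxy /= pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        ok = pxy > 0
        return float(np.sum(pxy[ok] * np.log((pxy / (px * py))[ok])))

    def mi_distance_matrix(series_list, n_bins=10):
        """d(i, j) = H(i) + H(j) - 2 * I(i, j), a metric usable for embedding
        nodes so that distance reflects statistical closeness."""
        k = len(series_list)
        h = [mutual_info(s, s, n_bins) for s in series_list]   # H(X) = I(X, X)
        d = np.zeros((k, k))
        for i in range(k):
            for j in range(i + 1, k):
                mi = mutual_info(series_list[i], series_list[j], n_bins)
                d[i, j] = d[j, i] = h[i] + h[j] - 2.0 * mi
        return d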
Method and apparatus for sensor fusion
NASA Technical Reports Server (NTRS)
Krishen, Kumar (Inventor); Shaw, Scott (Inventor); Defigueiredo, Rui J. P. (Inventor)
1991-01-01
Method and apparatus for fusion of data from optical and radar sensors by error minimization procedure is presented. The method was applied to the problem of shape reconstruction of an unknown surface at a distance. The method involves deriving an incomplete surface model from an optical sensor. The unknown characteristics of the surface are represented by some parameter. The correct value of the parameter is computed by iteratively generating theoretical predictions of the radar cross sections (RCS) of the surface, comparing the predicted and the observed values for the RCS, and improving the surface model from results of the comparison. Theoretical RCS may be computed from the surface model in several ways. One RCS prediction technique is the method of moments. The method of moments can be applied to an unknown surface only if some shape information is available from an independent source. The optical image provides the independent information.
A parametric method for determining the number of signals in narrow-band direction finding
NASA Astrophysics Data System (ADS)
Wu, Qiang; Fuhrmann, Daniel R.
1991-08-01
A novel and more accurate method to determine the number of signals in the multisource direction finding problem is developed. The information-theoretic criteria of Yin and Krishnaiah (1988) are applied to a set of quantities which are evaluated from the log-likelihood function. Based on proven asymptotic properties of the maximum likelihood estimation, these quantities have the properties required by the criteria. Since the information-theoretic criteria use these quantities instead of the eigenvalues of the estimated correlation matrix, this approach possesses the advantage of not requiring a subjective threshold, and also provides higher performance than when eigenvalues are used. Simulation results are presented and compared to those obtained from the nonparametric method given by Wax and Kailath (1985).
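For context, the nonparametric eigenvalue-based baseline of Wax and Kailath that the parametric method is compared against can be written in a few lines (standard textbook form, not the authors' new criterion):

    import numpy as np

    def mdl_num_signals(eigvals, n_snapshots):
        """Wax-Kailath MDL detector. eigvals: the p positive eigenvalues of
        the sample covariance matrix; returns the estimated signal count."""
        lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]
        p = lam.size
        mdl = np.empty(p)
        for k in range(p):
            tail = lam[k:]          # the smallest p - k eigenvalues
            m = p - k
            ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)  # geometric / arithmetic mean
            mdl[k] = -n_snapshots * m * np.log(ratio) + 0.5 * k * (2 * p - k) * np.log(n_snapshots)
        return int(np.argmin(mdl))

The paper's parametric approach feeds log-likelihood-based quantities, rather than these eigenvalues, into the same style of criterion, which is where the reported performance gain comes from.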
ERIC Educational Resources Information Center
Mursu, Anja; Luukkonen, Irmeli; Toivanen, Marika; Korpela, Mikko
2007-01-01
Introduction: The purpose of information systems is to facilitate work activities: here we consider how Activity Theory can be applied in information systems development. Method. The requirements for an analytical model for emancipatory, work-oriented information systems research and practice are specified. Previous research work in Activity…
Reverse Engineering Cellular Networks with Information Theoretic Methods
Villaverde, Alejandro F.; Ross, John; Banga, Julio R.
2013-01-01
Building mathematical models of cellular networks lies at the core of systems biology. It involves, among other tasks, the reconstruction of the structure of interactions between molecular components, which is known as network inference or reverse engineering. Information theory can help in the goal of extracting as much information as possible from the available data. A large number of methods founded on these concepts have been proposed in the literature, not only in biology journals, but in a wide range of areas. Their critical comparison is difficult due to the different focuses and the adoption of different terminologies. Here we attempt to review some of the existing information theoretic methodologies for network inference, and clarify their differences. While some of these methods have achieved notable success, many challenges remain, among which we can mention dealing with incomplete measurements, noisy data, counterintuitive behaviour emerging from nonlinear relations or feedback loops, and computational burden of dealing with large data sets. PMID:24709703
IMMAN: free software for information theory-based chemometric analysis.
Urias, Ricardo W Pino; Barigye, Stephen J; Marrero-Ponce, Yovani; García-Jacas, César R; Valdes-Martiní, José R; Perez-Gimenez, Facundo
2015-05-01
The features and theoretical background of a new and free computational program for chemometric analysis named IMMAN (an acronym for Information theory-based CheMoMetrics ANalysis) are presented. This is multi-platform software developed in the Java programming language, designed with a remarkably user-friendly graphical interface for the computation of a collection of information-theoretic functions adapted for rank-based unsupervised and supervised feature selection tasks. A total of 20 feature selection parameters are presented, with the unsupervised and supervised frameworks represented by 10 approaches in each case. Several information-theoretic parameters traditionally used as molecular descriptors (MDs) are adapted for use as unsupervised rank-based feature selection methods. On the other hand, a generalization scheme for the previously defined differential Shannon's entropy is discussed, as well as the introduction of Jeffreys information measure for supervised feature selection. Moreover, well-known information-theoretic feature selection parameters, such as information gain, gain ratio, and symmetrical uncertainty, are incorporated into the IMMAN software (http://mobiosd-hub.com/imman-soft/), following an equal-interval discretization approach. IMMAN offers data pre-processing functionalities, such as missing values processing, dataset partitioning, and browsing. Moreover, single parameter or ensemble (multi-criteria) ranking options are provided. Consequently, this software is suitable for tasks like dimensionality reduction, feature ranking, as well as comparative diversity analysis of data matrices. Simple examples of applications performed with this program are presented. A comparative study between IMMAN and WEKA feature selection tools using the Arcene dataset was performed, demonstrating similar behavior. In addition, it is revealed that the use of IMMAN unsupervised feature selection methods improves the performance of both IMMAN and WEKA supervised algorithms. A graphic representation of Shannon's entropy distribution for MD-calculating software is also provided.
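Two of the supervised parameters mentioned, information gain and symmetrical uncertainty, reduce to a few lines once features are discretized on an equal-interval grid. A sketch with illustrative names, not IMMAN's Java implementation:

    import numpy as np

    def entropy(labels):
        """Shannon entropy (bits) of a discrete label vector."""
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-np.sum(p * np.log2(p)))

    def symmetrical_uncertainty(feature, classes, n_bins=5):
        """SU = 2 * IG / (H(X) + H(Y)), with IG = H(X) + H(Y) - H(X, Y),
        using equal-interval discretization of the feature."""
        edges = np.linspace(feature.min(), feature.max(), n_bins + 1)
        x = np.digitize(feature, edges[1:-1])          # bins 0 .. n_bins - 1
        joint = entropy([f"{a}|{b}" for a, b in zip(x, classes)])
        ig = entropy(x) + entropy(classes) - joint     # information gain I(X; Y)
        return 2.0 * ig / (entropy(x) + entropy(classes))

    # Ranking features by SU (or IG, gain ratio, ...) gives the kind of
    # supervised rank-based selection the program automates.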
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretical models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
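A heavily simplified sketch of the mixed linear/non-linear split, for intuition only: this version profiles the linear parameters by least squares and assumes flat priors and a fixed data weight, whereas the paper analytically marginalizes the linear parameters and samples the relative data weights within the same Bayesian framework.

    import numpy as np

    def metropolis_mixed(d, G_of_theta, theta0, n_iter=5000, step=0.1, sigma=1.0):
        """Toy sampler for d = G(theta) @ m + noise: the linear part m is
        solved in closed form for each proposed non-linear theta, and theta
        is accepted or rejected by Metropolis on the residual misfit."""
        rng = np.random.default_rng(1)

        def misfit(theta):
            G = G_of_theta(theta)                      # design matrix for this theta
            m, *_ = np.linalg.lstsq(G, d, rcond=None)  # closed-form linear solve
            r = d - G @ m
            return 0.5 * float(r @ r) / sigma ** 2

        theta = np.asarray(theta0, dtype=float)
        phi = misfit(theta)
        samples = []
        for _ in range(n_iter):
            prop = theta + step * rng.standard_normal(theta.size)
            phi_prop = misfit(prop)
            if np.log(rng.random()) < phi - phi_prop:  # Metropolis acceptance
                theta, phi = prop, phi_prop
            samples.append(theta.copy())
        return np.array(samples)

The payoff of the split is that the Monte Carlo sampler only explores the low-dimensional non-linear parameters (e.g. locking depth or fault geometry), while slip-like linear parameters are handled analytically.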
Theory of Remote Image Formation
NASA Astrophysics Data System (ADS)
Blahut, Richard E.
2004-11-01
In many applications, images, such as ultrasonic or X-ray signals, are recorded and then analyzed with digital or optical processors in order to extract information. Such processing requires the development of algorithms of great precision and sophistication. This book presents a unified treatment of the mathematical methods that underpin the various algorithms used in remote image formation. The author begins with a review of transform and filter theory. He then discusses two- and three-dimensional Fourier transform theory, the ambiguity function, image construction and reconstruction, tomography, baseband surveillance systems, and passive systems (where the signal source might be an earthquake or a galaxy). Information-theoretic methods in image formation are also covered, as are phase errors and phase noise. Throughout the book, practical applications illustrate theoretical concepts, and there are many homework problems. The book is aimed at graduate students of electrical engineering and computer science, and practitioners in industry.
ERIC Educational Resources Information Center
Ilvonen, Ilona
2013-01-01
Information security management is an area with a lot of theoretical models. The models are designed to guide practitioners in prioritizing management resources in companies. Information security management education should address the gap between the academic ideals and practice. This paper introduces a teaching method that has been in use as…
Intermodulation Atomic Force Microscopy and Spectroscopy
NASA Astrophysics Data System (ADS)
Hutter, Carsten; Platz, Daniel; Tholen, Erik; Haviland, David; Hansson, Hans
2009-03-01
We present a powerful new method of dynamic AFM, which allows one to gain far more information about the tip-surface interaction than standard amplitude or phase imaging, while scanning at comparable speed. Our method, called intermodulation atomic force microscopy (ImAFM), employs the manifestly nonlinear phenomenon of intermodulation to extract information about tip-surface forces. ImAFM uses one eigenmode of a mechanical resonator, the latter driven at two frequencies to produce many spectral peaks near its resonance, where sensitivity is highest [1]. We furthermore present a protocol for decoding the combined information encoded in the spectrum of intermodulation peaks. Our theoretical framework suggests methods to enhance the gained information by using a different parameter regime as compared to Ref. [1]. We also discuss strategies for solving the inverse problem, i.e., for extracting the nonlinear tip-surface interaction from the response, and note the limitations of our theoretical analysis. We will further report on the latest progress in experimentally employing our new protocol. [1] D. Platz, E. A. Tholen, D. Pesen, and D. B. Haviland, Appl. Phys. Lett. 92, 153106 (2008).
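The bookkeeping behind intermodulation is easy to sketch: a nonlinear tip-surface force driven by two tones responds at mixing products of the drive frequencies. An illustrative Python snippet (units arbitrary):

    def imp_frequencies(f1, f2, max_order=5):
        """Odd-order intermodulation product frequencies |m*f1 + n*f2| with
        3 <= |m| + |n| <= max_order. With f1 and f2 placed close together
        near resonance, difference-type products such as 2*f1 - f2 land back
        inside the narrow resonant band where the cantilever is most sensitive."""
        freqs = set()
        for m in range(-max_order, max_order + 1):
            for n in range(-max_order, max_order + 1):
                order = abs(m) + abs(n)
                if 3 <= order <= max_order and order % 2 == 1:
                    freqs.add(round(abs(m * f1 + n * f2), 9))
        return sorted(freqs)

    # imp_frequencies(300.0, 300.5) -> [..., 299.0, 299.5, 301.0, 301.5, ...] (kHz)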
Van Zyl, Hendra; Kotze, Marike; Laubscher, Ria
2014-03-28
eHealth has been identified as a useful approach to disseminate HIV/AIDS information. Together with Consumer Health Informatics (CHI), the Web-to-Public Knowledge Transfer Model (WPKTM) has been applied as a theoretical framework to identify consumer needs for AfroAIDSinfo, a South African Web portal. As part of the CHI practice, regular eSurveys are conducted to determine whether these needs are changing and are continually being met. eSurveys show high rates of satisfaction with the content as well as the modes of delivery. Consumers regard the information as reliable enough to reuse, both for education and for reference. Using CHI and the WPKTM as a theoretical framework ensures that the needs of consumers are being met and that they find the tailored methods of presenting the information agreeable. Combining ICTs and theories in eHealth interventions, this approach can be expanded to deliver information in other sectors of public health.
Information theoretic approach for assessing image fidelity in photon-counting arrays.
Narravula, Srikanth R; Hayat, Majeed M; Javidi, Bahram
2010-02-01
The method of photon-counting integral imaging has been introduced recently for three-dimensional object sensing, visualization, recognition and classification of scenes under photon-starved conditions. This paper presents an information-theoretic model for the photon-counting imaging (PCI) method, thereby providing a rigorous foundation for the merits of PCI in terms of image fidelity. This, in turn, can facilitate our understanding of the demonstrated success of photon-counting integral imaging in compressive imaging and classification. The mutual information between the source and photon-counted images is derived in a Markov random field setting and normalized by the source image's entropy, yielding a fidelity metric that is between zero and unity, which respectively corresponds to complete loss of information and full preservation of information. Calculations suggest that the PCI fidelity metric increases with spatial correlation in the source image, from which we infer that the PCI method is particularly effective for source images with high spatial correlation; the metric also increases with the reduction in photon-number uncertainty. As an application of the theory, an image-classification problem is considered, showing a congruous relationship between the fidelity metric and the classifier's performance.
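The fidelity metric can be mimicked end to end in a few lines: simulate photon counting on a source image, then take the ratio of mutual information to source entropy. This pixelwise plug-in version is only an illustration; the paper derives the quantity in a Markov random field setting.

    import numpy as np

    def pci_fidelity(source, mean_photons=10.0, n_bins=16, seed=0):
        """Toy I(S; C) / H(S): C ~ Poisson(mean_photons * normalized source
        irradiance) per pixel; MI and entropy estimated by histograms.
        Returns 0 for complete information loss, 1 for full preservation."""
        rng = np.random.default_rng(seed)
        s = np.ravel(np.asarray(source, dtype=float))
        counts = rng.poisson(mean_photons * s / s.mean())
        pxy, _, _ = np.histogram2d(s, counts, bins=n_bins)
        pxy /= pxy.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        ok = pxy > 0
        mi = np.sum(pxy[ok] * np.log2(pxy[ok] / np.outer(px, py)[ok]))
        h_s = -np.sum(px[px > 0] * np.log2(px[px > 0]))
        return float(mi / h_s)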
Mielniczuk, Jan; Teisseyre, Paweł
2018-03-01
Detection of gene-gene interactions is one of the most important challenges in genome-wide case-control studies. Besides traditional logistic regression analysis, entropy-based methods have recently attracted significant attention. Among entropy-based methods, interaction information is one of the most promising measures, having many desirable properties. Although both logistic regression and interaction information have been used in several genome-wide association studies, the relationship between them has not been thoroughly investigated theoretically. The present paper attempts to fill this gap. We show that although certain connections between the two methods exist, in general they refer to two different concepts of dependence, and looking for interactions in those two senses leads to different approaches to interaction detection. We introduce an ordering between interaction measures and specify conditions for independent and dependent genes under which interaction information is a more discriminative measure than logistic regression. Moreover, we show that for so-called perfect distributions these measures are equivalent. The numerical experiments illustrate the theoretical findings, indicating that interaction information and its modified version are more universal tools for detecting various types of interaction than logistic regression and linkage disequilibrium measures.
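The central quantity is compact enough to state in code: interaction information compares the information carried by a genotype pair with the sum of the marginal contributions. A plug-in sketch for integer-coded data; note that sign conventions for interaction information vary across the literature.

    import numpy as np

    def _entropy(v):
        _, counts = np.unique(v, return_counts=True)
        p = counts / counts.sum()
        return float(-np.sum(p * np.log2(p)))

    def _mi(x, y):
        joint = x * (int(y.max()) + 1) + y
        return _entropy(x) + _entropy(y) - _entropy(joint)

    def interaction_information(g1, g2, y):
        """II(G1; G2; Y) = I((G1, G2); Y) - I(G1; Y) - I(G2; Y): the information
        the genotype pair carries about case/control status beyond the two
        marginal effects; positive values suggest synergy."""
        g1, g2, y = (np.asarray(v, dtype=int) for v in (g1, g2, y))
        pair = g1 * (int(g2.max()) + 1) + g2
        return _mi(pair, y) - _mi(g1, y) - _mi(g2, y)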
1987-10-01
[Fragmentary OCR of a technical report: figure captions for a durability test at 800 C and 95% r.h. and an SEM photomicrograph (1600x) of an E-8385 film spin-coated from a 2 wt% solution onto a ferrotype plate, plus text noting that while Auger line shapes theoretically yield oxidation-state information, stoichiometry conclusions for TiO2 from experiment remain uncertain, and citing Fadley et al. [37] for a detailed theoretical discussion of quantitative XPS.]
Two Improved Access Methods on Compact Binary (CB) Trees.
ERIC Educational Resources Information Center
Shishibori, Masami; Koyama, Masafumi; Okada, Makoto; Aoe, Jun-ichi
2000-01-01
Discusses information retrieval and the use of binary trees as a fast access method for search strategies such as hashing. Proposes new methods based on compact binary trees that provide faster access and more compact storage, explains the theoretical basis, and confirms the validity of the methods through empirical observations. (LRW)
Crisis in science: in search for new theoretical foundations.
Schroeder, Marcin J
2013-09-01
Recognition of the need for theoretical biology more than half a century ago did not bring substantial progress in this direction. Recently, the need for new methods in science, including physics, became clear. The breakthrough should be sought in answering the question "What is life?", which can help to explain the mechanisms of consciousness and consequently give insight into the way we comprehend reality. This could help in the search for new methods in the study of both physical and biological phenomena. However, to achieve this, a new theoretical discipline will have to be developed, with a very general conceptual framework and the rigor of mathematical reasoning, allowing it to assume the leading role in science. Since its foundations are in the recognition of the role of life and consciousness in the epistemic process, it could be called biomathics. The prime candidates proposed here as the fundamental concepts of biomathics are 'information' and 'information integration', with an appropriately general mathematical formalism.
The disinfection of drinking water. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The current status of theoretically possible methods for disinfecting drinking water is reviewed. The specific biocidal activity of each of the disinfectants is considered, as well as information (or lack of it) on the practical application and reliability of the methods.
On Utilizing Optimal and Information Theoretic Syntactic Modeling for Peptide Classification
NASA Astrophysics Data System (ADS)
Aygün, Eser; Oommen, B. John; Cataltepe, Zehra
Syntactic methods in pattern recognition have been used extensively in bioinformatics, and in particular, in the analysis of gene and protein expressions, and in the recognition and classification of bio-sequences. These methods are almost universally distance-based. This paper concerns the use of an Optimal and Information Theoretic (OIT) probabilistic model [11] to achieve peptide classification using the information residing in their syntactic representations. The latter has traditionally been achieved using the edit distances required in the respective peptide comparisons. We advocate that one can model the differences between compared strings as a mutation model consisting of random Substitutions, Insertions and Deletions (SID) obeying the OIT model. Thus, in this paper, we show that the probability measure obtained from the OIT model can be perceived as a sequence similarity metric, using which a Support Vector Machine (SVM)-based peptide classifier, referred to as OIT_SVM, can be devised.
ERIC Educational Resources Information Center
Dirks, Melanie A.; De Los Reyes, Andres; Briggs-Gowan, Margaret; Cella, David; Wakschlag, Lauren S.
2012-01-01
This paper examines the selection and use of multiple methods and informants for the assessment of disruptive behavior syndromes and attention deficit/hyperactivity disorder, providing a critical discussion of (a) the bidirectional linkages between theoretical models of childhood psychopathology and current assessment techniques; and (b) current…
The use of information theory for the evaluation of biomarkers of aging and physiological age.
Blokh, David; Stambler, Ilia
2017-04-01
The present work explores the application of information theoretical measures, such as entropy and normalized mutual information, for research of biomarkers of aging. The use of information theory affords unique methodological advantages for the study of aging processes, as it allows evaluating non-linear relations between biological parameters, providing the precise quantitative strength of those relations, both for individual and multiple parameters, showing cumulative or synergistic effect. Here we illustrate those capabilities utilizing a dataset on heart disease, including diagnostic parameters routinely available to physicians. The use of information-theoretical methods, utilizing normalized mutual information, revealed the exact amount of information that various diagnostic parameters or their combinations contained about the persons' age. Based on those exact informative values for the correlation of measured parameters with age, we constructed a diagnostic rule (a decision tree) to evaluate physiological age, as compared to chronological age. The present data illustrated that younger subjects suffering from heart disease showed characteristics of people of higher age (higher physiological age). Utilizing information-theoretical measures, with additional data, it may be possible to create further clinically applicable information-theory-based markers and models for the evaluation of physiological age, its relation to age-related diseases and its potential modifications by therapeutic interventions.
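A hypothetical skeleton of the workflow described, scoring parameters by normalized mutual information against a discretized age and then growing a shallow decision tree as the diagnostic rule. The scikit-learn calls are standard; the binning, quartile cuts, and tree depth here are our choices, not the study's.

    import numpy as np
    from sklearn.metrics import normalized_mutual_info_score
    from sklearn.tree import DecisionTreeClassifier

    def rank_and_tree(X, age, n_bins=5, max_depth=3):
        """X: (patients, parameters) diagnostic data; age in years.
        Returns parameters ranked by NMI with age bands, plus a shallow
        tree predicting the age band (a 'physiological age' rule)."""
        cuts = np.quantile(age, np.linspace(0, 1, n_bins + 1)[1:-1])
        age_band = np.digitize(age, cuts)
        scores = [
            normalized_mutual_info_score(
                age_band, np.digitize(col, np.quantile(col, [0.25, 0.5, 0.75]))
            )
            for col in X.T
        ]
        tree = DecisionTreeClassifier(max_depth=max_depth).fit(X, age_band)
        return np.argsort(scores)[::-1], tree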
Control Coordination of Multiple Agents Through Decision Theoretic and Economic Methods
2003-02-01
…investigated the design of test data for benchmarking such optimization algorithms. Our other research on combinatorial auctions included … average combination rule. We exemplified these theoretical results with experiments on stock market data, demonstrating how ensembles of classifiers can…
ERIC Educational Resources Information Center
Holden, Richard J.; Karsh, Ben-Tzion
2009-01-01
Primary objective: much research and practice related to the design and implementation of information technology in health care has been atheoretical. It is argued that using extant theory to develop testable models of health information technology (HIT) benefits both research and practice. Methods and procedures: several theories of motivation,…
Mainsah, B O; Reeves, G; Collins, L M; Throckmorton, C S
2017-08-01
The role of a brain-computer interface (BCI) is to discern a user's intended message or action by extracting and decoding relevant information from brain signals. Stimulus-driven BCIs, such as the P300 speller, rely on detecting event-related potentials (ERPs) in response to a user attending to relevant or target stimulus events. However, this process is error-prone because the ERPs are embedded in noisy electroencephalography (EEG) data, representing a fundamental problem in communication of the uncertainty in the information that is received during noisy transmission. A BCI can be modeled as a noisy communication system and an information-theoretic approach can be exploited to design a stimulus presentation paradigm to maximize the information content that is presented to the user. However, previous methods that focused on designing error-correcting codes failed to provide significant performance improvements due to underestimating the effects of psycho-physiological factors on the P300 ERP elicitation process and a limited ability to predict online performance with their proposed methods. Maximizing the information rate favors the selection of stimulus presentation patterns with increased target presentation frequency, which exacerbates refractory effects and negatively impacts performance within the context of an oddball paradigm. An information-theoretic approach that seeks to understand the fundamental trade-off between information rate and reliability is desirable. We developed a performance-based paradigm (PBP) by tuning specific parameters of the stimulus presentation paradigm to maximize performance while minimizing refractory effects. We used a probabilistic-based performance prediction method as an evaluation criterion to select a final configuration of the PBP. With our PBP, we demonstrate statistically significant improvements in online performance, both in accuracy and spelling rate, compared to the conventional row-column paradigm. By accounting for refractory effects, an information-theoretic approach can be exploited to significantly improve BCI performance across a wide range of performance levels.
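The rate-reliability trade-off referred to above is often quantified with the Wolpaw information transfer rate for an N-ary selection channel with symmetric errors. It is given here for context; it is not the probabilistic performance-prediction model the authors use.

    import numpy as np

    def wolpaw_itr(accuracy, n_choices, selections_per_min):
        """Bits per minute: B = log2 N + P log2 P + (1 - P) log2((1 - P)/(N - 1)),
        valid for 1/N < P < 1."""
        p, n = accuracy, n_choices
        bits = np.log2(n) + p * np.log2(p) + (1 - p) * np.log2((1 - p) / (n - 1))
        return bits * selections_per_min

    # e.g. a 36-character speller at 90% accuracy and 4 selections/min:
    # wolpaw_itr(0.9, 36, 4) -> about 16.8 bits/min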
Spatiotemporal coding in the cortex: information flow-based learning in spiking neural networks.
Deco, G; Schürmann, B
1999-05-15
We introduce a learning paradigm for networks of integrate-and-fire spiking neurons that is based on an information-theoretic criterion. This criterion can be viewed as a first principle that accounts for the experimentally observed fact that cortical neurons display synchronous firing for some stimuli and not for others. The principle can be regarded as the postulation of a nonparametric reconstruction method as the optimization criterion for learning the required functional connectivity, which justifies and explains synchronous firing for binding of features as a mechanism for spatiotemporal coding. This can be expressed in an information-theoretic way by maximizing the discrimination ability between different sensory inputs in minimal time.
NASA Astrophysics Data System (ADS)
Prasai, Binay; Wilson, A. R.; Wiley, B. J.; Ren, Y.; Petkov, Valeri
2015-10-01
The extent to which current theoretical modeling alone can reveal real-world metallic nanoparticles (NPs) at the atomic level is scrutinized and shown to be insufficient, and a pragmatic approach involving straightforward experiments is demonstrated to improve it. In particular, silica-supported Au100-xPdx (x = 30, 46 and 58) NPs, 4 to 6 nm in size and explored for catalytic applications, are characterized structurally by total scattering experiments, including high-energy synchrotron X-ray diffraction (XRD) coupled to atomic pair distribution function (PDF) analysis. Atomic-level models for the NPs are built by molecular dynamics simulations based on the Sutton-Chen (SC) method, an archetype of current theoretical modeling. Models are matched against independent experimental data and are demonstrated to be inaccurate unless their theoretical foundation, i.e. the SC method, is supplemented with basic yet crucial information on the length and strength of metal-to-metal bonds and, when necessary, structural disorder in the actual NPs studied. An atomic PDF-based approach for accessing such information and implementing it in theoretical modeling is put forward. For completeness, the approach is concisely demonstrated on 15 nm water-dispersed Au particles explored for bio-medical applications and on 16 nm hexane-dispersed Fe48Pd52 particles explored for magnetic applications as well. It is argued that when "tuned up" against experiments relevant to metals and alloys confined to nanoscale dimensions, such as total scattering coupled to atomic PDF analysis, rather than by mere intuition and/or against data for the respective solids, atomic-level theoretical modeling can provide a sound understanding of the synthesis-structure-property relationships in real-world metallic NPs. Ultimately this can help advance nanoscience and technology a step closer to producing metallic NPs by rational design.
ERIC Educational Resources Information Center
Kelly, Janet
2000-01-01
Indicates the importance of preparing prospective teachers who will be elementary science teachers with different methods. Presents the theoretical and practical rationale for developing a constructivist-based elementary science methods course. Discusses the impact student knowledge and understanding of science and student attitudes has on…
Hershberger, Patricia E.; Finnegan, Lorna; Altfeld, Susan; Lake, Sara; Hirshfeld-Cytron, Jennifer
2014-01-01
Background Young women with cancer now face the complex decision about whether to undergo fertility preservation. Yet little is known about how these women process information involved in making this decision. Objective The purpose of this paper is to expand theoretical understanding of the decision-making process by examining aspects of information processing among young women diagnosed with cancer. Methods Using a grounded theory approach, 27 women with cancer participated in individual, semi-structured interviews. Data were coded and analyzed using constant-comparison techniques that were guided by five dimensions within the Contemplate phase of the decision-making process framework. Results In the first dimension, young women acquired information primarily from clinicians and Internet sources. Experiential information, often obtained from peers, occurred in the second dimension. Preferences and values were constructed in the third dimension as women acquired factual, moral, and ethical information. Women desired tailored, personalized information that was specific to their situation in the fourth dimension; however, women struggled with communicating these needs to clinicians. In the fifth dimension, women offered detailed descriptions of clinician behaviors that enhance or impede decisional debriefing. Conclusion Better understanding of theoretical underpinnings surrounding women’s information processes can facilitate decision support and improve clinical care. PMID:24552086
Cost Optimization in E-Learning-Based Education Systems: Implementation and Learning Sequence
ERIC Educational Resources Information Center
Fazlollahtabar, Hamed; Yousefpoor, Narges
2009-01-01
Increasing the effectiveness of e-learning has become one of the most practically and theoretically important issues within both educational engineering and information system fields. The development of information technologies has contributed to growth in online training as an important education method. The online training environment enables…
Information theoretic analysis of edge detection in visual communication
NASA Astrophysics Data System (ADS)
Jiang, Bo; Rahman, Zia-ur
2010-08-01
Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the artifacts introduced by the image gathering process. However, experiments show that the image gathering process profoundly impacts the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, where the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. In this paper, we perform an end-to-end information-theory-based system analysis to assess edge detection methods. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and of the parameters, such as sampling and additive noise, that define the image gathering system. The edge detection algorithm is regarded as having high performance only if the information rate from the scene to the edge approaches the maximum possible. This goal can be achieved only by jointly optimizing all processes. People generally use subjective judgment to compare different edge detection methods; there has been no common tool for evaluating the performance of the different algorithms and for guiding the selection of the best algorithm for a given system or scene. Our information-theoretic assessment provides such a tool, allowing the different edge detection operators to be compared in a common environment.
Detection of allosteric signal transmission by information-theoretic analysis of protein dynamics
Pandini, Alessandro; Fornili, Arianna; Fraternali, Franca; Kleinjung, Jens
2012-01-01
Allostery offers a highly specific way to modulate protein function. Therefore, understanding this mechanism is of increasing interest for protein science and drug discovery. However, allosteric signal transmission is difficult to detect experimentally and to model because it is often mediated by local structural changes propagating along multiple pathways. To address this, we developed a method to identify communication pathways by an information-theoretical analysis of molecular dynamics simulations. Signal propagation was described as information exchange through a network of correlated local motions, modeled as transitions between canonical states of protein fragments. The method was used to describe allostery in two-component regulatory systems. In particular, the transmission from the allosteric site to the signaling surface of the receiver domain NtrC was shown to be mediated by a layer of hub residues. The location of hubs preferentially connected to the allosteric site was found in close agreement with key residues experimentally identified as involved in the signal transmission. The comparison with the networks of the homologues CheY and FixJ highlighted similarities in their dynamics. In particular, we showed that a preorganized network of fragment connections between the allosteric and functional sites exists already in the inactive state of all three proteins. PMID:22071506
Reed, Frances M; Fitzgerald, Les; Rae, Melanie
2016-01-01
To highlight philosophical and theoretical considerations for planning a mixed methods research design that can inform a practice model to guide rural district nursing end of life care. Conceptual models of nursing in the community are general and lack guidance for rural district nursing care. A combination of pragmatism and nurse agency theory can provide a framework for ethical considerations in mixed methods research in the private world of rural district end of life care. Reflection on experience gathered in a two-stage qualitative research phase, involving rural district nurses who use advocacy successfully, can inform a quantitative phase for testing and complementing the data. Ongoing data analysis and integration result in generalisable inferences to achieve the research objective. Mixed methods research that creatively combines philosophical and theoretical elements to guide design in the particular ethical situation of community end of life care can be used to explore an emerging field of interest and test the findings for evidence to guide quality nursing practice. Combining philosophy and nursing theory to guide mixed methods research design increases the opportunity for sound research outcomes that can inform a nursing model of care.
Zhao, Yubin; Li, Xiaofan; Zhang, Sha; Meng, Tianhui; Zhang, Yiwen
2016-08-23
In practical localization system design, researchers need to consider several aspects to make positioning efficient and effective, e.g., the available auxiliary information, sensing devices, equipment deployment and the environment. These practical concerns translate into technical problems, e.g., sequential position-state propagation, target-anchor geometry effects, non-line-of-sight (NLOS) identification and related prior information. It is necessary to construct an efficient framework that can exploit multiple sources of available information and guide the system design. In this paper, we propose a scalable method to analyze system performance based on the Cramér-Rao lower bound (CRLB), which can fuse all of the information adaptively. First, we use an abstract function to represent the wireless localization system model. The unknown vector of the CRLB then consists of two parts: the first part is the estimated vector, and the second part is the auxiliary vector, which helps improve the estimation accuracy. Accordingly, the Fisher information matrix is divided into two parts: the state matrix and the auxiliary matrix. Unlike purely theoretical analyses, our CRLB serves as a practical fundamental limit for systems that fuse multiple sources of information in complicated environments, e.g., recursive Bayesian estimation based on the hidden Markov model, the map matching method and NLOS identification and mitigation methods. Thus, the theoretical results more closely approach the real case. In addition, our method is more adaptable than other CRLB formulations when more unknown, yet important, factors must be considered. We use the proposed method to analyze a wireless sensor network-based indoor localization system. The influence of hybrid LOS/NLOS channels, building layout information and the relative height differences between the target and anchors are analyzed. It is demonstrated that our method exploits all of the available information for indoor localization systems and serves as an indicator for practical system evaluation.
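The partitioning described above can be made concrete with a small numerical sketch (the matrix values are invented, not from the paper): split the Fisher information matrix into state and auxiliary blocks, then obtain the bound on the estimated vector via the Schur complement, which quantifies how an unknown auxiliary parameter (e.g., an NLOS bias) inflates the position bound.

```python
import numpy as np

# Partitioned Fisher information for theta = [state; auxiliary]:
#   J = [[J_ss, J_sa],
#        [J_as, J_aa]]
# The CRLB on the state block alone is the inverse of the Schur complement
#   J_eff = J_ss - J_sa @ inv(J_aa) @ J_as.
J_ss = np.array([[8.0, 1.0], [1.0, 6.0]])   # state (x, y position) information
J_aa = np.array([[4.0]])                    # auxiliary (e.g., NLOS bias) information
J_sa = np.array([[1.5], [0.5]])             # cross-information between the blocks

J_eff = J_ss - J_sa @ np.linalg.inv(J_aa) @ J_sa.T
crlb_joint = np.linalg.inv(J_eff)           # auxiliary unknown, jointly estimated
crlb_known = np.linalg.inv(J_ss)            # auxiliary perfectly known

print("position RMSE bound, auxiliary unknown:", np.sqrt(np.trace(crlb_joint)))
print("position RMSE bound, auxiliary known:  ", np.sqrt(np.trace(crlb_known)))
```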
The Model Method: Singapore Children's Tool for Representing and Solving Algebraic Word Problems
ERIC Educational Resources Information Center
Ng, Swee Fong; Lee, Kerry
2009-01-01
Solving arithmetic and algebraic word problems is a key component of the Singapore elementary mathematics curriculum. One heuristic taught, the model method, involves drawing a diagram to represent key information in the problem. We describe the model method and a three-phase theoretical framework supporting its use. We conducted 2 studies to…
Hash Functions and Information Theoretic Security
NASA Astrophysics Data System (ADS)
Bagheri, Nasour; Knudsen, Lars R.; Naderi, Majid; Thomsen, Søren S.
Information theoretic security is an important security notion in cryptography as it provides a true lower bound for attack complexities. However, in practice attacks often have a higher cost than the information theoretic bound. In this paper we study the relationship between information theoretic attack costs and real costs. We show that in the information theoretic model, many well-known and commonly used hash functions such as MD5 and SHA-256 fail to be preimage resistant.
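A toy experiment can make the information-theoretic preimage bound tangible (this illustrates the bound itself, not any attack on full MD5 or SHA-256): truncating SHA-256 to n bits yields a hash for which random guessing needs about 2^n evaluations on average.

```python
import hashlib
import secrets

def toy_hash(data: bytes, n_bits: int = 16) -> int:
    """Truncate SHA-256 to n bits so that brute force is feasible."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - n_bits)

n_bits = 16
target = toy_hash(b"secret message", n_bits)

trials = 0
while True:
    trials += 1
    candidate = secrets.token_bytes(8)        # random preimage guess
    if toy_hash(candidate, n_bits) == target:
        break

print(f"preimage found after {trials} trials "
      f"(information-theoretic expectation ~ 2^{n_bits} = {2**n_bits})")
```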
The application of information theory for the research of aging and aging-related diseases.
Blokh, David; Stambler, Ilia
2017-10-01
This article reviews the application of information-theoretical analysis, employing measures of entropy and mutual information, for the study of aging and aging-related diseases. The research of aging and aging-related diseases is particularly suitable for the application of information theory methods, as aging processes and related diseases are multi-parametric, with continuous parameters coexisting alongside discrete parameters, and with the relations between the parameters being as a rule non-linear. Information theory provides unique analytical capabilities for the solution of such problems, with unique advantages over common linear biostatistics. Among the age-related diseases, information theory has been used in the study of neurodegenerative diseases (particularly using EEG time series for diagnosis and prediction), cancer (particularly for establishing individual and combined cancer biomarkers), diabetes (mainly utilizing mutual information to characterize the diseased and aging states), and heart disease (mainly for the analysis of heart rate variability). Few works have employed information theory for the analysis of general aging processes and frailty, as underlying determinants and possible early preclinical diagnostic measures for aging-related diseases. Generally, the use of information-theoretical analysis permits not only establishing the (non-linear) correlations between diagnostic or therapeutic parameters of interest, but may also provide a theoretical insight into the nature of aging and related diseases by establishing the measures of variability, adaptation, regulation or homeostasis, within a system of interest. It may be hoped that the increased use of such measures in research may considerably increase diagnostic and therapeutic capabilities and the fundamental theoretical mathematical understanding of aging and disease. Copyright © 2016 Elsevier Ltd. All rights reserved.
Emitter Number Estimation by the General Information Theoretic Criterion from Pulse Trains
2002-12-01
...a negative log likelihood function plus a penalty function. The general information criteria by Yin and Krishnaiah [11] are different from the regular...
Students as Tour Guides: Innovation in Fieldwork Assessment
ERIC Educational Resources Information Center
Coe, Neil M.; Smyth, Fiona M.
2010-01-01
This paper introduces and details an innovative mode of fieldcourse assessment in which students take on the role of tour guides to offer their lecturer and peers a themed, theoretically informed journey through the urban landscape of Havana, Cuba. Informed by notions of student-centered learning and mobile methods, the tour offers an enjoyable,…
Methodical Bases for the Regional Information Potential Estimation
ERIC Educational Resources Information Center
Ashmarina, Svetlana I.; Khasaev, Gabibulla R.; Mantulenko, Valentina V.; Kasarin, Stanislav V.; Dorozhkin, Evgenij M.
2016-01-01
The relevance of the investigated problem is caused by the need to assess the implementation of informatization level of the region and the insufficient development of the theoretical, content-technological, scientific and methodological aspects of the assessment of the region's information potential. The aim of the research work is to develop a…
The Role of Teacher's Authority in Students' Learning
ERIC Educational Resources Information Center
Esmaeili, Zohreh; Mohamadrezai, Hosein; Mohamadrezai, Abdolah
2015-01-01
The current article examines the relation between teachers' authority styles and the learning of secondary school students in district 9 of Tehran. The researcher collected theoretical information by the library method and then gathered field information from teachers of secondary schools in district 9 of Tehran by questionnaire; the…
Quantum entanglement of identical particles by standard information-theoretic notions
Lo Franco, Rosario; Compagno, Giuseppe
2016-01-01
Quantum entanglement of identical particles is essential in quantum information theory. Yet, its correct determination remains an open issue hindering the general understanding and exploitation of many-particle systems. Operator-based methods have been developed that attempt to overcome the issue. Here we introduce a state-based method which, as second quantization, does not label identical particles and presents conceptual and technical advances compared to the previous ones. It establishes the quantitative role played by arbitrary wave function overlaps, local measurements and particle nature (bosons or fermions) in assessing entanglement by notions commonly used in quantum information theory for distinguishable particles, like partial trace. Our approach furthermore shows that bringing identical particles into the same spatial location functions as an entangling gate, providing fundamental theoretical support to recent experimental observations with ultracold atoms. These results pave the way to set and interpret experiments for utilizing quantum correlations in realistic scenarios where overlap of particles can count, as in Bose-Einstein condensates, quantum dots and biological molecular aggregates. PMID:26857475
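For reference, the sketch below shows the standard distinguishable-particle computation that such state-based methods generalize: the entanglement entropy of a two-qubit Bell state obtained via the partial trace (the notion the abstract extends to identical particles). It is a minimal numpy illustration, not the authors' formalism.

```python
import numpy as np

# Entanglement entropy via partial trace for a two-qubit pure state.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)       # Bell state (|00> + |11>)/sqrt(2)
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)   # indices (a, b, a', b')
rho_A = np.trace(rho, axis1=1, axis2=3)         # trace out subsystem B

evals = np.linalg.eigvalsh(rho_A)
S = -sum(e * np.log2(e) for e in evals if e > 1e-12)
print("entanglement entropy:", round(S, 3), "bit")   # 1 bit for a Bell state
```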
Science information in the media: an academic approach to improve its intrinsic quality.
Bruno, Flavia; Vercellesi, Luisa
2002-01-01
The lay audience expresses a clear demand for scientific information, particularly when health and welfare are involved. For most people science is what they learn from the media. The need for good scientific journalism is pressing, to bridge the gap between the slow pace of science and the fast-moving and concise nature of successful mass communication. This academic postgraduate course was established by the Department of Pharmacological Sciences to train mediators to improve the quality of lay scientific dissemination. The programme focuses on teaching a method of selecting, analysing, understanding, mediating and diffusing scientific information to lay people. The course explores the theoretical and practical aspects of methods, techniques and channels of scientific communication. Case studies, practical exercises, and stages complement the theoretical curriculum. The teaching focus is on reducing the asymmetry between scientists and the public. The different backgrounds of students and the spread of topics are major challenges. Copyright 2002 Academic Press.
NASA Technical Reports Server (NTRS)
Walch, S.
1984-01-01
The primary focus of this research has been the theoretical study of transition metal (TM) chemistry. A major goal of this work is to provide reliable information about the interaction of H atoms with iron metal. This information is needed to understand the effect of H atoms on the processes of embrittlement and crack propagation in iron. The method used in the iron-hydrogen studies is the cluster method, in which the bulk metal is modelled by a finite number of iron atoms. There are several difficulties in applying this approach to the hydrogen-iron system. First, the nature of TM-TM and TM-H bonding was not well understood even for diatomic molecules when these studies were started. Second, relatively large iron clusters are needed to provide reasonable results.
ERIC Educational Resources Information Center
Price, Kathleen J.
2011-01-01
The use of information technology is a vital part of everyday life, but for a person with functional impairments, technology interaction may be difficult at best. Information technology is commonly designed to meet the needs of a theoretical "normal" user. However, there is no such thing as a "normal" user. A user's capabilities will vary over…
NASA Astrophysics Data System (ADS)
Clergeau, Jean-François; Ferraton, Matthieu; Guérard, Bruno; Khaplanov, Anton; Piscitelli, Francesco; Platz, Martin; Rigal, Jean-Marie; Van Esch, Patrick; Daullé, Thibault
2017-01-01
1D or 2D neutron position-sensitive detectors with individual wire or strip readout using discriminators have the advantage of being able to treat several neutron impacts partially overlapping in time, hence reducing global dead time. A single neutron impact usually gives rise to several discriminator signals. In this paper, we introduce an information-theoretical definition of image resolution. Two point-like spots of neutron impacts with a given distance between them act as a source of information (each neutron hit belongs to one spot or the other), and the detector plus signal treatment is regarded as an imperfect communication channel that transmits this information. The maximal mutual information obtained from this channel as a function of the distance between the spots allows one to define a calibration-independent measure of position resolution. We then apply this measure to quantify the position-resolution power of different algorithms treating these individual discriminator signals, which can be implemented in firmware. The method is then applied to different detectors existing at the ILL. Center-of-gravity methods usually improve the position resolution over best-wire algorithms, which are the standard way of treating these signals.
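The definition can be illustrated with a simulated stand-in (an assumed Gaussian position response and strip pitch, not data from an actual ILL detector): each neutron carries one bit of source information (which spot it came from), and the mutual information between that bit and the reported strip measures how much of it the detector-plus-algorithm channel transmits as a function of the spot separation.

```python
import numpy as np

def mi_bits(labels, strips):
    """Mutual information (bits) between spot labels and discretized positions."""
    joint = np.zeros((2, strips.max() + 1))
    np.add.at(joint, (labels, strips), 1.0)
    p = joint / joint.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    m = p > 0
    return float((p[m] * np.log2(p[m] / (px * py)[m])).sum())

rng = np.random.default_rng(2)
n, sigma, pitch = 200_000, 1.0, 0.5        # hits, position error (mm), strip pitch (mm)
for d in (0.5, 1.0, 2.0, 4.0):             # separation between the two spots (mm)
    labels = rng.integers(0, 2, n)         # 1 bit of source information per neutron
    pos = (labels - 0.5) * d + sigma * rng.standard_normal(n)
    strips = np.clip(np.round(pos / pitch).astype(int) + 40, 0, 79)
    print(f"d = {d} mm: transmitted information = "
          f"{mi_bits(labels, strips):.3f} of 1 bit")
```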
Cao, Hui; Melton, Genevieve B.; Markatou, Marianthi; Hripcsak, George
2008-01-01
Inter-case similarity metrics can potentially help find similar cases from a case base for evidence-based practice. While several methods to measure similarity between cases have been proposed, developing an effective means of measuring patient case similarity remains a challenging problem. We were interested in examining how abstraction could assist in computing case similarity. In this study, abstracted patient-specific features from medical records were used to improve an existing information-theoretic measurement. The developed metric, using a combination of abstracted disease, finding, procedure and medication features, achieved a correlation with expert judgments between 0.6012 and 0.6940. PMID:18487093
Recent statistical methods for orientation data
NASA Technical Reports Server (NTRS)
Batschelet, E.
1972-01-01
The application of statistical methods for determining the areas of animal orientation and navigation is discussed. The method employed is limited to the two-dimensional case. Various tests for determining the validity of the statistical analysis are presented. Mathematical models are included to support the theoretical considerations, and tables of data are developed to show the value of the information obtained by statistical analysis.
Information analysis of hyperspectral images from the hyperion satellite
NASA Astrophysics Data System (ADS)
Puzachenko, Yu. G.; Sandlersky, R. B.; Krenke, A. N.; Puzachenko, M. Yu.
2017-07-01
A new method of estimating the outgoing radiation spectra from data obtained by the Hyperion EO-1 satellite is considered. In theoretical terms, this method is based on the nonequilibrium thermodynamics concept, with corresponding estimates of the entropy and the Kullback information. The obtained information estimates make it possible to assess the effective work of the landscape cover, both in general and for its various types, and to identify the spectrum ranges primarily responsible for the information increment and, accordingly, for the effective work. The information is measured in the frequency-band intervals corresponding to the peaks of solar radiation absorption by different pigments, mesophyll, and water, to evaluate the system's operation through their synthesis and moisture accumulation. This method is expected to be effective for investigating ecosystem functioning by hyperspectral remote sensing.
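A compact sketch of the entropy and Kullback-information estimates described above, on invented five-band spectra: each reflected spectrum is normalized to a probability distribution, and its divergence from the incoming (solar) spectrum quantifies how selectively the surface transforms radiation.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a normalized spectrum."""
    p = p / p.sum()
    return float(-(p[p > 0] * np.log(p[p > 0])).sum())

def kullback(p, q):
    """Kullback information of p relative to q (nats)."""
    p, q = p / p.sum(), q / q.sum()
    m = p > 0
    return float((p[m] * np.log(p[m] / q[m])).sum())

solar  = np.array([1.0, 0.9, 0.8, 0.6, 0.4])     # toy incoming spectrum (5 bands)
forest = np.array([0.05, 0.08, 0.4, 0.1, 0.3])   # strong absorption in some bands
bare   = np.array([0.3, 0.28, 0.25, 0.2, 0.15])  # nearly non-selective reflector

for name, refl in (("forest", forest), ("bare soil", bare)):
    print(f"{name}: entropy = {entropy(refl):.3f} nats, "
          f"K(outgoing || incoming) = {kullback(refl, solar):.3f} nats")
```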
ERIC Educational Resources Information Center
Katz, Phyllis; McGinnis, J. Randy; Riedinger, Kelly; Marbach-Ad, Gili; Dai, Amy
2013-01-01
In case studies of two first-year elementary classroom teachers, we explored the influence of informal science education (ISE) they experienced in their teacher education program. Our theoretical lens was identity development, delimited to classroom science teaching. We used complementary data collection methods and analysis, including interviews,…
Constructing probability boxes and Dempster-Shafer structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferson, Scott; Kreinovich, Vladik; Grinzburg, Lev
This report summarizes a variety of the most useful and commonly applied methods for obtaining Dempster-Shafer structures, and their mathematical kin, probability boxes, from empirical information or theoretical knowledge. The report includes a review of the aggregation methods for handling agreement and conflict when multiple such objects are obtained from different sources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armas-Perez, Julio C.; Londono-Hurtado, Alejandro; Guzman, Orlando
2015-07-27
A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.
Evolution of the Use of Information within the Operational Art: Impact on Modern US Forces
2013-04-16
...The employment of information in innovative ways derived from its theoretical operational art underpinnings is the most powerful method of...commanders, not least because of a lack of technology by which to project that information. Information was tactical and could include troop level controls...
Li, Yue; Jha, Devesh K; Ray, Asok; Wettergren, Thomas A
2018-06-01
This paper presents an information-theoretic performance analysis of passive sensor networks for the detection of moving targets. The proposed method falls largely under the category of data-level information fusion in sensor networks. To this end, a measure of information contribution for sensors is formulated in a symbolic dynamics framework. The network information state is approximately represented as the largest principal component of the time series collected across the network. To quantify each sensor's contribution to the generation of the information content, Markov machine models as well as x-Markov (pronounced cross-Markov) machine models, conditioned on the network information state, are constructed; the difference between the conditional entropies of these machines is then treated as an approximate measure of information contribution by the respective sensors. The x-Markov models represent the conditional temporal statistics given the network information state. The proposed method has been validated on experimental data collected from a local area network of passive sensors for target detection, where the statistical characteristics of the environmental disturbances are similar to those of the target signal in terms of time scale and texture. A distinctive feature of the proposed algorithm is that the network decisions are independent of the behavior and identity of the individual sensors, which is desirable from a computational perspective. Results are presented to demonstrate the proposed method's efficacy in correctly identifying the presence of a target with very low false-alarm rates. The performance of the underlying algorithm is compared with that of a recent data-driven, feature-level information fusion algorithm, and it is shown that the proposed algorithm outperforms the other algorithm.
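As a simplified stand-in for the Markov and x-Markov machine construction (not the authors' xD-Markov implementation), the sketch below symbolizes each sensor stream, extracts the network information state as the largest principal component, and scores each sensor's contribution as the drop in its symbol entropy once the network state is conditioned on; the sensor signals are synthetic.

```python
import numpy as np

def cond_entropy(x, y):
    """H(X|Y) in bits for integer-coded symbol sequences."""
    joint = np.zeros((x.max() + 1, y.max() + 1))
    np.add.at(joint, (x, y), 1.0)
    p = joint / joint.sum()
    py = p.sum(axis=0)
    h = 0.0
    for j in range(p.shape[1]):
        col = p[:, j][p[:, j] > 0]
        if py[j] > 0:
            h -= (col * np.log2(col / py[j])).sum()
    return h

rng = np.random.default_rng(3)
t = np.arange(5000)
target = np.where(np.sin(0.01 * t) > 0, 1.0, -1.0)        # slow "target" signature
sensors = [target + 0.8 * rng.standard_normal(t.size),    # informative sensor
           1.2 * rng.standard_normal(t.size)]             # disturbance-only sensor

X = np.stack(sensors)
pc1 = np.linalg.svd(X - X.mean(1, keepdims=True), full_matrices=False)[2][0]
net = np.digitize(pc1, [np.median(pc1)])                  # symbolized network state

for k, s in enumerate(sensors):
    sym = np.digitize(s, [np.median(s)])
    gain = cond_entropy(sym, np.zeros_like(sym)) - cond_entropy(sym, net)
    print(f"sensor {k}: information contribution ~ {gain:.3f} bits")
```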
Linking theory with qualitative research through study of stroke caregiving families.
Pierce, Linda L; Steiner, Victoria; Cervantez Thompson, Teresa L; Friedemann, Marie-Luise
2014-01-01
This theoretical article outlines the deliberate process of applying a qualitative data analysis method rooted in Friedemann's Framework of Systemic Organization through the study of a web-based education and support intervention for stroke caregiving families. Directed by Friedemann's framework, the analytic method involved developing, refining, and using a coding rubric to explore interactive patterns between caregivers and care recipients from this 3-month feasibility study using this education and support intervention. Specifically, data were gathered from the intervention's web-based discussion component between caregivers and the nurse specialist, as well as from telephone caregiver interviews. A theoretical framework guided the process of developing and refining this coding rubric for the purpose of organizing data; but, more importantly, guided the investigators' thought processes, allowing them to extract rich information from the data set, as well as synthesize this information to generate a broad understanding of the caring situation. © 2013 Association of Rehabilitation Nurses.
NASA Astrophysics Data System (ADS)
Wu, Z.; Gao, K.; Wang, Z. L.; Shao, Q. G.; Hu, R. F.; Wei, C. X.; Zan, G. B.; Wali, F.; Luo, R. H.; Zhu, P. P.; Tian, Y. C.
2017-06-01
In X-ray grating-based phase contrast imaging, information retrieval is necessary for quantitative research, especially for phase tomography. However, numerous and repetitive processing steps have to be performed for tomographic reconstruction. In this paper, we report a novel information retrieval method that enables retrieving phase and absorption information by means of a linear combination of two mutually conjugate images. Thanks to the distributive law of multiplication and the commutative and associative laws of addition, the information retrieval can be performed after tomographic reconstruction, thus simplifying the information retrieval procedure dramatically. The theoretical model of this method is established both in parallel-beam geometry for the Talbot interferometer and in fan-beam geometry for the Talbot-Lau interferometer. Numerical experiments are also performed to confirm the feasibility and validity of the proposed method. In addition, we discuss its applicability in cone-beam geometry and its advantages compared with other methods. Moreover, this method can also be employed in other differential phase contrast imaging methods, such as diffraction-enhanced imaging, non-interferometric imaging, and edge illumination.
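The commutation argument is easy to verify numerically. In the sketch below (with random arrays standing in for data and a smoothing convolution standing in for the linear reconstruction operator), retrieving phase by a linear combination of the two mutually conjugate images before or after the linear operation gives the same result to machine precision.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((64, 64))          # absorption signal
P = rng.random((64, 64))          # (differential) phase signal
I_plus, I_minus = A + P, A - P    # two mutually conjugate images

# Linear "reconstruction" stand-in: any linear operator commutes with the
# retrieval combination (here, a separable smoothing convolution).
k = np.array([0.25, 0.5, 0.25])
def linear_recon(img):
    img = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, img, k, mode="same")

# retrieve-then-reconstruct vs reconstruct-then-retrieve
phase_a = linear_recon(0.5 * (I_plus - I_minus))
phase_b = 0.5 * (linear_recon(I_plus) - linear_recon(I_minus))
print("max difference:", np.abs(phase_a - phase_b).max())   # ~ machine precision
```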
NASA Astrophysics Data System (ADS)
Wang, Xu; Le, Anh-Thu; Zhou, Zhaoyan; Wei, Hui; Lin, C. D.
2017-08-01
We provide a unified theoretical framework for recently emerging experiments that retrieve fixed-in-space molecular information through time-domain rotational coherence spectroscopy. Unlike a previous approach by Makhija et al. (V. Makhija et al., arXiv:1611.06476), our method can be applied to the retrieval of both real-valued (e.g., ionization yield) and complex-valued (e.g., induced dipole moment) molecular response information. It is also a direct retrieval method without using iterations. We also demonstrate that experimental parameters, such as the fluence of the aligning laser pulse and the rotational temperature of the molecular ensemble, can be quite accurately determined using a statistical method.
NASA Astrophysics Data System (ADS)
Duan, B.; Bari, M. A.; Wu, Z. Q.; Jun, Y.; Li, Y. M.; Wang, J. G.
2012-11-01
Aims: We present relativistic quantum mechanical calculations of electron-impact broadening of the singlet and triplet transition 2s3s ← 2s3p in four Be-like ions from N IV to Ne VII. Methods: In our theoretical calculations, the K-matrix and related symmetry information determined by the colliding systems are generated by the DARC codes. Results: A careful comparison between our calculations and experimental results shows good agreement. Our calculated widths of spectral lines also agree with earlier theoretical results. Our investigations provide new methods of calculating electron-impact broadening parameters for plasma diagnostics.
McDonnell, J D; Schunck, N; Higdon, D; Sarich, J; Wild, S M; Nazarewicz, W
2015-03-27
Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. The example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
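A drastically reduced sketch of the same workflow (a quadratic toy model in place of the Skyrme functional, random-walk Metropolis in place of the Gaussian-process-assisted sampler): sample the parameter posterior given data, then push posterior samples through the model to propagate statistical uncertainty into an extrapolated prediction.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 12)
true_theta = np.array([1.0, -2.0])
y = true_theta[0] * x + true_theta[1] * x**2 + 0.05 * rng.standard_normal(x.size)

def log_post(theta):
    """Flat prior + Gaussian likelihood (sigma = 0.05) for a quadratic toy model."""
    resid = y - (theta[0] * x + theta[1] * x**2)
    return -0.5 * np.sum((resid / 0.05) ** 2)

theta = np.zeros(2)
lp = log_post(theta)
samples = []
for _ in range(20000):                         # random-walk Metropolis
    prop = theta + 0.05 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
samples = np.array(samples[5000:])             # discard burn-in

x_new = 1.5                                    # extrapolate beyond the data
pred = samples[:, 0] * x_new + samples[:, 1] * x_new**2
print(f"prediction at x={x_new}: {pred.mean():.3f} +/- {pred.std():.3f}")
```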
NASA Astrophysics Data System (ADS)
Hyer, E. J.; Reid, J. S.; Schmidt, C. C.; Giglio, L.; Prins, E.
2009-12-01
The diurnal cycle of fire activity is crucial for accurate simulation of the atmospheric effects of fire emissions, especially at finer spatial and temporal scales. Estimating diurnal variability in emissions is also a critical problem for the construction of emissions estimates from multiple sensors with variable coverage patterns. An optimal diurnal emissions estimate will use as much information as possible from satellite fire observations, compensate for known biases in those observations, and use detailed theoretical models of the diurnal cycle to fill in missing information. As part of ongoing improvements to the Fire Location and Monitoring of Burning Emissions (FLAMBE) fire monitoring system, we evaluated several different methods of integrating observations with different temporal sampling. We used geostationary fire detections from WF_ABBA, fire detection data from MODIS, empirical diurnal cycles from TRMM, and simple theoretical diurnal curves based on surface heating. Our experiments integrated these data in different combinations to estimate the diurnal cycle of emissions for each location and time. Hourly emissions estimates derived using these methods were tested using an aerosol transport model. We present the results of this comparison and discuss the implications of our results for the broader problem of multi-sensor data fusion in fire emissions modeling.
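One of the simplest theoretical diurnal curves mentioned above can be sketched as follows (the shape parameters and the daily total are assumptions for illustration, not FLAMBE's actual curve): distribute a satellite-derived daily emission total over the hours of the day using a Gaussian peaking in the early afternoon, as a surface-heating proxy.

```python
import numpy as np

hours = np.arange(24)
# assumed diurnal shape: a Gaussian peaking in early afternoon (surface-heating proxy)
shape = np.exp(-0.5 * ((hours - 13.5) / 2.5) ** 2)
weights = shape / shape.sum()               # fractions of the daily total per hour

daily_total = 120.0                         # hypothetical daily emission for one cell
hourly = daily_total * weights
peak = int(hours[np.argmax(hourly)])
print(f"peak hour: {peak}:00, peak emission: {hourly.max():.2f} per hour")
```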
Multimodal Image Registration through Simultaneous Segmentation.
Aganj, Iman; Fischl, Bruce
2017-11-01
Multimodal image registration facilitates the combination of complementary information from images acquired with different modalities. Most existing methods require computation of the joint histogram of the images, while some perform joint segmentation and registration in alternate iterations. In this work, we introduce a new non-information-theoretical method for pairwise multimodal image registration, in which the error of segmentation - using both images - is considered as the registration cost function. We empirically evaluate our method via rigid registration of multi-contrast brain magnetic resonance images, and demonstrate an often higher registration accuracy in the results produced by the proposed technique, compared to those by several existing methods.
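A deliberately simplified one-dimensional caricature of the idea (not the authors' implementation): for each candidate shift, one image is segmented into intensity classes, and the within-class variance of the other image serves as the (joint) segmentation error used as the registration cost; the correct alignment yields the most compact classes. The authors' actual cost uses a richer joint segmentation of both images.

```python
import numpy as np

rng = np.random.default_rng(6)
scene = np.repeat(rng.integers(0, 3, 40), 25).astype(float)         # 3-tissue "anatomy"
mod1 = scene + 0.05 * rng.standard_normal(scene.size)               # modality 1
mod2 = (2.5 - scene) ** 2 + 0.05 * rng.standard_normal(scene.size)  # nonlinear contrast
mod2 = np.roll(mod2, 17)                                            # unknown true shift

def segmentation_cost(a, b, k=3):
    """Within-class variance of b under a k-class segmentation driven by a."""
    edges = np.quantile(a, np.linspace(0, 1, k + 1)[1:-1])
    labels = np.digitize(a, edges)
    return sum(b[labels == j].var() * (labels == j).sum()
               for j in range(k) if (labels == j).any())

shifts = range(-25, 26)
costs = [segmentation_cost(mod1, np.roll(mod2, -s)) for s in shifts]
print("estimated shift:", list(shifts)[int(np.argmin(costs))])      # expect 17
```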
ERIC Educational Resources Information Center
Peterson, Janey C.; Czajkowski, Susan; Charlson, Mary E.; Link, Alissa R.; Wells, Martin T.; Isen, Alice M.; Mancuso, Carol A.; Allegrante, John P.; Boutin-Foster, Carla; Ogedegbe, Gbenga; Jobe, Jared B.
2013-01-01
Objective: To describe a mixed-methods approach to develop and test a basic behavioral science-informed intervention to motivate behavior change in 3 high-risk clinical populations. Our theoretically derived intervention comprised a combination of positive affect and self-affirmation (PA/SA), which we applied to 3 clinical chronic disease…
Information-theoretic metamodel of organizational evolution
NASA Astrophysics Data System (ADS)
Sepulveda, Alfredo
2011-12-01
Social organizations are abstractly modeled by holarchies---self-similar connected networks---and intelligent complex adaptive multiagent systems---large networks of autonomous reasoning agents interacting via scaled processes. However, little is known of how information shapes evolution in such organizations, a gap that can lead to misleading analytics. The research problem addressed in this study was the ineffective manner in which classical model-predict-control methods used in business analytics attempt to define organization evolution. The purpose of the study was to construct an effective metamodel for organization evolution based on a proposed complex adaptive structure---the info-holarchy. Theoretical foundations of this study were holarchies, complex adaptive systems, evolutionary theory, and quantum mechanics, among other recently developed physical and information theories. The research questions addressed how information evolution patterns gleaned from the study's inductive metamodel more aptly explain volatility in organizations. In this study, a hybrid grounded theory based on abstract inductive extensions of information theories was utilized as the research methodology. An overarching heuristic metamodel was framed from the theoretical analysis of the properties of these extension theories and applied to business, neural, and computational entities. This metamodel resulted in the synthesis of a metaphor for, and generalization of, organization evolution, serving as a recommended and appropriate analytical tool for viewing business dynamics in future applications. This study may manifest positive social change through a fundamental understanding of complexity in business derived from general information theories, resulting in more effective management.
Determination of principal stress in birefringent composites by hole-drilling method
NASA Technical Reports Server (NTRS)
Prabhakaran, R.
1981-01-01
The application of transmission photoelasticity to the stress analysis of composite materials is discussed. The method consists in drilling very small holes at points where the state of stress has to be determined. Experiments are described which verify the theoretical predictions. The limitations of the method are discussed, and it is concluded that valuable information concerning the state of stress in a composite model can be obtained through the suggested method.
Development of Theoretical and Computational Methods for Single-Source Bathymetric Data
2016-09-15
...A method is outlined for fusing the information inherent in such source documents, at different scales, into a single picture for the marine... A measure of algorithm reliability, which reflects the degree of inconsistency of the source documents, is also provided. A conceptual outline of the method, and a...
Persona, Marek; Kutarov, Vladimir V; Kats, Boris M; Persona, Andrzej; Marczewska, Barbara
2007-01-01
The paper describes a new method for predicting the octanol-water partition coefficient, based on molecular graph theory. The results obtained using the new method correlate well with experimental values. These results were compared with those obtained using ten other structure-correlation methods. The comparison shows that graph theory can be very useful in structure-correlation research.
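The abstract does not name its descriptors, but a classic graph-theoretic index of the kind such methods build on is the Wiener index, the sum of shortest-path distances over a hydrogen-suppressed molecular graph. The sketch below computes it by breadth-first search for toy alkane skeletons (the inputs are illustrative, not the paper's data).

```python
from collections import deque

def wiener_index(adj):
    """Sum of shortest-path distances over all vertex pairs of a molecular graph."""
    total = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:                        # breadth-first search from src
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2                   # each pair was counted twice

# carbon skeletons as adjacency lists (assumed toy inputs)
molecules = {
    "n-butane":  {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]},
    "isobutane": {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]},
    "n-pentane": {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]},
}
for name, adj in molecules.items():
    print(f"{name}: Wiener index = {wiener_index(adj)}")
```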
Social Influences on Fertility: A Comparative Mixed Methods Study in Eastern and Western Germany
ERIC Educational Resources Information Center
Bernardi, Laura; Keim, Sylvia; von der Lippe, Holger
2007-01-01
This article uses a mixed methods design to investigate the effects of social influence on family formation in a sample of eastern and western German young adults at an early stage of their family formation. Theoretical propositions on the importance of informal interaction for fertility and family behavior are still rarely supported by systematic…
Facilitating Behavior Change with Low-Literacy Patient Education Materials
ERIC Educational Resources Information Center
Seligman, Hilary K.; Wallace, Andrea S.; DeWalt, Darren A.; Schillinger, Dean; Arnold, Connie L.; Shilliday, Betsy Bryant; Delgadillo, Adriana; Bengal, Nikki; Davis, Terry C.
2007-01-01
Objective: To describe a process for developing low-literacy health education materials that increase knowledge and activate patients toward healthier behaviors. Methods: We developed a theoretically informed process for developing educational materials. This process included convening a multidisciplinary creative team, soliciting stakeholder…
NASA Astrophysics Data System (ADS)
Rey, Michaël; Nikitin, Andrei V.; Babikov, Yurii L.; Tyuterev, Vladimir G.
2016-09-01
Knowledge of the intensities of rovibrational transitions of various molecules and their isotopic species in wide spectral and temperature ranges is essential for the modeling of optical properties of planetary atmospheres and brown dwarfs, and for other astrophysical applications. TheoReTS ("Theoretical Reims-Tomsk Spectral data") is an Internet-accessible information system devoted to ab initio based, rotationally resolved spectra predictions for some relevant molecular species. All data were generated from potential energy and dipole moment surfaces computed via high-level electronic structure calculations, using variational methods for vibration-rotation energy levels and transitions. When available, empirical corrections to band centers were applied, all line intensities remaining purely ab initio. The current TheoReTS implementation contains information on four-to-six-atomic molecules, including phosphine, methane, ethylene, silane, methyl fluoride, and their isotopic species 13CH4, 12CH3D, 12CH2D2, 12CD4, 13C2H4, … . Predicted hot methane line lists up to T = 2000 K are included. The information system provides the associated software for spectra simulation, including absorption coefficient, absorption and emission cross-sections, transmittance and radiance. The simulations allow Lorentz, Gauss and Voigt line shapes. Rectangular, triangular, Lorentzian, Gaussian, sinc and sinc-squared apparatus functions can be used, with user-defined specifications for broadening parameters and spectral resolution. All information is organized as a relational database with a user-friendly graphical interface following Model-View-Controller architectural tools. The full-featured web application is written in PHP using the Yii framework and C++ software modules. In the case of very large high-temperature line lists, data compression is implemented for fast interactive spectra simulation of the quasi-continual absorption due to the high line density. Applications of TheoReTS may include education/training in molecular absorption/emission, radiative and non-LTE processes, spectroscopic applications, and opacity calculations for planetary and astrophysical applications. The system is freely accessible via the Internet on the two mirror sites: in Reims, France
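The kind of spectra simulation such a system provides can be caricatured in a few lines (the line positions, intensities, width and path factor below are invented, and only a Lorentz shape is shown): sum line profiles from a line list into an absorption coefficient and convert to transmittance via Beer-Lambert.

```python
import numpy as np

def lorentz(nu, nu0, gamma):
    """Area-normalized Lorentz line profile."""
    return gamma / np.pi / ((nu - nu0) ** 2 + gamma ** 2)

# toy line list: (center in cm^-1, relative intensity); values are invented
lines = [(3018.0, 1.0), (3028.5, 0.4), (3040.2, 0.7)]
nu = np.linspace(3000.0, 3060.0, 2000)

absorption = sum(s * lorentz(nu, nu0, 0.08) for nu0, s in lines)
transmittance = np.exp(-0.5 * absorption)   # Beer-Lambert with an assumed path factor
print("minimum transmittance:", round(float(transmittance.min()), 3))
```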
Reconstructing Information in Large-Scale Structure via Logarithmic Mapping
NASA Astrophysics Data System (ADS)
Szapudi, Istvan
We propose to develop a new method to extract information from large-scale structure data by combining two-point statistics and non-linear transformations; previously, this information was available only with substantially more complex higher-order statistical methods. Initially, most of the cosmological information in large-scale structure lies in two-point statistics. With non-linear evolution, some of that useful information leaks into higher-order statistics. The PI and group have shown in a series of theoretical investigations how that leakage occurs, and explained the Fisher information plateau at smaller scales. This plateau means that even as more modes are added to the measurement of the power spectrum, the total cumulative information (loosely speaking, the inverse error bar) does not increase. Recently we have shown in Neyrinck et al. (2009, 2010) that a logarithmic (and a related Gaussianization or Box-Cox) transformation of the non-linear dark matter or galaxy field reconstructs a surprisingly large fraction of this missing Fisher information of the initial conditions. This was predicted by the earlier wave-mechanical formulation of gravitational dynamics by Szapudi & Kaiser (2003). The present proposal is focused on working out the theoretical underpinning of the method to a point where it can be used in practice to analyze data. In particular, one needs to deal with the usual real-life issues of galaxy surveys, such as complex geometry, discrete sampling (Poisson or sub-Poisson noise), bias (linear or non-linear, deterministic or stochastic), redshift distortions, projection effects for 2D samples, and the effects of photometric redshift errors. We will develop methods for weak lensing and Sunyaev-Zeldovich power spectra as well, the latter specifically targeting Planck. In addition, we plan to investigate the question of residual higher-order information after the non-linear mapping, and possible applications for cosmology. Our aim will be to work out practical methods, with the ultimate goal of cosmological parameter estimation. We will quantify with standard MCMC and Fisher methods (including the DETF Figure of Merit when applicable) the efficiency of our estimators, comparing with the conventional method that uses the untransformed field. Preliminary results indicate that the increase for NASA's WFIRST in the DETF Figure of Merit would be 1.5-4.2 using a range of pessimistic to optimistic assumptions, respectively.
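The core effect is easy to reproduce on a toy lognormal field (a stand-in for the evolved density, with assumed parameters): the log mapping removes the skewness of the non-linear field and restores an essentially perfect correlation with the underlying Gaussian initial conditions.

```python
import numpy as np

rng = np.random.default_rng(7)
g = rng.standard_normal(200_000)                 # stand-in for the initial Gaussian field
sigma = 1.2
delta = np.exp(sigma * g - sigma**2 / 2) - 1     # lognormal "evolved" overdensity, mean 0

def skew(x):
    return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

log_field = np.log(1 + delta)                    # the proposed logarithmic mapping
print(f"skewness: delta = {skew(delta):.2f}, log(1+delta) = {skew(log_field):.2f}")
print(f"corr with initial field: delta = {np.corrcoef(g, delta)[0, 1]:.3f}, "
      f"log(1+delta) = {np.corrcoef(g, log_field)[0, 1]:.3f}")
```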
Methodology for vocational psychodiagnostics of senior schoolchildren using information technologies
NASA Astrophysics Data System (ADS)
Bogdanovskaya, I. M.; Kosheleva, A. N.; Kiselev, P. B.; Davydova, Yu. A.
2017-01-01
The article identifies the role and main problems of vocational psychodiagnostics in modern socio-cultural conditions. It analyzes the potentials of information technologies in vocational psychodiagnostics of senior schoolchildren. The article describes the theoretical and methodological grounds, content and diagnostic potentials of the computerized method in vocational psychodiagnostics. The computerized method includes three blocks of sub-tests to identify intellectual potential, personal qualities, professional interests and values, career orientations, as well as subtests to analyze the specific life experience of senior schoolchildren. The results of diagnostics allow developing an integrated psychodiagnostic conclusion with recommendations. The article contains options of software architecture for the given method.
NASA Astrophysics Data System (ADS)
Stöltzner, Michael
Answering to the double-faced influence of string theory on mathematical practice and rigour, the mathematical physicists Arthur Jaffe and Frank Quinn have contemplated the idea that there exists a `theoretical' mathematics (alongside `theoretical' physics) whose basic structures and results still require independent corroboration by mathematical proof. In this paper, I shall take the Jaffe-Quinn debate mainly as a problem of mathematical ontology and analyse it against the backdrop of two philosophical views that are appreciative towards informal mathematical development and conjectural results: Lakatos's methodology of proofs and refutations and John von Neumann's opportunistic reading of Hilbert's axiomatic method. The comparison of both approaches shows that mitigating Lakatos's falsificationism makes his insights about mathematical quasi-ontology more relevant to 20th century mathematics in which new structures are introduced by axiomatisation and not necessarily motivated by informal ancestors. The final section discusses the consequences of string theorists' claim to finality for the theory's mathematical make-up. I argue that ontological reductionism as advocated by particle physicists and the quest for mathematically deeper axioms do not necessarily lead to identical results.
Temporal Correlations and Neural Spike Train Entropy
NASA Astrophysics Data System (ADS)
Schultz, Simon R.; Panzeri, Stefano
2001-06-01
Sampling considerations limit the experimental conditions under which information-theoretic analyses of neurophysiological data yield reliable results. We develop a procedure for computing the full temporal entropy and information of ensembles of neural spike trains, which performs reliably for limited samples of data. This approach also yields insight into the role of correlations between spikes in temporal coding mechanisms. The method, when applied to recordings from complex cells of the monkey primary visual cortex, results in lower rms-error information estimates in comparison to a "brute force" approach.
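A minimal sketch of limited-sampling entropy estimation for spike words (a plug-in estimate with the Miller-Madow bias correction; this illustrates the problem the abstract addresses rather than reproducing the authors' estimator).

```python
import numpy as np

def word_entropy(spikes, word_len):
    """Plug-in entropy (bits/word) of binary words, with Miller-Madow correction."""
    n = spikes.size // word_len
    words = spikes[: n * word_len].reshape(n, word_len)
    codes = words @ (1 << np.arange(word_len))      # encode each word as an integer
    counts = np.bincount(codes)
    p = counts[counts > 0] / n
    h_plugin = -(p * np.log2(p)).sum()
    return h_plugin + (len(p) - 1) / (2 * n * np.log(2))   # Miller-Madow term

rng = np.random.default_rng(8)
rate = 0.2                                          # spike probability per time bin
spikes = (rng.random(100_000) < rate).astype(int)
true_h = -(rate * np.log2(rate) + (1 - rate) * np.log2(1 - rate)) * 8
print(f"estimated: {word_entropy(spikes, 8):.3f} bits/word, exact: {true_h:.3f}")
```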
Network Learning for Educational Change. Professional Learning
ERIC Educational Resources Information Center
Veugelers, Wiel, Ed.; O'Hair, Mary John, Ed.
2005-01-01
School-university networks are becoming an important method to enhance educational renewal and student achievement. Networks go beyond tensions of top-down versus bottom-up, school development and professional development of individuals, theory and practice, and formal and informal organizational structures. The theoretical base of networking…
Preparing Prospective Physical Educators in Exercise Physiology.
ERIC Educational Resources Information Center
Bulger, Sean M.; Mohr, Derek J.; Carson, Linda M.; Robert, Darren L.; Wiegand, Robert L.
2000-01-01
Addresses the need for continued assessment of course content and instructional methods employed within physical education teacher education programs to deliver theoretical and applied information from the foundational subdiscipline of exercise physiology, describing an innovative course at one university (Exercise for School-Aged Children) which…
Wilson, Kumanan; Code, Catherine; Dornan, Christopher; Ahmad, Nadya; Hébert, Paul; Graham, Ian
2004-01-01
Background: The media play an important role at the interface of science and policy by communicating scientific information to the public and policy makers. In issues of theoretical risk, in which there is scientific uncertainty, the media's role as disseminators of information is particularly important due to the potential to influence public perception of the severity of the risk. In this article we describe how the Canadian print media reported the theoretical risk of blood transmission of Creutzfeldt-Jakob disease (CJD). Methods: We searched 3 newspaper databases for articles published by 6 major Canadian daily newspapers between January 1990 and December 1999. We identified all articles relating to blood transmission of CJD. In duplicate we extracted information from the articles and entered the information into a qualitative software program. We compared the observations obtained from this content analysis with information obtained from a previous policy analysis examining the Canadian blood system's decision-making concerning the potential transfusion transmission of CJD. Results: Our search identified 245 relevant articles. We observed that newspapers in one instance accelerated a policy decision, which had important resource and health implications, by communicating information on risk to the public. We also observed that newspapers primarily relied upon expert opinion (47 articles) as opposed to published medical evidence (28 articles) when communicating risk information. Journalists we interviewed described the challenges of balancing their responsibility to raise awareness of potential health threats with not unnecessarily arousing fear amongst the public. Conclusions: Based on our findings, we recommend that journalists report information from both expert opinion sources and published studies when communicating information on risk. We also recommend that researchers work more closely with journalists to assist them in identifying and appraising relevant scientific information on risk. PMID:14706119
Setting a disordered password on a photonic memory
NASA Astrophysics Data System (ADS)
Su, Shih-Wei; Gou, Shih-Chuan; Chew, Lock Yue; Chang, Yu-Yen; Yu, Ite A.; Kalachev, Alexey; Liao, Wen-Te
2017-06-01
An all-optical method of setting a disordered password on different schemes of photonic memory is theoretically studied. While photons are regarded as ideal information carriers, it is imperative to implement such data protection on all-optical storage. However, we wish to address the intrinsic risk of data breaches in existing schemes of photonic memory. We theoretically demonstrate a protocol that uses spatially disordered laser fields to encrypt data stored on an optical memory, namely, encrypted photonic memory. To address broadband storage, we also investigate a scheme of disordered echo memory with a fidelity approaching unity. The proposed method increases the difficulty for an eavesdropper to retrieve the stored photon without the preset password, even when the randomized and stored photon state is nearly perfectly cloned. Our results pave the way to significantly reducing the exposure of memories, required for long-distance communication, to eavesdropping, and therefore restrict the optimal attack on communication protocols. The present scheme also increases the sensitivity of detecting an eavesdropper and so raises the security level of photonic information technology.
Research on image complexity evaluation method based on color information
NASA Astrophysics Data System (ADS)
Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo
2017-11-01
In order to evaluate the complexity of a color image more effectively and to find the connection between image complexity and image information, this paper presents a method for computing image complexity based on color information. The theoretical analysis first divides complexity, at the subjective level, into three classes: low complexity, medium complexity and high complexity. Image features are then extracted, and a function is established between the complexity value and the color-feature model. The experimental results show that this evaluation method can objectively reconstruct the complexity of an image from its features, and that the values obtained by the method agree well with human visual perception of complexity, so that the color image complexity has a certain reference value.
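One simple instantiation of such a color-based complexity score (the quantization depth and the level thresholds are assumptions for illustration, not the paper's calibrated model) is the Shannon entropy of the quantized RGB histogram.

```python
import numpy as np

def color_complexity(img, levels=8):
    """Shannon entropy (bits) of the quantized RGB histogram as a complexity score."""
    q = (img.astype(int) * levels) // 256                     # quantize each channel
    codes = q[..., 0] * levels**2 + q[..., 1] * levels + q[..., 2]
    counts = np.bincount(codes.ravel(), minlength=levels**3)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(9)
flat = np.full((64, 64, 3), 128, dtype=np.uint8)              # one color: low complexity
noise = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)     # iid colors: high complexity

for name, img in (("flat", flat), ("noise", noise)):
    h = color_complexity(img)
    label = "low" if h < 3 else "medium" if h < 7 else "high" # assumed thresholds
    print(f"{name}: {h:.2f} bits -> {label} complexity")
```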
Information theoretic comparisons of original and transformed data from Landsat MSS and TM
NASA Technical Reports Server (NTRS)
Malila, W. A.
1985-01-01
The dispersion and concentration of signal values in transformed data from the Landsat-4 MSS and TM instruments are analyzed using a communications-theory approach. Shannon's definition of entropy was used to quantify information, and the concept of mutual information was employed to develop a measure of the information contained in several subsets of variables. Several comparisons of information content are made on the basis of this measure, including: system design capacities; the data volume occupied by agricultural data; and the information content of the original bands versus Tasseled Cap variables. A method for analyzing noise effects in MSS and TM data is proposed.
Prasai, Binay; Wilson, A R; Wiley, B J; Ren, Y; Petkov, Valeri
2015-11-14
We scrutinize the extent to which current theoretical modeling alone can reveal real-world metallic nanoparticles (NPs) at the atomic level, demonstrate that it is insufficient, and show how it can be improved by a pragmatic approach involving straightforward experiments. In particular, 4 to 6 nm silica-supported Au(100-x)Pd(x) (x = 30, 46 and 58) NPs explored for catalytic applications are characterized structurally by total scattering experiments, including high-energy synchrotron X-ray diffraction (XRD) coupled to atomic pair distribution function (PDF) analysis. Atomic-level models for the NPs are built by molecular dynamics simulations based on the Sutton-Chen (SC) method, archetypal of current theoretical modeling. The models are matched against independent experimental data and are demonstrated to be inaccurate unless their theoretical foundation, i.e. the SC method, is supplemented with basic yet crucial information on the length and strength of metal-to-metal bonds and, when necessary, structural disorder in the actual NPs studied. An atomic-PDF-based approach for accessing such information and implementing it in theoretical modeling is put forward. For completeness, the approach is concisely demonstrated on 15 nm water-dispersed Au particles explored for bio-medical applications and 16 nm hexane-dispersed Fe48Pd52 particles explored for magnetic applications as well. It is argued that when "tuned up" against experiments relevant to metals and alloys confined to nanoscale dimensions, such as total scattering coupled to atomic PDF analysis, rather than by mere intuition and/or against data for the respective solids, atomic-level theoretical modeling can provide a sound understanding of the synthesis-structure-property relationships in real-world metallic NPs. Ultimately this can help advance nanoscience and technology a step closer to producing metallic NPs by rational design.
Günay, Ulviye; Kılınç, Gülsen
2018-06-01
Nursing education contains both theoretical and practical training processes, and clinical training is the basis of nursing education. The quality of clinical training is closely related to the quality of the clinical learning environment. This study aimed to determine how nursing students transfer theoretical knowledge into clinical practice and the difficulties they experience during this process. A qualitative research design was used. The study was conducted in 2015 with 30 nursing students, constituting three focus groups, at a university in eastern Turkey. The questions directed to the students during the focus-group interviews were: What do you think about your clinical training? How do you evaluate yourself in the process of putting your theoretical knowledge into clinical practice? What kinds of difficulties do you experience in clinical practice? The data were interpreted using the method of content analysis. Most of the students reported that the theoretical information they received was excessive, that their ability to put most of this information into practice was weak, and that they lacked the courage to touch patients for fear of implementing procedures incorrectly. Analysis of the data yielded five main themes: clinical training, guidance, communication, hospital environment, and expectations. The results of this study showed that nursing students found their clinical knowledge and skills insufficient and usually failed to transfer their theoretical knowledge into clinical practice. The study observed that nursing students experienced various issues in clinical practice. To address these issues and achieve an effective clinical training environment, collaboration should be established among nursing instructors, nurses, nursing schools and hospital managements. Additionally, the number of nursing educators should be increased, and training programs should be provided on effective clinical training methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
[Learning strategies of autonomous medical students].
Márquez U, Carolina; Fasce H, Eduardo; Ortega B, Javiera; Bustamante D, Carolina; Pérez V, Cristhian; Ibáñez G, Pilar; Ortiz M, Liliana; Espinoza P, Camila; Bastías V, Nancy
2015-12-01
Understanding how autonomous students are capable of regulating their own learning process is essential for developing self-directed teaching methods. The aim was to understand how self-directed medical students approach learning in the medical schools at the University of Concepción, Chile. A qualitative, descriptive study was performed according to the Grounded Theory guidelines of Strauss & Corbin. Twenty medical students were selected by the maximum variation sampling method. Data were collected through semi-structured thematic interviews; students were interviewed by researchers after an informed consent procedure. Data were analyzed by the open coding method using Atlas-ti 7.5.2 software. Self-directed learners were characterized by being good planners and managing their time well. Students performed a diligent selection of contents to study based on reliable literature sources, theoretical relevance and type of evaluation. They also emphasized the discussion of clinical cases, where theoretical contents can be applied. This modality allows them to gain a global view of theoretical contents, to verbalize knowledge and to obtain learning feedback. The learning process of autonomous students is intentional and planned.
Advantages of Structure-Based Drug Design Approaches in Neurological Disorders
Aarthy, Murali; Panwar, Umesh; Selvaraj, Chandrabose; Singh, Sanjeev Kumar
2017-01-01
Objective: The purpose of this review is to portray the theoretical concepts on neurological disorders emerging from research data. Background: Abnormal changes in the chemical response of nerve impulses cause neurological disorders. Research evidence accumulated over a long history suggests that well-characterized biological drug targets, together with drugs that act effectively on them, could be valuable in promoting the future development of improved treatments for nervous disorders. Methods: In this review, we summarize the most iterative theoretical concepts of structure-based drug design approaches in various neurological disorders, in order to deepen understanding of the reported information for future drug design and development. Results: On the basis of the reported information, we analyze the theoretical drug design process for understanding the mechanisms and pathology of neurological diseases, covering the development of potentially effective inhibitors against biological drug targets. The review also suggests how management and implementation of current treatments can improve health system behaviors. Conclusion: From the survey of reported information, we conclude that development strategies for the diagnosis and treatment of neurological diseases lead to supportive progress in drug discovery. PMID:28042767
Jenkinson, Garrett; Abante, Jordi; Feinberg, Andrew P; Goutsias, John
2018-03-07
DNA methylation is a stable form of epigenetic memory used by cells to control gene expression. Whole genome bisulfite sequencing (WGBS) has emerged as a gold-standard experimental technique for studying DNA methylation by producing high resolution genome-wide methylation profiles. Statistical modeling and analysis are employed to computationally extract and quantify information from these profiles in an effort to identify regions of the genome that demonstrate crucial or aberrant epigenetic behavior. However, the performance of most currently available methods for methylation analysis is hampered by their inability to directly account for statistical dependencies between neighboring methylation sites, thus ignoring significant information available in WGBS reads. We present a powerful information-theoretic approach for genome-wide modeling and analysis of WGBS data based on the 1D Ising model of statistical physics. This approach takes into account correlations in methylation by utilizing a joint probability model that encapsulates all information available in WGBS methylation reads and produces accurate results even when applied to single WGBS samples with low coverage. Using the Shannon entropy, our approach provides a rigorous quantification of methylation stochasticity in individual WGBS samples genome-wide. Furthermore, it utilizes the Jensen-Shannon distance to evaluate differences in methylation distributions between a test and a reference sample. Differential performance assessment using simulated and real human lung normal/cancer data demonstrates a clear superiority of our approach over DSS, a recently proposed method for WGBS data analysis. Critically, these results demonstrate that marginal methods become statistically invalid when correlations are present in the data. This contribution demonstrates clear benefits and the necessity of modeling joint probability distributions of methylation using the 1D Ising model of statistical physics and of quantifying methylation stochasticity using concepts from information theory. By employing this methodology, substantial improvement of DNA methylation analysis can be achieved by effectively taking into account the massive amount of statistical information available in WGBS data, which is largely ignored by existing methods.
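The Jensen-Shannon distance used to compare a test and a reference sample has a compact closed form. A minimal sketch, assuming generic discrete methylation-level distributions rather than the authors' Ising-based estimates:

```python
import numpy as np

def jensen_shannon_distance(p, q):
    """JS distance (square root of the base-2 JS divergence) between two
    discrete methylation-level distributions; bounded in [0, 1]."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p /= p.sum(); q /= q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

# Toy distributions over the number of methylated CpGs in a 4-site region
test = [0.10, 0.15, 0.25, 0.30, 0.20]
ref  = [0.40, 0.30, 0.15, 0.10, 0.05]
print(jensen_shannon_distance(test, ref))
```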
Three optical methods for remotely measuring aerosol size distributions.
NASA Technical Reports Server (NTRS)
Reagan, J. A.; Herman, B. M.
1971-01-01
Three optical probing methods for remotely measuring atmospheric aerosol size distributions are discussed and contrasted. The particular detection methods which are considered make use of monostatic lidar (laser radar), bistatic lidar, and solar radiometer sensing techniques. The theory of each of these measurement techniques is discussed briefly, and the necessary constraints which must be applied to obtain aerosol size distribution information from such measurements are pointed out. Theoretical and/or experimental results are also presented which demonstrate the utility of the three proposed probing methods.
Extension of the hole-drilling method to birefringent composites
NASA Technical Reports Server (NTRS)
Prabhakaran, R.
1982-01-01
A complete stress analysis and reliable failure criteria are essential for important structural applications of composites in order to fully utilize their unique properties. The inhomogeneity, anisotropy and inelasticity of many composites make the use of experimental methods indispensable. Among the experimental techniques, transmission photoelasticity has been extended to birefringent composites in recent years. The extension is not straightforward, in view of the complex nature of the photoelastic response of such model materials. This paper very briefly reviews the important developments in the subject and then describes the theoretical basis for a new method of determining the individual values of the principal stresses in composite models. The method consists of drilling very small holes at points where the state of stress is to be determined. Experiments are then described which verify the theoretical predictions. The limitations of the method are pointed out, and it is concluded that valuable information concerning the state of stress in a composite model can be obtained through the suggested method.
Dunn-Walters, Deborah K.; Belelovsky, Alex; Edelman, Hanna; Banerjee, Monica; Mehr, Ramit
2002-01-01
We have developed a rigorous graph-theoretical algorithm for quantifying the shape properties of mutational lineage trees. We show that information about the dynamics of hypermutation and antigen-driven clonal selection during the humoral immune response is contained in the shape of mutational lineage trees deduced from the responding clones. Age and tissue related differences in the selection process can be studied using this method. Thus, tree shape analysis can be used as a means of elucidating humoral immune response dynamics in various situations. PMID:15144020
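As a rough illustration of the kind of shape properties such a graph-theoretical analysis can quantify, here is a hypothetical sketch using networkx; the specific statistics shown (leaf count, mean outgoing degree, mean leaf depth) are illustrative stand-ins, not the authors' exact measures.

```python
import networkx as nx

def tree_shape_summary(edges):
    """A few simple shape properties of a mutational lineage tree given as a
    list of parent->child edges, with node 0 assumed to be the root."""
    t = nx.DiGraph(edges)
    leaves = [v for v in t if t.out_degree(v) == 0]
    internal = [v for v in t if t.out_degree(v) > 0]
    depths = nx.single_source_shortest_path_length(t, 0)
    return {
        "nodes": t.number_of_nodes(),
        "leaves": len(leaves),
        "mean_out_degree": sum(t.out_degree(v) for v in internal) / len(internal),
        "mean_leaf_depth": sum(depths[v] for v in leaves) / len(leaves),
    }

# A small lineage: root 0 with two branches of different depths
print(tree_shape_summary([(0, 1), (0, 2), (2, 3), (2, 4), (4, 5)]))
```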
Baillie, Colin P.T.; Galaviz, Karla; Jarvis, Jocelyn W.; Latimer-Cheung, Amy E.
2015-01-01
Background: Physical activity can aid people with multiple sclerosis (MS) in managing symptoms and maintaining functional abilities. The Internet is a preferred source of physical activity information for people with MS and, therefore, a method for the dissemination of behavior change techniques. The purpose of this study was to examine the coverage and quality of physical activity behavior change techniques delivered on the Internet for adults with MS using Abraham and Michie's taxonomy of behavior change techniques. Methods: Using the taxonomy, 20 websites were coded for quality (ie, accuracy of information) and coverage (ie, completeness of information) of theoretical behavior change techniques. Results: The websites covered a mean of 8.05 (SD 3.86, range 3–16) techniques out of a possible 20. Only one of the techniques, provide information on behavior–health link and consequences, was delivered on all websites. The websites demonstrated low mean coverage and quality across all behavior change techniques, with means of 0.64 (SD 0.67) and 0.62 (SD 0.37) on a scale of 0 to 2, respectively. However, coverage and quality improved when websites were examined solely for the techniques that they covered, as opposed to all 20 techniques. Conclusions: This study, which examined quality and coverage of physical activity behavior change techniques described online for people with MS, illustrated that the dissemination of these techniques requires improvement. PMID:25892979
Enhancements and Algorithms for Avionic Information Processing System Design Methodology.
1982-06-16
… programming algorithm is enhanced by incorporating task precedence constraints and hardware failures. Stochastic network methods are used to analyze … allocations in the presence of random fluctuations. Graph-theoretic methods are used to analyze hardware designs, and new designs are constructed with … There, spatial dynamic programming (SDP) was used to solve a static, deterministic software allocation problem. Under the current contract the SDP …
Navarrete, Gorka; Correia, Rut; Sirota, Miroslav; Juanchich, Marie; Huepe, David
2015-01-01
Most of the research on Bayesian reasoning aims to answer theoretical questions about the extent to which people are able to update their beliefs according to Bayes' Theorem, about the evolutionary nature of Bayesian inference, or about the role of cognitive abilities in Bayesian inference. Few studies aim to answer practical, mainly health-related questions, such as, “What does it mean to have a positive test in a context of cancer screening?” or “What is the best way to communicate a medical test result so a patient will understand it?”. This type of research aims to translate empirical findings into effective ways of providing risk information. In addition, the applied research often adopts the paradigms and methods of the theoretically-motivated research. But sometimes it works the other way around, and the theoretical research borrows the importance of the practical question in the medical context. The study of Bayesian reasoning is relevant to risk communication in that, to be as useful as possible, applied research should employ specifically tailored methods and contexts specific to the recipients of the risk information. In this paper, we concentrate on the communication of the result of medical tests and outline the epidemiological and test parameters that affect the predictive power of a test—whether it is correct or not. Building on this, we draw up recommendations for better practice to convey the results of medical tests that could inform health policy makers (What are the drawbacks of mass screenings?), be used by health practitioners and, in turn, help patients to make better and more informed decisions. PMID:26441711
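The predictive power of a test discussed above follows directly from Bayes' theorem. A minimal sketch of the positive predictive value computation, with illustrative screening numbers:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(disease | positive test) via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Mass-screening example: a fairly accurate test applied to a rare condition
print(positive_predictive_value(0.90, 0.95, 0.01))  # ~0.15: most positives are false
```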
Unifying cost and information in information-theoretic competitive learning.
Kamimura, Ryotaro
2005-01-01
In this paper, we introduce costs into the framework of information maximization and try to maximize the ratio of information to its associated cost. We have shown that competitive learning is realized by maximizing mutual information between input patterns and competitive units. One shortcoming of the method is that maximizing information does not necessarily produce representations faithful to the input patterns: information maximization primarily focuses on those parts of input patterns that are used to distinguish between patterns. Therefore, we introduce a cost, which represents the average distance between input patterns and connection weights. By minimizing the cost, the final connection weights reflect the input patterns well. We applied the method to a political data analysis, a voting attitude problem and a Wisconsin cancer problem. Experimental results confirmed that, when the cost was introduced, representations faithful to the input patterns were obtained. In addition, improved generalization performance was obtained within a relatively short learning time.
Crystal structure prediction supported by incomplete experimental data
NASA Astrophysics Data System (ADS)
Tsujimoto, Naoto; Adachi, Daiki; Akashi, Ryosuke; Todo, Synge; Tsuneyuki, Shinji
2018-05-01
We propose an efficient theoretical scheme for structure prediction based on the idea of combining methods that optimize against theoretical calculations and experimental data simultaneously. In this scheme, we formulate a cost function as a weighted sum of interatomic potential energies and a penalty function defined with partial experimental data that are totally insufficient for conventional structure analysis. In particular, we define the cost function using a "crystallinity" formulated with only the peak positions within a small range of the X-ray diffraction pattern. We apply this method to well-known polymorphs of SiO2 and C with up to 108 atoms in the simulation cell and show that it reproduces the correct structures efficiently with very limited diffraction-peak information. This scheme opens a new avenue for determining and predicting structures that are difficult to determine by conventional methods.
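A schematic sketch of such a combined cost function, assuming hypothetical energy_fn and peak_fn hooks in place of a real interatomic potential and structure-factor calculation; the peak-distance penalty is a crude stand-in for the paper's "crystallinity":

```python
import numpy as np

def combined_cost(positions, energy_fn, peak_fn, observed_peaks, weight):
    """Cost = interatomic potential energy + weight * peak-position penalty.
    The penalty sums, over each observed peak, its distance to the nearest
    simulated peak, so structures reproducing the few measured peaks are
    favored."""
    simulated = np.asarray(peak_fn(positions), float)
    penalty = sum(np.min(np.abs(simulated - p)) for p in observed_peaks)
    return energy_fn(positions) + weight * penalty

# Toy usage with placeholder hooks standing in for a real potential and a
# structure-factor calculation
toy_energy = lambda pos: float(np.sum(pos ** 2))
toy_peaks = lambda pos: np.array([21.0, 35.2, 44.8])   # simulated 2-theta values
print(combined_cost(np.zeros((4, 3)), toy_energy, toy_peaks,
                    observed_peaks=[21.3, 44.5], weight=10.0))
```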
Informing Physics: Jacob Bekenstein and the Informational Turn in Theoretical Physics
NASA Astrophysics Data System (ADS)
Belfer, Israel
2014-03-01
In his PhD dissertation in the early 1970s, the Mexican-Israeli theoretical physicist Jacob Bekenstein developed the thermodynamics of black holes using a generalized version of the second law of thermodynamics. This work made it possible for physicists to describe and analyze black holes using information-theoretical concepts. It also helped to transform information theory into a fundamental and foundational concept in theoretical physics. The story of Bekenstein's work—which was initially opposed by many scientists, including Stephen Hawking—highlights the transformation within physics towards an information-oriented scientific mode of theorizing. This "informational turn" amounted to a mild-mannered revolution within physics, revolutionary without being rebellious.
Cognition in Orienteering: Theoretical Perspectives and Methods of Study.
ERIC Educational Resources Information Center
Ottosson, Torgny
1996-01-01
Almost without exception, published studies on cognition in orienteering have adopted an information processing perspective involving dualism between objective and subjective worlds. An alternative, experiential framework focuses on the orienteer's conception of (or way of experiencing) the task to be accomplished, and on "affordances" (lines of…
Layover and shadow detection based on distributed spaceborne single-baseline InSAR
NASA Astrophysics Data System (ADS)
Huanxin, Zou; Bin, Cai; Changzhou, Fan; Yun, Ren
2014-03-01
Distributed spaceborne single-baseline InSAR is an effective technique for obtaining high-quality digital elevation models. Layover and shadow are ubiquitous phenomena in SAR images because of the geometric relations of SAR imaging. In single-baseline InSAR signal processing, the phase singularity of layover and shadow regions makes the phase difficult to filter and unwrap. This paper analyzes the geometric and signal models of layover and shadow fields. Based on the interferometric signal autocorrelation matrix, the paper proposes a signal-number estimation method based on information theoretic criteria to distinguish layover and shadow from normal InSAR fields. The effectiveness and practicability of the proposed method are validated by simulation experiments and theoretical analysis.
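Signal-number estimation from the eigenvalues of an autocorrelation matrix is commonly done with information theoretic criteria such as MDL (Wax & Kailath). A minimal sketch under that assumption; the eigenvalues below are illustrative:

```python
import numpy as np

def mdl_signal_count(eigvals, n_snapshots):
    """Estimate the number of signals from the eigenvalues of the sample
    autocorrelation matrix using the MDL criterion."""
    lam = np.sort(np.asarray(eigvals, float))[::-1]
    p = lam.size
    scores = []
    for k in range(p):
        tail = lam[k:]
        geo = np.exp(np.mean(np.log(tail)))        # geometric mean of noise eigenvalues
        arith = np.mean(tail)                       # arithmetic mean
        log_lik = -n_snapshots * (p - k) * np.log(geo / arith)
        penalty = 0.5 * k * (2 * p - k) * np.log(n_snapshots)
        scores.append(log_lik + penalty)
    return int(np.argmin(scores))

# Two strong eigenvalues over a flat noise floor -> two signals (e.g. layover)
print(mdl_signal_count([9.0, 4.0, 0.1, 0.1, 0.1, 0.1], n_snapshots=200))
```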
Kivijärvi, Ville; Nyman, Markus; Shevchenko, Andriy; Kaivola, Matti
2018-04-02
Planar optical waveguides made of designable spatially dispersive nanomaterials can offer new capabilities for nanophotonic components. As an example, a thin slab waveguide can be designed to compensate for optical diffraction and provide divergence-free propagation for strongly focused optical beams. Optical signals in such waveguides can be transferred in narrow channels formed by the light itself. We introduce here a theoretical method for characterization and design of nanostructured waveguides taking into account their inherent spatial dispersion and anisotropy. Using the method, we design a diffraction-compensating slab waveguide that contains only a single layer of silver nanorods. The waveguide shows low propagation loss and broadband diffraction compensation, potentially allowing transfer of optical information at a THz rate.
Pant, Sanjay; Lombardi, Damiano
2015-10-01
A new approach for assessing parameter identifiability of dynamical systems in a Bayesian setting is presented. The concept of Shannon entropy is employed to measure the inherent uncertainty in the parameters. The expected reduction in this uncertainty is seen as the amount of information one expects to gain about the parameters due to the availability of noisy measurements of the dynamical system. Such expected information gain is interpreted in terms of the variance of a hypothetical measurement device that can measure the parameters directly, and is related to practical identifiability of the parameters. If the individual parameters are unidentifiable, correlation between parameter combinations is assessed through conditional mutual information to determine which sets of parameters can be identified together. The information theoretic quantities of entropy and information are evaluated numerically through a combination of Monte Carlo and k-nearest neighbour methods in a non-parametric fashion. Unlike many methods to evaluate identifiability proposed in the literature, the proposed approach takes the measurement-noise into account and is not restricted to any particular noise-structure. Whilst computationally intensive for large dynamical systems, it is easily parallelisable and is non-intrusive as it does not necessitate re-writing of the numerical solvers of the dynamical system. The application of such an approach is presented for a variety of dynamical systems--ranging from systems governed by ordinary differential equations to partial differential equations--and, where possible, validated against results previously published in the literature. Copyright © 2015 Elsevier Inc. All rights reserved.
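The k-nearest-neighbour entropy estimation mentioned above is commonly implemented with the Kozachenko-Leonenko estimator. A minimal sketch under that assumption:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gamma

def knn_entropy(samples, k=3):
    """Kozachenko-Leonenko k-NN estimate of differential entropy (nats) from
    parameter samples of shape (N, d)."""
    x = np.atleast_2d(samples)
    n, d = x.shape
    tree = cKDTree(x)
    # query returns the point itself first, so ask for k+1 neighbours
    eps = tree.query(x, k=k + 1)[0][:, -1]          # distance to k-th neighbour
    log_vd = (d / 2) * np.log(np.pi) - np.log(gamma(d / 2 + 1))  # unit-ball volume
    return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(eps))

# Sanity check: entropy of N(0,1) is 0.5*log(2*pi*e) ~ 1.419 nats
rng = np.random.default_rng(1)
print(knn_entropy(rng.normal(size=(5000, 1))))
```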
Efficient proof of ownership for cloud storage systems
NASA Astrophysics Data System (ADS)
Zhong, Weiwei; Liu, Zhusong
2017-08-01
Cloud storage systems use deduplication technology to save disk space and bandwidth, but the use of this technology has attracted targeted security attacks: an attacker can deceive the server into granting ownership of a file merely by obtaining the hash value of the original file. In order to solve the above security problems and address the different security requirements of files in cloud storage systems, an efficient and information-theoretically secure proof-of-ownership scheme supporting file rating is proposed. File rating is implemented with the K-means algorithm, and random-seed techniques together with a pre-calculation method are used to achieve a safe and efficient proof of ownership. The scheme is information-theoretically secure and achieves better performance in the most sensitive areas of client-side I/O and computation.
2010-01-01
Background It is recognised as good practice to use qualitative methods to elicit users' views of internet-delivered health-care interventions during their development. This paper seeks to illustrate the advantages of combining usability testing with 'theoretical modelling', i.e. analyses that relate the findings of qualitative studies during intervention development to social science theory, in order to gain deeper insights into the reasons and context for how people respond to the intervention. This paper illustrates how usability testing may be enriched by theoretical modelling by means of two qualitative studies of users' views of the delivery of information in an internet-delivered intervention to help users decide whether they needed to seek medical care for their cold or flu symptoms. Methods In Study 1, 21 participants recruited from a city in southern England were asked to 'think aloud' while viewing draft web-pages presented in paper format. In Study 2, views of our prototype website were elicited, again using think aloud methods, in a sample of 26 participants purposively sampled for diversity in education levels. Both data-sets were analysed by thematic analysis. Results Study 1 revealed that although the information provided by the draft web-pages had many of the intended empowering benefits, users often felt overwhelmed by the quantity of information. Relating these findings to theory and research on factors influencing preferences for information-seeking we hypothesised that to meet the needs of different users (especially those with lower literacy levels) our website should be designed to provide only essential personalised advice, but with options to access further information. Study 2 showed that our website design did prove accessible to users with different literacy levels. However, some users seemed to want still greater control over how information was accessed. Conclusions Educational level need not be an insuperable barrier to appreciating web-based access to detailed health-related information, provided that users feel they can quickly gain access to the specific information they seek. PMID:20849599
Fission yield covariances for JEFF: A Bayesian Monte Carlo method
NASA Astrophysics Data System (ADS)
Leray, Olivier; Rochman, Dimitri; Fleming, Michael; Sublet, Jean-Christophe; Koning, Arjan; Vasiliev, Alexander; Ferroukhi, Hakim
2017-09-01
The JEFF library does not contain fission yield covariances, but simply best estimates and uncertainties. This situation is not unique as all libraries are facing this deficiency, firstly due to the lack of a defined format. An alternative approach is to provide a set of random fission yields, themselves reflecting covariance information. In this work, these random files are obtained combining the information from the JEFF library (fission yields and uncertainties) and the theoretical knowledge from the GEF code. Examples of this method are presented for the main actinides together with their impacts on simple burn-up and decay heat calculations.
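A minimal sketch of the random-file idea, assuming independent truncated Gaussian uncertainties as a crude stand-in for the GEF-informed correlations used in the paper; renormalizing each draw to the evaluated total already induces correlations among the yields:

```python
import numpy as np

def random_yield_files(best, sigma, n_files, rng=None):
    """Draw random fission-yield sets from evaluated best estimates and
    uncertainties, renormalizing each draw to preserve the evaluated total."""
    if rng is None:
        rng = np.random.default_rng(42)
    best = np.asarray(best, float)
    sigma = np.asarray(sigma, float)
    draws = rng.normal(best, sigma, size=(n_files, best.size)).clip(min=0.0)
    draws *= best.sum() / draws.sum(axis=1, keepdims=True)
    return draws

yields = random_yield_files([0.058, 0.061, 0.003], [0.002, 0.002, 0.0005], n_files=500)
print(yields.mean(axis=0))                      # close to the best estimates
print(np.corrcoef(yields, rowvar=False)[0, 1])  # renormalization induces correlations
```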
Kotini, A; Anninos, P; Anastasiadis, A N; Tamiolakis, D
2005-09-07
The aim of this study was to compare a theoretical neural net model with MEG data from epileptic patients and normal individuals. Our experimental study population included 10 patients with epilepsy and 10 healthy subjects. The recordings were obtained with a one-channel SQUID biomagnetometer in a magnetically shielded room. Using the method of χ2-fitting, it was found that the MEG amplitudes in epileptic patients and normal subjects followed Poisson and Gauss distributions, respectively. The Poisson connectivity derived from the theoretical neural model represents the state of epilepsy, whereas the Gauss connectivity represents normal behavior. The MEG data obtained from epileptic areas had higher amplitudes than the MEG from normal regions and were comparable with the theoretical magnetic fields from Poisson and Gauss distributions. Furthermore, the magnetic field derived from the theoretical model had amplitudes of the same order as the recorded MEG from the 20 participants. The approximation of the theoretical neural net model to real MEG data provides information about the structure of brain function in epileptic and normal states, encouraging further studies to be conducted.
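A minimal sketch of the kind of χ2 comparison described, using a synthetic amplitude histogram; the data and fitted distributions are illustrative, not the patients' MEG recordings:

```python
import numpy as np
from scipy import stats

def pearson_chi2(observed, expected):
    """Pearson chi-squared statistic between observed and model-expected
    histogram counts."""
    mask = expected > 0
    return float(np.sum((observed[mask] - expected[mask]) ** 2 / expected[mask]))

# Synthetic "epileptic-like" amplitude histogram over integer bins 0..14
rng = np.random.default_rng(7)
amps = rng.poisson(4.0, size=1000)
counts = np.histogram(amps, bins=np.arange(16))[0]
k = np.arange(15)

n = counts.sum()
exp_poisson = n * stats.poisson.pmf(k, mu=amps.mean())
exp_gauss = n * stats.norm.pdf(k, loc=amps.mean(), scale=amps.std())
# The Poisson model typically scores the smaller chi-squared on such data
print(pearson_chi2(counts, exp_poisson), pearson_chi2(counts, exp_gauss))
```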
2016-12-22
… assumptions of behavior. This research proposes an information theoretic methodology to discover such complex network structures and dynamics while overcoming … the difficulties historically associated with their study. Indeed, this was the first application of an information theoretic methodology as a tool …
ERIC Educational Resources Information Center
Zhumasheva, Anara; Zhumabaeva, Zaida; Sakenov, Janat; Vedilina, Yelena; Zhaxylykova, Nuriya; Sekenova, Balkumis
2016-01-01
The current study focuses on the research topic of creating a theoretical model of development of information competence among students enrolled in elective courses. In order to examine specific features of the theoretical model of development of information competence among students enrolled in elective courses, we performed an analysis of…
NASA Astrophysics Data System (ADS)
Mishra, A.; Vibhute, V.; Ninama, S.; Parsai, N.; Jha, S. N.; Sharma, P.
2016-10-01
X-ray absorption fine structure (XAFS) at the K-edge of copper has been studied in some copper(II) complexes with substituted anilines (2-Cl, 4-Br, 2-NO2, 4-NO2 and pure aniline) with o-PDA (orthophenylenediamine) as ligand. The X-ray absorption measurements were performed at the recently developed BL-8 dispersive EXAFS beamline at the 2.5 GeV Indus-2 synchrotron source at RRCAT, Indore, India. The data obtained were processed using the EXAFS data analysis program Athena. The graphical method gives useful information about the bond length and the environment of the absorbing atom. The theoretical bond lengths of the complexes were calculated by interactive fitting of EXAFS using fast Fourier inverse transformation (IFEFFIT), also called the Fourier transform method. The Lytle, Sayers and Stern method and Levy's method were used to determine the bond lengths of the studied complexes experimentally. The results of both methods have been compared with the theoretical IFEFFIT results.
NASA Astrophysics Data System (ADS)
Bateev, A. B.; Filippov, V. P.
2017-01-01
The article shows that the computer program Univem MS for Mössbauer spectrum fitting can, in principle, be used as demonstration material in teaching disciplines such as atomic and nuclear physics and numerical methods. The program works with nuclear-physical parameters such as the isomer (or chemical) shift of nuclear energy levels, the interaction of the nuclear quadrupole moment with the electric field, and the interaction of the magnetic moment with the surrounding magnetic field. The basic processing algorithm in such programs is the least-squares method. The deviation of the experimental points of a spectrum from the theoretical dependence is determined on concrete examples; in numerical methods this value is characterized as the mean square deviation. The shapes of the theoretical lines in the program are described by Gaussian and Lorentzian distributions. The visualization of material studied in atomic and nuclear physics can be improved by similar programs for Mössbauer spectroscopy, X-ray fluorescence analysis or X-ray diffraction analysis.
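A minimal sketch of the least-squares core of such fitting programs: a single Lorentzian absorption line fitted to a synthetic velocity spectrum with scipy's curve_fit (all parameter values illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(v, depth, center, width, baseline):
    """Absorption-style Lorentzian line as used for Moessbauer spectra."""
    return baseline - depth * (width / 2) ** 2 / ((v - center) ** 2 + (width / 2) ** 2)

# Synthetic single-line spectrum: velocity scan with counting noise
rng = np.random.default_rng(3)
v = np.linspace(-4, 4, 256)                      # source velocity, mm/s
true = lorentzian(v, depth=900, center=0.35, width=0.6, baseline=10000)
counts = rng.poisson(true).astype(float)

popt, _ = curve_fit(lorentzian, v, counts, p0=[500, 0.0, 1.0, counts.max()])
print(popt)                                      # depth, isomer shift, linewidth, baseline
residual = counts - lorentzian(v, *popt)
print(np.sqrt(np.mean(residual ** 2)))           # mean-square deviation of the fit
```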
Theoretical Implications of Extralist Probes for Directed Forgetting
ERIC Educational Resources Information Center
Sahakyan, Lili; Goodmon, Leilani B.
2010-01-01
In 5 experiments, the authors examined the influence of associative information in list-method directed forgetting, using the extralist cuing procedure (Nelson & McEvoy, 2005). Targets were studied in the absence of cues, but during retrieval, related cues were used to test their memory. Experiment 1 manipulated the degree of resonant…
Leader Positivity and Follower Creativity: An Experimental Analysis
ERIC Educational Resources Information Center
Avey, James B.; Richmond, F. Lynn; Nixon, Don R.
2012-01-01
Using an experimental research design, 191 working adults were randomly assigned to two experimental conditions in order to test a theoretical model linking leader and follower positive psychological capital (PsyCap). Multiple methods were used to gather information from the participants. We found when leader PsyCap was manipulated experimentally,…
Community College Management by Objectives: Process, Progress, Problems.
ERIC Educational Resources Information Center
Deegan, William L.; And Others
The objectives of this book are: (1) to present a theoretical framework for management by objectives in community colleges, (2) to present information about alternative methods for conducting needs assessment and implementing management by objectives, (3) to present a framework for integrating academic and fiscal planning through management by…
Averaging Models: Parameters Estimation with the R-Average Procedure
ERIC Educational Resources Information Center
Vidotto, G.; Massidda, D.; Noventa, S.
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…
One-dimensional barcode reading: an information theoretic approach
NASA Astrophysics Data System (ADS)
Houni, Karim; Sawaya, Wadih; Delignon, Yves
2008-03-01
In the convergence context of identification technology and information-data transmission, the barcode found its place as the simplest and the most pervasive solution for new uses, especially within mobile commerce, bringing youth to this long-lived technology. From a communication theory point of view, a barcode is a singular coding based on a graphical representation of the information to be transmitted. We present an information theoretic approach for 1D image-based barcode reading analysis. With a barcode facing the camera, distortions and acquisition are modeled as a communication channel. The performance of the system is evaluated by means of the average mutual information quantity. On the basis of this theoretical criterion for a reliable transmission, we introduce two new measures: the theoretical depth of field and the theoretical resolution. Simulations illustrate the gain of this approach.
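A minimal sketch of the average mutual information criterion for a simplified version of this channel model, assuming equiprobable binary bar/space levels observed through additive white Gaussian noise:

```python
import numpy as np

def awgn_binary_mutual_info(snr_db, n_mc=200_000, rng=None):
    """Monte Carlo estimate of the average mutual information I(X;Y) in bits
    for equiprobable binary levels over an AWGN channel; the Gaussian
    normalization constants cancel in the likelihood ratio."""
    if rng is None:
        rng = np.random.default_rng(0)
    sigma = 10.0 ** (-snr_db / 20.0)
    x = rng.choice([-1.0, 1.0], size=n_mc)
    y = x + sigma * rng.normal(size=n_mc)
    p_y_given_x = np.exp(-((y - x) ** 2) / (2 * sigma ** 2))
    p_y = 0.5 * (np.exp(-((y - 1) ** 2) / (2 * sigma ** 2))
                 + np.exp(-((y + 1) ** 2) / (2 * sigma ** 2)))
    return float(np.mean(np.log2(p_y_given_x / p_y)))

for snr in (0, 6, 12):
    print(snr, round(awgn_binary_mutual_info(snr), 3))  # tends to 1 bit as SNR grows
```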
Pump-probe nonlinear phase dispersion spectroscopy.
Robles, Francisco E; Samineni, Prathyush; Wilson, Jesse W; Warren, Warren S
2013-04-22
Pump-probe microscopy is an imaging technique that delivers molecular contrast of pigmented samples. Here, we introduce pump-probe nonlinear phase dispersion spectroscopy (PP-NLDS), a method that leverages pump-probe microscopy and spectral-domain interferometry to ascertain information from dispersive and resonant nonlinear effects. PP-NLDS extends the information content to four dimensions (phase, amplitude, wavelength, and pump-probe time-delay) that yield unique insight into a wider range of nonlinear interactions compared to conventional methods. This results in the ability to provide highly specific molecular contrast of pigmented and non-pigmented samples. A theoretical framework is described, and experimental results and simulations illustrate the potential of this method. Implications for biomedical imaging are discussed.
NASA Astrophysics Data System (ADS)
Acton, Scott T.; Gilliam, Andrew D.; Li, Bing; Rossi, Adam
2008-02-01
Improvised explosive devices (IEDs) are common and lethal instruments of terrorism, and linking a terrorist entity to a specific device remains a difficult task. In the effort to identify persons associated with a given IED, we have implemented a specialized content based image retrieval system to search and classify IED imagery. The system makes two contributions to the art. First, we introduce a shape-based matching technique exploiting shape, color, and texture (wavelet) information, based on novel vector field convolution active contours and a novel active contour initialization method which treats coarse segmentation as an inverse problem. Second, we introduce a unique graph theoretic approach to match annotated printed circuit board images for which no schematic or connectivity information is available. The shape-based image retrieval method, in conjunction with the graph theoretic tool, provides an efficacious system for matching IED images. For circuit imagery, the basic retrieval mechanism has a precision of 82.1% and the graph based method has a precision of 98.1%. As of the fall of 2007, the working system has processed over 400,000 case images.
Theoretically Founded Optimization of Auctioneer's Revenues in Expanding Auctions
NASA Astrophysics Data System (ADS)
Rabin, Jonathan; Shehory, Onn
The expanding auction is a multi-unit auction which provides the auctioneer with control over the outcome of the auction by means of dynamically adding items for sale. Previous research on the expanding auction has provided a numeric method to calculate a strategy that optimizes the auctioneer's revenue. In this paper, we analyze various theoretical properties of the expanding auction, and compare it to VCG, a multi-unit auction protocol known in the art. We examine the effects of errors in the auctioneer's estimation of the buyers' maximal bidding values and prove a theoretical bound on the ratio between the revenue yielded by the Informed Decision Strategy (IDS) and the post-optimal strategy. We also analyze the relationship between the auction step and the optimal revenue and introduce a method of computing this optimizing step. We further compare the revenues yielded by the use of IDS with an expanding auction to those of the VCG mechanism and determine the conditions under which the former outperforms the latter. Our work provides new insight into the properties of the expanding auction. It further provides theoretically founded means for optimizing the revenue of auctioneer.
NASA Astrophysics Data System (ADS)
Dondurur, Derman
2005-11-01
The Normalized Full Gradient (NFG) method was proposed in the mid 1960s and was generally used for the downward continuation of the potential field data. The method eliminates the side oscillations which appeared on the continuation curves when passing through anomalous body depth. In this study, the NFG method was applied to Slingram electromagnetic anomalies to obtain the depth of the anomalous body. Some experiments were performed on the theoretical Slingram model anomalies in a free space environment using a perfectly conductive thin tabular conductor with an infinite depth extent. The theoretical Slingram responses were obtained for different depths, dip angles and coil separations, and it was observed from NFG fields of the theoretical anomalies that the NFG sections yield the depth information of top of the conductor at low harmonic numbers. The NFG sections consisted of two main local maxima located at both sides of the central negative Slingram anomalies. It is concluded that these two maxima also locate the maximum anomaly gradient points, which indicates the depth of the anomaly target directly. For both theoretical and field data, the depth of the maximum value on the NFG sections corresponds to the depth of the upper edge of the anomalous conductor. The NFG method was applied to the in-phase component and correct depth estimates were obtained even for the horizontal tabular conductor. Depth values could be estimated with a relatively small error percentage when the conductive model was near-vertical and/or the conductor depth was larger.
ERIC Educational Resources Information Center
Dunlop, David Livingston
The purpose of this study was to use an information theoretic memory model to quantitatively investigate classification sorting and recall behaviors of various groups of students. The model provided theorems for the determination of information theoretic measures from which inferences concerning mental processing were made. The basic procedure…
Research in Computational Astrobiology
NASA Technical Reports Server (NTRS)
Chaban, Galina; Colombano, Silvano; Scargle, Jeff; New, Michael H.; Pohorille, Andrew; Wilson, Michael A.
2003-01-01
We report on several projects in the field of computational astrobiology, which is devoted to advancing our understanding of the origin, evolution and distribution of life in the Universe using theoretical and computational tools. Research projects included modifying existing computer simulation codes to use efficient, multiple time step algorithms, statistical methods for the analysis of astrophysical data via optimal partitioning methods, electronic structure calculations on water-nucleic acid complexes, incorporation of structural information into genomic sequence analysis methods, and calculations of shock-induced formation of polycyclic aromatic hydrocarbon compounds.
Unified framework for information integration based on information geometry
Oizumi, Masafumi; Amari, Shun-ichi
2016-01-01
Assessment of causal influences is a ubiquitous and important subject across diverse research fields. Drawn from consciousness studies, integrated information is a measure that defines integration as the degree of causal influences among elements. Whereas pairwise causal influences between elements can be quantified with existing methods, quantifying multiple influences among many elements poses two major mathematical difficulties. First, overestimation occurs due to interdependence among influences if each influence is separately quantified in a part-based manner and then simply summed over. Second, it is difficult to isolate causal influences while avoiding noncausal confounding influences. To resolve these difficulties, we propose a theoretical framework based on information geometry for the quantification of multiple causal influences with a holistic approach. We derive a measure of integrated information, which is geometrically interpreted as the divergence between the actual probability distribution of a system and an approximated probability distribution where causal influences among elements are statistically disconnected. This framework provides intuitive geometric interpretations harmonizing various information theoretic measures in a unified manner, including mutual information, transfer entropy, stochastic interaction, and integrated information, each of which is characterized by how causal influences are disconnected. In addition to the mathematical assessment of consciousness, our framework should help to analyze causal relationships in complex systems in a complete and hierarchical manner. PMID:27930289
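In the simplest instance of this geometric picture, statistically disconnecting all influences between two elements reduces the divergence to the ordinary mutual information. A minimal sketch of that special case (the full measure disconnects causal influences over time, which this toy omits):

```python
import numpy as np

def kl_to_independent(joint):
    """KL divergence (nats) between a two-element joint distribution and the
    product of its marginals, i.e. the mutual information."""
    joint = np.asarray(joint, float)
    joint = joint / joint.sum()
    indep = joint.sum(axis=1, keepdims=True) * joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log(joint[mask] / indep[mask])))

# Two strongly coupled binary elements diverge from the disconnected model
coupled = np.array([[0.45, 0.05],
                    [0.05, 0.45]])
print(kl_to_independent(coupled))   # ~0.37 nats
```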
Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca
2016-10-15
The Expected Value of Perfect Partial Information (EVPPI) is a decision-theoretic measure of the 'cost' of parametric uncertainty in decision making used principally in health economic decision making. Despite this decision-theoretic grounding, the uptake of EVPPI calculations in practice has been slow. This is in part due to the prohibitive computational time required to estimate the EVPPI via Monte Carlo simulations. However, recent developments have demonstrated that the EVPPI can be estimated by non-parametric regression methods, which have significantly decreased the computation time required to approximate the EVPPI. Under certain circumstances, high-dimensional Gaussian Process (GP) regression is suggested, but this can still be prohibitively expensive. Applying fast computation methods developed in spatial statistics using Integrated Nested Laplace Approximations (INLA) and projecting from a high-dimensional into a low-dimensional input space allows us to decrease the computation time for fitting these high-dimensional GPs, often substantially. We demonstrate that the EVPPI calculated using our method for GP regression is in line with the standard GP regression method and that, despite the apparent methodological complexity of this new method, R functions are available in the package BCEA to implement it simply and efficiently. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
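A minimal sketch of the regression-based EVPPI idea (in Python rather than the R/BCEA implementation the paper provides), assuming a simple polynomial regression in place of GP/INLA:

```python
import numpy as np

def evppi_regression(phi, net_benefit, degree=3):
    """Regression-based EVPPI estimate: regress each decision option's net
    benefit on the parameter of interest, then compare the 'mean of maxes'
    with the 'max of means' of the fitted values.
    phi: (N,) samples of the parameter subset; net_benefit: (N, D)."""
    fitted = np.column_stack([
        np.polyval(np.polyfit(phi, net_benefit[:, d], degree), phi)
        for d in range(net_benefit.shape[1])
    ])
    return fitted.max(axis=1).mean() - fitted.mean(axis=0).max()

# Toy two-option decision where the optimal choice flips with phi
rng = np.random.default_rng(5)
phi = rng.normal(size=20000)
nb = np.column_stack([1000 * phi, np.full_like(phi, 200.0)])
nb += rng.normal(0, 500, nb.shape)               # decision-model noise
print(evppi_regression(phi, nb))                 # ~300 for this toy setup
```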
Reviews of theoretical frameworks: Challenges and judging the quality of theory application.
Hean, Sarah; Anderson, Liz; Green, Chris; John, Carol; Pitt, Richard; O'Halloran, Cath
2016-06-01
Rigorous reviews of available information, from a range of resources, are required to support medical and health educators in their decision making. The aim of this article is to highlight the importance of a review of theoretical frameworks specifically as a supplement to reviews that focus on a synthesis of the empirical evidence alone. Establishing a shared understanding of theory as a concept is highlighted as a challenge and some practical strategies to achieving this are presented. This article also introduces the concept of theoretical quality, arguing that a critique of how theory is applied should complement the methodological appraisal of the literature in a review. We illustrate the challenge of establishing a shared meaning of theory through reference to experiences of an on-going review of this kind conducted in the field of interprofessional education (IPE) and use a high scoring paper selected in this review to illustrate how theoretical quality can be assessed. In reaching a shared understanding of theory as a concept, practical strategies that promote experiential and practical ways of knowing are required in addition to more propositional ways of sharing knowledge. Concepts of parsimony, testability, operational adequacy and empirical adequacy are explored as concepts that establish theoretical quality. Reviews of theoretical frameworks used in medical education are required to inform educational practice. Review teams should make time and effort to reach a shared understanding of the term theory. Theory reviews, and reviews more widely, should add an assessment of theory application to the protocol of their review method.
Mapping interictal epileptic discharges using mutual information between concurrent EEG and fMRI.
Caballero-Gaudes, César; Van de Ville, Dimitri; Grouiller, Frédéric; Thornton, Rachel; Lemieux, Louis; Seeck, Margitta; Lazeyras, François; Vulliemoz, Serge
2013-03-01
The mapping of haemodynamic changes related to interictal epileptic discharges (IED) in simultaneous electroencephalography (EEG) and functional MRI (fMRI) studies is usually carried out by means of EEG-correlated fMRI analyses where the EEG information specifies the model to test on the fMRI signal. The sensitivity and specificity critically depend on the accuracy of EEG detection and the validity of the haemodynamic model. In this study we investigated whether an information theoretic analysis based on the mutual information (MI) between the presence of epileptic activity on EEG and the fMRI data can provide further insights into the haemodynamic changes related to interictal epileptic activity. The important features of MI are that: 1) both recording modalities are treated symmetrically; 2) no requirement for a-priori models for the haemodynamic response function, or assumption of a linear relationship between the spiking activity and BOLD responses, and 3) no parametric model for the type of noise or its probability distribution is necessary for the computation of MI. Fourteen patients with pharmaco-resistant focal epilepsy underwent EEG-fMRI and intracranial EEG and/or surgical resection with positive postoperative outcome (seizure freedom or considerable reduction in seizure frequency) was available in 7/14 patients. We used nonparametric statistical assessment of the MI maps based on a four-dimensional wavelet packet resampling method. The results of MI were compared to the statistical parametric maps obtained with two conventional General Linear Model (GLM) analyses based on the informed basis set (canonical HRF and its temporal and dispersion derivatives) and the Finite Impulse Response (FIR) models. The MI results were concordant with the electro-clinically or surgically defined epileptogenic area in 8/14 patients and showed the same degree of concordance as the results obtained with the GLM-based methods in 12 patients (7 concordant and 5 discordant). In one patient, the information theoretic analysis improved the delineation of the irritative zone compared with the GLM-based methods. Our findings suggest that an information theoretic analysis can provide clinically relevant information about the BOLD signal changes associated with the generation and propagation of interictal epileptic discharges. The concordance between the MI, GLM and FIR maps support the validity of the assumptions adopted in GLM-based analyses of interictal epileptic activity with EEG-fMRI in such a manner that they do not significantly constrain the localization of the epileptogenic zone. Copyright © 2012 Elsevier Inc. All rights reserved.
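A minimal sketch of the MI computation between a binary IED indicator and one voxel's BOLD time course, using a simple histogram estimator; the wavelet-resampling significance assessment is omitted:

```python
import numpy as np

def mutual_info_binary_continuous(events, bold, n_bins=16):
    """Histogram estimate of I(IED; BOLD) in bits between a binary discharge
    indicator and a voxel's time course."""
    events = np.asarray(events)
    bold = np.asarray(bold, float)
    edges = np.histogram_bin_edges(bold, bins=n_bins)
    p_b = np.histogram(bold, bins=edges)[0] / bold.size
    mi = 0.0
    for e in (0, 1):
        sel = bold[events == e]
        if sel.size == 0:
            continue
        p_e = sel.size / bold.size
        p_b_e = np.histogram(sel, bins=edges)[0] / sel.size
        mask = p_b_e > 0
        mi += p_e * np.sum(p_b_e[mask] * np.log2(p_b_e[mask] / p_b[mask]))
    return mi

# A voxel whose signal shifts during discharges carries nonzero information
rng = np.random.default_rng(11)
ied = rng.integers(0, 2, size=400)
bold = rng.normal(size=400) + 1.5 * ied
print(mutual_info_binary_continuous(ied, bold))   # well above 0; ~0 if unrelated
```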
Life course approach in social epidemiology: an overview, application and future implications.
Cable, Noriko
2014-01-01
The application of the life course approach to social epidemiology has helped epidemiologists theoretically examine social gradients in population health. Longitudinal data with rich contextual information collected repeatedly and advanced statistical approaches have made this challenging task easier. This review paper provides an overview of the life course approach in epidemiology, its research application, and future challenges. In summary, a systematic approach to methods, including theoretically guided measurement of socioeconomic position, would assist researchers in gathering evidence for reducing social gradients in health, and collaboration across individual disciplines will make this task achievable.
Building qualitative study design using nursing's disciplinary epistemology.
Thorne, Sally; Stephens, Jennifer; Truant, Tracy
2016-02-01
To discuss the implications of drawing on core nursing knowledge as theoretical scaffolding for qualitative nursing enquiry. Although nurse scholars have been using qualitative methods for decades, much of their methodological direction derives from conventional approaches developed for answering questions in the social sciences. The quality of available knowledge to inform practice can be enhanced through the selection of study design options informed by an appreciation for the nature of nursing knowledge. Discussion paper. Drawing on the body of extant literature dealing with nursing's theoretical and qualitative research traditions, we consider contextual factors that have shaped the application of qualitative research approaches in nursing, including prior attempts to align method with the structure and form of disciplinary knowledge. On this basis, we critically reflect on design considerations that would follow logically from core features associated with a nursing epistemology. The substantive knowledge used by nurses to inform their practice includes both aspects developed at the level of the general and also that which pertains to application in the unique context of the particular. It must be contextually relevant to a fluid and dynamic healthcare environment and adaptable to distinctive patient conditions. Finally, it must align with nursing's moral mandate and action imperative. Qualitative research design components informed by nursing's disciplinary epistemology will help ensure a logical line of reasoning in our enquiries that remains true to the nature and structure of practice knowledge. © 2015 John Wiley & Sons Ltd.
Recent research related to prediction of stall/spin characteristics of fighter aircraft
NASA Technical Reports Server (NTRS)
Nguyen, L. T.; Anglin, E. L.; Gilbert, W. P.
1976-01-01
The NASA Langley Research Center is currently engaged in a stall/spin research program to provide the fundamental information and design guidelines required to predict the stall/spin characteristics of fighter aircraft. The prediction methods under study include theoretical spin prediction techniques and piloted simulation studies. The paper discusses the overall status of theoretical techniques including: (1) input data requirements, (2) math model requirements, and (3) correlation between theoretical and experimental results. The Langley Differential Maneuvering Simulator (DMS) facility has been used to evaluate the spin susceptibility of several current fighters during typical air combat maneuvers and to develop and evaluate the effectiveness of automatic departure/spin prevention concepts. The evaluation procedure is described and some of the more significant results of the studies are presented.
Chen, Yun; Yang, Hui
2016-01-01
In the era of big data, there are increasing interests on clustering variables for the minimization of data redundancy and the maximization of variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges on the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering. PMID:27966581
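A minimal sketch of the mutual-information measure at the core of this methodology, using quantile discretization and sklearn's mutual_info_score; the Dirichlet-process clustering and group elastic-net stages are omitted:

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def pairwise_mutual_info(X, n_bins=8):
    """Pairwise mutual-information matrix (nats) for the columns of X
    (n_samples, n_vars) after equal-frequency discretization."""
    n, p = X.shape
    edges = [np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1]) for j in range(p)]
    labels = np.stack([np.digitize(X[:, j], edges[j]) for j in range(p)], axis=1)
    M = np.zeros((p, p))
    for i in range(p):
        for j in range(i, p):
            M[i, j] = M[j, i] = mutual_info_score(labels[:, i], labels[:, j])
    return M

# A nonlinear pair (x, x^2) shows high MI despite near-zero linear correlation
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 2000)
X = np.column_stack([x, x ** 2, rng.uniform(-1, 1, 2000)])
print(np.round(pairwise_mutual_info(X), 2))
```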
NASA Technical Reports Server (NTRS)
Seewald, Friedrich
1931-01-01
The principal source of information on float resistance is the model test. In view of the insuperable difficulties opposing any attempt at theoretical treatment of the resistance problem, particularly at attitudes which tend toward satisfactory take-off, such as the transitory stage to planing, the towing test is and will remain the primary method for some time.
Ties That Do Not Bind: Musings on the Specious Relevance of Academic Research.
ERIC Educational Resources Information Center
Bolton, Michael J.; Stolcis, Gregory B.
2003-01-01
Discusses the gap between academic research and practice in public administration and argues that it can be traced to conflicts such as theoretical vs. pragmatic knowledge, data-supported vs. logic-driven information, scientific method vs. case studies, academic vs. practitioner journals, and tenure vs. organizational effectiveness. Explores…
Patterns of Informal Reasoning in the Context of Socioscientific Decision-Making.
ERIC Educational Resources Information Center
Sadler, Troy D.; Zeidler, Dana L.
The purpose of this article is to contribute to a theoretical knowledge base through research by examining factors salient to science education reform and practice in the context of socioscientific issues. The study explores how individuals negotiate and resolve genetic engineering dilemmas. A mixed-methods approach was used to examine patterns of…
Time Is Precious: Variable- and Event-Centred Approaches to Process Analysis in CSCL Research
ERIC Educational Resources Information Center
Reimann, Peter
2009-01-01
Although temporality is a key characteristic of the core concepts of CSCL--interaction, communication, learning, knowledge building, technology use--and although CSCL researchers have privileged access to process data, the theoretical constructs and methods employed in research practice frequently neglect to make full use of information relating…
Programmable Quantum Photonic Processor Using Silicon Photonics
2017-04-01
quantum information processing and quantum sensing, ranging from linear optics quantum computing and quantum simulation to quantum … transformers have driven experimental and theoretical advances in quantum simulation, cluster-state quantum computing, all-optical quantum repeaters … neuromorphic computing, and other applications. In addition, we developed new schemes for ballistic quantum computation, new methods for …
Bringing Nature into Social Work Settings: Mother Earth's Presence
ERIC Educational Resources Information Center
Gana, Carolina
2011-01-01
In an urban location in the downtown core of Toronto, Ontario, the author provides both individual and group counselling to women impacted by trauma in a community-based setting. Various modalities and theoretical frameworks that include feminism and anti-oppressive methods inform her counselling practice. The approach that the author takes in the…
NASA Technical Reports Server (NTRS)
Lee, H. P.
1977-01-01
The NASTRAN Thermal Analyzer Manual describes the fundamental and theoretical treatment of the finite element method, with emphasis on the derivations of the constituent matrices of different elements and solution algorithms. Necessary information and data relating to the practical applications of engineering modeling are included.
The Weaknesses of Full-Text Searching
ERIC Educational Resources Information Center
Beall, Jeffrey
2008-01-01
This paper provides a theoretical critique of the deficiencies of full-text searching in academic library databases. Because full-text searching relies on matching words in a search query with words in online resources, it is an inefficient method of finding information in a database. This matching fails to retrieve synonyms, and it also retrieves…
Does Privatization Affect Access to Government Information?
ERIC Educational Resources Information Center
Caponio, Joseph F.; Geffner, Janet
This paper begins by pointing out that privatization, or relying on the private sector to provide commercial goods and services for government departments and agencies, is a tool that has been used effectively by the federal government for several decades. It then presents the theoretical basis for privatization, describes a number of methods used…
Cognitive Styles and Sex Roles in Teaching-Learning Processes.
ERIC Educational Resources Information Center
Nelson, Karen H.
Cognitive style models describe individual differences in information-processing, or methods for deriving meaning from the world. Each style is theoretically value-free; each is valid and has strengths or weaknesses depending upon its context. However, this value freedom has been threatened in two ways. First, while cognitive style has been…
Fizzy: feature subset selection for metagenomics.
Ditzler, Gregory; Morrison, J Calvin; Lan, Yemin; Rosen, Gail L
2015-11-04
Some of the current software tools for comparative metagenomics provide ecologists with the ability to investigate and explore bacterial communities using α- & β-diversity. Feature subset selection--a sub-field of machine learning--can also provide a unique insight into the differences between metagenomic or 16S phenotypes. In particular, feature subset selection methods can obtain the operational taxonomic units (OTUs), or functional features, that have a high-level of influence on the condition being studied. For example, in a previous study we have used information-theoretic feature selection to understand the differences between protein family abundances that best discriminate between age groups in the human gut microbiome. We have developed a new Python command line tool, which is compatible with the widely adopted BIOM format, for microbial ecologists that implements information-theoretic subset selection methods for biological data formats. We demonstrate the software tools capabilities on publicly available datasets. We have made the software implementation of Fizzy available to the public under the GNU GPL license. The standalone implementation can be found at http://github.com/EESI/Fizzy.
Print Advertisements for Alzheimer’s Disease Drugs: Informational and Transformational Features
Gooblar, Jonathan; Carpenter, Brian D.
2014-01-01
Purpose We examined print advertisements for Alzheimer’s disease drugs published in journals and magazines between January 2008 and February 2012, using an informational versus transformational theoretical framework to identify objective and persuasive features. Methods In 29 unique advertisements, we used qualitative methods to code and interpret identifying information, charts, benefit and side effect language, and persuasive appeals embedded in graphics and narratives. Results Most elements contained a mixture of informational and transformational features. Charts were used infrequently, but when they did appear the accompanying text often exaggerated the data. Benefit statements covered an array of symptoms, drug properties, and caregiver issues. Side effect statements often used positive persuasive appeals. Graphics and narrative features emphasized positive emotions and outcomes. Implications We found subtle and sophisticated attempts both to educate and to persuade readers. It is important for consumers and prescribing physicians to read print advertisements critically so that they can make informed treatment choices. PMID:23687184
Accurate airway centerline extraction based on topological thinning using graph-theoretic analysis.
Bian, Zijian; Tan, Wenjun; Yang, Jinzhu; Liu, Jiren; Zhao, Dazhe
2014-01-01
The quantitative analysis of the airway tree is of critical importance in the CT-based diagnosis and treatment of common pulmonary diseases. The extraction of the airway centerline is a precursor to identifying the airway's hierarchical structure, measuring geometrical parameters, and guiding visual detection. Traditional methods suffer from extra branches and circles caused by incomplete segmentation results, which lead to erroneous analysis in applications. This paper proposes an automatic and robust centerline extraction method for the airway tree. First, the centerline is located with the topological thinning method: border voxels are iteratively deleted in a symmetric fashion to preserve topological and geometrical properties. Second, the structural information is generated using graph-theoretic analysis. Then inaccurate circles are removed with a distance-weighting strategy, and extra branches are pruned according to clinical anatomic knowledge. The centerline region without false appendices is eventually determined after the described phases. Experimental results show that the proposed method identifies more than 96% of branches, keeps consistency across different cases, and achieves a superior circle-free structure and centrality.
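The pipeline above (thin, build a graph, break circles, prune spurious branches) can be sketched as follows. The library choices (scikit-image, networkx), the 26-neighbour connectivity, and the pruning threshold are illustrative assumptions rather than the authors' implementation, which additionally uses distance weighting and anatomic knowledge.

```python
import numpy as np
import networkx as nx
from itertools import product
from skimage.morphology import skeletonize  # handles 3-D masks in recent versions

def skeleton_graph(mask):
    """Thin a binary airway mask and build a voxel adjacency graph."""
    skel = skeletonize(mask)                      # topological thinning
    voxels = set(map(tuple, np.argwhere(skel)))
    g = nx.Graph()
    g.add_nodes_from(voxels)
    offsets = [o for o in product((-1, 0, 1), repeat=3) if any(o)]
    for v in voxels:                              # connect 26-neighbours
        for o in offsets:
            w = tuple(np.add(v, o))
            if w in voxels:
                g.add_edge(v, w, weight=float(np.linalg.norm(o)))
    return g

def clean_centerline(g, min_branch_len=5):
    """Break circles by keeping a spanning tree, then prune short branches."""
    tree = nx.minimum_spanning_tree(g, weight="weight")
    pruned = True
    while pruned:
        pruned = False
        for leaf in [n for n in tree if tree.degree(n) == 1]:
            if leaf not in tree:                  # removed in an earlier prune
                continue
            path = [leaf]                         # walk leaf -> nearest junction
            while tree.degree(path[-1]) <= 2:
                nbrs = [n for n in tree.neighbors(path[-1]) if n not in path]
                if not nbrs:
                    break
                path.append(nbrs[0])
            if 1 < len(path) < min_branch_len:    # spurious side branch
                tree.remove_nodes_from(path[:-1])
                pruned = True
    return tree
```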
ERIC Educational Resources Information Center
Miranda, Silvania V.; Tarapanoff, Kira M. A.
2008-01-01
Introduction: The paper deals with the identification of the information needs and information competencies of a professional group. Theoretical basis: A theoretical relationship between information needs and information competencies as subjects is proposed. Three dimensions are examined: cognitive, affective and situational. The recognition of an…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sciarrino, Fabio; Dipartimento di Fisica and Consorzio Nazionale Interuniversitario per le Scienze Fisiche della Materia, Universita 'La Sapienza', Rome 00185; De Martini, Francesco
The optimal phase-covariant quantum cloning machine (PQCM) broadcasts the information associated with an input qubit into a multiqubit system, exploiting partial a priori knowledge of the input state. This additional a priori information leads to a higher fidelity than for universal cloning. The present article first analyzes different innovative schemes to implement the 1→3 PQCM. The method is then generalized to any 1→M machine for an odd value of M by a theoretical approach based on the general angular momentum formalism. Finally, different experimental schemes, based either on linear or nonlinear methods and valid for single-photon polarization-encoded qubits, are discussed.
A new method for flight test determination of propulsive efficiency and drag coefficient
NASA Technical Reports Server (NTRS)
Bull, G.; Bridges, P. D.
1983-01-01
A flight test method is described from which propulsive efficiency as well as parasite and induced drag coefficients can be directly determined using relatively simple instrumentation and analysis techniques. The method uses information contained in the transient response in airspeed for a small power change in level flight in addition to the usual measurement of power required for level flight. Measurements of pitch angle and longitudinal and normal acceleration are eliminated. The theoretical basis for the method, the analytical techniques used, and the results of application of the method to flight test data are presented.
Evans, R; Ferguson, E
2014-01-01
Background and Objectives While blood donation is traditionally described as a behaviour motivated by pure altruism, the assessment of altruism in the blood donation literature has not been theoretically informed. Drawing on theories of altruism from psychology, economics and evolutionary biology, it is argued that a theoretically derived psychometric assessment of altruism is needed. Such a measure is developed in this study that can be used to help inform both our understanding of the altruistic motives of blood donors and recruitment intervention strategies. Materials and Methods A cross-sectional survey (N = 414), with a 1-month behavioural follow-up (time 2, N = 77), was designed to assess theoretically derived constructs from psychological, economic and evolutionary biological theories of altruism. Theory of planned behaviour (TPB) variables and co-operation were also assessed at time 1 and a measure of behavioural co-operation at time 2. Results Five theoretical dimensions (impure altruism, kinship, self-regarding motives, reluctant altruism and egalitarian warm glow) of altruism were identified through factor analyses. These five altruistic motives differentiated blood donors from non-donors (donors scored higher on impure altruism and reluctant altruism), showed incremental validity over TPB constructs to predict donor intention and predicted future co-operative behaviour. Conclusions These findings show that altruism in the context of blood donation is multifaceted and complex and does not reflect pure altruism. This has implications for recruitment campaigns that focus solely on pure altruism. PMID:24117697
NASA Astrophysics Data System (ADS)
Tapia-Herrera, R.; Huerta-Lopez, C. I.; Martinez-Cruzado, J. A.
2009-05-01
Results of site characterization for an experimental site in the metropolitan area of Tijuana, B. C., Mexico are presented as part of on-going research in which time series of earthquakes, ambient noise, and induced vibrations were processed with three different methods: H/V spectral ratios, Spectral Analysis of Surface Waves (SASW), and the Random Decrement Method (RDM). Forward modeling using the wave propagation stiffness matrix method (Roësset and Kausel, 1981) was used to compute the theoretical SH/P and SV/P spectral ratios, and the experimental H/V spectral ratios were computed following the conventional concepts of Fourier analysis. The theoretical and experimental H/V spectral ratios were then compared. For the SASW method the theoretical dispersion curves were also computed and compared with the experimental ones, and finally the theoretical free vibration decay curve was compared with the experimental one obtained with the RDM. All three methods were tested with ambient noise, induced vibrations, and earthquake signals. The experimental spectral ratios obtained with both ambient noise and earthquake signals agree quite well with the theoretical spectral ratios, particularly at the fundamental vibration frequency of the recording site. Differences between the fundamental vibration frequencies are evident for sites located on alluvial fill (~0.6 Hz) and for sites located on conglomerate/sandstone fill (0.75 Hz). Shear wave velocities for the soft soil layers of the 4-layer discrete soil model range from as low as 100 m/s up to 280 m/s. The results with the SASW provided information that makes it possible to identify low-velocity layers not seen before with traditional seismic methods. The damping estimates obtained with the RDM are within the expected values, and the dominant frequency of the system, also obtained with the RDM, agrees to within ±20% with the one obtained by means of the H/V spectral ratio.
A new frequency matching technique for FRF-based model updating
NASA Astrophysics Data System (ADS)
Yang, Xiuming; Guo, Xinglin; Ouyang, Huajiang; Li, Dongsheng
2017-05-01
Frequency Response Function (FRF) residues have been widely used to update finite element models. As raw measurement information, they offer rich data without extraction errors. However, like other sensitivity-based methods, an FRF-based identification method must contend with the ill-conditioning problem, which is even more serious here since the sensitivity of the FRF in the vicinity of a resonance is much greater than elsewhere. Furthermore, for a given measured frequency, directly using the theoretical FRF at that frequency may lead to a huge difference between the theoretical FRF and the corresponding experimental FRF, which ultimately magnifies the effects of measurement errors and damping. Hence, in the solution process, selecting the appropriate frequency at which to evaluate the theoretical FRF in every iteration of the sensitivity-based approach is an effective way to improve the robustness of an FRF-based algorithm. A primary tool for such frequency selection based on the correlation of FRFs is the Frequency Domain Assurance Criterion. This paper presents a new frequency selection method which directly finds the frequency that minimizes the difference in order of magnitude between the theoretical and experimental FRFs. A simulated truss structure is used to compare the performance of different frequency selection methods. For the sake of realism, it is assumed that not all the degrees of freedom (DoFs) are available for measurement. The minimum number of DoFs required by each approach to correctly update the analytical model is used as the criterion for comparison.
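A minimal sketch of the frequency-matching idea, under stated assumptions: for each measured frequency, search a small window of the analytical model's frequency axis for the point whose FRF magnitude is closest in order of magnitude to the measured value. The window width, array layout, and function name are illustrative, not the paper's implementation.

```python
import numpy as np

def match_frequency(freqs, frf_theory, f_meas, frf_meas, window=0.1):
    """Pick the model frequency whose theoretical FRF magnitude is closest,
    in order of magnitude, to the measured FRF magnitude near f_meas."""
    candidates = np.flatnonzero(np.abs(freqs - f_meas) <= window * f_meas)
    # difference in order of magnitude between |H_theory| and |H_measured|
    gap = np.abs(np.log10(np.abs(frf_theory[candidates]))
                 - np.log10(abs(frf_meas)))
    return freqs[candidates[np.argmin(gap)]]
```

The theoretical FRF used in the next sensitivity iteration is then evaluated at the returned frequency rather than at the raw measured frequency, which keeps the residue from being dominated by the steep magnitude change near resonances.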
Privacy-preserving periodical publishing for medical information
NASA Astrophysics Data System (ADS)
Jin, Hua; Ju, Shi-guang; Liu, Shan-cheng
2013-07-01
Existing privacy-preserving publishing models cannot meet the requirements of periodical publishing for medical information, whether these models are static or dynamic. This paper presents a (k,l)-anonymity model that keeps individual associations and a principle based on ε-invariance groups for subsequent periodical publishing; the PKIA and PSIGI algorithms are then designed for them. The proposed methods preserve more individual associations while protecting privacy and achieve better publishing quality. Experiments confirm our theoretical results and the practicability of the approach.
Clinical Case Studies in Psychoanalytic and Psychodynamic Treatment
Willemsen, Jochem; Della Rosa, Elena; Kegerreis, Sue
2017-01-01
This manuscript provides a review of the clinical case study within the field of psychoanalytic and psychodynamic treatment. The method has been contested for methodological reasons and because it would contribute to theoretical pluralism in the field. We summarize how the case study method is being applied in different schools of psychoanalysis, and we clarify the unique strengths of this method and areas for improvement. Finally, based on the literature and on our own experience with case study research, we come to formulate nine guidelines for future case study authors: (1) basic information to include, (2) clarification of the motivation to select a particular patient, (3) information about informed consent and disguise, (4) patient background and context of referral or self-referral, (5) patient's narrative, therapist's observations and interpretations, (6) interpretative heuristics, (7) reflexivity and counter-transference, (8) leaving room for interpretation, and (9) answering the research question, and comparison with other cases. PMID:28210235
ERIC Educational Resources Information Center
Koh, Kyungwon
2011-01-01
Contemporary young people are engaged in a variety of information behaviors, such as information seeking, using, sharing, and creating. The ways youth interact with information have transformed in the shifting digital information environment; however, relatively little empirical research exists and no theoretical framework adequately explains…
High-resolution remotely sensed small target detection by imitating fly visual perception mechanism.
Huang, Fengchen; Xu, Lizhong; Li, Min; Tang, Min
2012-01-01
The difficulties and limitations of small target detection methods for high-resolution remote sensing data have become a recent research hot spot. Inspired by the information capture and processing theory of the fly visual system, this paper endeavors to construct a characterized model of information perception and to exploit the advantages of fast and accurate small target detection under complex and varied natural environments. The proposed model forms a theoretical basis of small target detection for high-resolution remote sensing data. After comparing the prevailing simulation mechanisms behind fly visual systems, we propose a fly-imitated visual system method of information processing for high-resolution remote sensing data. A small target detector and a corresponding detection algorithm are designed by simulating the mechanisms of information acquisition, compression, and fusion of the fly visual system, the function of pool cells, and the character of nonlinear self-adaptation. Experiments verify the feasibility and rationality of the proposed small target detection model and the fly-imitated visual perception method.
Automatic indexing of compound words based on mutual information for Korean text retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan Koo Kim; Yoo Kun Cho
In this paper, we present an automatic indexing technique for compound words suitable for an agglutinative language, specifically Korean. First, we present the construction conditions for composing compound words as indexing terms. We also present decomposition rules applicable to consecutive nouns so as to extract the full content of a text. Finally, we propose mutual information, a measure based on the information-theoretic notion, to estimate the usefulness of a term by calculating the degree of word association within compound words. By applying this method, our system has raised the precision rate for compound words from 72% to 87%.
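A minimal sketch of scoring word association with (pointwise) mutual information, the kind of measure the abstract proposes for deciding whether adjacent nouns form a useful compound indexing term. The toy corpus, the threshold, and the use of PMI over adjacent pairs are illustrative assumptions.

```python
import math
from collections import Counter

def pmi_scores(docs):
    """Score adjacent word pairs by pointwise mutual information (bits)."""
    unigrams, bigrams, n = Counter(), Counter(), 0
    for doc in docs:
        words = doc.split()
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))
        n += len(words)
    def pmi(w1, w2):
        p_xy = bigrams[(w1, w2)] / sum(bigrams.values())
        p_x, p_y = unigrams[w1] / n, unigrams[w2] / n
        return math.log2(p_xy / (p_x * p_y))
    return {pair: pmi(*pair) for pair in bigrams}

docs = ["information retrieval system design",
        "information retrieval method",
        "system design principles"]
scores = pmi_scores(docs)
# pairs scoring above a tuned threshold become compound indexing terms
compounds = [pair for pair, s in scores.items() if s > 1.0]
```

Pairs that co-occur far more often than their individual frequencies predict (high PMI) are kept as compounds; incidental adjacencies score near zero or below.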
2011-01-01
Background Tobacco use adversely affects oral health. Clinical guidelines recommend that dental providers promote tobacco abstinence and provide patients who use tobacco with brief tobacco use cessation counselling. Research shows that these guidelines are seldom implemented, however. To improve guideline adherence and to develop effective interventions, it is essential to understand provider behaviour and challenges to implementation. This study aimed to develop a theoretically informed measure for assessing among dental providers implementation difficulties related to tobacco use prevention and cessation (TUPAC) counselling guidelines, to evaluate those difficulties among a sample of dental providers, and to investigate a possible underlying structure of applied theoretical domains. Methods A 35-item questionnaire was developed based on key theoretical domains relevant to the implementation behaviours of healthcare providers. Specific items were drawn mostly from the literature on TUPAC counselling studies of healthcare providers. The data were collected from dentists (n = 73) and dental hygienists (n = 22) in 36 dental clinics in Finland using a web-based survey. Of 95 providers, 73 participated (76.8%). We used Cronbach's alpha to ascertain the internal consistency of the questionnaire. Mean domain scores were calculated to assess different aspects of implementation difficulties and exploratory factor analysis to assess the theoretical domain structure. The authors agreed on the labels assigned to the factors on the basis of their component domains and the broader behavioural and theoretical literature. Results Internal consistency values for theoretical domains varied from 0.50 ('emotion') to 0.71 ('environmental context and resources'). The domain environmental context and resources had the lowest mean score (21.3%; 95% confidence interval [CI], 17.2 to 25.4) and was identified as a potential implementation difficulty. The domain emotion provided the highest mean score (60%; 95% CI, 55.0 to 65.0). Three factors were extracted that explain 70.8% of the variance: motivation (47.6% of variance, α = 0.86), capability (13.3% of variance, α = 0.83), and opportunity (10.0% of variance, α = 0.71). Conclusions This study demonstrated a theoretically informed approach to identifying possible implementation difficulties in TUPAC counselling among dental providers. This approach provides a method for moving from diagnosing implementation difficulties to designing and evaluating interventions. PMID:21615948
Kotani, Akira; Tsutsumi, Risa; Shoji, Asaki; Hayashi, Yuzuru; Kusu, Fumiyo; Yamamoto, Kazuhiro; Hakamata, Hideki
2016-07-08
This paper puts forward a time- and material-saving method for evaluating the repeatability of area measurements in gradient HPLC with UV detection (HPLC-UV), based on the function of mutual information (FUMI) theory, which can theoretically provide the measurement standard deviation (SD) and detection limits from the stochastic properties of baseline noise with no recourse to repetitive measurements of real samples. The chromatographic determination of terbinafine hydrochloride and enalapril maleate is taken as an example. The best choice of the number of noise data points, inevitable for the theoretical evaluation, is shown to be 512 data points (10.24 s at the 50 points/s sampling rate of an A/D converter). Coupled with the relative SD (RSD) of sample injection variability in the instrument used, the theoretical evaluation is proved to give values of area measurement RSDs identical to those estimated by the usual repetitive method (n=6) over a wide concentration range of the analytes, within the 95% confidence intervals of the latter RSD. The FUMI theory is not a statistical one, but the "statistical" reliability of its SD estimates (n=1) is observed to be as high as that attained by thirty-one measurements of the same samples (n=31).
Challenging convention: symbolic interactionism and grounded theory.
Newman, Barbara
2008-01-01
Not much is written in the literature about decisions made by researchers and their justifications of method in response to a particular clinical problem, together with an appropriate and congruent theoretical perspective, particularly for Glaserian grounded theory. I contend that the utilisation of symbolic interactionism as a theoretical perspective to inform and guide the evolving research process and the analysis of data when using the classic or Glaserian grounded theory (GT) method is not always appropriate. Within this article I offer an analysis of the key issues to be addressed when contemplating the use of Glaserian GT and the utilisation of an appropriate theoretical perspective, rather than accepting the convention of symbolic interactionism (SI). The analysis became imperative in a study I conducted that sought to explore the concerns, adaptive behaviours, psychosocial processes and relevant interactions over a 12-month period among newly diagnosed persons with end-stage renal disease who depend on haemodialysis in the home environment for survival. The reality of perception was central to the end product of the study. Human ethics approval was granted by six committees within the New South Wales Health Department and one from a university.
Generalization of the Poincare sphere to process 2D displacement signals
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.; Lamberti, Luciano
2017-06-01
Traditionally the multiple phase method has been considered an essential tool for phase information recovery. The in-quadrature phase method, theoretically an alternative pathway to the same goal, has failed in actual applications. The authors, in a previous paper dealing with 1D signals, have shown that, properly implemented, the in-quadrature method yields phase values with the same accuracy as the multiple phase method. The present paper extends the methodology developed in 1D to 2D. This extension is not a straightforward process and requires the introduction of a number of additional concepts and developments. The concept of the monogenic function provides the necessary tools required for the extension process. The monogenic function has a graphic representation through the Poincare sphere, familiar in the field of photoelasticity and, through the developments introduced in this paper, connected to the analysis of displacement fringe patterns. The paper is illustrated with examples of application that show that the multiple phase method and the in-quadrature method are two aspects of the same basic theoretical model.
2013-01-01
Background Transition from children’s to adult epilepsy services is known to be challenging. Some young people partially or completely disengage from contact with services, thereby risking their health and wellbeing. We conducted a mixed-method systematic review which showed that current epilepsy transition models enabling information exchange and developing self-care skills were not working well. We used synthesised evidence to develop a theoretical framework to inform this qualitative study. The aim was to address a critical research gap by exploring communication, information needs, and experiences of knowledge exchange in clinical settings by young people and their parents during transition from children’s to adult epilepsy services. Method A qualitative comparative embedded case study with two 'transition' cases (epilepsy services) in two hospitals. Fifty-eight participants, 30 young people (13–19 years) and 28 parents, were interviewed in depth (individually or in focus groups). Clinical documents/guidelines were collated. 'Framework' thematic analysis was used. The theoretical framework was tested using themes, pattern matching and replication logic. Theory-based evaluation methods were used to understand how and why different models of service delivery worked. Results A joint epilepsy clinic for young people aged 14–17 years, coordinated by children’s and adult services, was more likely to influence young people’s behaviour by facilitating more positive engagement with adult healthcare professionals and retention of epilepsy-related self-care information. Critical success factors were continuity of care and on-going, consistent, age-appropriate and person-centred communication and repeated information exchange. Three young people who experienced a single handover clinic disengaged from services. Psychosocial care was generally inadequate and healthcare professionals lacked awareness of memory impairment. Parents lacked the knowledge, skills and support to enable their child to independently self-care. Translation of transition policies/guidelines into practice was weak. Conclusion Findings make a significant contribution to understanding why young people disengage from epilepsy services, why some parents prevent independent self-care, and what constitutes good communication and transition from the perspective of young people and parents. The type of service configuration, delivery and organisation influenced the behaviours of young people at transition to adult services. The novel theoretical framework was substantially supported, underwent further post-hoc development, and can be used in future practice/intervention development and research. PMID:24131769
On the Discovery of Evolving Truth
Li, Yaliang; Li, Qi; Gao, Jing; Su, Lu; Zhao, Bo; Fan, Wei; Han, Jiawei
2015-01-01
In the era of big data, information regarding the same objects can be collected from increasingly more sources. Unfortunately, there usually exist conflicts among the information coming from different sources. To tackle this challenge, truth discovery, i.e., to integrate multi-source noisy information by estimating the reliability of each source, has emerged as a hot topic. In many real world applications, however, the information may come sequentially, and as a consequence, the truth of objects as well as the reliability of sources may be dynamically evolving. Existing truth discovery methods, unfortunately, cannot handle such scenarios. To address this problem, we investigate the temporal relations among both object truths and source reliability, and propose an incremental truth discovery framework that can dynamically update object truths and source weights upon the arrival of new data. Theoretical analysis is provided to show that the proposed method is guaranteed to converge at a fast rate. The experiments on three real world applications and a set of synthetic data demonstrate the advantages of the proposed method over state-of-the-art truth discovery methods. PMID:26705502
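A minimal sketch of the core loop in weighted truth discovery, with an incremental source-reliability update in the spirit of the framework described above. The smoothing constant, decay factor, and weighting formula are illustrative assumptions, not the authors' update rules.

```python
from collections import defaultdict

def truth_step(claims, weights, decay=0.9):
    """One incremental round. claims: {source: {object: value}};
    weights: {source: current reliability estimate}."""
    # 1. estimate each object's truth by reliability-weighted voting
    votes = defaultdict(lambda: defaultdict(float))
    for src, obs in claims.items():
        for obj, val in obs.items():
            votes[obj][val] += weights.get(src, 1.0)
    truths = {obj: max(vals, key=vals.get) for obj, vals in votes.items()}
    # 2. update reliabilities from agreement with the estimated truths
    new_weights = {}
    for src, obs in claims.items():
        agree = sum(truths[obj] == val for obj, val in obs.items())
        score = (agree + 1) / (len(obs) + 2)      # smoothed accuracy
        # exponential smoothing retains history as new data arrive
        new_weights[src] = decay * weights.get(src, 1.0) + (1 - decay) * score
    return truths, new_weights

claims = {"s1": {"city": "Paris"}, "s2": {"city": "Paris"},
          "s3": {"city": "Lyon"}}
truths, weights = truth_step(claims, {"s1": 1.0, "s2": 1.0, "s3": 1.0})
```

When a new batch of claims arrives, truth_step is called again with the updated weights, so both object truths and source reliabilities evolve over time instead of being recomputed from scratch.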
Developing Emotion-Based Case Formulations: A Research-Informed Method.
Pascual-Leone, Antonio; Kramer, Ueli
2017-01-01
New research-informed methods for case conceptualization that cut across traditional therapy approaches are increasingly popular. This paper presents a trans-theoretical approach to case formulation based on research observations of emotion. The sequential model of emotional processing (Pascual-Leone & Greenberg, 2007) is a process research model that provides concrete markers for therapists to observe the emerging emotional development of their clients. We illustrate how this model can be used by clinicians to track change; it provides a 'clinical map' by which therapists may orient themselves in-session and plan treatment interventions. Emotional processing thus serves as a trans-theoretical framework for therapists who wish to conduct emotion-based case formulations. First, we present criteria for why this research model translates well into practice. Second, two contrasting case studies are presented to demonstrate the method. The model bridges research with practice by using client emotion as an axis of integration. Key Practitioner Message: Process research on emotion can offer a template for therapists to make case formulations while using a range of treatment approaches. The sequential model of emotional processing provides a 'process map' of concrete markers for therapists to (1) observe the emerging emotional development of their clients, and (2) help therapists develop a treatment plan.
A novel Bayesian framework for discriminative feature extraction in Brain-Computer Interfaces.
Suk, Heung-Il; Lee, Seong-Whan
2013-02-01
As the learning load has shifted from the human subject to the computer, machine learning has come to be considered a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI, in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method that extends a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure the discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing its results and success on three public databases.
ERIC Educational Resources Information Center
O'Malley, Kevin Dermit
2010-01-01
Centralized customer support is an established industry method used to improve revenue and profitability. What remains unknown is whether this cost-saving commercial business practice is similarly applicable to the unique, often isolated military environment. This research study statistically tested a theoretical framework for knowledge management…
Foreign Scholars' Theoretical Approaches to Using Social Networks as Educational Strategies
ERIC Educational Resources Information Center
Pazyura, Natalia
2017-01-01
Modern trends in development of information and communication technologies change many aspects in the process of education: from the role of participants to the forms and methods of knowledge delivery. ICTs make it possible to develop students' creative potential. The emergence of online social groups was an important event in the sphere of…
Effects of atmospheric aerosols on scattering reflected visible light from earth resource features
NASA Technical Reports Server (NTRS)
Noll, K. E.; Tschantz, B. A.; Davis, W. T.
1972-01-01
The vertical variations in atmospheric light attenuation under ambient conditions were identified, and a method through which aerial photographs of earth features might be corrected to yield quantitative information about the actual features was provided. A theoretical equation was developed based on the Bouguer-Lambert extinction law and basic photographic theory.
What Do Children with Specific Language Impairment Do with Multiple Forms of "DO"?
ERIC Educational Resources Information Center
Rice, Mabel L.; Blossom, Megan
2013-01-01
Purpose: This study was designed to examine the early usage patterns of multiple grammatical functions of "DO" in children with and without specific language impairment (SLI). Children's use of this plurifunctional form is informative for evaluation of theoretical accounts of the deficit in SLI. Method: Spontaneous uses of multiple functions of…
Shuttling between Worlds: Quandaries of Performing Queered Research in Asian American Contexts
ERIC Educational Resources Information Center
Varney, Joan
2008-01-01
This article explores how the tensions that grow out of being a researcher in my community of queer Asian Americans lead to the formulation of a different kind of ethnographic approach. A hybrid notion of identity can require and inform a hybrid or poststructural ethnographic practice. This hybridized research method draws upon theoretical strands…
Anomaly clustering in hyperspectral images
NASA Astrophysics Data System (ADS)
Doster, Timothy J.; Ross, David S.; Messinger, David W.; Basener, William F.
2009-05-01
The topological anomaly detection algorithm (TAD) differs from other anomaly detection algorithms in that it uses a topological/graph-theoretic model for the image background instead of modeling the image with a Gaussian normal distribution. In the construction of the model, TAD produces a hard threshold separating anomalous pixels from background in the image. We build on this feature of TAD by extending the algorithm so that it gives a measure of the number of anomalous objects, rather than the number of anomalous pixels, in a hyperspectral image. This is done by identifying, and integrating, clusters of anomalous pixels via a graph theoretical method combining spatial and spectral information. The method is applied to a cluttered HyMap image and combines small groups of pixels containing like materials, such as those corresponding to rooftops and cars, into individual clusters. This improves visualization and interpretation of objects.
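A minimal sketch of the clustering step described above: link anomalous pixels that are close both spatially and spectrally, then count connected components as anomalous objects. The distance thresholds, the spectral-angle similarity, and the graph library are illustrative assumptions, not the TAD extension itself.

```python
import numpy as np
import networkx as nx

def cluster_anomalies(coords, spectra, d_max=3.0, s_max=0.2):
    """coords: (n, 2) pixel positions; spectra: (n, b) pixel spectra.
    Link anomalous pixels that are close both spatially and spectrally,
    then report connected components as anomalous objects."""
    g = nx.Graph()
    g.add_nodes_from(range(len(coords)))
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            spatial = np.linalg.norm(coords[i] - coords[j])
            # spectral angle as a scale-invariant similarity measure
            cos = np.dot(spectra[i], spectra[j]) / (
                np.linalg.norm(spectra[i]) * np.linalg.norm(spectra[j]))
            angle = np.arccos(np.clip(cos, -1.0, 1.0))
            if spatial <= d_max and angle <= s_max:
                g.add_edge(i, j)
    return list(nx.connected_components(g))
```

The number of returned components then estimates the count of anomalous objects (rooftops, cars) rather than the count of anomalous pixels.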
ATR applications of minimax entropy models of texture and shape
NASA Astrophysics Data System (ADS)
Zhu, Song-Chun; Yuille, Alan L.; Lanterman, Aaron D.
2001-10-01
Concepts from information theory have recently found favor in both the mainstream computer vision community and the military automatic target recognition community. In the computer vision literature, the principles of minimax entropy learning theory have been used to generate rich probabilistic models of texture and shape. In addition, the method of types and large deviation theory have permitted the difficulty of various texture and shape recognition tasks to be characterized by 'order parameters' that determine how fundamentally vexing a task is, independent of the particular algorithm used. These information-theoretic techniques have been demonstrated using traditional visual imagery in applications such as simulating cheetah skin textures and finding roads in aerial imagery. We discuss their application to problems in the specific application domain of automatic target recognition using infrared imagery. We also review recent theoretical and algorithmic developments which permit learning minimax entropy texture models for infrared textures in reasonable timeframes.
Multi-scale integration and predictability in resting state brain activity
Kolchinsky, Artemy; van den Heuvel, Martijn P.; Griffa, Alessandra; Hagmann, Patric; Rocha, Luis M.; Sporns, Olaf; Goñi, Joaquín
2014-01-01
The human brain displays heterogeneous organization in both structure and function. Here we develop a method to characterize brain regions and networks in terms of information-theoretic measures. We look at how these measures scale when larger spatial regions as well as larger connectome sub-networks are considered. This framework is applied to human brain fMRI recordings of resting-state activity and DSI-inferred structural connectivity. We find that strong functional coupling across large spatial distances distinguishes functional hubs from unimodal low-level areas, and that this long-range functional coupling correlates with structural long-range efficiency on the connectome. We also find a set of connectome regions that are both internally integrated and coupled to the rest of the brain, and which resemble previously reported resting-state networks. Finally, we argue that information-theoretic measures are useful for characterizing the functional organization of the brain at multiple scales. PMID:25104933
Applying information theory to small groups assessment: emotions and well-being at work.
García-Izquierdo, Antonio León; Moreno, Blanca; García-Izquierdo, Mariano
2010-05-01
This paper explores and analyzes the relations between emotions and well-being in a sample of aviation personnel, specifically passenger crew (flight attendants). There is increasing interest in studying the influence of emotions and their role as psychosocial factors in the work environment, since they can act as facilitators or as shock absorbers. Testing theoretical models with traditional parametric techniques requires a large sample size for efficient estimation of the coefficients that quantify the relations between variables. Since the available sample is small, the most common size in European enterprises, we used the maximum entropy principle to explore the emotions involved in psychosocial risks. The analyses show that this method takes advantage of the limited information available and guarantees an optimal estimation, the results of which are coherent with theoretical models and numerous empirical studies of emotions and well-being.
A Thematic Analysis of Theoretical Models for Translational Science in Nursing: Mapping the Field
Mitchell, Sandra A.; Fisher, Cheryl A.; Hastings, Clare E.; Silverman, Leanne B.; Wallen, Gwenyth R.
2010-01-01
Background The quantity and diversity of conceptual models in translational science may complicate rather than advance the use of theory. Purpose This paper offers a comparative thematic analysis of the models available to inform knowledge development, transfer, and utilization. Method Literature searches identified 47 models for knowledge translation. Four thematic areas emerged: (1) evidence-based practice and knowledge transformation processes; (2) strategic change to promote adoption of new knowledge; (3) knowledge exchange and synthesis for application and inquiry; (4) designing and interpreting dissemination research. Discussion This analysis distinguishes the contributions made by leaders and researchers at each phase in the process of discovery, development, and service delivery. It also informs the selection of models to guide activities in knowledge translation. Conclusions A flexible theoretical stance is essential to simultaneously develop new knowledge and accelerate the translation of that knowledge into practice behaviors and programs of care that support optimal patient outcomes. PMID:21074646
Variational learning and bits-back coding: an information-theoretic view to Bayesian learning.
Honkela, Antti; Valpola, Harri
2004-07-01
The bits-back coding, first introduced by Wallace in 1990 and later by Hinton and van Camp in 1993, provides an interesting link between Bayesian learning and information-theoretic minimum-description-length (MDL) learning approaches. Bits-back coding allows interpreting the cost function used in the variational Bayesian method called ensemble learning as a code length, in addition to the Bayesian view of it as the misfit of the posterior approximation and a lower bound of model evidence. Combining these two viewpoints provides interesting insights into the learning process and the functions of different parts of the model. In this paper, the problem of variational Bayesian learning of hierarchical latent variable models is used to demonstrate the benefits of the two views. The code-length interpretation provides new views on many parts of the problem, such as model comparison and pruning, and helps explain many phenomena occurring in learning.
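For reference, the identity behind this dual reading can be written out (standard variational-Bayes algebra in our own notation, not the paper's): the ensemble-learning cost is simultaneously a misfit-plus-evidence bound and an expected message length minus the bits reclaimable from the posterior approximation,

```latex
\mathcal{C}
  = \mathrm{KL}\big(q(\theta)\,\|\,p(\theta \mid X)\big) - \log p(X)
  = \underbrace{\mathbb{E}_{q}\big[-\log p(X,\theta)\big]}_{\text{expected code length}}
    \;-\; \underbrace{\mathcal{H}\big(q(\theta)\big)}_{\text{bits back}}
```

so minimizing $\mathcal{C}$ tightens the lower bound $-\mathcal{C} \le \log p(X)$ on the model evidence while shortening the effective description length of the data.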
Leite, Fabio L.; Bueno, Carolina C.; Da Róz, Alessandra L.; Ziemath, Ervino C.; Oliveira, Osvaldo N.
2012-01-01
The increasing importance of studies on soft matter and their impact on new technologies, including those associated with nanotechnology, has brought intermolecular and surface forces to the forefront of physics and materials science, for these are the prevailing forces in micro and nanosystems. With experimental methods such as the atomic force spectroscopy (AFS), it is now possible to measure these forces accurately, in addition to providing information on local material properties such as elasticity, hardness and adhesion. This review provides the theoretical and experimental background of AFS, adhesion forces, intermolecular interactions and surface forces in air, vacuum and in solution. PMID:23202925
NASA Astrophysics Data System (ADS)
Llorens, Ariadna; Berbegal-Mirabent, Jasmina; Llinàs-Audet, Xavier
2017-07-01
Engineering education is facing new challenges to effectively provide the appropriate skills to future engineering professionals according to market demands. This study proposes a model based on active learning methods, which is expected to facilitate the acquisition of the professional skills most highly valued in the information and communications technology (ICT) market. The theoretical foundations of the study are based on the specific literature on active learning methodologies. The Delphi method is used to establish the fit between learning methods and generic skills required by the ICT sector. An innovative proposition is therefore presented that groups the required skills in relation to the teaching method that best develops them. The qualitative research suggests that a combination of project-based learning and the learning contract is sufficient to ensure a satisfactory skills level for this profile of engineers.
Estimating groundwater recharge
Healy, Richard W.; Scanlon, Bridget R.
2010-01-01
Understanding groundwater recharge is essential for successful management of water resources and modeling fluid and contaminant transport within the subsurface. This book provides a critical evaluation of the theory and assumptions that underlie methods for estimating rates of groundwater recharge. Detailed explanations of the methods are provided - allowing readers to apply many of the techniques themselves without needing to consult additional references. Numerous practical examples highlight benefits and limitations of each method. Approximately 900 references allow advanced practitioners to pursue additional information on any method. For the first time, theoretical and practical considerations for selecting and applying methods for estimating groundwater recharge are covered in a single volume with uniform presentation. Hydrogeologists, water-resource specialists, civil and agricultural engineers, earth and environmental scientists and agronomists will benefit from this informative and practical book. It can serve as the primary text for a graduate-level course on groundwater recharge or as an adjunct text for courses on groundwater hydrology or hydrogeology.
NASA Astrophysics Data System (ADS)
Feng, Ximeng; Li, Gang; Yu, Haixia; Wang, Shaohui; Yi, Xiaoqing; Lin, Ling
2018-03-01
Noninvasive blood component analysis by spectroscopy has been a hot spot in biomedical engineering in recent years. The dynamic spectrum provides an excellent idea for noninvasive blood component measurement, but studies have been limited to the application of broadband light sources and high-resolution spectroscopy instruments. In order to remove redundant information, a more effective wavelength selection method is presented in this paper. In contrast to many common wavelength selection methods, this method is based on the sensing mechanism; it has a clear rationale and can effectively avoid noise from the acquisition system. The spectral difference coefficient is theoretically proved to have guiding significance for wavelength selection. After theoretical analysis, the multi-band spectral difference coefficient wavelength selection method, combined with the dynamic spectrum, is proposed. An experimental analysis based on clinical trial data from 200 volunteers has been conducted to illustrate the effectiveness of this method. The extreme learning machine was used to develop the calibration models between the dynamic spectrum data and hemoglobin concentration. The experimental results show that the prediction precision of hemoglobin concentration using the multi-band spectral difference coefficient wavelength selection method is higher than that of other methods.
A Bayesian method for assessing multiscale species-habitat relationships
Stuber, Erica F.; Gruber, Lutz F.; Fontaine, Joseph J.
2017-01-01
Context: Scientists face several theoretical and methodological challenges in appropriately describing fundamental wildlife-habitat relationships in models. The spatial scales of habitat relationships are often unknown, and are expected to follow a multi-scale hierarchy. Typical frequentist or information theoretic approaches often suffer under collinearity in multi-scale studies, fail to converge when models are complex, or represent an intractable computational burden when candidate model sets are large. Objectives: Our objective was to implement an automated, Bayesian method for inference on the spatial scales of habitat variables that best predict animal abundance. Methods: We introduce Bayesian latent indicator scale selection (BLISS), a Bayesian method to select spatial scales of predictors using latent scale indicator variables that are estimated with reversible-jump Markov chain Monte Carlo sampling. BLISS does not suffer from collinearity, and substantially reduces computation time of studies. We present a simulation study to validate our method and apply our method to a case-study of land cover predictors for ring-necked pheasant (Phasianus colchicus) abundance in Nebraska, USA. Results: Our method returns accurate descriptions of the explanatory power of multiple spatial scales, and unbiased and precise parameter estimates under commonly encountered data limitations including spatial scale autocorrelation, effect size, and sample size. BLISS outperforms commonly used model selection methods including stepwise and AIC, and reduces runtime by 90%. Conclusions: Given the pervasiveness of scale-dependency in ecology, and the implications of mismatches between the scales of analyses and ecological processes, identifying the spatial scales over which species are integrating habitat information is an important step in understanding species-habitat relationships. BLISS is a widely applicable method for identifying important spatial scales, propagating scale uncertainty, and testing hypotheses of scaling relationships.
2014-01-01
Background More than a fifth of Australian children arrive at school developmentally vulnerable. To counteract this, the Healthy Kids Check (HKC), a one-off health assessment aimed at preschool children, was introduced in 2008 into Australian general practice. Delivery of services has, however, remained low. The Theoretical Domains Framework, which provides a method to understand behaviours theoretically, can be condensed into three core components: capability, opportunity and motivation, and the COM-B model. Utilising this system, this study aimed to determine the barriers and enablers to delivery of the HKC, to inform the design of an intervention to promote provision of HKC services in Australian general practice. Methods Data from 6 focus group discussions with 40 practitioners from general practices in socio-culturally diverse areas of Melbourne, Victoria, were analysed using thematic analysis. Results Many practitioners expressed uncertainty regarding their capabilities and the practicalities of delivering HKCs, but in some cases HKCs had acted as a catalyst for professional development. Key connections between immunisation services and delivery of HKCs prompted practices to have systems of recall and reminder in place. Standardisation of methods for developmental assessment and streamlined referral pathways affected practitioners’ confidence and motivation to perform HKCs. Conclusion Application of a systematic framework effectively demonstrated how a number of behaviours could be targeted to increase delivery of HKCs. Interventions need to target practice systems, the support of office staff and referral options, as well as practitioners’ training. Many behavioural changes could be applied through a single intervention programme delivered by the primary healthcare organisations charged with local healthcare needs (Medicare Locals) providing vital links between general practice, community and the health of young children. PMID:24886520
Information-Theoretic Metrics for Visualizing Gene-Environment Interactions
Chanda, Pritam ; Zhang, Aidong ; Brazeau, Daniel ; Sucheston, Lara ; Freudenheim, Jo L. ; Ambrosone, Christine ; Ramanathan, Murali
2007-01-01
The purpose of our work was to develop heuristics for visualizing and interpreting gene-environment interactions (GEIs) and to assess the dependence of candidate visualization metrics on biological and study-design factors. Two information-theoretic metrics, the k-way interaction information (KWII) and the total correlation information (TCI), were investigated. The effectiveness of the KWII and TCI to detect GEIs in a diverse range of simulated data sets and a Crohn disease data set was assessed. The sensitivity of the KWII and TCI spectra to biological and study-design variables was determined. Head-to-head comparisons with the relevance-chain, multifactor dimensionality reduction, and the pedigree disequilibrium test (PDT) methods were obtained. The KWII and TCI spectra, which are graphical summaries of the KWII and TCI for each subset of environmental and genotype variables, were found to detect each known GEI in the simulated data sets. The patterns in the KWII and TCI spectra were informative for factors such as case-control misassignment, locus heterogeneity, allele frequencies, and linkage disequilibrium. The KWII and TCI spectra were found to have excellent sensitivity for identifying the key disease-associated genetic variations in the Crohn disease data set. In head-to-head comparisons with the relevance-chain, multifactor dimensionality reduction, and PDT methods, the results from visual interpretation of the KWII and TCI spectra performed satisfactorily. The KWII and TCI are promising metrics for visualizing GEIs. They are capable of detecting interactions among numerous single-nucleotide polymorphisms and environmental variables for a diverse range of GEI models. PMID:17924337
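For reference, both metrics have standard entropy-based definitions (notation ours, not the paper's). For attributes $X_1,\dots,X_k$ (genotypes, environment, phenotype), the total correlation information is the difference between the marginal entropies and the joint entropy, while the k-way interaction information is an alternating sum of joint entropies over all subsets:

```latex
\mathrm{TCI}(X_1,\dots,X_k) \;=\; \sum_{i=1}^{k} H(X_i) \;-\; H(X_1,\dots,X_k),
\qquad
\mathrm{KWII}(X_1,\dots,X_k) \;=\; -\!\!\sum_{S \subseteq \{X_1,\dots,X_k\}} (-1)^{k-|S|}\, H(S)
```

For k = 2 both reduce to the familiar mutual information $I(X_1;X_2)$; for k ≥ 3 the KWII can also be negative, signalling redundancy rather than synergy among the attributes.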
Maximizing the Biochemical Resolving Power of Fluorescence Microscopy
Esposito, Alessandro; Popleteeva, Marina; Venkitaraman, Ashok R.
2013-01-01
Most recent advances in fluorescence microscopy have focused on achieving spatial resolutions below the diffraction limit. However, the inherent capability of fluorescence microscopy to non-invasively resolve different biochemical or physical environments in biological samples has not yet been formally described, because an adequate and general theoretical framework is lacking. Here, we develop a mathematical characterization of the biochemical resolution in fluorescence detection with Fisher information analysis. To improve the precision and the resolution of quantitative imaging methods, we demonstrate strategies for the optimization of fluorescence lifetime, fluorescence anisotropy and hyperspectral detection, as well as different multi-dimensional techniques. We describe optimized imaging protocols, provide optimization algorithms and describe precision and resolving power in biochemical imaging thanks to the analysis of the general properties of Fisher information in fluorescence detection. These strategies enable the optimal use of the information content available within the limited photon-budget typically available in fluorescence microscopy. This theoretical foundation leads to a generalized strategy for the optimization of multi-dimensional optical detection, and demonstrates how the parallel detection of all properties of fluorescence can maximize the biochemical resolving power of fluorescence microscopy, an approach we term Hyper Dimensional Imaging Microscopy (HDIM). Our work provides a theoretical framework for the description of the biochemical resolution in fluorescence microscopy, irrespective of spatial resolution, and for the development of a new class of microscopes that exploit multi-parametric detection systems. PMID:24204821
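As background for the Fisher-information analysis described here (textbook definitions, not the paper's specific derivations): the information a detected photon carries about a parameter $\theta$, such as a fluorescence lifetime, anisotropy, or spectral weight, bounds the precision of any unbiased estimator through the Cramér-Rao inequality,

```latex
F(\theta) \;=\; \mathbb{E}\!\left[\left(\frac{\partial \ln p(x \mid \theta)}{\partial \theta}\right)^{2}\right],
\qquad
\mathrm{Var}\big(\hat{\theta}\big) \;\ge\; \frac{1}{N\,F(\theta)}
```

with $N$ the number of detected photons. Maximizing the per-photon Fisher information is what lets a detection scheme extract the most biochemical contrast from a fixed photon budget.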
Estimating Temporal Causal Interaction between Spike Trains with Permutation and Transfer Entropy
Li, Zhaohui; Li, Xiaoli
2013-01-01
Estimating the causal interaction between neurons is very important for better understanding the functional connectivity in neuronal networks. We propose a method called normalized permutation transfer entropy (NPTE) to evaluate the temporal causal interaction between spike trains, which quantifies the fraction of ordinal information in one neuron that is present in another. The performance of this method is evaluated with spike trains generated by an Izhikevich neuronal model. Results show that the NPTE method can effectively estimate the causal interaction between two neurons, independently of data length. Considering both the precision of the estimated time delay and the robustness of the estimated information flow against neuronal firing rate, the NPTE method is superior to other information-theoretic methods, including normalized transfer entropy, symbolic transfer entropy and permutation conditional mutual information. To test the performance of NPTE on simulated, biophysically realistic synapses, a cortical network based on the Izhikevich neuronal model is employed. It is found that the NPTE method is able to characterize mutual interactions and to exactly identify spurious causality in a network of three neurons. We conclude that the proposed method yields more reliable comparisons of interactions between different pairs of neurons and is a promising tool for uncovering details of the neural code. PMID:23940662
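The general recipe behind such estimators (ordinal symbolization followed by a conditional-information calculation) can be sketched compactly. The sketch below is a plain symbolic transfer entropy, not the normalized NPTE of the paper, and the coupled toy series stand in for binned spike trains:

    import numpy as np
    from collections import Counter

    def ordinal_symbols(x, order=3):
        """Replace each length-`order` window by its rank (permutation) pattern."""
        n = len(x) - order + 1
        return [tuple(np.argsort(x[i:i + order])) for i in range(n)]

    def H(*cols):
        """Plug-in joint entropy (bits)."""
        counts = Counter(zip(*cols))
        n = sum(counts.values())
        p = np.array([c / n for c in counts.values()])
        return -np.sum(p * np.log2(p))

    def transfer_entropy(src, dst, order=3):
        """TE(src -> dst) = H(d+,d) - H(d) + H(d,s) - H(d+,d,s) on ordinal symbols."""
        s, d = ordinal_symbols(src, order), ordinal_symbols(dst, order)
        d_next, d_now, s_now = d[1:], d[:-1], s[:-1]
        return H(d_next, d_now) - H(d_now) + H(d_now, s_now) - H(d_next, d_now, s_now)

    rng = np.random.default_rng(1)
    x = rng.normal(size=2000)
    y = np.roll(x, 1) + 0.5 * rng.normal(size=2000)   # y is driven by x with delay 1
    print(transfer_entropy(x, y), transfer_entropy(y, x))   # expect the first to dominate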
Farmer, Richard F; Seeley, John R; Kosty, Derek B; Lewinsohn, Peter M
2009-11-01
Research on hierarchical modeling of psychopathology has frequently identified 2 higher order latent factors, internalizing and externalizing. When based on the comorbidity of psychiatric diagnoses, the externalizing domain has usually been modeled as a single latent factor. Multivariate studies of externalizing symptom features, however, suggest multidimensionality. To address this apparent contradiction, confirmatory factor analytic methods and information-theoretic criteria were used to evaluate 4 theoretically plausible measurement models based on lifetime comorbidity patterns of 7 putative externalizing disorders. Diagnostic information was collected at 4 assessment waves from an age-based cohort of 816 persons between the ages of 14 and 33. A 2-factor model that distinguished oppositional behavior disorders (attention-deficit/hyperactivity disorder, oppositional defiant disorder) from social norm violation disorders (conduct disorder, adult antisocial behavior, alcohol use disorder, cannabis use disorder, hard drug use disorder) demonstrated consistently good fit and superior approximating abilities. Analyses of psychosocial outcomes measured at the last assessment wave supported the validity of this 2-factor model. Implications of this research for the theoretical understanding of domain-related disorders and the organization of classification systems are discussed. (PsycINFO Database Record (c) 2009 APA, all rights reserved.)
The effect of density gradients on hydrometers
NASA Astrophysics Data System (ADS)
Heinonen, Martti; Sillanpää, Sampo
2003-05-01
Hydrometers are simple but effective instruments for measuring the density of liquids. In this work, we studied the effect of non-uniform density of liquid on a hydrometer reading. The effect induced by vertical temperature gradients was investigated theoretically and experimentally. A method for compensating for the effect mathematically was developed and tested with experimental data obtained with the MIKES hydrometer calibration system. In the tests, the method was found reliable. However, the reliability depends on the available information on the hydrometer dimensions and density gradients.
Examining Neuronal Connectivity and Its Role in Learning and Memory
NASA Astrophysics Data System (ADS)
Gala, Rohan
Learning and long-term memory formation are accompanied with changes in the patterns and weights of synaptic connections in the underlying neuronal network. However, the fundamental rules that drive connectivity changes, and the precise structure-function relationships within neuronal networks remain elusive. Technological improvements over the last few decades have enabled the observation of large but specific subsets of neurons and their connections in unprecedented detail. Devising robust and automated computational methods is critical to distill information from ever-increasing volumes of raw experimental data. Moreover, statistical models and theoretical frameworks are required to interpret the data and assemble evidence into understanding of brain function. In this thesis, I first describe computational methods to reconstruct connectivity based on light microscopy imaging experiments. Next, I use these methods to quantify structural changes in connectivity based on in vivo time-lapse imaging experiments. Finally, I present a theoretical model of associative learning that can explain many stereotypical features of experimentally observed connectivity.
Sabbar, Shaho; Hyun, Daiwon
2016-01-01
After a piece of information is put into a network, its fate depends on the behaviors of the network's nodes; nodes that are equipped with the hardware and software of the information age and are more powerful than at any time in the past. This study suggests that a useful line of research for communication, marketing and advertising would be one that looks for patterns in the reactions of the nodes toward different pieces of information. This study used Facebook to see how people reacted to different types of messages in terms of liking, sharing and commenting. Rather than looking for universal, generalizable patterns, we have tried to examine the practicality of the proposed method. The practical part of the study follows a short theoretical discussion of the flow of information in a digital world. The results revealed dozens of significant relations between the examined variables. This study, its theoretical discussion and its results suggest that it is practical to study the relations between the characteristics of Facebook messages and the types of reactions (liking, sharing and commenting) they attract.
Yardley, Lucy; Morrison, Leanne G; Andreou, Panayiota; Joseph, Judith; Little, Paul
2010-09-17
It is recognised as good practice to use qualitative methods to elicit users' views of internet-delivered health-care interventions during their development. This paper seeks to illustrate the advantages of combining usability testing with 'theoretical modelling', i.e. analyses that relate the findings of qualitative studies during intervention development to social science theory, in order to gain deeper insights into the reasons and context for how people respond to the intervention. This paper illustrates how usability testing may be enriched by theoretical modelling by means of two qualitative studies of users' views of the delivery of information in an internet-delivered intervention to help users decide whether they needed to seek medical care for their cold or flu symptoms. In Study 1, 21 participants recruited from a city in southern England were asked to 'think aloud' while viewing draft web-pages presented in paper format. In Study 2, views of our prototype website were elicited, again using think aloud methods, in a sample of 26 participants purposively sampled for diversity in education levels. Both data-sets were analysed by thematic analysis. Study 1 revealed that although the information provided by the draft web-pages had many of the intended empowering benefits, users often felt overwhelmed by the quantity of information. Relating these findings to theory and research on factors influencing preferences for information-seeking we hypothesised that to meet the needs of different users (especially those with lower literacy levels) our website should be designed to provide only essential personalised advice, but with options to access further information. Study 2 showed that our website design did prove accessible to users with different literacy levels. However, some users seemed to want still greater control over how information was accessed. Educational level need not be an insuperable barrier to appreciating web-based access to detailed health-related information, provided that users feel they can quickly gain access to the specific information they seek.
Milles, Julien; Zhu, Yue Min; Gimenez, Gérard; Guttmann, Charles R G; Magnin, Isabelle E
2007-03-01
A novel approach for correcting intensity nonuniformity in magnetic resonance imaging (MRI) is presented. This approach is based on the simultaneous use of spatial and gray-level histogram information. Spatial information about intensity nonuniformity is obtained using cubic B-spline smoothing. Gray-level histogram information of the image corrupted by intensity nonuniformity is exploited from a frequency-domain point of view. The proposed correction method is illustrated using both physical phantom and human brain images. The results are consistent with theoretical prediction, and demonstrate a new way of dealing with intensity nonuniformity problems. They are all the more significant as the ground truth on intensity nonuniformity is unknown in clinical images.
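The spatial half of such corrections is commonly implemented as low-pass smoothing of the log-intensity image. The crude stand-in sketch below uses Gaussian smoothing in place of the paper's cubic B-spline smoothing, on a synthetic phantom of our own invention:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    tissue = rng.choice([0.3, 0.6, 0.9], size=(128, 128))   # piecewise tissue intensities
    yy, xx = np.mgrid[0:128, 0:128] / 128.0
    bias = np.exp(0.4 * (xx + yy))                          # smooth multiplicative nonuniformity
    observed = tissue * bias

    log_obs = np.log(observed)
    log_bias_est = gaussian_filter(log_obs, sigma=30)       # low-pass estimate of the bias field
    corrected = np.exp(log_obs - log_bias_est + log_bias_est.mean())

Working in the log domain turns the multiplicative bias into an additive one, which is also what makes the gray-level histogram's frequency structure accessible to the second half of the method.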
Theoretical methods for estimating moments of inertia of trees and boles.
John A. Sturos
1973-01-01
Presents a theoretical method for estimating the mass moments of inertia of full trees and boles about a transverse axis. Estimates from the theoretical model compared closely with experimental data on aspen and red pine trees obtained in the field by the pendulum method. The theoretical method presented may be used to estimate the mass moments of inertia and other...
Kwasnicka, Dominika; Dombrowski, Stephan U; White, Martin; Sniehotta, Falko
2016-01-01
Background: Behaviour change interventions are effective in supporting individuals in achieving temporary behaviour change. Behaviour change maintenance, however, is rarely attained. The aim of this review was to identify and synthesise current theoretical explanations for behaviour change maintenance to inform future research and practice. Methods: Potentially relevant theories were identified through systematic searches of electronic databases (Ovid MEDLINE, Embase, PsycINFO). In addition, an existing database of 80 theories was searched, and 25 theory experts were consulted. Theories were included if they formulated hypotheses about behaviour change maintenance. Included theories were synthesised thematically to ascertain overarching explanations for behaviour change maintenance. Initial theoretical themes were cross-validated. Findings: One hundred and seventeen behaviour theories were identified, of which 100 met the inclusion criteria. Five overarching, interconnected themes representing theoretical explanations for behaviour change maintenance emerged. Theoretical explanations of behaviour change maintenance focus on the differential nature and role of motives, self-regulation, resources (psychological and physical), habits, and environmental and social influences from initiation to maintenance. Discussion: There are distinct patterns of theoretical explanations for behaviour change and for behaviour change maintenance. The findings from this review can guide the development and evaluation of interventions promoting maintenance of health behaviours and help in the development of an integrated theory of behaviour change maintenance. PMID:26854092
ERIC Educational Resources Information Center
Roberts, Leah; González Alonso, Jorge; Pliatsikas, Christos; Rothman, Jason
2018-01-01
This special issue is a testament to the recent burgeoning interest by theoretical linguists, language acquisitionists and teaching practitioners in the neuroscience of language. It offers a highly valuable, state-of-the-art overview of the neurophysiological methods that are currently being applied to questions in the field of second language…
ERIC Educational Resources Information Center
Gable, Nancy Eileen
2014-01-01
The purpose of this mixed method study was to discover how diaconal ministers in the Evangelical Lutheran Church in America (ELCA) practice their ministry and describe and understand their role as educators of adults. The theoretical framework of the study was informed by the intersection of critical theory, feminist theory, and liberation…
Measuring, Understanding, and Responding to Covert Social Networks: Passive and Active Tomography
2017-11-29
Methods for generating a random sample of networks with desired properties are important tools for the analysis of social, biological, and information networks. ... on Theoretical Foundations for Statistical Network Analysis at the Isaac Newton Institute for Mathematical Sciences at Cambridge U. (organized by ...). Problems span three disciplines (social sciences, statistics, EECS); scientific focus is needed at the interfaces.
Jay M. Ver Hoef; Hailemariam Temesgen; Sergio Gómez
2013-01-01
Forest surveys provide critical information for many diverse interests. Data are often collected from samples, and from these samples, maps of resources and estimates of areal totals or averages are required. In this paper, two approaches for mapping and estimating totals, the spatial linear model (SLM) and k-NN (k-Nearest Neighbor), are compared, theoretically,...
Uncertainty Propagation and the Fano-Based Information Theoretic Method: A Radar Example
2015-02-01
Hogg, "Phase transitions and the search problem", Artificial Intelligence (Elsevier) 81 (1996), pp. 1-15. [39] R... The dispersion of the mean mutual information of the estimate is low enough to support the use of the linear approximation. [Figure axis label: "Mutual Information".]
Exploring super-Gaussianity toward robust information-theoretical time delay estimation.
Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos; Tan, Zheng-Hua; Prasad, Ramjee
2013-03-01
Time delay estimation (TDE) is a fundamental component of speaker localization and tracking algorithms. Most of the existing systems are based on the generalized cross-correlation method, assuming Gaussianity of the source. It has been shown that the distribution of speech, captured with far-field microphones, is highly varying, depending on the noise and reverberation conditions. Thus the performance of TDE is expected to fluctuate depending on the underlying assumption for the speech distribution, being also subject to multi-path reflections and competing background noise. This paper investigates the effect upon TDE when modeling the source signal with different speech-based distributions. An information theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of a Gaussian-distributed source has been replaced by that of a generalized Gaussian distribution, which allows evaluating the problem under a larger set of speech-shaped distributions, ranging from Gaussian to Laplacian and Gamma. Closed forms of the univariate and multivariate entropy expressions of the generalized Gaussian distribution are derived to evaluate the TDE. The results indicate that TDE based on the specific criterion is independent of the underlying assumption for the distribution of the source, for the same covariance matrix.
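The univariate closed form referred to above is standard for the generalized Gaussian density f(x) = beta/(2*alpha*Gamma(1/beta)) * exp(-(|x|/alpha)^beta); here is a quick numerical check of it (ours, not the paper's multivariate derivation):

    import numpy as np
    from scipy.special import gammaln

    def ggd_entropy(alpha, beta):
        """Differential entropy (nats): h = 1/beta + ln(2*alpha/beta) + ln Gamma(1/beta)."""
        return 1.0 / beta + np.log(2.0 * alpha / beta) + gammaln(1.0 / beta)

    sigma = 1.0
    # beta = 2 with alpha = sigma*sqrt(2) recovers the Gaussian entropy 0.5*ln(2*pi*e*sigma^2).
    print(ggd_entropy(sigma * np.sqrt(2.0), 2.0), 0.5 * np.log(2 * np.pi * np.e * sigma ** 2))
    # beta = 1 with alpha = b recovers the Laplacian entropy 1 + ln(2b).
    print(ggd_entropy(1.0, 1.0), 1.0 + np.log(2.0))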
Combination of real options and game-theoretic approach in investment analysis
NASA Astrophysics Data System (ADS)
Arasteh, Abdollah
2016-09-01
Investments in technology create a large amount of capital investment by major companies. Assessing such investment projects is critical to the efficient allocation of resources. Viewing investment projects as real options, this paper develops a method for assessing technology investment decisions in the presence of uncertainty and competition. It combines game-theoretic models of strategic market interactions with a real options approach. Several key characteristics underlie the model. First, our study shows how investment strategies depend on competitive interactions. Under the force of competition, firms hurry to exercise their options early. The resulting "hurry equilibrium" destroys the option value of waiting and leads to aggressive investment behavior. Second, we obtain optimal investment policies and critical investment thresholds. This suggests that integration will be unavoidable in some information product markets. The model yields new intuitions about the forces that shape market behavior as observed in the information technology industry. It can be used to specify optimal investment policies for technology innovations and adoptions, multistage R&D, and investment projects in information technology.
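The real-options half of this combination can be illustrated with a textbook binomial lattice for the option to defer an irreversible investment. The game-theoretic layer (competitors eroding the value of waiting) is not reproduced here, and all numbers and names are invented:

    import numpy as np

    def option_to_invest(V0, I, r, sigma, T, steps=100):
        """Value of an American-style option to pay I and receive a project worth V."""
        dt = T / steps
        u = np.exp(sigma * np.sqrt(dt)); d = 1 / u
        q = (np.exp(r * dt) - d) / (u - d)              # risk-neutral up probability
        V = V0 * u ** np.arange(steps, -1, -1) * d ** np.arange(0, steps + 1)
        val = np.maximum(V - I, 0.0)                    # exercise value at maturity
        for _ in range(steps):
            val = np.exp(-r * dt) * (q * val[:-1] + (1 - q) * val[1:])
            V = V[:-1] * d                              # project values one step earlier (u*d = 1)
            val = np.maximum(val, V - I)                # early exercise: invest now
        return val[0]

    print(option_to_invest(V0=100, I=100, r=0.05, sigma=0.3, T=2.0))

The "hurry equilibrium" of the paper corresponds to competition forcing exercise well before this lattice's optimal stopping boundary.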
Sequence information gain based motif analysis.
Maynou, Joan; Pairó, Erola; Marco, Santiago; Perera, Alexandre
2015-11-09
The detection of regulatory regions in candidate sequences is essential for understanding the regulation of a particular gene and the mechanisms involved. This paper proposes a novel methodology based on information theoretic metrics for finding regulatory sequences in promoter regions. This methodology (SIGMA) has been tested on genomic sequence data for Homo sapiens and Mus musculus. SIGMA has been compared with different publicly available alternatives for motif detection, such as MEME/MAST, Biostrings (Bioconductor package), MotifRegressor, and previous work such as Q-residuals projections or information theoretic based detectors. Comparative results, in the form of receiver operating characteristic curves, show that, in 70% of the studied transcription factor binding sites, the SIGMA detector performs better and behaves more robustly than the methods compared, while having a similar computational time. The performance of SIGMA can be explained by its parametric simplicity in modelling the non-linear co-variability in the binding motif positions. Sequence Information Gain based Motif Analysis (SIGMA) is a generalisation of a non-linear, information-theoretic model of cis-regulatory sequence detection. This generalisation allows us to detect transcription factor binding sites with maximum performance regardless of the covariability observed in the positions of the training set of sequences. SIGMA is freely available to the public at http://b2slab.upc.edu.
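Information-gain motif scoring starts from the per-position information content of a position weight matrix; the minimal sketch below shows that base quantity only (toy counts; SIGMA's modelling of covariability between positions is the paper's contribution and is not reproduced):

    import numpy as np

    def column_information(counts, pseudo=1.0):
        """Per-position information content (bits) of a DNA motif: 2 - H(column)."""
        counts = np.asarray(counts, dtype=float) + pseudo
        p = counts / counts.sum(axis=1, keepdims=True)
        H = -(p * np.log2(p)).sum(axis=1)
        return 2.0 - H   # 2 bits = log2(4) is the maximum for a 4-letter alphabet

    # Rows are motif positions; columns are A, C, G, T counts from aligned binding sites.
    toy_counts = [[20, 0, 0, 0],   # strongly conserved A
                  [8, 2, 8, 2],    # partially informative position
                  [5, 5, 5, 5]]    # uninformative background
    print(column_information(toy_counts))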
Ege, Sarah; Reinholdt-Dunne, Marie Louise
2016-12-01
Cognitive behavioural therapy (CBT) is considered the treatment of choice for paediatric anxiety disorders, yet there remains substantial room for improvement in treatment outcomes. This paper examines whether theory and research into the role of information-processing in the underlying psychopathology of paediatric anxiety disorders indicate possibilities for improving treatment response. Using a critical review of recent theoretical, empirical and academic literature, the paper examines the role of information-processing biases in paediatric anxiety disorders, the extent to which CBT targets information-processing biases, and possibilities for improving treatment response. The literature reviewed indicates a role for attentional and interpretational biases in anxious psychopathology. While there is theoretical grounding and limited empirical evidence to indicate that CBT ameliorates interpretational biases, evidence regarding the effects of CBT on attentional biases is mixed. Novel treatment methods including attention bias modification training, attention feedback awareness and control training, and mindfulness-based therapy may hold potential in targeting attentional biases, and thereby in improving treatment response. The integration of novel interventions into an existing evidence-based protocol is a complex issue and faces important challenges with regard to determining the optimal treatment package. Novel interventions targeting information-processing biases may hold potential in improving response to CBT for paediatric anxiety disorders. Many important questions remain to be answered.
Information spreading in Delay Tolerant Networks based on nodes' behaviors
NASA Astrophysics Data System (ADS)
Wu, Yahui; Deng, Su; Huang, Hongbin
2014-07-01
Information spreading in DTNs (Delay Tolerant Networks) adopts a store-carry-forward method, and nodes receive the message from others directly. However, it is hard to judge whether the information is safe in this communication mode. In this case, a node may observe other nodes' behaviors. At present, there is no theoretical model describing how a node's trust level varies. In addition, due to the uncertain connectivity of DTNs, it is hard for a node to obtain the global state of the network. Therefore, a rational model of a node's trust level should be a function of the node's own observations. For example, if a node finds k nodes carrying a message, it may trust the information with probability p(k). This paper does not explore the real distribution of p(k), but instead presents a unifying theoretical framework to evaluate the performance of information spreading in the above case. This framework is an extension of the traditional SI (susceptible-infected) model, and is applicable when p(k) conforms to any distribution. Simulations based on both synthetic and real motion traces show the accuracy of the framework. Finally, we explore the impact of the nodes' behaviors under certain special distributions through numerical results.
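To make the setup concrete, here is a minimal agent-style simulation in which each susceptible node accepts the message with probability p(k) after having observed k carriers. The specific choice p(k) = 1 - (1 - p0)^k and all parameters are our invented stand-ins, since the paper deliberately leaves p(k) general:

    import numpy as np

    rng = np.random.default_rng(2)
    N, beta, p0, T = 500, 0.002, 0.3, 400   # nodes, per-step meeting scale, trust parameter, steps
    informed = np.zeros(N, dtype=bool)
    informed[0] = True
    seen = np.zeros(N, dtype=int)           # k: carriers observed so far by each node

    for t in range(T):
        sus = np.where(~informed)[0]
        meets = rng.random(sus.size) < beta * informed.sum()   # chance to meet a carrier this step
        met = sus[meets]
        seen[met] += 1
        accept = rng.random(met.size) < 1 - (1 - p0) ** seen[met]   # trust grows with observations
        informed[met[accept]] = True

    print("informed fraction after", T, "steps:", informed.mean())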
Shuryak, Igor; Loucas, Bradford D; Cornforth, Michael N
2017-01-01
The concept of curvature in dose-response relationships figures prominently in radiation biology, encompassing a wide range of interests including radiation protection, radiotherapy and fundamental models of radiation action. In this context, the ability to detect even small amounts of curvature becomes important. Standard (ST) statistical approaches used for this purpose typically involve least-squares regression, followed by a test on sums of squares. Because we have found that these methods are not particularly robust, we investigated an alternative information theoretic (IT) approach, which involves Poisson regression followed by information-theoretic model selection. Our first objective was to compare the performances of the ST and IT methods by using them to analyze mFISH data on gamma-ray-induced simple interchanges in human lymphocytes, and on Monte Carlo simulated data. Real and simulated data sets that contained small-to-moderate curvature were deliberately selected for this exercise. The IT method tended to detect curvature with higher confidence than the ST method. The finding of curvature in the dose response for true simple interchanges is discussed in the context of fundamental models of radiation action. Our second objective was to optimize the design of experiments aimed specifically at detecting curvature. We used Monte Carlo simulation to investigate the following design parameters, constrained by available resources (i.e., the total number of cells to be scored): the optimal number of dose points to use; the best way to apportion the total number of cells among these dose points; and the spacing of dose intervals. Counterintuitively, our simulation results suggest that 4-5 radiation doses were typically optimal, whereas adding more dose points may actually prove detrimental. Superior results were also obtained by implementing unequal dose spacing and unequal distributions in the number of cells scored at each dose.
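The IT approach named above amounts to Poisson regression followed by information-criterion model comparison. A generic sketch is given below; the counts and the log-link model set are invented, and the paper's exact mFISH data and model forms differ:

    import numpy as np
    import statsmodels.api as sm

    dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0])           # Gy, invented design
    cells = np.array([2000, 1500, 1200, 800, 500])       # cells scored per dose
    counts = np.array([3, 25, 60, 190, 700])             # invented aberration counts

    X_lin = sm.add_constant(dose)
    X_quad = sm.add_constant(np.column_stack([dose, dose ** 2]))
    off = np.log(cells)                                  # exposure offset
    fit_lin = sm.GLM(counts, X_lin, family=sm.families.Poisson(), offset=off).fit()
    fit_quad = sm.GLM(counts, X_quad, family=sm.families.Poisson(), offset=off).fit()
    print("delta AIC (linear - quadratic):", fit_lin.aic - fit_quad.aic)
    # A positive delta of a few units or more favors the model with curvature.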
Flight-Test Evaluation of Flutter-Prediction Methods
NASA Technical Reports Server (NTRS)
Lind, RIck; Brenner, Marty
2003-01-01
The flight-test community routinely spends considerable time and money to determine a range of flight conditions, called a flight envelope, within which an aircraft is safe to fly. The cost of determining a flight envelope could be greatly reduced if there were a method of safely and accurately predicting the speed associated with the onset of an instability called flutter. Several methods have been developed with the goal of predicting flutter speeds to improve the efficiency of flight testing. These methods include (1) data-based methods, in which one relies entirely on information obtained from the flight tests, and (2) model-based approaches, in which one relies on a combination of flight data and theoretical models. The data-driven methods include one based on extrapolation of damping trends, one that involves an envelope function, one that involves the Zimmerman-Weissenburger flutter margin, and one that involves a discrete-time auto-regressive model. An example of a model-based approach is that of the flutterometer. These methods have all been shown to be theoretically valid and have been demonstrated on simple test cases; however, until now, they have not been thoroughly evaluated in flight tests. An experimental apparatus called the Aerostructures Test Wing (ATW) was developed to test these prediction methods.
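Of the data-driven methods listed, the damping-trend approach is the simplest to sketch: fit modal damping against airspeed and extrapolate to the zero-damping crossing. All numbers below are invented, and real flutter clearance uses far more conservative procedures than this illustration:

    import numpy as np

    speed = np.array([150., 180., 210., 240., 270.])        # knots, test points (invented)
    zeta = np.array([0.055, 0.048, 0.038, 0.026, 0.012])    # modal damping ratios (invented)

    coeffs = np.polyfit(speed, zeta, 2)                     # quadratic damping trend
    roots = np.roots(coeffs)
    flutter = min(r.real for r in roots
                  if r.real > speed.max() and abs(r.imag) < 1e-9)
    print(f"extrapolated flutter onset near {flutter:.0f} knots")

The envelope-function, flutter-margin and flutterometer methods each replace this raw extrapolation with a statistic that is less sensitive to noise in the fitted trend.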
A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process
NASA Technical Reports Server (NTRS)
Wang, Yi; Tamai, Tetsuo
2009-01-01
Since the complexity of software systems continues to grow, most engineers face two serious problems: the state space explosion problem and the problem of how to debug systems. In this paper, we propose a game-theoretic approach to full branching time model checking on three-valued semantics. The three-valued models and logics provide successful abstraction that overcomes the state space explosion problem. The game style model checking that generates counter-examples can guide refinement or identify validated formulas, which solves the system debugging problem. Furthermore, output of our game style method will give significant information to engineers in detecting where errors have occurred and what the causes of the errors are.
Optimal Measurements for Simultaneous Quantum Estimation of Multiple Phases
NASA Astrophysics Data System (ADS)
Pezzè, Luca; Ciampini, Mario A.; Spagnolo, Nicolò; Humphreys, Peter C.; Datta, Animesh; Walmsley, Ian A.; Barbieri, Marco; Sciarrino, Fabio; Smerzi, Augusto
2017-09-01
A quantum theory of multiphase estimation is crucial for quantum-enhanced sensing and imaging and may link quantum metrology to more complex quantum computation and communication protocols. In this Letter, we tackle one of the key difficulties of multiphase estimation: obtaining a measurement which saturates the fundamental sensitivity bounds. We derive necessary and sufficient conditions for projective measurements acting on pure states to saturate the ultimate theoretical bound on precision given by the quantum Fisher information matrix. We apply our theory to the specific example of interferometric phase estimation using photon number measurements, a convenient choice in the laboratory. Our results thus introduce concepts and methods relevant to the future theoretical and experimental development of multiparameter estimation.
Temporal Methods to Detect Content-Based Anomalies in Social Media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skryzalin, Jacek; Field, Jr., Richard; Fisher, Andrew N.
Here, we develop a method for time-dependent topic tracking and meme trending in social media. Our objective is to identify time periods whose content differs significantly from normal, and we utilize two techniques to do so. The first is an information-theoretic analysis of the distributions of terms emitted during different periods of time. In the second, we cluster documents from each time period and analyze the tightness of each clustering. We also discuss a method of combining the scores created by each technique, and we provide ample empirical analysis of our methodology on various Twitter datasets.
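The first technique compares term distributions across time windows. A minimal sketch using the Jensen-Shannon divergence as the information-theoretic score follows; the paper's exact divergence and Twitter preprocessing are not specified here, so this is a stand-in:

    import numpy as np
    from collections import Counter

    def term_distribution(tokens, vocab):
        c = Counter(tokens)
        p = np.array([c[w] for w in vocab], dtype=float) + 0.5   # additive smoothing
        return p / p.sum()

    def js_divergence(p, q):
        m = 0.5 * (p + q)
        kl = lambda a, b: np.sum(a * np.log2(a / b))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    baseline = "the game tonight was great great fun".split()
    window = "breaking outage outage server server down".split()
    vocab = sorted(set(baseline) | set(window))
    score = js_divergence(term_distribution(baseline, vocab),
                          term_distribution(window, vocab))
    print("anomaly score (bits):", score)   # flag windows whose score exceeds a trailing threshold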
Information theoretic quantification of diagnostic uncertainty.
Westover, M Brandon; Eiseman, Nathaniel A; Cash, Sydney S; Bianchi, Matt T
2012-01-01
Diagnostic test interpretation remains a challenge in clinical practice. Most physicians receive training in the use of Bayes' rule, which specifies how the sensitivity and specificity of a test for a given disease combine with the pre-test probability to quantify the change in disease probability incurred by a new test result. However, multiple studies demonstrate physicians' deficiencies in probabilistic reasoning, especially with unexpected test results. Information theory, a branch of probability theory dealing explicitly with the quantification of uncertainty, has been proposed as an alternative framework for diagnostic test interpretation, but is even less familiar to physicians. We have previously addressed one key challenge in the practical application of Bayes theorem: the handling of uncertainty in the critical first step of estimating the pre-test probability of disease. This essay aims to present the essential concepts of information theory to physicians in an accessible manner, and to extend previous work regarding uncertainty in pre-test probability estimation by placing this type of uncertainty within a principled information theoretic framework. We address several obstacles hindering physicians' application of information theoretic concepts to diagnostic test interpretation. These include issues of terminology (mathematical meanings of certain information theoretic terms differ from clinical or common parlance) as well as the underlying mathematical assumptions. Finally, we illustrate how, in information theoretic terms, one can understand the effect on diagnostic uncertainty of considering ranges instead of simple point estimates of pre-test probability.
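In this framework, a test result's effect on uncertainty can be scored as the change in binary entropy from pre-test to post-test disease probability. A minimal sketch with invented numbers:

    import numpy as np

    H = lambda p: 0.0 if p in (0.0, 1.0) else -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

    def post_test(prior, sens, spec, positive=True):
        """Bayes' rule for a dichotomous test."""
        if positive:
            return sens * prior / (sens * prior + (1 - spec) * (1 - prior))
        return (1 - sens) * prior / ((1 - sens) * prior + spec * (1 - prior))

    prior, sens, spec = 0.20, 0.90, 0.85
    post = post_test(prior, sens, spec, positive=True)
    print(f"post-test P(disease) = {post:.2f}")
    print(f"uncertainty: {H(prior):.2f} -> {H(post):.2f} bits")
    # Note: a particular result can *increase* entropy (here 0.72 -> 0.97 bits),
    # which is exactly the kind of unexpected-result situation the essay discusses.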
Orbital Evasive Target Tracking and Sensor Management
2012-03-30
... sensor management is to maximize the total information gain in the observer-to-target assignment. We compare the information-based approach to the game-theoretic criterion for ... tracking with multiple space-borne observers. The results indicate that the game-theoretic approach is more effective than the information-based approach in ...
Nano-optical information storage induced by the nonlinear saturable absorption effect
NASA Astrophysics Data System (ADS)
Wei, Jingsong; Liu, Shuang; Geng, Yongyou; Wang, Yang; Li, Xiaoyi; Wu, Yiqun; Dun, Aihuan
2011-08-01
Nano-optical information storage is very important in meeting information technology requirements. However, obtaining nanometric optical information recording marks by the traditional optical method is difficult due to diffraction limit restrictions. In the current work, the nonlinear saturable absorption effect is used to generate a subwavelength optical spot and to induce nano-optical information recording and readout. Experimental results indicate that information marks below 100 nm are successfully recorded and read out by a high-density digital versatile disk dynamic testing system with a laser wavelength of 405 nm and a numerical aperture of 0.65. The minimum marks of 60 nm are realized, which is only about 1/12 of the diffraction-limited theoretical focusing spot. This physical scheme is very useful in promoting the development of optical information storage in the nanoscale field.
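The quoted factor of roughly 1/12 is consistent with taking the Airy-disk diameter as the diffraction-limited spot; a quick arithmetic check (our reading, not a calculation from the paper):

    wavelength, NA = 405.0, 0.65                 # nm and numerical aperture from the abstract
    airy_diameter = 1.22 * wavelength / NA       # diffraction-limited focusing spot, ~760 nm
    print(airy_diameter, airy_diameter / 60.0)   # the 60 nm marks are ~1/12.7 of that spot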
Pattern Activity Clustering and Evaluation (PACE)
NASA Astrophysics Data System (ADS)
Blasch, Erik; Banas, Christopher; Paul, Michael; Bussjager, Becky; Seetharaman, Guna
2012-06-01
With the vast amount of network information available on activities of people (i.e. motions, transportation routes, and site visits) there is a need to explore the salient properties of data that detect and discriminate the behavior of individuals. Recent machine learning approaches include methods of data mining, statistical analysis, clustering, and estimation that support activity-based intelligence. We seek to explore contemporary methods in activity analysis using machine learning techniques that discover and characterize behaviors that enable grouping, anomaly detection, and adversarial intent prediction. To evaluate these methods, we describe the mathematics and potential information theory metrics to characterize behavior. A scenario is presented to demonstrate the concept and metrics that could be useful for layered sensing behavior pattern learning and analysis. We leverage work on group tracking, learning and clustering approaches; as well as utilize information theoretical metrics for classification, behavioral and event pattern recognition, and activity and entity analysis. The performance evaluation of activity analysis supports high-level information fusion of user alerts, data queries and sensor management for data extraction, relations discovery, and situation analysis of existing data.
The Deterministic Information Bottleneck
NASA Astrophysics Data System (ADS)
Strouse, D. J.; Schwab, David
2015-03-01
A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.
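The contrast between the two cost functions can be stated compactly. These are the standard forms from the IB literature, written in our notation (q is the encoder being optimized and beta the trade-off parameter):

    % Information bottleneck: compression measured by the mutual information I(X;T)
    \mathcal{L}_{\mathrm{IB}}[q(t|x)] \;=\; I(X;T) \;-\; \beta\, I(T;Y)
    % Deterministic information bottleneck: compression measured by the representation
    % cost H(T); the minimizer of this objective is a deterministic encoder t = f(x)
    \mathcal{L}_{\mathrm{DIB}}[q(t|x)] \;=\; H(T) \;-\; \beta\, I(T;Y)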
Single nucleotide variations: Biological impact and theoretical interpretation
Katsonis, Panagiotis; Koire, Amanda; Wilson, Stephen Joseph; Hsu, Teng-Kuei; Lua, Rhonald C; Wilkins, Angela Dawn; Lichtarge, Olivier
2014-01-01
Genome-wide association studies (GWAS) and whole-exome sequencing (WES) generate massive amounts of genomic variant information, and a major challenge is to identify which variations drive disease or contribute to phenotypic traits. Because the majority of known disease-causing mutations are exonic non-synonymous single nucleotide variations (nsSNVs), most studies focus on whether these nsSNVs affect protein function. Computational studies show that the impact of nsSNVs on protein function reflects sequence homology and structural information and predict the impact through statistical methods, machine learning techniques, or models of protein evolution. Here, we review impact prediction methods and discuss their underlying principles, their advantages and limitations, and how they compare to and complement one another. Finally, we present current applications and future directions for these methods in biological research and medical genetics. PMID:25234433
Measurement of the configuration of a concave surface by the interference of reflected light
NASA Technical Reports Server (NTRS)
Kumazawa, T.; Sakamoto, T.; Shida, S.
1985-01-01
A method whereby a concave surface is irradiated with coherent light and the resulting interference fringes yield information on the concave surface is described. This method can be applied to a surface which satisfies the following conditions: (1) the concave face has a mirror surface; (2) the profile of the face is expressed by a mathematical function with a point of inflection. In this interferometry, multiple light waves reflected from the concave surface interfere and make fringes wherever the reflected light propagates. From the interference fringe orders, the optical path differences can be determined. Photographs of the fringe patterns for a uniformly loaded thin silicon plate clamped at the edge are shown experimentally. The experimental and the theoretical values of the maximum optical path difference show good agreement. This simple method can be applied to obtain accurate information on concave surfaces.
Hadwin, Julie A; Garner, Matthew; Perez-Olivas, Gisela
2006-11-01
The aim of this paper is to explore parenting as one potential route through which information processing biases for threat develop in children. It reviews information processing biases in childhood anxiety in the context of theoretical models and empirical research in the adult anxiety literature. Specifically, it considers how adult models have been used and adapted to develop a theoretical framework with which to investigate information processing biases in children. The paper then considers research which specifically aims to understand the relationship between parenting and the development of information processing biases in children. It concludes that a clearer theoretical framework is required to understand the significance of information biases in childhood anxiety, as well as their origins in parenting.
Nonlinear dimensionality reduction of data lying on the multicluster manifold.
Meng, Deyu; Leung, Yee; Fung, Tung; Xu, Zongben
2008-08-01
A new method, which is called decomposition-composition (D-C) method, is proposed for the nonlinear dimensionality reduction (NLDR) of data lying on the multicluster manifold. The main idea is first to decompose a given data set into clusters and independently calculate the low-dimensional embeddings of each cluster by the decomposition procedure. Based on the intercluster connections, the embeddings of all clusters are then composed into their proper positions and orientations by the composition procedure. Different from other NLDR methods for multicluster data, which consider associatively the intracluster and intercluster information, the D-C method capitalizes on the separate employment of the intracluster neighborhood structures and the intercluster topologies for effective dimensionality reduction. This, on one hand, isometrically preserves the rigid-body shapes of the clusters in the embedding process and, on the other hand, guarantees the proper locations and orientations of all clusters. The theoretical arguments are supported by a series of experiments performed on the synthetic and real-life data sets. In addition, the computational complexity of the proposed method is analyzed, and its efficiency is theoretically analyzed and experimentally demonstrated. Related strategies for automatic parameter selection are also examined.
Auditory processing theories of language disorders: past, present, and future.
Miller, Carol A
2011-07-01
The purpose of this article is to provide information that will assist readers in understanding and interpreting research literature on the role of auditory processing in communication disorders. A narrative review was used to summarize and synthesize the literature on auditory processing deficits in children with auditory processing disorder (APD), specific language impairment (SLI), and dyslexia. The history of auditory processing theories of these 3 disorders is described, points of convergence and controversy within and among the different branches of research literature are considered, and the influence of research on practice is discussed. The theoretical and clinical contributions of neurophysiological methods are also reviewed, and suggested approaches for critical reading of the research literature are provided. Research on the role of auditory processing in communication disorders springs from a variety of theoretical perspectives and assumptions, and this variety, combined with controversies over the interpretation of research results, makes it difficult to draw clinical implications from the literature. Neurophysiological research methods are a promising route to better understanding of auditory processing. Progress in theory development and its clinical application is most likely to be made when researchers from different disciplines and theoretical perspectives communicate clearly and combine the strengths of their approaches.
Inkaya, Ersin; Dinçer, Muharrem; Sahan, Emine; Yıldırım, Ismail
2013-10-01
In this paper, we will report a combined experimental and theoretical investigation of the molecular structure and spectroscopic parameters (FT-IR, (1)H NMR, (13)C NMR) of 5-benzoyl-4-phenyl-2-methylthio-1H-pyrimidine. The compound crystallizes in the triclinic space group P-1 with Z=2. The molecular geometry was also optimized using density functional theory (DFT/B3LYP) method with the 6-311G(d,p) and 6-311++G(d,p) basis sets in ground state and compared with the experimental data. All the assignments of the theoretical frequencies were performed by potential energy distributions using VEDA 4 program. Information about the size, shape, charge density distribution and site of chemical reactivity of the molecules has been obtained by mapping electron density isosurface with electrostatic potential (ESP). Also, non-linear optical properties of the title compound were performed at B3LYP/6-311++G(d,p) level. The theoretical results showed an excellent agreement with the experimental values. Copyright © 2013 Elsevier B.V. All rights reserved.
Acoustic imaging of a duct spinning mode by the use of an in-duct circular microphone array.
Wei, Qingkai; Huang, Xun; Peers, Edward
2013-06-01
An imaging method for acoustic spinning modes propagating within a circular duct, using only surface pressure information, is introduced in this paper. The proposed method is developed theoretically and is demonstrated with a numerical simulation case. At present, measurements within a duct have to be conducted using an in-duct microphone array, which is unable to provide the complete acoustic solution across the test section. The proposed method can estimate this immeasurable information by forming a so-called observer. The fundamental idea behind the method was originally developed in control theory for ordinary differential equations. Spinning mode propagation, however, is formulated in partial differential equations. A finite difference technique is used to reduce the associated partial differential equations to a classical form in control. The observer method can thereafter be applied straightforwardly. The algorithm is recursive and can thus operate in real time. A numerical simulation for a straight circular duct is conducted. The acoustic solution on the test section can be reconstructed with good agreement with analytical solutions. The results suggest the potential and applications of the proposed method.
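The observer idea borrowed from control theory is the classical Luenberger construction. A minimal discrete-time sketch on a generic observable system is given below; the matrices are invented and bear no relation to the paper's finite-difference duct model:

    import numpy as np
    from scipy.signal import place_poles

    # Generic observable system x+ = A x + B u, measured output y = C x.
    A = np.array([[1.0, 0.1], [-0.2, 0.95]])
    B = np.array([[0.0], [0.1]])
    C = np.array([[1.0, 0.0]])

    # Observer gain L placing the error dynamics' eigenvalues inside the unit circle.
    L = place_poles(A.T, C.T, [0.5, 0.4]).gain_matrix.T

    x = np.array([1.0, -1.0]); xh = np.zeros(2)   # true and estimated states
    for _ in range(50):
        u = 0.1
        y = C @ x                                  # measured output (surface pressure analogue)
        xh = A @ xh + (B * u).ravel() + (L @ (y - C @ xh)).ravel()
        x = A @ x + (B * u).ravel()
    print("estimation error:", np.linalg.norm(x - xh))   # decays toward zero

The estimation error obeys e+ = (A - L C) e, so choosing L to make A - L C stable guarantees that the unmeasured states are reconstructed from the measured outputs, which is the role the observer plays for the duct's immeasurable acoustic field.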
Török I, András; Vincze, Gábor
2011-01-01
[corrected] The Szondi test is widely applied in clinical diagnostics in Hungary, and it follows from the theory that its interpretation can yield information about attachment. Its validity is supported by empirical research and clinical experience. A closer analysis of modern attachment theory makes clear how the Szondi-test constellations regarding attachment differ from questionnaire-based classifications, which allow discrete measurement of attachment style. With the Szondi test the classification of attachment style is less certain, but when complemented with exploration it is more informative in vector C (the vector of relation and attachment information), whereas short questionnaires permit direct classification of attachment style. Our empirical analysis integrates the clinical and theoretical experience described above. In the present analysis we compare the vector C and S constellations of the two-profile Szondi test in 80 persons with the dimensions of the ECR-R questionnaire and with Collins and Read's questionnaire classification of attachment style. The statistical results indicate that it is legitimate to compare questionnaire methods, which allow discrete classification of attachment, with the Szondi test's information content regarding attachment. Applying the methods together yields a unique, complementary view of the information relating to attachment. Comparing the two methods (projective and questionnaire) also establishes the need for theoretical integration. We also attempt to explain Fraley's evolutionary non-adaptivity of avoidant attachment, in whose presence the adaptivity of early attachment, which counterbalances exploration and security needs and regulates closeness and distance, loses its balance.
Methods for Calibration of Prout-Tompkins Kinetics Parameters Using EZM Iteration and GLO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wemhoff, A P; Burnham, A K; de Supinski, B
2006-11-07
This document contains information regarding the standard procedures used to calibrate chemical kinetics parameters for the extended Prout-Tompkins model to match experimental data. Two methods for calibration are mentioned: EZM calibration and GLO calibration. EZM calibration matches kinetics parameters to three data points, while GLO calibration slightly adjusts kinetic parameters to match multiple points. Information is provided regarding the theoretical approach and application procedure for both of these calibration algorithms. It is recommended that for the calibration process, the user begin with EZM calibration to provide a good estimate, and then fine-tune the parameters using GLO. Two examples have been provided to guide the reader through a general calibration process.
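For reference, the extended Prout-Tompkins rate law being calibrated has the autocatalytic form below. The following integration sketch uses invented parameter values, not LLNL's calibrated ones:

    import numpy as np
    from scipy.integrate import solve_ivp

    def extended_prout_tompkins(t, alpha, k, m, n):
        """dalpha/dt = k * (1 - alpha)^n * alpha^m  (autocatalytic decomposition)."""
        a = np.clip(alpha[0], 1e-12, 1.0)
        return [k * (1 - a) ** n * a ** m]

    k, m, n = 0.5, 0.7, 1.2                 # invented rate constant and reaction orders
    # Start slightly above zero: alpha = 0 is a fixed point of the autocatalytic form.
    sol = solve_ivp(extended_prout_tompkins, (0, 50), [1e-4], args=(k, m, n))
    print("extent of reaction at t = 50:", sol.y[0, -1])

Calibration in either scheme then amounts to adjusting k, m and n until such integrated curves match the experimental thermal-decomposition data.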
2014-01-01
Background The current paper presents a pilot study of interactive assessment using information and communication technology (ICT) to evaluate the knowledge, skills and abilities of staff with no formal education who are working in Swedish elderly care. Methods Theoretical and practical assessment methods were developed and used with simulated patients and computer-based tests to identify strengths and areas for personal development among staff with no formal education. Results Of the 157 staff with no formal education, 87 began the practical and/or theoretical assessments, and 63 completed both assessments. Staff generally passed the practical assessments, with the exception of the morning hygiene assessment, which several failed. Other areas for staff development, i.e. learning objectives whose theoretical assessment more than 50% failed, were: Health, Oral care, Ergonomics, hygiene, esthetic, environmental, Rehabilitation, Assistive technology, Basic healthcare and Laws and organization. None of the staff passed all assessments. Number of years working in elderly care and staff age were not statistically significantly related to the total score of grades on the various learning objectives. Conclusion The interactive assessments were useful in assessing staff members’ practical and theoretical knowledge, skills, and abilities and in identifying areas in need of development. It is important that personnel who lack formal qualifications be clearly identified and given a chance to develop their competence through training, both theoretical and practical. The interactive e-assessment approach analyzed in the present pilot study could serve as a starting point. PMID:24742168
NASA Astrophysics Data System (ADS)
Deepha, V.; Praveena, R.; Sivakumar, Raman; Sadasivam, K.
2014-03-01
Naturally occurring flavonoids are of increasing interest and are well known for their bioactivity as antioxidants. The present investigation employs combined experimental and theoretical methods to determine the radical scavenging activity and phytochemicals present in Crotalaria globosa, a novel plant source. Preliminary quantification shows that the ethanolic extract of the leaves has a higher phenolic and flavonoid content than the root extract; this is also validated through the DPPH radical (DPPH•) assay. Further analysis was carried out with successive leaf extracts in solvents of varying polarity. In the DPPH• and FRAP assays, the ethyl acetate fraction (EtOAc) exhibits the highest scavenging activity, followed by the ethanol fraction (EtOH), whereas in the NOS assay the ethanol fraction slightly predominates over the EtOAc fraction. LC-MS analysis provides tentative evidence for the presence of a flavonoid C-glycoside in the EtOAc fraction (yellow solid). The presence of the flavonoid isoorientin has been confirmed through isolation (PTLC) and detected by spectroscopic methods (UV-visible and 1H NMR). Utilizing the B3LYP/6-311G(d,p) level of theory, the structure and reactivity of isoorientin have been explored theoretically. Analysis of the theoretical bond dissociation energy values for all O-H sites of isoorientin reveals that less energy is required to dissociate an H-atom from the B-ring than from the A- and C-rings. To validate the antioxidant characteristics of isoorientin, the relevant molecular descriptors (IP, HOMO-LUMO, Mulliken spin density analysis and molecular electrostatic potential surfaces) have been computed and interpreted. The experimental and theoretical results show that isoorientin can act as a potent antiradical scavenger in oxidative systems.
2013-01-01
Background In 2005, the International Patient Decision Aids Standards Collaboration identified twelve quality dimensions to guide assessment of patient decision aids. One dimension—the delivery of patient decision aids on the Internet—is relevant when the Internet is used to provide some or all components of a patient decision aid. Building on the original background chapter, this paper provides an updated definition for this dimension, outlines a theoretical rationale, describes current evidence, and discusses emerging research areas. Methods An international, multidisciplinary panel of authors examined the relevant theoretical literature and empirical evidence through 2012. Results The updated definition distinguishes Internet-delivery of patient decision aids from online health information and clinical practice guidelines. Theories in cognitive psychology, decision psychology, communication, and education support the value of Internet features for providing interactive information and deliberative support. Dissemination and implementation theories support Internet-delivery for providing the right information (rapidly updated), to the right person (tailored), at the right time (the appropriate point in the decision making process). Additional efforts are needed to integrate the theoretical rationale and empirical evidence from health technology perspectives, such as consumer health informatics, user experience design, and human-computer interaction. Despite Internet usage ranging from 74% to 85% in developed countries and 80% of users searching for health information, it is unknown how many individuals specifically seek patient decision aids on the Internet. Among the 86 randomized controlled trials in the 2011 Cochrane Collaboration’s review of patient decision aids, only four studies focused on Internet-delivery. Given the limited number of published studies, this paper particularly focused on identifying gaps in the empirical evidence base and identifying emerging areas of research. Conclusions As of 2012, the updated theoretical rationale and emerging evidence suggest potential benefits to delivering patient decision aids on the Internet. However, additional research is needed to identify best practices and quality metrics for Internet-based development, evaluation, and dissemination, particularly in the areas of interactivity, multimedia components, socially-generated information, and implementation strategies. PMID:24625064
Schalock, Robert L; Luckasson, Ruth; Tassé, Marc J; Verdugo, Miguel Angel
2018-04-01
This article describes a holistic theoretical framework that can be used to explain intellectual disability (ID) and organize relevant information into a usable roadmap to guide understanding and application. Developing the framework involved analyzing the four current perspectives on ID and synthesizing this information into a holistic theoretical framework. Practices consistent with the framework are described, and examples are provided of how multiple stakeholders can apply the framework. The article concludes with a discussion of the advantages and implications of a holistic theoretical approach to ID.
Drug-Target Interaction Prediction through Label Propagation with Linear Neighborhood Information.
Zhang, Wen; Chen, Yanlin; Li, Dingfang
2017-11-25
Interactions between drugs and target proteins provide important information for the drug discovery. Currently, experiments identified only a small number of drug-target interactions. Therefore, the development of computational methods for drug-target interaction prediction is an urgent task of theoretical interest and practical significance. In this paper, we propose a label propagation method with linear neighborhood information (LPLNI) for predicting unobserved drug-target interactions. Firstly, we calculate drug-drug linear neighborhood similarity in the feature spaces, by considering how to reconstruct data points from neighbors. Then, we take similarities as the manifold of drugs, and assume the manifold unchanged in the interaction space. At last, we predict unobserved interactions between known drugs and targets by using drug-drug linear neighborhood similarity and known drug-target interactions. The experiments show that LPLNI can utilize only known drug-target interactions to make high-accuracy predictions on four benchmark datasets. Furthermore, we consider incorporating chemical structures into LPLNI models. Experimental results demonstrate that the model with integrated information (LPLNI-II) can produce improved performances, better than other state-of-the-art methods. The known drug-target interactions are an important information source for computational predictions. The usefulness of the proposed method is demonstrated by cross validation and the case study.
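The two ingredients named in this abstract, LLE-style linear neighborhood weights and closed-form label propagation, can be sketched compactly. The toy data, the non-negativity clipping and the hyperparameters below are our simplifications, not the authors' code:

    import numpy as np

    def neighborhood_weights(X, k=3, reg=1e-3):
        """Reconstruct each row of X from its k nearest rows; weights sum to 1."""
        n = X.shape[0]
        W = np.zeros((n, n))
        for i in range(n):
            d = np.linalg.norm(X - X[i], axis=1)
            nbrs = np.argsort(d)[1:k + 1]             # skip the point itself
            Z = X[nbrs] - X[i]
            G = Z @ Z.T + reg * np.eye(k)             # regularized local Gram matrix
            w = np.linalg.solve(G, np.ones(k))
            w = np.maximum(w, 0)                      # keep the propagation matrix stable
            W[i, nbrs] = w / max(w.sum(), 1e-12)
        return W

    def label_propagation(W, Y, alpha=0.9):
        """Closed form F = (1 - alpha) * (I - alpha W)^{-1} Y."""
        n = W.shape[0]
        return (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * W, Y)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 8))                      # toy drug feature vectors
    Y = (rng.random((30, 5)) < 0.1).astype(float)     # known drug-target interaction matrix
    scores = label_propagation(neighborhood_weights(X), Y)   # rank unobserved pairs by score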
Pearce, Michael; Hee, Siew Wan; Madan, Jason; Posch, Martin; Day, Simon; Miller, Frank; Zohar, Sarah; Stallard, Nigel
2018-02-08
Most confirmatory randomised controlled clinical trials (RCTs) are designed with specified power, usually 80% or 90%, for a hypothesis test conducted at a given significance level, usually 2.5% for a one-sided test. Approval of the experimental treatment by regulatory agencies is then based on the result of such a significance test with other information to balance the risk of adverse events against the benefit of the treatment to future patients. In the setting of a rare disease, recruiting sufficient patients to achieve conventional error rates for clinically reasonable effect sizes may be infeasible, suggesting that the decision-making process should reflect the size of the target population. We considered the use of a decision-theoretic value of information (VOI) method to obtain the optimal sample size and significance level for confirmatory RCTs in a range of settings. We assume the decision maker represents society. For simplicity we assume the primary endpoint to be normally distributed with unknown mean following some normal prior distribution representing information on the anticipated effectiveness of the therapy available before the trial. The method is illustrated by an application in an RCT in haemophilia A. We explicitly specify the utility in terms of improvement in primary outcome and compare this with the costs of treating patients, both financial and in terms of potential harm, during the trial and in the future. The optimal sample size for the clinical trial decreases as the size of the population decreases. For non-zero cost of treating future patients, either monetary or in terms of potential harmful effects, stronger evidence is required for approval as the population size increases, though this is not the case if the costs of treating future patients are ignored. Decision-theoretic VOI methods offer a flexible approach with both type I error rate and power (or equivalently trial sample size) depending on the size of the future population for whom the treatment under investigation is intended. This might be particularly suitable for small populations when there is considerable information about the patient population.
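The trade-off described here can be schematized for a normal endpoint with a normal prior: expected societal gain from treating the future population, minus the cost of trialling. The utilities, costs and approval rule below are invented stand-ins for the paper's formulation:

    import numpy as np

    rng = np.random.default_rng(0)
    mu0, tau0 = 0.1, 0.3        # prior mean and sd of the treatment effect
    sigma = 1.0                 # sd of the endpoint
    N_pop = 2000                # size of the future patient population (rare disease)
    cost_per_patient = 0.05     # trial cost per recruit, in utility units

    def expected_net_gain(n, sims=20000):
        mu = rng.normal(mu0, tau0, sims)                  # draws of the true effect
        est = rng.normal(mu, sigma / np.sqrt(n))          # trial estimates
        approve = est > 1.96 * sigma / np.sqrt(n)         # stand-in one-sided approval rule
        return np.mean(np.where(approve, N_pop * mu, 0.0)) - cost_per_patient * n

    grid = np.arange(10, 500, 10)
    best = max(grid, key=expected_net_gain)
    print("approximately optimal trial size:", best)

Shrinking N_pop in this toy model shrinks the optimal n, mirroring the paper's finding that trial size should scale with the size of the population the decision serves.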
ERIC Educational Resources Information Center
Healy, Lulu; de Carvalho, Cláudia Cristina Soares
2014-01-01
This article focusses on a programme of research into the teaching and learning of proof inspired by Celia Hoyles. By revisiting the first of a series of projects into justifying and proving in school mathematics developed by Celia in the 1990s and by considering how the innovative research methods adopted as well as the results obtained impacted…
ERIC Educational Resources Information Center
Phillipson, J., Ed.
Included are papers (in French or English, with summaries in the other language) presented at a UNESCO-International Biological Programme symposium in 1967. The symposium provided an opportunity for the exchange of information on recent advances in soil ecology, with particular emphasis on soil productivity. Papers on broader theoretical aspects…
1987-03-01
1. Introduction. Analyses of industrial competition have attained a new vigor with the application of game-theoretic methods. The process of... competition is represented in models that reflect genuine struggles for entry, market power, and continuing survival. Dynamics and informational effects are... presents a few of the models developed recently to study competitive processes that affect a firm's entry into a market, and the decision to exit.
Swapping Settings: Researching Information Literacy in Workplace and in Educational Contexts
ERIC Educational Resources Information Center
Lundh, Anna Hampson; Limberg, Louise; Lloyd, Annemaree
2013-01-01
Introduction: Information literacy research is characterised by a multitude of interests, research approaches and theoretical starting-points. Challenges lie in the relevance of research to professional fields where information literacy is a concern, and the need to build a strong theoretical base for the research area. We aim to lay a foundation…
The Theoretical Principles of the Organization of Information Systems.
ERIC Educational Resources Information Center
Kulikowski, Juliusz Lech
A survey of the theoretical problems connected with the organization and design of systems for processing and transmitting information is presented in this article. It gives a definition of Information Systems (IS) and classifies them from various points of view. It discusses briefly the most important aspects of the organization of IS, such as…
NASA Astrophysics Data System (ADS)
Albers, D. J.; Hripcsak, George
2012-03-01
This paper addresses how to calculate and interpret the time-delayed mutual information (TDMI) for a complex, diversely and sparsely measured, possibly non-stationary population of time series of unknown composition and origin. The primary vehicle used for this analysis is a comparison between the time-delayed mutual information averaged over the population and the time-delayed mutual information of an aggregated population (here, aggregation implies the population is conjoined before any statistical estimates are implemented). Through the use of information-theoretic tools, a sequence of practically implementable calculations is detailed that allows the average and aggregate time-delayed mutual information to be interpreted. Moreover, these calculations can also be used to understand the degree of homogeneity or heterogeneity present in the population. To demonstrate that the proposed methods can be used in nearly any situation, they are applied to the time series of glucose measurements from two different subpopulations of individuals from the Columbia University Medical Center electronic health record repository, revealing a picture of the composition of the population as well as physiological features.
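A minimal histogram estimator makes the average-versus-aggregate comparison concrete. This sketch is ours, not the paper's pipeline; the bin count, lag, and random-walk test data are arbitrary choices.

```python
import numpy as np

def tdmi(x, lag, bins=16):
    """Histogram estimate of the time-delayed mutual information
    I(x_t ; x_{t+lag}) of a single series, in bits."""
    a, b = x[:-lag], x[lag:]
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Population-averaged TDMI versus TDMI of the conjoined (aggregate) series:
rng = np.random.default_rng(0)
series = [rng.standard_normal(1000).cumsum() for _ in range(20)]
average_tdmi = np.mean([tdmi(s, lag=5) for s in series])
aggregate_tdmi = tdmi(np.concatenate(series), lag=5)
```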
Information theory in systems biology. Part II: protein-protein interaction and signaling networks.
Mousavian, Zaynab; Díaz, José; Masoudi-Nejad, Ali
2016-03-01
Since its development by Claude Shannon in 1948 to address problems of data storage and data communication over (noisy) channels, information theory has been successfully applied in many other research areas, such as bioinformatics and systems biology. In this manuscript, we attempt to review some of the existing literature in systems biology that uses information-theoretic measures in its calculations. As most of the existing information-theoretic methods in gene regulatory and metabolic networks were reviewed in the first part of the review, in this second part the application of information theory to other types of biological networks, including protein-protein interaction and signaling networks, is surveyed. Copyright © 2015 Elsevier Ltd. All rights reserved.
A Methodology for Investigating Adaptive Postural Control
NASA Technical Reports Server (NTRS)
McDonald, P. V.; Riccio, G. E.
1999-01-01
Our research on postural control and human-environment interactions provides an appropriate scientific foundation for understanding the skill of mass handling by astronauts in weightless conditions (e.g., extravehicular activity or EVA). We conducted an investigation of such skills in NASA's principal mass-handling simulator, the Precision Air-Bearing Floor, at the Johnson Space Center. We have studied skilled body movement within a multidisciplinary context that draws on concepts and methods from the biological and behavioral sciences (e.g., psychology, kinesiology, and neurophysiology) as well as bioengineering. Our multidisciplinary research has led to the development of measures, for manual interactions between individuals and the substantial environment, that plausibly are observable by human sensory systems. We consider these methods to be the most important general contribution of our EVA investigation. We describe our perspective as control-theoretic because it draws more on fundamental concepts about control systems in engineering than it does on working constructs from the subdisciplines of biomechanics and motor control in the bio-behavioral sciences. At the same time, we have attempted to identify the theoretical underpinnings of control-systems engineering that are most relevant to control by human beings. We believe that these underpinnings are implicit in the assumptions that cut across diverse methods in control-systems engineering, especially the various methods associated with "nonlinear control," "fuzzy control," and "adaptive control" in engineering. Our methods are based on these theoretical foundations rather than on the mathematical formalisms that are associated with particular methods in control-systems engineering. The most important aspects of the human-environment interaction in our investigation of mass handling are the functional consequences that body configuration and stability have for the pick-up of information or the achievement of overt goals. It follows that an essential characteristic of postural behavior is the effective maintenance of the orientation and stability of the sensory and motor "platforms" (e.g., head or shoulders) over variations in the human, the environment, and the task. This general skill suggests that individuals should be sensitive to the functional consequences of body configuration and stability. In other words, individuals should perceive the relation between configuration, stability, and performance so that they can adaptively control their interaction with the surroundings. Human-environment interactions constitute robust systems in that individuals can maintain the stability of such interactions over uncertainty about, and variations in, the dynamics of the interaction. Robust interactions allow individuals to adopt orientations and configurations that are not optimal with respect to purely energetic criteria. Individuals can tolerate variation in postural states, and such variation can serve an important function in adaptive systems. Postural variability generates stimulation that is "textured" by the dynamics of the human-environment system. The texture or structure in stimulation provides information about variation in dynamics, and such information can be sufficient to guide adaptation in control strategies. Our methods were designed to measure such informative patterns of movement variability.
Kiesewetter, Jan; Fischer, Martin R.
2015-01-01
Background: Simulation-based teamwork trainings are considered a powerful training method to advance teamwork, which is becoming more relevant in medical education. The measurement of teamwork is of high importance, and several instruments have been developed for various medical domains to meet this need. To our knowledge, no theoretically based and easy-to-use measurement instrument has been published or developed specifically for simulation-based teamwork trainings of medical students. Internist ward rounds function as an important example of teamwork in medicine. Purposes: The purpose of this study was to provide a validated, theoretically based instrument that is easy to use. Furthermore, this study aimed to identify if and when rater scores relate to performance. Methods: Based on a theoretical framework for teamwork behaviour, items regarding four teamwork components (Team Coordination, Team Cooperation, Information Exchange, Team Adjustment Behaviours) were developed. In study one, three ward-round scenarios, simulated by 69 students, were videotaped and rated independently by four trained raters. The instrument was tested for its psychometric properties and factorial structure. In study two, the instrument was tested for construct validity against an external criterion with a second set of 100 students and four raters. Results: In study one, the factorial structure matched the theoretical components but was unable to separate Information Exchange and Team Cooperation. The preliminary version showed adequate psychometric properties (Cronbach’s α=.75). In study two, physician raters’ scores proved more reliable than those of student raters. Furthermore, a close correlation between the scale and clinical performance as an external criterion was shown (r=.64), and the sufficient psychometric properties were replicated (Cronbach’s α=.78). Conclusions: The validation allows for use of the simulated teamwork assessment scale in undergraduate medical ward-round trainings to reliably measure teamwork when rated by physicians. Further studies are needed to verify the applicability of the instrument. PMID:26038684
Raicu, Valerică
2018-06-15
Investigations of static or dynamic interactions between proteins or other biological macromolecules in living cells often rely on the use of fluorescent tags with two different colors in conjunction with adequate theoretical descriptions of Förster Resonance Energy Transfer (FRET) and molecular-level micro-spectroscopic technology. One such method based on these general principles is FRET spectrometry, which allows determination of the quaternary structure of biomolecules from cell-level images of the distributions, or spectra of occurrence frequency of FRET efficiencies. Subsequent refinements allowed combining FRET frequency spectra with molecular concentration information, thereby providing the proportion of molecular complexes with various quaternary structures as well as their binding/dissociation energies. In this paper, we build on the mathematical principles underlying FRET spectrometry to propose two new spectrometric methods, which have distinct advantages compared to other methods. One of these methods relies on statistical analysis of color mixing in subpopulations of fluorescently tagged molecules to probe molecular association stoichiometry, while the other exploits the color shift induced by FRET to also derive geometric information in addition to stoichiometry. The appeal of the first method stems from its sheer simplicity, while the strength of the second consists in its ability to provide structural information. Copyright © 2018 Elsevier B.V. All rights reserved.
Patterns-Based IS Change Management in SMEs
NASA Astrophysics Data System (ADS)
Makna, Janis; Kirikova, Marite
The majority of information systems change management guidelines and standards are either too abstract or too bureaucratic to be easily applicable in small enterprises. This chapter proposes an approach, a method, and a prototype designed especially for information systems change management in small and medium enterprises. The approach is based on proven patterns of changes in the set of information systems elements. The set of elements was obtained by theoretical analysis of information systems and business process definitions and enterprise architectures. The patterns were derived from a number of information systems theories and tested in 48 information systems change management projects. The prototype presents, and helps to handle, three basic change patterns, which help to anticipate the overall scope of changes related to particular elementary changes in an enterprise information system. Using the prototype requires only basic knowledge of organizational business processes and information management.
Shannon information entropy in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Ma, Chun-Wang; Ma, Yu-Gang
2018-03-01
The general idea of information entropy provided by C.E. Shannon "hangs over everything we do" and can be applied to a great variety of problems once the connection between a distribution and the quantities of interest is found. Shannon information entropy quantifies the information carried by a quantity through its distribution, and entropy-based methods have been extensively developed in many scientific areas, including physics. The dynamical properties of the heavy-ion collision (HIC) process make nuclear matter and its evolution difficult and complex to study; here, Shannon information entropy can provide new methods and observables for understanding the physical phenomena both theoretically and experimentally. To better understand the processes of HICs, the main characteristics of typical models, including quantum molecular dynamics models, thermodynamic models, and statistical models, are briefly introduced. Typical applications of Shannon information theory in HICs are collected, covering the chaotic behavior in the branching process of hadron collisions, the liquid-gas phase transition in HICs, and the isobaric difference scaling phenomenon for intermediate-mass fragments produced in HICs of neutron-rich systems. Even though the present applications in heavy-ion collision physics are still relatively simple, they shed light on key open questions. It is suggested to further develop information entropy methods in nuclear reaction models, as well as to develop new analysis methods to study the properties of nuclear matter in HICs, especially the evolution of the dynamical system.
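The basic quantity behind all of these applications is the entropy of an observed distribution. A minimal, hypothetical illustration follows; the yield numbers below are invented, not data from the review.

```python
import numpy as np

def shannon_entropy(counts, base=2.0):
    """Shannon entropy H = -sum_i p_i log p_i of an empirical
    distribution given as raw counts (zero counts are dropped)."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum() / np.log(base))

# Invented fragment-yield distributions for two event classes: the
# broader distribution carries higher entropy (more information).
narrow = shannon_entropy([900, 80, 15, 5])
broad = shannon_entropy([400, 300, 200, 100])
assert broad > narrow
```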
Multimodal Hierarchical Dirichlet Process-Based Active Perception by a Robot
Taniguchi, Tadahiro; Yoshino, Ryo; Takano, Toshiaki
2018-01-01
In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine the set of actions that is most effective for recognizing a target object. We propose an active perception for MHDP method that uses the information gain (IG) maximization criterion and lazy greedy algorithm. We show that the IG maximization criterion is optimal in the sense that the criterion is equivalent to a minimization of the expected Kullback–Leibler divergence between a final recognition state and the recognition state after the next set of actions. However, a straightforward calculation of IG is practically impossible. Therefore, we derive a Monte Carlo approximation method for IG by making use of a property of the MHDP. We also show that the IG has submodular and non-decreasing properties as a set function because of the structure of the graphical model of the MHDP. Therefore, the IG maximization problem is reduced to a submodular maximization problem. This means that greedy and lazy greedy algorithms are effective and have a theoretical justification for their performance. We conducted an experiment using an upper-torso humanoid robot and a second one using synthetic data. The experimental results show that the method enables the robot to select a set of actions that allow it to recognize target objects quickly and accurately. The numerical experiment using the synthetic data shows that the proposed method can work appropriately even when the number of actions is large and a set of target objects involves objects categorized into multiple classes. The results support our theoretical outcomes. PMID:29872389
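The submodularity result is what licenses the lazy greedy algorithm the authors use: cached marginal gains can only shrink, so a stale bound needs re-evaluation only when it reaches the top of a priority queue. A generic Minoux-style sketch follows; the `gain` callback, which in the paper's setting would be the Monte Carlo information-gain estimate for an action set under the MHDP, is left abstract here.

```python
import heapq
import itertools

def lazy_greedy(actions, gain, budget):
    """Maximize a monotone submodular set function under a cardinality
    budget. gain(S, a) returns the marginal value of adding action a to
    the already-selected set S. Stale heap entries are re-evaluated
    lazily; submodularity guarantees cached bounds never increase."""
    selected = []
    tie = itertools.count()                      # tie-breaker for the heap
    heap = [(-gain([], a), 0, next(tie), a) for a in actions]
    heapq.heapify(heap)
    while heap and len(selected) < budget:
        neg_bound, round_seen, _, a = heapq.heappop(heap)
        if round_seen == len(selected):          # bound is fresh: accept
            selected.append(a)
        else:                                    # stale: recompute, push back
            heapq.heappush(heap, (-gain(selected, a), len(selected), next(tie), a))
    return selected
```

With a monotone submodular gain this returns the same (1 - 1/e)-approximate set as plain greedy while typically evaluating far fewer marginal gains, which is exactly why it suits expensive Monte Carlo IG estimates.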
Scholz, Karoline; Dekant, Wolfgang; Völkel, Wolfgang; Pähler, Axel
2005-12-01
A sensitive and specific liquid chromatography-mass spectrometry (LC-MS) method based on the combination of constant neutral loss (CNL) scans with product ion scans was developed on a linear ion trap. The method is applicable for the detection and identification of analytes with identical chemical substructures (such as conjugates of xenobiotics formed in biological systems) which give common CNLs. A specific CNL was observed for thioethers of N-acetyl-L-cysteine (mercapturic acids, MAs) by LC-MS/MS. MS and HPLC parameters were optimized with 16 MAs available as reference compounds. All of these provided a CNL of 129 Da in the negative-ion mode. To assess sensitivity, a multiple reaction monitoring (MRM) mode with 251 theoretical transitions using the CNL of 129 Da combined with a product ion scan (IDA thMRM) was compared with a CNL scan combined with a product ion scan (IDA CNL). An information-dependent acquisition (IDA) uses a survey scan, such as MRM, to generate information and then triggers a second acquisition experiment, such as a product ion scan, using that information. Theoretical MRM (thMRM) means calculated transitions rather than transitions generated from an available standard in the tuning mode. The product ion spectra provide additional information on the chemical structure of the unknown analytes. All MA standards were spiked at low concentrations into rat urine and were detected with both methods, with LODs ranging from 60 pmol/mL to 1.63 nmol/mL for IDA thMRM. The expected product ion spectra were observed in urine. Application of this screening method to biological samples indicated the presence of a number of MAs in the urine of unexposed rats, and resulted in the identification of 1,4-dihydroxynonene mercapturic acid as one of these MAs by negative and positive product ion spectra. These results show that the developed methods have a high potential to serve both as a prescreen to detect unknown MAs and to identify these analytes in a complex matrix.
Study on the supply chain of an enterprise based on axiomatic design
NASA Astrophysics Data System (ADS)
Fan, Shu-hai; Lin, Chao-qun; Ji, Chun; Zhou, Ce; Chen, Peng
2018-06-01
This paper first expounds the basic theoretical knowledge of axiomatic design, and then designs and improves an enterprise supply chain through the two design axioms (the independence axiom and the information axiom). In applying the independence axiom, the designer must identify the needs and problems to be solved, determine the top-level goals, decompose those goals, and formulate the corresponding design equations. In applying the information axiom, the cloud model is used to quantify the amount of information, and two candidate schemes are evaluated and compared. Finally, through axiomatic design, the best solution for improving the supply chain design is obtained. Axiomatic design is a generic, systematic, and sophisticated approach to design that addresses the needs of different customers, and applying it to improve the level of supply chain management is novel. As a mature method, it makes the process efficient and convenient.
Data Structures in Natural Computing: Databases as Weak or Strong Anticipatory Systems
NASA Astrophysics Data System (ADS)
Rossiter, B. N.; Heather, M. A.
2004-08-01
Information systems anticipate the real world. Classical databases store, organise, and search collections of data about that real world, but only as weak anticipatory information systems. This is because of the reductionism and normalisation needed to map the structuralism of natural data onto idealised machines with von Neumann architectures consisting of fixed instructions. Category theory, developed as a formalism to explore the theoretical concept of naturality, shows that methods such as sketches, which arise from graph theory and are only non-natural models of naturality, cannot capture real-world structures for strong anticipatory information systems. Databases need a schema of the natural world. Natural computing databases need the schema itself to be natural as well. Natural computing methods, including neural computers, evolutionary automata, molecular and nanocomputing, and quantum computation, have the potential to be strong. At present they are mainly at the stage of weak anticipatory systems.
Coherent detection of position errors in inter-satellite laser communications
NASA Astrophysics Data System (ADS)
Xu, Nan; Liu, Liren; Liu, De'an; Sun, Jianfeng; Luan, Zhu
2007-09-01
Due to its improved receiver sensitivity and wavelength selectivity, coherent detection has become an attractive alternative to direct detection in inter-satellite laser communications. A novel method for coherent detection of position-error information is proposed. A coherent communication system generally consists of a receive telescope, local oscillator, optical hybrid, photoelectric detector, and optical phase-locked loop (OPLL). On top of this system architecture, the method adds a CCD and a computer as the position-error detector. The CCD captures the interference pattern while the data transmitted by the transmitter laser are being detected. After the pattern is processed and analyzed by the computer, target position information is obtained from its characteristic parameters. The position errors, as the control signal of the PAT subsystem, drive the receiver telescope to keep tracking the target. A theoretical derivation and analysis are presented. The approach extends to coherent laser range finding, in which object distance and position information can be obtained simultaneously.
Alonso, Ariel; Van der Elst, Wim; Molenberghs, Geert; Buyse, Marc; Burzykowski, Tomasz
2016-09-01
In this work a new metric of surrogacy, the so-called individual causal association (ICA), is introduced using information-theoretic concepts and a causal inference model for a binary surrogate and true endpoint. The ICA has a simple and appealing interpretation in terms of uncertainty reduction and, in some scenarios, it seems to provide a more coherent assessment of the validity of a surrogate than existing measures. The identifiability issues are tackled using a two-step procedure. In the first step, the region of the parametric space of the distribution of the potential outcomes, compatible with the data at hand, is geometrically characterized. Further, in a second step, a Monte Carlo approach is proposed to study the behavior of the ICA on the previous region. The method is illustrated using data from the Collaborative Initial Glaucoma Treatment Study. A newly developed and user-friendly R package Surrogate is provided to carry out the evaluation exercise. © 2016, The International Biometric Society.
Plasma Parameters From Reentry Signal Attenuation
Statom, T. K.
2018-02-27
This study presents the application of a theoretically developed method that provides plasma parameter solution space information from measured RF attenuation that occurs during reentry. The purpose is to provide reentry plasma parameter information from the communication signal attenuation. The theoretical development centers on the attenuation and the complex index of refraction. The methodology uses an imaginary-index-of-refraction matching algorithm with a tolerance to find suitable solutions that satisfy the theory. The imaginary matching terms are then used to determine the real index of refraction, resulting in the complex index of refraction. Then a filter is used to reject nonphysical solutions. Signal-attenuation-based plasma parameter properties investigated include the complex index of refraction, plasma frequency, electron density, collision frequency, propagation constant, attenuation constant, phase constant, complex plasma conductivity, and electron mobility. RF plasma thickness attenuation is investigated and compared to the literature. Finally, similar plasma thicknesses for a specific signal attenuation can have different plasma properties.
Research trends and issues in informal science education
NASA Astrophysics Data System (ADS)
Pinthong, Tanwarat; Faikhamta, Chatree
2018-01-01
Research in informal science education (ISE) has become a more prominent area of science education over the past few decades. The main purpose of this study is to analyse research articles in 30 issues of the top three international journals in science education: the Journal of Research in Science Teaching, Science Education, and the International Journal of Science Education. Research articles published between 2007 and 2016 were reviewed and analysed according to the authors' nationality, ISE research topics, research paradigms, methods of data collection, and methods of data analysis. The findings indicated that there were 201 published papers related to informal science education, contributed by 469 authors from 27 different countries. In 2008, there was no article related to informal science education. Statistical analyses showed that authors from the USA are the most dominant, followed by the UK and Israel. The three ISE research topics most frequently investigated were students' informal learning; public understanding of science; and informal perspectives, policies, and paradigms. It was also found that the theoretical framing of informal science education is becoming more strongly rooted in a mix of the sociocultural and constructivist paradigms, with a growing acceptance of qualitative research methods and analyses.
NASA Astrophysics Data System (ADS)
Xiong, Yan; Reichenbach, Stephen E.
1999-01-01
Understanding of hand-written Chinese characters is at such a primitive stage that models include some assumptions about hand-written Chinese characters that are simply false, so Maximum Likelihood Estimation (MLE) may not be an optimal method for hand-written Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum Mutual Information Estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in automatic recognition systems from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of hand-written Chinese characters using a simplified hidden Markov random field. MMIE provides improved performance over MLE in this application.
Modeling of information diffusion in Twitter-like social networks under information overload.
Li, Pei; Li, Wei; Wang, Hui; Zhang, Xin
2014-01-01
Due to the existence of information overload in social networks, it becomes increasingly difficult for users to find useful information according to their interests. This paper considers Twitter-like social networks and proposes models to characterize the process of information diffusion under information overload. Users are classified into different types according to their in-degrees and out-degrees, and user behaviors are generalized into two categories: generating and forwarding. A view scope is introduced to model the user's information-processing capability under information overload, and the average number of times a message appears in view scopes after it is generated by a given type of user is adopted to characterize the information diffusion efficiency, which is calculated theoretically. To verify the accuracy of the theoretical analysis, we conduct simulations and provide the simulation results, which agree closely with the theoretical analysis. These results are important for understanding the diffusion dynamics in social networks, and this analysis framework can be extended to consider more realistic situations.
From information theory to quantitative description of steric effects.
Alipour, Mojtaba; Safari, Zahra
2016-07-21
Immense efforts have been made in the literature to apply information-theoretic descriptors to investigating the electronic structure of various systems. In the present study, information-theoretic quantities, such as the Fisher information, Shannon entropy, Onicescu information energy, and Ghosh-Berkowitz-Parr entropy, have been used to present a quantitative description of one of the most widely used concepts in chemistry, namely steric effects. Taking experimental steric scales for different compounds as benchmark sets, we find reasonable linear relationships between the experimental scales of steric effects and the theoretical values of steric energies calculated from information-theory functionals. Comparing the information-theoretic quantities under the two representations, electron density and shape function, the Shannon entropy performs best for this purpose. We have also explored, on the one hand, the usefulness of considering the contributions of functional-group steric energies and geometries and, on the other hand, the effect of dissecting global and local information measures simultaneously. Furthermore, the utility of the information functionals for describing steric effects in several chemical transformations, such as electrophilic and nucleophilic reactions and host-guest chemistry, has been analyzed. The functionals of information theory correlate remarkably well with the stability of systems and with experimental scales. Overall, these findings show that information-theoretic quantities can be introduced as quantitative measures of steric effects and provide further evidence of the quality of information theory in helping theoreticians and experimentalists interpret different problems in real systems.
Applications of information theory, genetic algorithms, and neural models to predict oil flow
NASA Astrophysics Data System (ADS)
Ludwig, Oswaldo; Nunes, Urbano; Araújo, Rui; Schnitman, Leizer; Lepikson, Herman Augusto
2009-07-01
This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF), proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship between the input and output signals comes from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist the selection of input training data that have the information necessary to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset are presented, demonstrating the feasibility and effectiveness of the method.
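The idea of accounting for dependence among inputs by minimizing a joint conditional entropy can be sketched with histogram estimators. The sketch below substitutes simple greedy forward selection for the paper's GA search and invents its own function names; it illustrates the criterion, not the authors' code.

```python
import numpy as np

def cond_entropy(y, X, bins=8):
    """Histogram estimate of H(y | X) = H(X, y) - H(X) in bits, for a
    target vector y and an (n_samples, n_inputs) candidate input matrix."""
    def H(Z):
        counts, _ = np.histogramdd(Z, bins=bins)
        p = counts.ravel()
        p = p[p > 0] / p.sum()
        return float(-(p * np.log2(p)).sum())
    return H(np.column_stack([X, y])) - H(X)

def greedy_select(y, candidates, k=3):
    """Greedily add the lagged input that most reduces H(y | selected),
    so dependence among inputs is accounted for (in the spirit of the
    GA-based JCE minimization the abstract describes)."""
    selected = []
    for _ in range(k):
        best = min(
            (c for c in range(candidates.shape[1]) if c not in selected),
            key=lambda c: cond_entropy(y, candidates[:, selected + [c]]),
        )
        selected.append(best)
    return selected
```

Here `candidates` would hold lagged copies of the measured inputs (one column per variable-lag pair) and `y` the production target.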
Investigating nurse practitioners in the private sector: a theoretically informed research protocol.
Adams, Margaret; Gardner, Glenn; Yates, Patsy
2017-06-01
To report a study protocol and the theoretical framework, normalisation process theory, that informs this protocol for a case study investigation of private sector nurse practitioners. Most research evaluating nurse practitioner service is focused on public, mainly acute care environments where nurse practitioner service is well established with strong structures for governance and sustainability. Conversely, there is a lack of clarity in governance for emerging models in the private sector. In a climate of healthcare reform, nurse practitioner service is extending beyond the familiar public health sector. Further research is required to inform knowledge of the practice, operational framework and governance of new nurse practitioner models. The proposed research will use a multiple exploratory case study design to examine private sector nurse practitioner service. Data collection includes interviews, surveys and audits. A sequential mixed methods approach to the analysis of each case will be conducted. Findings from within-case analysis will lead to a meta-synthesis across all four cases to gain a holistic understanding of the cases under study, private sector nurse practitioner service. Normalisation process theory will be used to guide the research process, specifically the coding and analysis of data using the theory's constructs and the relevant components associated with those constructs. This article provides a blueprint for the research and describes a theoretical framework, normalisation process theory, in terms of its flexibility as an analytical framework. Consistent with the goals of best research practice, this study protocol will inform the research community in the field of primary health care about emerging research in this field. Publishing a study protocol ensures researcher fidelity to the analysis plan and supports research collaboration across teams. © 2016 John Wiley & Sons Ltd.
The Difference between Uncertainty and Information, and Why This Matters
NASA Astrophysics Data System (ADS)
Nearing, G. S.
2016-12-01
Earth science investigation and arbitration (for decision making) is very often organized around a concept of uncertainty. It seems relatively straightforward that the purpose of our science is to reduce uncertainty about how environmental systems will react and evolve under different conditions. I propose here that approaching a science of complex systems as a process of quantifying and reducing uncertainty is a mistake, and specifically a mistake that is rooted in certain rather historic logical errors. Instead, I propose that we should be asking questions about information. I argue here that an information-based perspective facilitates almost trivial answers to environmental science questions that are either difficult or theoretically impossible to answer when posed as questions about uncertainty. In particular, I propose that an information-centric perspective leads to: (i) coherent and non-subjective hypothesis tests for complex systems models; (ii) process-level diagnostics for complex systems models; (iii) methods for building complex systems models that allow for inductive inference without the need for a priori specification of likelihood functions or ad hoc error metrics; and (iv) asymptotically correct quantification of epistemic uncertainty. To put this in slightly more basic terms, I propose that an information-theoretic philosophy of science has the potential to resolve certain important aspects of the Demarcation Problem and the Duhem-Quine Problem, and that Hydrology and the other Earth Systems Sciences can immediately capitalize on this to address some of our most difficult and persistent problems.
Quantum Information Processing with Large Nuclear Spins in GaAs Semiconductors
NASA Astrophysics Data System (ADS)
Leuenberger, Michael N.; Loss, Daniel; Poggio, M.; Awschalom, D. D.
2002-10-01
We propose an implementation for quantum information processing based on coherent manipulations of nuclear spins I=3/2 in GaAs semiconductors. We describe theoretically an NMR method which involves multiphoton transitions and which exploits the nonequidistance of nuclear spin levels due to quadrupolar splittings. Starting from known spin anisotropies we derive effective Hamiltonians in a generalized rotating frame, valid for arbitrary I, which allow us to describe the nonperturbative time evolution of spin states generated by magnetic rf fields. We identify an experimentally observable regime for multiphoton Rabi oscillations. In the nonlinear regime, we find Berry phase interference.
Daly, Louise; McCarron, Mary; Higgins, Agnes; McCallion, Philip
2013-02-01
This paper presents a theory explaining the processes used by informal carers of people with dementia to manage alterations to their own, and people with dementia's, relationships with, and places within, their social worlds. Informal carers provide the majority of care to people with dementia. A great deal of international informal dementia care research is available, much of which elucidates the content, impacts and consequences of the informal caring role and the coping mechanisms that carers use. However, the socially situated experiences and processes integral to informal caring in dementia have not yet been robustly accounted for. A classic grounded theory approach was used, as it is designed for research enquiries that aim to generate theory illustrating social patterns of action used to address an identified problem. Thirty interviews were conducted with 31 participants between 2006-2008. The theory was conceptualised from the data using the concurrent methods of theoretical sampling, constant comparative analysis, memo writing and theoretical sensitivity. Informal carers' main concern was identified as 'Living on the fringes', which was stimulated by dementia-related stigma and living a different life. The theory of 'Sustaining Place' explains the social pattern of actions employed by informal carers to manage this problem on behalf of themselves and the person with dementia. The theory of 'Sustaining Place' identifies an imperative for nurses, other formal carers and society to engage in actions to support and enable social connectedness, social inclusion and citizenship for informal carers and people with dementia. 'Sustaining Place' facilitates enhanced understanding of the complex and socially situated nature of informal dementia care through its portrayal of informal carers as social agents and can be used to guide nurses to better support those who live with dementia. © 2012 Blackwell Publishing Ltd.
Functional analysis of ultra high information rates conveyed by rat vibrissal primary afferents
Chagas, André M.; Theis, Lucas; Sengupta, Biswa; Stüttgen, Maik C.; Bethge, Matthias; Schwarz, Cornelius
2013-01-01
Sensory receptors determine the type and the quantity of information available for perception. Here, we quantified and characterized the information transferred by primary afferents in the rat whisker system using neural system identification. Quantification of "how much" information is conveyed by primary afferents, using the direct method (DM), a classical information-theoretic tool, revealed that primary afferents transfer huge amounts of information (up to 529 bits/s). Information-theoretic analysis of instantaneous spike-triggered kinematic stimulus features was used to gain functional insight into "what" is coded by primary afferents. Amongst the kinematic variables tested (position, velocity, and acceleration), primary afferent spikes encoded velocity best. The other two variables contributed to information transfer, but only if combined with velocity. We further revealed three additional characteristics that play a role in information transfer by primary afferents. Firstly, primary afferent spikes show preference for well-separated multiple stimuli (i.e., well-separated sets of combinations of the three instantaneous kinematic variables). Secondly, neurons are sensitive to short strips of the stimulus trajectory (up to 10 ms pre-spike time), and thirdly, they show spike patterns (precise doublet and triplet spiking). In order to deal with these complexities, we used a flexible probabilistic neuron model fitting mixtures of Gaussians to the spike-triggered stimulus distributions, which quantitatively captured the contribution of the mentioned features and allowed us to achieve a full functional analysis of the total information rate indicated by the DM. We found that instantaneous position, velocity, and acceleration explained about 50% of the total information rate. Adding a 10 ms pre-spike interval of stimulus trajectory achieved 80-90%. The final 10-20% were found to be due to non-linear coding by spike bursts. PMID:24367295
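The direct method used above estimates an information rate as the difference between the total entropy of spike "words" and the noise entropy across repeated stimulus presentations. A stripped-down sketch follows; it omits the finite-size extrapolations of the full method, and the array layout and parameter names are our own.

```python
import numpy as np
from collections import Counter

def word_entropy(words):
    """Entropy (bits) of the empirical distribution of binary spike words."""
    counts = Counter(map(tuple, words))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def direct_method_rate(trials, word_len=8, dt=0.001):
    """Crude direct-method information rate in bits/s. `trials` holds
    binarized spike counts of shape (n_repeats, n_bins) recorded while
    the same stimulus is replayed on every repeat."""
    n_rep, n_bins = trials.shape
    starts = range(n_bins - word_len + 1)
    total = word_entropy([trials[r, t:t + word_len]
                          for r in range(n_rep) for t in starts])
    # Noise entropy: variability across repeats at a fixed time, averaged.
    noise = np.mean([word_entropy([trials[r, t:t + word_len]
                                   for r in range(n_rep)]) for t in starts])
    return (total - noise) / (word_len * dt)
```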
Use of Intervention Mapping to Enhance Health Care Professional Practice: A Systematic Review.
Durks, Desire; Fernandez-Llimos, Fernando; Hossain, Lutfun N; Franco-Trigo, Lucia; Benrimoj, Shalom I; Sabater-Hernández, Daniel
2017-08-01
Intervention Mapping is a planning protocol for developing behavior change interventions, the first three steps of which are intended to establish the foundations and rationales of such interventions. This systematic review aimed to identify programs that used Intervention Mapping to plan changes in health care professional practice. Specifically, it provides an analysis of the information provided by the programs in the first three steps of the protocol to determine their foundations and rationales for change. A literature search was undertaken in PubMed, Scopus, SciELO, and DOAJ using "Intervention Mapping" as the keyword. Key information was gathered, including theories used, determinants of practice, research methodologies, theory-based methods, and practical applications. Seventeen programs aimed at changing a range of health care practices were included. The social cognitive theory and the theory of planned behavior were the frameworks most frequently used to drive change within health care practices. Programs used a large variety of research methodologies to identify determinants of practice. Specific theory-based methods (e.g., modelling and active learning) and practical applications (e.g., health care professional training and facilitation) were reported to inform the development of practice change interventions and programs. In practice, Intervention Mapping delineates a three-step systematic, theory- and evidence-driven process for establishing the theoretical foundations and rationales underpinning change in health care professional practice. The use of Intervention Mapping can provide health care planners with useful guidelines for the theoretical development of practice change interventions and programs.
A framework for the management of intellectual capital in the health care industry.
Grantham, C E; Nichols, L D; Schonberner, M
1997-01-01
This article proposes a new theoretical model for the effective management of intellectual capital in the health care industry. The evolution of knowledge-based resources as a value-adding characteristic of service industries coupled with mounting environmental pressures on health care necessitates the extension of current models of intellectual capital. Our theoretical model contains an expanded context linking its development to organizational learning theory and extends current theory by proposing a six-term archetype of organizational functioning built on flows of information. Further, our proposal offers a hierarchical dimension to intellectual capital and a method of scientific visualization for the measurement of intellectual capital. In conclusion, we offer some practical suggestions for future development, both for researchers and managers.
Probability density function learning by unsupervised neurons.
Fiori, S
2001-10-01
In a recent work, we introduced the concept of the pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for this structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis on universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals.
Frequency domain phase-shifted confocal microscopy (FDPCM) with array detection
NASA Astrophysics Data System (ADS)
Ge, Baoliang; Huang, Yujia; Fang, Yue; Kuang, Cuifang; Xiu, Peng; Liu, Xu
2017-09-01
We propose a novel method to reconstruct images taken by array-detector confocal microscopy without prior knowledge of the detector distribution. The proposed frequency domain phase-shifted confocal microscopy (FDPCM) shifts the image from each detection channel to its corresponding place by substituting the phase information in the Fourier domain. Theoretical analysis shows that our method can approach nearly twice the resolution of wide-field microscopy. Simulation and experimental results are also shown to verify the applicability and effectiveness of our method. Compared to Airyscan, our method holds the advantage of simplicity and convenience in being applied to array detectors with different structures, which gives FDPCM great potential for biomedical observation in the future.
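The Fourier shift theorem underlying this kind of channel re-registration is easy to demonstrate: translating an image is equivalent to multiplying its spectrum by a linear phase ramp. The sketch below assumes the per-channel displacements are already known, whereas FDPCM's point is to avoid needing that prior knowledge; the function names and offsets are illustrative only.

```python
import numpy as np

def fourier_shift(img, dy, dx):
    """Shift a 2-D image by (dy, dx) pixels (sub-pixel values allowed)
    by applying a linear phase ramp in the Fourier domain."""
    ky = np.fft.fftfreq(img.shape[0])[:, None]
    kx = np.fft.fftfreq(img.shape[1])[None, :]
    ramp = np.exp(-2j * np.pi * (ky * dy + kx * dx))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * ramp))

def recombine(channels, offsets):
    """Shift each detector-channel image back by its displacement and
    sum, a pixel-reassignment-style recombination."""
    return sum(fourier_shift(im, -dy, -dx)
               for im, (dy, dx) in zip(channels, offsets))
```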
NASA Astrophysics Data System (ADS)
Cheng, Yayun; Qi, Bo; Liu, Siyuan; Hu, Fei; Gui, Liangqi; Peng, Xiaohui
2016-10-01
Polarimetric measurements can provide additional information compared to unpolarized ones. In this paper, the linear polarization ratio (LPR) is introduced as a feature discriminator. The LPR properties of several materials are investigated using Fresnel theory. The theoretical results show that the LPR is sensitive to the material type (metal or dielectric). A linear polarization ratio-based (LPR-based) method is then presented to distinguish between metal and dielectric materials. To apply this method in practice, the optimal range of incidence angles is discussed. Typical outdoor experiments covering various objects, such as an aluminum plate, grass, concrete, soil, and wood, were conducted to validate the presented classification method.
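The Fresnel-theory behavior that makes the LPR discriminative can be reproduced in a few lines: for a metal the s- and p-polarized reflectivities stay comparable at all angles, while for a dielectric the p reflectivity collapses near the Brewster angle, so their ratio spikes. This sketch takes the LPR to be the s/p power-reflectivity ratio, which is our simplification; the paper's exact definition and the permittivity values below are assumptions.

```python
import numpy as np

def fresnel_lpr(eps_r, theta_deg):
    """s/p Fresnel power-reflectivity ratio for a smooth half-space of
    complex relative permittivity eps_r at incidence angle theta."""
    th = np.radians(theta_deg)
    root = np.sqrt(eps_r - np.sin(th) ** 2 + 0j)    # n2 * cos(theta_t)
    rs = (np.cos(th) - root) / (np.cos(th) + root)
    rp = (eps_r * np.cos(th) - root) / (eps_r * np.cos(th) + root)
    return np.abs(rs) ** 2 / np.abs(rp) ** 2

angles = np.linspace(20, 70, 6)
metal_like = fresnel_lpr(1e4 + 1e4j, angles)   # stays near 1
soil_like = fresnel_lpr(4.0 + 0.5j, angles)    # grows toward the Brewster angle
```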
NASA Astrophysics Data System (ADS)
Sas, E. B.; Cankaya, N.; Kurt, M.
2018-06-01
In this work, the monomer 2-(bis(cyanomethyl)amino)-2-oxoethyl methacrylate has been newly synthesized and characterized both experimentally and theoretically. Experimentally, it has been characterized by FT-IR, FT-Raman, and 1H and 13C NMR spectroscopy. The theoretical calculations have been performed with Density Functional Theory (DFT) using the B3LYP method. The scaled theoretical wavenumbers have been assigned on the basis of the total energy distribution (TED). Electronic properties of the monomer have been computed using the time-dependent TD-DFT/B3LYP/6-311++G(d,p) method. The experimental results have been compared with the theoretical values, and both are consistent with the literature.
A Comparison of Approaches for Solving Hard Graph-Theoretic Problems
2015-05-01
collaborative effort “Adiabatic Quantum Computing Applications Research” (14-RI-CRADA-02) between the Information Directorate and Lock-... Three methods are explored and consist of a parallel computing approach using Matlab, a quantum annealing approach using the D-Wave computer, and lastly satisfiability modulo theory (SMT) and corresponding SMT solvers.
Computational Sensing and in vitro Classification of GMOs and Biomolecular Events
2008-12-01
May, Elebeoba; Lee, Miler T.; Dolan, Patricia; Crozier, Paul
...modified organisms (GMOs) in the presence of non-lethal agents. Using an information- and coding-theoretic framework we develop a de novo method for... high-throughput screening, distinguishing genetically modified organisms (GMOs), molecular computing, differentiating biological markers...
Data normalization in biosurveillance: an information-theoretic approach.
Peter, William; Najmi, Amir H; Burkom, Howard
2007-10-11
An approach to identifying public health threats by characterizing syndromic surveillance data in terms of its surprisability is discussed. Surprisability in our model is measured by assigning a probability distribution to a time series, and then calculating its entropy, leading to a straightforward designation of an alert. Initial application of our method is to investigate the applicability of using suitably-normalized syndromic counts (i.e., proportions) to improve early event detection.
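A toy version of this entropy-based "surprisability" alert, ours rather than the authors' algorithm, is to compare today's syndrome-category proportions against a historical baseline with relative entropy and flag large divergences; the counts and threshold below are invented.

```python
import numpy as np

def surprisability_alert(baseline_counts, today_counts, threshold=0.1):
    """Flag a day whose syndrome proportions diverge from the baseline.
    Divergence is the relative entropy (KL divergence, in bits) of
    today's proportions from the baseline proportions."""
    q = np.asarray(baseline_counts, dtype=float)
    q /= q.sum()                      # baseline assumed positive everywhere
    p = np.asarray(today_counts, dtype=float)
    p /= p.sum()
    nz = p > 0
    kl = float((p[nz] * np.log2(p[nz] / q[nz])).sum())
    return kl > threshold, kl

# Baseline vs. today's counts per syndrome category (invented numbers).
alert, score = surprisability_alert([400, 300, 200, 100], [350, 150, 200, 300])
```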
Visibility Graph Based Time Series Analysis.
Stephen, Mutua; Gu, Changgui; Yang, Huijie
2015-01-01
Network-based time series analysis has made considerable achievements in recent years. By mapping mono- or multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states and the successively occurring states are linked. This procedure converts a time series into a temporal network and, at the same time, a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and from artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks.
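The natural visibility criterion that maps a series segment to a graph is simple to state and implement: two points are connected when the straight line between them clears every intermediate sample. A minimal sketch follows (Lacasa-style natural visibility; the quadratic-time loop and the test series are illustrative):

```python
def visibility_graph(series):
    """Natural visibility graph: nodes are time points; i and j are
    linked if every intermediate point lies strictly below the line
    of sight between (i, y_i) and (j, y_j)."""
    n = len(series)
    edges = set()
    for i in range(n - 1):
        for j in range(i + 1, n):
            visible = all(
                series[k] < series[i] + (series[j] - series[i]) * (k - i) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.add((i, j))
    return edges

# Adjacent points are always mutually visible.
assert (0, 1) in visibility_graph([1.0, 3.0, 2.0, 4.0])
```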
Noise-free recovery of optodigital encrypted and multiplexed images.
Henao, Rodrigo; Rueda, Edgar; Barrera, John F; Torroba, Roberto
2010-02-01
We present a method that allows the storage of multiple encrypted data sets using digital holography and a joint transform correlator architecture with a controllable-angle reference wave. In this method, the information is multiplexed by using a key and a different reference wave angle for each object. In the recovery process, the use of different reference wave angles prevents noise produced by the nonrecovered objects from being superimposed on the recovered object; moreover, the position of the recovered object in the exit plane can be fully controlled. We present the theoretical analysis and the experimental results that show the potential and applicability of the method.
Investigation of condensed matter by means of elastic thermal-neutron scattering
NASA Astrophysics Data System (ADS)
Abov, Yu. G.; Dzheparov, F. S.; Elyutin, N. O.; Lvov, D. V.; Tyulyusov, A. N.
2016-07-01
The application of elastic thermal-neutron scattering to investigations of condensed matter performed at the Institute for Theoretical and Experimental Physics is described. An account of diffraction studies with weakly absorbing crystals, including studies of the anomalous-absorption effect and coherent effects in diffuse scattering, is given. Particular attention is given to describing the method of multiple small-angle neutron scattering (MSANS). It is shown how information about matter inhomogeneities can be obtained by this method on the basis of Molière's theory. Prospects for the development of this method are outlined, and MSANS theory is formulated for high concentrations of matter inhomogeneities.
An evidential link prediction method and link predictability based on Shannon entropy
NASA Astrophysics Data System (ADS)
Yin, Likang; Zheng, Haoyang; Bian, Tian; Deng, Yong
2017-09-01
Predicting missing links is of both theoretical value and practical interest in network science. In this paper, we empirically investigate a new link prediction method based on similarity and compare nine well-known local similarity measures on nine real networks. Most previous studies focus on accuracy; however, it is crucial to consider link predictability as an intrinsic property of the networks themselves. Hence, this paper proposes a new link prediction approach called the evidential measure (EM), based on Dempster-Shafer theory. Moreover, it proposes a new method to measure link predictability via local information and Shannon entropy.
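As a concrete illustration of the two ingredients involved (a local similarity measure for scoring candidate links, and a Shannon-entropy-based notion of how predictable a network is), the sketch below scores unlinked node pairs by common neighbours and summarizes the entropy of the degree distribution. This is illustrative only: it is not the paper's Dempster-Shafer evidential measure, and the degree-entropy proxy for predictability is an assumption made here for brevity.

```python
import numpy as np
from itertools import combinations

def common_neighbors_scores(adj):
    """Common-neighbours similarity for every unlinked pair, one of the
    classic local measures compared in studies such as this one."""
    n = adj.shape[0]
    scores = {}
    for u, v in combinations(range(n), 2):
        if not adj[u, v]:
            scores[(u, v)] = int(np.sum(adj[u] * adj[v]))
    return scores

def degree_entropy(adj):
    """Shannon entropy (bits) of the degree distribution -- a simple
    proxy for how 'predictable' a network's local structure is."""
    deg = adj.sum(axis=1).astype(int)
    p = np.bincount(deg) / len(deg)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```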
Support Net for Frontline Providers
2016-03-01
...influencing members' continuance intentions in professional virtual communities - a longitudinal study. Journal of Information Science, 33(4), 451-467... from a scientific and theoretically based manner. Results from this project provide critical prevalence information, theoretical development, and...
Brain activity and cognition: a connection from thermodynamics and information theory.
Collell, Guillem; Fauquet, Jordi
2015-01-01
The connection between brain and mind is an important scientific and philosophical question that we are still far from completely understanding. A point crucial to our work is that thermodynamics provides a convenient framework for modeling brain activity, whereas cognition can be modeled in information-theoretical terms. In fact, several models have been proposed so far from both approaches. A second critical remark is the existence of deep theoretical connections between thermodynamics and information theory. Indeed, some well-known authors claim that the laws of thermodynamics are nothing but principles in information theory. Unlike in physics or chemistry, a formalization of the relationship between information and energy is currently lacking in neuroscience. In this paper we propose a framework to connect physical brain and cognitive models by means of the theoretical connections between information theory and thermodynamics. Ultimately, this article aims at providing further insight into the formal relationship between cognition and neural activity.
Amemori, Masamitsu; Michie, Susan; Korhonen, Tellervo; Murtomaa, Heikki; Kinnunen, Taru H
2011-05-26
Tobacco use adversely affects oral health. Clinical guidelines recommend that dental providers promote tobacco abstinence and provide patients who use tobacco with brief tobacco use cessation counselling. Research shows that these guidelines are seldom implemented, however. To improve guideline adherence and to develop effective interventions, it is essential to understand provider behaviour and challenges to implementation. This study aimed to develop a theoretically informed measure for assessing implementation difficulties among dental providers related to tobacco use prevention and cessation (TUPAC) counselling guidelines, to evaluate those difficulties among a sample of dental providers, and to investigate a possible underlying structure of the applied theoretical domains. A 35-item questionnaire was developed based on key theoretical domains relevant to the implementation behaviours of healthcare providers. Specific items were drawn mostly from the literature on TUPAC counselling studies of healthcare providers. The data were collected from dentists (n = 73) and dental hygienists (n = 22) in 36 dental clinics in Finland using a web-based survey. Of 95 providers, 73 participated (76.8%). We used Cronbach's alpha to ascertain the internal consistency of the questionnaire, mean domain scores to assess different aspects of implementation difficulties, and exploratory factor analysis to assess the theoretical domain structure. The authors agreed on the labels assigned to the factors on the basis of their component domains and the broader behavioural and theoretical literature. Internal consistency values for the theoretical domains varied from 0.50 ('emotion') to 0.71 ('environmental context and resources'). The domain environmental context and resources had the lowest mean score (21.3%; 95% confidence interval [CI], 17.2 to 25.4) and was identified as a potential implementation difficulty. The domain emotion had the highest mean score (60%; 95% CI, 55.0 to 65.0). Three factors were extracted that together explain 70.8% of the variance: motivation (47.6% of variance, α = 0.86), capability (13.3% of variance, α = 0.83), and opportunity (10.0% of variance, α = 0.71). This study demonstrated a theoretically informed approach to identifying possible implementation difficulties in TUPAC counselling among dental providers. This approach provides a method for moving from diagnosing implementation difficulties to designing and evaluating interventions.
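For readers unfamiliar with the internal-consistency statistic used here, Cronbach's alpha has a one-line definition that is easy to compute directly. The sketch below assumes a plain respondents-by-items score matrix; it illustrates the standard formula, not this study's analysis pipeline.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix:
        alpha = k/(k-1) * (1 - sum(item variances) / var(total score))
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)
```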
Clinician preferences for verbal communication compared to EHR documentation in the ICU
Collins, S.A.; Bakken, S.; Vawdrey, D.K.; Coiera, E.; Currie, L
2011-01-01
Background Effective communication is essential to safe and efficient patient care. Additionally, many health information technology (HIT) developments, innovations, and standards aim to implement processes to improve data quality and the integrity of electronic health records (EHR) for the purpose of clinical information exchange and communication. Objective We aimed to understand the current patterns and perceptions of communication of common goals in the ICU using the distributed cognition and clinical communication space theoretical frameworks. Methods We conducted a focus group and 5 interviews with ICU clinicians and observed 59.5 hours of interdisciplinary ICU morning rounds. Results Clinicians used an EHR system, which included electronic documentation and computerized provider order entry (CPOE), and paper artifacts for documentation; yet they preferred the verbal communication space as a method of information exchange because they perceived that the documentation was often not updated or not efficient for information retrieval. These perceptions that the EHR is a "shift behind" may lead to a further reliance on verbal information exchange, which is a valuable clinical communication activity yet is subject to information loss. Conclusions Electronic documentation tools that capture, in real time, information that is currently verbally communicated may increase the effectiveness of communication. PMID:23616870
Study on the Reduced Traffic Congestion Method Based on Dynamic Guidance Information
NASA Astrophysics Data System (ADS)
Li, Shu-Bin; Wang, Guang-Min; Wang, Tao; Ren, Hua-Ling; Zhang, Lin
2018-05-01
This paper studies how to generate reasonable guidance information for travelers' decisions in a real network. The problem is complex because travelers' decisions are constrained by various aspects of human behavior. Network conditions can be predicted using advanced dynamic OD (Origin-Destination) estimation techniques. Based on an improved mesoscopic traffic model, predictive dynamic traffic guidance information can be obtained accurately. A consistency algorithm is designed to investigate travelers' decisions by simulating their dynamic response to guidance information. The simulation results show that the proposed method can provide the best guidance information. Further, a case study is conducted to verify the theoretical results and to draw managerial insights into the potential of dynamic guidance strategies for improving traffic performance. Supported by National Natural Science Foundation of China under Grant Nos. 71471104, 71771019, 71571109, and 71471167; The University Science and Technology Program Funding Projects of Shandong Province under Grant No. J17KA211; The Project of Public Security Department of Shandong Province under Grant No. GATHT2015-236; The Major Social and Livelihood Special Project of Jinan under Grant No. 20150905
Weir, Charlene; Haar, Maral; Staggers, Nancy; Agutter, Jim; Görges, Matthias; Westenskow, Dwayne
2012-01-01
Objective Fatal errors can occur in intensive care units (ICUs). Researchers claim that information integration at the bedside may improve nurses' situation awareness (SA) of patients and decrease errors. However, it is unclear which information should be integrated and in what form. Our research uses the theory of SA to analyze the types of tasks and their associated information gaps. We aimed to provide recommendations for integrated, consolidated information displays to improve nurses' SA. Materials and Methods Systematic observation methods were used to follow 19 ICU nurses for 38 hours in 3 clinical practice settings. Storyboard methods and concept mapping helped to categorize the observed tasks, the associated information needs, and the information gaps of the most frequent tasks by SA level. Consensus and discussion within the research team were used to propose recommendations for improving information displays at the bedside based on information deficits. Results Nurses performed 46 different tasks at a rate of 23.4 tasks per hour. The information needed to perform the most common tasks was often inaccessible, difficult to see at a distance, or located on multiple monitoring devices. Current devices at the ICU bedside do not adequately support a nurse's information-gathering activities. Medication management was the most frequent category of tasks. Discussion Information gaps were present at all levels of SA and across most of the tasks. Using a theoretical model to understand information gaps can aid in designing functional requirements. Conclusion Integrated information that enhances nurses' situation awareness may decrease errors and improve patient safety in the future. PMID:22437074
Falk, Kristin; Falk, Hanna; Jakobsson Ung, Eva
2016-01-01
A key area for consideration is determining how optimal conditions for learning can be created. Higher education in nursing aims to prepare students to develop their capabilities to become independent professionals. The aim of this study was to evaluate the effects of sequencing clinical practice prior to theoretical studies on students' experiences of self-directed learning readiness and students' approach to learning in the second year of a three-year undergraduate study program in nursing. 123 nursing students were included in the study and divided into two groups. In group A (n = 60) clinical practice preceded theoretical studies. In group B (n = 63) theoretical studies preceded clinical practice. Learning readiness was measured using the Self-Directed Learning Readiness Scale for Nursing Education (SDLRSNE), and learning process was measured using the revised two-factor version of the Study Process Questionnaire (R-SPQ-2F). Students were also asked to write down their personal reflections throughout the course. By using a mixed-method design, the qualitative component focused on the students' personal experiences in relation to the sequencing of theoretical studies and clinical practice. The quantitative component provided information about learning readiness before and after the intervention. Our findings confirm that students are sensitive and adaptable to their learning contexts, and that the sequencing of courses is subordinate to a pedagogical style enhancing students' deep learning approaches, which needs to be incorporated in the development of undergraduate nursing programs. Copyright © 2015 Elsevier Ltd. All rights reserved.
Alcohol Warning Label Awareness and Attention: A Multi-method Study.
Pham, Cuong; Rundle-Thiele, Sharyn; Parkinson, Joy; Li, Shanshi
2018-01-01
Evaluation of alcohol warning labels requires careful consideration to ensure that research captures more than awareness, given that labels may not be prominent enough to attract attention. This study investigates attention to current in-market alcohol warning labels and examines whether attention can be enhanced through theoretically informed design. Attention scores obtained through self-report methods are compared to objective measures (eye-tracking). A multi-method experimental design was used, delivering four conditions: control, colour, size, and colour and size. The first study (n = 559) involved a self-report survey to measure attention. The second study (n = 87) utilized eye-tracking to measure fixation count, fixation duration, and time to first fixation. Analysis of variance (ANOVA) was utilized. Eye-tracking identified that 60% of participants looked at the current in-market alcohol warning label while 81% looked at the optimized design (larger and red). In line with observed attention, self-reported attention increased for the optimized design. The current study casts doubt on dominant practices (largely self-report) that have been used to evaluate alcohol warning labels. Awareness cannot be used to assess warning label effectiveness in isolation in cases where attention does not occur 100% of the time. Mixed methods permit objective data collection methodologies to be triangulated with surveys to assess warning label effectiveness. Attention should be incorporated as a measure in warning label effectiveness evaluations. Colour and size changes to the existing Australian warning labels, aided by theoretically informed design, increased attention. © The Author 2017. Medical Council on Alcohol and Oxford University Press. All rights reserved.
Hao, Chen; LiJun, Chen; Albright, Thomas P.
2007-01-01
Invasive exotic species pose a growing threat to the economy, public health, and ecological integrity of nations worldwide. Explaining and predicting the spatial distribution of invasive exotic species is of great importance to prevention and early-warning efforts. We are investigating the potential distribution of invasive exotic species, the environmental factors that influence these distributions, and the ability to predict them using statistical and information-theoretic approaches. For some species, detailed presence/absence occurrence data are available, allowing the use of a variety of standard statistical techniques. For most species, however, absence data are not available. Presented with the challenge of developing a model based on presence-only information, we developed an improved logistic regression approach using information theory and frequency statistics to produce a relative suitability map. This paper generated a variety of distributions of ragweed (Ambrosia artemisiifolia L.) from logistic regression models applied to herbarium specimen location data and a suite of GIS layers including climatic, topographic, and land cover information. Our logistic regression model was selected using Akaike's Information Criterion (AIC) from a suite of ecologically reasonable predictor variables. Based on the results, we provided a new frequency-statistical method to compartmentalize habitat suitability in the native range. Finally, we used the model and the compartmentalized criterion developed in the native range to "project" a potential distribution onto the exotic ranges to build habitat-suitability maps. © Science in China Press 2007.
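AIC-guided selection of a logistic model from a suite of candidate predictors can be sketched in a few lines. The following is a generic illustration (exhaustive search over small predictor sets, with statsmodels doing the fits), not the authors' code; AIC = 2k − 2 ln L, and the subset with the lowest AIC wins.

```python
import numpy as np
import statsmodels.api as sm
from itertools import combinations

def best_subset_by_aic(X, y, names):
    """Exhaustively fit logistic regressions on predictor subsets and
    return the (AIC, predictor names) pair with the lowest AIC."""
    best = (np.inf, None)
    for r in range(1, len(names) + 1):
        for cols in combinations(range(len(names)), r):
            design = sm.add_constant(X[:, cols])   # add intercept
            fit = sm.Logit(y, design).fit(disp=0)
            if fit.aic < best[0]:
                best = (fit.aic, [names[i] for i in cols])
    return best
```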
Outcome of the First wwPDB Hybrid/Integrative Methods Task Force Workshop
Sali, Andrej; Berman, Helen M.; Schwede, Torsten; Trewhella, Jill; Kleywegt, Gerard; Burley, Stephen K.; Markley, John; Nakamura, Haruki; Adams, Paul; Bonvin, Alexandre M.J.J.; Chiu, Wah; Dal Peraro, Matteo; Di Maio, Frank; Ferrin, Thomas E.; Grünewald, Kay; Gutmanas, Aleksandras; Henderson, Richard; Hummer, Gerhard; Iwasaki, Kenji; Johnson, Graham; Lawson, Catherine L.; Meiler, Jens; Marti-Renom, Marc A.; Montelione, Gaetano T.; Nilges, Michael; Nussinov, Ruth; Patwardhan, Ardan; Rappsilber, Juri; Read, Randy J.; Saibil, Helen; Schröder, Gunnar F.; Schwieters, Charles D.; Seidel, Claus A. M.; Svergun, Dmitri; Topf, Maya; Ulrich, Eldon L.; Velankar, Sameer; Westbrook, John D.
2016-01-01
Summary Structures of biomolecular systems are increasingly computed by integrative modeling that relies on varied types of experimental data and theoretical information. We describe here the proceedings and conclusions from the first wwPDB Hybrid/Integrative Methods Task Force Workshop held at the European Bioinformatics Institute in Hinxton, UK, October 6 and 7, 2014. At the workshop, experts in various experimental fields of structural biology, experts in integrative modeling and visualization, and experts in data archiving addressed a series of questions central to the future of structural biology. How should integrative models be represented? How should the data and integrative models be validated? What data should be archived? How should the data and models be archived? What information should accompany the publication of integrative models? PMID:26095030
A Generalized Information Theoretical Model for Quantum Secret Sharing
NASA Astrophysics Data System (ADS)
Bai, Chen-Ming; Li, Zhi-Hui; Xu, Ting-Ting; Li, Yong-Ming
2016-11-01
An information theoretical model for quantum secret sharing was introduced by H. Imai et al. (Quantum Inf. Comput. 5(1), 69-80, 2005) and analyzed using quantum information theory. In this paper, we analyze this information theoretical model using the properties of the quantum access structure. Based on this analysis, we propose a generalized model definition for quantum secret sharing schemes. In our model, more quantum access structures can be realized by the generalized quantum secret sharing schemes than by the previous one. In addition, we analyse two kinds of important quantum access structures to illustrate the existence and rationality of the generalized quantum secret sharing schemes, and consider the security of the scheme through simple examples.
Distinguishing prognostic and predictive biomarkers: An information theoretic approach.
Sechidis, Konstantinos; Papangelou, Konstantinos; Metcalfe, Paul D; Svensson, David; Weatherall, James; Brown, Gavin
2018-05-02
The identification of biomarkers to support decision-making is central to personalised medicine, in both clinical and research scenarios. The challenge can be seen in two halves: identifying predictive markers, which guide the development/use of tailored therapies; and identifying prognostic markers, which guide other aspects of care and clinical trial planning, i.e. prognostic markers can be considered as covariates for stratification. Mistakenly assuming a biomarker to be predictive, when it is in fact largely prognostic (and vice-versa) is highly undesirable, and can result in financial, ethical and personal consequences. We present a framework for data-driven ranking of biomarkers on their prognostic/predictive strength, using a novel information theoretic method. This approach provides a natural algebra to discuss and quantify the individual predictive and prognostic strength, in a self-consistent mathematical framework. Our contribution is a novel procedure, INFO+, which naturally distinguishes the prognostic vs predictive role of each biomarker and handles higher order interactions. In a comprehensive empirical evaluation INFO+ outperforms more complex methods, most notably when noise factors dominate, and biomarkers are likely to be falsely identified as predictive, when in fact they are just strongly prognostic. Furthermore, we show that our methods can be 1-3 orders of magnitude faster than competitors, making it useful for biomarker discovery in 'big data' scenarios. Finally, we apply our methods to identify predictive biomarkers on two real clinical trials, and introduce a new graphical representation that provides greater insight into the prognostic and predictive strength of each biomarker. R implementations of the suggested methods are available at https://github.com/sechidis. konstantinos.sechidis@manchester.ac.uk. Supplementary data are available at Bioinformatics online.
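The core quantities in information-theoretic rankings of this kind are the mutual information and conditional mutual information between a biomarker X, the outcome Y, and the treatment T. The sketch below scores a discrete biomarker's marginal association with outcome (a prognostic signal) against its treatment-outcome interaction information (a predictive signal). It is a crude illustration of the idea only, not the published INFO+ procedure, and the prognostic/predictive labels attached to these two scores are a simplifying assumption.

```python
import numpy as np
from collections import Counter

def mi(x, y):
    """Mutual information I(X;Y) in bits for discrete sequences."""
    n = len(x)
    pxy, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    return sum(c / n * np.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def cmi(x, y, z):
    """Conditional MI via the chain rule: I(X;Y|Z) = I(X,Z;Y) - I(Z;Y)."""
    return mi(list(zip(x, z)), y) - mi(z, y)

def rank_biomarkers(X_cols, y, t, names):
    """Crude scores: marginal MI with outcome ('prognostic') and the
    interaction information I(X;Y|T) - I(X;Y) ('predictive')."""
    rows = []
    for x, name in zip(X_cols, names):
        prog = mi(x, y)
        pred = cmi(x, y, t) - prog
        rows.append((name, prog, pred))
    return sorted(rows, key=lambda r: -r[2])
```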
Information-Theoretical Complexity Analysis of Selected Elementary Chemical Reactions
NASA Astrophysics Data System (ADS)
Molina-Espíritu, M.; Esquivel, R. O.; Dehesa, J. S.
We investigate the complexity of selected elementary chemical reactions (namely, the hydrogenic-abstraction reaction and the identity SN2 exchange reaction) by means of the following single and composite information-theoretic measures: disequilibrium (D), exponential entropy (L), Fisher information (I), power entropy (J), the I-D, D-L and I-J planes, and the Fisher-Shannon (FS) and Lopez-Mancini-Calbet (LMC) shape complexities. These quantities, which are functionals of the one-particle density, are computed in both position (r) and momentum (p) spaces. The analysis revealed that the chemically significant regions of these reactions can be identified through most of the single information-theoretic measures and the two-component planes: not only those commonly revealed by the energy, such as the reactant/product (R/P) and the transition state (TS), but also those that are not present in the energy profile, such as the bond cleavage energy region (BCER), the bond breaking/forming regions (B-B/F) and the charge transfer process (CT). The analysis of the complexities shows that the energy profile of the abstraction reaction bears the same information-theoretical features as the LMC and FS measures, whereas the identity SN2 exchange reaction does not exhibit simple behavior with respect to the LMC and FS measures. Most of the chemical features of interest (BCER, B-B/F and CT) are only revealed when particular information-theoretic aspects of localizability (L or J), uniformity (D) and disorder (I) are considered.
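For reference, the standard definitions of these density functionals (written here for a three-dimensional position-space density ρ; the momentum-space versions are analogous) are:

```latex
D = \int \rho^{2}(\mathbf{r})\,d\mathbf{r},\qquad
S = -\int \rho(\mathbf{r})\ln\rho(\mathbf{r})\,d\mathbf{r},\qquad
L = e^{S},\qquad
J = \frac{1}{2\pi e}\,e^{2S/3},
\\[4pt]
I = \int \frac{|\nabla\rho(\mathbf{r})|^{2}}{\rho(\mathbf{r})}\,d\mathbf{r},\qquad
C_{\mathrm{LMC}} = D \cdot L,\qquad
C_{\mathrm{FS}} = I \cdot J .
```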
Dean, Marleah; Scherr, Courtney L; Clements, Meredith; Koruo, Rachel; Martinez, Jennifer; Ross, Amy
2017-09-01
To investigate the information needs of BRCA-positive, unaffected patients - referred to as previvors - after testing positive for a deleterious BRCA genetic mutation, 25 qualitative interviews were conducted with previvors. Data were analyzed using the constant comparison method of grounded theory. Analysis revealed a theoretical model of previvors' information needs related to the stage of their health journey. Specifically, a four-stage model was developed based on the data: (1) pre-testing information needs, (2) post-testing information needs, (3) pre-management information needs, and (4) post-management information needs. Two recurring dimensions of desired knowledge also emerged within the stages: personal/social knowledge and medical knowledge. While previvors may be genetically predisposed to develop cancer, they have not been diagnosed with cancer, and therefore have different information needs than cancer patients and cancer survivors. This model can serve as a framework for assisting healthcare providers in meeting the specific information needs of cancer previvors. Copyright © 2017 Elsevier B.V. All rights reserved.
Nickels, Lyndsey; Howard, David; Best, Wendy
2012-01-01
Cognitive neuropsychology has championed the use of single-case research design. Recently, however, case series designs that employ multiple single cases have been increasingly utilized to address theoretical issues using data from neuropsychological populations. In this paper, we examine these methodologies, focusing on a number of points in particular. First we discuss the use of dissociations and associations, often thought of as a defining feature of cognitive neuropsychology, and argue that they are better viewed as part of a spectrum of methods that aim to explain and predict behaviour. We also raise issues regarding case series design in particular, arguing that selection of an appropriate sample, including controlling degree of homogeneity, is critical and constrains the theoretical claims that can be made on the basis of the data. We discuss the possible interpretation of “outliers” in a case series, suggesting that while they may reflect “noise” caused by variability in performance due to factors that are not of relevance to the theoretical claims, they may also reflect the presence of patterns that are critical to test, refine, and potentially falsify our theories. The role of case series in treatment research is also raised, in light of the fact that, despite their status as gold standard, randomized controlled trials cannot provide answers to many crucial theoretical and clinical questions. Finally, we stress the importance of converging evidence: We propose that it is conclusions informed by multiple sources of evidence that are likely to best inform theory and stand the test of time. PMID:22746689
Bayesian randomized clinical trials: From fixed to adaptive design.
Yin, Guosheng; Lam, Chi Kin; Shi, Haolun
2017-08-01
Randomized controlled studies are the gold standard for phase III clinical trials. Using α-spending functions to control the overall type I error rate, group sequential methods are well established and have dominated phase III studies. Bayesian randomized design, on the other hand, can be viewed as a complementary, rather than competing, approach to the frequentist methods. For the fixed Bayesian design, hypothesis testing can be cast in the posterior probability or Bayes factor framework, which has a direct link to the frequentist type I error rate. Bayesian group sequential design relies upon Bayesian decision-theoretic approaches based on backward induction, which is often computationally intensive. Compared with the frequentist approaches, Bayesian methods have several advantages. The posterior predictive probability serves as a useful and convenient tool for trial monitoring and can be updated at any time as the data accrue during the trial. The Bayesian decision-theoretic framework possesses a direct link to decision making in the practical setting and can be modeled more realistically to reflect the actual cost-benefit analysis during the drug development process. Other merits include the possibility of hierarchical modeling and the use of informative priors, which lead to a more comprehensive utilization of information from both historical and longitudinal data. From fixed to adaptive design, we focus on Bayesian randomized controlled clinical trials and make extensive comparisons with frequentist counterparts through numerical studies. Copyright © 2017 Elsevier Inc. All rights reserved.
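The posterior predictive monitoring idea mentioned above has a compact closed form in the simplest single-arm, binary-endpoint case. The sketch below computes the probability, given interim data and a Beta prior, that the trial will end in a declaration of efficacy; the thresholds p0 and eta are illustrative assumptions, and real designs (including those discussed in this paper) are more elaborate.

```python
import numpy as np
from scipy.special import betaln, gammaln
from scipy.stats import beta

def predictive_probability(x, n, n_max, p0=0.3, eta=0.95, a=1.0, b=1.0):
    """Probability that, after all n_max patients are observed, the
    posterior will declare efficacy, i.e. P(p > p0 | final data) > eta.

    x responses in n patients so far; Beta(a, b) prior on response rate.
    Future responses follow the beta-binomial predictive distribution.
    """
    m = n_max - n                          # patients still to enrol
    a1, b1 = a + x, b + n - x              # current posterior parameters
    pp = 0.0
    for y in range(m + 1):                 # possible future response counts
        log_w = (gammaln(m + 1) - gammaln(y + 1) - gammaln(m - y + 1)
                 + betaln(a1 + y, b1 + m - y) - betaln(a1, b1))
        if beta(a1 + y, b1 + m - y).sf(p0) > eta:   # efficacy at trial end
            pp += np.exp(log_w)
    return pp
```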
NASA Astrophysics Data System (ADS)
Clemens, Joshua William
Game theory has application across multiple fields, spanning from economic strategy to optimal control of an aircraft and missile on an intercept trajectory. The idea of game theory is fascinating in that we can actually mathematically model real-world scenarios and determine optimal decision making. It may not always be easy to mathematically model certain real-world scenarios, nonetheless, game theory gives us an appreciation for the complexity involved in decision making. This complexity is especially apparent when the players involved have access to different information upon which to base their decision making (a nonclassical information pattern). Here we will focus on the class of adversarial two-player games (sometimes referred to as pursuit-evasion games) with nonclassical information pattern. We present a two-sided (simultaneous) optimization solution method for the two-player linear quadratic Gaussian (LQG) multistage game. This direct solution method allows for further interpretation of each player's decision making (strategy) as compared to previously used formal solution methods. In addition to the optimal control strategies, we present a saddle point proof and we derive an expression for the optimal performance index value. We provide some numerical results in order to further interpret the optimal control strategies and to highlight real-world application of this game-theoretic optimal solution.
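As a point of reference, one common formulation of the two-player zero-sum LQG multistage game (written generically here; the author's exact setup may differ in details) is:

```latex
x_{k+1} = A_k x_k + B_k u_k + C_k v_k + w_k, \qquad w_k \sim \mathcal{N}(0, W_k),
\\[4pt]
J = \mathbb{E}\!\left[\, x_N^{\top} Q_N x_N
      + \sum_{k=0}^{N-1}\left( x_k^{\top} Q_k x_k
      + u_k^{\top} R_k u_k - v_k^{\top} S_k v_k \right) \right],
```

where the pursuer chooses u to minimize J, the evader chooses v to maximize it, and a saddle point (u*, v*) satisfies J(u*, v) ≤ J(u*, v*) ≤ J(u, v*) for all admissible strategies. Under a nonclassical information pattern, each player's strategy may depend only on its own measurement history, which is what makes the two-sided optimization nontrivial.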
Williamson, J; Ranyard, R; Cuthbert, L
2000-05-01
This study is an evaluation of a process tracing method developed for naturalistic decisions, in this case a consumer choice task. The method is based on Huber et al.'s (1997) Active Information Search (AIS) technique, but develops it by providing spoken rather than written answers to respondents' questions, and by including think aloud instructions. The technique is used within a conversation-based situation, rather than the respondent thinking aloud 'into an empty space', as is conventionally the case in think aloud techniques. The method results in a concurrent verbal protocol as respondents make their decisions, and a retrospective report in the form of a post-decision summary. The method was found to be virtually non-reactive in relation to think aloud, although the variable of Preliminary Attribute Elicitation showed some evidence of reactivity. This was a methodological evaluation, and as such the data reported are essentially descriptive. Nevertheless, the data obtained indicate that the method is capable of producing information about decision processes which could have theoretical importance in terms of evaluating models of decision-making.
Calculation of ground vibration spectra from heavy military vehicles
NASA Astrophysics Data System (ADS)
Krylov, V. V.; Pickup, S.; McNuff, J.
2010-07-01
The demand for reliable autonomous systems capable of detecting and identifying heavy military vehicles has become an important issue for UN peacekeeping forces in the current delicate political climate. A promising method of detection and identification uses the information extracted from the ground vibration spectra generated by heavy military vehicles, often termed their seismic signatures. This paper presents the results of a theoretical investigation of ground vibration spectra generated by heavy military vehicles, such as tanks and armoured personnel carriers. A simple quarter-car model is considered to identify the resulting dynamic forces applied from a vehicle to the ground. The obtained analytical expressions for vehicle dynamic forces are then used to calculate the generated ground vibrations, predominantly Rayleigh surface waves, using the Green's function method. A comparison of the obtained theoretical results with published experimental data shows that analytical techniques based on the simplified quarter-car vehicle model are capable of producing ground vibration spectra of heavy military vehicles that reproduce the basic properties of experimental spectra.
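For reference, one common form of the quarter-car equations (sprung mass m_s, unsprung mass m_u, suspension stiffness and damping k_s and c_s, tyre stiffness k_t, terrain profile h(t)) is given below as an illustration of the kind of model meant here, not necessarily the paper's exact variant:

```latex
m_s \ddot{z}_s + c_s(\dot{z}_s - \dot{z}_u) + k_s(z_s - z_u) = 0,
\\[4pt]
m_u \ddot{z}_u - c_s(\dot{z}_s - \dot{z}_u) - k_s(z_s - z_u) + k_t\,\big(z_u - h(t)\big) = 0,
```

with the force transmitted to the ground F(t) = k_t (z_u(t) − h(t)), whose spectrum feeds the Green's-function calculation of the radiated Rayleigh waves.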
Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei
2016-01-01
High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture rapid phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several times, or even hundreds of times, without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution by using a 25 fps camera. PMID:26959023
A Combined Theoretical and Experimental Study for Silver Electroplating
Liu, Anmin; Ren, Xuefeng; An, Maozhong; Zhang, Jinqiu; Yang, Peixia; Wang, Bo; Zhu, Yongming; Wang, Chong
2014-01-01
A novel method combining theoretical and experimental study for environmentally friendly silver electroplating is introduced. Quantum chemical calculations and molecular dynamics (MD) simulations were employed to predict the behaviour and function of the complexing agents. Electronic properties, orbital information, and single-point energies of 5,5-dimethylhydantoin (DMH) and nicotinic acid (NA), as well as their silver(I) complexes, were provided by quantum chemical calculations based on density functional theory (DFT). Adsorption behaviors of the agents on copper and silver surfaces were investigated using MD simulations. Based on the data from the quantum chemical calculations and MD simulations, we concluded that DMH and NA could be promising complexing agents for silver electroplating. The experimental results, including electrochemical measurements and silver electroplating, further confirmed this prediction. This efficient and versatile method thus opens a new window for studying and designing complexing agents for metal electroplating in general, and should substantially advance research in this area. PMID:24452389
[Research advances in eco-toxicological diagnosis of soil pollution].
Liu, Feng; Teng, Hong-Hui; Ren, Bai-Xiang; Shi, Shu-Yun
2014-09-01
Soil eco-toxicology provides a theoretical basis for the ecological risk assessment of contaminated soils and for soil pollution control. Research on the eco-toxicological effects and molecular mechanisms of toxic substances in the soil environment is the central concern of soil eco-toxicology. Eco-toxicological diagnosis not only integrates information on soil pollution but also reflects the overall toxic effects of the soil. Therefore, research on the eco-toxicological diagnosis of soil pollution has important theoretical and practical significance. Drawing on research into the eco-toxicological diagnosis of soil pollution, this paper introduces some common toxicological methods and indicators and discusses the advantages and disadvantages of the various methods. Conventional biomarkers, however, can only indicate the class of stress; they fail to explain the molecular mechanism of the damage or response. Biomarkers and molecular diagnostic techniques used to evaluate the toxicity of contaminated soil can more deeply explore the detoxification mechanisms of organisms under exogenous stress. In this paper, these biomarkers and techniques are introduced systematically, and future research directions are discussed.
Yang, Xinsong; Feng, Zhiguo; Feng, Jianwen; Cao, Jinde
2017-01-01
In this paper, synchronization in an array of discrete-time neural networks (DTNNs) with time-varying delays coupled by Markov jump topologies is considered. It is assumed that the switching information can be collected by a tracker with a certain probability and transmitted from the tracker to the controller precisely. The controller then selects suitable control gains based on the received switching information to synchronize the network. This new control scheme makes full use of the received information and overcomes the shortcomings of mode-dependent and mode-independent control schemes. Moreover, the proposed control method includes both the mode-dependent and mode-independent control techniques as special cases. By using the linear matrix inequality (LMI) method and designing new Lyapunov functionals, delay-dependent conditions are derived to guarantee that the DTNNs with Markov jump topologies are asymptotically synchronized. Compared with existing results on Markov systems, which are obtained by separately using mode-dependent and mode-independent methods, our result has greater flexibility in practical applications. Numerical simulations are finally given to demonstrate the effectiveness of the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
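LMI conditions of this kind are checked numerically with a semidefinite-programming solver. As a toy illustration of the workflow only (the paper's delay-dependent LMIs are considerably more structured), the sketch below tests the basic discrete-time Lyapunov inequality with CVXPY:

```python
import cvxpy as cp
import numpy as np

def discrete_lyapunov_feasible(A, eps=1e-6):
    """Feasibility of the basic discrete-time stability LMI
        P > 0,  A' P A - P < 0,
    the kind of condition that delay-dependent synchronization
    criteria generalize."""
    n = A.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    constraints = [P >> eps * np.eye(n),
                   A.T @ P @ A - P << -eps * np.eye(n)]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve(solver=cp.SCS)
    return prob.status == cp.OPTIMAL, P.value
```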
Affine Isoperimetry and Information Theoretic Inequalities
ERIC Educational Resources Information Center
Lv, Songjun
2012-01-01
There are essential connections between the isoperimetric theory and information theoretic inequalities. In general, the Brunn-Minkowski inequality and the entropy power inequality, as well as the classical isoperimetric inequality and the classical entropy-moment inequality, turn out to be equivalent in a certain sense, respectively. Based on…
A Computational Tool to Detect and Avoid Redundancy in Selected Reaction Monitoring
Röst, Hannes; Malmström, Lars; Aebersold, Ruedi
2012-01-01
Selected reaction monitoring (SRM), also called multiple reaction monitoring, has become an invaluable tool for targeted quantitative proteomic analyses, but its application can be compromised by nonoptimal selection of transitions. In particular, complex backgrounds may cause ambiguities in SRM measurement results because peptides with interfering transitions similar to those of the target peptide may be present in the sample. Here, we developed a computer program, the SRMCollider, that calculates nonredundant theoretical SRM assays, also known as unique ion signatures (UIS), for a given proteomic background. We show theoretically that UIS of three transitions suffice to conclusively identify 90% of all yeast peptides and 85% of all human peptides. Using predicted retention times, the SRMCollider also simulates time-scheduled SRM acquisition, which reduces the number of interferences to consider and leads to fewer transitions necessary to construct an assay. By integrating experimental fragment ion intensities from large scale proteome synthesis efforts (SRMAtlas) with the information content-based UIS, we combine two orthogonal approaches to create high quality SRM assays ready to be deployed. We provide a user friendly, open source implementation of an algorithm to calculate UIS of any order that can be accessed online at http://www.srmcollider.org to find interfering transitions. Finally, our tool can also simulate the specificity of novel data-independent MS acquisition methods in Q1–Q3 space. This allows us to predict parameters for these methods that deliver a specificity comparable with that of SRM. Using SRM interference information in addition to other sources of information can increase the confidence in an SRM measurement. We expect that the consideration of information content will become a standard step in SRM assay design and analysis, facilitated by the SRMCollider. PMID:22535207
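The interference logic behind unique ion signatures can be made concrete with a few lines of code. The sketch below flags background transitions that fall within Q1/Q3 mass tolerances of a target transition and tests whether a transition set is a UIS; the 0.7 Th tolerances are illustrative assumptions, and the SRMCollider itself additionally uses retention-time scheduling and fragment intensities.

```python
def interfering_transitions(target, background, q1_tol=0.7, q3_tol=0.7):
    """Background transitions within the Q1/Q3 tolerances of a target.

    target: (precursor_mz, fragment_mz)
    background: iterable of (peptide_id, precursor_mz, fragment_mz)
    """
    q1, q3 = target
    return [(pep, p_mz, f_mz) for pep, p_mz, f_mz in background
            if abs(p_mz - q1) <= q1_tol and abs(f_mz - q3) <= q3_tol]

def is_unique_ion_signature(transitions, background, **tol):
    """A transition set is a UIS if no single background peptide
    interferes with all of its transitions simultaneously."""
    culprits = None
    for t in transitions:
        peps = {pep for pep, _, _ in
                interfering_transitions(t, background, **tol)}
        culprits = peps if culprits is None else culprits & peps
    return not culprits
```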
NASA Astrophysics Data System (ADS)
Thelen, Brian J.; Xique, Ismael J.; Burns, Joseph W.; Goley, G. Steven; Nolan, Adam R.; Benson, Jonathan W.
2017-04-01
In Bayesian decision theory, there has been a great amount of research into theoretical frameworks and information-theoretic quantities that can be used to provide lower and upper bounds for the Bayes error. These include well-known bounds such as the Chernoff, Bhattacharyya, and J-divergence bounds. Part of the challenge of utilizing these various metrics in practice is (i) whether they are "loose" or "tight" bounds, (ii) how they might be estimated via either parametric or non-parametric methods, and (iii) how accurate the estimates are for limited amounts of data. In general, what is desired is a methodology for generating relatively tight lower and upper bounds, together with an approach to estimate these bounds efficiently from data. In this paper, we explore the so-called triangle divergence, which has been around for a while but was recently made more prominent in research on non-parametric estimation of information metrics. Part of this work is motivated by applications for quantifying fundamental information content in SAR/LIDAR data; to support this, we have developed a flexible multivariate modeling framework based on multivariate Gaussian copula models, which can be combined with the triangle divergence framework to quantify this information and provide approximate bounds on Bayes error. In this paper we present an overview of the bounds, including those based on triangle divergence, and verify that under a number of multivariate models the upper and lower bounds derived from triangle divergence are significantly tighter than the other common bounds, oftentimes dramatically so. We also propose some simple but effective means of computing the triangle divergence using Monte Carlo methods, and then discuss estimation of the triangle divergence from empirical data based on Gaussian copula models.
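For discrete distributions P and Q, the triangle divergence (also called triangular discrimination; conventions differ by a factor of two) is commonly written as:

```latex
\Delta(P, Q) \;=\; \sum_{x} \frac{\big(p(x) - q(x)\big)^{2}}{p(x) + q(x)},
```

with the continuous version obtained by replacing the sum with an integral over densities; it can be related to the total variation distance, which is what yields lower and upper bounds on the two-class Bayes error.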
Zhou, Ronggang; Chan, Alan H S
2017-01-01
In order to compare existing usability data to ideal goals or to data for other products, usability practitioners have tried to develop a framework for deriving an integrated metric. However, most current usability methods with this aim rely heavily on human judgment about the various attributes of a product, and often fail to take into account the inherent uncertainties in these judgments during the evaluation process. This paper presents a universal method of usability evaluation that combines the analytic hierarchy process (AHP) with the fuzzy evaluation method. By integrating multiple sources of uncertain information during product usability evaluation, the proposed method derives an index that is structured hierarchically in terms of the three usability components of a product: effectiveness, efficiency, and user satisfaction. With consideration of the theoretical basis of fuzzy evaluation, a two-layer comprehensive evaluation index was first constructed. After the membership functions were determined by an expert panel, the evaluation appraisals were computed using the fuzzy comprehensive evaluation model to characterize fuzzy human judgments. Then, with the use of AHP, the weights of the usability components were elicited from these experts. Compared to traditional usability evaluation methods, the major strength of the fuzzy method is that it captures the fuzziness and uncertainties in human judgments and provides an integrated framework that combines the vague judgments from multiple stages of a product evaluation process.
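The two computational steps involved (principal-eigenvector weights from an AHP pairwise-comparison matrix, then a weighted fuzzy evaluation over appraisal grades) can be sketched as follows. The comparison matrix and membership values below are illustrative assumptions, not the paper's data.

```python
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector weights from an AHP pairwise-comparison
    matrix (Saaty's eigenvector method)."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

def fuzzy_comprehensive(weights, membership):
    """Weighted fuzzy evaluation: combine criterion weights with a
    (criteria x appraisal grades) membership matrix."""
    b = np.asarray(weights) @ np.asarray(membership)
    return b / b.sum()

# Illustrative: three usability components (effectiveness, efficiency,
# satisfaction) judged on four grades from 'poor' to 'excellent'.
W = ahp_weights([[1, 2, 3], [1/2, 1, 2], [1/3, 1/2, 1]])
B = fuzzy_comprehensive(W, [[0.1, 0.2, 0.4, 0.3],
                            [0.2, 0.3, 0.3, 0.2],
                            [0.1, 0.3, 0.4, 0.2]])
```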
Lüdecke, Daniel
2014-01-01
Introduction: Health care providers seek to improve patient-centred care. Due to the fragmentation of services, this can only be achieved by establishing integrated care partnerships. The challenge is both to control costs while enhancing the quality of care and to coordinate this process in a setting with many organisations involved. The problem is to establish control mechanisms that ensure sufficient consideration of patient-centredness. Theory and methods: Seventeen qualitative interviews were conducted in hospitals of metropolitan areas in northern Germany. The documentary method, embedded in a systems-theoretical framework, was used to describe and analyse the data and to provide insight into the specific perception of organisational behaviour in integrated care. Results: The findings suggest that integrated care partnerships rely on networks based on professional autonomy in a context of reliability. The relationships of network partners are heavily based on informality. This accords with a systems-theoretical conception of organisations, which are assumed to be autonomous in their decision-making. Conclusion and discussion: Networks based on formal contracts may restrict professional autonomy and competition. Contractual bindings that suppress the competitive environment have negative consequences for patient-centred care. Drawbacks remain due to the missing self-regulation of the network. To conclude, less regimentation of integrated care partnerships is recommended. PMID:25411573
Property Values as a Measure of Neighborhoods: An Application of Hedonic Price Theory.
Leonard, Tammy; Powell-Wiley, Tiffany M; Ayers, Colby; Murdoch, James C; Yin, Wenyuan; Pruitt, Sandi L
2016-07-01
Researchers measuring relationships between neighborhoods and health have begun using property appraisal data as a source of information about neighborhoods. Economists have developed a rich tool kit to understand how neighborhood characteristics are quantified in appraisal values. This tool kit principally relies on hedonic (implicit) price models and has much to offer regarding the interpretation and operationalization of property appraisal data-derived neighborhood measures, which goes beyond the use of appraisal data as a measure of neighborhood socioeconomic status. We develop a theoretically informed hedonic-based neighborhood measure using residuals of a hedonic price regression applied to appraisal data in a single metropolitan area. We describe its characteristics, reliability in different types of neighborhoods, and correlation with other neighborhood measures (i.e., raw neighborhood appraisal values, census block group poverty, and observed property characteristics). We examine the association between all neighborhood measures and body mass index. The hedonic-based neighborhood measure was correlated in the expected direction with block group poverty rate and observed property characteristics. The neighborhood measure and average raw neighborhood appraisal value, but not census block group poverty, were associated with individual body mass index. We draw theoretically consistent methodology from the economics literature on hedonic price models to demonstrate how to leverage the implicit valuation of neighborhoods contained in publicly available appraisal data. Consistent measurement and application of the hedonic-based neighborhood measures in epidemiology will improve understanding of the relationships between neighborhoods and health. Researchers should proceed with a careful use of appraisal values utilizing theoretically informed methods such as this one.
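The construction described (a hedonic price regression whose residuals, averaged within neighborhoods, become the neighborhood measure) is compact to express. The sketch below uses hypothetical column names (value, sqft, beds, baths, age, block_group) and a deliberately small specification; the paper's hedonic model is richer.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def neighborhood_residual_measure(df: pd.DataFrame) -> pd.Series:
    """Hedonic-residual neighborhood measure: regress log appraisal
    value on observed property characteristics, then average the
    residuals within each neighborhood (block group)."""
    fit = smf.ols("np.log(value) ~ sqft + beds + baths + age",
                  data=df).fit()
    return df.assign(resid=fit.resid).groupby("block_group")["resid"].mean()
```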
Path-space variational inference for non-equilibrium coarse-grained systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harmandaris, Vagelis, E-mail: harman@uoc.gr; Institute of Applied and Computational Mathematics; Kalligiannaki, Evangelia, E-mail: ekalligian@tem.uoc.gr
In this paper we discuss information-theoretic tools for obtaining optimized coarse-grained molecular models for both equilibrium and non-equilibrium molecular simulations. The latter are ubiquitous in physicochemical and biological applications, where they are typically associated with coupling mechanisms, multi-physics and/or boundary conditions. In general the non-equilibrium steady states are not known explicitly as they do not necessarily have a Gibbs structure. The presented approach can compare microscopic behavior of molecular systems to parametric and non-parametric coarse-grained models using the relative entropy between distributions on the path space and setting up a corresponding path-space variational inference problem. The methods can become entirely data-driven when the microscopic dynamics are replaced with corresponding correlated data in the form of time series. Furthermore, we present connections and generalizations of force matching methods in coarse-graining with path-space information methods. We demonstrate the enhanced transferability of information-based parameterizations to different observables, at a specific thermodynamic point, due to information inequalities. We discuss methodological connections between information-based coarse-graining of molecular systems and variational inference methods primarily developed in the machine learning community. However, we note that the work presented here addresses variational inference for correlated time series due to the focus on dynamics. The applicability of the proposed methods is demonstrated on high-dimensional stochastic processes given by overdamped and driven Langevin dynamics of interacting particles.
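Schematically, the optimization described above selects coarse-grained parameters by minimizing the relative entropy between the microscopic path distribution P and a parametric coarse-grained path distribution Q^θ over trajectories on [0, T] (the notation here is generic, not the paper's exact formulation):

```latex
\theta^{*} \;=\; \arg\min_{\theta}\; \mathcal{R}\!\left( P_{[0,T]} \,\middle\|\, Q^{\theta}_{[0,T]} \right),
\qquad
\mathcal{R}\big(P \,\|\, Q^{\theta}\big) \;=\; \int \log \frac{dP}{dQ^{\theta}} \, dP ,
```

with a relative entropy rate replacing the finite-horizon quantity for stationary dynamics.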
Two dimensional Fourier transform methods for fringe pattern analysis
NASA Astrophysics Data System (ADS)
Sciammarella, C. A.; Bhat, G.
An overview of the use of FFTs for fringe pattern analysis is presented, with emphasis on fringe patterns containing displacement information. The techniques are illustrated via analysis of the displacement and strain distributions in the direction perpendicular to the loading, in a disk under diametral compression. The experimental strain distribution is compared to the theoretical, and the agreement is found to be excellent in regions where the elasticity solution models well the actual problem.
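A minimal 1-D sketch of the Fourier-transform fringe-analysis step (isolate the carrier sideband in the FFT, inverse-transform, take the phase) is given below. Carrier removal and phase unwrapping, which a full analysis requires, are omitted, and the sideband indices are inputs the user must choose; this illustrates the general technique rather than the authors' 2-D implementation.

```python
import numpy as np

def fringe_phase(pattern, carrier_band):
    """Fourier fringe analysis, 1-D sketch: FFT the fringe pattern,
    keep only the positive carrier sideband, inverse-FFT, and take the
    argument to obtain the wrapped phase."""
    F = np.fft.fft(pattern)
    lo, hi = carrier_band                # indices bracketing the sideband
    side = np.zeros_like(F)
    side[lo:hi] = F[lo:hi]
    analytic = np.fft.ifft(side)
    return np.angle(analytic)            # wrapped; unwrap and detrend next
```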
Analytical Implications of Using Practice Theory in Workplace Information Literacy Research
ERIC Educational Resources Information Center
Moring, Camilla; Lloyd, Annemaree
2013-01-01
Introduction: This paper considers practice theory and the analytical implications of using this theoretical approach in information literacy research. More precisely the aim of the paper is to discuss the translation of practice theoretical assumptions into strategies that frame the analytical focus and interest when researching workplace…
2008-02-01
Information Theoretic Procedures. Frank Mufalli, Rakesh Nagi, Jim Llinas, Sumita Mishra. SUNY at Buffalo—CUBRC, 4455 Genessee Street, Buffalo, NY; Paine College.
NLPIR: A Theoretical Framework for Applying Natural Language Processing to Information Retrieval.
ERIC Educational Resources Information Center
Zhou, Lina; Zhang, Dongsong
2003-01-01
Proposes a theoretical framework called NLPIR that integrates natural language processing (NLP) into information retrieval (IR) based on the assumption that there exists representation distance between queries and documents. Discusses problems in traditional keyword-based IR, including relevance, and describes some existing NLP techniques.…
Automatic Spike Sorting Using Tuning Information
Ventura, Valérie
2011-01-01
Current spike sorting methods focus on clustering neurons’ characteristic spike waveforms. The resulting spike-sorted data are typically used to estimate how covariates of interest modulate the firing rates of neurons. However, when these covariates do modulate the firing rates, they provide information about spikes’ identities, which thus far have been ignored for the purpose of spike sorting. This letter describes a novel approach to spike sorting, which incorporates both waveform information and tuning information obtained from the modulation of firing rates. Because it efficiently uses all the available information, this spike sorter yields lower spike misclassification rates than traditional automatic spike sorters. This theoretical result is verified empirically on several examples. The proposed method does not require additional assumptions; only its implementation is different. It essentially consists of performing spike sorting and tuning estimation simultaneously rather than sequentially, as is currently done. We used an expectation-maximization maximum likelihood algorithm to implement the new spike sorter. We present the general form of this algorithm and provide a detailed implementable version under the assumptions that neurons are independent and spike according to Poisson processes. Finally, we uncover a systematic flaw of spike sorting based on waveform information only. PMID:19548802
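The key step of this approach (combining waveform likelihoods with tuning-based firing-rate likelihoods when assigning spikes to neurons) corresponds to the expectation step of the EM algorithm the letter describes. The following is an illustrative sketch of that posterior computation under Gaussian waveform models and independent Poisson spiking; it is not the authors' implementation, and the one-spike-per-bin likelihood term is a simplifying assumption.

```python
import numpy as np
from scipy.stats import multivariate_normal, poisson

def spike_posteriors(waveforms, rates, priors, means, covs):
    """Posterior probability of each neuron identity for each spike.

    waveforms: (spikes, features) waveform feature vectors.
    rates: (spikes, neurons) expected counts per bin for each neuron,
           given the covariate value at each spike time (the tuning).
    """
    n_spikes, n_units = rates.shape
    logp = np.zeros((n_spikes, n_units))
    for j in range(n_units):
        logp[:, j] = (np.log(priors[j])
                      + multivariate_normal.logpdf(waveforms,
                                                   means[j], covs[j])
                      + poisson.logpmf(1, rates[:, j]))
    logp -= logp.max(axis=1, keepdims=True)   # stabilize before exp
    post = np.exp(logp)
    return post / post.sum(axis=1, keepdims=True)
```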
Quantitative characterisation of audio data by ordinal symbolic dynamics
NASA Astrophysics Data System (ADS)
Aschenbrenner, T.; Monetti, R.; Amigó, J. M.; Bunk, W.
2013-06-01
Ordinal symbolic dynamics has developed into a valuable method to describe complex systems. Recently, using the concept of transcripts, the coupling behaviour of systems was assessed, combining the properties of the symmetric group with information theoretic ideas. In this contribution, methods from the field of ordinal symbolic dynamics are applied to the characterisation of audio data. Coupling complexity between frequency bands of solo violin music, as a fingerprint of the instrument, is used for classification purposes within a support vector machine scheme. Our results suggest that coupling complexity is able to capture essential characteristics, sufficient to distinguish among different violins.
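As a concrete example of the ordinal-symbolic machinery involved, the sketch below computes normalized permutation entropy from the ordinal patterns of a series; measures built on transcripts and cross-band coupling extend this same bookkeeping to pairs of symbol sequences.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy from ordinal patterns.

    Each length-`order` window (with the given delay) is reduced to the
    permutation that sorts it; the entropy of the pattern frequencies
    is normalized by log2(order!) to lie in [0, 1].
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    patterns = {}
    for i in range(n):
        key = tuple(np.argsort(x[i:i + order * delay:delay]))
        patterns[key] = patterns.get(key, 0) + 1
    p = np.array(list(patterns.values()), dtype=float) / n
    return -np.sum(p * np.log2(p)) / np.log2(factorial(order))
```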
Polarization holograms allow highly efficient generation of complex light beams.
Ruiz, U; Pagliusi, P; Provenzano, C; Volke-Sepúlveda, K; Cipparrone, Gabriella
2013-03-25
We report a viable method to generate complex beams, such as the non-diffracting Bessel and Weber beams, which relies on the encoding of amplitude information, in addition to phase and polarization, using polarization holography. The holograms are recorded in polarization sensitive films by the interference of a reference plane wave with a tailored complex beam, having orthogonal circular polarizations. The high efficiency, the intrinsic achromaticity and the simplicity of use of the polarization holograms make them competitive with respect to existing methods and attractive for several applications. Theoretical analysis, based on the Jones formalism, and experimental results are shown.
Parameter as a Switch Between Dynamical States of a Network in Population Decoding.
Yu, Jiali; Mao, Hua; Yi, Zhang
2017-04-01
Population coding is a method to represent stimuli using the collective activities of a number of neurons. Nevertheless, it is difficult to extract information from these population codes with the noise inherent in neuronal responses. Moreover, it is a challenge to identify the right parameter of the decoding model, which plays a key role for convergence. To address the problem, a population decoding model is proposed for parameter selection. Our method successfully identified the key conditions for a nonzero continuous attractor. Both the theoretical analysis and the application studies demonstrate the correctness and effectiveness of this strategy.
Two-point method uncertainty during control and measurement of cylindrical element diameters
NASA Astrophysics Data System (ADS)
Glukhov, V. I.; Shalay, V. V.; Radev, H.
2018-04-01
The article addresses the pressing problem of the reliability of measurements of the geometric specifications of technical products. Its purpose is to improve the quality of control of parts' linear sizes by the two-point measurement method, and its task is to investigate the methodical expanded uncertainties in measuring the linear sizes of cylindrical elements. The investigation method is geometric modeling of the shape and location deviations of element surfaces in a rectangular coordinate system. The studies were carried out for elements of various service uses, taking into account their informativeness, corresponding to the classes of kinematic pairs in theoretical mechanics and to the number of degrees of freedom constrained when the element serves as a datum. Cylindrical elements with informativeness of 4, 2, 1 and 0 (zero) were investigated. The uncertainties in two-point measurements were estimated by comparing the results of linear-dimension measurements with the maximum and minimum functional diameters of the element material. Methodical uncertainty arises when cylindrical elements with maximum informativeness have shape deviations of the cut and curvature types, and it arises in measuring the element's average size for all types of shape deviations. The two-point measurement method cannot take into account the location deviations of a dimensional element, so its use for elements with less than maximum informativeness creates unacceptable methodical uncertainties in measurements of the maximum, minimum and average linear dimensions. Similar methodical uncertainties also arise in the arbitration control of the linear dimensions of cylindrical elements by limiting two-point gauges.
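The methodical uncertainty for odd-lobed shape deviations can be demonstrated with a toy model. The sketch below assumes a 3-lobed profile (illustrative numbers, not data from the article) and shows that a two-point (caliper-style) diameter reading stays at 2R and misses the form deviation bounded by the functional diameters.

```python
# Sketch: a two-point diameter measurement is blind to odd-lobed form deviations.
import numpy as np

R, e = 10.0, 0.05                      # nominal radius and lobing amplitude, mm (toy)
theta = np.linspace(0, 2 * np.pi, 3600, endpoint=False)
r = R + e * np.cos(3 * theta)          # 3-lobed profile

# Two-point measurement at angle t: sum of opposite radii.
two_point = r + (R + e * np.cos(3 * (theta + np.pi)))
print(two_point.min(), two_point.max())   # both ~2R: the lobing is invisible

# Functional (material) diameter limits that the two-point method misses:
print("max radius-based diameter:", 2 * r.max())  # 2*(R+e)
print("min radius-based diameter:", 2 * r.min())  # 2*(R-e)
```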
NASA Astrophysics Data System (ADS)
Rey, Michael; Nikitin, Andrei V.; Tyuterev, Vladimir G.
2017-10-01
Modeling atmospheres of hot exoplanets and brown dwarfs requires high-T databases that include methane as the major hydrocarbon. We report a complete theoretical line list of 12CH4 in the infrared range 0-13,400 cm-1 up to Tmax = 3000 K computed via a full quantum-mechanical method from ab initio potential energy and dipole moment surfaces. Over 150 billion transitions were generated with the lower rovibrational energy cutoff 33,000 cm-1 and intensity cutoff down to 10^-33 cm/molecule to ensure convergent opacity predictions. Empirical corrections for 3.7 million of the strongest transitions permitted line position accuracies of 0.001-0.01 cm-1. Full data are partitioned into two sets. “Light lists” contain strong and medium transitions necessary for an accurate description of sharp features in absorption/emission spectra. For a fast and efficient modeling of quasi-continuum cross sections, billions of tiny lines are compressed in “super-line” libraries according to Rey et al. These combined data will be freely accessible via the TheoReTS information system (http://theorets.univ-reims.fr, http://theorets.tsu.ru), which provides a user-friendly interface for simulations of absorption coefficients, cross-sectional transmittance, and radiance. Comparisons with cold, room, and high-T experimental data show that the data reported here represent the first global theoretical methane lists suitable for high-resolution astrophysical applications.
A Framework for Characterizing eHealth Literacy Demands and Barriers
Chan, Connie V
2011-01-01
Background Consumer eHealth interventions are of growing importance in the individual management of health and health behaviors. However, a range of access, resource, and skill barriers prevents health care consumers from fully engaging in and benefiting from the spectrum of eHealth interventions. Consumers may engage in a range of eHealth tasks, such as participating in health discussion forums and entering information into a personal health record. eHealth literacy refers to a set of skills and knowledge essential for productive interactions with technology-based health tools, such as proficiency in information retrieval strategies and in communicating health concepts effectively. Objective We propose a theoretical and methodological framework for characterizing complexity of eHealth tasks, which can be used to diagnose and describe literacy barriers and inform the development of solution strategies. Methods We adapted and integrated two existing theoretical models relevant to the analysis of eHealth literacy into a single framework to systematically categorize and describe task demands and user performance on tasks needed by health care consumers in the information age. The method derived from the framework is applied to (1) code task demands using a cognitive task analysis, and (2) code user performance on tasks. The framework and method are applied to the analysis of a Web-based consumer eHealth task with information-seeking and decision-making demands. We present the results from the in-depth analysis of the task performance of a single user as well as of 20 users on the same task to illustrate both the detailed analysis and the aggregate measures obtained and potential analyses that can be performed using this method. Results The analysis shows that the framework can be used to classify task demands as well as the barriers encountered in user performance of the tasks. Our approach can be used to (1) characterize the challenges confronted by participants in performing the tasks, (2) determine the extent to which application of the framework to the cognitive task analysis can predict and explain the problems encountered by participants, and (3) inform revisions to the framework to increase accuracy of predictions. Conclusions The results of this illustrative application suggest that the framework is useful for characterizing task complexity and for diagnosing and explaining barriers encountered in task completion. The framework and analytic approach can be a potentially powerful generative research platform to inform development of rigorous eHealth examination and design instruments, such as to assess eHealth competence, to design and evaluate consumer eHealth tools, and to develop an eHealth curriculum. PMID:22094891
Sokol, Serguei; Portais, Jean-Charles
2015-01-01
The dynamics of label propagation in a stationary metabolic network during an isotope labeling experiment can provide highly valuable information on the network topology, metabolic fluxes, and on the size of metabolite pools. However, major issues, both in the experimental set-up and in the accompanying numerical methods currently limit the application of this approach. Here, we propose a method to apply novel types of label inputs, sinusoidal or more generally periodic label inputs, to address both the practical and numerical challenges of dynamic labeling experiments. By considering a simple metabolic system, i.e. a linear, non-reversible pathway of arbitrary length, we develop mathematical descriptions of label propagation for both classical and novel label inputs. Theoretical developments and computer simulations show that the application of rectangular periodic pulses has both numerical and practical advantages over other approaches. We applied the strategy to estimate fluxes in a simulated experiment performed on a complex metabolic network (the central carbon metabolism of Escherichia coli), to further demonstrate its value in conditions which are close to those in real experiments. This study provides a theoretical basis for the rational interpretation of label propagation curves in real experiments, and will help identify the strengths, pitfalls and limitations of such experiments. The cases described here can also be used as test cases for more general numerical methods aimed at identifying network topology, analyzing metabolic fluxes or measuring concentrations of metabolites. PMID:26641860
Managing Pacific salmon escapements: The gaps between theory and reality
Knudsen, E. Eric; Knudsen, E. Eric; Steward, Cleveland R.; MacDonald, Donald D.; Williams, Jack E.; Reiser, Dudley W.
1999-01-01
There are myriad challenges to estimating intrinsic production capacity for Pacific salmon populations that are heavily exploited and/or suffering from habitat alteration. Likewise, it is difficult to determine whether perceived decreases in production are due to harvest, habitat, or hatchery influences, natural variation, or some combination of all four. There are dramatic gaps between the true nature of the salmon spawner/recruit relationship and the theoretical basis for describing and understanding the relationship. Importantly, there are also extensive practical difficulties associated with gathering and interpreting accurate escapement and run-size information and applying it to population management. Paradoxically, certain aspects of salmon management may well be contributing to losses in abundance and biodiversity, including harvesting salmon in mixed population fisheries, grouping populations into management units subject to a common harvest rate, and fully exploiting all available hatchery fish at the expense of wild fish escapements. Information on U.S. Pacific salmon escapement goal-setting methods, escapement data collection methods and estimation types, and the degree to which stocks are subjected to mixed stock fisheries was summarized and categorized for 1,025 known management units consisting of 9,430 known populations. Using criteria developed in this study, only 1% of U.S. escapement goals were set by methods rated as excellent. Escapement goals for 16% of management units were rated as good. Over 60% of escapement goals have been set by methods rated as either fair or poor, and 22% of management units have no escapement goals at all. Of the 9,430 populations for which any information was available, 6,614 (70%) had sufficient information to categorize the method by which escapement data are collected. Of those, data collection methods were rated as excellent for 1%, good for 1%, fair for 2%, and poor for 52%. Escapement estimates are not made for 44% of populations. Escapement estimation type (quality of the data resulting from survey methods) was rated as excellent for <1%, good for 30%, fair for 3%, poor for 22%, and nonexistent for 45%. Numerous recommendations for improvements in escapement management are made in this chapter. In general, improvements are needed in theoretical escapement management techniques, escapement goal setting methods, and escapement and run size data quality. There is also a need to change managers' and harvesters' expectations to coincide with the natural variation and uncertainty in the abundance of salmon populations. All the recommendations are aimed at optimizing the number of spawners: healthy escapements ensure salmon sustainability by providing eggs for future production, nutrients to the system, and genetic diversity.
Theoretical foundations for information representation and constraint specification
NASA Technical Reports Server (NTRS)
Menzel, Christopher P.; Mayer, Richard J.
1991-01-01
Research accomplished at the Knowledge Based Systems Laboratory of the Department of Industrial Engineering at Texas A&M University is described. Outlined here are the theoretical foundations necessary to construct a Neutral Information Representation Scheme (NIRS), which will allow for automated data transfer and translation between model languages, procedural programming languages, database languages, transaction and process languages, and knowledge representation and reasoning control languages for information system specification.
An algorithmic approach to crustal deformation analysis
NASA Technical Reports Server (NTRS)
Iz, Huseyin Baki
1987-01-01
In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.
Sheehan, B.; Yen, P.; Velez, O.; Nobile-Hernandez, D.; Tiase, V.
2014-01-01
Summary Objectives We describe an innovative community-centered participatory design approach, Consumer-centered Participatory Design (C2PD), and the results of applying C2PD to design and develop a web-based fall prevention system. Methods We conducted focus groups and design sessions with English- and Spanish-speaking community-dwelling older adults. Focus group data were summarized and used to inform the context of the design sessions. Descriptive content analysis methods were used to develop categorical descriptions of design-session informants' needs related to information technology. Results The C2PD approach enabled the assessment and identification of informants' needs for health information technology (HIT), which informed the development of a falls prevention system. We learned that our informants needed a system that provides variation in functions/content; differentiates between actionable/non-actionable information/structures; and contains sensory cues that support wide-ranging and complex tasks in a varied, simple, and clear interface to facilitate self-management. Conclusions The C2PD approach provides community-based organizations, academic researchers, and commercial entities with a systematic, theoretically informed approach to develop HIT innovations. Our community-centered participatory design approach focuses on consumers' technology needs while taking into account core public health functions. PMID:25589909
Spiritual Diversity and Living with Early-Stage Dementia.
McGee, Jocelyn Shealy; Zhao, Holly Carlson; Myers, Dennis R; Seela Eaton, Hannah
2018-01-01
Attention to spiritual diversity is necessary for the provision of culturally informed clinical care for people with early-stage dementia and their family members. In this article, an evidence-based theoretical framework for conceptualizing spiritual diversity is described in detail (Pargament, 2011). The framework is then applied to two clinical case studies of people living with early-stage dementia to elucidate the multilayered components of spiritual diversity in this population. The case studies were selected from a larger mixed-methods study on spirituality, positive psychological factors, health, and well-being in people living with early-stage dementia and their family members. To our knowledge this is the first systematic attempt to apply a theoretical framework for understanding spiritual diversity in this population. Implications for clinical practice are provided.
Accurate sparse-projection image reconstruction via nonlocal TV regularization.
Zhang, Yi; Zhang, Weihua; Zhou, Jiliu
2014-01-01
Sparse-projection image reconstruction is a useful approach to lower the radiation dose; however, the incompleteness of projection data will cause degeneration of imaging quality. As a typical compressive sensing method, total variation has attracted great attention for this problem. Owing to its theoretical imperfection, however, total variation produces blocky effects on smooth regions and blurs edges. To overcome this problem, in this paper we introduce nonlocal total variation into sparse-projection image reconstruction and formulate the minimization problem with a new nonlocal total variation norm. The qualitative and quantitative analyses of numerical as well as clinical results demonstrate the validity of the proposed method. Compared to other existing methods, our method more efficiently suppresses artifacts caused by low-rank reconstruction and preserves structure information better.
Comparison of information theoretic divergences for sensor management
NASA Astrophysics Data System (ADS)
Yang, Chun; Kadar, Ivan; Blasch, Erik; Bakich, Michael
2011-06-01
In this paper, we compare the information-theoretic metrics of the Kullback-Leibler (K-L) and Renyi (α) divergence formulations for sensor management. Information-theoretic metrics have been well suited for sensor management as they afford comparisons between distributions resulting from different types of sensors under different actions. The difference in distributions can also be measured as entropy formulations to discern the communication channel capacity (i.e., Shannon limit). In this paper, we formulate a sensor management scenario for target tracking and compare various metrics for performance evaluation as a function of the design parameter (α) so as to determine which measures might be appropriate for sensor management given the dynamics of the scenario and design parameter.
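For reference, both divergences reduce to short formulas for discrete beliefs. The sketch below uses the generic textbook definitions with toy belief vectors, not the paper's tracking scenario, and shows the Renyi divergence approaching K-L as the design parameter α tends to 1.

```python
# Sketch: K-L and Renyi-alpha divergences for discrete distributions.
import numpy as np

def kl(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

def renyi(p, q, alpha):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.log(np.sum(p**alpha * q**(1 - alpha))) / (alpha - 1))

p = [0.7, 0.2, 0.1]   # belief after a candidate sensor action (toy numbers)
q = [0.5, 0.3, 0.2]   # prior belief
print(kl(p, q))
for a in (0.5, 0.9, 1.1, 2.0):
    print(a, renyi(p, q, a))   # Renyi -> K-L as alpha -> 1
```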
Quantum Information Processing with Large Nuclear Spins in GaAs Semiconductors
NASA Astrophysics Data System (ADS)
Leuenberger, Michael N.; Loss, Daniel; Poggio, M.; Awschalom, D. D.
2003-03-01
We propose an implementation for quantum information processing based on coherent manipulations of nuclear spins I=3/2 in GaAs semiconductors. We describe theoretically an NMR method which involves multiphoton transitions and which exploits the nonequidistance of nuclear spin levels due to quadrupolar splittings. Starting from known spin anisotropies we derive effective Hamiltonians in a generalized rotating frame, valid for arbitrary I, which allow us to describe the nonperturbative time evolution of spin states generated by magnetic rf fields. We identify an experimentally observable regime for multiphoton Rabi oscillations. In the nonlinear regime, we find Berry phase interference. Ref: PRL 89, 207601 (2002).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christiansen, R.L.; Kalbus, J.S.; Howarth, S.M.
This report documents, demonstrates, evaluates, and provides theoretical justification for methods used to convert experimental data into relative permeability relationships. The report facilitates accurate determination of relative permeabilities of anhydrite rock samples from the Salado Formation at the Waste Isolation Pilot Plant (WIPP). Relative permeability characteristic curves are necessary for WIPP Performance Assessment (PA) predictions of the potential for flow of waste-generated gas from the repository and brine flow into the repository. This report follows Christiansen and Howarth (1995), a comprehensive literature review of methods for measuring relative permeability. It focuses on unsteady-state experiments and describes five methods for obtaining relative permeability relationships from them. Unsteady-state experimental methods were recommended for relative permeability measurements of low-permeability anhydrite rock samples from the Salado Formation because these tests produce accurate relative permeability information and take significantly less time to complete than steady-state tests. The five methods are the Welge method, the Johnson-Bossler-Naumann method, the Jones-Roszelle method, the Ramakrishnan-Cappiello method, and the Hagoort method. A summary, an example of the calculations, and a theoretical justification are provided for each of the five methods. Displacements in porous media are numerically simulated for the calculation examples. The simulated production data were processed using the methods, and the relative permeabilities obtained were compared with those input to the numerical model. A variety of operating conditions were simulated to show the sensitivity of production behavior to rock-fluid properties.
Brain activity and cognition: a connection from thermodynamics and information theory
Collell, Guillem; Fauquet, Jordi
2015-01-01
The connection between brain and mind is an important scientific and philosophical question that we are still far from completely understanding. A crucial point to our work is noticing that thermodynamics provides a convenient framework to model brain activity, whereas cognition can be modeled in information-theoretical terms. In fact, several models have been proposed so far from both approaches. A second critical remark is the existence of deep theoretical connections between thermodynamics and information theory. In fact, some well-known authors claim that the laws of thermodynamics are nothing but principles in information theory. Unlike in physics or chemistry, a formalization of the relationship between information and energy is currently lacking in neuroscience. In this paper we propose a framework to connect physical brain and cognitive models by means of the theoretical connections between information theory and thermodynamics. Ultimately, this article aims at providing further insight on the formal relationship between cognition and neural activity. PMID:26136709
Esquivel, Rodolfo O; Molina-Espíritu, Moyocoyani; López-Rosa, Sheila; Soriano-Correa, Catalina; Barrientos-Salcedo, Carolina; Kohout, Miroslav; Dehesa, Jesús S
2015-08-24
In this work we undertake a pioneering information-theoretical analysis of 18 selected amino acids extracted from a natural protein, bacteriorhodopsin (1C3W). The conformational structures of each amino acid are analyzed by use of various quantum chemistry methodologies at high levels of theory: HF, M062X and CISD(Full). The Shannon entropy, Fisher information and disequilibrium are determined to grasp the spatial spreading features of delocalizability, order and uniformity of the optimized structures. These three entropic measures uniquely characterize all amino acids through a predominant information-theoretic quality scheme (PIQS), which gathers all chemical families by means of three major spreading features: delocalization, narrowness and uniformity. This scheme recognizes four major chemical families: aliphatic (delocalized), aromatic (delocalized), electro-attractive (narrowed) and tiny (uniform). All chemical families recognized by the existing energy-based classifications are embraced by this entropic scheme. Finally, novel chemical patterns are shown in the information planes associated with the PIQS entropic measures. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
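The three entropic measures named above have compact discretized forms. As a hedged illustration, the sketch below evaluates the standard definitions S = -∫ρ ln ρ, D = ∫ρ², and I = ∫(ρ')²/ρ for a toy one-dimensional Gaussian density; the paper itself applies them to three-dimensional electron densities from quantum-chemistry calculations.

```python
# Sketch: Shannon entropy, disequilibrium, and Fisher information on a grid.
import numpy as np

x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]
rho = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # toy density: standard normal

S = -np.sum(rho * np.log(rho)) * dx            # Shannon entropy (delocalization)
D = np.sum(rho**2) * dx                        # disequilibrium (uniformity)
grad = np.gradient(rho, dx)
I = np.sum(grad**2 / rho) * dx                 # Fisher information (order)

# For N(0,1): S = 0.5*ln(2*pi*e) ~ 1.419, D = 1/(2*sqrt(pi)) ~ 0.282, I = 1
print(S, D, I)
```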
Understanding managerial behaviour during initial steps of a clinical information system adoption
2011-01-01
Background While the study of the information technology (IT) implementation process and its outcomes has received considerable attention, the examination of pre-adoption and pre-implementation stages of configurable IT uptake appears largely under-investigated. This paper explores managerial behaviour during the periods prior to the effective implementation of a clinical information system (CIS) by two Canadian university multi-hospital centers. Methods Adopting a structurationist theoretical stance and a case study research design, the processes by which CIS managers' patterns of discourse contribute to the configuration of the new technology in their respective organizational contexts were longitudinally examined over 33 months. Results Although managers seemed to be aware of the risks and organizational impact of the adoption of a new clinical information system, their decisions and actions over the periods examined appeared rather to be driven by financial constraints and power struggles between different groups involved in the process. Furthermore, they largely emphasized technological aspects of the implementation, with organizational dimensions being put aside. In view of these results, the notion of 'rhetorical ambivalence' is proposed. Results are further discussed in relation to the significance of initial decisions and actions for the subsequent implementation phases of the technology being configured. Conclusions Theoretically and empirically grounded, the paper contributes to the underdeveloped body of literature on information system pre-implementation processes by revealing the crucial role played by managers during the initial phases of a CIS adoption. PMID:21682885
Study of optimum methods of optical communication
NASA Technical Reports Server (NTRS)
Harger, R. O.
1972-01-01
Optimum methods of optical communication are described, accounting for the effects of the turbulent atmosphere and of quantum mechanics, treated both by the semi-classical method and by the full-fledged quantum-theoretical model. A concerted effort is discussed to apply the techniques of communication theory to the novel problems of optical communication through a careful study of realistic models and their statistical descriptions, the finding of appropriate optimum structures, the calculation of their performance and, insofar as possible, their comparison with conventional and other suboptimal systems. In this unified way the bounds on performance and the structure of optimum communication systems for transmission of information, imaging, tracking, and estimation can be determined for optical channels.
Outcome of the First wwPDB Hybrid/Integrative Methods Task Force Workshop.
Sali, Andrej; Berman, Helen M; Schwede, Torsten; Trewhella, Jill; Kleywegt, Gerard; Burley, Stephen K; Markley, John; Nakamura, Haruki; Adams, Paul; Bonvin, Alexandre M J J; Chiu, Wah; Peraro, Matteo Dal; Di Maio, Frank; Ferrin, Thomas E; Grünewald, Kay; Gutmanas, Aleksandras; Henderson, Richard; Hummer, Gerhard; Iwasaki, Kenji; Johnson, Graham; Lawson, Catherine L; Meiler, Jens; Marti-Renom, Marc A; Montelione, Gaetano T; Nilges, Michael; Nussinov, Ruth; Patwardhan, Ardan; Rappsilber, Juri; Read, Randy J; Saibil, Helen; Schröder, Gunnar F; Schwieters, Charles D; Seidel, Claus A M; Svergun, Dmitri; Topf, Maya; Ulrich, Eldon L; Velankar, Sameer; Westbrook, John D
2015-07-07
Structures of biomolecular systems are increasingly computed by integrative modeling that relies on varied types of experimental data and theoretical information. We describe here the proceedings and conclusions from the first wwPDB Hybrid/Integrative Methods Task Force Workshop held at the European Bioinformatics Institute in Hinxton, UK, on October 6 and 7, 2014. At the workshop, experts in various experimental fields of structural biology, experts in integrative modeling and visualization, and experts in data archiving addressed a series of questions central to the future of structural biology. How should integrative models be represented? How should the data and integrative models be validated? What data should be archived? How should the data and models be archived? What information should accompany the publication of integrative models? Copyright © 2015 Elsevier Ltd. All rights reserved.
Zamora-López, Gorka; Zhou, Changsong; Kurths, Jürgen
2009-01-01
Sensory stimuli entering the nervous system follow particular paths of processing, typically separated (segregated) from the paths of other modal information. However, sensory perception, awareness and cognition emerge from the combination of information (integration). The corticocortical networks of cats and macaque monkeys display three prominent characteristics: (i) modular organisation (facilitating the segregation), (ii) abundant alternative processing paths and (iii) the presence of highly connected hubs. Here, we study in detail the organisation and potential function of the cortical hubs by graph analysis and information theoretical methods. We find that the cortical hubs form a spatially delocalised, but topologically central module with the capacity to integrate multisensory information in a collaborative manner. With this, we resolve the underlying anatomical substrate that supports the simultaneous capacity of the cortex to segregate and to integrate multisensory information. PMID:20428515
Print advertisements for Alzheimer's disease drugs: informational and transformational features.
Gooblar, Jonathan; Carpenter, Brian D
2013-06-01
We examined print advertisements for Alzheimer's disease drugs published in journals and magazines between January 2008 and February 2012, using an informational versus transformational theoretical framework to identify objective and persuasive features. In 29 unique advertisements, we used qualitative methods to code and interpret identifying information, charts, benefit and side effect language, and persuasive appeals embedded in graphics and narratives. Most elements contained a mixture of informational and transformational features. Charts were used infrequently, but when they did appear the accompanying text often exaggerated the data. Benefit statements covered an array of symptoms, drug properties, and caregiver issues. Side effect statements often used positive persuasive appeals. Graphics and narrative features emphasized positive emotions and outcomes. We found subtle and sophisticated attempts both to educate and to persuade readers. It is important for consumers and prescribing physicians to read print advertisements critically so that they can make informed treatment choices.
Alignment method for solar collector arrays
Driver, Jr., Richard B
2012-10-23
The present invention is directed to an improved method for establishing camera fixture location for aligning mirrors on a solar collector array (SCA) comprising multiple mirror modules. The method aligns the mirrors on a module by comparing the location of the receiver image in photographs with the predicted theoretical receiver image location. To accurately align an entire SCA, a common reference is used for all of the individual module images within the SCA. The improved method can use relative pixel location information in digital photographs along with alignment fixture inclinometer data to calculate relative locations of the fixture between modules. The absolute locations are determined by minimizing alignment asymmetry for the SCA. The method inherently aligns all of the mirrors in an SCA to the receiver, even with receiver position and module-to-module alignment errors.
Integrative Potential of Architectural Activities
NASA Astrophysics Data System (ADS)
Davydova, O. V.
2017-11-01
The integrative potential of architectural activity is considered through the combination and organization of universal human and professional, artificial and natural, social and individual components of architectural activity in a multidimensional unity that reflects and influences public thinking through the artistic-figurative language of international communication, using experimental form-building, interactive presentations, theatrical and gaming expressiveness, and methods of design and advertising to establish easier contact with the consumer. The methodology is used to reflect the mutual influence of personal and social problems under globalization and to identify these problems in the public sphere; to study existing methods of solving them and analyze their effectiveness; and to search for topical problems and new solutions using the latest achievements of technological progress and artistic patterns, creating a holistic architectural image that reflects the author's worldview within the general picture of the modern world with its inherent tendencies of "Surah" and "entertainment". Operative means of communication in the chain of social experience (teacher, trainee, new educational result) are developed to transmit updated information in generalized form, with current and final control through feedback sheets, supporting summaries and information cards. The paper also considers the efficiency of study time achieved by organizing research activity that allows students to obtain generalized theoretical information in the process of filling in or compiling informative and diagnostic maps, which provide the theoretical framework for creative activity through gaming activity that turns into work activity with a diagnosable result.
Accurate line intensities of methane from first-principles calculations
NASA Astrophysics Data System (ADS)
Nikitin, Andrei V.; Rey, Michael; Tyuterev, Vladimir G.
2017-10-01
In this work, we report first-principle theoretical predictions of methane spectral line intensities that are competitive with (and complementary to) the best laboratory measurements. A detailed comparison with the most accurate data shows that discrepancies in integrated polyad intensities are in the range of 0.4%-2.3%. This corresponds to estimations of the best available accuracy in laboratory Fourier Transform spectra measurements for this quantity. For relatively isolated strong lines the individual intensity deviations are in the same range. A comparison with the most precise laser measurements of the multiplet intensities in the 2ν3 band gives an agreement within the experimental error margins (about 1%). This is achieved for the first time for five-atomic molecules. In the Supplementary Material we provide the lists of theoretical intensities at 269 K for over 5000 strongest transitions in the range below 6166 cm-1. The advantage of the described method is that this offers a possibility to generate fully assigned exhaustive line lists at various temperature conditions. Extensive calculations up to 12,000 cm-1 including high-T predictions will be made freely available through the TheoReTS information system (http://theorets.univ-reims.fr, http://theorets.tsu.ru) that contains ab initio born line lists and provides a user-friendly graphical interface for a fast simulation of the absorption cross-sections and radiance.
Making a Traditional Study-Abroad Program Geographic: A Theoretically Informed Regional Approach
ERIC Educational Resources Information Center
Jokisch, Brad
2009-01-01
Geographers have been active in numerous focused study-abroad programs, but few have created or led language-based programs overseas. This article describes the development of a Spanish language program in Ecuador and how it was made geographic primarily through a theoretically informed regional geography course. The approach employs theoretical…
Content Based Image Retrieval and Information Theory: A General Approach.
ERIC Educational Resources Information Center
Zachary, John; Iyengar, S. S.; Barhen, Jacob
2001-01-01
Proposes an alternative real valued representation of color based on the information theoretic concept of entropy. A theoretical presentation of image entropy is accompanied by a practical description of the merits and limitations of image entropy compared to color histograms. Results suggest that image entropy is a promising approach to image…
ERIC Educational Resources Information Center
Fleer, Marilyn
2016-01-01
The concept of "perezhivanie" has received increasing attention in recent years. However, a clear understanding of this term has not yet been established. Mostly what is highlighted is the need for more informed theoretical discussion. In this paper, discussions centre on what "perezhivanie" means for research in early…
The Public Library User and the Charter Tourist: Two Travellers, One Analogy
ERIC Educational Resources Information Center
Eriksson, Catarina A. M.; Michnik, Katarina E.; Nordeborg, Yoshiko
2013-01-01
Introduction: A new theoretical model, relevant to library and information science, is implemented in this paper. The aim of this study is to contribute to the theoretical concepts of library and information science by introducing an ethnological model developed for investigating charter tourist styles thereby increasing our knowledge of users'…
Towards Improved Student Experiences in Service Learning in Information Systems Courses
ERIC Educational Resources Information Center
Petkova, Olga
2017-01-01
The paper explores relevant past research on service-learning in Information Systems courses since 2000. One of the conclusions from this is that most of the publications are not founded on specific theoretical models and are mainly about sharing instructor or student experiences. Then several theoretical frameworks from Education and other…
ERIC Educational Resources Information Center
Park, Young Ki
2011-01-01
This study explains the role of information technologies in enabling organizations to successfully sense and manage opportunities and threats and achieve competitive advantage in turbulent environments. I use two approaches, a set-theoretic configurational theory approach and a variance theory approach, which are theoretically and methodologically…
ERIC Educational Resources Information Center
Schalock, Robert L.; Luckasson, Ruth; Tassé, Marc J.; Verdugo, Miguel Angel
2018-01-01
This article describes a holistic theoretical framework that can be used to explain intellectual disability (ID) and organize relevant information into a usable roadmap to guide understanding and application. Developing the framework involved analyzing the four current perspectives on ID and synthesizing this information into a holistic…
NASA Technical Reports Server (NTRS)
Kim, Hakil; Swain, Philip H.
1990-01-01
An axiomatic approach to interval-valued (IV) probabilities is presented, in which the IV probability is defined by a pair of set-theoretic functions that satisfy some pre-specified axioms. On the basis of this approach, the representation of statistical evidence and the combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated, and they entail more intelligent strategies for making decisions. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. Then the method is applied to two separate cases of classifying multiband data obtained by a single sensor. In each case a set of multiple sources is obtained by dividing the dimensionally huge data into smaller and more manageable pieces based on the global statistical correlation information. By a divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
A Method of Signal Scrambling to Secure Data Storage for Healthcare Applications.
Bao, Shu-Di; Chen, Meng; Yang, Guang-Zhong
2017-11-01
A body sensor network that consists of wearable and/or implantable biosensors has been an important front-end for collecting personal health records. It is expected that the full integration of outside-hospital personal health information and hospital electronic health records will further promote preventative health services as well as global health. However, the integration and sharing of health information is bound to bring with it security and privacy issues. With extensive development of healthcare applications, security and privacy issues are becoming increasingly important. This paper addresses the potential security risks of healthcare data in Internet-based applications and proposes a method of signal scrambling as an add-on security mechanism in the application layer for a variety of healthcare information, where a piece of tiny data is used to scramble healthcare records. The former is kept locally and the latter, along with security protection, is sent for cloud storage. The tiny data can be derived from a random number generator or even a piece of healthcare data, which makes the method more flexible. The computational complexity and security performance in terms of theoretical and experimental analysis has been investigated to demonstrate the efficiency and effectiveness of the proposed method. The proposed method is applicable to all kinds of data that require extra security protection within complex networks.
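A minimal sketch of the add-on idea follows, assuming a permutation-based scrambler seeded by the locally kept "tiny data"; the authors' exact scrambling algorithm is not specified here, so this is an illustrative stand-in.

```python
# Sketch: a tiny, locally kept key seeds a reversible scrambling of a health
# record before cloud upload; the key never leaves the device.
import numpy as np

def scramble(signal, tiny_key):
    rng = np.random.default_rng(tiny_key)       # tiny data -> permutation
    perm = rng.permutation(len(signal))
    return signal[perm]

def unscramble(scrambled, tiny_key):
    rng = np.random.default_rng(tiny_key)
    perm = rng.permutation(len(scrambled))      # regenerate the same permutation
    out = np.empty_like(scrambled)
    out[perm] = scrambled
    return out

ecg = np.sin(np.linspace(0, 20 * np.pi, 1000))  # stand-in physiological signal
cloud_copy = scramble(ecg, tiny_key=927001)     # stored remotely
restored = unscramble(cloud_copy, tiny_key=927001)
assert np.allclose(ecg, restored)
```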
Leveraging graph topology and semantic context for pharmacovigilance through twitter-streams.
Eshleman, Ryan; Singh, Rahul
2016-10-06
Adverse drug events (ADEs) constitute one of the leading causes of post-therapeutic death and their identification constitutes an important challenge of modern precision medicine. Unfortunately, the onset and effects of ADEs are often underreported, complicating timely intervention. At over 500 million posts per day, Twitter is a commonly used social media platform. The ubiquity of day-to-day personal information exchange on Twitter makes it a promising target for data mining for ADE identification and intervention. Three technical challenges are central to this problem: (1) identification of salient medical keywords in (noisy) tweets, (2) mapping drug-effect relationships, and (3) classification of such relationships as adverse or non-adverse. We use a bipartite graph-theoretic representation called a drug-effect graph (DEG) for modeling drug and side effect relationships by representing the drugs and side effects as vertices. We construct individual DEGs on two data sources. The first DEG is constructed from the drug-effect relationships found in FDA package inserts as recorded in the SIDER database. The second DEG is constructed by mining the history of Twitter users. We use dictionary-based information extraction to identify medically-relevant concepts in tweets. Drugs, along with co-occurring symptoms, are connected with edges weighted by temporal distance and frequency. Finally, information from the SIDER DEG is integrated with the Twitter DEG and edges are classified as either adverse or non-adverse using supervised machine learning. We examine both graph-theoretic and semantic features for the classification task. The proposed approach can identify adverse drug effects with high accuracy, with precision exceeding 85% and F1 exceeding 81%. When compared with leading methods at the state-of-the-art, which employ un-enriched graph-theoretic analysis alone, our method leads to improvements ranging between 5 and 8% in terms of the aforementioned measures. Additionally, we employ our method to discover several ADEs which, though present in medical literature and Twitter-streams, are not represented in the SIDER database. We present a DEG integration model as a powerful formalism for the analysis of drug-effect relationships that is general enough to accommodate diverse data sources, yet rigorous enough to provide a strong mechanism for ADE identification.
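A minimal sketch of the DEG representation follows, using networkx with invented vertices, weights, and labels (nothing below is drawn from SIDER or Twitter).

```python
# Sketch: drugs and effects as the two vertex sets of a bipartite drug-effect
# graph (DEG), edges weighted by co-occurrence frequency.
import networkx as nx

deg = nx.Graph()
deg.add_nodes_from(["drug:metformin", "drug:ibuprofen"], bipartite="drug")
deg.add_nodes_from(["effect:nausea", "effect:headache"], bipartite="effect")

# Edge weight: co-occurrence frequency (optionally decayed by temporal distance).
deg.add_edge("drug:metformin", "effect:nausea", weight=14, label="adverse")
deg.add_edge("drug:ibuprofen", "effect:headache", weight=3, label="non-adverse")

# Simple graph-theoretic features of the kind usable for edge classification:
for u, v, d in deg.edges(data=True):
    d["feat_degree_sum"] = deg.degree(u) + deg.degree(v)
print(list(deg.edges(data=True)))
```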
A Markov game theoretic data fusion approach for cyber situational awareness
NASA Astrophysics Data System (ADS)
Shen, Dan; Chen, Genshe; Cruz, Jose B., Jr.; Haynes, Leonard; Kruger, Martin; Blasch, Erik
2007-04-01
This paper proposes an innovative data-fusion/data-mining game theoretic situation awareness and impact assessment approach for cyber network defense. Alerts generated by Intrusion Detection Sensors (IDSs) or Intrusion Prevention Sensors (IPSs) are fed into the data refinement (Level 0) and object assessment (L1) data fusion components. High-level situation/threat assessment (L2/L3) data fusion based on a Markov game model and Hierarchical Entity Aggregation (HEA) is proposed to refine the primitive prediction generated by adaptive feature/pattern recognition and capture new unknown features. A Markov (stochastic) game method is used to estimate the belief of each possible cyber attack pattern. Game theory captures the nature of cyber conflicts: determination of the attacking-force strategies is tightly coupled to determination of the defense-force strategies and vice versa. Also, Markov game theory deals with uncertainty and incompleteness of available information. A software tool is developed to demonstrate the performance of the high-level information fusion for cyber network defense situations, and a simulation example shows the enhanced understanding of cyber-network defense.
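The belief-estimation step can be illustrated with a plain Markov filter over hypothesized attack patterns. The sketch below is a standard hidden-Markov update with toy numbers; the paper's full Markov game additionally couples attacker and defender strategies, which this sketch omits.

```python
# Sketch: belief over attack patterns updated with each incoming alert.
import numpy as np

P = np.array([[0.8, 0.2],      # transition between hypothesized attack patterns
              [0.1, 0.9]])     # e.g. 0 = scanning, 1 = escalation (toy values)
likelihood = {"port_scan": np.array([0.7, 0.1]),   # P(alert | pattern), assumed
              "priv_alert": np.array([0.1, 0.6])}

belief = np.array([0.5, 0.5])
for alert in ["port_scan", "port_scan", "priv_alert"]:
    belief = likelihood[alert] * (P.T @ belief)    # predict, then weight by alert
    belief /= belief.sum()
    print(alert, belief)
```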
Shiroff, Jennifer J; Gregoski, Mathew J
2017-06-01
Measurement of recessive carrier screening attitudes related to conception and pregnancy is necessary to determine current acceptance, and whether behavioral intervention strategies are needed in clinical practice. To evaluate quantitative survey instruments that measure patient attitudes regarding genetic carrier testing prior to conception and pregnancy, databases were searched for studies of such attitudes published from 2003-2013, yielding 344 articles; eight studies with eight instruments met criteria for inclusion. Data abstraction on theoretical framework, subjects, instrument description, scoring, method of measurement, reliability, validity, feasibility, level of evidence, and outcomes was completed. Reliability information was provided in five studies with an internal consistency of Cronbach's α >0.70. Information pertaining to validity was presented in three studies and included construct validity via factor analysis. Despite limited psychometric information, these questionnaires are self-administered and can be briefly completed, making them a feasible method of evaluation.
Zhou, Ronggang; Chan, Alan H. S.
2016-01-01
BACKGROUND: In order to compare existing usability data to ideal goals or to that for other products, usability practitioners have tried to develop a framework for deriving an integrated metric. However, most current usability methods with this aim rely heavily on human judgment about the various attributes of a product, but often fail to take into account the inherent uncertainties in these judgments in the evaluation process. OBJECTIVE: This paper presents a universal method of usability evaluation by combining the analytic hierarchy process (AHP) and the fuzzy evaluation method. By integrating multiple sources of uncertain information during product usability evaluation, the method proposed here aims to derive an index that is structured hierarchically in terms of the three usability components of effectiveness, efficiency, and user satisfaction of a product. METHODS: With consideration of the theoretical basis of fuzzy evaluation, a two-layer comprehensive evaluation index was first constructed. After the membership functions were determined by an expert panel, the evaluation appraisals were computed by using the fuzzy comprehensive evaluation model to characterize fuzzy human judgments. Then with the use of AHP, the weights of usability components were elicited from these experts. RESULTS AND CONCLUSIONS: Compared to traditional usability evaluation methods, the major strength of the fuzzy method is that it captures the fuzziness and uncertainties in human judgments and provides an integrated framework that combines the vague judgments from multiple stages of a product evaluation process. PMID:28035943
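A compact sketch of the two ingredients follows: AHP weighting via the principal eigenvector of a pairwise comparison matrix, and a fuzzy comprehensive evaluation B = W · R. The comparison and membership matrices are illustrative, not the paper's data.

```python
# Sketch: AHP weights combined with fuzzy comprehensive evaluation.
import numpy as np

# Pairwise comparisons of effectiveness, efficiency, satisfaction (Saaty scale).
A = np.array([[1.0, 2.0, 3.0],
              [1/2, 1.0, 2.0],
              [1/3, 1/2, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                     # AHP weight vector

# Fuzzy membership of each component in grades (poor, fair, good), e.g. expert panel.
R = np.array([[0.1, 0.3, 0.6],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
B = w @ R                           # fuzzy comprehensive evaluation
print(w, B, ["poor", "fair", "good"][int(np.argmax(B))])
```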
Jacobs, Robin J; Caballero, Joshua; Ownby, Raymond L; Kane, Michael N
2014-11-30
Low health literacy is associated with poor medication adherence in persons with human immunodeficiency virus (HIV), which can lead to poor health outcomes. As linguistic minorities, Spanish-dominant Hispanics (SDH) face challenges such as difficulties in obtaining and understanding accurate information about HIV and its treatment. Traditional health education methods (e.g., pamphlets, talking) may not be as effective as delivery through alternative venues. Technology-based health information interventions have the potential for being readily available on desktop computers or over the Internet. The purpose of this research was to adapt a theoretically-based computer application (initially developed for English-speaking HIV-positive persons) that will provide linguistically and culturally appropriate tailored health education to Spanish-dominant Hispanics with HIV (HIV + SDH). A mixed methods approach using quantitative and qualitative interviews with 25 HIV + SDH and 5 key informants, guided by the Information-Motivation-Behavioral (IMB) Skills model, was used to investigate cultural factors influencing medication adherence in HIV + SDH. We used a triangulation approach to identify major themes within cultural contexts relevant to understanding factors related to motivation to adhere to treatment. From this data we adapted an automated computer-based health literacy intervention to be delivered in Spanish. Culture-specific motivational factors for treatment adherence in HIV + SDH persons that emerged from the data were stigma, familismo (family), mood, and social support. Using this data, we developed a culturally and linguistically adapted, tailored intervention that provides information about HIV infection, treatment, and medication-related problem-solving skills (proven effective in English-speaking populations) that can be delivered using touch-screen computers, tablets, and smartphones, to be tested in a future study. Using a theoretically-grounded Internet-based eHealth education intervention that builds on knowledge and also targets core cultural determinants of adherence may prove a highly effective approach to improve health literacy and medication decision-making in this group.
Structure and Randomness of Continuous-Time, Discrete-Event Processes
NASA Astrophysics Data System (ADS)
Marzen, Sarah E.; Crutchfield, James P.
2017-10-01
Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and applying new information-theoretic methods to stochastic processes.
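For intuition, the discrete-time analogues of both quantities are simple to compute for a unifilar machine: the entropy rate is the state-averaged transition entropy and the statistical complexity is the entropy of the stationary state distribution. The sketch below uses a toy three-state transition matrix; the paper's continuous-time, semi-Markov setting requires the new machinery it introduces.

```python
# Sketch: entropy rate and statistical complexity of a toy unifilar machine.
import numpy as np

T = np.array([[0.5, 0.5, 0.0],     # state-to-state transition probabilities
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])

evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi = pi / pi.sum()                                  # stationary distribution

h = -sum(pi[i] * T[i, j] * np.log2(T[i, j])
         for i in range(3) for j in range(3) if T[i, j] > 0)   # bits per step
C = -np.sum(pi * np.log2(pi))                       # statistical complexity
print(h, C)   # for this machine: h = 1 bit/step, C = log2(3) bits
```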
Development of the Environmental Technical Information System
1975-04-01
The scanned abstract is largely illegible; recoverable fragments: "... official Department of the Army position, unless so designated by other authorized documents ..."; "... regulations that may concern the Army. CHLUS is complete for six states and for areas of federal jurisdiction, and data for another ten states are ..."; contents fragments: Theoretical Analysis; Isolating the Export Industries: Direct Methods; Description of the Models; Economic Impact on Local Businesses.
Phase object imaging inside the airy disc
NASA Astrophysics Data System (ADS)
Tychinsky, Vladimir P.
1991-03-01
The possibility of superresolution imaging of phase objects is theoretically justified. Measurements with the CPM "AIRYSCAN" showed the reality of observing structures when the Airy disc diameter is 0.86 μm. SUMMARY It has been known that the amount of information contained in the image of any object is mostly determined by the number of points measured independently, that is, by the spatial resolution of the system. From the classical theory of optical systems it follows that for incoherent sources the spatial resolution is limited by the aperture, d = 0.61λ/NA (Rayleigh criterion), where λ is the wavelength and NA the numerical aperture. The use of this criterion is equivalent to the statement that any object inside the Airy disc of radius d, that is, the diffraction image of a point, is practically unresolved. However, under coherent illumination the intensity distribution in the image plane depends also upon the phase φ(r) of the wave scattered by the object, and this is the basis of the Zernike method of phase-contrast microscopy, differential interference contrast (DIC) and computer phase microscopy (CPM). In the theoretical foundation of these methods there was no doubt about the correctness of the Rayleigh criterion, since the phase information is derived from the intensity distribution, and as far as we know there were no experiments that disproved this.
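As a worked check of the reconstructed Rayleigh limit: assuming a HeNe wavelength and a 0.9 NA objective (parameters the abstract does not state), the quoted 0.86 μm Airy disc diameter is reproduced.

```latex
% Illustrative parameters only; the abstract gives neither wavelength nor NA.
\[
  d = \frac{0.61\,\lambda}{\mathrm{NA}}
    = \frac{0.61 \times 0.633\ \mu\mathrm{m}}{0.9}
    \approx 0.43\ \mu\mathrm{m},
  \qquad
  2d \approx 0.86\ \mu\mathrm{m}.
\]
```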
García-Cabezas, Miguel Ángel; Barbas, Helen
2018-01-01
Noninvasive imaging and tractography methods have yielded information on broad communication networks but lack resolution to delineate intralaminar cortical and subcortical pathways in humans. An important unanswered question is whether we can use the wealth of precise information on pathways from monkeys to understand connections in humans. We addressed this question within a theoretical framework of systematic cortical variation and used identical high-resolution methods to compare the architecture of cortical gray matter and the white matter beneath, which gives rise to short- and long-distance pathways in humans and rhesus monkeys. We used the prefrontal cortex as a model system because of its key role in attention, emotions, and executive function, which are processes often affected in brain diseases. We found striking parallels and consistent trends in the gray and white matter architecture in humans and monkeys and between the architecture and actual connections mapped with neural tracers in rhesus monkeys and, by extension, in humans. Using the novel architectonic portrait as a base, we found significant changes in pathways between nearby prefrontal and distant areas in autism. Our findings reveal that a theoretical framework allows study of normal neural communication in humans at high resolution and specific disruptions in diverse psychiatric and neurodegenerative diseases. PMID:29401206
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xue, M.H.; Su, M.X.; Dong, L.L.
2010-07-01
Particle size distribution and concentration in particulate two-phase flow are important parameters in a wide variety of industrial areas. For the purpose of online characterization in dense coal-water slurries, ultrasonic methods have many advantages such as avoiding dilution, the capability for being used in real time, and noninvasive testing, while light-based techniques are not capable of providing information because optical methods often require the slurry to be diluted. In this article, the modified Urick equation including temperature modification, which can be used to determine the concentration by means of the measurement of ultrasonic velocity in a coal-water slurry, is evaluated on the basis of theoretical analysis and experimental study. A combination of the coupled-phase model and the Bouguer-Lambert-Beer law is employed in this work, and the attenuation spectrum is measured within the frequency region from 3 to 12 MHz. Particle size distributions of the coal-water slurry at different volume fractions are obtained with the optimum regularization technique. Therefore, the ultrasonic technique presented in this work brings the possibility of using ultrasound for online measurements of dense slurries.
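For orientation, the unmodified Urick mixing rule behind such velocity-based concentration measurements can be sketched as follows; the property values are rough literature numbers for water and coal, and the article's temperature modification is omitted.

```python
# Sketch of the Urick mixing rule: mixture density and compressibility are
# volume-fraction averages, and c = 1/sqrt(rho * kappa).
import numpy as np

def urick_velocity(phi, rho_s=1400.0, c_s=2500.0, rho_f=998.0, c_f=1483.0):
    """Sound speed (m/s) in a slurry with solid volume fraction phi (toy values)."""
    k_s = 1.0 / (rho_s * c_s**2)          # solid compressibility
    k_f = 1.0 / (rho_f * c_f**2)          # fluid compressibility
    rho = phi * rho_s + (1 - phi) * rho_f
    k = phi * k_s + (1 - phi) * k_f
    return 1.0 / np.sqrt(rho * k)

for phi in (0.0, 0.1, 0.3):
    print(phi, urick_velocity(phi))
# Solving this relation for phi gives concentration from a measured velocity.
```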
Number of perceptually distinct surface colors in natural scenes.
Marín-Franch, Iván; Foster, David H
2010-09-30
The ability to perceptually identify distinct surfaces in natural scenes by virtue of their color depends not only on the relative frequency of surface colors but also on the probabilistic nature of observer judgments. Previous methods of estimating the number of discriminable surface colors, whether based on theoretical color gamuts or recorded from real scenes, have taken a deterministic approach. Thus, a three-dimensional representation of the gamut of colors is divided into elementary cells or points which are spaced at one discrimination-threshold unit intervals and which are then counted. In this study, information-theoretic methods were used to take into account both differing surface-color frequencies and observer response uncertainty. Spectral radiances were calculated from 50 hyperspectral images of natural scenes and were represented in a perceptually almost uniform color space. The average number of perceptually distinct surface colors was estimated as 7.3 × 10³, much smaller than that based on counting methods. This number is also much smaller than the number of distinct points in a scene that are, in principle, available for reliable identification under illuminant changes, suggesting that color constancy, or the lack of it, does not generally determine the limit on the use of color for surface identification.
Aoki, Kenichi; Feldman, Marcus W.
2013-01-01
The theoretical literature from 1985 to the present on the evolution of learning strategies in variable environments is reviewed, with the focus on deterministic dynamical models that are amenable to local stability analysis, and on deterministic models yielding evolutionarily stable strategies. Individual learning, unbiased and biased social learning, mixed learning, and learning schedules are considered. A rapidly changing environment or frequent migration in a spatially heterogeneous environment favors individual learning over unbiased social learning. However, results are not so straightforward in the context of learning schedules or when biases in social learning are introduced. The three major methods of modeling temporal environmental change – coevolutionary, two-timescale, and information decay – are compared and shown to sometimes yield contradictory results. The so-called Rogers’ paradox is inherent in the two-timescale method as originally applied to the evolution of pure strategies, but is often eliminated when the other methods are used. Moreover, Rogers’ paradox is not observed for the mixed learning strategies and learning schedules that we review. We believe that further theoretical work is necessary on learning schedules and biased social learning, based on models that are logically consistent and empirically pertinent. PMID:24211681
NASA Astrophysics Data System (ADS)
Ercan, İlke; Suyabatmaz, Enes
2018-06-01
The saturation in the efficiency and performance scaling of conventional electronic technologies brings about the development of novel computational paradigms. Brownian circuits are among the promising alternatives that can exploit fluctuations to increase the efficiency of information processing in nanocomputing. A Brownian cellular automaton, where signals propagate randomly and are driven by local transition rules, can be made computationally universal by embedding arbitrary asynchronous circuits on it. One of the potential realizations of such circuits is via single electron tunneling (SET) devices, since SET technology enables simulation of noise and fluctuations in a fashion similar to Brownian search. In this paper, we perform a physical-information-theoretic analysis of the efficiency limitations of Brownian NAND and half-adder circuits implemented using SET technology. The method we employed here establishes a solid ground that enables studying computational and physical features of this emerging technology on an equal footing, and yields fundamental lower bounds that provide valuable insights into how far its efficiency can be improved in principle. In order to provide a basis for comparison, we also analyze a NAND gate and half-adder circuit implemented in complementary metal oxide semiconductor technology to show how the fundamental bound of the Brownian circuit compares against a conventional paradigm.
A discrete Fourier-encoded, diagonal-free experiment to simplify homonuclear 2D NMR correlations.
Huang, Zebin; Guan, Quanshuai; Chen, Zhong; Frydman, Lucio; Lin, Yulan
2017-07-21
Nuclear magnetic resonance (NMR) spectroscopy has long served as an irreplaceable, versatile tool in physics, chemistry, biology, and materials sciences, owing to its ability to study molecular structure and dynamics in detail. In particular, the connectivity of chemical sites within molecules, and thereby molecular structure, becomes visible by multi-dimensional NMR. Homonuclear correlation experiments are a powerful tool for identifying coupled spins. Generally, diagonal peaks in these correlation spectra display the strongest intensities and do not offer any new information beyond the standard one-dimensional spectrum, whereas weaker, symmetrically placed cross peaks contain most of the coupling information. The cross peaks near the diagonal are often affected by the tails of strong diagonal peaks or even obscured entirely by the diagonal. In this paper, we demonstrate a homonuclear encoding approach based on imparting a discrete phase modulation of the targeted cross peaks and combine it with a site-selective sculpting scheme, capable of simplifying the patterns arising in these 2D correlation spectra. The theoretical principles of the new methods are laid out, and experimental observations are rationalized on the basis of theoretical analyses. The ensuing techniques provide a new way to retrieve 2D coupling information within homonuclear spin systems, with enhanced sensitivity, speed, and clarity.
NASA Astrophysics Data System (ADS)
Oates, T. W. H.; Wormeester, H.; Arwin, H.
2011-12-01
In this article, spectroscopic ellipsometry studies of plasmon resonances at metal-dielectric interfaces of thin films are reviewed. We show how ellipsometry provides valuable non-invasive amplitude and phase information from which one can determine the effective dielectric functions, and how these relate to the material nanostructure and define exactly the plasmonic characteristics of the system. There are three related plasmons that are observable using spectroscopic ellipsometry; volume plasmon resonances, surface plasmon polaritons and particle plasmon resonances. We demonstrate that the established method of exploiting surface plasmon polaritons for chemical and biological sensing may be enhanced using the ellipsometric phase information and provide a comprehensive theoretical basis for the technique. We show how the particle and volume plasmon resonances in the ellipsometric spectra of nanoparticle films are directly related to size, surface coverage and constituent dielectric functions of the nanoparticles. The regularly observed splitting of the particle plasmon resonance is theoretically described using modified effective medium theories within the framework of ellipsometry. We demonstrate the wealth of information available from real-time in situ spectroscopic ellipsometry measurements of metal film deposition, including the evolution of the plasmon resonances and percolation events. Finally, we discuss how generalized and Mueller matrix ellipsometry hold great potential for characterizing plasmonic metamaterials and sub-wavelength hole arrays.
The Sonic Altimeter for Aircraft
NASA Technical Reports Server (NTRS)
Draper, C S
1937-01-01
Discussed here are results already achieved with sonic altimeters in light of the theoretical possibilities of such instruments. From the information gained in this investigation, a procedure is outlined to determine whether or not a further development program is justified by the value of the sonic altimeter as an aircraft instrument. The information available in the literature is reviewed and condensed into a summary of sonic altimeter developments. Various methods of receiving the echo and timing the interval between the signal and the echo are considered. A theoretical discussion is given of sonic altimeter errors due to uncertainties in timing, variations in sound velocity, aircraft speed, location of the sending and receiving units, and inclinations of the flight path with respect to the ground surface. Plots are included which summarize the results in each case. An analysis is given of the effect of an inclined flight path on the frequency of the echo. A brief study of the acoustical phases of the sonic altimeter problem is carried through. The results of this analysis are used to predict approximately the maximum operating altitudes of a reasonably designed sonic altimeter under very good and very bad conditions. A final comparison is made between the estimated and experimental maximum operating altitudes which shows good agreement where quantitative information is available.
Analysis of entropy extraction efficiencies in random number generation systems
NASA Astrophysics Data System (ADS)
Wang, Chao; Wang, Shuang; Chen, Wei; Yin, Zhen-Qiang; Han, Zheng-Fu
2016-05-01
Random numbers (RNs) have applications in many areas: lottery games, gambling, computer simulation, and, most importantly, cryptography [N. Gisin et al., Rev. Mod. Phys. 74 (2002) 145]. In cryptography theory, the theoretical security of the system calls for high quality RNs. Therefore, developing methods for producing unpredictable RNs with adequate speed is an attractive topic. Early on, despite the lack of theoretical support, pseudo RNs generated by algorithmic methods performed well and satisfied reasonable statistical requirements. However, as implemented, those pseudorandom sequences were completely determined by mathematical formulas and initial seeds, which cannot introduce extra entropy or information. In these cases, “random” bits are generated that are not at all random. Physical random number generators (RNGs), which, in contrast to algorithmic methods, are based on unpredictable physical random phenomena, have attracted considerable research interest. However, the way that we extract random bits from those physical entropy sources has a large influence on the efficiency and performance of the system. In this manuscript, we will review and discuss several randomness extraction schemes that are based on radiation or photon arrival times. We analyze the robustness, post-processing requirements and, in particular, the extraction efficiency of those methods to aid in the construction of efficient, compact and robust physical RNG systems.
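As an illustration of the genre of extractor under discussion, a minimal sketch: raw bits come from comparing successive (here simulated) arrival intervals, and the classic von Neumann procedure removes bias. This is a generic textbook scheme, not a restatement of any particular method from the review:

```python
import random

# Raw bits from comparing successive (simulated) photon arrival intervals,
# then von Neumann unbiasing: keep 01 -> 0 and 10 -> 1, discard 00 and 11.
random.seed(1)
intervals = [random.expovariate(1.0) for _ in range(2000)]  # toy arrival gaps
raw = [1 if a < b else 0 for a, b in zip(intervals[::2], intervals[1::2])]
unbiased = [b1 for b1, b2 in zip(raw[::2], raw[1::2]) if b1 != b2]
print(len(unbiased), "bits kept of", len(raw), "raw bits")
```

The trade-off visible even in this toy version is the one the review quantifies: unbiasing discards a large fraction of the raw bits, so extraction efficiency is a central figure of merit.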
An information theoretic approach of designing sparse kernel adaptive filters.
Liu, Weifeng; Park, Il; Principe, José C
2009-12-01
This paper discusses an information theoretic approach of designing sparse kernel adaptive filters. To determine useful data to be learned and remove redundant ones, a subjective information measure called surprise is introduced. Surprise captures the amount of information a datum contains which is transferable to a learning system. Based on this concept, we propose a systematic sparsification scheme, which can drastically reduce the time and space complexity without harming the performance of kernel adaptive filters. Nonlinear regression, short term chaotic time-series prediction, and long term time-series forecasting examples are presented.
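A minimal sketch of how a surprise measure can gate learning, assuming a Gaussian predictive model; the thresholds and the Gaussian form are illustrative stand-ins, not the paper's exact criterion:

```python
import numpy as np

def gaussian_surprise(y, mean, var):
    # Negative log predictive density of datum y under N(mean, var).
    return 0.5 * np.log(2 * np.pi * var) + 0.5 * (y - mean) ** 2 / var

def should_learn(y, mean, var, t_abnormal=20.0, t_redundant=1.0):
    s = gaussian_surprise(y, mean, var)
    if s > t_abnormal:       # implausibly surprising: treat as outlier, reject
        return False
    return s > t_redundant   # informative enough to add to the dictionary

print(should_learn(2.5, mean=0.0, var=1.0))   # surprising -> True (learn)
print(should_learn(0.1, mean=0.0, var=1.0))   # redundant  -> False (skip)
```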
ERIC Educational Resources Information Center
Prado, Javier Calzada; Marzal, Miguel Angel
2013-01-01
Introduction: The role of library and information science professionals as knowledge facilitators is solidly grounded in the profession's theoretical foundations as much as connected with its social relevance. Knowledge science is presented in this paper as a convenient theoretical framework for this mission, and knowledge engagement…
NASA Astrophysics Data System (ADS)
Fukuda, J.; Johnson, K. M.
2009-12-01
Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is no strong a priori information on the geometry of the fault that produced the earthquake, and the data are not always strong enough to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through combination of both analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane. We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress drop.
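A toy sketch of the mixed linear/nonlinear strategy described above: Metropolis steps explore a nonlinear parameter (standing in for fault geometry) while the linear part (standing in for slip) is solved analytically by least squares at each step. The forward model, noise level, and data are placeholders, not the authors' elastic model:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
theta_true = 3.0
G_true = np.column_stack([np.sin(theta_true * t), np.ones_like(t)])
d_obs = G_true @ np.array([1.5, 0.2]) + 0.1 * rng.normal(size=t.size)

def log_post(theta):
    G = np.column_stack([np.sin(theta * t), np.ones_like(t)])
    m, *_ = np.linalg.lstsq(G, d_obs, rcond=None)   # analytic linear solve
    r = d_obs - G @ m
    return -0.5 * np.sum(r**2) / 0.1**2             # Gaussian misfit

theta, lp, samples = 1.0, log_post(1.0), []
for _ in range(5000):
    prop = theta + 0.2 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:        # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)
print(round(float(np.mean(samples[1000:])), 2))      # should settle near 3.0
```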
Siyah Mansoory, Meysam; Oghabian, Mohammad Ali; Jafari, Amir Homayoun; Shahbabaie, Alireza
2017-01-01
Graph theoretical analysis of functional Magnetic Resonance Imaging (fMRI) data has provided new measures of mapping the human brain in vivo. Of all methods for measuring functional connectivity between regions, Linear Correlation (LC) of the activity time series of brain regions is the most ubiquitous linear measure. LC consistently underestimates the strength of dependence required for graph construction and analysis, because only the marginal distributions, not the bivariate ones, are Gaussian. In a number of studies, Mutual Information (MI), a purely nonlinear measure, has been employed as a similarity measure between pairs of time series of brain regions. Owing to the complex fractal organization of the brain, which indicates self-similarity, more information on the brain can be revealed by fMRI Fractal Dimension (FD) analysis. In the present paper, Box-Counting Fractal Dimension (BCFD) is introduced for graph theoretical analysis of fMRI data in 17 methamphetamine drug users and 18 normal controls. BCFD performance was then evaluated against that of the LC and MI methods. Moreover, global topological graph properties of the brain networks, including global efficiency, clustering coefficient, and characteristic path length, were investigated in addicted subjects. Statistical tests (P<0.05) showed that, compared to normal subjects, topological graph properties were significantly disrupted during resting-state fMRI. Based on the results, analyzing topological graph properties (representing the brain networks) with BCFD is more reliable than with LC or MI.
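For concreteness, a minimal box-counting dimension estimator for a one-dimensional signal embedded in the (time, amplitude) plane; the grid scales and the random-walk test signal are assumptions, not the paper's fMRI pipeline:

```python
import numpy as np

def box_counting_dimension(x, scales=(4, 8, 16, 32, 64, 128)):
    t = np.linspace(0.0, 1.0, x.size)
    x = (x - x.min()) / (np.ptp(x) + 1e-12)        # normalize to [0, 1]
    counts = []
    for s in scales:
        # Count occupied boxes on an s-by-s grid over the curve.
        boxes = set(zip((t * s).astype(int), (x * s).astype(int)))
        counts.append(len(boxes))
    # Slope of log(count) versus log(grid resolution) estimates the dimension.
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

walk = np.cumsum(np.random.default_rng(1).normal(size=4096))  # toy signal
print(round(box_counting_dimension(walk), 2))   # ~1.5 for Brownian-like traces
```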
Network selection, Information filtering and Scalable computation
NASA Astrophysics Data System (ADS)
Ye, Changqing
This dissertation explores two application scenarios of the sparsity pursuit method on large scale data sets. The first scenario is classification and regression in analyzing high dimensional structured data, where predictors correspond to nodes of a given directed graph. This arises in, for instance, identification of disease genes for Parkinson's disease from a network of candidate genes. In such a situation, the directed graph describes dependencies among the genes, where the direction of an edge represents a causal effect. Key to high-dimensional structured classification and regression is how to utilize dependencies among predictors as specified by directions of the graph. In this dissertation, we develop a novel method that fully takes into account such dependencies formulated through certain nonlinear constraints. We apply the proposed method to two applications, feature selection in large margin binary classification and in linear regression. We implement the proposed method through difference convex programming for the cost function and constraints. Finally, theoretical and numerical analyses suggest that the proposed method achieves the desired objectives. An application to disease gene identification is presented. The second application scenario is personalized information filtering, which extracts the information specifically relevant to a user, predicting his/her preference over a large number of items based on the opinions of users who think alike or on item content. This problem is cast into the framework of regression and classification, where we introduce novel partial latent models to integrate additional user-specific and content-specific predictors, for higher predictive accuracy. In particular, we factorize a user-over-item preference matrix into a product of two matrices, each representing a user's preference and an item preference by users. Then we propose a likelihood method to seek a sparsest latent factorization, from a class of over-complete factorizations, possibly with a high percentage of missing values. This promotes additional sparsity beyond rank reduction. Computationally, we design methods based on a "decomposition and combination" strategy, to break large-scale optimization into many small subproblems to solve in a recursive and parallel manner. On this basis, we implement the proposed methods through multi-platform shared-memory parallel programming, and through Mahout, a library for scalable machine learning and data mining, for MapReduce computation. For example, our methods are scalable to a dataset consisting of three billion observations on a single machine with sufficient memory, with good timings. Both theoretical and numerical investigations show that the proposed methods exhibit significant improvement in accuracy over state-of-the-art scalable methods.
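A minimal sketch of the latent factorization idea: stochastic-gradient updates of a low-rank user/item factorization with an L2 penalty (the sparsity-promoting, over-complete aspects of the dissertation's method are omitted); all dimensions and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, rank = 30, 40, 5
# Toy sparse observations: (user, item, rating) triples.
obs = [(int(rng.integers(n_users)), int(rng.integers(n_items)), float(rng.normal()))
       for _ in range(300)]

U = 0.1 * rng.normal(size=(n_users, rank))   # user factors
V = 0.1 * rng.normal(size=(n_items, rank))   # item factors
lr, lam = 0.05, 0.1                          # step size and L2 penalty
for epoch in range(20):                      # plain SGD over observed entries
    for u, i, r in obs:
        e = r - U[u] @ V[i]
        U[u] += lr * (e * V[i] - lam * U[u])
        V[i] += lr * (e * U[u] - lam * V[i])
print(round(float(np.mean([(r - U[u] @ V[i]) ** 2 for u, i, r in obs])), 3))
```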
Varn, D P; Crutchfield, J P
2016-03-13
Erwin Schrödinger famously and presciently ascribed the vehicle transmitting the hereditary information underlying life to an 'aperiodic crystal'. We compare and contrast this, only later discovered to be stored in the linear biomolecule DNA, with the information-bearing, layered quasi-one-dimensional materials investigated by the emerging field of chaotic crystallography. Despite differences in functionality, the same information measures capture structure and novelty in both, suggesting an intimate coherence between the information character of biotic and abiotic matter-a broadly applicable physics of information. We review layered solids and consider three examples of how information- and computation-theoretic techniques are being applied to understand their structure. In particular, (i) we review recent efforts to apply new kinds of information measures to quantify disordered crystals; (ii) we discuss the structure of ice I in information-theoretic terms; and (iii) we recount recent investigations into the structure of tris(bicyclo[2.1.1]hexeno)benzene, showing how an information-theoretic analysis yields additional insight into its structure. We then illustrate a new Second Law of Thermodynamics that describes information processing in active low-dimensional materials, reviewing Maxwell's Demon and a new class of molecular devices that act as information catalysts. Lastly, we conclude by speculating on how these ideas from informational materials science may impact biology. © 2016 The Author(s).
Gates, Bob; Statham, Mark
2013-10-01
In England, the numbers of learning disability nurses are declining; a need for urgent attention to workforce planning issues has been advocated. This paper considers views of lecturers, students and potential students as legitimate stakeholders for future education commissioning for this field of nursing. This project aimed to undertake a strategic review of learning disability nursing educational commissioning, to provide an 'evidence based' evaluation to inform future strategic commissioning of learning disability nursing for one Health Authority, UK. The project adopted a structured multiple methods approach to generate evidence from a number of data sources, this paper reports on the findings from one method [focus groups] used for two groups of stakeholders. Informants comprised 10 learning disability nursing students studying at a Higher Education Institution, 25 health and social care students studying at a Further Education College, and 6 academic staff from 5 universities; all informants were from the south of England. The method reported on in this paper is focus group methodology. Once completed, transcripts made were read in full, and subjected to content analysis. The process of content analysis led to the development of 11 theoretical categories that describe the multiplicity of views of informants, as to issues of importance for this element of the health workforce. The paper concludes by identifying key messages from these informants. It is suggested that both method and findings have national and international resonance, as stakeholder engagement is a universal issue in health care education commissioning. Copyright © 2013 Elsevier Ltd. All rights reserved.
Schalk, Stefan G; Demi, Libertario; Bouhouch, Nabil; Kuenen, Maarten P J; Postema, Arnoud W; de la Rosette, Jean J M C H; Wijkstra, Hessel; Tjalkens, Tjalling J; Mischi, Massimo
2017-03-01
The role of angiogenesis in cancer growth has stimulated research aimed at noninvasive cancer detection by blood perfusion imaging. Recently, contrast ultrasound dispersion imaging was proposed as an alternative method for angiogenesis imaging. After the intravenous injection of an ultrasound-contrast-agent bolus, dispersion can be indirectly estimated from the local similarity between neighboring time-intensity curves (TICs) measured by ultrasound imaging. Up until now, only linear similarity measures have been investigated. Motivated by the promising results of this approach in prostate cancer (PCa), we developed a novel dispersion estimation method based on mutual information, thus including nonlinear similarity, to further improve its ability to localize PCa. First, a simulation study was performed to establish the theoretical link between dispersion and mutual information. Next, the method's ability to localize PCa was validated in vivo in 23 patients (58 datasets) referred for radical prostatectomy by comparison with histology. A monotonic relationship between dispersion and mutual information was demonstrated. The in vivo study resulted in a receiver operating characteristic (ROC) curve area equal to 0.77, which was superior (p = 0.21-0.24) to that obtained by linear similarity measures (0.74-0.75) and (p < 0.05) to that by conventional perfusion parameters (≤0.70). Mutual information between neighboring time-intensity curves can be used to indirectly estimate contrast dispersion and can lead to more accurate PCa localization. An improved PCa localization method can possibly lead to better grading and staging of tumors, and support focal-treatment guidance. Moreover, future employment of the method in other types of angiogenic cancer can be considered.
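A minimal sketch of estimating mutual information between two neighboring time-intensity curves with a joint histogram; the bin count and the synthetic correlated curves are assumptions:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    # Plug-in MI from a 2-D histogram: sum p(x,y) log[p(x,y) / (p(x) p(y))].
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

rng = np.random.default_rng(0)
a = rng.normal(size=500)
b = a + 0.5 * rng.normal(size=500)   # correlated "neighboring" curve
print(round(mutual_information(a, b), 3))
```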
Visibility Graph Based Time Series Analysis
Stephen, Mutua; Gu, Changgui; Yang, Huijie
2015-01-01
Network based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both its microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states and the successively occurring states are linked. This procedure converts a time series to a temporal network and at the same time a network of networks. Findings from empirical records for stock markets in USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of network of networks. PMID:26571115
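The construction at the heart of the method, as a minimal O(n²) reference sketch of the natural visibility graph (real implementations use faster algorithms):

```python
# Natural visibility graph: points (i, x_i) and (j, x_j) are linked if every
# intermediate sample lies strictly below the straight line connecting them.
def visibility_edges(x):
    n, edges = len(x), []
    for i in range(n - 1):
        for j in range(i + 1, n):
            if all(x[k] < x[i] + (x[j] - x[i]) * (k - i) / (j - i)
                   for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

series = [3.0, 1.0, 2.5, 0.5, 4.0]
print(visibility_edges(series))   # e.g. (0, 3) is blocked by the peak at index 2
```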
Light Microscopy at Maximal Precision
NASA Astrophysics Data System (ADS)
Bierbaum, Matthew; Leahy, Brian D.; Alemi, Alexander A.; Cohen, Itai; Sethna, James P.
2017-10-01
Microscopy is the workhorse of the physical and life sciences, producing crisp images of everything from atoms to cells well beyond the capabilities of the human eye. However, the analysis of these images is frequently little more accurate than manual marking. Here, we revolutionize the analysis of microscopy images, extracting all the useful information theoretically contained in a complex microscope image. Using a generic, methodological approach, we extract the information by fitting experimental images with a detailed optical model of the microscope, a method we call parameter extraction from reconstructing images (PERI). As a proof of principle, we demonstrate this approach with a confocal image of colloidal spheres, improving measurements of particle positions and radii by 10-100 times over current methods and attaining the maximum possible accuracy. With this unprecedented accuracy, we measure nanometer-scale colloidal interactions in dense suspensions solely with light microscopy, a previously impossible feat. Our approach is generic and applicable to imaging methods from brightfield to electron microscopy, where we expect accuracies of 1 nm and 0.1 pm, respectively.
Method of passive ranging from infrared image sequence based on equivalent area
NASA Astrophysics Data System (ADS)
Yang, Weiping; Shen, Zhenkang
2007-11-01
Information about the range between missile and target is important not only to the missile control component but also to automatic target recognition, so the technique of passive ranging from infrared images has important theoretical and practical significance. Here we tried to obtain the range between a guided missile and its target in order to help identify targets or evade a hit. Estimating the distance between missile and target is currently an active and difficult research topic. As is well known, an infrared imaging detector cannot measure range, which restricts the functions of a guidance information processing system based on infrared images. To break through this technical obstacle, we investigated the principles of infrared imaging and, after analysing the imaging geometry between the guided missile and the target, proposed a method of passive ranging based on equivalent area, providing mathematical analytic formulas. Validation experiments demonstrate that the presented method performs well; the relative error can be as low as 10% in some circumstances.
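Under a simple pinhole-camera reading of the equivalent-area idea, range follows from a known target cross-section S and its measured image area s as R = f·sqrt(S/s). A minimal sketch with made-up values (this geometric simplification is an assumption for illustration, not the authors' exact formulas):

```python
import math

# Image area scales as s = S * (f / R)^2, so R = f * sqrt(S / s).
def range_from_area(f_m, S_m2, s_m2):
    return f_m * math.sqrt(S_m2 / s_m2)

# Assumed: 0.1 m focal length, 4 m^2 target, 1e-8 m^2 image footprint.
print(round(range_from_area(f_m=0.1, S_m2=4.0, s_m2=1e-8), 1))  # -> 2000.0 m
```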
Python for Information Theoretic Analysis of Neural Data
Ince, Robin A. A.; Petersen, Rasmus S.; Swan, Daniel C.; Panzeri, Stefano
2008-01-01
Information theory, the mathematical theory of communication in the presence of noise, is playing an increasingly important role in modern quantitative neuroscience. It makes it possible to treat neural systems as stochastic communication channels and gain valuable, quantitative insights into their sensory coding function. These techniques provide results on how neurons encode stimuli in a way which is independent of any specific assumptions on which part of the neuronal response is signal and which is noise, and they can be usefully applied even to highly non-linear systems where traditional techniques fail. In this article, we describe our work and experiences using Python for information theoretic analysis. We outline some of the algorithmic, statistical and numerical challenges in the computation of information theoretic quantities from neural data. In particular, we consider the problems arising from limited sampling bias and from calculation of maximum entropy distributions in the presence of constraints representing the effects of different orders of interaction in the system. We explain how and why using Python has allowed us to significantly improve the speed and domain of applicability of the information theoretic algorithms, allowing analysis of data sets characterized by larger numbers of variables. We also discuss how our use of Python is facilitating integration with collaborative databases and centralised computational resources. PMID:19242557
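As a small example of the limited-sampling issue mentioned above, a plug-in entropy estimate with the first-order Miller-Madow bias correction on toy spike counts; this is a generic textbook correction, not the toolbox's own estimator:

```python
import numpy as np

def entropy_mm(samples):
    # Plug-in entropy of a discrete distribution, in bits.
    counts = np.bincount(samples)
    p = counts[counts > 0] / counts.sum()
    h_plugin = -np.sum(p * np.log2(p))
    # Miller-Madow: add (K - 1) / (2 N ln 2) for K occupied bins, N samples.
    return h_plugin + (p.size - 1) / (2 * samples.size * np.log(2))

rng = np.random.default_rng(0)
spikes = rng.poisson(2.0, size=200)   # toy spike counts
print(round(float(entropy_mm(spikes)), 3))
```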
Hershberger, Patricia E; Finnegan, Lorna; Altfeld, Susan; Lake, Sara; Hirshfeld-Cytron, Jennifer
2013-01-01
Young women with cancer now face the complex decision about whether to undergo fertility preservation. Yet little is known about how these women process information involved in making this decision. The purpose of this article is to expand theoretical understanding of the decision-making process by examining aspects of information processing among young women diagnosed with cancer. Using a grounded theory approach, 27 women with cancer participated in individual, semistructured interviews. Data were coded and analyzed using constant-comparison techniques that were guided by 5 dimensions within the Contemplate phase of the decision-making process framework. In the first dimension, young women acquired information primarily from clinicians and Internet sources. Experiential information, often obtained from peers, occurred in the second dimension. Preferences and values were constructed in the third dimension as women acquired factual, moral, and ethical information. Women desired tailored, personalized information that was specific to their situation in the fourth dimension; however, women struggled with communicating these needs to clinicians. In the fifth dimension, women offered detailed descriptions of clinician behaviors that enhance or impede decisional debriefing. Better understanding of theoretical underpinnings surrounding women's information processes can facilitate decision support and improve clinical care.
Role of information theoretic uncertainty relations in quantum theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jizba, Petr, E-mail: p.jizba@fjfi.cvut.cz; ITP, Freie Universität Berlin, Arnimallee 14, D-14195 Berlin; Dunningham, Jacob A., E-mail: J.Dunningham@sussex.ac.uk
2015-04-15
Uncertainty relations based on information theory for both discrete and continuous distribution functions are briefly reviewed. We extend these results to account for (differential) Rényi entropy and its related entropy power. This allows us to find a new class of information-theoretic uncertainty relations (ITURs). The potency of such uncertainty relations in quantum mechanics is illustrated with a simple two-energy-level model where they outperform both the usual Robertson–Schrödinger uncertainty relation and Shannon entropy based uncertainty relation. In the continuous case the ensuing entropy power uncertainty relations are discussed in the context of heavy tailed wave functions and Schrödinger cat states. Again, improvement over both the Robertson–Schrödinger uncertainty principle and Shannon ITUR is demonstrated in these cases. Further salient issues such as the proof of a generalized entropy power inequality and a geometric picture of information-theoretic uncertainty relations are also discussed.
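For reference, the textbook objects involved can be written compactly; this restates standard definitions, not the paper's new relations. The Rényi entropy of a distribution P = (p_1, ..., p_n), and the Maassen-Uffink-type relation it generalizes for observables whose eigenbases have maximal overlap c:

```latex
H_{\alpha}(P) = \frac{1}{1-\alpha}\,\ln\sum_{k} p_k^{\alpha} \quad (\alpha > 0,\ \alpha \neq 1),
\qquad
H_{\alpha}(P) + H_{\beta}(Q) \;\ge\; -2\ln c \quad \left(\tfrac{1}{\alpha}+\tfrac{1}{\beta}=2\right),
```

with the Shannon case recovered in the limit α, β → 1.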
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rey, Michael; Tyuterev, Vladimir G.; Nikitin, Andrei V., E-mail: michael.rey@univ-reims.fr
Modeling atmospheres of hot exoplanets and brown dwarfs requires high-T databases that include methane as the major hydrocarbon. We report a complete theoretical line list of ¹²CH₄ in the infrared range 0–13,400 cm⁻¹ up to T_max = 3000 K computed via a full quantum-mechanical method from ab initio potential energy and dipole moment surfaces. Over 150 billion transitions were generated with the lower rovibrational energy cutoff 33,000 cm⁻¹ and intensity cutoff down to 10⁻³³ cm/molecule to ensure convergent opacity predictions. Empirical corrections for 3.7 million of the strongest transitions permitted line position accuracies of 0.001–0.01 cm⁻¹. Full data are partitioned into two sets. "Light lists" contain strong and medium transitions necessary for an accurate description of sharp features in absorption/emission spectra. For a fast and efficient modeling of quasi-continuum cross sections, billions of tiny lines are compressed in "super-line" libraries according to Rey et al. These combined data will be freely accessible via the TheoReTS information system (http://theorets.univ-reims.fr, http://theorets.tsu.ru), which provides a user-friendly interface for simulations of absorption coefficients, cross-sectional transmittance, and radiance. Comparisons with cold, room, and high-T experimental data show that the data reported here represent the first global theoretical methane lists suitable for high-resolution astrophysical applications.
A Theoretical Sketch of Medical Professionalism as a Normative Complex
ERIC Educational Resources Information Center
Holtman, Matthew C.
2008-01-01
Validity arguments for assessment tools intended to measure medical professionalism suffer for lack of a clear theoretical statement of what professionalism is and how it should behave. Drawing on several decades of field research addressing deviance and informal social control among physicians, a theoretical sketch of professionalism is presented…
Fingelkurts, Andrew A; Fingelkurts, Alexander A; Neves, Carlos F H
2012-01-05
Instead of using low-level neurophysiology mimicking and exploratory programming methods commonly used in the machine consciousness field, the hierarchical operational architectonics (OA) framework of brain and mind functioning proposes an alternative conceptual-theoretical framework as a new direction in the area of model-driven machine (robot) consciousness engineering. The unified brain-mind theoretical OA model explicitly captures (though in an informal way) the basic essence of brain functional architecture, which indeed constitutes a theory of consciousness. The OA describes the neurophysiological basis of the phenomenal level of brain organization. In this context the problem of producing man-made "machine" consciousness and "artificial" thought is a matter of duplicating all levels of the operational architectonics hierarchy (with its inherent rules and mechanisms) found in the brain electromagnetic field. We hope that the conceptual-theoretical framework described in this paper will stimulate the interest of mathematicians and/or computer scientists to abstract and formalize principles of hierarchy of brain operations which are the building blocks for phenomenal consciousness and thought. Copyright © 2010 Elsevier B.V. All rights reserved.
Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2005-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, the incorporation of inequality constraints poses some risk to the estimation accuracy as the Kalman filter is theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when the confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, then we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
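One plausible shape for such a confidence-weighted blend, as a minimal sketch: the weight is driven by how closely the normalized innovation matches its theoretical value of one, and the constraint is enforced by simple clipping. Both choices are assumptions for illustration, not the paper's tuning algorithm:

```python
import numpy as np

def blend_constrained(x_unc, residual, S, floor=0.0):
    # Normalized innovation squared; approximately 1 when residuals agree
    # with their theoretical covariance S, i.e. when the filter is healthy.
    nis = float(residual @ np.linalg.solve(S, residual)) / residual.size
    w = 1.0 / (1.0 + abs(nis - 1.0))   # w -> 1 when residuals look nominal
    x_con = np.maximum(x_unc, floor)   # crude projection onto x >= floor
    return w * x_unc + (1.0 - w) * x_con

x_unc = np.array([-0.2, 1.3])          # unconstrained estimate (violates x >= 0)
print(blend_constrained(x_unc, residual=np.array([0.5, 0.8]), S=np.eye(2)))
```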
A global optimization perspective on molecular clusters.
Marques, J M C; Pereira, F B; Llanio-Trujillo, J L; Abreu, P E; Albertí, M; Aguilar, A; Pirani, F; Bartolomei, M
2017-04-28
Although there is a long history behind the idea of chemical structure, this is a key concept that continues to challenge chemists. Chemical structure is fundamental to understanding most of the properties of matter and its knowledge for complex systems requires the use of state-of-the-art techniques, either experimental or theoretical. From the theoretical viewpoint, one needs to establish the interaction potential among the atoms or molecules of the system, which contains all the information regarding the energy landscape, and employ optimization algorithms to discover the relevant stationary points. In particular, global optimization methods are of major importance to search for the low-energy structures of molecular aggregates. We review the application of global optimization techniques to several molecular clusters; some new results are also reported. Emphasis is given to evolutionary algorithms and their application in the study of the microsolvation of alkali-metal and Ca2+ ions with various types of solvents. This article is part of the themed issue 'Theoretical and computational studies of non-equilibrium and non-statistical dynamics in the gas phase, in the condensed phase and at interfaces'. © 2017 The Author(s).
Lung Cancer Screening Participation: Developing a Conceptual Model to Guide Research
Carter-Harris, Lisa; Davis, Lorie L.; Rawl, Susan M.
2017-01-01
Purpose To describe the development of a conceptual model to guide research focused on lung cancer screening participation from the perspective of the individual in the decision-making process. Methods Based on a comprehensive review of empirical and theoretical literature, a conceptual model was developed linking key psychological variables (stigma, medical mistrust, fatalism, worry, and fear) to the health belief model and precaution adoption process model. Results Proposed model concepts have been examined in prior research of either lung or other cancer screening behavior. To date, a few studies have explored a limited number of variables that influence screening behavior in lung cancer specifically. Therefore, relationships among concepts in the model have been proposed and future research directions presented. Conclusion This proposed model is an initial step to support theoretically based research. As lung cancer screening becomes more widely implemented, it is critical to theoretically guide research to understand variables that may be associated with lung cancer screening participation. Findings from future research guided by the proposed conceptual model can be used to refine the model and inform tailored intervention development. PMID:28304262
Lundh, Lena; Hylander, Ingrid; Törnkvist, Lena
2012-09-01
To investigate why some patients with chronic obstructive pulmonary disease (COPD) have difficulty quitting smoking and to develop a theoretical model that describes their perspectives on these difficulties. Grounded theory method was used from the selection of participants to the analyses of semi-structured interviews with 14 patients with COPD. Four additional interviews were conducted to ensure relevance. The analysis resulted in a theoretical model that illustrates the process of 'Patients with COPD trying to quit smoking'. The model illuminates factors related to the decision to try to quit smoking, including pressure-filled mental states and constructive or destructive pressure-relief strategies. The constructive strategies lead either to success in quitting or to continuing to try to quit. The destructive strategies can lead to losing hope and becoming resigned to continuing to smoke. The theoretical model 'Patients trying to quit smoking' contributes to a better understanding of the pressure-filled mental states and destructive strategies experienced by some patients with COPD in the process of trying to quit. This better understanding can help nurses individualise counselling. Moreover, patients' own awareness of these states and strategies may facilitate their efforts to quit. The information in the model can also be used as a supplement to methods such as motivational interviewing (MI). © 2011 The Authors. Scandinavian Journal of Caring Sciences © 2011 Nordic College of Caring Science.
Thomson, Oliver P; Petty, Nicola J; Moore, Ann P
2014-02-01
How practitioners conceive clinical practice influences many aspects of their clinical work including how they view knowledge, clinical decision-making, and their actions. Osteopaths have relied upon the philosophical and theoretical foundations upon which the profession was built to guide clinical practice. However, it is currently unknown how osteopaths conceive clinical practice, and how these conceptions develop and influence their clinical work. This paper reports the conceptions of practice of experienced osteopaths in the UK. A constructivist grounded theory approach was taken in this study. The constant comparative method of analysis was used to code and analyse data. Purposive sampling was employed to initially select participants. Subsequent theoretical sampling, informed by data analysis, allowed specific participants to be sampled. Data collection methods involved semi-structured interviews and non-participant observation of practitioners during a patient appointment, which was video-recorded and followed by a video-prompted reflective interview. Participants' conception of practice lay on a continuum, from technical rationality to professional artistry and the development of which was influenced by their educational experience, view of health and disease, epistemology of practice knowledge, theory-practice relationship and their perceived therapeutic role. The findings from this study provide the first theoretical insight of osteopaths' conceptions of clinical practice and the factors which influence such conceptions. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Guennoun, L.; Zaydoun, S.; El jastimi, J.; Marakchi, K.; Komiha, N.; Kabbaj, O. K.; El Hajji, A.; Guédira, F.
2012-11-01
The purpose of this manuscript is to discuss our investigations of diprotonated guanazolium chloride using vibrational spectroscopy and quantum chemical methods. The solid phase FT-IR and FT-Raman spectra were recorded in the regions 4000-400 cm-1 and 3600-50 cm-1 respectively, and the band assignments were supported by deuteration effects. Different sites of diprotonation have been theoretically examined at the B3LYP/6-31G∗ level. The results of energy calculations show that the diprotonation process occurs with the two pyridine-like nitrogen N2 and N4 of the triazole ring. The molecular structure, harmonic vibrational wave numbers, infrared intensities and Raman activities were calculated for this form by DFT/B3LYP methods, using a 6-31G∗ basis set. Both the optimized geometries and the theoretical and experimental spectra for diprotonated guanazolium under a stable form are compared with theoretical and experimental data of the neutral molecule reported in our previous work. This comparison reveals that the diprotonation occurs on the triazolic nucleus, and provide information about the hydrogen bonding in the crystal. The scaled vibrational wave number values of the diprotonated form are in close agreement with the experimental data. The normal vibrations were characterized in terms of potential energy distribution (PED) using the VEDA 4 program.
Observable measure of quantum coherence in finite dimensional systems.
Girolami, Davide
2014-10-24
Quantum coherence is the key resource for quantum technology, with applications in quantum optics, information processing, metrology, and cryptography. Yet, there is no universally efficient method for quantifying coherence either in theoretical or in experimental practice. I introduce a framework for measuring quantum coherence in finite dimensional systems. I define a theoretical measure which satisfies the reliability criteria established in the context of quantum resource theories. Then, I present an experimental scheme implementable with current technology which evaluates the quantum coherence of an unknown state of a d-dimensional system by performing two programmable measurements on an ancillary qubit, in place of the O(d²) direct measurements required by full state reconstruction. The result yields a benchmark for monitoring quantum effects in complex systems, e.g., certifying nonclassicality in quantum protocols and probing the quantum behavior of biological complexes.
Fast angular synchronization for phase retrieval via incomplete information
NASA Astrophysics Data System (ADS)
Viswanathan, Aditya; Iwen, Mark
2015-08-01
We consider the problem of recovering the phase of an unknown vector, x ∈ ℂ^d, given (normalized) phase difference measurements of the form x_j x_k^*/|x_j x_k^*|, j,k ∈ {1,...,d}, and where x_j^* denotes the complex conjugate of x_j. This problem is sometimes referred to as the angular synchronization problem. This paper analyzes a linear-time-in-d eigenvector-based angular synchronization algorithm and studies its theoretical and numerical performance when applied to a particular class of highly incomplete and possibly noisy phase difference measurements. Theoretical results are provided for perfect (noiseless) measurements, while numerical simulations demonstrate the robustness of the method to measurement noise. Finally, we show that this angular synchronization problem and the specific form of incomplete phase difference measurements considered arise in the phase retrieval problem - where we recover an unknown complex vector from phaseless (or magnitude) measurements.
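A dense, noiseless toy version of the eigenvector approach (the paper's setting is sparse and possibly noisy; the dimension and phases below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
x = np.exp(1j * rng.uniform(0, 2 * np.pi, d))   # unknown unit-modulus phases
M = np.outer(x, x.conj())                        # M_jk = x_j * conj(x_k)
M /= np.abs(M)                                   # keep only phase differences
vals, vecs = np.linalg.eigh(M)                   # Hermitian eigendecomposition
est = vecs[:, -1]                                # leading eigenvector
est /= np.abs(est)                               # project entries to unit modulus
# Phases are recovered up to a global rotation: align on entry 0 and compare.
print(np.allclose(est * (x[0] / est[0]), x))     # -> True
```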
Variable screening via quantile partial correlation
Ma, Shujie; Tsai, Chih-Ling
2016-01-01
In quantile linear regression with ultra-high dimensional data, we propose an algorithm for screening all candidate variables and subsequently selecting relevant predictors. Specifically, we first employ quantile partial correlation for screening, and then we apply the extended Bayesian information criterion (EBIC) for best subset selection. Our proposed method can successfully select predictors when the variables are highly correlated, and it can also identify variables that make a contribution to the conditional quantiles but are marginally uncorrelated or weakly correlated with the response. Theoretical results show that the proposed algorithm can yield the sure screening set. By controlling the false selection rate, model selection consistency can be achieved theoretically. In practice, we propose using EBIC for best subset selection so that the resulting model is screening consistent. Simulation studies demonstrate that the proposed algorithm performs well, and an empirical example is presented. PMID:28943683
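A minimal sketch of the screening stage using the (marginal) quantile correlation; the partial adjustment and the EBIC subset-selection stage are omitted, and tau and the toy data are assumptions:

```python
import numpy as np

def qcor(y, x, tau=0.5):
    # Quantile correlation: covariance of the quantile score with x, rescaled.
    psi = (y > np.quantile(y, tau)).astype(float) - (1.0 - tau)
    return np.mean(psi * (x - x.mean())) / np.sqrt((tau - tau**2) * np.var(x))

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 3] + rng.normal(size=n)      # only column 3 is informative
scores = [abs(qcor(y, X[:, j])) for j in range(p)]
print(int(np.argmax(scores)))               # expected: 3
```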
Secure multiparty computation of a comparison problem.
Liu, Xin; Li, Shundong; Liu, Jian; Chen, Xiubo; Xu, Gang
2016-01-01
Private comparison is fundamental to secure multiparty computation. In this study, we propose novel protocols to privately determine [Formula: see text], or [Formula: see text] in one execution. First, a 0-1-vector encoding method is introduced to encode a number into a vector, and the Goldwasser-Micali encryption scheme is used to compare integers privately. Then, we propose a protocol that uses a geometric method to compare rational numbers privately, and the protocol is information-theoretically secure. Using the simulation paradigm, we prove the privacy-preserving property of our protocols in the semi-honest model. The complexity analysis shows that our protocols are more efficient than previous solutions.
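The 0-1-vector encoding step can be sketched on its own, with the cryptographic layer (Goldwasser-Micali encryption of each entry) omitted; the range bound U is an assumed public parameter:

```python
def encode_le(a, U):
    # Positions 0..a hold 1: "all values <= a".
    return [1 if i <= a else 0 for i in range(U)]

def encode_point(b, U):
    # A single 1 at position b.
    return [1 if i == b else 0 for i in range(U)]

# b <= a exactly when the point vector hits a 1 in the <=-vector; in the real
# protocol this test runs on encrypted entries, hiding a and b from each party.
a, b, U = 5, 3, 10
hit = sum(p * q for p, q in zip(encode_le(a, U), encode_point(b, U)))
print("b <= a" if hit else "b > a")
```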
Electron Capture in Slow Collisions of Si4+ With Atomic Hydrogen
NASA Astrophysics Data System (ADS)
Joseph, D. C.; Gu, J. P.; Saha, B. C.
2009-10-01
In recent years the charge transfer involving Si4+ and H at low energies has drawn considerable attention both theoretically and experimentally due to its importance not only in astronomical environments but also in modern semiconductor industries. Accurate information regarding its molecular structures and interactions is essential to understand the low energy collision dynamics. Ab initio calculations are performed using the multireference single- and double-excitation configuration-interaction (MRD-CI) method to evaluate potential energies. State selective cross sections are calculated using fully quantum and semi-classical molecular-orbital close coupling (MOCC) methods in the adiabatic representation. Detailed results will be presented at the conference.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Hongwei; High Magnetic Field Laboratory, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031; Kong Xi
The method of quantum annealing (QA) is a promising way for solving many optimization problems in both classical and quantum information theory. The main advantage of this approach, compared with the gate model, is the robustness of the operations against errors originating from both external controls and the environment. In this work, we succeed in demonstrating experimentally an application of the method of QA to a simplified version of the traveling salesman problem by simulating the corresponding Schrödinger evolution with an NMR quantum simulator. The experimental results unambiguously yielded the optimal traveling route, in good agreement with the theoretical prediction.
Promoting mental wellbeing: developing a theoretically and empirically sound complex intervention.
Millar, S L; Donnelly, M
2014-06-01
This paper describes the development of a complex intervention to promote mental wellbeing using the revised framework for developing and evaluating complex interventions produced by the UK Medical Research Council (UKMRC). Application of the first two phases of the framework is described: development, and feasibility and piloting. The theoretical case and evidence base were examined analytically to explicate the theoretical and empirical foundations of the intervention. These findings informed the design of a 12-week mental wellbeing promotion programme providing early intervention for people showing signs of mental health difficulties. The programme is based on the theoretical constructs of self-efficacy, self-esteem, purpose in life, resilience and social support and comprises 10 steps. A mixed methods approach was used to conduct a feasibility study with community and voluntary sector service users and in primary care. A significant increase in mental wellbeing was observed following participation in the intervention. Qualitative data corroborated this finding and suggested that the intervention was feasible to deliver and acceptable to participants, facilitators and health professionals. The revised UKMRC framework can be successfully applied to the development of public health interventions.
Encryption method based on pseudo random spatial light modulation for single-fibre data transmission
NASA Astrophysics Data System (ADS)
Kowalski, Marcin; Zyczkowski, Marek
2017-11-01
Optical cryptosystems can provide encryption and sometimes compression simultaneously. They are increasingly attractive for securing information, especially for image encryption. Our studies have shown that optical cryptosystems can be used to encrypt optical data transmission. We propose and study a new method for securing fibre data communication. The paper presents a method for optical encryption of data transmitted with a single optical fibre. The encryption process relies on pseudo-random spatial light modulation, a combination of two encryption keys and the Compressed Sensing framework. A linear combination of light pulses with pseudo-random patterns provides the required encryption performance. We propose an architecture to transmit the encrypted data through the optical fibre. The paper describes the method and presents the theoretical analysis, the design of a physical model and experimental results.
Liaw, Siaw-Teng; Pearce, Christopher; Liyanage, Harshana; Liaw, Gladys S S; de Lusignan, Simon
2014-01-01
Increasing investment in eHealth aims to improve cost effectiveness and safety of care. Data extraction and aggregation can create new data products to improve professional practice and provide feedback to improve the quality of source data. A previous systematic review concluded that locally relevant clinical indicators and use of clinical record systems could support clinical governance. We aimed to extend and update the review with a theoretical framework. We searched PubMed, Medline, Web of Science, ABI Inform (Proquest) and Business Source Premier (EBSCO) using the terms curation, information ecosystem, data quality management (DQM), data governance, information governance (IG) and data stewardship. We focused on and analysed the scope of DQM and IG processes, theoretical frameworks, and determinants of the processing, quality assurance, presentation and sharing of data across the enterprise. There are good theoretical reasons for integrated governance, but there is variable alignment of DQM, IG and health system objectives across the health enterprise. Ethical constraints exist that require health information ecosystems to process data in ways that are aligned with improving health and system efficiency and ensuring patient safety. Despite an increasingly 'big-data' environment, DQM and IG in health services are still fragmented across the data production cycle. We extend current work on DQM and IG with a theoretical framework for integrated IG across the data cycle. The dimensions of this theory-based framework would require testing with qualitative and quantitative studies to examine the applicability and utility, along with an evaluation of its impact on data quality across the health enterprise.
Davis, Thomas D
2017-01-01
Practice evaluation strategies range in style from the formal-analytic tools of single-subject designs, rapid assessment instruments, algorithmic steps in evidence-informed practice, and computer software applications, to the informal-interactive tools of clinical supervision, consultation with colleagues, use of client feedback, and clinical experience. The purpose of this article is to provide practice researchers in social work with an evidence-informed theory that is capable of explaining both how and why social workers use practice evaluation strategies to self-monitor the effectiveness of their interventions in terms of client change. The author delineates the theoretical contours and consequences of what is called dual-process theory. Drawing on evidence-informed advances in the cognitive and social neurosciences, the author identifies among everyday social workers a theoretically stable, informal-interactive tool preference that is a cognitively necessary, sufficient, and stand-alone preference that requires neither the supplementation nor balance of formal-analytic tools. The author's delineation of dual-process theory represents a theoretical contribution in the century-old attempt to understand how and why social workers evaluate their practice the way they do.
Information-theoretic decomposition of embodied and situated systems.
Da Rold, Federico
2018-07-01
The embodied and situated view of cognition stresses the importance of real-time and nonlinear bodily interaction with the environment for developing concepts and structuring knowledge. In this article, populations of robots controlled by an artificial neural network learn a wall-following task through artificial evolution. At the end of the evolutionary process, time series are recorded from perceptual and motor neurons of selected robots. Information-theoretic measures are estimated on pairings of variables to unveil nonlinear interactions that structure the agent-environment system. Specifically, the mutual information is utilized to quantify the degree of dependence and the transfer entropy to detect the direction of the information flow. Furthermore, the system is analyzed with the local form of such measures, thus capturing the underlying dynamics of information. Results show that different measures are interdependent and complementary in uncovering aspects of the robots' interaction with the environment, as well as characteristics of the functional neural structure. Therefore, the set of information-theoretic measures provides a decomposition of the system, capturing the intricacy of nonlinear relationships that characterize robots' behavior and neural dynamics.
(I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.
van Rijnsoever, Frank J
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
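A minimal simulation of the "random chance" scenario described above, with invented toy parameters (30 codes, each carried by 10% of information sources):

```python
import random

def steps_to_saturation(code_probs, rng, max_steps=10_000):
    # Draw information sources until every code has been observed at least
    # once; each drawn source reveals code c independently with probability p_c.
    seen = set()
    for step in range(1, max_steps + 1):
        seen.update(c for c, p in enumerate(code_probs) if rng.random() < p)
        if len(seen) == len(code_probs):
            return step
    return max_steps

rng = random.Random(42)
probs = [0.10] * 30          # 30 codes, each held by 10% of sources (assumed)
runs = [steps_to_saturation(probs, rng) for _ in range(1_000)]
print("mean sample size to theoretical saturation:", sum(runs) / len(runs))
```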
Research on fiber Bragg grating heart sound sensing and wavelength demodulation method
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Miao, Chang-Yun; Gao, Hua; Gan, Jing-Meng; Li, Hong-Qiang
2010-11-01
Heart sounds carry a wealth of physiological and pathological information about the heart and blood vessels. Heart sound detection is an important way to assess cardiac status and is significant for the early diagnosis of heart disease. In order to improve sensitivity and reduce noise, a heart sound measurement method based on a fiber Bragg grating was investigated. Based on the vibration principle of a plane circular diaphragm, a fiber Bragg grating heart sound sensor structure was designed and a heart sound sensing mathematical model was established. A formula for heart sound sensitivity was derived, and the theoretical sensitivity of the designed sensor is 957.11 pm/kPa. Based on the matched-grating method, an experimental system was built, by which the shift of the reflected wavelength of the sensing grating was detected and the heart sound information was obtained. Experiments show that the designed sensor can detect heart sounds, with a reflected-wavelength variation range of about 70 pm. With a sampling frequency of 1 kHz, the heart sound waveform extracted using the db4 wavelet has the same characteristics as that of a standard heart sound sensor.
ERIC Educational Resources Information Center
Alsadoon, Abeer; Prasad, P. W. C.; Beg, Azam
2017-01-01
Helping students understand the theoretical concepts of digital logic design is one of the major issues faced by academics; therefore, teachers have tried different techniques to link theoretical information to practical knowledge. Use of software simulations is a technique for learning and practice that can be applied…
Signal location using generalized linear constraints
NASA Astrophysics Data System (ADS)
Griffiths, Lloyd J.; Feldman, D. D.
1992-01-01
This report has presented a two-part method for estimating the directions of arrival of uncorrelated narrowband sources when there are arbitrary phase errors and angle-independent gain errors. The signal steering vectors are estimated in the first part of the method; in the second part, the arrival directions are estimated. It should be noted that the second part of the method can be tailored to incorporate additional information about the nature of the phase errors. For example, if the phase errors are known to be caused solely by element misplacement, the element locations can be estimated concurrently with the DOAs by trying to match the theoretical steering vectors to the estimated ones. Simulation results suggest that, for general perturbations, the method can resolve closely spaced sources under conditions for which a standard high-resolution DOA method such as MUSIC fails.
Practical characterization of quantum devices without tomography
NASA Astrophysics Data System (ADS)
Landon-Cardinal, Olivier; Flammia, Steven; Silva, Marcus; Liu, Yi-Kai; Poulin, David
2012-02-01
Quantum tomography is the main method used to assess the quality of quantum information processing devices, but its complexity presents a major obstacle for the characterization of even moderately large systems. Part of the reason for this complexity is that tomography generates much more information than is usually sought. Taking a more targeted approach, we develop schemes that enable (i) estimating the fidelity of an experiment to a theoretical ideal description, (ii) learning which description within a reduced subset best matches the experimental data. Both these approaches yield a significant reduction in resources compared to tomography. In particular, we show how to estimate the fidelity between a predicted pure state and an arbitrary experimental state using only a constant number of Pauli expectation values selected at random according to an importance-weighting rule. In addition, we propose methods for certifying quantum circuits and learning continuous-time quantum dynamics that are described by local Hamiltonians or Lindbladians.
A simple and accurate method for calculation of the structure factor of interacting charged spheres.
Wu, Chu; Chan, Derek Y C; Tabor, Rico F
2014-07-15
Calculation of the structure factor of a system of interacting charged spheres based on the Ginoza solution of the Ornstein-Zernike equation has been developed and implemented in a stand-alone spreadsheet. This facilitates direct interactive numerical and graphical comparisons of experimental structure factors with the pioneering theoretical model of Hayter-Penfold that uses the Hansen-Hayter renormalisation correction. The method is used to fit example experimental structure factors obtained from the small-angle neutron scattering of a well-characterised charged micelle system, demonstrating that this implementation, available in the supplementary information, gives identical results to the Hayter-Penfold-Hansen approach for the structure factor, S(q), and provides direct access to the pair correlation function, g(r). Additionally, the intermediate calculations and outputs can be readily accessed and modified within the familiar spreadsheet environment, along with information on the normalisation procedure.
Fernández, Maria E; Gonzales, Alicia; Tortolero-Luna, Guillermo; Partida, Sylvia; Bartholomew, L Kay
2005-10-01
This article describes the development of the Cultivando La Salud program, an intervention to increase breast and cervical cancer screening among Hispanic farmworker women. The processes and findings of intervention mapping (IM), a planning process for the development of theory- and evidence-informed programs, are discussed. The six IM steps are presented: needs assessment, preparation of planning matrices, selection of theoretical methods and practical strategies, program design, implementation planning, and evaluation. The article also describes how qualitative and quantitative findings informed intervention development. IM helped ensure that theory and evidence guided (a) the identification of behavioral and environmental factors related to a target health problem and (b) the selection of the most appropriate methods and strategies to address the identified determinants. IM also guided the development of program materials and implementation by lay health workers. Also reported are findings of the pilot study and effectiveness trial.
A Nonlinear Diffusion Equation-Based Model for Ultrasound Speckle Noise Removal
NASA Astrophysics Data System (ADS)
Zhou, Zhenyu; Guo, Zhichang; Zhang, Dazhi; Wu, Boying
2018-04-01
Ultrasound images are contaminated by speckle noise, which brings difficulties to further image analysis and clinical diagnosis. In this paper, we address this problem from the perspective of nonlinear diffusion equation theory. We develop a nonlinear diffusion equation-based model that takes into account not only the gradient information of the image, but also its gray-level information. By utilizing the region indicator as the variable exponent, we can adaptively control the diffusion type, which alternates between the Perona-Malik diffusion and the Charbonnier diffusion according to the image gray levels. Furthermore, we analyze the proposed model with respect to its theoretical and numerical properties. Experiments show that the proposed method achieves much better speckle suppression and edge preservation when compared with traditional despeckling methods, especially in low gray level and low-contrast regions.
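The two diffusivities the model reportedly alternates between have standard forms, reproduced below with contrast parameter K > 0 and s = |∇u|; the paper's variable-exponent coupling through the gray-level region indicator is not reproduced here.

```latex
% Perona-Malik and Charbonnier diffusivities (standard forms):
g_{\mathrm{PM}}(s) = \frac{1}{1 + (s/K)^{2}}, \qquad
g_{\mathrm{Ch}}(s) = \frac{1}{\sqrt{1 + (s/K)^{2}}}
```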
Lee, Jennifer F.; Hesselberth, Jay R.; Meyers, Lauren Ancel; Ellington, Andrew D.
2004-01-01
The aptamer database is designed to contain comprehensive sequence information on aptamers and unnatural ribozymes that have been generated by in vitro selection methods. Such data are not normally collected in ‘natural’ sequence databases, such as GenBank. Besides serving as a storehouse of sequences that may have diagnostic or therapeutic utility, the database serves as a valuable resource for theoretical biologists who describe and explore fitness landscapes. The database is updated monthly and is publicly available at http://aptamer.icmb.utexas.edu/. PMID:14681367
Generalized Tavis-Cummings models and quantum networks
NASA Astrophysics Data System (ADS)
Gorokhov, A. V.
2018-04-01
The properties of quantum networks based on generalized Tavis-Cummings models are theoretically investigated. We have calculated the information transfer success rate from one node to another in a simple model of a quantum network realized with two-level atoms placed in cavities and interacting with an external laser field and cavity photons. The dynamical group method for the Hamiltonian and the corresponding coherent-state technique were used to investigate the temporal dynamics of the two-node model.
Cramer, Christopher J.; Tolman, William B.
2008-01-01
Using interwoven experimental and theoretical methods, detailed studies of several structurally defined 1:1 Cu/O2 complexes have provided important fundamental chemical information useful for understanding the nature of intermediates involved in aerobic oxidations in synthetic and enzymatic copper-mediated catalysis. In particular, these studies have shed new light onto the factors that influence the mode of O2 coordination (end-on vs. side-on) and the electronic structure, which can vary between Cu(II)-superoxo and Cu(III)-peroxo extremes. PMID:17458929
NASA Technical Reports Server (NTRS)
Lenoble, Jacqueline (Editor); Remer, Lorraine (Editor); Tanre, Didier (Editor)
2012-01-01
This book gives a much needed explanation of the basic physical principles of radiative transfer and remote sensing, and presents all the instruments and retrieval algorithms in a homogeneous manner. For the first time, an easy path from theory to practical algorithms is available in one easily accessible volume, making the connection between theoretical radiative transfer and individual practical solutions to retrieve aerosol information from remote sensing. In addition, the specifics and intercomparison of all current and historical methods are explained and clarified.
Buchner, Ginka S; Murphy, Ronan D; Buchete, Nicolae-Viorel; Kubelka, Jan
2011-08-01
The problem of spontaneous folding of amino acid chains into highly organized, biologically functional three-dimensional protein structures continues to challenge modern science. Understanding how proteins fold requires characterization of the underlying energy landscapes as well as the dynamics of the polypeptide chains in all stages of the folding process. In recent years, important advances toward these goals have been achieved owing to the rapidly growing interdisciplinary interest and significant progress in both experimental techniques and theoretical methods. Improvements in experimental time resolution led to determination of the timescales of the important elementary events in folding, such as formation of secondary structure and tertiary contacts. Sensitive single-molecule methods made it possible to probe the distributions of the unfolded and folded states and to follow the folding reaction of individual protein molecules. The discovery of proteins that fold in microseconds opened the possibility of atomic-level theoretical simulations of folding and their direct comparisons with experimental data, as well as of direct experimental observation of the barrier-less folding transition. Ultra-fast folding also brought new questions concerning the intrinsic limits of folding rates and the experimental signatures of barrier-less "downhill" folding. These problems will require novel approaches for even more detailed experimental investigations of folding dynamics as well as for the analysis of folding kinetic data. For theoretical simulations of folding, a main challenge is how to extract the relevant information from overwhelmingly detailed atomistic trajectories. New theoretical methods have been devised to allow a systematic approach towards a quantitative analysis of the kinetic network of folding-unfolding transitions between various configuration states of a protein, revealing the transition states and the associated folding pathways at multiple levels, from atomistic to coarse-grained representations. This article is part of a Special Issue entitled: Protein Dynamics: Experimental and Computational Approaches.
2012-01-01
Background There is little systematic operational guidance about how best to develop complex interventions to reduce the gap between practice and evidence. This article is one in a Series of articles documenting the development and use of the Theoretical Domains Framework (TDF) to advance the science of implementation research. Methods The intervention was developed considering three main components: theory, evidence, and practical issues. We used a four-step approach, consisting of guiding questions, to direct the choice of the most appropriate components of an implementation intervention: Who needs to do what, differently? Using a theoretical framework, which barriers and enablers need to be addressed? Which intervention components (behaviour change techniques and mode(s) of delivery) could overcome the modifiable barriers and enhance the enablers? And how can behaviour change be measured and understood? Results A complex implementation intervention was designed that aimed to improve acute low back pain management in primary care. We used the TDF to identify the barriers and enablers to the uptake of evidence into practice and to guide the choice of intervention components. These components were then combined into a cohesive intervention. The intervention was delivered via two facilitated interactive small group workshops. We also produced a DVD to distribute to all participants in the intervention group. We chose outcome measures in order to assess the mediating mechanisms of behaviour change. Conclusions We have illustrated a four-step systematic method for developing an intervention designed to change clinical practice based on a theoretical framework. The method of development provides a systematic framework that could be used by others developing complex implementation interventions. While this framework should be iteratively adjusted and refined to suit other contexts and settings, we believe that the four-step process should be maintained as the primary framework to guide researchers through a comprehensive intervention development process. PMID:22531013
A continuous-wave ultrasound system for displacement amplitude and phase measurement.
Finneran, James J; Hastings, Mardi C
2004-06-01
A noninvasive, continuous-wave ultrasonic technique was developed to measure the displacement amplitude and phase of mechanical structures. The measurement system was based on a method developed by Rogers and Hastings ["Noninvasive vibration measurement system and method for measuring amplitude of vibration of tissue in an object being investigated," U.S. Patent No. 4,819,643 (1989)] and expanded to include phase measurement. A low-frequency sound source was used to generate harmonic vibrations in a target of interest. The target was simultaneously insonified by a low-power, continuous-wave ultrasonic source. Reflected ultrasound was phase modulated by the target motion and detected with a separate ultrasonic transducer. The target displacement amplitude was obtained directly from the received ultrasound frequency spectrum by comparing the carrier and sideband amplitudes. Phase information was obtained by demodulating the received signal using a double-balanced mixer and low-pass filter. A theoretical model for the ultrasonic receiver field is also presented. This model coupled existing models for focused piston radiators and for pulse-echo ultrasonic fields. Experimental measurements of the resulting receiver fields compared favorably with theoretical predictions.
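The displacement-from-spectrum step rests on standard phase-modulation relations; the notation below (carrier wavelength λ, displacement amplitude d0) is assumed here and may differ from the authors'.

```latex
% Round-trip reflection off a target vibrating as d(t) = d_0 \sin(\omega_v t)
% phase-modulates the ultrasonic carrier:
\phi(t) = \beta \sin(\omega_v t), \qquad \beta = \frac{4\pi d_0}{\lambda}
% Sideband-to-carrier amplitude ratio in the received spectrum (Bessel J_n):
\frac{A_1}{A_0} = \frac{J_1(\beta)}{J_0(\beta)} \approx \frac{\beta}{2}
\quad (\beta \ll 1)
\;\;\Rightarrow\;\; d_0 \approx \frac{\lambda}{2\pi}\,\frac{A_1}{A_0}
```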
Group-theoretic models of the inversion process in bacterial genomes.
Egri-Nagy, Attila; Gebhardt, Volker; Tanaka, Mark M; Francis, Andrew R
2014-07-01
The variation in genome arrangements among bacterial taxa is largely due to the process of inversion. Recent studies indicate that not all inversions are equally probable, suggesting, for instance, that shorter inversions are more frequent than longer ones, and that those that move the terminus of replication are less probable than those that do not. Current methods for establishing the inversion distance between two bacterial genomes are unable to incorporate such information. In this paper we suggest a group-theoretic framework that in principle can take these constraints into account. In particular, we show that by lifting the problem from circular permutations to the affine symmetric group, the inversion distance can be found in polynomial time for a model in which inversions are restricted to acting on two regions. This requires the proof of new results in group theory, and suggests a vein of new combinatorial problems concerning permutation groups on which group theorists will be needed to collaborate with biologists. We apply the new method to inferring distances and phylogenies for published Yersinia pestis data.
Cárdenas, Walter HZ; Mamani, Javier B; Sibov, Tatiana T; Caous, Cristofer A; Amaro, Edson; Gamarra, Lionel F
2012-01-01
Background Nanoparticles in suspension are often utilized for intracellular labeling and evaluation of toxicity in experiments conducted in vitro. The purpose of this study was to undertake a computational modeling analysis of the deposition kinetics of a magnetite nanoparticle agglomerate in cell culture medium. Methods Finite difference methods and the Crank–Nicolson algorithm were used to solve the equation of mass transport in order to analyze concentration profiles and dose deposition. Theoretical data were confirmed by experimental magnetic resonance imaging. Results Different behavior in the dose fraction deposited was found for magnetic nanoparticles up to 50 nm in diameter when compared with magnetic nanoparticles of a larger diameter. Small changes in the dispersion factor cause variations of up to 22% in the dose deposited. The experimental data confirmed the theoretical results. Conclusion These findings are important in planning for nanomaterial absorption, because they provide valuable information for efficient intracellular labeling and control toxicity. This model enables determination of the in vitro transport behavior of specific magnetic nanoparticles, which is also relevant to other models that use cellular components and particle absorption processes. PMID:22745539
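A toy Crank–Nicolson step for the diffusive part of such a transport model; the paper's model also includes gravitational settling, omitted here, and all numbers are illustrative assumptions.

```python
import numpy as np

# Crank-Nicolson for 1-D diffusion u_t = D u_xx on a column of culture medium,
# zero-flux boundaries imposed via ghost nodes (doubled off-diagonal entries).
D, L, nx, dt, nsteps = 1e-10, 1e-3, 101, 1.0, 3600  # m^2/s, m, nodes, s, steps
dx = L / (nx - 1)
r = D * dt / (2 * dx**2)

A = (1 + 2*r) * np.eye(nx) - r * (np.eye(nx, k=1) + np.eye(nx, k=-1))
B = (1 - 2*r) * np.eye(nx) + r * (np.eye(nx, k=1) + np.eye(nx, k=-1))
for M in (A, B):              # reflecting boundaries: ghost node u_{-1} = u_1
    M[0, 1] *= 2
    M[-1, -2] *= 2

u = np.zeros(nx)
u[nx // 2] = 1.0              # initial pulse of nanoparticle concentration
for _ in range(nsteps):
    u = np.linalg.solve(A, B @ u)

mass = u[0] / 2 + u[1:-1].sum() + u[-1] / 2   # trapezoid mass, conserved here
print("mass after 1 h of simulated diffusion:", round(float(mass), 6))
```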
NASA Astrophysics Data System (ADS)
Naidoo, Kara
2017-12-01
This study examines the transformation and dynamic nature of one teacher candidate's (Susan) identity as a learner and teacher of science throughout an innovative science methods course. The goal of this paper is to use theoretically derived themes grounded in cultural-historical activity theory (CHAT) and situated learning theory to determine the ways in which Susan's identity as a learner and teacher of science was influenced by her experiences in the course, and to describe how she made meaning of her transformative process. The following are the three theoretical themes: (1) learning contributes to identity development, (2) identity development is a dialogical process that occurs between individuals, not within individuals, and (3) social practice leads to transformations and transformations lead to the creation of new social practices. Within each theme, specific experiences in the science methods course are identified that influenced Susan's identity development as a teacher of science. Knowing how context and experiences influence identity development can inform design decisions concerning teacher education programs, courses, and experiences for candidates.
Electron capture rates in stars studied with heavy ion charge exchange reactions
NASA Astrophysics Data System (ADS)
Bertulani, C. A.
2018-01-01
Indirect methods using nucleus-nucleus reactions at high energies (here, high energies mean ~50 MeV/nucleon and higher) are now routinely used to extract information of interest for nuclear astrophysics. This is of extreme relevance, as many of the nuclei involved in stellar evolution are short-lived. Therefore, indirect methods have become the focus of recent studies carried out in major nuclear physics facilities. Among such methods, heavy ion charge exchange is thought to be a useful tool to infer the Gamow-Teller matrix elements needed to describe electron capture rates in stars and also double beta-decay experiments. In this short review, I provide theoretical guidance based on a simple reaction model for charge exchange reactions.
Edge-augmented Fourier partial sums with applications to Magnetic Resonance Imaging (MRI)
NASA Astrophysics Data System (ADS)
Larriva-Latt, Jade; Morrison, Angela; Radgowski, Alison; Tobin, Joseph; Iwen, Mark; Viswanathan, Aditya
2017-08-01
Certain applications such as Magnetic Resonance Imaging (MRI) require the reconstruction of functions from Fourier spectral data. When the underlying functions are piecewise-smooth, standard Fourier approximation methods suffer from the Gibbs phenomenon - with associated oscillatory artifacts in the vicinity of edges and an overall reduced order of convergence in the approximation. This paper proposes an edge-augmented Fourier reconstruction procedure which uses only the first few Fourier coefficients of an underlying piecewise-smooth function to accurately estimate jump information and then incorporate it into a Fourier partial sum approximation. We provide both theoretical and empirical results showing the improved accuracy of the proposed method, as well as comparisons demonstrating superior performance over existing state-of-the-art sparse optimization-based methods.
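A quick illustration of the Gibbs artifact the method targets: the overshoot of a Fourier partial sum next to a jump stays near 9% of the jump height however many modes are kept. This is a toy example, not the authors' reconstruction procedure.

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 20001)

def step_partial_sum(x, n_modes):
    # Fourier series of a unit step at x = 0 on [-pi, pi):
    # 1/2 + (2/pi) * sum over odd k of sin(kx)/k
    s = np.full_like(x, 0.5)
    for k in range(1, n_modes + 1, 2):
        s += (2 / np.pi) * np.sin(k * x) / k
    return s

for n in (16, 64, 256):
    overshoot = step_partial_sum(x, n).max() - 1.0
    print(f"{n:4d} modes: overshoot = {overshoot:.3f}")  # stays near 0.089
```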
Continuum Electrostatics Approaches to Calculating pKas and Ems in Proteins
Gunner, MR; Baker, Nathan A.
2017-01-01
Proteins change their charge state through protonation and redox reactions as well as through binding charged ligands. The free energy of these reactions is dominated by solvation and electrostatic energies and modulated by protein conformational relaxation in response to the ionization state changes. Although computational methods for calculating these interactions can provide very powerful tools for predicting protein charge states, they include several critical approximations of which users should be aware. This chapter discusses the strengths, weaknesses, and approximations of popular computational methods for predicting charge states and understanding their underlying electrostatic interactions. The goal of this chapter is to inform users about applications and potential caveats of these methods as well as outline directions for future theoretical and computational research. PMID:27497160
Collaborative Manufacturing Management in Networked Supply Chains
NASA Astrophysics Data System (ADS)
Pouly, Michel; Naciri, Souleiman; Berthold, Sébastien
ERP systems provide information management and analysis to industrial companies and support their planning activities. They are currently based mostly on theoretical (average) parameter values rather than on actual shop floor data, which disturbs the planning algorithms. On the other hand, sharing data between manufacturers, suppliers and customers becomes very important to ensure reactivity to market variability. This paper proposes software solutions to address these requirements and methods to automatically capture the corresponding shop floor information. In order to share data produced by different legacy systems along the collaborative networked supply chain, we propose to use the Generic Product Model developed by Hitachi to extract, translate and store the heterogeneous ERP data.
Resolving phase information of the optical local density of state with scattering near-field probes
NASA Astrophysics Data System (ADS)
Prasad, R.; Vincent, R.
2016-10-01
We theoretically discuss the link between the phase measured using scattering-type scanning near-field optical microscopy (s-SNOM) and the local density of optical states (LDOS). A remarkable result is that the LDOS information is directly contained in the phase of the probe. Therefore, by monitoring the spatial variation of the trans-scattering phase, we locally measure the phase modulation associated with the probe and the optical paths. We demonstrate numerically that a technique involving two-phase imaging of a sample with two tips of different sizes should allow the pLDOS to be imaged. For this imaging method, numerical comparison with extinction probe measurements shows substantial qualitative and quantitative improvement.
Benford's law and the FSD distribution of economic behavioral micro data
NASA Astrophysics Data System (ADS)
Villas-Boas, Sofia B.; Fu, Qiuzi; Judge, George
2017-11-01
In this paper, we focus on the first significant digit (FSD) distribution of European micro income data and use information-theoretic, entropy-based methods to investigate the degree to which Benford's FSD law is consistent with the nature of these economic behavioral systems. We demonstrate that Benford's law is not an empirical phenomenon that occurs only in important distributions in physical statistics, but that it also arises in self-organizing dynamic economic behavioral systems. The empirical likelihood member of the minimum divergence-entropy family is used to recover country-based income FSD probability density functions and to demonstrate the implications of using a Benford prior reference distribution in economic behavioral system information recovery.
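The comparison at the heart of the paper can be sketched with a toy lognormal "income" sample standing in for the micro data; the divergence below is a plain KL divergence, not the authors' empirical-likelihood estimator.

```python
import numpy as np

benford = np.log10(1 + 1 / np.arange(1, 10))       # P(FSD = d), d = 1..9

rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=10.0, sigma=1.0, size=100_000)
fsd = (10 ** (np.log10(incomes) % 1)).astype(int)  # first significant digit
emp = np.bincount(fsd, minlength=10)[1:] / fsd.size

print("empirical FSD:", np.round(emp, 4))
print("Benford FSD:  ", np.round(benford, 4))
print("KL(emp || Benford):", round(float(np.sum(emp * np.log(emp / benford))), 5))
```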
Rheology, tectonics, and the structure of the Venus lithosphere
NASA Technical Reports Server (NTRS)
Zuber, M. T.
1994-01-01
Given the absence of ground truth information on seismic structure, heat flow, and rock strength, or short wavelength gravity or magnetic data for Venus, information on the thermal, mechanical and compositional nature of the shallow interior must be obtained by indirect methods. Using pre-Magellan data, theoretical models constrained by the depths of impact craters and the length scales of tectonic features yielded estimates on the thickness of Venus' brittle-elastic lithosphere and the allowable range of crustal thickness and surface thermal gradient. The purpose of this study is to revisit the question of the shallow structure of Venus based on Magellan observations of the surface and recent experiments that address Venus' crustal rheology.
A Novel Application for Text Watermarking in Digital Reading
NASA Astrophysics Data System (ADS)
Zhang, Jin; Li, Qing-Cheng; Wang, Cong; Fang, Ji
Although watermarking research has made great strides on the theoretical side, its lack of business application cannot be ignored. This is because few people pay attention to how the information carried by a watermark is used. This paper proposes a new watermarking application method. After the digital document is reorganized together with an advertisement, the watermark is designed to carry the structure of the new document, and under attack it releases the advertisement as interference information. On the one hand, reducing the quality of digital works can inhibit unauthorized distribution. On the other hand, the advertisement can benefit copyright holders as compensation. Implementation details, attack evaluation and watermarking algorithm correlation are also discussed through an experiment based on a txt file.
A project management system for the X-29A flight test program
NASA Technical Reports Server (NTRS)
Stewart, J. F.; Bauer, C. A.
1983-01-01
The project-management system developed for NASA's participation in the X-29A aircraft development program is characterized from a theoretical perspective, as an example of a system appropriate to advanced, highly integrated technology projects. System-control theory is applied to the analysis of classical project-management techniques and structures, which are found to be of the closed-loop multivariable type, and the effects of increasing project complexity and integration are evaluated. The importance of information flow, sampling frequency, information holding, and delays is stressed. The X-29A system is developed in four stages: establishment of overall objectives and requirements, determination of information processes (block diagrams), definition of personnel functional roles and relationships, and development of a detailed work-breakdown structure. The resulting system is shown to require a greater information flow to management than conventional methods. Sample block diagrams are provided.
Eschler, Jordan; O’Leary, Katie; Kendall, Logan; Ralston, James D.; Pratt, Wanda
2017-01-01
The electronic health record (EHR) has evolved as a tool primarily dictated by the needs of health care clinicians and organizations, providing important functions that support day-to-day work in health care. However, the EHR and supporting information systems also have the potential to incorporate patient workflows and tasks. Integrating patient needs into existing EHR and health management systems will require understanding patients as direct stakeholders, necessitating observation and exploration of in situ EHR use by patients to envision new opportunities for future systems. In this paper, we describe the application of a theoretical framework (Vicente, 1999) to organize qualitative data during a multi-stage research study into patient engagement with EHRs. By using this method of systematic inquiry, we have more effectively elicited patient stakeholder needs and goals to inform the design of future health care information systems. PMID:29056874
Harris, Janet L; Booth, Andrew; Cargo, Margaret; Hannes, Karin; Harden, Angela; Flemming, Kate; Garside, Ruth; Pantoja, Tomas; Thomas, James; Noyes, Jane
2018-05-01
This paper updates previous Cochrane guidance on question formulation, searching, and protocol development, reflecting recent developments in methods for conducting qualitative evidence syntheses to inform Cochrane intervention reviews. Examples are used to illustrate how decisions about boundaries for a review are formed via an iterative process of constructing lines of inquiry and mapping the available information to ascertain whether evidence exists to answer questions related to effectiveness, implementation, feasibility, appropriateness, economic evidence, and equity. The process of question formulation allows reviewers to situate the topic in relation to how it informs and explains effectiveness, using the criterion of meaningfulness, appropriateness, feasibility, and implementation. Questions related to complex questions and interventions can be structured by drawing on an increasingly wide range of question frameworks. Logic models and theoretical frameworks are useful tools for conceptually mapping the literature to illustrate the complexity of the phenomenon of interest. Furthermore, protocol development may require iterative question formulation and searching. Consequently, the final protocol may function as a guide rather than a prescriptive route map, particularly in qualitative reviews that ask more exploratory and open-ended questions.
Using qualitative methods to develop a contextually tailored instrument: Lessons learned
Lee, Haeok; Kiang, Peter; Kim, Minjin; Semino-Asaro, Semira; Colten, Mary Ellen; Tang, Shirley S.; Chea, Phala; Peou, Sonith; Grigg-Saito, Dorcas C.
2015-01-01
Objective: To develop a population-specific instrument to inform hepatitis B virus (HBV) and human papilloma virus (HPV) prevention education and intervention based on data and evidence obtained from the targeted population of Khmer mothers reflecting their socio-cultural and health behaviors. Methods: The principles of community-based participatory research (CBPR) guided the development of a standardized survey interview. Four stages of development and testing of the survey instrument took place in order to inform the quantitative health survey used to collect data in stage five of the project. This article reports only on Stages 1-4. Results: This process created a new quantitative measure of HBV and HPV prevention behavior based on the revised Network Episode Model and informed by the targeted population. The CBPR method facilitated the application and translation of abstract theoretical ideas of HBV and HPV prevention behavior into culturally-relevant words and expressions of Cambodian Americans (CAs). Conclusions: The design of an instrument development process that accounts for distinctive socio-cultural backgrounds of CA refugee/immigrant women provides a model for use in developing future health surveys that are intended to aid minority-serving health care professionals and researchers as well as targeted minority populations. PMID:27981114
Concerns regarding a call for pluralism of information theory and hypothesis testing
Lukacs, P.M.; Thompson, W.L.; Kendall, W.L.; Gould, W.R.; Doherty, P.F.; Burnham, K.P.; Anderson, D.R.
2007-01-01
1. Stephens et al. (2005) argue for 'pluralism' in statistical analysis, combining null hypothesis testing and information-theoretic (I-T) methods. We show that I-T methods are more informative even in single variable problems and we provide an ecological example. 2. I-T methods allow inferences to be made from multiple models simultaneously. We believe multimodel inference is the future of data analysis, which cannot be achieved with null hypothesis-testing approaches. 3. We argue for a stronger emphasis on critical thinking in science in general and less reliance on exploratory data analysis and data dredging. Deriving alternative hypotheses is central to science; deriving a single interesting science hypothesis and then comparing it to a default null hypothesis (e.g. 'no difference') is not an efficient strategy for gaining knowledge. We think this single-hypothesis strategy has been relied upon too often in the past. 4. We clarify misconceptions presented by Stephens et al. (2005). 5. We think inference should be made about models, directly linked to scientific hypotheses, and their parameters conditioned on data, Prob(Hj | data). I-T methods provide a basis for this inference. Null hypothesis testing merely provides a probability statement about the data conditioned on a null model, Prob(data | H0). 6. Synthesis and applications. I-T methods provide a more informative approach to inference. I-T methods provide a direct measure of evidence for or against hypotheses and a means to consider simultaneously multiple hypotheses as a basis for rigorous inference. Progress in our science can be accelerated if modern methods can be used intelligently; this includes various I-T and Bayesian methods.
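The multimodel inference advocated here is often operationalized with Akaike weights; a minimal sketch with invented AIC values:

```python
import numpy as np

aic = np.array([102.3, 100.1, 105.8])  # one AIC per candidate model/hypothesis
delta = aic - aic.min()
w = np.exp(-0.5 * delta)
w /= w.sum()                           # Akaike weights: evidence within the set
for i, (d, wi) in enumerate(zip(delta, w)):
    print(f"model {i}: delta AIC = {d:5.2f}, weight = {wi:.3f}")
# Weights approximate Prob(H_j | data) within the candidate set, in contrast
# to a p-value, which is Prob(data | H_0).
```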
NASA Astrophysics Data System (ADS)
Goodwell, Allison E.; Kumar, Praveen
2017-07-01
Information theoretic measures can be used to identify nonlinear interactions between source and target variables through reductions in uncertainty. In information partitioning, multivariate mutual information is decomposed into synergistic, unique, and redundant components. Synergy is information shared only when sources influence a target together, uniqueness is information only provided by one source, and redundancy is overlapping shared information from multiple sources. While this partitioning has been applied to provide insights into complex dependencies, several proposed partitioning methods overestimate redundant information and omit a component of unique information because they do not account for source dependencies. Additionally, information partitioning has only been applied to time-series data in a limited context, using basic pdf estimation techniques or a Gaussian assumption. We develop a Rescaled Redundancy measure (Rs) to solve the source dependency issue, and present Gaussian, autoregressive, and chaotic test cases to demonstrate its advantages over existing techniques in the presence of noise, various source correlations, and different types of interactions. This study constitutes the first rigorous application of information partitioning to environmental time-series data, and addresses how noise, pdf estimation technique, or source dependencies can influence detected measures. We illustrate how our techniques can unravel the complex nature of forcing and feedback within an ecohydrologic system with an application to 1 min environmental signals of air temperature, relative humidity, and windspeed. The methods presented here are applicable to the study of a broad range of complex systems composed of interacting variables.
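A crude histogram-based version of the partitioning step. The "minimum mutual information" redundancy used below is the classic floor that, as the authors note, tends to overestimate redundancy when sources are dependent; their Rescaled Redundancy measure is not reproduced here.

```python
import numpy as np

def mutual_info(x, y, bins=8):
    # Plug-in histogram estimate of I(X;Y) in bits.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum())

rng = np.random.default_rng(1)
s1 = rng.normal(size=5000)                   # source 1
s2 = 0.6 * s1 + 0.8 * rng.normal(size=5000)  # source 2, correlated with s1
t = s1 + s2 + 0.5 * rng.normal(size=5000)    # target driven by both sources

i1, i2 = mutual_info(s1, t), mutual_info(s2, t)
redundancy = min(i1, i2)                     # Williams-Beer style floor
print(f"I(S1;T)={i1:.2f} bits, I(S2;T)={i2:.2f} bits, redundancy<={redundancy:.2f}")
```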
Tavender, Emma J; Bosch, Marije; Gruen, Russell L; Green, Sally E; Michie, Susan; Brennan, Sue E; Francis, Jill J; Ponsford, Jennie L; Knott, Jonathan C; Meares, Sue; Smyth, Tracy; O'Connor, Denise A
2015-05-25
Despite the availability of evidence-based guidelines for the management of mild traumatic brain injury in the emergency department (ED), variations in practice exist. Interventions designed to implement recommended behaviours can reduce this variation. Using theory to inform intervention development is advocated; however, there is no consensus on how to select or apply theory. Integrative theoretical frameworks, based on syntheses of theories and theoretical constructs relevant to implementation, have the potential to assist in the intervention development process. This paper describes the process of applying two theoretical frameworks to investigate the factors influencing recommended behaviours and the choice of behaviour change techniques and modes of delivery for an implementation intervention. A stepped approach was followed: (i) identification of locally applicable and actionable evidence-based recommendations as targets for change, (ii) selection and use of two theoretical frameworks for identifying barriers to and enablers of change (Theoretical Domains Framework and Model of Diffusion of Innovations in Service Organisations) and (iii) identification and operationalisation of intervention components (behaviour change techniques and modes of delivery) to address the barriers and enhance the enablers, informed by theory, evidence and feasibility/acceptability considerations. We illustrate this process in relation to one recommendation, prospective assessment of post-traumatic amnesia (PTA) by ED staff using a validated tool. Four recommendations for managing mild traumatic brain injury were targeted with the intervention. The intervention targeting the PTA recommendation consisted of 14 behaviour change techniques and addressed 6 theoretical domains and 5 organisational domains. The mode of delivery was informed by six Cochrane reviews. It was delivered via five intervention components: (i) local stakeholder meetings, (ii) identification of local opinion leader teams, (iii) a train-the-trainer workshop for appointed local opinion leaders, (iv) local training workshops for delivery by trained local opinion leaders and (v) provision of tools and materials to prompt recommended behaviours. Two theoretical frameworks were used in a complementary manner to inform intervention development in managing mild traumatic brain injury in the ED. The effectiveness and cost-effectiveness of the developed intervention is being evaluated in a cluster randomised trial, part of the Neurotrauma Evidence Translation (NET) program.
Peavey, Erin; Vander Wyst, Kiley B
2017-10-01
This article provides a critical examination and comparison of the conceptual meaning and underlying assumptions of the concepts evidence-based design (EBD) and research-informed design (RID) in order to facilitate practical use and theoretical development. In recent years, EBD has experienced broad adoption, yet it has been simultaneously critiqued for rigidity and misapplication. Many practitioners are gravitating to the term RID to describe their method of integrating knowledge into the design process. However, the term RID lacks a clear definition, and the blurring of terms has the potential to weaken advances made in integrating research into practice. Concept analysis methods from Walker and Avant were used to define the concepts for comparison. Conceptual definitions, process descriptions, examples (i.e., model cases), and methods of evaluation are offered for EBD and RID. Although EBD and RID share similarities in meaning, the two terms are distinct. When comparing evidence-based (EB) and research-informed, EB denotes a broad base of information types (evidence) that is narrowly applied (based), while the latter references a narrow slice of information (research) that is broadly applied (informed) to create an end product of design. Much of the confusion between the use of the concepts EBD and RID arises out of differing perspectives between the way practitioners and academics understand the underlying terms. The authors hope this article serves to generate thoughtful dialogue, which is essential to the development of a discipline, and look forward to the contribution of the readership.
Polarization Remote Sensing Physical Mechanism, Key Methods and Application
NASA Astrophysics Data System (ADS)
Yang, B.; Wu, T.; Chen, W.; Li, Y.; Knjazihhin, J.; Asundi, A.; Yan, L.
2017-09-01
China's long-term major planning project, the "high-resolution Earth observation system", has received investment of nearly 100 billion, and the number of satellites will reach 100 by 2020. Since about two-thirds of China's area is covered by mountains, the demands on remote sensing are high. In addition to light intensity, frequency and phase, polarization is also a main physical characteristic of remote sensing electromagnetic waves. Polarization is an important component of the information reflected from the surface and of the atmospheric information, and the polarization effect of ground-object reflection is the basis of polarization remote sensing observation. Therefore, eliminating the polarization effect is very important for remote sensing applications. The main innovations of this paper are as follows. (1) A remote sensing observation method: it is theoretically deduced and verified that polarization can attenuate light in strong-light regions and thereby provide effective polarization information; conversely, polarization in low-light regions can strengthen weak light, likewise yielding effective polarization information. (2) The polarization effect of vegetation: by analyzing the structural characteristics of vegetation, polarization information is obtained; the vegetation structure information in turn directly affects the absorption of biochemical components of the leaves. (3) An atmospheric polarization neutral point observation method: it is proved effective for achieving ground-atmosphere separation, which can eliminate the atmospheric polarization effect and enhance the polarization effect of the ground object.
Informational analysis for compressive sampling in radar imaging.
Zhang, Jingxiong; Yang, Ke
2015-03-24
Compressive sampling or compressed sensing (CS) works on the assumption of the sparsity or compressibility of the underlying signal, relies on the trans-informational capability of the measurement matrix employed and the resultant measurements, and operates with optimization-based algorithms for signal reconstruction; it is thus able to complete data compression while acquiring data, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements and determining the sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretically oriented CS-radar system analysis and performance evaluation.
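A generic greedy recovery loop (orthogonal matching pursuit) illustrating the sub-Nyquist measurement-to-reconstruction flow discussed above; it is a textbook stand-in, not the paper's algorithm, and all sizes are invented.

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily recover a k-sparse x from y = A x.
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(7)
n, m, k = 256, 64, 5                       # scene size, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x_true                             # m << n: sub-Nyquist acquisition
print("max reconstruction error:", float(np.abs(omp(A, y, k) - x_true).max()))
```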
The smooth entropy formalism for von Neumann algebras
NASA Astrophysics Data System (ADS)
Berta, Mario; Furrer, Fabian; Scholz, Volkher B.
2016-01-01
We discuss information-theoretic concepts on infinite-dimensional quantum systems. In particular, we lift the smooth entropy formalism as introduced by Renner and collaborators for finite-dimensional systems to von Neumann algebras. For the smooth conditional min- and max-entropy, we recover similar characterizing properties and information-theoretic operational interpretations as in the finite-dimensional case. We generalize the entropic uncertainty relation with quantum side information of Tomamichel and Renner and discuss applications to quantum cryptography. In particular, we prove the possibility to perform privacy amplification and classical data compression with quantum side information modeled by a von Neumann algebra.
Wixted, John T; Mickes, Laura
2018-01-01
Receiver operating characteristic (ROC) analysis was introduced to the field of eyewitness identification 5 years ago. Since that time, it has been both influential and controversial, and the debate has raised an issue about measuring discriminability that is rarely considered. The issue concerns the distinction between empirical discriminability (measured by area under the ROC curve) vs. underlying/theoretical discriminability (measured by d' or variants of it). Under most circumstances, the two measures will agree about a difference between two conditions in terms of discriminability. However, it is possible for them to disagree, and that fact can lead to confusion about which condition actually yields higher discriminability. For example, if the two conditions have implications for real-world practice (e.g., a comparison of competing lineup formats), should a policymaker rely on the area-under-the-curve measure or the theory-based measure? Here, we illustrate the fact that a given empirical ROC yields as many underlying discriminability measures as there are theories that one is willing to take seriously. No matter which theory is correct, for practical purposes, the singular area-under-the-curve measure best identifies the diagnostically superior procedure. For that reason, area under the ROC curve informs policy in a way that underlying theoretical discriminability never can. At the same time, theoretical measures of discriminability are equally important, but for a different reason. Without an adequate theoretical understanding of the relevant task, the field will be in no position to enhance empirical discriminability.
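The distinction can be made concrete with a few invented hit/false-alarm pairs: d' is computed per operating point under equal-variance Gaussian theory, while AUC integrates the empirical curve. Closing the curve at (0,0) and (1,1) below is a simplification for real, partial eyewitness ROCs.

```python
import numpy as np
from scipy.stats import norm

hits = np.array([0.30, 0.50, 0.65, 0.78])  # cumulative hit rates (invented)
fas = np.array([0.05, 0.12, 0.22, 0.35])   # cumulative false-alarm rates

d_prime = norm.ppf(hits) - norm.ppf(fas)   # theory-based, one per point
roc_f = np.concatenate(([0.0], fas, [1.0]))
roc_h = np.concatenate(([0.0], hits, [1.0]))
auc = np.trapz(roc_h, roc_f)               # empirical area under the curve
print("d' per point:", np.round(d_prime, 2))
print("AUC:", round(float(auc), 3))
```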
The direct cooling tail method for X-ray burst analysis to constrain neutron star masses and radii
NASA Astrophysics Data System (ADS)
Suleimanov, Valery F.; Poutanen, Juri; Nättilä, Joonas; Kajava, Jari J. E.; Revnivtsev, Mikhail G.; Werner, Klaus
2017-04-01
Determining neutron star (NS) radii and masses can help to understand the properties of matter at supra-nuclear densities. Thermal emission during thermonuclear X-ray bursts from NSs in low-mass X-ray binaries provides a unique opportunity to study NS parameters, because of the high fluxes, large luminosity variations and the related changes in the spectral properties. The standard cooling tail method uses hot NS atmosphere models to convert the observed spectral evolution during cooling stages of X-ray bursts to the Eddington flux FEdd and the stellar angular size Ω. These are then translated into constraints on the NS mass M and radius R. Here we present the improved, direct cooling tail method that generalizes the standard approach. First, we adjust the cooling tail method to account for the bolometric correction to the flux. Then, we fit the observed dependence of the blackbody normalization on flux with a theoretical model directly on the M-R plane by interpolating theoretical dependences to a given gravity, hence ensuring only weakly informative priors for M and R instead of FEdd and Ω. The direct cooling method is demonstrated using a photospheric radius expansion burst from SAX J1810.8-2609, which occurred while the system was in the hard state. Compared with the standard cooling tail method, the confidence regions are shifted by 1σ towards larger radii, giving R = 11.5-13.0 km at M = 1.3-1.8 M⊙ for this NS.
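As a rough illustration of the two intermediaries the standard method constrains, a sketch using standard expressions for the observed Eddington flux and angular size, with hypothetical values for mass, radius, distance and hydrogen fraction (not the paper's fit):

```python
# Back-of-the-envelope F_Edd and Omega for an assumed NS configuration.
import numpy as np

G, c = 6.674e-8, 2.998e10            # cgs units
Msun, kpc = 1.989e33, 3.086e21

def edd_flux_and_omega(M_sun, R_km, D_kpc, X=0.0):
    M, R, D = M_sun * Msun, R_km * 1e5, D_kpc * kpc
    kappa = 0.2 * (1 + X)            # electron-scattering opacity, cm^2/g
    zfac = np.sqrt(1 - 2 * G * M / (R * c**2))   # = 1/(1+z)
    F_edd = G * M * c / (kappa * D**2) * zfac    # erg/s/cm^2 at infinity
    omega = (R / (zfac * D))**2                  # (R_inf / D)^2, steradian
    return F_edd, omega

print(edd_flux_and_omega(1.5, 12.0, 4.5))   # hypothetical M, R, D
```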
A three-way approach for protein function classification
Ur Rehman, Hafeez; Azam, Nouman; Yao, JingTao; Benso, Alfredo
2017-01-01
The knowledge of protein functions plays an essential role in understanding biological cells and has a significant impact on human life in areas such as personalized medicine, better crops and improved therapeutic interventions. Due to the expense and inherent difficulty of biological experiments, intelligent methods are generally relied upon for automatic assignment of functions to proteins. Technological advancements in the field of biology are improving our understanding of biological processes and are regularly yielding new features and characteristics that better describe the role of proteins. These anticipated features should not be neglected or overlooked when designing more effective classification techniques. A key issue in this context, which is not being sufficiently addressed, is how to build effective classification models and approaches for protein function prediction that incorporate and take advantage of the ever-evolving biological information. In this article, we propose a three-way decision making approach which makes provision for seeking and incorporating future information. We considered probabilistic rough set based models such as Game-Theoretic Rough Sets (GTRS) and Information-Theoretic Rough Sets (ITRS) for inducing three-way decisions. An architecture of protein function classification with probabilistic rough sets based three-way decisions is proposed and explained. Experiments are carried out on a Saccharomyces cerevisiae dataset obtained from the UniProt database, with the corresponding functional classes extracted from the Gene Ontology (GO) database. The results indicate that as the level of biological information increases, the number of deferred cases is reduced while a similar level of accuracy is maintained. PMID:28234929
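The decision rule at the heart of the approach can be sketched in a few lines; the thresholds below are illustrative stand-ins for values that GTRS/ITRS would derive by optimizing game-theoretic or information-theoretic criteria:

```python
# Three-way decisions with probabilistic rough sets: accept, reject, or
# defer, leaving room for future biological information.
def three_way_decide(p, alpha=0.8, beta=0.2):
    if p >= alpha:
        return "accept"      # assign the function to the protein
    if p <= beta:
        return "reject"      # rule the function out
    return "defer"           # wait for more informative features

for p in (0.95, 0.5, 0.05):
    print(p, three_way_decide(p))
```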
Spot the match – wildlife photo-identification using information theory
Speed, Conrad W; Meekan, Mark G; Bradshaw, Corey JA
2007-01-01
Background: Effective approaches for the management and conservation of wildlife populations require a sound knowledge of population demographics, and this is often only possible through mark-recapture studies. We applied an automated spot-recognition program (I3S) for matching natural markings of wildlife that is based on a novel information-theoretic approach to incorporate matching uncertainty. Using a photo-identification database of whale sharks (Rhincodon typus) as an example case, the information criterion (IC) algorithm we developed resulted in a parsimonious ranking of potential matches of individuals in an image library. Automated matches were compared to manual-matching results to test the performance of the software and algorithm. Results: Validation of matched and non-matched images provided a threshold IC weight (approximately 0.2) below which match certainty was not assured. Most images tested were assigned correctly; however, scores for the by-eye comparison were lower than expected, possibly due to the low sample size. Increasing the horizontal angle of sharks in images considerably reduced the likelihood of a match. There was a negative linear relationship between the number of matching spot pairs and matching score, but this relationship disappeared when using the IC algorithm. Conclusion: The software and the use of easily applied information-theoretic scores of match parsimony provide a reliable and freely available method for individual identification of wildlife, with wide applications and the potential to improve mark-recapture studies without resorting to invasive marking techniques. PMID:17227581
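The abstract does not spell out the IC weights; a hedged sketch in the spirit of Akaike weights, with made-up scores and the reported 0.2 certainty cut-off:

```python
# IC weights for ranking candidate matches: w_i proportional to
# exp(-delta_i/2), where delta_i is a candidate's IC score minus the
# minimum over candidates.
import numpy as np

def ic_weights(ic_scores):
    delta = np.asarray(ic_scores) - np.min(ic_scores)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

scores = [104.2, 106.9, 112.3, 118.0]   # hypothetical IC per library image
w = ic_weights(scores)
best = int(np.argmax(w))
print(w, "match certain" if w[best] >= 0.2 else "match uncertain")
```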
Using Riemannian geometry to obtain new results on Dikin and Karmarkar methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliveira, P.; Joao, X.; Piaui, T.
1994-12-31
We are motivated by a 1990 Karmarkar paper on Riemannian geometry and interior point methods. In this talk we present three results. (1) The Karmarkar direction can be derived from the Dikin one. This is obtained by constructing a certain Z(x) representation of the null space of the unit simplex ⟨e, x⟩ = 1; the projective direction is then the image under Z(x) of the affine-scaling direction restricted to that simplex. (2) Second-order information on the Dikin and Karmarkar methods. We establish computable Hessians for each of the metrics corresponding to both directions, thus permitting the generation of "second-order" methods. (3) Dikin and Karmarkar geodesic descent methods. For those directions, we make the theoretical Luenberger geodesic descent method computable, since we are able to give explicit, very accurate expressions for the corresponding geodesics. Convergence results are given.
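A sketch of the affine-scaling (Dikin) direction from which result (1) starts, restricted to the unit simplex; illustrative only, with no step-size or convergence logic:

```python
# Dikin direction for min c^T x s.t. Ax = b, x > 0: rescale by
# X = diag(x), project the gradient onto null(AX), and map back.
import numpy as np

def dikin_direction(A, c, x):
    X = np.diag(x)
    AX = A @ X
    # projection onto null(AX): I - (AX)^T [(AX)(AX)^T]^{-1} (AX)
    P = np.eye(len(x)) - AX.T @ np.linalg.solve(AX @ AX.T, AX)
    return -X @ P @ X @ c

A = np.array([[1.0, 1.0, 1.0]])          # the unit simplex <e, x> = 1
c = np.array([1.0, 2.0, 3.0])
x = np.array([1/3, 1/3, 1/3])            # current interior point
print(dikin_direction(A, c, x))          # shifts mass toward cheap coords
```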
A Maximum Entropy Test for Evaluating Higher-Order Correlations in Spike Counts
Onken, Arno; Dragoi, Valentin; Obermayer, Klaus
2012-01-01
Evaluating the importance of higher-order correlations of neural spike counts has been notoriously hard. A large number of samples is typically required in order to estimate higher-order correlations and the resulting information theoretic quantities. In typical electrophysiology data sets with many experimental conditions, however, the number of samples in each condition is rather small. Here we describe a method that allows us to quantify evidence for higher-order correlations in exactly these cases. We construct a family of reference distributions: maximum entropy distributions, which are constrained only by marginals and by linear correlations as quantified by the Pearson correlation coefficient. We devise a Monte Carlo goodness-of-fit test, which tests - for a given divergence measure of interest - whether the experimental data lead to the rejection of the null hypothesis that they were generated by one of the reference distributions. Applying our test to artificial data shows that the effects of higher-order correlations on these divergence measures can be detected even when the number of samples is small. Subsequently, we apply our method to spike count data which were recorded with multielectrode arrays from the primary visual cortex of anesthetized cat during an adaptation experiment. Using mutual information as a divergence measure, we find that there are spike count bin sizes at which the maximum entropy hypothesis can be rejected for a substantial number of neuronal pairs. These results demonstrate that higher-order correlations can matter when estimating information theoretic quantities in V1. They also show that our test is able to detect their presence in typical in-vivo data sets, where the number of samples is too small to estimate higher-order correlations directly. PMID:22685392
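The Monte Carlo test logic can be sketched independently of the (harder) maximum-entropy fitting step; the divergence and reference sampler below are user-supplied stand-ins, and the toy usage tests against an independence reference for brevity rather than a correlation-matched one:

```python
# Skeleton of a Monte Carlo goodness-of-fit test.
import numpy as np

def mc_gof_pvalue(data, sample_ref, divergence, n_mc=1000, rng=None):
    rng = rng or np.random.default_rng(0)
    d_obs = divergence(data)
    d_null = [divergence(sample_ref(len(data), rng)) for _ in range(n_mc)]
    # p-value: fraction of null divergences at least as extreme
    return (1 + np.sum(np.asarray(d_null) >= d_obs)) / (n_mc + 1)

# Toy usage: correlated data vs an independent Gaussian reference.
rng = np.random.default_rng(1)
z = rng.normal(size=(500, 2)); z[:, 1] += 0.3 * z[:, 0]
corr = lambda d: abs(np.corrcoef(d.T)[0, 1])
ref = lambda n, r: r.normal(size=(n, 2))
print(mc_gof_pvalue(z, ref, corr, n_mc=500))   # small p: reject the null
```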
Information theoretic analysis of canny edge detection in visual communication
NASA Astrophysics Data System (ADS)
Jiang, Bo; Rahman, Zia-ur
2011-06-01
In general edge detection evaluation, edge detectors are examined, analyzed, and compared either visually or with a metric for a specific application. This analysis is usually independent of the characteristics of the image-gathering, transmission and display processes that do impact the quality of the acquired image and thus the resulting edge image. We propose a new information theoretic analysis of edge detection that unites the different components of the visual communication channel and assesses edge detection algorithms in an integrated manner based on Shannon's information theory. An edge detection algorithm is considered here to achieve high performance only if the information rate from the scene to the edge approaches the maximum possible. Thus, by holding the initial conditions of the visual communication system constant, different edge detection algorithms can be evaluated. This analysis is normally limited to linear shift-invariant filters, so in order to examine the Canny edge operator in our proposed system, we need to estimate its "power spectral density" (PSD). Since the Canny operator is non-linear and shift-variant, we perform the estimation for a set of different system environment conditions using simulations. In our paper we first introduce the PSD of the Canny operator for a range of system parameters. Then, using the estimated PSD, we assess the Canny operator using information theoretic analysis. The information-theoretic metric is also used to compare the performance of the Canny operator with other edge-detection operators. This also provides a simple tool for selecting appropriate edge-detection algorithms based on system parameters, and for adjusting their parameters to maximize information throughput.
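A sketch of the simulation idea for the nonlinear, shift-variant operator: estimate an effective PSD by averaging periodograms of Canny responses to random input fields. The input statistics and sigma are hypothetical system parameters, not the paper's conditions:

```python
# Empirical PSD of the Canny operator's output (requires scikit-image).
import numpy as np
from skimage.feature import canny

def empirical_psd(n_trials=50, size=128, sigma=1.5, rng=None):
    rng = rng or np.random.default_rng(0)
    acc = np.zeros((size, size))
    for _ in range(n_trials):
        field = rng.normal(size=(size, size))      # stand-in radiance field
        edges = canny(field, sigma=sigma).astype(float)
        acc += np.abs(np.fft.fft2(edges - edges.mean()))**2
    return acc / (n_trials * size**2)

psd = empirical_psd()
print(psd.shape, psd.sum())
```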
A framework for characterizing eHealth literacy demands and barriers.
Chan, Connie V; Kaufman, David R
2011-11-17
Consumer eHealth interventions are of growing importance in the individual management of health and health behaviors. However, a range of access, resource, and skill barriers prevent health care consumers from fully engaging in and benefiting from the spectrum of eHealth interventions. Consumers may engage in a range of eHealth tasks, such as participating in health discussion forums and entering information into a personal health record. eHealth literacy denotes a set of skills and knowledge that are essential for productive interactions with technology-based health tools, such as proficiency in information retrieval strategies and in communicating health concepts effectively. We propose a theoretical and methodological framework for characterizing the complexity of eHealth tasks, which can be used to diagnose and describe literacy barriers and inform the development of solution strategies. We adapted and integrated two existing theoretical models relevant to the analysis of eHealth literacy into a single framework to systematically categorize and describe task demands and user performance on tasks needed by health care consumers in the information age. The method derived from the framework is applied to (1) code task demands using a cognitive task analysis, and (2) code user performance on tasks. The framework and method are applied to the analysis of a Web-based consumer eHealth task with information-seeking and decision-making demands. We present results from the in-depth analysis of the task performance of a single user as well as of 20 users on the same task, illustrating both the detailed analysis and the aggregate measures obtained, and the potential analyses that can be performed using this method. The analysis shows that the framework can be used to classify task demands as well as the barriers encountered in user performance of the tasks. Our approach can be used to (1) characterize the challenges confronted by participants in performing the tasks, (2) determine the extent to which application of the framework to the cognitive task analysis can predict and explain the problems encountered by participants, and (3) inform revisions to the framework to increase the accuracy of predictions. The results of this illustrative application suggest that the framework is useful for characterizing task complexity and for diagnosing and explaining barriers encountered in task completion. The framework and analytic approach can be a potentially powerful generative research platform to inform the development of rigorous eHealth examination and design instruments, such as to assess eHealth competence, to design and evaluate consumer eHealth tools, and to develop an eHealth curriculum.
ERIC Educational Resources Information Center
All-Union Inst. for Scientific and Technical Information, Moscow (USSR).
Reports given before the Committee on "Research on the Theoretical Basis of Information" of the International Federation for Documentation (FID/RI) are presented unaltered and unabridged in English or in Russian -- the language of their presentation. Each report is accompanied by an English or Russian resume. Generally, only original…
Information Diffusion in Facebook-Like Social Networks Under Information Overload
NASA Astrophysics Data System (ADS)
Li, Pei; Xing, Kai; Wang, Dapeng; Zhang, Xin; Wang, Hui
2013-07-01
Research on social networks has received remarkable attention, since many people use social networks to broadcast information and stay connected with their friends. However, due to the information overload in social networks, it becomes increasingly difficult for users to find useful information. This paper takes Facebook-like social networks into account, and models the process of information diffusion under information overload. The term view scope is introduced to model the user information-processing capability under information overload, and the average number of times a message appears in view scopes after it is generated is proposed to characterize the information diffusion efficiency. Through theoretical analysis, we find that factors such as network structure and view scope number have no impact on the information diffusion efficiency, which is a surprising result. To verify the results, we conduct simulations; the simulation results are in close agreement with the theoretical analysis.
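A toy simulation in the spirit of the setting, under assumed dynamics rather than the paper's exact formulation: users post messages, a reader's view scope holds only the L most recent messages from followees, and efficiency is measured as distinct view-scope appearances per message. All parameters are illustrative:

```python
import random
from collections import defaultdict

def simulate(n_users=100, followees=8, L=5, steps=50000, p_post=0.2, seed=1):
    rng = random.Random(seed)
    follows = [set(rng.sample([u for u in range(n_users) if u != v], followees))
               for v in range(n_users)]
    feed = defaultdict(list)            # user -> message ids, oldest first
    seen = set()                        # (user, message) pairs already viewed
    n_msgs = 0
    for _ in range(steps):
        v = rng.randrange(n_users)
        if rng.random() < p_post:       # v posts; message enters followers' feeds
            n_msgs += 1
            for u in range(n_users):
                if v in follows[u]:
                    feed[u].append(n_msgs)
        else:                           # v reads: only the L newest are visible
            for m in feed[v][-L:]:
                seen.add((v, m))
    return len(seen) / max(n_msgs, 1)   # appearances per message

print(simulate())
```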
Koch, Sven H; Weir, Charlene; Haar, Maral; Staggers, Nancy; Agutter, Jim; Görges, Matthias; Westenskow, Dwayne
2012-01-01
Fatal errors can occur in intensive care units (ICUs). Researchers claim that information integration at the bedside may improve nurses' situation awareness (SA) of patients and decrease errors. However, it is unclear which information should be integrated and in what form. Our research uses the theory of SA to analyze the types of tasks and their associated information gaps. We aimed to provide recommendations for integrated, consolidated information displays to improve nurses' SA. Systematic observation methods were used to follow 19 ICU nurses for 38 hours in 3 clinical practice settings. Storyboard methods and concept mapping helped to categorize the observed tasks, the associated information needs, and the information gaps of the most frequent tasks by SA level. Consensus and discussion within the research team were used to propose recommendations for improving information displays at the bedside based on the identified information deficits. Nurses performed 46 different tasks at a rate of 23.4 tasks per hour. The information needed to perform the most common tasks was often inaccessible, difficult to see at a distance, or located on multiple monitoring devices. Current devices at the ICU bedside do not adequately support a nurse's information-gathering activities. Medication management was the most frequent category of tasks. Information gaps were present at all levels of SA and across most of the tasks. Using a theoretical model to understand information gaps can aid in designing functional requirements. Integrated information that enhances nurses' SA may decrease errors and improve patient safety in the future.
NASA Astrophysics Data System (ADS)
Tada, T.; Cho, I.; Shinozaki, Y.
2005-12-01
We have invented a Two-Radius (TR) circular array method of microtremor exploration, an algorithm that enables the estimation of phase velocities of Love waves by analyzing horizontal-component records of microtremors obtained with an array of seismic sensors placed around circumferences of two different radii. The data recording may be done either simultaneously around the two circles or in two separate sessions with sensors distributed around each circle. Both Rayleigh and Love waves are present in the horizontal components of microtremors, but in the data processing of our TR method, all information on the Rayleigh waves is cancelled out, and information on the Love waves alone is left to be analyzed. Also, unlike the popularly used frequency-wavenumber spectral (F-K) method, our TR method does not resolve individual plane-wave components arriving from different directions and analyze their "vector" phase velocities, but instead directly evaluates their "scalar" phase velocities --- phase velocities that contain no information on the arrival direction of waves --- through a mathematical procedure which involves azimuthal averaging. The latter feature leads us to expect that, with our TR method, it is possible to conduct phase velocity analysis with smaller numbers of sensors, with higher stability, and up to longer-wavelength ranges than with the F-K method. With a view to investigating the capabilities and limitations of our TR method in practical application to real data, we deployed circular seismic arrays of different sizes at a test site in Japan where the underground structure is well documented through geophysical exploration. Ten seismic sensors were placed equidistantly around two circumferences, five around each circle, with varying combinations of radii ranging from several meters to several tens of meters, and simultaneous records of microtremors around circles of two different radii were analyzed with our TR method to produce estimates of the phase velocities of Love waves. The estimates were then checked against "model" phase velocities derived from theoretical calculations. We also checked the estimated spectral ratios against the "model" spectral ratios, where by "spectral ratio" we mean an intermediary quantity that is calculated from observed records prior to the estimation of the phase velocity in the data analysis procedure of our TR method. In most cases, the estimated phase velocities coincided well with the model phase velocities within a wavelength range extending roughly from 3r to 6r (r: array radius). It was found that, outside the upper and lower resolution limits of the TR method, the discrepancy between the estimated and model phase velocities, as well as that between the estimated and model spectral ratios, was accounted for satisfactorily by theoretical consideration of three factors: the presence of higher surface-wave modes, directional aliasing effects related to the finite number of sensors in the seismic array, and the presence of incoherent noise.
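The TR algebra itself is not reproduced in the abstract; as a hedged stand-in, a sketch of the closely related SPAC-style inversion behind circular-array methods, where the azimuthally averaged coherence at radius r follows J0(ωr/c), so phase velocity can be recovered frequency by frequency on the first monotonic branch. Synthetic data, illustrative parameters:

```python
import numpy as np
from scipy.special import j0
from scipy.optimize import brentq

r = 20.0                                  # array radius, m
freqs = np.linspace(2.0, 8.0, 7)          # Hz
c_true = 400.0 + 30.0 * freqs             # hypothetical dispersion, m/s
coh = j0(2 * np.pi * freqs * r / c_true)  # "observed" averaged coherence

for f, rho in zip(freqs, coh):
    # solve j0(2 pi f r / c) = rho with argument on (0, 2.4048)
    g = lambda c: j0(2 * np.pi * f * r / c) - rho
    c_est = brentq(g, 2 * np.pi * f * r / 2.4048, 1e5)
    print(f"{f:.1f} Hz: c = {c_est:.0f} m/s")
```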
Information theoretical assessment of image gathering and coding for digital restoration
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; John, Sarah; Reichenbach, Stephen E.
1990-01-01
The process of image-gathering, coding, and restoration is presently treated in its entirety rather than as a catenation of isolated tasks, on the basis of the relationship between the spectral information density of a transmitted signal and the restorability of images from the signal. This 'information-theoretic' assessment accounts for the information density and efficiency of the acquired signal as a function of the image-gathering system's design and radiance-field statistics, as well as for the information efficiency and data compression that are obtainable through the combination of image gathering with coding to reduce signal redundancy. It is found that high information efficiency is achievable only through minimization of image-gathering degradation as well as signal redundancy.
Spreading dynamics of an e-commerce preferential information model on scale-free networks
NASA Astrophysics Data System (ADS)
Wan, Chen; Li, Tao; Guan, Zhi-Hong; Wang, Yuanmei; Liu, Xiongding
2017-02-01
In order to study the influence of the preferential degree and the heterogeneity of underlying networks on the spread of preferential e-commerce information, we propose a novel susceptible-infected-beneficial model based on scale-free networks. The spreading dynamics of the preferential information are analyzed in detail using the mean-field theory. We determine the basic reproductive number and equilibria. The theoretical analysis indicates that the basic reproductive number depends mainly on the preferential degree and the topology of the underlying networks. We prove the global stability of the information-elimination equilibrium. The permanence of preferential information and the global attractivity of the information-prevailing equilibrium are also studied in detail. Some numerical simulations are presented to verify the theoretical results.
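Not the SIB model itself, but a sketch of how underlying-network heterogeneity enters such epidemic thresholds, via the standard annealed-network result R0 = (λ/μ)⟨k²⟩/⟨k⟩ for an SIS-type process; degree samples and rates are illustrative:

```python
import numpy as np

def r0_sis(degrees, lam=0.05, mu=0.5):
    k = np.asarray(degrees, dtype=float)
    return (lam / mu) * np.mean(k**2) / np.mean(k)

rng = np.random.default_rng(0)
homogeneous = rng.poisson(8, 10000) + 1                   # narrow degrees
scale_free = np.round(rng.pareto(1.5, 10000) + 1).astype(int) * 4  # heavy tail
print(r0_sis(homogeneous), r0_sis(scale_free))            # heterogeneity raises R0
```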
Judging nursing information on the WWW: a theoretical understanding.
Cader, Raffik; Campbell, Steve; Watson, Don
2009-09-01
This paper is a report of a study of the judgement processes nurses use when evaluating World Wide Web information related to nursing practice. The World Wide Web has increased the global accessibility of online health information. However, the variable nature of the quality of World Wide Web information and its perceived level of reliability may lead to misinformation. This makes demands on healthcare professionals, and on nurses in particular, to ensure that health information of reliable quality is selected for use in practice. A grounded theory approach was adopted. Semi-structured interviews and focus groups were used to collect data, between 2004 and 2005, from 20 nurses undertaking a postqualification graduate course at a university and 13 nurses from a local hospital in the United Kingdom. A theoretical framework emerged that gave insight into the judgement process nurses use when evaluating World Wide Web information. Participants broke the judgement process down into specific tasks. In addition, they used tacit, process and propositional knowledge and intuition, quasi-rational cognition and analysis to undertake these tasks. World Wide Web information cues, time available and nurses' critical skills were influencing factors in their judgement process. Addressing the issue of quality and reliability associated with World Wide Web information is a global challenge. This theoretical framework could contribute towards meeting this challenge.
The Impact of Information Culture on Patient Safety Outcomes
Mikkonen, Santtu; Saranto, Kaija; Bates, David W.
2017-01-01
Background: An organization’s information culture and information management practices create conditions for processing patient information in hospitals. Information management incidents are failures that could lead to adverse events for the patient if they are not detected. Objectives: To test a theoretical model that links information culture in acute care hospitals to information management incidents and patient safety outcomes. Methods: Reason’s model for the stages of development of organizational accidents was applied. Study data were collected from a cross-sectional survey of 909 RNs who work in medical or surgical units at 32 acute care hospitals in Finland. Structural equation modeling was used to assess how well the hypothesized model fit the study data. Results: Fit indices indicated a good fit for the model. In total, 18 of the 32 paths tested were statistically significant. Documentation errors had the strongest total effect on patient safety outcomes. Organizational guidance positively affected information availability and utilization of electronic patient records, whereas the latter had the strongest total effect on the reduction of information delays. Conclusions: Patient safety outcomes are associated with information management incidents and information culture. Further, the dimensions of the information culture create work conditions that generate errors in hospitals. PMID:28272647
Hop limited epidemic-like information spreading in mobile social networks with selfish nodes
NASA Astrophysics Data System (ADS)
Wu, Yahui; Deng, Su; Huang, Hongbin
2013-07-01
Similar to epidemics, information can be transmitted directly among users in mobile social networks. Different from epidemics, we can control the spreading process by adjusting the corresponding parameters (e.g., hop count) directly. This paper proposes a theoretical model to evaluate the performance of an epidemic-like spreading algorithm in which the maximal hop count of the information is limited. In addition, our model can be used to evaluate the impact of users’ selfish behavior. Simulations confirm the accuracy of our theoretical model. Numerical results show that the information hop count can have an important impact on spreading performance, and that the impact of selfish behavior is related to the information hop count.
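A toy of the hop-limited mechanism with selfish nodes, under an assumed random pairwise contact process (a common simplification for mobile social networks); all parameters are illustrative:

```python
import random

def spread(n=500, hops=3, p_selfish=0.3, contacts=20000, seed=7):
    rng = random.Random(seed)
    selfish = [rng.random() < p_selfish for _ in range(n)]
    selfish[0] = False                 # let the source relay
    budget = {0: hops}                 # node -> remaining hops of its copy
    for _ in range(contacts):
        a, b = rng.randrange(n), rng.randrange(n)
        for src, dst in ((a, b), (b, a)):
            h = budget.get(src, 0)
            if h > 0 and dst not in budget and not selfish[src]:
                budget[dst] = h - 1    # forwarded copy has one less hop
    return len(budget) / n             # delivery ratio

for hops in (1, 2, 4, 8):
    print(hops, spread(hops=hops))     # larger hop budgets reach more nodes
```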
A Monte-Carlo game theoretic approach for Multi-Criteria Decision Making under uncertainty
NASA Astrophysics Data System (ADS)
Madani, Kaveh; Lund, Jay R.
2011-05-01
Game theory provides a useful framework for studying Multi-Criteria Decision Making problems. This paper suggests modeling Multi-Criteria Decision Making problems as strategic games and solving them using non-cooperative game theory concepts. The suggested method can be used both to prescribe non-dominated solutions and to predict the outcome of a decision making problem. Non-cooperative stability definitions for solving the games allow consideration of non-cooperative behaviors, often neglected by other methods which assume perfect cooperation among decision makers. To deal with the uncertainty in input variables, a Monte-Carlo Game Theory (MCGT) approach is suggested, which maps the stochastic problem into many deterministic strategic games. The games are solved using non-cooperative stability definitions, and the results reflect the possible effects of input uncertainty on outcomes. The method can handle multi-criteria multi-decision-maker problems with uncertainty. The suggested method does not require criteria weighting, a compound decision objective, or accurate quantitative (cardinal) information, as it simplifies the decision analysis by solving problems based on qualitative (ordinal) information, substantially reducing the computational burden. The MCGT method is applied to analyze California's Sacramento-San Joaquin Delta problem. The suggested method provides insights, identifies non-dominated alternatives, and predicts likely decision outcomes.
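A compact sketch of the MCGT idea for a hypothetical two-decision-maker problem: sample the uncertain payoffs, find pure Nash equilibria of each sampled game, and tally how often each alternative is stable. Real applications use more decision makers and stability definitions beyond Nash:

```python
import numpy as np

def pure_nash(p1, p2):
    eq = []
    for i in range(p1.shape[0]):
        for j in range(p1.shape[1]):
            if p1[i, j] == p1[:, j].max() and p2[i, j] == p2[i, :].max():
                eq.append((i, j))
    return eq

rng = np.random.default_rng(0)
counts = np.zeros((2, 2))
for _ in range(5000):
    p1 = rng.uniform([[0, 2], [1, 3]], [[2, 4], [3, 5]])   # sampled payoffs
    p2 = rng.uniform([[1, 0], [2, 1]], [[3, 2], [4, 3]])
    for i, j in pure_nash(p1, p2):
        counts[i, j] += 1
print(counts / counts.sum())   # how often each outcome is an equilibrium
```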
Linkage disequilibrium interval mapping of quantitative trait loci.
Boitard, Simon; Abdallah, Jihad; de Rochambeau, Hubert; Cierco-Ayrolles, Christine; Mangin, Brigitte
2006-03-16
For many years gene mapping studies have been performed through linkage analyses based on pedigree data. Recently, linkage disequilibrium methods based on unrelated individuals have been advocated as powerful tools to refine estimates of gene location. Many strategies have been proposed to deal with simply inherited disease traits. However, locating quantitative trait loci is statistically more challenging and considerable research is needed to provide robust and computationally efficient methods. Under a three-locus Wright-Fisher model, we derived approximate expressions for the expected haplotype frequencies in a population. We considered haplotypes comprising one trait locus and two flanking markers. Using these theoretical expressions, we built a likelihood-maximization method, called HAPim, for estimating the location of a quantitative trait locus. For each postulated position, the method only requires information from the two flanking markers. Over a wide range of simulation scenarios it was found to be more accurate than a two-marker composite likelihood method. It also performed as well as identity by descent methods, whilst being valuable in a wider range of populations. Our method makes efficient use of marker information, and can be valuable for fine mapping purposes. Its performance is increased if multiallelic markers are available. Several improvements can be developed to account for more complex evolution scenarios or provide robust confidence intervals for the location estimates.
An integrated conceptual framework for evaluating and improving 'understanding' in informed consent.
Bossert, Sabine; Strech, Daniel
2017-10-17
The development of understandable informed consent (IC) documents has proven to be one of the most important challenges in research with humans as well as in healthcare settings. Therefore, evaluating and improving understanding has been of increasing interest for empirical research on IC. However, several conceptual and practical challenges for the development of understandable IC documents remain unresolved. In this paper, we will outline and systematize some of these challenges. On the basis of our own experiences in empirical user testing of IC documents as well as the relevant literature on understanding in IC, we propose an integrated conceptual model for the development of understandable IC documents. The proposed conceptual model integrates different methods for the participatory improvement of written information, including IC, as well as quantitative methods for measuring understanding in IC. In most IC processes, understandable written information is an important prerequisite for valid IC. To improve the quality of IC documents, a conceptual model for participatory procedures of testing, revising, and retesting can be applied. However, the model presented in this paper needs further theoretical and empirical elaboration and clarification of several conceptual and practical challenges.
Efficient Transfer Entropy Analysis of Non-Stationary Neural Time Series
Vicente, Raul; Díaz-Pernas, Francisco J.; Wibral, Michael
2014-01-01
Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of processes to allow pooling of observations over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation on a graphics processing unit to handle the most computationally demanding aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems. PMID:25068489
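The ensemble idea, pooling over trials at a fixed time rather than over time, can be illustrated with a plug-in binned estimator on binary data; the paper's estimator (Kraskov-type, GPU-accelerated) is far more sophisticated:

```python
# TE(X -> Y) = I(Y_{t+1}; X_t | Y_t), estimated across trials at one time.
import numpy as np

def te_ensemble(x, y, t):
    # x, y: arrays of shape (n_trials, n_time), binary-valued
    a, b, c = y[:, t + 1], x[:, t], y[:, t]       # pooled across trials
    te = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            for vc in (0, 1):
                p_abc = np.mean((a == va) & (b == vb) & (c == vc))
                p_bc = np.mean((b == vb) & (c == vc))
                p_ac = np.mean((a == va) & (c == vc))
                p_c = np.mean(c == vc)
                if p_abc > 0:
                    te += p_abc * np.log2(p_abc * p_c / (p_ac * p_bc))
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, (1000, 50))
y = np.roll(x, 1, axis=1)                 # y copies x with one-step lag
print(te_ensemble(x, y, t=10))            # close to 1 bit
```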
Imaging mRNA In Vivo, from Birth to Death.
Tutucci, Evelina; Livingston, Nathan M; Singer, Robert H; Wu, Bin
2018-05-20
RNA is the fundamental information transfer system in the cell. The ability to follow single messenger RNAs (mRNAs) from transcription to degradation with fluorescent probes gives quantitative information about how information is transferred from DNA to proteins. This review focuses on the latest technological developments in the field of single-mRNA detection and their use in studying gene expression in both fixed and live cells. By describing the application of these imaging tools, we follow the journey of mRNA from transcription to decay in single cells, with single-molecule resolution. We review current theoretical models for describing transcription and translation that were generated by single-molecule and single-cell studies. These methods provide a basis for studying how single-molecule interactions generate phenotypes, fundamentally changing our understanding of gene expression regulation.
Decoding DNA, RNA and peptides with quantum tunnelling
NASA Astrophysics Data System (ADS)
di Ventra, Massimiliano; Taniguchi, Masateru
2016-02-01
Drugs and treatments could be precisely tailored to an individual patient by extracting their cellular- and molecular-level information. For this approach to be feasible on a global scale, however, information on complete genomes (DNA), transcriptomes (RNA) and proteomes (all proteins) needs to be obtained quickly and at low cost. Quantum mechanical phenomena could potentially be of value here, because the biological information needs to be decoded at an atomic level and quantum tunnelling has recently been shown to be able to differentiate single nucleobases and amino acids in short sequences. Here, we review the different approaches to using quantum tunnelling for sequencing, highlighting the theoretical background to the method and the experimental capabilities demonstrated to date. We also explore the potential advantages of the approach and the technical challenges that must be addressed to deliver practical quantum sequencing devices.
NASA Astrophysics Data System (ADS)
Çırak, Çağrı; Demir, Selçuk; Ucun, Fatih; Çubuk, Osman
2011-08-01
Experimental and theoretical vibrational spectra of β-2-aminopyridinium dihydrogenphosphate (β-2APDP) have been investigated. The FT-IR spectrum of β-2APDP was recorded in the region 4000-400 cm-1. The optimized molecular structure and theoretical vibrational frequencies of β-2APDP have been investigated using the ab initio Hartree-Fock (HF) and density functional B3LYP methods with the 6-311++G(d,p) basis set. The optimized geometric parameters (bond lengths and bond angles) and theoretical frequencies were compared with the corresponding experimental data and found to agree well with each other. All assignments of the theoretical frequencies were performed by potential energy distributions using the VEDA 4 program. Furthermore, the scale factors used were obtained from the ratio of the frequency values of the strongest peaks in the experimental and theoretical IR spectra. From the results it was concluded that the B3LYP method is superior to the HF method for the vibrational frequencies.
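The scaling step described above, in brief; the peak values are illustrative, not the paper's data:

```python
# Single scale factor: ratio of the strongest experimental band to the
# strongest computed band, applied to all computed frequencies.
exp_strongest = 1092.0        # cm^-1, strongest FT-IR band (hypothetical)
calc_strongest = 1135.4       # cm^-1, strongest computed band (hypothetical)

scale = exp_strongest / calc_strongest
calc_freqs = [3120.5, 1642.8, 1135.4, 980.2]
scaled = [round(scale * f, 1) for f in calc_freqs]
print(scale, scaled)
```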
The Ulam Index: Methods of Theoretical Computer Science Help in Identifying Chemical Substances
NASA Technical Reports Server (NTRS)
Beltran, Adriana; Salvador, James
1997-01-01
In this paper, we show how methods developed for solving graph isomorphism, a problem from theoretical computer science, are used in structural chemistry. We also discuss potential applications of these methods to exobiology: the search for life outside Earth.
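The Ulam index itself is not detailed here; as a stand-in, a sketch of how a graph-isomorphism-style invariant separates molecular graphs, using the Weisfeiler-Lehman hash from networkx (requires networkx >= 2.5). Identical hashes mean possibly isomorphic; different hashes mean certainly distinct substances:

```python
import networkx as nx

def mol_graph(edges, elements):
    g = nx.Graph(edges)
    nx.set_node_attributes(g, elements, "element")
    return g

# n-butane vs isobutane: same atoms, different connectivity
butane = mol_graph([(0, 1), (1, 2), (2, 3)], {i: "C" for i in range(4)})
isobutane = mol_graph([(0, 1), (0, 2), (0, 3)], {i: "C" for i in range(4)})

h1 = nx.weisfeiler_lehman_graph_hash(butane, node_attr="element")
h2 = nx.weisfeiler_lehman_graph_hash(isobutane, node_attr="element")
print(h1 == h2)   # False: the invariant separates the two isomers
```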
A note on probabilistic models over strings: the linear algebra approach.
Bouchard-Côté, Alexandre
2013-12-01
Probabilistic models over strings have played a key role in developing methods that take into consideration indels as phylogenetically informative events. There is an extensive literature on using automata and transducers on phylogenies to do inference on these probabilistic models, in which an important theoretical question is the complexity of computing the normalization of a class of string-valued graphical models. This question has been investigated using tools from combinatorics, dynamic programming, and graph theory, and has practical applications in Bayesian phylogenetics. In this work, we revisit this theoretical question from a different point of view, based on linear algebra. The main contribution is a set of results based on this linear algebra view that facilitate the analysis and design of inference algorithms on string-valued graphical models. As an illustration, we use this method to give a new elementary proof of a known result on the complexity of inference on the "TKF91" model, a well-known probabilistic model over strings. Compared to previous work, our proof method is easier to extend to other models, since it relies on a novel weak condition, triangular transducers, which is easy to establish in practice. The linear algebra view provides a concise way of describing transducer algorithms and their compositions, and opens the possibility of transferring fast linear algebra libraries (for example, based on GPUs), as well as low-rank matrix approximation methods, to string-valued inference problems.
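The core of the linear algebra view can be stated in one formula: the total weight of a weighted automaton over all finite strings is a geometric series, alpha^T (I - M)^{-1} beta, valid when the spectral radius of M is below one. A toy instance with made-up weights (TKF91-style transducers give structured M):

```python
import numpy as np

alpha = np.array([1.0, 0.0])            # initial state weights
M = np.array([[0.3, 0.2],               # per-symbol transition weights,
              [0.1, 0.4]])              # summed over the alphabet
beta = np.array([0.5, 0.5])             # stopping weights

assert np.max(np.abs(np.linalg.eigvals(M))) < 1
Z = alpha @ np.linalg.solve(np.eye(2) - M, beta)
print(Z)   # normalization constant of the string distribution
```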
Learning Quantitative Sequence-Function Relationships from Massively Parallel Experiments
NASA Astrophysics Data System (ADS)
Atwal, Gurinder S.; Kinney, Justin B.
2016-03-01
A fundamental aspect of biological information processing is the ubiquity of sequence-function relationships—functions that map the sequence of DNA, RNA, or protein to a biochemically relevant activity. Most sequence-function relationships in biology are quantitative, but only recently have experimental techniques for effectively measuring these relationships been developed. The advent of such "massively parallel" experiments presents an exciting opportunity for the concepts and methods of statistical physics to inform the study of biological systems. After reviewing these recent experimental advances, we focus on the problem of how to infer parametric models of sequence-function relationships from the data produced by these experiments. Specifically, we retrace and extend recent theoretical work showing that inference based on mutual information, not the standard likelihood-based approach, is often necessary for accurately learning the parameters of these models. Closely connected with this result is the emergence of "diffeomorphic modes"—directions in parameter space that are far less constrained by data than likelihood-based inference would suggest. Analogous to Goldstone modes in physics, diffeomorphic modes arise from an arbitrarily broken symmetry of the inference problem. An analytically tractable model of a massively parallel experiment is then described, providing an explicit demonstration of these fundamental aspects of statistical inference. This paper concludes with an outlook on the theoretical and computational challenges currently facing studies of quantitative sequence-function relationships.
Finite Element Vibration Modeling and Experimental Validation for an Aircraft Engine Casing
NASA Astrophysics Data System (ADS)
Rabbitt, Christopher
This thesis presents a procedure for the development and validation of a theoretical vibration model, applies this procedure to a pair of aircraft engine casings, and compares select parameters from experimental testing of those casings to those from a theoretical model using the Modal Assurance Criterion (MAC) and linear regression coefficients. A novel method of determining the optimal MAC between axisymmetric results is developed and employed. It is concluded that the dynamic finite element models developed as part of this research are fully capable of modelling the modal parameters within the frequency range of interest. Confidence intervals calculated in this research for correlation coefficients provide important information regarding the reliability of predictions, and it is recommended that these intervals be calculated for all comparable coefficients. The procedure outlined for aligning mode shapes around an axis of symmetry proved useful, and the results are promising for the development of further optimization techniques.
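For reference, the criterion used in these comparisons: a normalized squared inner product between mode-shape vectors, equal to 1 for identical shapes and 0 for orthogonal ones. The vectors below are illustrative:

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors."""
    num = np.abs(np.vdot(phi_a, phi_b))**2
    return num / (np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real)

test = np.array([1.0, 0.8, -0.3, -1.1])     # experimental mode shape
model = np.array([0.9, 0.85, -0.25, -1.0])  # finite element mode shape
print(mac(test, model))   # near 1 indicates well-correlated mode shapes
```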
Dunsmoor, Joseph E.; Niv, Yael; Daw, Nathaniel; Phelps, Elizabeth A.
2015-01-01
Extinction serves as the leading theoretical framework and experimental model to describe how learned behaviors diminish through absence of anticipated reinforcement. In the past decade, extinction has moved beyond the realm of associative learning theory and behavioral experimentation in animals and has become a topic of considerable interest in the neuroscience of learning, memory, and emotion. Here, we review research and theories of extinction, both as a learning process and as a behavioral technique, and consider whether traditional understandings warrant a re-examination. We discuss the neurobiology, cognitive factors, and major computational theories, and revisit the predominant view that extinction results in new learning that interferes with expression of the original memory. Additionally, we reconsider the limitations of extinction as a technique to prevent the relapse of maladaptive behavior, and discuss novel approaches, informed by contemporary theoretical advances, that augment traditional extinction methods to target and potentially alter maladaptive memories. PMID:26447572
South, Susan C.; Hamdi, Nayla; Krueger, Robert F.
2015-01-01
For more than a decade, biometric moderation models have been used to examine whether genetic and environmental influences on individual differences might vary within the population. These quantitative gene × environment interaction (G×E) models not only have the potential to elucidate when genetic and environmental influences on a phenotype might differ, but why, as they provide an empirical test of several theoretical paradigms that serve as useful heuristics to explain etiology—diathesis-stress, bioecological, differential susceptibility, and social control. In the current manuscript, we review how these developmental theories align with different patterns of findings from statistical models of gene-environment interplay. We then describe the extant empirical evidence, using work by our own research group and others, to lay out genetically-informative plausible accounts of how phenotypes related to social inequality—physical health and cognition—might relate to these theoretical models. PMID:26426103
Noninformative prior in the quantum statistical model of pure states
NASA Astrophysics Data System (ADS)
Tanaka, Fuyuhiko
2012-06-01
In the present paper, we consider a suitable definition of a noninformative prior on the quantum statistical model of pure states. While the full pure-states model is invariant under unitary rotation and admits the Haar measure, restricted models, which we often see in quantum channel estimation and quantum process tomography, have less symmetry and no compelling rationale for any particular choice. We adopt a game-theoretic approach that is applicable to classical Bayesian statistics and yields a noninformative prior for a general class of probability distributions. We define the quantum detection game and show that noninformative priors exist for a general class of pure-states models. Theoretically, this gives one way to represent ignorance about a quantum system known only through partial information. Practically, our method provides a default distribution on the model so that the Bayesian technique can be used in quantum-state tomography with small samples.
Single Cell Proteomics in Biomedicine: High-dimensional Data Acquisition, Visualization and Analysis
Su, Yapeng; Shi, Qihui; Wei, Wei
2017-01-01
New insights into cellular heterogeneity in the last decade have provoked the development of a variety of single cell omics tools at a lightning pace. The resultant high-dimensional single cell data generated by these tools require new theoretical approaches and analytical algorithms for effective visualization and interpretation. In this review, we briefly survey the state-of-the-art single cell proteomic tools, with a particular focus on data acquisition and quantification, followed by an elaboration of a number of statistical and computational approaches developed to date for dissecting the high-dimensional single cell data. The underlying assumptions, unique features, and limitations of the analytical methods, along with the designated biological questions they seek to answer, will be discussed. Particular attention will be given to those information theoretical approaches that are anchored in a set of first principles of physics and can yield detailed (and often surprising) predictions. PMID:28128880
Similarity Theory of Withdrawn Water Temperature Experiment
2015-01-01
Selective withdrawal from a thermally stratified reservoir has been widely utilized in managing reservoir water withdrawal. Besides theoretical analysis and numerical simulation, model tests are also necessary in studying the temperature of withdrawn water. However, information on the similarity theory of the withdrawn water temperature model remains lacking. Considering the flow features of selective withdrawal, the similarity theory of the withdrawn water temperature model was analyzed theoretically based on the modification of the governing equations, the Boussinesq approximation, and some simplifications. The similarity conditions between the model and the prototype are suggested, and the conversion of withdrawn water temperature between the model and the prototype is proposed. Meanwhile, the fundamental theory of temperature distribution conversion is proposed for the first time; it can significantly improve experimental efficiency when the basic temperature of the model differs from that of the prototype. Based on the similarity theory, an experiment was performed on the withdrawn water temperature, which was verified by a numerical method. PMID:26065020
Correlation estimation and performance optimization for distributed image compression
NASA Astrophysics Data System (ADS)
He, Zhihai; Cao, Lei; Cheng, Hui
2006-01-01
Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
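Why Gray mapping helps bit-plane coding can be checked in a few lines: neighboring quantizer indices differ in a single Gray bit, so each bit plane agrees more often with noisy side information than under natural binary. A toy check with illustrative parameters:

```python
import numpy as np

def gray(v):
    """Binary-reflected Gray code of integer array v."""
    return v ^ (v >> 1)

rng = np.random.default_rng(0)
x = rng.integers(0, 256, 100000)                      # source quantizer indices
y = np.clip(x + rng.integers(-2, 3, x.size), 0, 255)  # noisy side information

for name, fn in (("binary", lambda v: v), ("gray", gray)):
    bx, by = fn(x), fn(y)
    agree = [np.mean(((bx >> k) & 1) == ((by >> k) & 1)) for k in range(8)]
    print(name, [round(float(a), 3) for a in agree])   # per-plane agreement
```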
Precision measurement of transition matrix elements via light shift cancellation.
Herold, C D; Vaidya, V D; Li, X; Rolston, S L; Porto, J V; Safronova, M S
2012-12-14
We present a method for accurate determination of atomic transition matrix elements at the 10^-3 level. Measurements of the ac Stark (light) shift around "magic-zero" wavelengths, where the light shift vanishes, provide precise constraints on the matrix elements. We make the first measurement of the 5s-6p matrix elements in rubidium by measuring the light shift around the 421 and 423 nm zeros through diffraction of a condensate off a sequence of standing wave pulses. In conjunction with existing theoretical and experimental data, we find 0.3235(9) ea_0 and 0.5230(8) ea_0 for the 5s-6p_1/2 and 5s-6p_3/2 elements, respectively, an order of magnitude more accurate than the best theoretical values. This technique can provide needed, accurate matrix elements for many atoms, including those used in atomic clocks, tests of fundamental symmetries, and quantum information.
Problem-based learning in optical engineering studies
NASA Astrophysics Data System (ADS)
Voznesenskaya, Anna
2016-09-01
Nowadays, problem-based learning (PBL) is one of the most promising educational technologies. PBL is based on evaluating a student's learning outcomes, both professional and personal, instead of the traditional evaluation of theoretical knowledge and selected practical skills. Such an approach requires changes in curriculum development: projects (cases) imitating real tasks from professional life should be introduced. These cases should include a problem summary with the necessary theoretical description, charts, graphs, information sources, etc., a task to implement, and evaluation indicators and criteria. Often these cases are evaluated with the assessment-center method. To motivate students, they can be divided into groups that compete on the given task. While PBL looks easy to implement in social, economic or teaching fields, it is considerably more complicated in engineering studies. Examples of cases in first-cycle optical engineering studies are shown in this paper.
Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993
NASA Technical Reports Server (NTRS)
Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)
1993-01-01
Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a hammering neural network, edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low vision, image warping, spatial transformation architectures, automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, and the effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on WT, wavelet analysis of global warming, use of the WT for signal detection, perfect reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, and number-theoretic coding for iconic systems.
Neighborhood Discriminant Hashing for Large-Scale Image Retrieval.
Tang, Jinhui; Li, Zechao; Wang, Meng; Zhao, Ruizhen
2015-09-01
With the proliferation of large-scale community-contributed images, hashing-based approximate nearest neighbor search in huge databases has aroused considerable interest in the computer vision and multimedia fields in recent years because of its computational and memory efficiency. In this paper, we propose a novel hashing method named neighborhood discriminant hashing (NDH) to implement approximate similarity search. Different from previous work, we propose to learn a discriminant hashing function by exploiting local discriminative information, i.e., the labels of a sample can be inherited from the neighbor samples it selects. The hashing function is expected to be orthogonal to avoid redundancy in the learned hashing bits as much as possible, while an information theoretic regularization is jointly exploited using the maximum entropy principle. As a consequence, the learned hashing function is compact and nonredundant among bits, while each bit is highly informative. Extensive experiments are carried out on four publicly available data sets, and the comparison results demonstrate that the proposed NDH method outperforms state-of-the-art hashing techniques.
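Not NDH itself, but a minimal baseline against which its two stated goals, informative bits and low inter-bit redundancy, can be measured: random sign projections produce binary codes, and we check per-bit entropy and inter-bit correlation. Data and dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 64))          # toy image features
W = rng.normal(size=(64, 16))            # random projection to 16 bits
B = (X @ W > 0).astype(int)              # binary codes

p1 = B.mean(axis=0)                      # fraction of ones per bit
bit_entropy = -(p1 * np.log2(p1) + (1 - p1) * np.log2(1 - p1))
print("mean bit entropy:", bit_entropy.mean())   # near 1 bit is informative
print("max |bit correlation|:",
      np.max(np.abs(np.corrcoef(B.T) - np.eye(16))))  # low is nonredundant
```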
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crossno, Patricia Joyce; Dunlavy, Daniel M.; Stanton, Eric T.
This report is a summary of the accomplishments of the 'Scalable Solutions for Processing and Searching Very Large Document Collections' LDRD, which ran from FY08 through FY10. Our goal was to investigate scalable text analysis; specifically, methods for information retrieval and visualization that could scale to extremely large document collections. Towards that end, we designed, implemented, and demonstrated a scalable framework for text analysis - ParaText - as a major project deliverable. Further, we demonstrated the benefits of using visual analysis in text analysis algorithm development, improved performance of heterogeneous ensemble models in data classification problems, and the advantages of information theoretic methods in user analysis and interpretation in cross language information retrieval. The project involved 5 members of the technical staff and 3 summer interns (including one who worked two summers). It resulted in a total of 14 publications, 3 new software libraries (2 open source and 1 internal to Sandia), several new end-user software applications, and over 20 presentations. Several follow-on projects have already begun or will start in FY11, with additional projects currently in proposal.
Securely Measuring the Overlap between Private Datasets with Cryptosets
Swamidass, S. Joshua; Matlock, Matthew; Rozenblit, Leon
2015-01-01
Many scientific questions are best approached by sharing data—collected by different groups or across large collaborative networks—into a combined analysis. Unfortunately, some of the most interesting and powerful datasets—like health records, genetic data, and drug discovery data—cannot be freely shared because they contain sensitive information. In many situations, knowing whether private datasets overlap determines if it is worthwhile to navigate the institutional, ethical, and legal barriers that govern access to sensitive, private data. We report the first method for publicly measuring the overlap between private datasets that is secure under a malicious model without relying on private protocols or message passing. This method uses a publicly shareable summary of a dataset's contents, its cryptoset, to estimate its overlap with other datasets. Cryptosets approach "information-theoretic" security, the strongest type of security possible in cryptography, which cannot be broken even with infinite computing power. We empirically and theoretically assess both the accuracy of these estimates and the security of the approach, demonstrating that cryptosets are informative, with stable accuracy, and secure. PMID:25714898
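As a hedged illustration of how a publicly shareable summary can estimate overlap, here is one plausible cryptoset-style construction in Python. The bin length, hash choice, and collision correction are assumptions for this sketch, not the paper's exact protocol; a real deployment would need a calibrated length and a salted hash to obtain the security properties the paper analyzes:

```python
import hashlib

def cryptoset(items, length=1024):
    """Publicly shareable summary: hash each private item into one of
    `length` bins and publish only the bin counts, never the items."""
    counts = [0] * length
    for item in items:
        h = int(hashlib.sha256(item.encode()).hexdigest(), 16)
        counts[h % length] += 1
    return counts

def estimate_overlap(a, b):
    """Estimate |A intersect B| from two cryptosets: the dot product of
    the count vectors, minus the expected number of chance collisions."""
    dot = sum(x * y for x, y in zip(a, b))
    expected_collisions = sum(a) * sum(b) / len(a)
    return dot - expected_collisions

A = {f"patient-{i}" for i in range(0, 600)}
B = {f"patient-{i}" for i in range(400, 1000)}
est = estimate_overlap(cryptoset(A), cryptoset(B))
print(f"estimated overlap: {est:.0f} (true: {len(A & B)})")
```

Shared items always land in the same bin, so they contribute fully to the dot product, while non-shared items collide only at a predictable background rate that the subtraction removes.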
An integrated ball projection technology for the study of dynamic interceptive actions.
Stone, J A; Panchuk, D; Davids, K; North, J S; Fairweather, I; Maynard, I W
2014-12-01
Dynamic interceptive actions, such as catching or hitting a ball, are important task vehicles for investigating the complex relationship between cognition, perception, and action in performance environments. Representative experimental designs have become more important recently, highlighting the need for research methods to ensure that the coupling of information and movement is faithfully maintained. However, retaining representative design while ensuring systematic control of experimental variables is challenging, due to the traditional tendency to employ methods that typically involve reductionist motor responses such as button-pressing or micromovements. Here, we outline the methodology behind a custom-built, integrated ball projection technology that allows images of advanced visual information to be synchronized with ball projection. This integrated technology supports the controlled presentation of visual information to participants while they perform dynamic interceptive actions. We discuss theoretical ideas behind the integration of hardware and software, along with practical issues resolved in technological design, and emphasize how the system can be integrated with emerging developments such as mixed reality environments. We conclude by considering future developments and applications of the integrated projection technology for research in human movement behaviors.
A Graph Theory Practice on Transformed Image: A Random Image Steganography
Thanikaiselvan, V.; Arulmozhivarman, P.; Subashanthini, S.; Amirtharajan, Rengarajan
2013-01-01
The modern information age is enriched with advanced network communication expertise but unfortunately at the same time faces countless security issues when dealing with secret and/or private information. The storage and transmission of secret information have become highly essential and have led to a deluge of research in this field. In this paper, an effort has been made to combine graceful graphs with the integer wavelet transform (IWT) to implement random image steganography for secure communication. The implementation begins with the conversion of the cover image into wavelet coefficients through the IWT and is followed by embedding the secret image in randomly selected coefficients through graph theory. Finally, the stego-image is obtained by applying the inverse IWT. This method provides a maximum peak signal-to-noise ratio (PSNR) of 44 dB for 266646 bits. Thus, the proposed method gives high imperceptibility through a high PSNR value, high embedding capacity in the cover image due to the adaptive embedding scheme, and high robustness against blind attacks through graph-theoretic random selection of coefficients. PMID:24453857
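The pipeline the abstract describes (forward IWT, embed in selected coefficients, inverse IWT) can be sketched in Python with NumPy. For brevity this uses a one-level row-wise integer Haar lifting transform and a seeded pseudo-random permutation in place of the paper's full 2-D IWT and graceful-graph coefficient selection; clipping of out-of-range pixel values is omitted:

```python
import numpy as np

def iwt_haar_rows(x):
    """One level of the integer (lifting) Haar transform along rows."""
    a, b = x[:, 0::2].astype(np.int64), x[:, 1::2].astype(np.int64)
    d = a - b             # detail coefficients
    s = b + (d >> 1)      # approximation (arithmetic shift = floor halving)
    return s, d

def inverse_iwt_haar_rows(s, d):
    """Exact inverse of the lifting steps above (integer-reversible)."""
    b = s - (d >> 1)
    a = d + b
    x = np.empty((s.shape[0], 2 * s.shape[1]), dtype=np.int64)
    x[:, 0::2], x[:, 1::2] = a, b
    return x

def embed(cover, bits, seed=42):
    """Hide `bits` in the LSBs of pseudo-randomly chosen detail
    coefficients (the seeded permutation stands in for the paper's
    graceful-graph selection), then invert the transform."""
    s, d = iwt_haar_rows(cover)
    flat = d.ravel()
    idx = np.random.default_rng(seed).permutation(flat.size)[:len(bits)]
    flat[idx] = (flat[idx] & ~1) | np.asarray(bits)
    return inverse_iwt_haar_rows(s, flat.reshape(d.shape))

def extract(stego, n_bits, seed=42):
    """Recompute the IWT and read back the LSBs at the same positions."""
    _, d = iwt_haar_rows(stego)
    idx = np.random.default_rng(seed).permutation(d.size)[:n_bits]
    return (d.ravel()[idx] & 1).astype(np.uint8)

cover = np.random.default_rng(0).integers(0, 256, (8, 8))
bits = np.array([1, 0, 1, 1, 0, 1, 0, 0])
stego = embed(cover, bits)
print(extract(stego, len(bits)))  # recovers the hidden bits exactly
```

Because the lifting transform is exactly integer-reversible, the receiver recovers the identical detail coefficients, so only the shared seed is needed to extract the payload.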
Information theoretic methods for image processing algorithm optimization
NASA Astrophysics Data System (ADS)
Prokushkin, Sergey F.; Galil, Erez
2015-01-01
Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable through manual calibration; an automated approach is therefore a must. We discuss an information-theory-based metric for evaluating the adaptive characteristics of an algorithm (an "adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it measures physical "information restoration" rather than perceived image quality, it helps reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).
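The abstract does not give the criterion's formula, so the sketch below uses an assumed, minimal stand-in: it scores a denoiser by the mutual information its output retains about a clean reference and uses that score as the cost function for automatic parameter selection (Python with NumPy; `box_blur` and the test images are hypothetical):

```python
import numpy as np

def mutual_information(x, y, bins=64):
    """Histogram estimate of I(X;Y) in bits between two images."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

def tune(denoise, clean, noisy, strengths):
    """Pick the filter parameter that maximizes information restored
    about the clean image: an information-theoretic cost function."""
    return max(strengths, key=lambda s: mutual_information(denoise(noisy, s), clean))

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))   # smooth synthetic scene
noisy = clean + rng.normal(0, 0.2, clean.shape)

def box_blur(img, k):
    """Crude denoiser: k passes of a 5-point mean filter."""
    out = img.copy()
    for _ in range(k):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0)
                   + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5
    return out

print("best filter strength:", tune(box_blur, clean, noisy, range(0, 8)))
```

Too little filtering leaves noise that masks the scene; too much destroys detail; both lower the mutual information, so the maximizer lands at a data-driven compromise rather than one chosen by eye.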
An analysis of random projection for changeable and privacy-preserving biometric verification.
Wang, Yongjin; Plataniotis, Konstantinos N
2010-10-01
Changeability and privacy protection are important factors for widespread deployment of biometrics-based verification systems. This paper presents a systematic analysis of a random-projection (RP)-based method for addressing these problems. The employed method transforms biometric data using a random matrix with each entry an independent and identically distributed Gaussian random variable. The similarity- and privacy-preserving properties, as well as the changeability of the biometric information in the transformed domain, are analyzed in detail. Specifically, RP on both high-dimensional image vectors and dimensionality-reduced feature vectors is discussed and compared. A vector translation method is proposed to improve the changeability of the generated templates. The feasibility of the introduced solution is well supported by detailed theoretical analyses. Extensive experimentation on a face-based biometric verification problem shows the effectiveness of the proposed method.
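A minimal sketch of the RP mechanism itself (not the authors' full system, which adds a vector translation step) can demonstrate both properties the paper analyzes: distances between templates are approximately preserved under the Gaussian random matrix, while re-issuing with a fresh matrix yields an unlinkable new template:

```python
import numpy as np

def random_projection(v, out_dim, seed):
    """Project a biometric feature vector with an i.i.d. Gaussian matrix.
    Distances are approximately preserved (Johnson-Lindenstrauss), and
    a new seed produces a new, statistically unrelated template."""
    rng = np.random.default_rng(seed)
    R = rng.normal(0.0, 1.0 / np.sqrt(out_dim), (out_dim, v.size))
    return R @ v

rng = np.random.default_rng(0)
enrolled = rng.standard_normal(1024)                  # stand-in feature vector
probe = enrolled + 0.1 * rng.standard_normal(1024)    # same user, slight noise

t1 = random_projection(enrolled, 128, seed=7)
p1 = random_projection(probe, 128, seed=7)
# Near 1.0: verification distances survive the projection.
print("distance ratio:", np.linalg.norm(t1 - p1) / np.linalg.norm(enrolled - probe))

# "Cancel" a compromised template by re-enrolling with a new seed.
t2 = random_projection(enrolled, 128, seed=8)
print("old/new template correlation:", np.corrcoef(t1, t2)[0, 1])  # near 0
```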