Analyzing complex networks evolution through Information Theory quantifiers
NASA Astrophysics Data System (ADS)
Carpi, Laura C.; Rosso, Osvaldo A.; Saco, Patricia M.; Ravetti, Martín Gómez
2011-01-01
A methodology to analyze dynamical changes in complex networks based on Information Theory quantifiers is proposed. The square root of the Jensen-Shannon divergence, a measure of dissimilarity between two probability distributions, and the MPR Statistical Complexity are used to quantify states in the network evolution process. Three cases are analyzed: the Watts-Strogatz model, a gene network during the progression of Alzheimer's disease, and a climate network for the Tropical Pacific region used to study El Niño/Southern Oscillation (ENSO) dynamics. We find that the proposed quantifiers are able not only to capture changes in the dynamics of the processes but also to quantify and compare states in their evolution.
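As an illustration of the two quantifiers named in this abstract, the sketch below computes the square root of the Jensen-Shannon divergence between two discrete distributions and the MPR statistical complexity (normalized entropy times normalized disequilibrium). This is a minimal sketch, not the authors' code: the example distributions, the natural-logarithm convention, and the function names are assumptions.

```python
import numpy as np

def shannon(p):
    """Shannon entropy (natural log) of a discrete distribution, ignoring zero bins."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two distributions on the same support."""
    m = 0.5 * (p + q)
    return shannon(m) - 0.5 * shannon(p) - 0.5 * shannon(q)

def mpr_complexity(p):
    """Normalized entropy H, disequilibrium Q, and MPR complexity C = H * Q."""
    n = len(p)
    pe = np.full(n, 1.0 / n)                       # uniform reference distribution
    h = shannon(p) / np.log(n)                     # normalized Shannon entropy
    q0 = -2.0 / (((n + 1) / n) * np.log(n + 1) - 2 * np.log(2 * n) + np.log(n))
    q = q0 * jensen_shannon(p, pe)                 # normalized disequilibrium
    return h, q, h * q

# Two "states" of a network described by probability distributions (e.g. degree distributions)
p1 = np.array([0.1, 0.2, 0.4, 0.3])
p2 = np.full(4, 0.25)
print(np.sqrt(jensen_shannon(p1, p2)))             # sqrt of JSD is a metric between states
print(mpr_complexity(p1))                          # (H, Q, C)
```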
Information Theory in Biology after 18 Years
ERIC Educational Resources Information Center
Johnson, Horton A.
1970-01-01
Reviews applications of information theory to biology, concluding that they have not proved very useful. Suggests modifications and extensions to increase the biological relevance of the theory, and speculates about applications in quantifying cell proliferation, chemical homeostasis and aging. (EB)
Lee, Joon; Maslove, David M
2015-07-31
Clinical workflow is infused with large quantities of data, particularly in areas with enhanced monitoring such as the Intensive Care Unit (ICU). Information theory can quantify the expected amounts of total and redundant information contained in a given clinical data type, and as such has the potential to inform clinicians on how to manage the vast volumes of data they are required to analyze in their daily practice. The objective of this proof-of-concept study was to quantify the amounts of redundant information associated with common ICU lab tests. We analyzed the information content of 11 laboratory test results from 29,149 adult ICU admissions in the MIMIC II database. Information theory was applied to quantify the expected amount of redundant information both between lab values from the same ICU day and between consecutive ICU days. Most lab values showed a decreasing trend over time in the expected amount of novel information they contained. Platelet, blood urea nitrogen (BUN), and creatinine measurements exhibited the greatest amount of redundant information on days 2 and 3 compared to the previous day. The creatinine-BUN and sodium-chloride pairs had the most redundancy. Information theory can help identify and discourage unnecessary testing and bloodwork, and can in general be a useful data analytic technique for many medical specialties that deal with information overload.
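The redundancy estimates described above rest on mutual information between lab values. A minimal sketch of such an estimate on synthetic data is shown below; the MIMIC II values are not reproduced here, and the lognormal samples, the 10-bin discretization, and the function names are assumptions for illustration only.

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Mutual information (bits) between two continuous series, estimated by binning."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

# Hypothetical paired day-1 lab values for two analytes across admissions
rng = np.random.default_rng(0)
bun = rng.lognormal(3.0, 0.4, 5000)
creatinine = 0.05 * bun * rng.lognormal(0.0, 0.2, 5000)    # correlated with BUN

mi = mutual_information(bun, creatinine)
h_creat = mutual_information(creatinine, creatinine)       # self-information = entropy of the binned variable
print(f"shared (redundant) information: {mi:.2f} of {h_creat:.2f} bits")
```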
Quantifying uncertainty in climate change science through empirical information theory.
Majda, Andrew J; Gershgorin, Boris
2010-08-24
Quantifying the uncertainty for the present climate and the predictions of climate change in the suite of imperfect Atmosphere Ocean Science (AOS) computer models is a central issue in climate change science. Here, a systematic approach to these issues with firm mathematical underpinning is developed through empirical information theory. An information metric to quantify AOS model errors in the climate is proposed here which incorporates both coarse-grained mean model errors and covariance ratios in a transformation invariant fashion. The subtle behavior of model errors with this information metric is quantified in an instructive statistically exactly solvable test model with direct relevance to climate change science, including the prototype behavior of tracer gases such as CO2. Formulas for identifying the most sensitive climate change directions using statistics of the present climate or an AOS model approximation are developed here; these formulas just involve finding the eigenvector associated with the largest eigenvalue of a quadratic form computed through suitable unperturbed climate statistics. These climate change concepts are illustrated on a statistically exactly solvable one-dimensional stochastic model with relevance for low frequency variability of the atmosphere. Viable algorithms for implementation of these concepts are discussed throughout the paper.
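The information metric described here combines a mean-error ("signal") term with a covariance ("dispersion") term. The sketch below evaluates the relative entropy between two Gaussian climate approximations in that decomposed form; it is a generic Gaussian Kullback-Leibler divergence rather than the paper's exact formulation, and the example means and covariances are invented.

```python
import numpy as np

def gaussian_relative_entropy(mu_p, cov_p, mu_q, cov_q):
    """KL divergence D(p||q) between Gaussians, split into 'signal' (mean error)
    and 'dispersion' (covariance mismatch) contributions."""
    k = len(mu_p)
    cov_q_inv = np.linalg.inv(cov_q)
    d_mu = mu_p - mu_q
    signal = 0.5 * d_mu @ cov_q_inv @ d_mu
    dispersion = 0.5 * (np.trace(cov_q_inv @ cov_p) - k
                        - np.log(np.linalg.det(cov_p) / np.linalg.det(cov_q)))
    return signal, dispersion, signal + dispersion

# "True" climate statistics vs an imperfect model's statistics (illustrative numbers only)
mu_true, cov_true = np.array([0.0, 1.0]), np.array([[1.0, 0.3], [0.3, 2.0]])
mu_model, cov_model = np.array([0.2, 0.8]), np.array([[1.2, 0.1], [0.1, 1.5]])
print(gaussian_relative_entropy(mu_true, cov_true, mu_model, cov_model))
```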
Maximal coherence and the resource theory of purity
NASA Astrophysics Data System (ADS)
Streltsov, Alexander; Kampermann, Hermann; Wölk, Sabine; Gessner, Manuel; Bruß, Dagmar
2018-05-01
The resource theory of quantum coherence studies the off-diagonal elements of a density matrix in a distinguished basis, whereas the resource theory of purity studies all deviations from the maximally mixed state. We establish a direct connection between the two resource theories, by identifying purity as the maximal coherence which is achievable by unitary operations. The states that saturate this maximum identify a universal family of maximally coherent mixed states. These states are optimal resources under maximally incoherent operations, and thus independent of the way coherence is quantified. For all distance-based coherence quantifiers the maximal coherence can be evaluated exactly, and is shown to coincide with the corresponding distance-based purity quantifier. We further show that purity bounds the maximal amount of entanglement and discord that can be generated by unitary operations, thus demonstrating that purity is the most elementary resource for quantum information processing.
INFORMATION: THEORY, BRAIN, AND BEHAVIOR
Jensen, Greg; Ward, Ryan D.; Balsam, Peter D.
2016-01-01
In the 65 years since its formal specification, information theory has become an established statistical paradigm, providing powerful tools for quantifying probabilistic relationships. Behavior analysis has begun to adopt these tools as a novel means of measuring the interrelations between behavior, stimuli, and contingent outcomes. This approach holds great promise for making more precise determinations about the causes of behavior and the forms in which conditioning may be encoded by organisms. In addition to providing an introduction to the basics of information theory, we review some of the ways that information theory has informed the studies of Pavlovian conditioning, operant conditioning, and behavioral neuroscience. In addition to enriching each of these empirical domains, information theory has the potential to act as a common statistical framework by which results from different domains may be integrated, compared, and ultimately unified. PMID:24122456
A framework for designing and analyzing binary decision-making strategies in cellular systems
Porter, Joshua R.; Andrews, Burton W.; Iglesias, Pablo A.
2015-01-01
Cells make many binary (all-or-nothing) decisions based on noisy signals gathered from their environment and processed through noisy decision-making pathways. Reducing the effect of noise to improve the fidelity of decision-making comes at the expense of increased complexity, creating a tradeoff between performance and metabolic cost. We present a framework based on rate distortion theory, a branch of information theory, to quantify this tradeoff and design binary decision-making strategies that balance low cost and accuracy in optimal ways. With this framework, we show that several observed behaviors of binary decision-making systems, including random strategies, hysteresis, and irreversibility, are optimal in an information-theoretic sense for various situations. This framework can also be used to quantify the goals around which a decision-making system is optimized and to evaluate the optimality of cellular decision-making systems by a fundamental information-theoretic criterion. As proof of concept, we use the framework to quantify the goals of the externally triggered apoptosis pathway. PMID:22370552
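Rate distortion theory, as invoked above, trades the information rate of a decision strategy against its expected error. The sketch below uses the standard Blahut-Arimoto iteration to trace that trade-off for a binary environmental state with Hamming distortion; the source probabilities and trade-off parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def blahut_arimoto(p_x, dist, beta, n_iter=200):
    """One rate-distortion point for a discrete source via Blahut-Arimoto at trade-off beta."""
    n_x, n_xhat = dist.shape
    q = np.full(n_xhat, 1.0 / n_xhat)                 # output (decision) marginal
    for _ in range(n_iter):
        w = q * np.exp(-beta * dist)                  # unnormalized p(xhat | x), shape (n_x, n_xhat)
        p_cond = w / w.sum(axis=1, keepdims=True)
        q = p_x @ p_cond
    rate = np.sum(p_x[:, None] * p_cond * np.log2(p_cond / q))
    distortion = np.sum(p_x[:, None] * p_cond * dist)
    return rate, distortion

# Binary environmental state (e.g. ligand present/absent) mapped to a binary decision;
# Hamming distortion penalizes wrong decisions.
p_x = np.array([0.7, 0.3])
dist = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
for beta in (0.5, 2.0, 8.0):
    r, d = blahut_arimoto(p_x, dist, beta)
    print(f"beta={beta:4.1f}  rate={r:.3f} bits  expected error={d:.3f}")
```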
The probability heuristics model of syllogistic reasoning.
Chater, N; Oaksford, M
1999-03-01
A probability heuristic model (PHM) for syllogistic reasoning is proposed. An informational ordering over quantified statements suggests simple probability-based heuristics for syllogistic reasoning. The most important is the "min-heuristic": choose the type of the least informative premise as the type of the conclusion. The rationality of this heuristic is confirmed by an analysis of the probabilistic validity of syllogistic reasoning which treats logical inference as a limiting case of probabilistic inference. A meta-analysis of past experiments reveals close fits with PHM. PHM also compares favorably with alternative accounts, including mental logics, mental models, and deduction as verbal reasoning. Crucially, PHM extends naturally to generalized quantifiers, such as Most and Few, which have not been characterized logically and are, consequently, beyond the scope of current mental logic and mental model theories. Two experiments confirm the novel predictions of PHM when generalized quantifiers are used in syllogistic arguments. PHM suggests that syllogistic reasoning performance may be determined by simple but rational informational strategies justified by probability theory rather than by logic. Copyright 1999 Academic Press.
Preliminary Characterization of Erythrocytes Deformability on the Entropy-Complexity Plane
Korol, Ana M; D’Arrigo, Mabel; Foresto, Patricia; Pérez, Susana; Martín, Maria T; Rosso, Osvaldo A
2010-01-01
We present an application of wavelet-based Information Theory quantifiers (Normalized Total Shannon Entropy, MPR-Statistical Complexity and the Entropy-Complexity plane) to the characterization of red blood cell membrane viscoelasticity. These quantifiers exhibit important localization advantages provided by Wavelet Theory. The present approach produces a clear characterization of this dynamical system, revealing an evident manifestation of a random process in the red cell samples of healthy individuals and a sharp reduction of randomness in samples associated with a human haematological disease such as β-thalassaemia minor. PMID:21611139
NASA Astrophysics Data System (ADS)
Rosso, Osvaldo A.; Craig, Hugh; Moscato, Pablo
2009-03-01
We introduce novel Information Theory quantifiers in a computational linguistic study that involves a large corpus of English Renaissance literature. The 185 texts studied (136 plays and 49 poems in total), with first editions that range from 1580 to 1640, form a representative set of the period. Our data set includes 30 texts unquestionably attributed to Shakespeare; in addition we also included A Lover’s Complaint, a poem which generally appears in Shakespeare collected editions but whose authorship is currently in dispute. Our statistical complexity quantifiers combine the power of the Jensen-Shannon divergence with the entropy variations as computed from a probability distribution function of the observed word use frequencies. Our results show, among other things, that for a given entropy poems display higher complexity than plays, that Shakespeare’s work falls into two distinct clusters in entropy, and that his work is remarkable for its homogeneity and for its closeness to overall means.
Iskarous, Khalil; Mooshammer, Christine; Hoole, Phil; Recasens, Daniel; Shadle, Christine H.; Saltzman, Elliot; Whalen, D. H.
2013-01-01
Coarticulation and invariance are two topics at the center of theorizing about speech production and speech perception. In this paper, a quantitative scale is proposed that places coarticulation and invariance at the two ends of the scale. This scale is based on physical information flow in the articulatory signal, and uses Information Theory, especially the concept of mutual information, to quantify these central concepts of speech research. Mutual Information measures the amount of physical information shared across phonological units. In the proposed quantitative scale, coarticulation corresponds to greater and invariance to lesser information sharing. The measurement scale is tested by data from three languages: German, Catalan, and English. The relation between the proposed scale and several existing theories of coarticulation is discussed, and implications for existing theories of speech production and perception are presented. PMID:23927125
Measuring the jitter of ring oscillators by means of information theory quantifiers
NASA Astrophysics Data System (ADS)
Antonelli, M.; De Micco, L.; Larrondo, H. A.
2017-02-01
Ring oscillators (ROs) are elementary blocks widely used in digital design. Jitter is unavoidable in ROs, and its presence is undesired in many applications, such as clock generators. On the contrary, jitter may be used as the noise source in RO-based true random number generators (TRNGs). Consequently, measuring jitter is a relevant issue when characterizing an RO, and it is the subject of this paper. The main contribution is the use of Information Theory Quantifiers (ITQ) as measures of RO jitter. It is shown that, among the several ITQ evaluated, two emerge as good measures because they are independent of the parameters used for their statistical determination. They turned out to be robust and may be implemented experimentally. We found that a dual entropy plane allows a visual comparison of results.
Extracting features of Gaussian self-similar stochastic processes via the Bandt-Pompe approach.
Rosso, O A; Zunino, L; Pérez, D G; Figliola, A; Larrondo, H A; Garavaglia, M; Martín, M T; Plastino, A
2007-12-01
By recourse to appropriate information theory quantifiers (normalized Shannon entropy and Martín-Plastino-Rosso intensive statistical complexity measure), we revisit the characterization of Gaussian self-similar stochastic processes from a Bandt-Pompe viewpoint. We show that the ensuing approach exhibits considerable advantages with respect to other treatments. In particular, clear quantifier gaps are found in the transition between the continuous processes and their associated noises.
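A minimal sketch of the Bandt-Pompe construction used in this and several other entries is given below: a time series is mapped to a distribution of ordinal patterns, from which a normalized permutation entropy follows. The embedding dimension d = 4, the synthetic series, and the helper names are assumptions.

```python
import numpy as np
from itertools import permutations

def ordinal_distribution(x, d=4, tau=1):
    """Bandt-Pompe distribution of ordinal patterns (embedding dimension d, delay tau)."""
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - (d - 1) * tau):
        window = x[i:i + d * tau:tau]
        counts[tuple(np.argsort(window).tolist())] += 1   # rank pattern of the window
    p = np.array(list(counts.values()), dtype=float)
    return p / p.sum()

def permutation_entropy(p):
    """Normalized Shannon entropy of the ordinal-pattern distribution (0 = regular, 1 = noise-like)."""
    nz = p[p > 0]
    return -np.sum(nz * np.log(nz)) / np.log(len(p))

rng = np.random.default_rng(1)
white_noise = rng.standard_normal(10_000)
integrated = np.cumsum(white_noise)                 # crude stand-in for a self-similar process
for name, series in (("noise", white_noise), ("integrated", integrated)):
    print(name, round(permutation_entropy(ordinal_distribution(series, d=4)), 3))
```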
NASA Astrophysics Data System (ADS)
Pan, Feng; Pachepsky, Yakov A.; Guber, Andrey K.; McPherson, Brian J.; Hill, Robert L.
2012-01-01
Understanding streamflow patterns in space and time is important for improving flood and drought forecasting, water resources management, and predictions of ecological changes. Objectives of this work include (a) to characterize the spatial and temporal patterns of streamflow using information theory-based measures at two thoroughly-monitored agricultural watersheds located in different hydroclimatic zones with similar land use, and (b) to elucidate and quantify temporal and spatial scale effects on those measures. We selected two USDA experimental watersheds to serve as case study examples, including the Little River experimental watershed (LREW) in Tifton, Georgia and the Sleepers River experimental watershed (SREW) in North Danville, Vermont. Both watersheds possess several nested sub-watersheds and more than 30 years of continuous data records of precipitation and streamflow. Information content measures (metric entropy and mean information gain) and complexity measures (effective measure complexity and fluctuation complexity) were computed based on the binary encoding of 5-year streamflow and precipitation time series data. We quantified patterns of streamflow using probabilities of joint or sequential appearances of the binary symbol sequences. Results of our analysis illustrate that information content measures of streamflow time series are much smaller than those for precipitation data, and the streamflow data also exhibit higher complexity, suggesting that the watersheds effectively act as filters of the precipitation information that leads to the observed additional complexity in streamflow measures. Correlation coefficients between the information-theory-based measures and time intervals are close to 0.9, demonstrating the significance of temporal scale effects on streamflow patterns. Moderate spatial scale effects on streamflow patterns are observed with absolute values of correlation coefficients between the measures and sub-watershed area varying from 0.2 to 0.6 in the two watersheds. We conclude that temporal effects must be evaluated and accounted for when the information theory-based methods are used for performance evaluation and comparison of hydrological models.
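The information content measures named above are computed from binary-encoded time series. The sketch below shows one plausible version: the series is symbolized against its median, and block entropies over words of length L give a metric entropy and a mean information gain. The median threshold, word length, and synthetic record are assumptions; the paper's exact encoding may differ.

```python
import numpy as np
from collections import Counter

def block_entropy(symbols, L):
    """Shannon entropy (bits) of overlapping words of length L in a symbol sequence."""
    words = [tuple(symbols[i:i + L]) for i in range(len(symbols) - L + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def metric_entropy(symbols, L):
    """Block entropy per symbol, normalized by word length."""
    return block_entropy(symbols, L) / L

def mean_information_gain(symbols, L):
    """Average new information carried by one more symbol: H(L) - H(L-1)."""
    return block_entropy(symbols, L) - block_entropy(symbols, L - 1)

# Binary encoding of a daily streamflow series: 1 if above the record's median, else 0
rng = np.random.default_rng(2)
flow = np.cumsum(rng.standard_normal(5 * 365)) + 100        # synthetic 5-year record
symbols = (flow > np.median(flow)).astype(int)
print(metric_entropy(symbols, L=5), mean_information_gain(symbols, L=5))
```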
An application of information theory to stochastic classical gravitational fields
NASA Astrophysics Data System (ADS)
Angulo, J.; Angulo, J. C.; Angulo, J. M.
2018-06-01
The objective of this study lies in incorporating the concepts developed in Information Theory (entropy, complexity, etc.) with the aim of quantifying the variation of the uncertainty associated with a stochastic physical system resident in a spatiotemporal region. As an example of application, a relativistic classical gravitational field has been considered, with a stochastic behavior resulting from the effect induced by one or several external perturbation sources. One of the key concepts of the study is the covariance kernel between two points within the chosen region. Using this concept and the appropriate criteria, a methodology is proposed to evaluate the change of uncertainty at a given spatiotemporal point, based on available information and efficiently applying the diverse methods that Information Theory provides. For illustration, a stochastic version of the Einstein equation with an added Gaussian Langevin term is analyzed.
Managing for resilience: an information theory-based ...
Ecosystems are complex and multivariate; hence, methods to assess the dynamics of ecosystems should have the capacity to evaluate multiple indicators simultaneously. Most research on identifying leading indicators of regime shifts has focused on univariate methods and simple models, which have limited utility when evaluating real ecosystems, particularly because drivers are often unknown. We discuss some common univariate and multivariate approaches for detecting critical transitions in ecosystems and demonstrate their capabilities via case studies. Synthesis and applications: we illustrate the utility of an information theory-based index for assessing ecosystem dynamics. Trends in this index also provide a sentinel of both abrupt and gradual transitions in ecosystems. In response to the need to identify leading indicators of regime shifts in ecosystems, our research compares traditional indicators and Fisher information, an information theory-based method, by examining four case study systems. Results demonstrate the utility of these methods and offer great promise for quantifying and managing for resilience.
Towards understanding the behavior of physical systems using information theory
NASA Astrophysics Data System (ADS)
Quax, Rick; Apolloni, Andrea; Sloot, Peter M. A.
2013-09-01
One of the goals of complex network analysis is to identify the most influential nodes, i.e., the nodes that dictate the dynamics of other nodes. In the case of autonomous systems or transportation networks, highly connected hubs play a preeminent role in diffusing the flow of information and viruses; in contrast, in language evolution most linguistic norms come from peripheral nodes that have only a few contacts. Clearly a topological analysis of the interactions alone is not sufficient to identify the nodes that drive the state of the network. Here we show how information theory can be used to quantify how the dynamics of individual nodes propagate through a system. We interpret the state of a node as a storage of information about the state of other nodes, which is quantified in terms of Shannon information. This information is transferred through interactions and lost due to noise, and we calculate how far it can travel through a network. We apply this concept to a model of opinion formation in a complex social network to calculate the impact of each node by measuring how long its opinion is remembered by the network. Counter-intuitively we find that the dynamics of opinions are not determined by the hubs or peripheral nodes, but rather by nodes with an intermediate connectivity.
Profile-likelihood Confidence Intervals in Item Response Theory Models.
Chalmers, R Philip; Pek, Jolynn; Liu, Yang
2017-01-01
Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
Ensembles vs. information theory: supporting science under uncertainty
NASA Astrophysics Data System (ADS)
Nearing, Grey S.; Gupta, Hoshin V.
2018-05-01
Multi-model ensembles are one of the most common ways to deal with epistemic uncertainty in hydrology. This is a problem because there is no known way to sample models such that the resulting ensemble admits a measure that has any systematic (i.e., asymptotic, bounded, or consistent) relationship with uncertainty. Multi-model ensembles are effectively sensitivity analyses and cannot - even partially - quantify uncertainty. One consequence of this is that multi-model approaches cannot support a consistent scientific method - in particular, multi-model approaches yield unbounded errors in inference. In contrast, information theory supports a coherent hypothesis test that is robust to (i.e., bounded under) arbitrary epistemic uncertainty. This paper may be understood as advocating a procedure for hypothesis testing that does not require quantifying uncertainty, but is coherent and reliable (i.e., bounded) in the presence of arbitrary (unknown and unknowable) uncertainty. We conclude by offering some suggestions about how this proposed philosophy of science suggests new ways to conceptualize and construct simulation models of complex, dynamical systems.
Inferring the phase of the moon from the color of sunset
NASA Astrophysics Data System (ADS)
Thiermann, Ryan; Sweeney, Alison; Murugan, Arvind
We use information theory to investigate whether patterns in the spectral progression of twilight are informative of the lunar phase. Such optical cues have been sought to explain the synchronized spawning of corals and other biological processes that are coupled to the lunar cycle. We first quantify the maximum available information about lunar phase in twilight by combining measurements of twilight spectrum and models of spectral variations due to weather and atmospheric changes. We then quantify the biophysically accessible information by accounting for the spectral resolution of opsin proteins and the temporal resolution with which organisms can track spectral changes. We find that in most climates, relative spectral variation is a more reliable indicator of lunar phase than intensity variation alone since the former is less affected by cloud cover. We also find that organisms can extract most available information with three distinct opsins and reasonable integration times.
Plant canopy gap-size analysis theory for improving optical measurements of leaf-area index
NASA Astrophysics Data System (ADS)
Chen, Jing M.; Cihlar, Josef
1995-09-01
Optical instruments currently available for measuring the leaf-area index (LAI) of a plant canopy all utilize only the canopy gap-fraction information. These instruments include the Li-Cor LAI-2000 Plant Canopy Analyzer, Decagon, and Demon. The advantages of utilizing both the canopy gap-fraction and gap-size information are shown. For the purpose of measuring the canopy gap size, a prototype sunfleck-LAI instrument named Tracing Radiation and Architecture of Canopies (TRAC), has been developed and tested in two pure conifer plantations, red pine (Pinus resinosa Ait.) and jack pine (Pinus banksiana Lamb). A new gap-size-analysis theory is presented to quantify the effect of canopy architecture on optical measurements of LAI based on the gap-fraction principle. The theory is an improvement on that of Lang and Xiang [Agric. For. Meteorol. 37, 229 (1986)]. In principle, this theory can be used for any heterogeneous canopies.
Rosso, Osvaldo A; Ospina, Raydonal; Frery, Alejandro C
2016-01-01
We present a new approach for handwritten signature classification and verification based on descriptors stemming from time causal information theory. The proposal uses the Shannon entropy, the statistical complexity, and the Fisher information evaluated over the Bandt and Pompe symbolization of the horizontal and vertical coordinates of signatures. These six features are easy and fast to compute, and they are the input to a One-Class Support Vector Machine classifier. The results are better than state-of-the-art online techniques that employ higher-dimensional feature spaces which often require specialized software and hardware. We assess the consistency of our proposal with respect to the size of the training sample, and we also use it to classify the signatures into meaningful groups.
Quantifying learning in biotracer studies.
Brown, Christopher J; Brett, Michael T; Adame, Maria Fernanda; Stewart-Koster, Ben; Bunn, Stuart E
2018-04-12
Mixing models have become requisite tools for analyzing biotracer data, most commonly stable isotope ratios, to infer dietary contributions of multiple sources to a consumer. However, Bayesian mixing models will always return a result that defaults to their priors if the data poorly resolve the source contributions, and thus, their interpretation requires caution. We describe an application of information theory to quantify how much has been learned about a consumer's diet from new biotracer data. We apply the approach to two example data sets. We find that variation in the isotope ratios of sources limits the precision of estimates for the consumer's diet, even with a large number of consumer samples. Thus, the approach which we describe is a type of power analysis that uses a priori simulations to find an optimal sample size. Biotracer data are fundamentally limited in their ability to discriminate consumer diets. We suggest that other types of data, such as gut content analysis, must be used as prior information in model fitting, to improve model learning about the consumer's diet. Information theory may also be used to identify optimal sampling protocols in situations where sampling of consumers is limited due to expense or ethical concerns.
Covariance Matrix for Helicity Couplings
Sadasivan, D.; Doring, M.
2018-04-06
The helicity couplings at Q² = 0 for excited baryonic states have been determined in the past, but no information is available regarding their correlations, which are relevant for comparison to theory. We present here our calculation of such correlations between the helicity couplings. These correlations contain information for quantitative comparisons with theoretical values, can be used to quantify the impact of polarization observables, and can help design new experiments.
Process connectivity in a naturally prograding river delta
NASA Astrophysics Data System (ADS)
Sendrowski, Alicia; Passalacqua, Paola
2017-03-01
River deltas are lowland systems that can display high hydrological connectivity. This connectivity can be structural (morphological connections), functional (control of fluxes), and process connectivity (information flow from system drivers to sinks). In this work, we quantify hydrological process connectivity in Wax Lake Delta, coastal Louisiana, by analyzing couplings among external drivers (discharge, tides, and wind) and water levels recorded at five islands and one channel over summer 2014. We quantify process connections with information theory, a branch of mathematics concerned with the communication of information. We represent process connections as a network; variables serve as network nodes and couplings as network links describing the strength, direction, and time scale of information flow. Comparing process connections at long (105 days) and short (10 days) time scales, we show that tides exhibit daily synchronization with water level, with decreasing strength from downstream to upstream, and that tides transfer information as tides transition from spring to neap. Discharge synchronizes with water level and the time scale of its information transfer compares well to physical travel times through the system, computed with a hydrodynamic model. Information transfer and physical transport show similar spatial patterns, although information transfer time scales are larger than physical travel times. Wind events associated with water level setup lead to increased process connectivity with highly variable information transfer time scales. We discuss the information theory results in the context of the hydrologic behavior of the delta, the role of vegetation as a connector/disconnector on islands, and the applicability of process networks as tools for delta modeling results.
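One common way to quantify the directed couplings described above is transfer entropy between discretized driver and response series. The sketch below is a minimal history-length-1 estimator; the binary tide/water-level series and the three-step delay are synthetic assumptions chosen only to make the directed transfer visible, not data from Wax Lake Delta.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, lag=1):
    """Transfer entropy T(X -> Y) in bits for discretized series, with history length 1."""
    trip = list(zip(y[lag:], y[:-lag], x[:-lag]))        # (y_future, y_past, x_past)
    n = len(trip)
    n_abc = Counter(trip)
    n_ab = Counter((a, b) for a, b, _ in trip)           # (y_future, y_past)
    n_bc = Counter((b, c) for _, b, c in trip)           # (y_past, x_past)
    n_b = Counter(b for _, b, _ in trip)                 # y_past
    te = 0.0
    for (a, b, c), count in n_abc.items():
        te += (count / n) * np.log2(count * n_b[b] / (n_ab[(a, b)] * n_bc[(b, c)]))
    return te

# Discretized (e.g. above/below median) driver and response series
rng = np.random.default_rng(3)
tide = (np.sin(np.arange(2000) / 2.0) + 0.3 * rng.standard_normal(2000) > 0).astype(int)
level = np.roll(tide, 3)                                  # water level follows the tide with a delay
print(transfer_entropy(tide, level, lag=3))
```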
Transfer Entropy and Transient Limits of Computation
Prokopenko, Mikhail; Lizier, Joseph T.
2014-01-01
Transfer entropy is a recently introduced information-theoretic measure quantifying directed statistical coherence between spatiotemporal processes, and is widely used in diverse fields ranging from finance to neuroscience. However, its relationships to fundamental limits of computation, such as Landauer's limit, remain unknown. Here we show that in order to increase transfer entropy (predictability) by one bit, heat flow must match or exceed Landauer's limit. Importantly, we generalise Landauer's limit to bi-directional information dynamics for non-equilibrium processes, revealing that the limit applies to prediction, in addition to retrodiction (information erasure). Furthermore, the results are related to negentropy, and to Bremermann's limit and the Bekenstein bound, producing, perhaps surprisingly, lower bounds on the computational deceleration and information loss incurred during an increase in predictability about the process. The identified relationships set new computational limits in terms of fundamental physical quantities, and establish transfer entropy as a central measure connecting information theory, thermodynamics and theory of computation. PMID:24953547
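The central bound discussed here relates one bit of transfer entropy (predictability) to Landauer's limit of k_B T ln 2 of dissipated heat. The short calculation below evaluates that minimum at room temperature; the temperature is an assumed illustrative value.

```python
import numpy as np

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # assumed temperature, K

# Landauer's limit: the minimum heat dissipated per bit of information erased --
# and, per the paper's generalization, per bit gained in transfer entropy.
heat_per_bit = k_B * T * np.log(2)
print(f"{heat_per_bit:.3e} J per bit at {T:.0f} K")   # roughly 2.9e-21 J
```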
Generalizability Theory and Classical Test Theory
ERIC Educational Resources Information Center
Brennan, Robert L.
2011-01-01
Broadly conceived, reliability involves quantifying the consistencies and inconsistencies in observed scores. Generalizability theory, or G theory, is particularly well suited to addressing such matters in that it enables an investigator to quantify and distinguish the sources of inconsistencies in observed scores that arise, or could arise, over…
A permutation information theory tour through different interest rate maturities: the Libor case.
Bariviera, Aurelio Fernández; Guercio, María Belén; Martinez, Lisana B; Rosso, Osvaldo A
2015-12-13
This paper analyses Libor interest rates for seven different maturities, referring to operations in British pounds, euros, Swiss francs and Japanese yen, during the period 2001-2015. The analysis is performed by means of two quantifiers derived from information theory: the permutation Shannon entropy and the permutation Fisher information measure. An anomalous behaviour in the Libor is detected in all currencies except euros during the years 2006-2012. The stochastic switch is more severe in the one-, two- and three-month maturities. Given the special mechanism of Libor setting, we conjecture that the behaviour could have been produced by the manipulation that was uncovered by financial authorities. We argue that our methodology is pertinent as a market overseeing instrument. © 2015 The Author(s).
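Alongside the permutation entropy sketched earlier, a permutation Fisher information measure can be computed from the same ordinal-pattern distribution. The sketch below uses a simple F0 = 1/2 normalization over a fixed pattern ordering; this is a simplification, since published definitions use a position-dependent normalization constant and a specific pattern ordering, and the example distributions are assumptions.

```python
import numpy as np

def permutation_fisher(p):
    """Discrete Fisher information of an ordinal-pattern distribution
    (simple F0 = 1/2 normalization, patterns taken in a fixed order)."""
    sq = np.sqrt(p)
    return 0.5 * np.sum((sq[1:] - sq[:-1]) ** 2)

# p could come from the Bandt-Pompe ordinal_distribution sketch shown earlier,
# applied to a window of daily rates for one Libor maturity.
p_uniform = np.full(24, 1.0 / 24)        # d = 4 -> 4! = 24 patterns, fully random window
p_peaked = np.zeros(24)
p_peaked[0] = 1.0                        # a single pattern dominates (e.g. long monotone stretches)
print(permutation_fisher(p_uniform), permutation_fisher(p_peaked))
```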
Information theoretic comparisons of original and transformed data from Landsat MSS and TM
NASA Technical Reports Server (NTRS)
Malila, W. A.
1985-01-01
The dispersion and concentration of signal values in transformed data from the Landsat-4 MSS and TM instruments are analyzed using a communications theory approach. Shannon's definition of entropy was used to quantify information, and the concept of mutual information was employed to develop a measure of the information contained in several subsets of variables. Several comparisons of information content are made on the basis of this measure, including: system design capacities; data volume occupied by agricultural data; and the information content of original bands and Tasseled Cap variables. A method for analyzing noise effects in MSS and TM data is proposed.
López-Rosa, Sheila; Molina-Espíritu, Moyocoyani; Esquivel, Rodolfo O; Soriano-Correa, Catalina; Dehesa, Jésus S
2016-12-05
The relative structural location of a selected group of 27 sulfonamide-like molecules in a chemical space defined by three information theory quantities (Shannon entropy, Fisher information, and disequilibrium) is discussed. This group is composed of 15 active bacteriostatic molecules, 11 theoretically designed ones, and para-aminobenzoic acid. This endeavor allows molecules that share common chemical properties through the molecular backbone, but with significant differences in the identity of the chemical substituents, which might result in bacteriostatic activity, to be structurally classified and characterized. This is performed by quantifying the structural changes on the electron density distribution due to different functional groups and number of electrons. The macroscopic molecular features are described by means of the entropy-like notions of spatial electronic delocalization, order, and uniformity. Hence, an information theory three-dimensional space (IT-3D) emerges that allows molecules with common properties to be gathered. This space witnesses the biological activity of the sulfonamides. Some structural aspects and information theory properties can be associated, as a result of the IT-3D chemical space, with the bacteriostatic activity of these molecules. Most interesting is that the active bacteriostatic molecules are more similar to para-aminobenzoic acid than to the theoretically designed analogues. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Liu, Hesheng; Agam, Yigal; Madsen, Joseph R.; Kreiman, Gabriel
2010-01-01
The difficulty of visual recognition stems from the need to achieve high selectivity while maintaining robustness to object transformations within hundreds of milliseconds. Theories of visual recognition differ in whether the neuronal circuits invoke recurrent feedback connections or not. The timing of neurophysiological responses in visual cortex plays a key role in distinguishing between bottom-up and top-down theories. Here we quantified at millisecond resolution the amount of visual information conveyed by intracranial field potentials from 912 electrodes in 11 human subjects. We could decode object category information from human visual cortex in single trials as early as 100 ms post-stimulus. Decoding performance was robust to depth rotation and scale changes. The results suggest that physiological activity in the temporal lobe can account for key properties of visual recognition. The fast decoding in single trials is compatible with feed-forward theories and provides strong constraints for computational models of human vision. PMID:19409272
O'Connor, B.L.; Hondzo, Miki; Harvey, J.W.
2009-01-01
Traditionally, dissolved oxygen (DO) fluxes have been calculated using the thin-film theory with DO microstructure data in systems characterized by fine sediments and low velocities. However, recent experimental evidence of fluctuating DO concentrations near the sediment-water interface suggests that turbulence and coherent motions control the mass transfer, and the surface renewal theory gives a more mechanistic model for quantifying fluxes. Both models involve quantifying the mass transfer coefficient (k) and the relevant concentration difference (ΔC). This study compared several empirical models for quantifying k based on both thin-film and surface renewal theories, as well as presents a new method for quantifying ΔC (dynamic approach) that is consistent with the observed DO concentration fluctuations near the interface. Data were used from a series of flume experiments that includes both physical and kinetic uptake limitations of the flux. Results indicated that methods for quantifying k and ΔC using the surface renewal theory better estimated the DO flux across a range of fluid-flow conditions. © 2009 ASCE.
An Intuitionistic Fuzzy Logic Models for Multicriteria Decision Making Under Uncertainty
NASA Astrophysics Data System (ADS)
Jana, Biswajit; Mohanty, Sachi Nandan
2017-04-01
The purpose of this paper is to enhance the applicability of fuzzy sets for developing mathematical models for decision making under uncertainty. In general, a decision-making process consists of four stages: collection of information from various sources, compilation of the information, execution of the information, and finally the decision or action. Fuzzy set theory is uniquely capable of quantifying linguistic expressions in mathematical form in complex situations. Intuitionistic fuzzy sets (IFSs) reflect the fact that the degree of non-membership is not always equal to one minus the degree of membership; there may be some degree of hesitation. Thus, there are situations where IFS theory provides a more meaningful and applicable way to cope with the imprecise information present in multiple-criteria decision-making problems. This paper focuses on IFSs, which help in solving real-world problems under uncertainty.
Modelling the heart as a communication system.
Ashikaga, Hiroshi; Aguilar-Rodríguez, José; Gorsky, Shai; Lusczek, Elizabeth; Marquitti, Flávia Maria Darcie; Thompson, Brian; Wu, Degang; Garland, Joshua
2015-04-06
Electrical communication between cardiomyocytes can be perturbed during arrhythmia, but these perturbations are not captured by conventional electrocardiographic metrics. We developed a theoretical framework to quantify electrical communication using information theory metrics in two-dimensional cell lattice models of cardiac excitation propagation. The time series generated by each cell was coarse-grained to 1 when excited or 0 when resting. The Shannon entropy for each cell was calculated from the time series during four clinically important heart rhythms: normal heartbeat, anatomical reentry, spiral reentry and multiple reentry. We also used mutual information to perform spatial profiling of communication during these cardiac arrhythmias. We found that information sharing between cells was spatially heterogeneous. In addition, cardiac arrhythmia significantly impacted information sharing within the heart. Entropy localized the path of the drifting core of spiral reentry, which could be an optimal target of therapeutic ablation. We conclude that information theory metrics can quantitatively assess electrical communication among cardiomyocytes. The traditional concept of the heart as a functional syncytium sharing electrical information cannot predict altered entropy and information sharing during complex arrhythmia. Information theory metrics may find clinical application in the identification of rhythm-specific treatments which are currently unmet by traditional electrocardiographic techniques. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
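The framework described above coarse-grains each cell's trace to 0/1 and then computes Shannon entropy and mutual information. A minimal sketch on two synthetic traces is shown below; the pacing signal, noise level, and function names are illustrative assumptions rather than the authors' lattice model.

```python
import numpy as np

def binary_entropy(x):
    """Shannon entropy (bits) of a 0/1 time series."""
    p1 = x.mean()
    if p1 in (0.0, 1.0):
        return 0.0
    return -(p1 * np.log2(p1) + (1 - p1) * np.log2(1 - p1))

def binary_mutual_information(x, y):
    """Mutual information (bits) between two 0/1 time series."""
    joint = np.zeros((2, 2))
    for a in (0, 1):
        for b in (0, 1):
            joint[a, b] = np.mean((x == a) & (y == b))
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / np.outer(px, py)[nz]))

# Two cells' excitation traces, coarse-grained to 1 (excited) / 0 (resting)
rng = np.random.default_rng(4)
cell_a = (np.sin(np.arange(5000) / 10.0) > 0.6).astype(int)          # regular pacing
cell_b = np.where(rng.random(5000) < 0.9, cell_a, 1 - cell_a)        # noisy copy of its neighbour
print(binary_entropy(cell_a), binary_mutual_information(cell_a, cell_b))
```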
Wu, Zemin; Rong, Chunying; Lu, Tian; Ayers, Paul W; Liu, Shubin
2015-10-28
As a continuation of our recent efforts to quantify chemical reactivity with quantities from the information-theoretic approach within the framework of density functional reactivity theory, the effectiveness of applying these quantities to quantify electrophilicity for the bimolecular nucleophilic substitution (SN2) reactions in both gas phase and aqueous solvent is presented in this work. We examined a total of 21 self-exchange SN2 reactions for the compound with the general chemical formula of R1R2R3C-F, where R1, R2, and R3 represent substituting alkyl groups such as -H, -CH3, -C2H5, -C3H7, and -C4H9 in both gas and solvent phases. Our findings confirm that scaling properties for information-theoretic quantities found elsewhere are still valid. It has also been verified that the barrier height has the strongest correlation with the electrostatic interaction, but the contributions from the exchange-correlation and steric effects, though less significant, are indispensable. We additionally unveiled that the barrier height of these SN2 reactions can reliably be predicted not only by the Hirshfeld charge and information gain at the regioselective carbon atom, as previously reported by us for other systems, but also by other information-theoretic descriptors such as Shannon entropy, Fisher information, and Ghosh-Berkowitz-Parr entropy on the same atom. These new findings provide further insights for the better understanding of the factors impacting the chemical reactivity of this vastly important category of chemical transformations.
Unification of quantum information theory
NASA Astrophysics Data System (ADS)
Abeyesinghe, Anura
We present the unification of many previously disparate results in noisy quantum Shannon theory and the unification of all of noiseless quantum Shannon theory. More specifically we deal here with bipartite, unidirectional, and memoryless quantum Shannon theory. We find all the optimal protocols and quantify the relationship between the resources used, both for the one-shot and for the ensemble case, for what is arguably the most fundamental task in quantum information theory: sharing entangled states between a sender and a receiver. We find that all of these protocols are derived from our one-shot superdense coding protocol and relate nicely to each other. We then move on to noisy quantum information theory and give a simple, direct proof of the "mother" protocol, or rather her generalization to the Fully Quantum Slepian-Wolf protocol (FQSW). FQSW simultaneously accomplishes two goals: quantum communication-assisted entanglement distillation, and state transfer from the sender to the receiver. As a result, in addition to her other "children," the mother protocol generates the state merging primitive of Horodecki, Oppenheim, and Winter as well as a new class of distributed compression protocols for correlated quantum sources, which are optimal for sources described by separable density operators. Moreover, the mother protocol described here is easily transformed into the so-called "father" protocol, demonstrating that the division of single-sender/single-receiver protocols into two families was unnecessary: all protocols in the family are children of the mother.
Jones index, secret sharing and total quantum dimension
NASA Astrophysics Data System (ADS)
Fiedler, Leander; Naaijkens, Pieter; Osborne, Tobias J.
2017-02-01
We study the total quantum dimension in the thermodynamic limit of topologically ordered systems. In particular, using the anyons (or superselection sectors) of such models, we define a secret sharing scheme, storing information invisible to a malicious party, and argue that the total quantum dimension quantifies how well we can perform this task. We then argue that this can be made mathematically rigorous using the index theory of subfactors, originally due to Jones and later extended by Kosaki and Longo. This theory provides us with a ‘relative entropy’ of two von Neumann algebras and a quantum channel, and we argue how these can be used to quantify how much classical information two parties can hide from an adversary. We also review the total quantum dimension in finite systems, in particular how it relates to topological entanglement entropy. It is known that the latter also has an interpretation in terms of secret sharing schemes, although this is shown by completely different methods from ours. Our work provides a different and independent take on this, which at the same time is completely mathematically rigorous. This complementary point of view might be beneficial, for example, when studying the stability of the total quantum dimension when the system is perturbed.
Rapid quantitative chemical mapping of surfaces with sub-2 nm resolution
NASA Astrophysics Data System (ADS)
Lai, Chia-Yun; Perri, Saverio; Santos, Sergio; Garcia, Ricardo; Chiesa, Matteo
2016-05-01
We present a theory that exploits four observables in bimodal atomic force microscopy to produce maps of the Hamaker constant H. The quantitative H maps may be employed by the broader community to directly interpret the high resolution of standard bimodal AFM images as chemical maps while simultaneously quantifying chemistry in the non-contact regime. We further provide a simple methodology to optimize a range of operational parameters for which H is in the closest agreement with the Lifshitz theory in order to (1) simplify data acquisition and (2) generalize the methodology to any set of cantilever-sample systems. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr00496b
Information Processing Capacity of Dynamical Systems
NASA Astrophysics Data System (ADS)
Dambre, Joni; Verstraeten, David; Schrauwen, Benjamin; Massar, Serge
2012-07-01
Many dynamical systems, both natural and artificial, are stimulated by time dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This is bounded by the number of linearly independent state variables of the dynamical system, equaling it if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations for the logistic map, a recurrent neural network, and a two-dimensional reaction diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory.
Robust bidirectional links for photonic quantum networks
Xu, Jin-Shi; Yung, Man-Hong; Xu, Xiao-Ye; Tang, Jian-Shun; Li, Chuan-Feng; Guo, Guang-Can
2016-01-01
Optical fibers are widely used as one of the main tools for transmitting not only classical but also quantum information. We propose and report an experimental realization of a promising method for creating robust bidirectional quantum communication links through paired optical polarization-maintaining fibers. Many limitations of existing protocols can be avoided with the proposed method. In particular, the path and polarization degrees of freedom are combined to deterministically create a photonic decoherence-free subspace without the need for any ancillary photon. This method is input state–independent, robust against dephasing noise, postselection-free, and applicable bidirectionally. To rigorously quantify the amount of quantum information transferred, the optical fibers are analyzed with the tools developed in quantum communication theory. These results not only suggest a practical means for protecting quantum information sent through optical quantum networks but also potentially provide a new physical platform for enriching the structure of the quantum communication theory. PMID:26824069
Corrosion Monitors for Embedded Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, Alex L.; Pfeifer, Kent B.; Casias, Adrian L.
2017-05-01
We have developed and characterized novel in-situ corrosion sensors to monitor and quantify the corrosive potential and history of localized environments. Embedded corrosion sensors can provide information to aid health assessments of internal electrical components including connectors, microelectronics, wires, and other susceptible parts. When combined with other data (e.g. temperature and humidity), theory, and computational simulation, the reliability of monitored systems can be predicted with higher fidelity.
Quantifying falsifiability of scientific theories
NASA Astrophysics Data System (ADS)
Nemenman, Ilya
I argue that the notion of falsifiability, a key concept in defining a valid scientific theory, can be quantified using Bayesian Model Selection, which is a standard tool in modern statistics. This relates falsifiability to the quantitative version of the statistical Occam's razor, and allows transforming some long-running arguments about validity of scientific theories from philosophical discussions to rigorous mathematical calculations.
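Bayesian Model Selection, as proposed above, rewards sharper (more falsifiable) theories through their marginal likelihoods. The toy calculation below compares a sharp model that fixes a success probability against a flexible model with a uniform prior; the data counts are invented for illustration and the quantitative Occam's razor appears as the log Bayes factor.

```python
from math import comb, log

def log_evidence_fixed(k, n, p0=0.5):
    """Log marginal likelihood of a sharp model that fixes the success probability at p0."""
    return k * log(p0) + (n - k) * log(1 - p0)

def log_evidence_flexible(k, n):
    """Log marginal likelihood of a flexible model with a uniform prior on the success
    probability: integral of p^k (1-p)^(n-k) dp = 1 / ((n + 1) * C(n, k))."""
    return -log((n + 1) * comb(n, k))

# 100 observations, 52 successes: the data barely deviate from the sharp prediction
k, n = 52, 100
log_bf = log_evidence_fixed(k, n) - log_evidence_flexible(k, n)
print(f"log Bayes factor (sharp vs flexible): {log_bf:.2f}")   # > 0 favours the more falsifiable model
```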
Correlations and flow of information between the New York Times and stock markets
NASA Astrophysics Data System (ADS)
García-Medina, Andrés; Sandoval, Leonidas; Bañuelos, Efraín Urrutia; Martínez-Argüello, A. M.
2018-07-01
We use Random Matrix Theory (RMT) and information theory to analyze the correlations and flow of information between 64,939 news items from The New York Times and 40 world financial indices during 10 months within the period 2015-2016. The set of news items is quantified and transformed into daily polarity time series using tools from sentiment analysis. The results show that a common factor influences the world indices and news, which even share the same dynamics. Furthermore, the global correlation structure is found to be preserved when adding white noise, which indicates that the correlations are not due to sample-size effects. Likewise, we find a considerable amount of information flowing from news to world indices for some specific delay. This is of practical interest for trading purposes. Our results suggest a deep relationship between news and world indices, and show a situation where news drive world market movements, giving new evidence to support behavioral finance as the current economic paradigm.
Information theoretic quantification of diagnostic uncertainty.
Westover, M Brandon; Eiseman, Nathaniel A; Cash, Sydney S; Bianchi, Matt T
2012-01-01
Diagnostic test interpretation remains a challenge in clinical practice. Most physicians receive training in the use of Bayes' rule, which specifies how the sensitivity and specificity of a test for a given disease combine with the pre-test probability to quantify the change in disease probability incurred by a new test result. However, multiple studies demonstrate physicians' deficiencies in probabilistic reasoning, especially with unexpected test results. Information theory, a branch of probability theory dealing explicitly with the quantification of uncertainty, has been proposed as an alternative framework for diagnostic test interpretation, but is even less familiar to physicians. We have previously addressed one key challenge in the practical application of Bayes theorem: the handling of uncertainty in the critical first step of estimating the pre-test probability of disease. This essay aims to present the essential concepts of information theory to physicians in an accessible manner, and to extend previous work regarding uncertainty in pre-test probability estimation by placing this type of uncertainty within a principled information theoretic framework. We address several obstacles hindering physicians' application of information theoretic concepts to diagnostic test interpretation. These include issues of terminology (mathematical meanings of certain information theoretic terms differ from clinical or common parlance) as well as the underlying mathematical assumptions. Finally, we illustrate how, in information theoretic terms, one can understand the effect on diagnostic uncertainty of considering ranges instead of simple point estimates of pre-test probability.
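The abstract's framing can be made concrete with Bayes' rule plus entropy: a test result updates the disease probability, and the binary entropy before and after quantifies the residual diagnostic uncertainty. The sensitivity, specificity, and pre-test probability below are assumed illustrative values; note that an unexpected positive result at a low pre-test probability can raise the entropy even as it shifts the probability, which is the kind of case the authors highlight.

```python
from math import log2

def binary_entropy(p):
    """Uncertainty (bits) about the presence of disease at probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * log2(p) + (1 - p) * log2(1 - p))

def post_test_probability(pre, sens, spec, positive=True):
    """Bayes' rule for a dichotomous test result."""
    if positive:
        return sens * pre / (sens * pre + (1 - spec) * (1 - pre))
    return (1 - sens) * pre / ((1 - sens) * pre + spec * (1 - pre))

pre, sens, spec = 0.10, 0.90, 0.80
post = post_test_probability(pre, sens, spec, positive=True)
print(f"post-test probability: {post:.2f}")
print(f"uncertainty before: {binary_entropy(pre):.2f} bits, after: {binary_entropy(post):.2f} bits")
```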
The engineering of cybernetic systems
NASA Astrophysics Data System (ADS)
Fry, Robert L.
2002-05-01
This tutorial develops a logical basis for the engineering of systems that operate cybernetically. The term cybernetic system has a clear quantitative definition. It is a system that dynamically matches acquired information to selected actions relative to a computational issue that defines the essential purpose of the system or machine. This notion requires that information and control be further quantified. The logic of questions and assertions as developed by Cox provides one means of doing this. The design and operation of cybernetic systems can be understood by contrasting these kinds of systems with communication systems and information theory as developed by Shannon. The joint logic of questions and assertions can be seen to underlie and be common to both information theory as applied to the design of discrete communication systems and to a theory of discrete general systems. The joint logic captures a natural complementarity between systems that transmit and receive information and those that acquire and act on it. Specific comparisons and contrasts are made between the source rate and channel capacity of a communication system and the acquisition rate and control capacity of a general system. An overview is provided of the joint logic of questions and assertions and the ties that this logic has to both conventional information theory and to a general theory of systems. I-diagrams, the interrogative complement of Venn diagrams, are described as providing valuable reasoning tools. An initial framework is suggested for the design of cybernetic systems. Two examples are given to illustrate this framework as applied to discrete cybernetic systems. These examples include a predator-prey problem as illustrated through "The Dog Chrysippus Pursuing its Prey," and the derivation of a single-neuron system that operates cybernetically and is biologically plausible. Future areas of research are highlighted which require development for a mature engineering framework.
Frobenius-norm-based measures of quantum coherence and asymmetry
Yao, Yao; Dong, G. H.; Xiao, Xing; Sun, C. P.
2016-01-01
We formulate the Frobenius-norm-based measures for quantum coherence and asymmetry respectively. In contrast to the resource theory of coherence and asymmetry, we construct a natural measure of quantum coherence inspired from optical coherence theory while the group theoretical approach is employed to quantify the asymmetry of quantum states. Besides their simple structures and explicit physical meanings, we observe that these quantities are intimately related to the purity (or linear entropy) of the corresponding quantum states. Remarkably, we demonstrate that the proposed coherence quantifier is not only a measure of mixedness, but also an intrinsic (basis-independent) quantification of quantum coherence contained in quantum states, which can also be viewed as a normalized version of Brukner-Zeilinger invariant information. In our context, the asymmetry of N-qubit quantum systems is considered under local independent and collective transformations. Intriguingly, it is illustrated that the collective effect has a significant impact on the asymmetry measure, and quantum correlation between subsystems plays a non-negligible role in this circumstance. PMID:27558009
Delis, Ioannis; Berret, Bastien; Pozzo, Thierry; Panzeri, Stefano
2013-01-01
Muscle synergies have been hypothesized to be the building blocks used by the central nervous system to generate movement. According to this hypothesis, the accomplishment of various motor tasks relies on the ability of the motor system to recruit a small set of synergies on a single-trial basis and combine them in a task-dependent manner. It is conceivable that this requires a fine tuning of the trial-to-trial relationships between the synergy activations. Here we develop an analytical methodology to address the nature and functional role of trial-to-trial correlations between synergy activations, which is designed to help to better understand how these correlations may contribute to generating appropriate motor behavior. The algorithm we propose first divides correlations between muscle synergies into types (noise correlations, quantifying the trial-to-trial covariations of synergy activations at fixed task, and signal correlations, quantifying the similarity of task tuning of the trial-averaged activation coefficients of different synergies), and then uses single-trial methods (task-decoding and information theory) to quantify their overall effect on the task-discriminating information carried by muscle synergy activations. We apply the method to both synchronous and time-varying synergies and exemplify it on electromyographic data recorded during performance of reaching movements in different directions. Our method reveals the robust presence of information-enhancing patterns of signal and noise correlations among pairs of synchronous synergies, and shows that they enhance by 9-15% (depending on the set of tasks) the task-discriminating information provided by the synergy decompositions. We suggest that the proposed methodology could be useful for assessing whether single-trial activations of one synergy depend on activations of other synergies and quantifying the effect of such dependences on the task-to-task differences in muscle activation patterns.
Equitability, mutual information, and the maximal information coefficient.
Kinney, Justin B; Atwal, Gurinder S
2014-03-04
How should one quantify the strength of association between two random variables without bias for relationships of a specific form? Despite its conceptual simplicity, this notion of statistical "equitability" has yet to receive a definitive mathematical formalization. Here we argue that equitability is properly formalized by a self-consistency condition closely related to the Data Processing Inequality. Mutual information, a fundamental quantity in information theory, is shown to satisfy this equitability criterion. These findings are at odds with the recent work of Reshef et al. [Reshef DN, et al. (2011) Science 334(6062):1518-1524], which proposed an alternative definition of equitability and introduced a new statistic, the "maximal information coefficient" (MIC), said to satisfy equitability in contradistinction to mutual information. These conclusions, however, were supported only with limited simulation evidence, not with mathematical arguments. Upon revisiting these claims, we prove that the mathematical definition of equitability proposed by Reshef et al. cannot be satisfied by any (nontrivial) dependence measure. We also identify artifacts in the reported simulation evidence. When these artifacts are removed, estimates of mutual information are found to be more equitable than estimates of MIC. Mutual information is also observed to have consistently higher statistical power than MIC. We conclude that estimating mutual information provides a natural (and often practical) way to equitably quantify statistical associations in large datasets.
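For readers who want to see how mutual information is estimated from data in practice, the sketch below uses a simple equal-width binning estimator. It is an illustration only, not the estimator used by the authors, and binned estimates carry a bias that more careful methods correct.

```python
# Illustrative sketch (not the authors' code): estimate mutual information in bits
# between two samples by discretizing them into equal-width bins.
import numpy as np

def mutual_information(x, y, bins=16):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal of x, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)     # marginal of y, shape (1, bins)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log2(pxy[nonzero] / (px @ py)[nonzero])))

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x**2 + 0.1 * rng.normal(size=5000)      # a nonlinear, non-monotonic dependence
print(mutual_information(x, y))             # well above the near-zero independent case
print(mutual_information(x, rng.normal(size=5000)))
```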
Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.; ...
2016-10-20
Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.
Sippel, Sebastian; Mahecha, Miguel D.; Hauhs, Michael; Bodesheim, Paul; Kaminski, Thomas; Gans, Fabian; Rosso, Osvaldo A.
2016-01-01
Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. We demonstrate here that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics. PMID:27764187
Novel information theory-based measures for quantifying incongruence among phylogenetic trees.
Salichos, Leonidas; Stamatakis, Alexandros; Rokas, Antonis
2014-05-01
Phylogenies inferred from different data matrices often conflict with each other, necessitating the development of measures that quantify this incongruence. Here, we introduce novel measures that use information theory to quantify the degree of conflict or incongruence among all nontrivial bipartitions present in a set of trees. The first measure, internode certainty (IC), calculates the degree of certainty for a given internode by considering the frequency of the bipartition defined by the internode (internal branch) in a given set of trees jointly with that of the most prevalent conflicting bipartition in the same tree set. The second measure, IC All (ICA), calculates the degree of certainty for a given internode by considering the frequency of the bipartition defined by the internode in a given set of trees in conjunction with that of all conflicting bipartitions in the same underlying tree set. Finally, the tree certainty (TC) and TC All (TCA) measures are the sum of IC and ICA values across all internodes of a phylogeny, respectively. IC, ICA, TC, and TCA can be calculated from different types of data that contain nontrivial bipartitions, ranging from bootstrap replicate trees to gene trees or individual characters. Given a set of phylogenetic trees, the IC and ICA values of a given internode reflect its specific degree of incongruence, and the TC and TCA values describe the global degree of incongruence between trees in the set. All four measures are implemented and freely available in version 8.0.0 and subsequent versions of the widely used program RAxML.
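The IC calculation described above admits a compact sketch. Under the assumption, consistent with the verbal description but not taken from the authors' RAxML implementation, that IC equals one minus the base-2 entropy of the renormalized frequencies of the focal bipartition and its most prevalent conflicting bipartition, a minimal version is:

```python
# Hedged sketch of internode certainty (IC): renormalize the frequencies of the focal
# bipartition and its most prevalent conflicting bipartition, then subtract their
# Shannon entropy (base 2) from 1. Input frequencies here are illustrative.
import math

def internode_certainty(freq_focal: float, freq_conflict: float) -> float:
    total = freq_focal + freq_conflict
    probs = [freq_focal / total, freq_conflict / total]
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    return 1.0 - entropy

print(internode_certainty(0.95, 0.05))  # strong support: IC close to 1
print(internode_certainty(0.50, 0.50))  # maximal conflict: IC = 0
```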
Measuring Integrated Information from the Decoding Perspective
Oizumi, Masafumi; Amari, Shun-ichi; Yanagawa, Toru; Fujii, Naotaka; Tsuchiya, Naotsugu
2016-01-01
Accumulating evidence indicates that the capacity to integrate information in the brain is a prerequisite for consciousness. Integrated Information Theory (IIT) of consciousness provides a mathematical approach to quantifying the information integrated in a system, called integrated information, Φ. Integrated information is defined theoretically as the amount of information a system generates as a whole, above and beyond the amount of information its parts independently generate. IIT predicts that the amount of integrated information in the brain should reflect levels of consciousness. Empirical evaluation of this theory requires computing integrated information from neural data acquired from experiments, although difficulties with using the original measure Φ preclude such computations. Although some practical measures have been previously proposed, we found that these measures fail to satisfy the theoretical requirements as a measure of integrated information. Measures of integrated information should satisfy the lower and upper bounds as follows: The lower bound of integrated information should be 0 and is equal to 0 when the system does not generate information (no information) or when the system comprises independent parts (no integration). The upper bound of integrated information is the amount of information generated by the whole system. Here we derive the novel practical measure Φ* by introducing a concept of mismatched decoding developed from information theory. We show that Φ* is properly bounded from below and above, as required, as a measure of integrated information. We derive the analytical expression of Φ* under the Gaussian assumption, which makes it readily applicable to experimental data. Our novel measure Φ* can generally be used as a measure of integrated information in research on consciousness, and also as a tool for network analysis on diverse areas of biology. PMID:26796119
Revisiting the European sovereign bonds with a permutation-information-theory approach
NASA Astrophysics Data System (ADS)
Fernández Bariviera, Aurelio; Zunino, Luciano; Guercio, María Belén; Martinez, Lisana B.; Rosso, Osvaldo A.
2013-12-01
In this paper we study the evolution of the informational efficiency, in its weak form, of seventeen European sovereign bond time series. We aim to assess the impact of two specific economic situations on the hypothetical random behavior of these time series: the establishment of a common currency, and a wide and deep financial crisis. In order to evaluate informational efficiency we use permutation quantifiers derived from information theory. Specifically, time series are ranked according to two metrics that measure the intrinsic structure of their correlations: permutation entropy and permutation statistical complexity. These measures provide the rectangular coordinates of the complexity-entropy causality plane; the planar location of a time series in this representation space reveals its degree of informational efficiency. According to our results, the currency union helped to homogenize the stochastic characteristics of the time series and synchronized their random behavior. Additionally, the 2008 financial crisis uncovered differences within the apparently homogeneous European sovereign markets and revealed country-specific characteristics that were partially hidden during the heyday of the monetary union.
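To make the permutation quantifiers concrete, here is a minimal sketch of normalized Bandt-Pompe permutation entropy, the entropy coordinate of the complexity-entropy causality plane; the embedding order, delay, and test signals are illustrative choices, not the settings used in the paper.

```python
# Illustrative sketch: normalized Bandt-Pompe permutation entropy of a 1-D series.
# Values near 1 indicate noise-like (informationally efficient) dynamics; values
# well below 1 indicate strong temporal structure.
from collections import Counter
import math
import random

def permutation_entropy(series, order=4, delay=1):
    counts = Counter()
    for i in range(len(series) - (order - 1) * delay):
        window = series[i:i + order * delay:delay]
        pattern = tuple(sorted(range(order), key=window.__getitem__))  # ordinal pattern
        counts[pattern] += 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / math.log2(math.factorial(order))   # normalize to [0, 1]

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(5000)]       # close to 1
trend = [math.sin(0.05 * i) for i in range(5000)]       # much smaller than 1
print(permutation_entropy(noise), permutation_entropy(trend))
```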
INFORMATION-THEORETIC INEQUALITIES ON UNIMODULAR LIE GROUPS
Chirikjian, Gregory S.
2010-01-01
Classical inequalities used in information theory such as those of de Bruijn, Fisher, Cramér, Rao, and Kullback carry over in a natural way from Euclidean space to unimodular Lie groups. These are groups that possess an integration measure that is simultaneously invariant under left and right shifts. All commutative groups are unimodular. And even in noncommutative cases unimodular Lie groups share many of the useful features of Euclidean space. The rotation and Euclidean motion groups, which are perhaps the most relevant Lie groups to problems in geometric mechanics, are unimodular, as are the unitary groups that play important roles in quantum computing. The extension of core information theoretic inequalities defined in the setting of Euclidean space to this broad class of Lie groups is potentially relevant to a number of problems relating to information gathering in mobile robotics, satellite attitude control, tomographic image reconstruction, biomolecular structure determination, and quantum information theory. In this paper, several definitions are extended from the Euclidean setting to that of Lie groups (including entropy and the Fisher information matrix), and inequalities analogous to those in classical information theory are derived and stated in the form of fifteen small theorems. In all such inequalities, addition of random variables is replaced with the group product, and the appropriate generalization of convolution of probability densities is employed. An example from the field of robotics demonstrates how several of these results can be applied to quantify the amount of information gained by pooling different sensory inputs. PMID:21113416
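For orientation, the entropy carried over to this setting can be written in the following standard form, stated here as a reference with densities taken with respect to the bi-invariant Haar measure:

```latex
S(f) \;=\; -\int_{G} f(g)\,\log f(g)\;\mathrm{d}g,
\qquad
(f_1 * f_2)(g) \;=\; \int_{G} f_1(h)\, f_2(h^{-1} g)\;\mathrm{d}h,
```

where dg is the bi-invariant Haar measure on the unimodular group G and the convolution * plays the role that addition of random variables plays in the Euclidean versions of these inequalities.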
Kostal, Lubomir; Kobayashi, Ryota
2015-10-01
Information theory quantifies the ultimate limits on reliable information transfer by means of the channel capacity. However, the channel capacity is known to be an asymptotic quantity, assuming unlimited metabolic cost and computational power. We investigate a single-compartment Hodgkin-Huxley type neuronal model under the spike-rate coding scheme and address how the metabolic cost and the decoding complexity affect the optimal information transmission. We find that the sub-threshold stimulation regime, although attaining the smallest capacity, allows for the most efficient balance between the information transmission and the metabolic cost. Furthermore, we determine post-synaptic firing rate histograms that are optimal from the information-theoretic point of view, which enables the comparison of our results with experimental data. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
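Since the channel capacity is central to the argument above, the following sketch computes it for a discrete memoryless channel with the Blahut-Arimoto algorithm. This is a generic illustration of the capacity concept; the binary symmetric channel is an invented example, not the neuronal model or the metabolically constrained optimization of the study.

```python
# Hedged sketch: Blahut-Arimoto computation of the capacity (in bits) of a discrete
# memoryless channel specified by its transition matrix p(y|x).
import numpy as np

def channel_capacity(p_y_given_x, tol=1e-9, max_iter=10_000):
    p = np.asarray(p_y_given_x, dtype=float)    # rows: inputs x, columns: outputs y
    r = np.full(p.shape[0], 1.0 / p.shape[0])   # input distribution, initialized uniform
    lower = 0.0
    for _ in range(max_iter):
        q = r @ p                               # output distribution induced by r
        # D(p(.|x) || q) for each input x, skipping zero entries of each row
        d = np.array([(row[row > 0] * np.log(row[row > 0] / q[row > 0])).sum() for row in p])
        c = np.exp(d)
        lower, upper = np.log((r * c).sum()), np.log(c.max())
        r = r * c / (r * c).sum()               # multiplicative update of the input distribution
        if upper - lower < tol:
            break
    return lower / np.log(2)                    # capacity estimate in bits

# Binary symmetric channel with crossover 0.1: capacity = 1 - H2(0.1) ~ 0.531 bits
print(channel_capacity([[0.9, 0.1], [0.1, 0.9]]))
```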
Uncertainties in Forecasting Streamflow using Entropy Theory
NASA Astrophysics Data System (ADS)
Cui, H.; Singh, V. P.
2017-12-01
Streamflow forecasting is essential in river restoration, reservoir operation, power generation, irrigation, navigation, and water management. However, uncertainty always accompanies a forecast, which may affect the forecasting results and lead to large variations. Therefore, uncertainties must be considered and assessed properly when forecasting streamflow for water management. The aim of our work is to quantify the uncertainties involved in forecasting streamflow and to provide a reliable streamflow forecast. Although streamflow time series are stochastic, they exhibit seasonal and periodic patterns. Therefore, streamflow forecasting entails modeling seasonality, periodicity, and the correlation structure of the series, and assessing uncertainties. This study applies entropy theory to forecast streamflow and to measure uncertainties during the forecasting process. To apply entropy theory for streamflow forecasting, spectral analysis is combined with time series analysis, as spectral analysis can be employed to characterize patterns of streamflow variation and identify the periodicity of streamflow. That is, it permits the extraction of significant information for understanding the streamflow process and predicting it. Application of entropy theory for streamflow forecasting involves determination of the spectral density, determination of parameters, and extension of the autocorrelation function. The uncertainties introduced by the precipitation input, the forecasting model, and the forecasted results are measured separately using entropy. Using information theory, we will describe how these uncertainties are transported and aggregated through these processes.
The rules of information aggregation and emergence of collective intelligent behavior.
Bettencourt, Luís M A
2009-10-01
Information is a peculiar quantity. Unlike matter and energy, which are conserved by the laws of physics, the aggregation of knowledge from many sources can in fact produce more information (synergy) or less (redundancy) than the sum of its parts. This feature can endow groups with problem-solving strategies that are superior to those possible among noninteracting individuals and, in turn, may provide a selection drive toward collective cooperation and coordination. Here we explore the formal properties of information aggregation as a general principle for explaining features of social organization. We quantify information in terms of the general formalism of information theory, which also prescribes the rules of how different pieces of evidence inform the solution of a given problem. We then show how several canonical examples of collective cognition and coordination can be understood through principles of minimization of uncertainty (maximization of predictability) under information pooling over many individuals. We discuss in some detail how collective coordination in swarms, markets, natural language processing, and collaborative filtering may be guided by the optimal aggregation of information in social collectives. We also identify circumstances when these processes fail, leading, for example, to inefficient markets. The contrast to approaches to understand coordination and collaboration via decision and game theory is also briefly discussed. Copyright © 2009 Cognitive Science Society, Inc.
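One standard way to make the synergy/redundancy distinction above precise, offered here for orientation and not necessarily the exact formulation used by the author, compares the information two sources provide jointly about a problem Y with the sum of their individual contributions:

```latex
\Delta \;=\; I(X_1, X_2; Y) \;-\; \bigl[\, I(X_1; Y) + I(X_2; Y) \,\bigr],
```

with Δ > 0 indicating synergy (the pooled evidence exceeds the sum of its parts) and Δ < 0 indicating redundancy.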
Effective field theory models for nonviolent information transfer from black holes
NASA Astrophysics Data System (ADS)
Giddings, Steven B.; Shi, Yinbo
2014-06-01
Transfer of quantum information from the interior of a black hole to its atmosphere is described, in models based on effective field theory. This description illustrates that such transfer need not be violent to the semiclassical geometry or to infalling observers, and in particular can avoid producing a singular horizon or "firewall". One can specifically quantify the rate of information transfer and show that a rate necessary to unitarize black hole evaporation produces a relatively mild modification to the stress tensor near the horizon. In an exterior description of the transfer, the new interactions responsible for it are approximated by "effective sources" acting on fields in the black hole atmosphere. If the necessary interactions couple to general modes in the black hole atmosphere, one also finds a straightforward mechanism for information transfer rates to increase when a black hole is mined, avoiding paradoxical behavior. Correspondence limits are discussed, in the presence of such new interactions, for both small black holes and large ones; the near-horizon description of the latter is approximately that of Rindler space.
Quantifying Information Gain from Dynamic Downscaling Experiments
NASA Astrophysics Data System (ADS)
Tian, Y.; Peters-Lidard, C. D.
2015-12-01
Dynamic climate downscaling experiments are designed to produce information at higher spatial and temporal resolutions. Such additional information is generated from the low-resolution initial and boundary conditions via the predictive power of the physical laws. However, errors and uncertainties in the initial and boundary conditions can be propagated and even amplified in the downscaled simulations. Additionally, the limit of predictability in nonlinear dynamical systems will also dampen the information gain, even if the initial and boundary conditions were error-free. Thus it is critical to quantitatively define and measure the amount of information increase from dynamic downscaling experiments, to better understand and appreciate their potential and limitations. We present a scheme to objectively measure the information gain from such experiments. The scheme is based on information theory, and we argue that if a downscaling experiment is to exhibit value, it has to produce more information than what can be simply inferred from information sources already available. These information sources include the initial and boundary conditions, the coarse-resolution model in which the higher-resolution models are embedded, and the same set of physical laws. These existing information sources define an "information threshold" as a function of the spatial and temporal resolution, and this threshold serves as a benchmark to quantify the information gain from the downscaling experiments, or any other approaches. For a downscaling experiment to show any value, the information has to be above this threshold. A recent NASA-supported downscaling experiment is used as an example to illustrate the application of this scheme.
Quantum coherence via skew information and its polygamy
NASA Astrophysics Data System (ADS)
Yu, Chang-shui
2017-04-01
Quantifying coherence is a key task in both quantum-mechanical theory and practical applications. Here, a reliable quantum coherence measure is presented by utilizing the quantum skew information of the state of interest subject to a certain broken observable. This coherence measure is proven to fulfill all the criteria (especially the strong monotonicity) recently introduced in the resource theories of quantum coherence. The coherence measure has an analytic expression and an obvious operational meaning related to quantum metrology. In terms of this coherence measure, the distribution of the quantum coherence, i.e., how the quantum coherence is distributed among the multiple parties, is studied and a corresponding polygamy relation is proposed. As a further application, it is found that the coherence measure forms the natural upper bounds for quantum correlations prepared by incoherent operations. The experimental measurements of our coherence measure as well as the relative-entropy coherence and lp-norm coherence are studied finally.
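For reference, the Wigner-Yanase skew information that such coherence measures build on has a simple closed form. The NumPy sketch below evaluates it for a qubit; the state and observable are arbitrary examples, not taken from the paper, and this snippet is not the paper's full coherence measure.

```python
# Illustration only: Wigner-Yanase skew information I(rho, K) = -1/2 Tr([sqrt(rho), K]^2),
# the quantity that skew-information-based coherence measures are built on.
import numpy as np
from scipy.linalg import sqrtm

def skew_information(rho, K):
    sqrt_rho = sqrtm(rho)
    comm = sqrt_rho @ K - K @ sqrt_rho        # commutator [sqrt(rho), K]
    return float(np.real(-0.5 * np.trace(comm @ comm)))

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # pure |+><+|: maximal coherence w.r.t. sigma_z basis
mixed = np.array([[0.5, 0.0], [0.0, 0.5]], dtype=complex)  # maximally mixed: no coherence
print(skew_information(plus, sigma_z))    # 1.0
print(skew_information(mixed, sigma_z))   # 0.0
```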
Recurrence measure of conditional dependence and applications.
Ramos, Antônio M T; Builes-Jaramillo, Alejandro; Poveda, Germán; Goswami, Bedartha; Macau, Elbert E N; Kurths, Jürgen; Marwan, Norbert
2017-05-01
Identifying causal relations from observational data sets has posed great challenges in data-driven causality inference studies. One of the successful approaches to detect direct coupling in the information theory framework is transfer entropy. However, the core of entropy-based tools lies in the probability estimation of the underlying variables. Here we propose a data-driven approach for causality inference that incorporates recurrence plot features into the framework of information theory. We define it as the recurrence measure of conditional dependence (RMCD), and we present some applications. The RMCD quantifies the causal dependence between two processes based on joint recurrence patterns between the past of the possible driver and present of the potentially driven, excepting the contribution of the contemporaneous past of the driven variable. Finally, it can unveil the time scale of the influence of the sea-surface temperature of the Pacific Ocean on the precipitation in the Amazonia during recent major droughts.
Recurrence measure of conditional dependence and applications
NASA Astrophysics Data System (ADS)
Ramos, Antônio M. T.; Builes-Jaramillo, Alejandro; Poveda, Germán; Goswami, Bedartha; Macau, Elbert E. N.; Kurths, Jürgen; Marwan, Norbert
2017-05-01
Identifying causal relations from observational data sets has posed great challenges in data-driven causality inference studies. One of the successful approaches to detect direct coupling in the information theory framework is transfer entropy. However, the core of entropy-based tools lies in the probability estimation of the underlying variables. Here we propose a data-driven approach for causality inference that incorporates recurrence plot features into the framework of information theory. We define it as the recurrence measure of conditional dependence (RMCD), and we present some applications. The RMCD quantifies the causal dependence between two processes based on joint recurrence patterns between the past of the possible driver and present of the potentially driven, excepting the contribution of the contemporaneous past of the driven variable. Finally, it can unveil the time scale of the influence of the sea-surface temperature of the Pacific Ocean on the precipitation in the Amazonia during recent major droughts.
Visual degradation in Leonardo da Vinci's iconic self-portrait: A nanoscale study
NASA Astrophysics Data System (ADS)
Conte, A. Mosca; Pulci, O.; Misiti, M. C.; Lojewska, J.; Teodonio, L.; Violante, C.; Missori, M.
2014-06-01
The discoloration of ancient paper, due to the development of oxidized groups acting as chromophores in its chief component, cellulose, is responsible for severe visual degradation in ancient artifacts. By adopting a non-destructive approach based on the combination of optical reflectance measurements and time-dependent density functional theory ab-initio calculations, we describe and quantify the chromophores affecting Leonardo da Vinci's iconic self-portrait. Their relative concentrations are very similar to those measured in modern and ancient samples aged in humid environments. This analysis quantifies the present level of optical degradation of Leonardo da Vinci's self-portrait which, compared with future measurements, will assess its degradation rate. This is fundamental information for planning appropriate conservation strategies.
Information-Based Analysis of Data Assimilation (Invited)
NASA Astrophysics Data System (ADS)
Nearing, G. S.; Gupta, H. V.; Crow, W. T.; Gong, W.
2013-12-01
Data assimilation is defined as the Bayesian conditioning of uncertain model simulations on observations for the purpose of reducing uncertainty about model states. Practical data assimilation methods make the application of Bayes' law tractable either by employing assumptions about the prior, posterior and likelihood distributions (e.g., the Kalman family of filters) or by using resampling methods (e.g., bootstrap filter). We propose to quantify the efficiency of these approximations in an OSSE setting using information theory and, in an OSSE or real-world validation setting, to measure the amount - and more importantly, the quality - of information extracted from observations during data assimilation. To analyze DA assumptions, uncertainty is quantified as the Shannon-type entropy of a discretized probability distribution. The maximum amount of information that can be extracted from observations about model states is the mutual information between states and observations, which is equal to the reduction in entropy in our estimate of the state due to Bayesian filtering. The difference between this potential and the actual reduction in entropy due to Kalman (or other type of) filtering measures the inefficiency of the filter assumptions. Residual uncertainty in DA posterior state estimates can be attributed to three sources: (i) non-injectivity of the observation operator, (ii) noise in the observations, and (iii) filter approximations. The contribution of each of these sources is measurable in an OSSE setting. The amount of information extracted from observations by data assimilation (or system identification, including parameter estimation) can also be measured by Shannon's theory. Since practical filters are approximations of Bayes' law, it is important to know whether the information that is extracted from observations by a filter is reliable. We define information as either good or bad, and propose to measure these two types of information using partial Kullback-Leibler divergences. Defined this way, good and bad information sum to total information. This segregation of information into good and bad components requires a validation target distribution; in a DA OSSE setting, this can be the true Bayesian posterior, but in a real-world setting the validation target might be determined by a set of in situ observations.
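The "potential information" benchmark used above is the standard mutual-information identity, written here in generic notation for orientation:

```latex
I(S; O) \;=\; H(S) \;-\; H(S \mid O),
```

so conditioning the state estimate S on the observations O can reduce its entropy by at most I(S; O); the shortfall of the entropy reduction actually achieved by a Kalman-type filter relative to this bound measures the inefficiency of the filter's approximations.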
Survey on nonlocal games and operator space theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palazuelos, Carlos, E-mail: cpalazue@mat.ucm.es; Vidick, Thomas, E-mail: vidick@cms.caltech.edu
This review article is concerned with a recently uncovered connection between operator spaces, a noncommutative extension of Banach spaces, and quantum nonlocality, a striking phenomenon which underlies many of the applications of quantum mechanics to information theory, cryptography, and algorithms. Using the framework of nonlocal games, we relate measures of the nonlocality of quantum mechanics to certain norms in the Banach and operator space categories. We survey recent results that exploit this connection to derive large violations of Bell inequalities, study the complexity of the classical and quantum values of games and their relation to Grothendieck inequalities, and quantify the nonlocality of different classes of entangled states.
Trade-off between information and disturbance in qubit thermometry
NASA Astrophysics Data System (ADS)
Seveso, Luigi; Paris, Matteo G. A.
2018-03-01
We address the trade-off between information and disturbance in qubit thermometry from the perspective of quantum estimation theory. Given a quantum measurement, we quantify information via the Fisher information of the measurement and disturbance via four different figures of merit, which capture different aspects (statistical, thermodynamical, geometrical) of the trade-off. For each disturbance measure, the efficient measurements, i.e., the measurements that introduce a disturbance not greater than any other measurement extracting the same amount of information, are determined explicitly. The family of efficient measurements varies with the choice of the disturbance measure. On the other hand, commutativity between the elements of the probability operator-valued measure (POVM) and the equilibrium state of the thermometer is a necessary condition for efficiency with respect to any figure of disturbance.
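The information side of this trade-off is the classical Fisher information of the chosen measurement, which for a POVM {Π_x} acting on the thermometer state ρ_θ takes the standard form, quoted here for reference:

```latex
F(\theta) \;=\; \sum_{x} \frac{\bigl[\partial_\theta\, p(x\mid\theta)\bigr]^{2}}{p(x\mid\theta)},
\qquad
p(x\mid\theta) = \operatorname{Tr}\!\left[\Pi_x \, \rho_\theta\right],
```

and, by the Cramér-Rao bound, 1/F(θ) lower-bounds the variance of any unbiased estimator of the parameter θ built from the measurement outcomes.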
Krekels, Ehj; Novakovic, A M; Vermeulen, A M; Friberg, L E; Karlsson, M O
2017-08-01
As biomarkers are lacking, multi-item questionnaire-based tools like the Positive and Negative Syndrome Scale (PANSS) are used to quantify disease severity in schizophrenia. Analyzing composite PANSS scores as continuous data discards information and violates the numerical nature of the scale. Here a longitudinal analysis based on Item Response Theory is presented using PANSS data from phase III clinical trials. Latent disease severity variables were derived from item-level data on the positive, negative, and general PANSS subscales each. On all subscales, the time course of placebo responses were best described with Weibull models, and dose-independent functions with exponential models to describe the onset of the full effect were used to describe paliperidone's effect. Placebo and drug effect were most pronounced on the positive subscale. The final model successfully describes the time course of treatment effects on the individual PANSS item-levels, on all PANSS subscale levels, and on the total score level. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
Cannon, Jonathan
2017-01-01
Mutual information is a commonly used measure of communication between neurons, but little theory exists describing the relationship between mutual information and the parameters of the underlying neuronal interaction. Such a theory could help us understand how specific physiological changes affect the capacity of neurons to synaptically communicate, and, in particular, they could help us characterize the mechanisms by which neuronal dynamics gate the flow of information in the brain. Here we study a pair of linear-nonlinear-Poisson neurons coupled by a weak synapse. We derive an analytical expression describing the mutual information between their spike trains in terms of synapse strength, neuronal activation function, the time course of postsynaptic currents, and the time course of the background input received by the two neurons. This expression allows mutual information calculations that would otherwise be computationally intractable. We use this expression to analytically explore the interaction of excitation, information transmission, and the convexity of the activation function. Then, using this expression to quantify mutual information in simulations, we illustrate the information-gating effects of neural oscillations and oscillatory coherence, which may either increase or decrease the mutual information across the synapse depending on parameters. Finally, we show analytically that our results can quantitatively describe the selection of one information pathway over another when multiple sending neurons project weakly to a single receiving neuron.
An EEG should not be obtained routinely after first unprovoked seizure in childhood.
Gilbert, D L; Buncher, C R
2000-02-08
To quantify and analyze the value of expected information from an EEG after first unprovoked seizure in childhood. An EEG is often recommended as part of the standard diagnostic evaluation after first seizure. A MEDLINE search from 1980 to 1998 was performed. From eligible studies, data on EEG results and seizure recurrence risk in children were abstracted, and sensitivity, specificity, and positive and negative predictive values of EEG in predicting recurrence were calculated. Linear information theory was used to quantify and compare the expected information from the EEG in all studies. Standard test-treat decision analysis with a treatment threshold at 80% recurrence risk was used to determine the range of pretest recurrence probabilities over which testing affects treatment decisions. Four studies involving 831 children were eligible for analysis. At best, the EEG had a sensitivity of 61%, a specificity of 71%, and an expected information of 0.16 out of a possible 0.50. The pretest probability of recurrence was less than the lower limit of the range for rational testing in all studies. In this analysis, the quantity of expected information from the EEG was too low to affect treatment recommendations in most patients. EEG should be ordered selectively, not routinely, after first unprovoked seizure in childhood.
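To illustrate what the "expected information" of a test can look like numerically, the sketch below computes the Shannon mutual information between a binary test result and a binary outcome from sensitivity, specificity, and pre-test probability. Note that the study above used a linear information measure with a maximum of 0.50, so this standard Shannon version and its invented inputs are for illustration only.

```python
# Illustrative sketch: Shannon mutual information (bits) between a binary test result
# and a binary outcome, computed from sensitivity, specificity, and pre-test probability.
# The study cited above used a *linear* information measure (maximum 0.50), so the values
# here are not directly comparable; the inputs are invented for the example.
import math

def test_information(pretest, sensitivity, specificity):
    joint = {
        ("pos", "recur"): sensitivity * pretest,
        ("neg", "recur"): (1 - sensitivity) * pretest,
        ("pos", "none"): (1 - specificity) * (1 - pretest),
        ("neg", "none"): specificity * (1 - pretest),
    }
    p_result = {r: sum(v for (rr, _), v in joint.items() if rr == r) for r in ("pos", "neg")}
    p_outcome = {o: sum(v for (_, oo), v in joint.items() if oo == o) for o in ("recur", "none")}
    return sum(v * math.log2(v / (p_result[r] * p_outcome[o]))
               for (r, o), v in joint.items() if v > 0)

print(test_information(pretest=0.40, sensitivity=0.61, specificity=0.71))  # ~0.07 bits
```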
NASA Astrophysics Data System (ADS)
Shi, Junchao; Zhang, Xudong; Liu, Ying; Chen, Qi
2017-03-01
In their interesting article [1], Wang et al. proposed a mathematical model based on evolutionary game theory [2] to tackle a fundamental question in embryo development: how sperm and egg interact with each other, through epigenetic processes, to form a zygote and direct successful embryo development. This work is based on the premise that epigenetic reprogramming (the erasure and reconstruction of epigenetic marks, such as DNA methylation and histone modifications) after fertilization is of paramount importance for maintaining normal embryo development, a premise with which we fully agree, given the compelling experimental evidence reported [3]. Wang et al. specifically chose to employ the well-studied DNA methylation reprogramming process during mammalian early embryo development as the basis for their mathematical model, namely epigenetic game theory (epiGame). They concluded that the DNA methylation pattern in the mammalian early embryo could be formulated and quantified, and that their model can further be used to quantify the interactions, such as competition and/or cooperation of expressed genes, that maximize the fitness of embryos. The efforts by Wang et al. to quantitatively and systematically analyze the beginning of life clearly hold value and represent a novel direction for future embryo development research by both theoretical and experimental biologists. On the other hand, we consider their theory to be still in its infancy, because there are many more parameters to consider and there is room for debate, such as in the case of haploid embryo development [4]. Here, we briefly comment on the dynamic process of epigenetic reprogramming that goes beyond DNA methylation, a dynamic interplay involving histone modifications, non-coding RNAs, and transposable elements, among others, as well as the potential input of the various types of 'hereditary' epigenetic information in the gametes - a game that starts before fertilization.
Recoverability in quantum information theory
NASA Astrophysics Data System (ADS)
Wilde, Mark
The fact that the quantum relative entropy is non-increasing with respect to quantum physical evolutions lies at the core of many optimality theorems in quantum information theory and has applications in other areas of physics. In this work, we establish improvements of this entropy inequality in the form of physically meaningful remainder terms. One of the main results can be summarized informally as follows: if the decrease in quantum relative entropy between two quantum states after a quantum physical evolution is relatively small, then it is possible to perform a recovery operation, such that one can perfectly recover one state while approximately recovering the other. This can be interpreted as quantifying how well one can reverse a quantum physical evolution. Our proof method is elementary, relying on the method of complex interpolation, basic linear algebra, and the recently introduced Renyi generalization of a relative entropy difference. The theorem has a number of applications in quantum information theory, which have to do with providing physically meaningful improvements to many known entropy inequalities. This is based on arXiv:1505.04661, now accepted for publication in Proceedings of the Royal Society A. I acknowledge support from startup funds from the Department of Physics and Astronomy at LSU, the NSF under Award No. CCF-1350397, and the DARPA Quiness Program through US Army Research Office award W31P4Q-12-1-0019.
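The entropy inequality at the heart of this work is the data-processing inequality for the quantum relative entropy, stated here in its basic form for reference; the contribution summarized above strengthens it with a remainder term (omitted here) that quantifies how well the evolution can be reversed:

```latex
D(\rho \,\|\, \sigma) \;\geq\; D\!\left(\mathcal{N}(\rho) \,\big\|\, \mathcal{N}(\sigma)\right),
\qquad
D(\rho \,\|\, \sigma) = \operatorname{Tr}\!\left[\rho \left(\log \rho - \log \sigma\right)\right],
```

valid for any quantum channel N and states ρ, σ (with suitable support conditions); the remainder term is large precisely when no recovery operation can approximately reverse the action of N on both states.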
Directional connectivity in hydrology and ecology.
Larsen, Laurel G; Choi, Jungyill; Nungesser, Martha K; Harvey, Judson W
2012-12-01
Quantifying hydrologic and ecological connectivity has contributed to understanding transport and dispersal processes and assessing ecosystem degradation or restoration potential. However, there has been little synthesis across disciplines. The growing field of ecohydrology and recent recognition that loss of hydrologic connectivity is leading to a global decline in biodiversity underscore the need for a unified connectivity concept. One outstanding need is a way to quantify directional connectivity that is consistent, robust to variations in sampling, and transferable across scales or environmental settings. Understanding connectivity in a particular direction (e.g., streamwise, along or across gradient, between sources and sinks, along cardinal directions) provides critical information for predicting contaminant transport, planning conservation corridor design, and understanding how landscapes or hydroscapes respond to directional forces like wind or water flow. Here we synthesize progress on quantifying connectivity and develop a new strategy for evaluating directional connectivity that benefits from use of graph theory in ecology and percolation theory in hydrology. The directional connectivity index (DCI) is a graph-theory based, multiscale metric that is generalizable to a range of different structural and functional connectivity applications. It exhibits minimal sensitivity to image rotation or resolution within a given range and responds intuitively to progressive, unidirectional change. Further, it is linearly related to the integral connectivity scale length--a metric common in hydrology that correlates well with actual fluxes--but is less computationally challenging and more readily comparable across different landscapes. Connectivity-orientation curves (i.e., directional connectivity computed over a range of headings) provide a quantitative, information-dense representation of environmental structure that can be used for comparison or detection of subtle differences in the physical-biological feedbacks driving pattern formation. Case-study application of the DCI to the Everglades in south Florida revealed that loss of directional hydrologic connectivity occurs more rapidly and is a more sensitive indicator of declining ecosystem function than other metrics (e.g., habitat area) used previously. Here and elsewhere, directional connectivity can provide insight into landscape drivers and processes, act as an early-warning indicator of environmental degradation, and serve as a planning tool or performance measure for conservation and restoration efforts.
Directional connectivity in hydrology and ecology
Larsen, Laurel G.; Choi, Jungyill; Nungesser, Martha K.; Harvey, Judson W.
2012-01-01
Quantifying hydrologic and ecological connectivity has contributed to understanding transport and dispersal processes and assessing ecosystem degradation or restoration potential. However, there has been little synthesis across disciplines. The growing field of ecohydrology and recent recognition that loss of hydrologic connectivity is leading to a global decline in biodiversity underscore the need for a unified connectivity concept. One outstanding need is a way to quantify directional connectivity that is consistent, robust to variations in sampling, and transferable across scales or environmental settings. Understanding connectivity in a particular direction (e.g., streamwise, along or across gradient, between sources and sinks, along cardinal directions) provides critical information for predicting contaminant transport, planning conservation corridor design, and understanding how landscapes or hydroscapes respond to directional forces like wind or water flow. Here we synthesize progress on quantifying connectivity and develop a new strategy for evaluating directional connectivity that benefits from use of graph theory in ecology and percolation theory in hydrology. The directional connectivity index (DCI) is a graph-theory based, multiscale metric that is generalizable to a range of different structural and functional connectivity applications. It exhibits minimal sensitivity to image rotation or resolution within a given range and responds intuitively to progressive, unidirectional change. Further, it is linearly related to the integral connectivity scale length—a metric common in hydrology that correlates well with actual fluxes—but is less computationally challenging and more readily comparable across different landscapes. Connectivity-orientation curves (i.e., directional connectivity computed over a range of headings) provide a quantitative, information-dense representation of environmental structure that can be used for comparison or detection of subtle differences in the physical-biological feedbacks driving pattern formation. Case-study application of the DCI to the Everglades in south Florida revealed that loss of directional hydrologic connectivity occurs more rapidly and is a more sensitive indicator of declining ecosystem function than other metrics (e.g., habitat area) used previously. Here and elsewhere, directional connectivity can provide insight into landscape drivers and processes, act as an early-warning indicator of environmental degradation, and serve as a planning tool or performance measure for conservation and restoration efforts.
Alem, Mauro; Townsend, Robert M.
2013-01-01
The theory of the optimal allocation of risk and the Townsend Thai panel data on financial transactions are used to assess the impact of the major formal and informal financial institutions of an emerging market economy. We link financial institution assessment to the actual impact on clients, rather than ratios and non-performing loans. We derive both consumption and investment equations from a common core theory with both risk and productive activities. The empirical specification follows closely from this theory and allows both OLS and IV estimation. We thus quantify the consumption and investment smoothing impact of financial institutions on households including those running farms and small businesses. A government development bank (BAAC) is shown to be particularly helpful in smoothing consumption and investment, in no small part through credit, consistent with its own operating system, which embeds an implicit insurance operation. Commercial banks are smoothing investment, largely through formal savings accounts. Other institutions seem ineffective by these metrics. PMID:25400319
Cooke, Richard; French, David P
2008-01-01
Meta-analysis was used to quantify how well the Theories of Reasoned Action and Planned Behaviour have predicted intentions to attend screening programmes and actual attendance behaviour. Systematic literature searches identified 33 studies that were included in the review. Across the studies as a whole, attitudes had a large-sized relationship with intention, while subjective norms and perceived behavioural control (PBC) possessed medium-sized relationships with intention. Intention had a medium-sized relationship with attendance, whereas the PBC-attendance relationship was small sized. Due to heterogeneity in results between studies, moderator analyses were conducted. The moderator variables were (a) type of screening test, (b) location of recruitment, (c) screening cost and (d) invitation to screen. All moderators affected theory of planned behaviour relationships. Suggestions for future research emerging from these results include targeting attitudes to promote intention to screen, a greater use of implementation intentions in screening information and examining the credibility of different screening providers.
Alem, Mauro; Townsend, Robert M
2014-11-01
The theory of the optimal allocation of risk and the Townsend Thai panel data on financial transactions are used to assess the impact of the major formal and informal financial institutions of an emerging market economy. We link financial institution assessment to the actual impact on clients, rather than ratios and non-performing loans. We derive both consumption and investment equations from a common core theory with both risk and productive activities. The empirical specification follows closely from this theory and allows both OLS and IV estimation. We thus quantify the consumption and investment smoothing impact of financial institutions on households including those running farms and small businesses. A government development bank (BAAC) is shown to be particularly helpful in smoothing consumption and investment, in no small part through credit, consistent with its own operating system, which embeds an implicit insurance operation. Commercial banks are smoothing investment, largely through formal savings accounts. Other institutions seem ineffective by these metrics.
Angular momentum transport with twisted exciton wave packets
NASA Astrophysics Data System (ADS)
Zang, Xiaoning; Lusk, Mark T.
2017-10-01
A chain of cofacial molecules with C_N or C_{Nh} symmetry supports excitonic states with a screwlike structure. These can be quantified with the combination of an axial wave number and an azimuthal winding number. Combinations of these states can be used to construct excitonic wave packets that spiral down the chain with well-determined linear and angular momenta. These twisted exciton wave packets can be created and annihilated using laser pulses, and their angular momentum can be optically modified during transit. This allows for the creation of optoexcitonic circuits in which information, encoded in the angular momentum of light, is converted into excitonic wave packets that can be manipulated, transported, and then reemitted. A tight-binding paradigm is used to demonstrate the key ideas. The approach is then extended to quantify the evolution of twisted exciton wave packets in a many-body, multilevel time-domain density functional theory setting. In both settings, numerical methods are developed that allow the site-to-site transfer of angular momentum to be quantified.
Benchmarking successional progress in a quantitative food web.
Boit, Alice; Gaedke, Ursula
2014-01-01
Central to ecology and ecosystem management, succession theory aims to mechanistically explain and predict the assembly and development of ecological communities. Yet processes at lower hierarchical levels, e.g. at the species and functional group level, are rarely mechanistically linked to the under-investigated system-level processes which drive changes in ecosystem properties and functioning and are comparable across ecosystems. As a model system for secondary succession, seasonal plankton succession during the growing season is readily observable and largely driven autogenically. We used a long-term dataset from large, deep Lake Constance comprising biomasses, auto- and heterotrophic production, food quality, functional diversity, and mass-balanced food webs of the energy and nutrient flows between functional guilds of plankton and partly fish. Extracting population- and system-level indices from this dataset, we tested current hypotheses about the directionality of successional progress which are rooted in ecosystem theory, the metabolic theory of ecology, quantitative food web theory, thermodynamics, and information theory. Our results indicate that successional progress in Lake Constance is quantifiable, passing through predictable stages. Mean body mass, functional diversity, predator-prey weight ratios, trophic positions, system residence times of carbon and nutrients, and the complexity of the energy flow patterns increased during succession. In contrast, both the mass-specific metabolic activity and the system export decreased, while the succession rate exhibited a bimodal pattern. The weighted connectance introduced here represents a suitable index for assessing the evenness and interconnectedness of energy flows during succession. Diverging from earlier predictions, ascendency and eco-exergy did not increase during succession. Linking aspects of functional diversity to metabolic theory and food web complexity, we reconcile previously disjoint bodies of ecological theory to form a complete picture of successional progress within a pelagic food web. This comprehensive synthesis may be used as a benchmark for quantifying successional progress in other ecosystems.
Estimating the mutual information of an EEG-based Brain-Computer Interface.
Schlögl, A; Neuper, C; Pfurtscheller, G
2002-01-01
An EEG-based Brain-Computer Interface (BCI) could be used as an additional communication channel between human thoughts and the environment. The efficacy of such a BCI depends mainly on the transmitted information rate. Shannon's communication theory was used to quantify the information rate of BCI data. For this purpose, experimental EEG data from four BCI experiments were analyzed off-line. Subjects imagined left and right hand movements during EEG recording from the sensorimotor area. Adaptive autoregressive (AAR) parameters were used as features of single-trial EEG and classified with linear discriminant analysis. The intra-trial variation as well as the inter-trial variability, the signal-to-noise ratio, the entropy of information, and the information rate were estimated. The entropy difference was used as a measure of the separability of two classes of EEG patterns.
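As a rough illustration of the kind of calculation described above, the following Python sketch estimates a per-trial information rate and an entropy difference from one-dimensional classifier outputs of two imagery classes, assuming Gaussian class-conditional distributions; the simulated data and all parameter choices are hypothetical and not taken from the study.

```python
import numpy as np

def information_measures(out_left, out_right):
    """Per-trial information rate (bits) and entropy difference from 1-D
    classifier outputs of two classes, assuming Gaussian distributions.
    Illustrative sketch only; variable names are hypothetical."""
    x = np.concatenate([out_left, out_right])
    # "Signal" variance: spread of the two class means.
    signal_var = np.var([out_left.mean(), out_right.mean()])
    # "Noise" variance: average within-class variance.
    noise_var = 0.5 * (out_left.var(ddof=1) + out_right.var(ddof=1))
    rate_bits = 0.5 * np.log2(1.0 + signal_var / noise_var)
    # Entropy difference between the pooled and within-class Gaussians (bits),
    # used here as a separability measure.
    h_total = 0.5 * np.log2(2 * np.pi * np.e * x.var(ddof=1))
    h_within = 0.5 * np.log2(2 * np.pi * np.e * noise_var)
    return rate_bits, h_total - h_within

rng = np.random.default_rng(0)
left = rng.normal(-1.0, 1.0, 160)   # simulated single-trial outputs, class "left"
right = rng.normal(+1.0, 1.0, 160)  # simulated single-trial outputs, class "right"
print(information_measures(left, right))
```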
Communication Strength of Correlations Violating Monogamy Relations
NASA Astrophysics Data System (ADS)
Kłobus, Waldemar; Oszmaniec, Michał; Augusiak, Remigiusz; Grudka, Andrzej
2016-05-01
In any theory satisfying the no-signaling principle, correlations generated among spatially separated parties in a Bell-type experiment are subject to certain constraints known as monogamy relations. Recently, in the context of the black hole information loss problem it was suggested that these monogamy relations might be violated. This in turn implies that correlations arising in such a scenario must violate the no-signaling principle and hence can be used to send classical information between parties. Here, we study the amount of information that can be sent using such correlations. To this aim, we first provide a framework associating them with classical channels whose capacities are then used to quantify the usefulness of these correlations in sending information. Finally, we determine the minimal amount of information that can be sent using signaling correlations violating the monogamy relation associated with the chained Bell inequalities.
Entropy changes in brain function.
Rosso, Osvaldo A
2007-04-01
The traditional way of analyzing brain electrical activity, on the basis of electroencephalography (EEG) records, relies mainly on visual inspection and years of training. Although it is quite useful, its subjective nature hardly allows for a systematic protocol. In the present work, quantifiers based on information theory and the wavelet transform are reviewed. The "relative wavelet energy" provides information about the relative energy associated with different frequency bands present in the EEG and their corresponding degree of importance. The "normalized total wavelet entropy" carries information about the degree of order-disorder associated with a multi-frequency signal response. Their application in the analysis and quantification of short-duration EEG signals (event-related potentials) and epileptic EEG records is summarized.
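A minimal sketch of the two quantifiers named above, using the PyWavelets package, is given below; the wavelet family, decomposition level, and the synthetic test signal are arbitrary assumptions for illustration.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_entropy(signal, wavelet="db4", level=5):
    """Relative wavelet energy per band and normalized total wavelet entropy.

    Illustrative sketch: the wavelet family and decomposition level are
    arbitrary choices, not those of the reviewed work."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()                       # relative wavelet energy
    entropy = -np.sum(p * np.log(p)) / np.log(len(p))   # normalized to [0, 1]
    return p, entropy

rng = np.random.default_rng(1)
t = np.arange(0, 4.0, 1.0 / 256.0)                      # 4 s of a 256-Hz "EEG" signal
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
rel_energy, went = wavelet_entropy(eeg)
print(rel_energy.round(3), round(went, 3))
```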
An Early Model for Value and Sustainability in Health Information Exchanges: Qualitative Study
2018-01-01
Background: The primary value relative to health information exchange has been seen in terms of cost savings relative to laboratory and radiology testing, emergency department expenditures, and admissions. However, models are needed to statistically quantify value and sustainability and better understand the dependent and mediating factors that contribute to value and sustainability. Objective: The purpose of this study was to provide a basis for early model development for health information exchange value and sustainability. Methods: A qualitative study was conducted with 21 interviews of eHealth Exchange participants across 10 organizations. Using a grounded theory approach and 3.0 as a relative frequency threshold, 5 main categories and 16 subcategories emerged. Results: This study identifies 3 core current perceived value factors and 5 potential perceived value factors—how interviewees predict health information exchanges may evolve as there are more participants. These value factors were used as the foundation for early model development for sustainability of health information exchange. Conclusions: Using the value factors from the interviews, the study provides the basis for early model development for health information exchange value and sustainability. This basis includes factors from the research: fostering consumer engagement; establishing a provider directory; quantifying use, cost, and clinical outcomes; ensuring data integrity through patient matching; and increasing awareness, usefulness, interoperability, and sustainability of eHealth Exchange. PMID:29712623
An Early Model for Value and Sustainability in Health Information Exchanges: Qualitative Study.
Feldman, Sue S
2018-04-30
The primary value relative to health information exchange has been seen in terms of cost savings relative to laboratory and radiology testing, emergency department expenditures, and admissions. However, models are needed to statistically quantify value and sustainability and better understand the dependent and mediating factors that contribute to value and sustainability. The purpose of this study was to provide a basis for early model development for health information exchange value and sustainability. A qualitative study was conducted with 21 interviews of eHealth Exchange participants across 10 organizations. Using a grounded theory approach and 3.0 as a relative frequency threshold, 5 main categories and 16 subcategories emerged. This study identifies 3 core current perceived value factors and 5 potential perceived value factors-how interviewees predict health information exchanges may evolve as there are more participants. These value factors were used as the foundation for early model development for sustainability of health information exchange. Using the value factors from the interviews, the study provides the basis for early model development for health information exchange value and sustainability. This basis includes factors from the research: fostering consumer engagement; establishing a provider directory; quantifying use, cost, and clinical outcomes; ensuring data integrity through patient matching; and increasing awareness, usefulness, interoperability, and sustainability of eHealth Exchange. ©Sue S Feldman. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 30.04.2018.
Optimal design of focused experiments and surveys
NASA Astrophysics Data System (ADS)
Curtis, Andrew
1999-10-01
Experiments and surveys are often performed to obtain data that constrain some previously underconstrained model. Often, constraints are most desired in a particular subspace of model space. Experiment design optimization requires that the quality of any particular design can be both quantified and then maximized. This study shows how the quality can be defined such that it depends on the amount of information that is focused in the particular subspace of interest. In addition, algorithms are presented which allow one particular focused quality measure (from the class of focused measures) to be evaluated efficiently. A subclass of focused quality measures is also related to the standard variance and resolution measures from linearized inverse theory. The theory presented here requires that the relationship between model parameters and data can be linearized around a reference model without significant loss of information. Physical and financial constraints define the space of possible experiment designs. Cross-well tomographic examples are presented, plus a strategy for survey design to maximize information about linear combinations of parameters such as the bulk modulus, κ = λ + 2μ/3.
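The following sketch illustrates, under strong simplifying assumptions (a linearized forward model d = Gm with Gaussian noise and prior), how a "focused" design quality could be scored by projecting the posterior covariance onto the subspace of interest; it is not the measure or algorithm of the paper, and all matrices are hypothetical.

```python
import numpy as np

def focused_quality(G, P, noise_var=1.0, prior_var=1.0):
    """Score a design (data kernel G) by how well it constrains the model
    subspace spanned by the columns of P.

    Minimal sketch under a linearized forward model d = G m with Gaussian
    noise and prior; the quality is the inverse of the posterior variance
    projected onto the target subspace (lower variance = higher quality)."""
    n_par = G.shape[1]
    post_cov = np.linalg.inv(G.T @ G / noise_var + np.eye(n_par) / prior_var)
    proj = P @ np.linalg.pinv(P)          # orthogonal projector onto the subspace
    return 1.0 / np.trace(proj @ post_cov @ proj)

# Two hypothetical designs probing 3 parameters; we care only about parameter 0.
P = np.array([[1.0], [0.0], [0.0]])
G_a = np.array([[1.0, 0.1, 0.1], [0.1, 1.0, 0.1]])   # sensitive to parameter 0
G_b = np.array([[0.1, 1.0, 0.1], [0.1, 0.1, 1.0]])   # barely senses parameter 0
print(focused_quality(G_a, P), focused_quality(G_b, P))
```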
Mesa-Frias, Marco; Chalabi, Zaid; Foss, Anna M
2014-01-01
Quantitative health impact assessment (HIA) is increasingly being used to assess the health impacts attributable to an environmental policy or intervention. As a consequence, there is a need to assess uncertainties in the assessments because of the uncertainty in the HIA models. In this paper, a framework is developed to quantify the uncertainty in the health impacts of environmental interventions and is applied to evaluate the impacts of poor housing ventilation. The paper describes the development of the framework through three steps: (i) selecting the relevant exposure metric and quantifying the evidence of potential health effects of the exposure; (ii) estimating the size of the population affected by the exposure and selecting the associated outcome measure; (iii) quantifying the health impact and its uncertainty. The framework introduces a novel application for the propagation of uncertainty in HIA, based on fuzzy set theory. Fuzzy sets are used to propagate parametric uncertainty in a non-probabilistic space and are applied to calculate the uncertainty in the morbidity burdens associated with three indoor ventilation exposure scenarios: poor, fair and adequate. The case-study example demonstrates how the framework can be used in practice, to quantify the uncertainty in health impact assessment where there is insufficient information to carry out a probabilistic uncertainty analysis. © 2013.
An Information Theoretic Characterisation of Auditory Encoding
Overath, Tobias; Cusack, Rhodri; Kumar, Sukhbinder; von Kriegstein, Katharina; Warren, Jason D; Grube, Manon; Carlyon, Robert P; Griffiths, Timothy D
2007-01-01
The entropy metric derived from information theory provides a means to quantify the amount of information transmitted in acoustic streams like speech or music. By systematically varying the entropy of pitch sequences, we sought brain areas where neural activity and energetic demands increase as a function of entropy. Such a relationship is predicted to occur in an efficient encoding mechanism that uses less computational resource when less information is present in the signal: we specifically tested the hypothesis that such a relationship is present in the planum temporale (PT). In two convergent functional MRI studies, we demonstrated this relationship in PT for encoding, while furthermore showing that a distributed fronto-parietal network for retrieval of acoustic information is independent of entropy. The results establish PT as an efficient neural engine that demands less computational resource to encode redundant signals than those with high information content. PMID:17958472
Wang, Xin; Wang, Ying; Sun, Hongbin
2016-01-01
In social media, trust and distrust among users are important factors in helping users make decisions, dissect information, and receive recommendations. However, the sparsity and imbalance of social relations bring great difficulties and challenges in predicting trust and distrust. Meanwhile, there are numerous inducing factors that determine trust and distrust relations. The relationships among inducing factors may involve dependency, independence, or conflict. Dempster-Shafer theory and neural networks are effective and efficient strategies to deal with these difficulties and challenges. In this paper, we study trust and distrust prediction based on the combination of Dempster-Shafer theory and neural networks. We first analyze the inducing factors of trust and distrust, namely, homophily, status theory, and emotion tendency. Then, we quantify the inducing factors of trust and distrust, take these features as evidences, and construct evidence prototypes as input nodes of a multilayer neural network. Finally, we propose a framework for predicting trust and distrust which uses a multilayer neural network to model the implementation of Dempster-Shafer theory in different hidden layers, aiming to overcome the disadvantage that Dempster-Shafer theory lacks an optimization method. Experimental results on a real-world dataset demonstrate the effectiveness of the proposed framework. PMID:27034651
Wibral, Michael; Priesemann, Viola; Kay, Jim W; Lizier, Joseph T; Phillips, William A
2017-03-01
In many neural systems anatomical motifs are present repeatedly, but despite their structural similarity they can serve very different tasks. A prime example of such a motif is the canonical microcircuit of six-layered neocortex, which is repeated across cortical areas and is involved in a number of different tasks (e.g. sensory, cognitive, or motor tasks). This observation has spawned interest in finding a common underlying principle, a 'goal function', of information processing implemented in this structure. By definition such a goal function, if universal, cannot be cast in processing-domain specific language (e.g. 'edge filtering', 'working memory'). Thus, to formulate such a principle, we have to use a domain-independent framework. Information theory offers such a framework. However, while the classical framework of information theory focuses on the relation between one input and one output (Shannon's mutual information), we argue that neural information processing crucially depends on the combination of multiple inputs to create the output of a processor. To account for this, we use a recent extension of Shannon information theory, called partial information decomposition (PID). PID allows one to quantify the information that several inputs provide individually (unique information), redundantly (shared information) or only jointly (synergistic information) about the output. First, we review the framework of PID. Then we apply it to reevaluate and analyze several earlier proposals of information theoretic neural goal functions (predictive coding, infomax and coherent infomax, efficient coding). We find that PID allows these goal functions to be compared in a common framework, and also provides a versatile approach to design new goal functions from first principles. Building on this, we design and analyze a novel goal function, called 'coding with synergy', which builds on combining external input and prior knowledge in a synergistic manner. We suggest that this novel goal function may be highly useful in neural information processing. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Information gains from cosmic microwave background experiments
NASA Astrophysics Data System (ADS)
Seehars, Sebastian; Amara, Adam; Refregier, Alexandre; Paranjape, Aseem; Akeret, Joël
2014-07-01
To shed light on the fundamental problems posed by dark energy and dark matter, a large number of experiments have been performed and combined to constrain cosmological models. We propose a novel way of quantifying the information gained by updates on the parameter constraints from a series of experiments which can either complement earlier measurements or replace them. For this purpose, we use the Kullback-Leibler divergence or relative entropy from information theory to measure differences in the posterior distributions in model parameter space from a pair of experiments. We apply this formalism to a historical series of cosmic microwave background experiments ranging from Boomerang to WMAP, SPT, and Planck. Considering different combinations of these experiments, we thus estimate the information gain in units of bits and distinguish contributions from the reduction of statistical errors and the "surprise" corresponding to a significant shift of the parameters' central values. For this experiment series, we find individual relative entropy gains ranging from about 1 to 30 bits. In some cases, e.g. when comparing WMAP and Planck results, we find that the gains are dominated by the surprise rather than by improvements in statistical precision. We discuss how this technique provides a useful tool for both quantifying the constraining power of data from cosmological probes and detecting the tensions between experiments.
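As an illustration of the kind of quantity involved, the sketch below evaluates the relative entropy between two Gaussian approximations of posteriors and reports it in bits; real analyses use full (generally non-Gaussian) posteriors, and the numbers here are made up.

```python
import numpy as np

def relative_entropy_bits(mean1, cov1, mean2, cov2):
    """D_KL(P2 || P1) in bits for Gaussian approximations of two posteriors.

    P1 ~ N(mean1, cov1) is the earlier constraint, P2 ~ N(mean2, cov2) the
    updated one. Sketch only; the paper works with real posterior samples."""
    k = len(mean1)
    inv1 = np.linalg.inv(cov1)
    dm = np.asarray(mean2) - np.asarray(mean1)
    nats = 0.5 * (np.trace(inv1 @ cov2) + dm @ inv1 @ dm - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov2)))
    return nats / np.log(2.0)

# Toy two-parameter example: errors shrink and the central values shift.
m1, c1 = np.zeros(2), np.diag([1.0, 1.0])
m2, c2 = np.array([0.5, -0.2]), np.diag([0.25, 0.16])
print(round(relative_entropy_bits(m1, c1, m2, c2), 2), "bits")
```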
NASA Astrophysics Data System (ADS)
Zhang, Yuan-Ming; Zhang, Yinghao; Guo, Mingyue
2017-03-01
Wang et al.'s article [1] is the first to integrate game theory (especially evolutionary game theory) with epigenetic modification of zygotic genomes. They described and assessed a modeling framework based on evolutionary game theory to quantify how sperm and oocytes interact through epigenetic processes to determine embryo development. They also studied the internal mechanisms for normal embryo development: 1) evolutionary interactions between DNA methylation of the paternal and maternal genomes, and 2) the application of game theory to formulate and quantify how different genes compete or cooperate to regulate embryogenesis through methylation. Although its game theory modeling is not fully comprehensive, this article bridges the gap between evolutionary game theory and the epigenetic control of embryo development through powerful ordinary differential equations (ODEs). The epiGame framework includes four aspects: 1) characterizing how epigenetic game theory works through the strategy matrix, in which the pattern and relative magnitude of the methylation effects on embryogenesis are described by the cooperation and competition mechanisms, 2) quantifying the game, in which the direction and degree of P-M interactions over embryo development are explained by the sign and magnitude of the interaction parameters in model (2), 3) modeling epigenetic interactions within the morula, especially with two coupled nonlinear ODEs with explicit functions in model (4), which provide a good fit to the observed data for the two sexes (adjusted R2 = 0.956), and 4) revealing multifactorial interactions in embryogenesis, from the coupled ODEs in model (2) to the triplet ODEs in model (6). Clearly, this article extends game theory from evolutionary game theory to epigenetic game theory.
Biparametric complexities and generalized Planck radiation law
NASA Astrophysics Data System (ADS)
Puertas-Centeno, David; Toranzo, I. V.; Dehesa, J. S.
2017-12-01
Complexity theory embodies some of the hardest, most fundamental and most challenging open problems in modern science. The very term 'complexity' is elusive, so the main goal of this theory is to find meaningful quantifiers for it. In fact, we need various measures to take into account the multiple facets of this term. Here, some biparametric Cramér-Rao and Heisenberg-Rényi measures of complexity of continuous probability distributions are defined and discussed. Then, they are applied to blackbody radiation at temperature T in a d-dimensional universe. It is found that these dimensionless quantities do not depend on T nor on any physical constants. So, they have a universal character in the sense that they only depend on spatial dimensionality. To determine these complexity quantifiers, we have calculated their dispersion (typical deviations) and entropy (Rényi entropies and the generalized Fisher information) constituents. They are found to have a temperature-dependent behavior similar to the celebrated Wien’s displacement law of the dominant frequency ν_max at which the spectrum reaches its maximum. Moreover, they allow us to gain insights into new aspects of the d-dimensional blackbody spectrum and the quantification of quantum effects associated with space dimensionality.
Arguissain, Federico G; Biurrun Manresa, José A; Mørch, Carsten D; Andersen, Ole K
2015-01-30
To date, few studies have combined the simultaneous acquisition of nociceptive withdrawal reflexes (NWR) and somatosensory evoked potentials (SEPs). In fact, it is unknown whether the combination of these two signals acquired simultaneously could provide additional information on somatosensory processing at the spinal and supraspinal levels compared to individual NWR and SEP signals. By using the concept of mutual information (MI), it is possible to quantify the relation between electrical stimuli and simultaneously elicited electrophysiological responses in humans based on the estimated stimulus-response signal probability distributions. All selected features from NWR and SEPs were informative with regard to the stimulus when considered individually. Specifically, the information carried by NWR features was significantly higher than the information contained in the SEP features (p<0.05). Moreover, the joint information carried by the combination of features showed an overall redundancy compared to the sum of the individual contributions. Comparison with existing methods: MI can be used to quantify the information that single-trial NWR and SEP features convey, as well as the information carried jointly by NWR and SEPs. This is a model-free approach that considers linear and non-linear correlations of any order and is not constrained by parametric assumptions. The current study introduces a novel approach that allows the quantification of the individual and joint information content of single-trial NWR and SEP features. This methodology could be used to decode and interpret spinal and supraspinal interactions in studies modulating the responsiveness of the nociceptive system. Copyright © 2014 Elsevier B.V. All rights reserved.
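A toy version of the MI bookkeeping described above is sketched below: histogram (plug-in) MI estimates for individual discretized features and for their joint code, with redundancy read off as the shortfall of the joint MI relative to the sum of the individual MIs. The simulated "NWR" and "SEP" features are hypothetical, and the plug-in estimator is a simplification with no bias correction.

```python
import numpy as np

def discretize(x, bins=6):
    """Equal-width binning of a continuous feature into integer codes 0..bins-1."""
    edges = np.histogram_bin_edges(x, bins)
    return np.digitize(x, edges[1:-1])

def mutual_info_bits(stim, codes):
    """Plug-in MI estimate (bits) between discrete stimulus labels and one
    or more discretized feature codes (columns of `codes`)."""
    codes = np.atleast_2d(codes.T).T
    # Collapse the feature columns into a single joint symbol per trial.
    _, sym = np.unique(codes, axis=0, return_inverse=True)
    sym = sym.ravel()
    joint = np.zeros((stim.max() + 1, sym.max() + 1))
    np.add.at(joint, (stim, sym), 1.0)
    joint /= joint.sum()
    ps, pf = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pf)[nz])))

rng = np.random.default_rng(2)
stim = rng.integers(0, 2, 500)                           # two stimulus intensities
nwr = stim + 0.8 * rng.standard_normal(500)              # hypothetical NWR feature
sep = 0.8 * stim + 0.6 * nwr + rng.standard_normal(500)  # correlated SEP feature

c_nwr, c_sep = discretize(nwr), discretize(sep)
i_nwr = mutual_info_bits(stim, c_nwr)
i_sep = mutual_info_bits(stim, c_sep)
i_joint = mutual_info_bits(stim, np.column_stack([c_nwr, c_sep]))
print(i_nwr, i_sep, i_joint, "redundancy:", i_nwr + i_sep - i_joint)
```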
Information-theoretic measures of hydrogen-like ions in weakly coupled Debye plasmas
NASA Astrophysics Data System (ADS)
Zan, Li Rong; Jiao, Li Guang; Ma, Jia; Ho, Yew Kam
2017-12-01
Recent development of information theory provides researchers an alternative and useful tool to quantitatively investigate the variation of the electronic structure when atoms interact with the external environment. In this work, we make systematic studies of the information-theoretic measures for hydrogen-like ions immersed in weakly coupled plasmas modeled by the Debye-Hückel potential. Shannon entropy, Fisher information, and Fisher-Shannon complexity in both position and momentum spaces are quantified with high accuracy for the hydrogen atom in a large number of stationary states. The plasma screening effect on embedded atoms can significantly affect the electronic density distributions, in both conjugate spaces, and it is quantified by the variation of information quantities. It is shown that the composite quantities (the Shannon entropy sum and the Fisher information product in combined spaces and the Fisher-Shannon complexity in individual spaces) give a more comprehensive description of the atomic structure information than single ones. The nodes of wave functions play a significant role in the changes of composite information quantities caused by plasmas. With continuously increasing screening strength, all composite quantities in circular states increase monotonically, while in higher-lying excited states where nodal structures exist, they first decrease to a minimum and then increase rapidly before the bound state approaches the continuum limit. The minimum represents the greatest reduction of the uncertainty properties of the atom in plasmas. The lower bounds for the uncertainty product of the system based on composite information quantities are discussed. Our research presents a comprehensive survey of the investigation of information-theoretic measures for simple atoms embedded in Debye model plasmas.
Causality Analysis of fMRI Data Based on the Directed Information Theory Framework.
Wang, Zhe; Alahmadi, Ahmed; Zhu, David C; Li, Tongtong
2016-05-01
This paper aims to conduct fMRI-based causality analysis in brain connectivity by exploiting the directed information (DI) theory framework. Unlike the well-known Granger causality (GC) analysis, which relies on the linear prediction technique, the DI theory framework does not have any modeling constraints on the sequences to be evaluated and ensures estimation convergence. Moreover, it can be used to generate the GC graphs. In this paper, first, we introduce the core concepts in the DI framework. Second, we present how to conduct causality analysis using DI measures between two time series. We provide the detailed procedure on how to calculate the DI for two finite time series. The two major steps involved here are optimal bin size selection for data digitization and probability estimation. Finally, we demonstrate the applicability of DI-based causality analysis using both simulated data and experimental fMRI data, and compare the results with those of the GC analysis. Our analysis indicates that GC analysis is effective in detecting linear or nearly linear causal relationships, but may have difficulty in capturing nonlinear causal relationships. On the other hand, DI-based causality analysis is more effective in capturing both linear and nonlinear causal relationships. Moreover, it is observed that brain connectivity among different regions generally involves dynamic two-way information transmissions between them. Our results show that when bidirectional information flow is present, DI is more effective than GC at quantifying the overall causal relationship.
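To make the two steps named above concrete (digitization and probability estimation), here is a heavily simplified, first-order plug-in estimate of a directed information rate between two series; it assumes stationarity and one sample of memory and uses naive equal-frequency binning rather than an optimized bin size, so it is a sketch rather than the authors' estimator.

```python
import numpy as np

def quantize(x, bins=4):
    """Quantize a continuous series into `bins` equal-frequency levels."""
    ranks = np.argsort(np.argsort(x))
    return (ranks * bins // len(x)).astype(int)

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def directed_info_rate(x, y, bins=4):
    """First-order plug-in estimate (bits/sample) of the directed information
    rate from x to y, i.e. I(X_{t-1}; Y_t | Y_{t-1}).

    Sketch only: stationarity, one-sample memory, naive binning."""
    xq, yq = quantize(x, bins), quantize(y, bins)
    xp, yp, yc = xq[:-1], yq[:-1], yq[1:]
    # Joint counts over (y_prev, x_prev, y_cur).
    joint = np.zeros((bins, bins, bins))
    np.add.at(joint, (yp, xp, yc), 1.0)
    # I(X_prev; Y_cur | Y_prev) = H(Y_cur,Y_prev) + H(X_prev,Y_prev)
    #                             - H(Y_prev) - H(Y_cur,X_prev,Y_prev)
    return (entropy(joint.sum(axis=1)) + entropy(joint.sum(axis=2))
            - entropy(joint.sum(axis=(1, 2))) - entropy(joint))

rng = np.random.default_rng(3)
x = rng.standard_normal(5000)
y = np.roll(x, 1) + 0.5 * rng.standard_normal(5000)   # y is driven by past x
print(directed_info_rate(x, y), directed_info_rate(y, x))
```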
Amplification, Redundancy, and Quantum Chernoff Information
NASA Astrophysics Data System (ADS)
Zwolak, Michael; Riedel, C. Jess; Zurek, Wojciech H.
2014-04-01
Amplification was regarded, since the early days of quantum theory, as a mysterious ingredient that endows quantum microstates with macroscopic consequences, key to the "collapse of the wave packet," and a way to avoid embarrassing problems exemplified by Schrödinger's cat. Such a bridge between the quantum microworld and the classical world of our experience was postulated ad hoc in the Copenhagen interpretation. Quantum Darwinism views amplification as replication, in many copies, of the information about quantum states. We show that such amplification is a natural consequence of a broad class of models of decoherence, including the photon environment we use to obtain most of our information. This leads to objective reality via the presence of robust and widely accessible records of selected quantum states. The resulting redundancy (the number of copies deposited in the environment) follows from the quantum Chernoff information that quantifies the information transmitted by a typical elementary subsystem of the environment.
NASA Astrophysics Data System (ADS)
Johnson, David T.
Quantum mechanics is an extremely successful and accurate physical theory, yet since its inception, it has been afflicted with numerous conceptual difficulties. The primary subject of this thesis is the theory of entropic quantum dynamics (EQD), which seeks to avoid these conceptual problems by interpreting quantum theory from an informational perspective. We begin by reviewing Cox's work in describing probability theory as a means of rationally and consistently quantifying uncertainties. We then discuss how probabilities can be updated according to either Bayes' theorem or the extended method of maximum entropy (ME). After that discussion, we review the work of Caticha and Giffin that shows that Bayes' theorem is a special case of ME. This important result demonstrates that the ME method is the general method for updating probabilities. We then review some motivating difficulties in quantum mechanics before discussing Caticha's work in deriving quantum theory from the approach of entropic dynamics, which concludes our review. After entropic dynamics is introduced, we develop the concepts of symmetries and transformations from an informational perspective. The primary result is the formulation of a symmetry condition that any transformation must satisfy in order to qualify as a symmetry in EQD. We then proceed to apply this condition to the extended Galilean transformation. This transformation is of interest as it exhibits features of both special and general relativity. The transformation yields a gravitational potential that arises from an equivalence of information. We conclude the thesis with a discussion of the measurement problem in quantum mechanics. We discuss the difficulties that arise in the standard quantum mechanical approach to measurement before developing our theory of entropic measurement. In entropic dynamics, position is the only observable. We show how a theory built on this one observable can account for the multitude of measurements present in quantum theory. Furthermore, we show that the Born rule need not be postulated, but can be derived in EQD. Finally, we show how the wave function can be updated by the ME method as the phase is constructed purely in terms of probabilities.
McDonnell, J D; Schunck, N; Higdon, D; Sarich, J; Wild, S M; Nazarewicz, W
2015-03-27
Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. The example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
How precise are reported protein coordinate data?
Konagurthu, Arun S; Allison, Lloyd; Abramson, David; Stuckey, Peter J; Lesk, Arthur M
2014-03-01
Atomic coordinates in the Worldwide Protein Data Bank (wwPDB) are generally reported to greater precision than the experimental structure determinations have actually achieved. By using information theory and data compression to study the compressibility of protein atomic coordinates, it is possible to quantify the amount of randomness in the coordinate data and thereby to determine the realistic precision of the reported coordinates. On average, the value of each C(α) coordinate in a set of selected protein structures solved at a variety of resolutions is good to about 0.1 Å.
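A crude way to see the underlying idea is to compress coordinates rounded to different precisions and watch the cost per coordinate grow once the extra digits are pure noise; the sketch below uses zlib as a stand-in compressor and a synthetic trace, neither of which is part of the published analysis.

```python
import zlib
import numpy as np

def compressed_bytes_per_coord(coords, decimals):
    """Bytes needed (after zlib) per coordinate when values are rounded
    to a given number of decimals. Purely illustrative -- the published
    analysis uses a proper information-theoretic model, not zlib."""
    text = "\n".join(f"{v:.{decimals}f}" for v in coords.ravel()).encode()
    return len(zlib.compress(text, 9)) / coords.size

# Hypothetical C-alpha coordinates: a smooth trace plus 0.1 A of "noise".
rng = np.random.default_rng(4)
t = np.linspace(0, 20, 300)
ca = np.column_stack([np.cos(t), np.sin(t), 0.5 * t]) * 3.8
ca += 0.1 * rng.standard_normal(ca.shape)

for d in (1, 2, 3):
    print(d, "decimals:", round(compressed_bytes_per_coord(ca, d), 2), "bytes/coord")
# Beyond the real precision, extra digits are incompressible noise, so the
# bytes per coordinate keep growing without adding structural information.
```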
Period variability of coupled noisy oscillators
NASA Astrophysics Data System (ADS)
Mori, Fumito; Kori, Hiroshi
2013-03-01
Period variability, quantified by the standard deviation (SD) of the cycle-to-cycle period, is investigated for noisy phase oscillators. We define the checkpoint phase as the beginning or end point of one oscillation cycle and derive an expression for the SD as a function of this phase. We find that the SD is dependent on the checkpoint phase only when oscillators are coupled. The applicability of our theory is verified using a realistic model. Our work clarifies the relationship between period variability and synchronization from which valuable information regarding coupling can be inferred.
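The sketch below illustrates the quantity being studied: two noisy phase oscillators are integrated with Euler-Maruyama, cycle completion times are recorded at a chosen checkpoint phase, and the SD of the cycle-to-cycle period is computed for the uncoupled and coupled cases. All parameters are arbitrary toy values, not those of the paper.

```python
import numpy as np

def period_sd(checkpoint, k=1.0, omega=2 * np.pi, sigma=0.4,
              dt=1e-3, n_steps=200_000, seed=5):
    """Standard deviation of the cycle-to-cycle period of a noisy phase
    oscillator, measured at a checkpoint phase, for coupling strength k
    (k=0 gives an uncoupled oscillator). Toy Euler-Maruyama sketch."""
    rng = np.random.default_rng(seed)
    phi1, phi2 = 0.0, 1.0
    next_level = checkpoint            # next phase level that closes a cycle
    times, t = [], 0.0
    for _ in range(n_steps):
        c12 = k * np.sin(phi2 - phi1)
        c21 = k * np.sin(phi1 - phi2)
        phi1 += (omega + c12) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        phi2 += (omega + c21) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if phi1 >= next_level:         # oscillator 1 completed one more cycle
            times.append(t)
            next_level += 2 * np.pi
    return np.diff(times).std(ddof=1)

for phase in (0.0, np.pi / 2, np.pi):
    print(f"checkpoint {phase:.2f}: "
          f"uncoupled SD = {period_sd(phase, k=0.0):.4f}, "
          f"coupled SD = {period_sd(phase, k=2.0):.4f}")
```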
NASA Astrophysics Data System (ADS)
Berliner, M.
2017-12-01
Bayesian statistical decision theory offers a natural framework for decision-policy making in the presence of uncertainty. Key advantages of the approach include efficient incorporation of information and observations. However, in complicated settings it is very difficult, perhaps essentially impossible, to formalize the mathematical inputs needed in the approach. Nevertheless, using the approach as a template is useful for decision support; that is, for organizing and communicating our analyses. Bayesian hierarchical modeling is valuable for quantifying and managing uncertainty in such cases. I review some aspects of the idea, emphasizing statistical model development and use in the context of sea-level rise.
Method for measuring multiple scattering corrections between liquid scintillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.
2016-04-11
In this study, a time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source at different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons that scatter multiple times. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
Shannon information entropy in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Ma, Chun-Wang; Ma, Yu-Gang
2018-03-01
The general idea of information entropy provided by C.E. Shannon "hangs over everything we do" and can be applied to a great variety of problems once the connection between a distribution and the quantities of interest is found. The Shannon information entropy essentially quantifies the information carried by a quantity through its specific distribution, and information-entropy-based methods have been extensively developed in many scientific areas, including physics. The dynamical properties of the heavy-ion collision (HIC) process make it difficult and complex to study nuclear matter and its evolution, and Shannon information entropy theory can provide new methods and observables to understand the physical phenomena both theoretically and experimentally. To better understand the processes of HICs, the main characteristics of typical models, including quantum molecular dynamics models, thermodynamic models, and statistical models, etc., are briefly introduced. The typical applications of Shannon information theory in HICs are collected; they cover the chaotic behavior in the branching process of hadron collisions, the liquid-gas phase transition in HICs, and the isobaric difference scaling phenomenon for intermediate mass fragments produced in HICs of neutron-rich systems. Even though the present applications in heavy-ion collision physics are still relatively simple, they shed light on the key questions we are pursuing. It is suggested that information entropy methods be further developed for nuclear reaction models, along with new analysis methods to study the properties of nuclear matter in HICs, especially the evolution of the dynamical system.
Katsos, Napoleon; Roqueta, Clara Andrés; Estevan, Rosa Ana Clemente; Cummins, Chris
2011-04-01
Specific Language Impairment (SLI) is understood to be a disorder that predominantly affects phonology, morphosyntax and/or lexical semantics. There is little conclusive evidence on whether children with SLI are challenged with regard to Gricean pragmatic maxims and on whether children with SLI are competent with the logical meaning of quantifying expressions. We use the comprehension of statements quantified with 'all', 'none', 'some', 'some…not', 'most' and 'not all' as a paradigm to study whether Spanish-speaking children with SLI are competent with the pragmatic maxim of informativeness, as well as with the logical meaning of these expressions. Children with SLI performed more poorly than a group of age-matched typically-developing peers, and both groups performed more poorly with pragmatics than with logical meaning. Moreover, children with SLI were disproportionately challenged by pragmatic meaning compared to their age-matched peers. However, the performance of children with SLI was comparable to that of a group of younger language-matched typically-developing children. The findings document that children with SLI do face difficulties with employing the maxim of informativeness, as well as with understanding the logical meaning of quantifiers, but also that these difficulties are in keeping with their overall language difficulties rather than exceeding them. The implications of these findings for SLI, linguistic theory, and clinical practice are discussed. Copyright © 2010 Elsevier B.V. All rights reserved.
Energetic arousal and language: predictions from the computational theory of quantifiers processing.
Zajenkowski, Marcin
2013-10-01
The author examines the relationship between energetic arousal (EA) and the processing of sentences containing natural-language quantifiers. Previous studies and theories have shown that energy may differentially affect various cognitive functions. Recent investigations devoted to quantifiers strongly support the theory that various types of quantifiers involve different cognitive functions in the sentence-picture verification task. In the present study, 201 students were presented with a sentence-picture verification task consisting of simple propositions containing a quantifier that referred to the color of a car on display. Color pictures of cars accompanied the propositions. In addition, the level of participants' EA was measured before and after the verification task. It was found that EA and performance on proportional quantifiers (e.g., "More than half of the cars are red") are in an inverted U-shaped relationship. This result may be explained by the fact that proportional sentences engage working memory to a high degree, and previous models of EA-cognition associations have been based on the assumption that tasks that require parallel attentional and memory processes are best performed when energy is moderate. The research described in the present article has several applications, as it shows the optimal human conditions for verbal comprehension. For instance, it may be important in workplace design to control the level of arousal experienced by office staff when work is mostly related to the processing of complex texts. Energy level may be influenced by many factors, such as noise, time of day, or thermal conditions.
Uncertainty vs. Information (Invited)
NASA Astrophysics Data System (ADS)
Nearing, Grey
2017-04-01
Information theory is the branch of logic that describes how rational epistemic states evolve in the presence of empirical data (Knuth, 2005), and any logic of science is incomplete without such a theory. Developing a formal philosophy of science that recognizes this fact results in essentially trivial solutions to several longstanding problems that are generally considered intractable, including: • Alleviating the need for any likelihood function or error model. • Derivation of purely logical falsification criteria for hypothesis testing. • Specification of a general quantitative method for process-level model diagnostics. More generally, I make the following arguments: 1. Model evaluation should not proceed by quantifying and/or reducing error or uncertainty, and instead should be approached as a problem of ensuring that our models contain as much information as our experimental data. I propose that the latter is the only question a scientist actually has the ability to ask. 2. Instead of building geophysical models as solutions to differential equations that represent conservation laws, we should build models as maximum entropy distributions constrained by conservation symmetries. This will allow us to derive predictive probabilities directly from first principles. Knuth, K. H. (2005) 'Lattice duality: The origin of probability and entropy', Neurocomputing, 67, pp. 245-274.
Quantum resource theory of non-stabilizer states in the one-shot regime
NASA Astrophysics Data System (ADS)
Ahmadi, Mehdi; Dang, Hoan; Gour, Gilad; Sanders, Barry
Universal quantum computing is known to be impossible using only stabilizer states and stabilizer operations. However, the addition of non-stabilizer states (also known as magic states) to quantum circuits enables us to achieve universality. The resource theory of non-stabilizer states aims at quantifying the usefulness of non-stabilizer states. Here, we focus on a fundamental question in this resource theory in the so-called single-shot regime: Given two resource states, is there a free quantum channel that will (approximately or exactly) convert one to the other? To provide an answer, we phrase the question as a semidefinite program with constraints on the Choi matrix of the corresponding channel. Then, we use the semidefinite version of the Farkas lemma to derive the necessary and sufficient conditions for the conversion between two arbitrary resource states via a free quantum channel. BCS appreciates financial support from Alberta Innovates, NSERC, China's 1000 Talent Plan and the Institute for Quantum Information and Matter.
NASA Astrophysics Data System (ADS)
Bell, A.; Tang, G.; Yang, P.; Wu, D.
2017-12-01
Due to their high spatial and temporal coverage, cirrus clouds have a profound role in regulating the Earth's energy budget. Variability of their radiative, geometric, and microphysical properties can pose significant uncertainties in global climate model simulations if not adequately constrained. Thus, the development of retrieval methodologies able to accurately retrieve ice cloud properties and present associated uncertainties is essential. The effectiveness of cirrus cloud retrievals relies on accurate a priori understanding of ice radiative properties, as well as the current state of the atmosphere. Recent studies have implemented information content analyses prior to retrievals to quantify the amount of information that should be expected on the parameters to be retrieved, as well as the relative contribution of information provided by certain measurement channels. Through this analysis, retrieval algorithms can be designed in a way that maximizes the information in the measurements, and therefore ensures enough information is present to retrieve ice cloud properties. In this study, we present such an information content analysis to quantify the amount of information to be expected in retrievals of cirrus ice water path and particle effective diameter using sub-millimeter and thermal infrared radiometry. Preliminary results show these bands to be sensitive to changes in ice water path and effective diameter, and thus lend confidence in their ability to simultaneously retrieve these parameters. Further quantification of the sensitivity and the information provided by these bands can then be used to design an optimal retrieval scheme. While this information content analysis is employed on a theoretical retrieval combining simulated radiance measurements, the methodology could in general be applied to any instrument or retrieval approach.
NASA Astrophysics Data System (ADS)
Xu, Pengcheng; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi; Liu, Jiufu; Zou, Ying; He, Ruimin
2017-12-01
Hydrometeorological data are needed for obtaining point and areal means, quantifying the spatial variability of hydrometeorological variables, and calibrating and verifying hydrometeorological models. Hydrometeorological networks are utilized to collect such data. Since data collection is expensive, it is essential to design an optimal network based on the minimal number of hydrometeorological stations in order to reduce costs. This study proposes a two-phase copula entropy-based multiobjective optimization approach that includes: (1) copula entropy-based directional information transfer (CDIT) for clustering the potential hydrometeorological gauges into several groups, and (2) a multiobjective method for selecting the optimal combination of gauges for regionalized groups. Although entropy theory has been employed for network design before, the joint histogram method used for mutual information estimation has several limitations. The copula entropy-based mutual information (MI) estimation method is shown to be more effective for quantifying the uncertainty of redundant information than the joint histogram (JH) method. The effectiveness of this approach is verified by applying it to one type of hydrometeorological gauge network, with the use of three model evaluation measures, including the Nash-Sutcliffe Coefficient (NSC), the arithmetic mean of the negative copula entropy (MNCE), and MNCE/NSC. Results indicate that the two-phase copula entropy-based multiobjective technique is capable of evaluating the performance of regional hydrometeorological networks and can enable decision makers to develop strategies for water resources management.
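A minimal sketch of the copula-entropy route to mutual information is given below: rank-transform two series to empirical-copula pseudo-observations, estimate the copula density on a grid, and use MI = -(copula entropy). The gamma-distributed "gauge" series and the histogram plug-in estimator are illustrative assumptions, not the paper's data or estimator.

```python
import numpy as np

def copula_entropy_mi(x, y, bins=12):
    """Mutual information (nats) between two series via copula entropy:
    MI(x, y) = -H_c(u, v), where (u, v) are rank-transformed
    pseudo-observations. Histogram plug-in sketch only."""
    n = len(x)
    u = np.argsort(np.argsort(x)) / (n - 1.0)     # empirical copula margins
    v = np.argsort(np.argsort(y)) / (n - 1.0)
    hist, _, _ = np.histogram2d(u, v, bins=bins, range=[[0, 1], [0, 1]])
    p = hist / hist.sum()
    nz = p > 0
    c = p[nz] * bins * bins                       # copula density on the grid
    copula_entropy = -np.sum(p[nz] * np.log(c))
    return -copula_entropy                        # MI in nats

rng = np.random.default_rng(6)
rain_a = rng.gamma(2.0, 10.0, 3000)                  # hypothetical gauge A
rain_b = 0.7 * rain_a + rng.gamma(2.0, 5.0, 3000)    # correlated gauge B
rain_c = rng.gamma(2.0, 10.0, 3000)                  # independent gauge C
print(copula_entropy_mi(rain_a, rain_b), copula_entropy_mi(rain_a, rain_c))
```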
Modelling information flow along the human connectome using maximum flow.
Lyoo, Youngwook; Kim, Jieun E; Yoon, Sujung
2018-01-01
The human connectome is a complex network that transmits information between interlinked brain regions. Using graph theory, previously well-known network measures of integration between brain regions have been constructed under the key assumption that information flows strictly along the shortest paths possible between two nodes. However, it is now apparent that information does flow through non-shortest paths in many real-world networks such as cellular networks, social networks, and the internet. In the current hypothesis, we present a novel framework using the maximum flow to quantify information flow along all possible paths within the brain, so as to implement an analogy to network traffic. We hypothesize that the connection strengths of brain networks represent a limit on the amount of information that can flow through the connections per unit of time. This allows us to compute the maximum amount of information flow between two brain regions along all possible paths. Using this novel framework of maximum flow, previous network topological measures are expanded to account for information flow through non-shortest paths. The most important advantage of the current approach using maximum flow is that it can integrate the weighted connectivity data in a way that better reflects the real information flow of the brain network. The current framework and its concept regarding maximum flow provides insight on how network structure shapes information flow in contrast to graph theory, and suggests future applications such as investigating structural and functional connectomes at a neuronal level. Copyright © 2017 Elsevier Ltd. All rights reserved.
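A small sketch of the proposed max-flow view, using networkx on a toy weighted graph whose region names and capacities are invented, is shown below; it contrasts the maximum flow over all paths with a single shortest path.

```python
import networkx as nx

# Toy weighted "connectome": edge weights act as capacities, i.e. limits on
# information per unit time. Region names and values are invented.
G = nx.Graph()
G.add_weighted_edges_from(
    [("V1", "V2", 3.0), ("V2", "PPC", 2.0), ("V1", "PPC", 1.0),
     ("PPC", "PFC", 2.5), ("V2", "PFC", 0.5)],
    weight="capacity",
)
D = G.to_directed()  # each undirected edge becomes two arcs with the same capacity

# Maximum information flow between two regions along *all* possible paths.
flow_value, flow_dict = nx.maximum_flow(D, "V1", "PFC", capacity="capacity")

# Shortest-path view for comparison: only one route is considered.
hop_path = nx.shortest_path(G, "V1", "PFC")

print("max flow V1 -> PFC:", flow_value)   # uses parallel, non-shortest routes too
print("one shortest path :", hop_path)
```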
Entropy, energy, and entanglement of localized states in bent triatomic molecules
NASA Astrophysics Data System (ADS)
Yuan, Qiang; Hou, Xi-Wen
2017-05-01
The dynamics of quantum entropy, energy, and entanglement is studied for various initial states in an important spectroscopic Hamiltonian of the bent triatomic molecules H2O, D2O, and H2S. The total quantum correlation is quantified in terms of the mutual information, and the entanglement by the concurrence borrowed from the theory of quantum information. The Pauli entropy and the intramolecular energy usually used in the theory of molecules are calculated to establish a possible relationship between both theories. Sections of two quantities among these four quantities are introduced to visualize such a relationship. Analytic and numerical simulations demonstrate that if an initial state is taken to be the stretch- or the bend-vibrationally localized state, the mutual information, the Pauli entropy, and the concurrence are dominantly positively correlated, while they are dominantly anti-correlated with the interaction energy among the three anharmonic vibrational modes. In particular, such correlation is more distinct for the localized state with high excitations in the bending mode. The clear quasi-periodicity of those quantities in the D2O molecule reveals that this molecule, prepared in a localized state in the stretching or the bending mode, may be better suited for molecular quantum computation. However, the dynamical correlations of those quantities behave irregularly for the delocalized states. Moreover, the hierarchy of the mutual information and the Pauli entropy is explicitly proved. Quantum entropy and energy in every vibrational mode are investigated. Thereby, the relation between bipartite and tripartite entanglements is discussed as well. These results are useful for understanding quantum correlations in high-dimensional states of polyatomic molecules from the perspectives of quantum information and intramolecular dynamics.
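For readers unfamiliar with the two quantifiers borrowed from quantum information, the sketch below computes the quantum mutual information and the Wootters concurrence for a simple two-qubit mixed state; it is a generic stand-in, not the vibrational-mode states of the molecular Hamiltonian studied here.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def mutual_information(rho):
    """Quantum mutual information I = S(A) + S(B) - S(AB) of a 2-qubit state."""
    r = rho.reshape(2, 2, 2, 2)
    rho_a = np.trace(r, axis1=1, axis2=3)
    rho_b = np.trace(r, axis1=0, axis2=2)
    return (von_neumann_entropy(rho_a) + von_neumann_entropy(rho_b)
            - von_neumann_entropy(rho))

def concurrence(rho):
    """Wootters concurrence of a 2-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    r = rho @ yy @ rho.conj() @ yy
    w = np.sort(np.sqrt(np.abs(np.linalg.eigvals(r))))[::-1]
    return float(max(0.0, w[0] - w[1] - w[2] - w[3]))

# Toy state: a Bell state mixed with white noise (a Werner-like state).
bell = np.zeros((4, 4), complex)
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
p = 0.8
rho = p * bell + (1 - p) * np.eye(4) / 4
print("mutual information:", round(mutual_information(rho), 3),
      "concurrence:", round(concurrence(rho), 3))
```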
Nong, Duong H; Lepczyk, Christopher A; Miura, Tomoaki; Fox, Jefferson M
2018-01-01
Urbanization has been driven by various social, economic, and political factors around the world for centuries. Because urbanization continues unabated in many places, it is crucial to understand patterns of urbanization and their potential ecological and environmental impacts. Given this need, the objectives of our study were to quantify urban growth rates, growth modes, and resultant changes in the landscape pattern of urbanization in Hanoi, Vietnam from 1993 to 2010 and to evaluate the extent to which the process of urban growth in Hanoi conformed to the diffusion-coalescence theory. We analyzed the spatiotemporal patterns and dynamics of the built-up land in Hanoi using landscape expansion modes, spatial metrics, and a gradient approach. Urbanization was most pronounced in the periods of 2001-2006 and 2006-2010 at a distance of 10 to 35 km around the urban center. Over the 17 year period urban expansion in Hanoi was dominated by infilling and edge expansion growth modes. Our findings support the diffusion-coalescence theory of urbanization. The shift of the urban growth areas over time and the dynamic nature of the spatial metrics revealed important information about our understanding of the urban growth process and cycle. Furthermore, our findings can be used to evaluate urban planning policies and aid in urbanization issues in rapidly urbanizing countries.
Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A
2013-01-01
Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), analysis over a range of scores ("Shift") has been proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by these uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied by the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. We show that when uncertainty in assessments is considered, the lowest error rates are obtained with dichotomization. While using the full range of mRS is conceptually appealing, a gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
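The error bookkeeping described above can be illustrated with a small sketch: an assumed trial distribution over mRS 0-6 is pushed through a made-up inter-rater confusion matrix, and the induced misclassification rate is compared between the full "shift" scale and dichotomized cut-points. Neither the distribution nor the confusion matrix is taken from the paper or from published reliability data.

```python
import numpy as np

# Hypothetical trial-arm distribution over mRS 0-6 and an invented inter-rater
# confusion matrix (rows: true score, columns: assigned score).
p_mrs = np.array([0.10, 0.15, 0.15, 0.20, 0.20, 0.10, 0.10])
confusion = np.zeros((7, 7))
for i in range(7):
    confusion[i, i] = 0.74
    # Spread the remaining probability to the adjacent scores (fold at the ends).
    confusion[i, i - 1 if i > 0 else i] += 0.13
    confusion[i, i + 1 if i < 6 else i] += 0.13

def error_rate_full_scale(p, conf):
    """Probability that the assigned mRS differs from the true mRS."""
    return float(sum(p[i] * (1.0 - conf[i, i]) for i in range(7)))

def error_rate_dichotomized(p, conf, cut):
    """Probability that the assigned score lands on the wrong side of a
    dichotomization at mRS <= cut."""
    err = 0.0
    for i in range(7):
        wrong_side = [j for j in range(7) if (j <= cut) != (i <= cut)]
        err += p[i] * conf[i, wrong_side].sum()
    return float(err)

print("full-scale error:", round(error_rate_full_scale(p_mrs, confusion), 3))
for cut in (1, 2):
    print(f"dichotomized at mRS<={cut}:",
          round(error_rate_dichotomized(p_mrs, confusion, cut), 3))
```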
Potential energy landscapes identify the information-theoretic nature of the epigenome
Jenkinson, Garrett; Pujadas, Elisabet; Goutsias, John; Feinberg, Andrew P.
2017-01-01
Epigenetics studies genomic modifications carrying information independent of DNA sequence heritable through cell division. In 1940, Waddington coined the term “epigenetic landscape” as a metaphor for pluripotency and differentiation, but methylation landscapes have not yet been rigorously computed. By using principles of statistical physics and information theory, we derive epigenetic energy landscapes from whole-genome bisulfite sequencing data that allow us to quantify methylation stochasticity genome-wide using Shannon’s entropy and associate entropy with chromatin structure. Moreover, we consider the Jensen-Shannon distance between sample-specific energy landscapes as a measure of epigenetic dissimilarity and demonstrate its effectiveness for discerning epigenetic differences. By viewing methylation maintenance as a communications system, we introduce methylation channels and show that higher-order chromatin organization can be predicted from their informational properties. Our results provide a fundamental understanding of the information-theoretic nature of the epigenome that leads to a powerful approach for studying its role in disease and aging. PMID:28346445
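As a rough illustration of the two quantifiers named above, the sketch below computes Shannon entropy and the Jensen-Shannon distance for two hypothetical binned methylation-level distributions; it glosses over the potential-energy-landscape machinery of the paper and works directly with discrete distributions.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def js_distance(p, q):
    """Jensen-Shannon distance (square root of the JS divergence, in bits)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    jsd = shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))
    return np.sqrt(jsd)

# Hypothetical methylation-level distributions (fraction of CpGs per
# methylation bin) for two samples; real inputs would come from WGBS data.
normal = np.array([0.70, 0.10, 0.05, 0.05, 0.10])
tumour = np.array([0.40, 0.15, 0.15, 0.15, 0.15])

print("entropy (normal):", shannon_entropy(normal))
print("entropy (tumour):", shannon_entropy(tumour))
print("JS distance     :", js_distance(normal, tumour))
```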
Complexity Variability Assessment of Nonlinear Time-Varying Cardiovascular Control
NASA Astrophysics Data System (ADS)
Valenza, Gaetano; Citi, Luca; Garcia, Ronald G.; Taylor, Jessica Noggle; Toschi, Nicola; Barbieri, Riccardo
2017-02-01
The application of complex systems theory to physiology and medicine has provided meaningful information about the nonlinear aspects underlying the dynamics of a wide range of biological processes and their disease-related aberrations. However, no studies have investigated whether meaningful information can be extracted by quantifying second-order moments of time-varying cardiovascular complexity. To this extent, we introduce a novel mathematical framework termed complexity variability, in which the variance of instantaneous Lyapunov spectra estimated over time serves as a reference quantifier. We apply the proposed methodology to four exemplary studies involving disorders which stem from cardiology, neurology and psychiatry: Congestive Heart Failure (CHF), Major Depression Disorder (MDD), Parkinson’s Disease (PD), and Post-Traumatic Stress Disorder (PTSD) patients with insomnia under a yoga training regime. We show that complexity assessments derived from simple time-averaging are not able to discern pathology-related changes in autonomic control, and we demonstrate that between-group differences in measures of complexity variability are consistent across pathologies. Pathological states such as CHF, MDD, and PD are associated with an increased complexity variability when compared to healthy controls, whereas wellbeing derived from yoga in PTSD is associated with lower time-variance of complexity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, Andrew J.; Miller, Brian W.; Robinson, Sean M.
Imaging technology is generally considered too invasive for arms control inspections due to the concern that it cannot properly secure sensitive features of the inspected item. But, this same sensitive information, which could include direct information on the form and function of the items under inspection, could be used for robust arms control inspections. The single-pixel X-ray imager (SPXI) is introduced as a method to make such inspections, capturing the salient spatial information of an object in a secure manner while never forming an actual image. We built this method on the theory of compressive sensing and the single pixelmore » optical camera. The performance of the system is quantified using simulated inspections of simple objects. Measures of the robustness and security of the method are introduced and used to determine how robust and secure such an inspection would be. Particularly, it is found that an inspection with low noise (<1%) and high undersampling (>256×) exhibits high robustness and security.« less
Zhang, Wen; Liu, Peiqing; Guo, Hao; Wang, Jinjun
2017-11-01
The permutation entropy and the statistical complexity are employed to study the boundary-layer transition induced by the surface roughness. The velocity signals measured in the transition process are analyzed with these symbolic quantifiers, as well as the complexity-entropy causality plane, and the chaotic nature of the instability fluctuations is identified. The frequency of the dominant fluctuations has been found according to the time scales corresponding to the extreme values of the symbolic quantifiers. The laminar-turbulent transition process is accompanied by the evolution in the degree of organization of the complex eddy motions, which is also characterized with the growing smaller and flatter circles in the complexity-entropy causality plane. With the help of the permutation entropy and the statistical complexity, the differences between the chaotic fluctuations detected in the experiments and the classical Tollmien-Schlichting wave are shown and discussed. It is also found that the chaotic features of the instability fluctuations can be approximated with a number of regular sine waves superimposed on the fluctuations of the undisturbed laminar boundary layer. This result is related to the physical mechanism in the generation of the instability fluctuations, which is the noise-induced chaos.
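For readers unfamiliar with these symbolic quantifiers, the following sketch estimates the normalized permutation entropy H and an MPR-style statistical complexity C for a time series, placing it on the complexity-entropy causality plane. The embedding dimension, test signals and noise levels are arbitrary choices for illustration.

```python
import numpy as np
from itertools import permutations
from math import factorial, log

def permutation_distribution(x, d=4, tau=1):
    """Relative frequencies of ordinal patterns of embedding dimension d."""
    patterns = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - (d - 1) * tau):
        window = x[i:i + d * tau:tau]
        patterns[tuple(np.argsort(window).tolist())] += 1
    counts = np.array(list(patterns.values()), dtype=float)
    return counts / counts.sum()

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def complexity_entropy(x, d=4):
    """Normalized permutation entropy H and MPR-style statistical complexity C."""
    p = permutation_distribution(x, d)
    n = factorial(d)
    u = np.full(n, 1.0 / n)                 # uniform reference distribution
    H = entropy(p) / log(n)                 # normalized entropy
    m = 0.5 * (p + u)
    jsd = entropy(m) - 0.5 * (entropy(p) + entropy(u))
    # maximum JS divergence against the uniform distribution (normalization)
    jsd_max = -0.5 * ((n + 1) / n * log(n + 1) - 2 * log(2 * n) + log(n))
    return H, jsd / jsd_max * H

# Example: noisy periodic signal versus pure white noise
t = np.arange(5000)
signal = np.sin(0.2 * t) + 0.3 * np.random.randn(t.size)
noise = np.random.randn(t.size)
print("signal (H, C):", complexity_entropy(signal))
print("noise  (H, C):", complexity_entropy(noise))
```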
Analysis of prescription database extracted from standard textbooks of traditional Dai medicine.
Zhang, Chuang; Chongsuvivatwong, Virasakdi; Keawpradub, Niwat; Lin, Yanfang
2012-08-29
Traditional Dai Medicine (TDM) is one of the four major ethnomedicines of China. In 2007 a group of experts produced a set of seven Dai medical textbooks on this subject. The first two were selected as the main data source to analyse well-recognized prescriptions. The objective was to quantify patterns of prescriptions, common ingredients, indications and usages of TDM. A relational database linking the prescriptions, ingredients, herb names, indications, and usages was set up. Frequencies of combination patterns and common ingredients were tabulated. A total of 200 prescriptions and 402 herbs were compiled. Prescriptions based on "wind" disorders, a detoxification theory that most commonly deals with symptoms of digestive system diseases, accounted for over one third of all prescriptions. The major methods of preparation mostly used roots and whole herbs. The information extracted from the relational database may be useful for understanding symptomatic treatments. Antidote and detoxification theory deserves further research.
NASA Astrophysics Data System (ADS)
Melendez, Jordan; Wesolowski, Sarah; Furnstahl, Dick
2017-09-01
Chiral effective field theory (EFT) predictions are necessarily truncated at some order in the EFT expansion, which induces an error that must be quantified for robust statistical comparisons to experiment. A Bayesian model yields posterior probability distribution functions for these errors based on expectations of naturalness encoded in Bayesian priors and the observed order-by-order convergence pattern of the EFT. As a general example of a statistical approach to truncation errors, the model was applied to chiral EFT for neutron-proton scattering using various semi-local potentials of Epelbaum, Krebs, and Meißner (EKM). Here we discuss how our model can learn correlation information from the data and how to perform Bayesian model checking to validate that the EFT is working as advertised. Supported in part by NSF PHY-1614460 and DOE NUCLEI SciDAC DE-SC0008533.
New Aspects of Probabilistic Forecast Verification Using Information Theory
NASA Astrophysics Data System (ADS)
Tödter, Julian; Ahrens, Bodo
2013-04-01
This work deals with information-theoretical methods in probabilistic forecast verification, particularly concerning ensemble forecasts. Recent findings concerning the "Ignorance Score" are briefly reviewed, and a consistent generalization to continuous forecasts is motivated. For ensemble-generated forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are prominent verification measures for probabilistic forecasts. In particular, their decompositions into measures quantifying the reliability, resolution and uncertainty of the forecasts are attractive. Information theory sets up a natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can likewise be generalized to a ranked version (RIGN). Here, the IGN, its generalizations and decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The useful properties of the conceptually appealing CRIGN are illustrated, together with an algorithm to evaluate its components (reliability, resolution, and uncertainty) for ensemble-generated forecasts. This algorithm can also be used to calculate the decomposition of the more traditional CRPS exactly. The applicability of the "new" measures is demonstrated in a small evaluation study of ensemble-based precipitation forecasts.
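A minimal numerical illustration of the two scores being compared: the Brier score and the (logarithmic) Ignorance score evaluated on ensemble-derived event probabilities. The synthetic ensemble below is a placeholder; the decompositions and ranked/continuous generalizations discussed in the abstract are not reproduced here.

```python
import numpy as np

def brier_score(p_fcst, outcome):
    """Brier score for a probabilistic forecast of a binary event."""
    return (p_fcst - outcome) ** 2

def ignorance_score(p_fcst, outcome, eps=1e-12):
    """Ignorance (logarithmic) score in bits: -log2 of the probability
    assigned to the event that actually occurred."""
    p_event = np.where(outcome == 1, p_fcst, 1.0 - p_fcst)
    return -np.log2(np.clip(p_event, eps, 1.0))

# Hypothetical ensemble forecasts: each row holds the members' binary
# predictions (e.g. "precipitation exceeds threshold") for one day.
rng = np.random.default_rng(1)
members = rng.integers(0, 2, size=(365, 20))
p_fcst = members.mean(axis=1)           # ensemble relative frequency
outcome = rng.integers(0, 2, size=365)  # what actually happened

print("mean Brier score    :", brier_score(p_fcst, outcome).mean())
print("mean Ignorance score:", ignorance_score(p_fcst, outcome).mean())
```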
Convex geometry of quantum resource quantification
NASA Astrophysics Data System (ADS)
Regula, Bartosz
2018-01-01
We introduce a framework unifying the mathematical characterisation of different measures of general quantum resources and allowing for a systematic way to define a variety of faithful quantifiers for any given convex quantum resource theory. The approach allows us to describe many commonly used measures such as matrix norm-based quantifiers, robustness measures, convex roof-based measures, and witness-based quantifiers together in a common formalism based on the convex geometry of the underlying sets of resource-free states. We establish easily verifiable criteria for a measure to possess desirable properties such as faithfulness and strong monotonicity under relevant free operations, and show that many quantifiers obtained in this framework indeed satisfy them for any considered quantum resource. We derive various bounds and relations between the measures, generalising and providing significantly simplified proofs of results found in the resource theories of quantum entanglement and coherence. We also prove that the quantification of resources in this framework simplifies for pure states, allowing us to obtain more easily computable forms of the considered measures, and show that many of them are in fact equal on pure states. Further, we investigate the dual formulation of resource quantifiers, which provide a characterisation of the sets of resource witnesses. We present an explicit application of the results to the resource theories of multi-level coherence, entanglement of Schmidt number k, multipartite entanglement, as well as magic states, providing insight into the quantification of the four resources by establishing novel quantitative relations and introducing new quantifiers, such as a measure of entanglement of Schmidt number k which generalises the convex roof-extended negativity, a measure of k-coherence which generalises the …
Harmonic analysis of electric locomotive and traction power system based on wavelet singular entropy
NASA Astrophysics Data System (ADS)
Dun, Xiaohong
2018-05-01
With the rapid development of high-speed railway and heavy-haul transport, the locomotive and traction power system has become the main harmonic source of China's power grid. In response to this phenomenon, the system's power quality issues need timely monitoring, assessment and governance. Wavelet singular entropy is an organic combination of wavelet transform, singular value decomposition and information entropy theory, combining the distinct advantages of the three in signal processing: the time-frequency localization of the wavelet transform, the extraction of the basic modal characteristics of the data by singular value decomposition, and the quantification of the feature data by information entropy. Based on the theory of singular value decomposition, the wavelet coefficient matrix obtained after the wavelet transform is decomposed into a series of singular values that reflect the basic characteristics of the original coefficient matrix. The statistical properties of information entropy are then used to analyze the uncertainty of the singular value set, so as to give a definite measurement of the complexity of the original signal. Wavelet singular entropy therefore has good application prospects in fault detection, classification and protection. The MATLAB simulation shows that wavelet singular entropy is effective for harmonic analysis of the locomotive and traction power system.
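The pipeline described above (wavelet transform, then singular value decomposition of the coefficient matrix, then Shannon entropy of the normalized singular values) can be sketched in a few lines. The implementation below uses the stationary wavelet transform from PyWavelets so that all levels share the same length; the wavelet, level and test signal are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_singular_entropy(x, wavelet="db4", level=5):
    """Wavelet singular entropy: SWT -> coefficient matrix -> SVD -> entropy."""
    coeffs = pywt.swt(x, wavelet, level=level)            # list of (cA, cD)
    detail_matrix = np.vstack([cD for _, cD in coeffs])   # levels x len(x)
    s = np.linalg.svd(detail_matrix, compute_uv=False)
    p = s / s.sum()                                        # normalized singular values
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Hypothetical traction-current signal: 50 Hz fundamental plus 3rd/5th
# harmonics and noise, with the length a power of two as SWT requires.
fs, n = 6400, 1024
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * 50 * t)
distorted = clean + 0.3 * np.sin(2 * np.pi * 150 * t) + 0.2 * np.sin(2 * np.pi * 250 * t)

print("WSE, clean    :", wavelet_singular_entropy(clean + 0.01 * np.random.randn(n)))
print("WSE, distorted:", wavelet_singular_entropy(distorted + 0.01 * np.random.randn(n)))
```

A richer harmonic content spreads energy across more singular values, which raises the entropy; that is the feature the abstract exploits for harmonic monitoring.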
A holographic model for black hole complementarity
Lowe, David A.; Thorlacius, Larus
2016-12-07
Here, we explore a version of black hole complementarity, where an approximate semiclassical effective field theory for interior infalling degrees of freedom emerges holographically from an exact evolution of exterior degrees of freedom. The infalling degrees of freedom have a complementary description in terms of outgoing Hawking radiation and must eventually decohere with respect to the exterior Hamiltonian, leading to a breakdown of the semiclassical description for an infaller. Trace distance is used to quantify the difference between the complementary time evolutions, and to define a decoherence time. We propose a dictionary where the evolution with respect to the bulk effective Hamiltonian corresponds to mean field evolution in the holographic theory. In a particular model for the holographic theory, which exhibits fast scrambling, the decoherence time coincides with the scrambling time. The results support the hypothesis that decoherence of the infalling holographic state and disruptive bulk effects near the curvature singularity are complementary descriptions of the same physics, which is an important step toward resolving the black hole information paradox.
Influence of sediment storage on downstream delivery of contaminated sediment
Malmon, Daniel V.; Reneau, Steven L.; Dunne, Thomas; Katzman, Danny; Drakos, Paul G.
2005-01-01
Sediment storage in alluvial valleys can strongly modulate the downstream migration of sediment and associated contaminants through landscapes. Traditional methods for routing contaminated sediment through valleys focus on in‐channel sediment transport but ignore the influence of sediment exchanges with temporary sediment storage reservoirs outside the channel, such as floodplains. In theory, probabilistic analysis of particle trajectories through valleys offers a useful strategy for quantifying the influence of sediment storage on the downstream movement of contaminated sediment. This paper describes a field application and test of this theory, using 137Cs as a sediment tracer over 45 years (1952–1997), downstream of a historical effluent outfall at the Los Alamos National Laboratory (LANL), New Mexico. The theory is parameterized using a sediment budget based on field data and an estimate of the 137Cs release history at the upstream boundary. The uncalibrated model reasonably replicates the approximate magnitude and spatial distribution of channel‐ and floodplain‐stored 137Cs measured in an independent field study. Model runs quantify the role of sediment storage in the long‐term migration of a pulse of contaminated sediment, quantify the downstream impact of upstream mitigation, and mathematically decompose the future 137Cs flux near the LANL property boundary to evaluate the relative contributions of various upstream contaminant sources. The fate of many sediment‐bound contaminants is determined by the relative timescales of contaminant degradation and particle residence time in different types of sedimentary environments. The theory provides a viable approach for quantifying the long‐term movement of contaminated sediment through valleys.
Detecting spatio-temporal modes in multivariate data by entropy field decomposition
NASA Astrophysics Data System (ADS)
Frank, Lawrence R.; Galinsky, Vitaly L.
2016-09-01
A new data analysis method that addresses a general problem of detecting spatio-temporal variations in multivariate data is presented. The method utilizes two recent and complementary general approaches to data analysis, information field theory (IFT) and entropy spectrum pathways (ESP). Both methods reformulate and incorporate Bayesian theory, thus use prior information to uncover underlying structure of the unknown signal. Unification of ESP and IFT creates an approach that is non-Gaussian and nonlinear by construction and is found to produce unique spatio-temporal modes of signal behavior that can be ranked according to their significance, from which space-time trajectories of parameter variations can be constructed and quantified. Two brief examples of real-world applications of the theory, to the analysis of data of completely different and unrelated nature lacking any underlying similarity, are also presented. The first example provides an analysis of resting state functional magnetic resonance imaging data that allowed us to create an efficient and accurate computational method for assessing and categorizing brain activity. The second example demonstrates the potential of the method in the application to the analysis of a strong atmospheric storm circulation system during the complicated stage of tornado development and formation using data recorded by a mobile Doppler radar. Reference implementation of the method will be made available as a part of the QUEST toolkit that is currently under development at the Center for Scientific Computation in Imaging.
A modified belief entropy in Dempster-Shafer framework.
Zhou, Deyun; Tang, Yongchuan; Jiang, Wen
2017-01-01
How to quantify the uncertain information in the framework of Dempster-Shafer evidence theory is still an open issue. Quite a few uncertainty measures have been proposed in Dempster-Shafer framework, however, the existing studies mainly focus on the mass function itself, the available information represented by the scale of the frame of discernment (FOD) in the body of evidence is ignored. Without taking full advantage of the information in the body of evidence, the existing methods are somehow not that efficient. In this paper, a modified belief entropy is proposed by considering the scale of FOD and the relative scale of a focal element with respect to FOD. Inspired by Deng entropy, the new belief entropy is consistent with Shannon entropy in the sense of probability consistency. What's more, with less information loss, the new measure can overcome the shortage of some other uncertainty measures. A few numerical examples and a case study are presented to show the efficiency and superiority of the proposed method.
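For orientation, the sketch below computes Deng entropy, the baseline that the modified belief entropy builds on; the proposed modification additionally accounts for the scale of the frame of discernment, which is not reproduced here. The example bodies of evidence are hypothetical.

```python
from math import log2

def deng_entropy(mass):
    """Deng entropy of a basic probability assignment.

    `mass` maps each focal element (a frozenset of hypotheses) to its mass.
    For singleton-only assignments this reduces to Shannon entropy.
    """
    e = 0.0
    for focal, m in mass.items():
        if m > 0:
            e -= m * log2(m / (2 ** len(focal) - 1))
    return e

# Frame of discernment {a, b, c}; two example bodies of evidence.
precise = {frozenset("a"): 0.6, frozenset("b"): 0.3, frozenset("c"): 0.1}
vague = {frozenset("a"): 0.4, frozenset("ab"): 0.3, frozenset("abc"): 0.3}

print("Deng entropy, precise evidence:", deng_entropy(precise))
print("Deng entropy, vague evidence  :", deng_entropy(vague))
```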
Amplification of Information by Photons and the Quantum Chernoff Bound
NASA Astrophysics Data System (ADS)
Zwolak, Michael; Riedel, C. Jess; Zurek, Wojciech H.
2014-03-01
Amplification was regarded, since the early days of quantum theory, as a mysterious ingredient that endows quantum microstates with macroscopic consequences, key to the "collapse of the wavepacket," and a way to avoid embarrassing problems exemplified by Schrödinger's cat. This bridge between the quantum microworld and the classical world of our experience was postulated ad hoc in the Copenhagen Interpretation. Quantum Darwinism views amplification as replication, in many copies, of information about quantum states. We show that such amplification is a natural consequence of a broad class of models of decoherence, including the photon environment we use to obtain most of our information. The resultant amplification is huge, proportional to #ξQCB, where # is the environment size and ξQCB is the "typical" Quantum Chernoff Information, which quantifies the efficiency of the amplification. The information communicated through the environment is imprinted in the states of individual environment subsystems, e.g., in single photons, which document the transfer of information into the environment and result in the emergence of the classical world. See http://mike.zwolak.org
Quantifying a Negative: How Homeland Security Adds Value
2015-12-01
access to future victims. The Law Enforcement agency could then identify and quantify the value of future crimes. For example, if a serial killer is captured with evidence of the next victim or an established pattern of victimization, network theory could be used to identify the next
Choice Experiments to Quantify Preferences for Health and Healthcare: State of the Practice.
Mühlbacher, Axel; Johnson, F Reed
2016-06-01
Stated-preference methods increasingly are used to quantify preferences in health economics, health technology assessment, benefit-risk analysis and health services research. The objective of stated-preference studies is to acquire information about trade-off preferences among treatment outcomes, prioritization of clinical decision criteria, likely uptake or adherence to healthcare products and acceptability of healthcare services or policies. A widely accepted approach to eliciting preferences is discrete-choice experiments. Patient, physician, insurant or general-public respondents choose among constructed, experimentally controlled alternatives described by decision-relevant features or attributes. Attributes can represent complete health states, sets of treatment outcomes or characteristics of a healthcare system. The observed pattern of choice reveals how different respondents or groups of respondents implicitly weigh, value and assess different characteristics of treatments, products or services. An important advantage of choice experiments is their foundation in microeconomic utility theory. This conceptual framework provides tests of internal validity, guidance for statistical analysis of latent preference structures, and testable behavioural hypotheses. Choice experiments require expertise in survey-research methods, random-utility theory, experimental design and advanced statistical analysis. This paper should be understood as an introduction to setting up a basic experiment rather than an exhaustive critique of the latest findings and procedures. Where appropriate, we have identified topics of active research where a broad consensus has not yet been established.
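The random-utility core of a discrete-choice experiment reduces to conditional-logit (softmax) choice probabilities over attribute-based utilities, as in the hedged sketch below; the attributes and preference weights are invented for illustration, not estimates from any study.

```python
import numpy as np

def choice_probabilities(attributes, beta):
    """Conditional-logit choice probabilities for one choice set.

    Each alternative's utility is a linear index of its attributes plus an
    i.i.d. extreme-value error, which gives the familiar softmax form.
    """
    v = attributes @ beta          # deterministic utilities
    expv = np.exp(v - v.max())     # subtract the max for numerical stability
    return expv / expv.sum()

# Hypothetical choice set: columns = [efficacy gain, side-effect risk, cost],
# rows = three treatment alternatives presented to a respondent.
alternatives = np.array([
    [0.8, 0.10, 300.0],
    [0.5, 0.02, 150.0],
    [0.0, 0.00, 0.0],     # opt-out / no treatment
])
beta = np.array([2.0, -15.0, -0.004])   # illustrative preference weights

print(choice_probabilities(alternatives, beta))
```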
Lepczyk, Christopher A.; Miura, Tomoaki; Fox, Jefferson M.
2018-01-01
Urbanization has been driven by various social, economic, and political factors around the world for centuries. Because urbanization continues unabated in many places, it is crucial to understand patterns of urbanization and their potential ecological and environmental impacts. Given this need, the objectives of our study were to quantify urban growth rates, growth modes, and resultant changes in the landscape pattern of urbanization in Hanoi, Vietnam from 1993 to 2010 and to evaluate the extent to which the process of urban growth in Hanoi conformed to the diffusion-coalescence theory. We analyzed the spatiotemporal patterns and dynamics of the built-up land in Hanoi using landscape expansion modes, spatial metrics, and a gradient approach. Urbanization was most pronounced in the periods of 2001–2006 and 2006–2010 at a distance of 10 to 35 km around the urban center. Over the 17 year period urban expansion in Hanoi was dominated by infilling and edge expansion growth modes. Our findings support the diffusion-coalescence theory of urbanization. The shift of the urban growth areas over time and the dynamic nature of the spatial metrics revealed important information about our understanding of the urban growth process and cycle. Furthermore, our findings can be used to evaluate urban planning policies and aid in urbanization issues in rapidly urbanizing countries. PMID:29734346
Quantifying the bending of bilayer temperature-sensitive hydrogels
NASA Astrophysics Data System (ADS)
Dong, Chenling; Chen, Bin
2017-04-01
Stimuli-responsive hydrogels can serve as manipulators, including grippers, sensors, etc., where structures can undergo significant bending. Here, a finite-deformation theory is developed to quantify the evolution of the curvature of bilayer temperature-sensitive hydrogels when subjected to a temperature change. Analysis of the theory indicates that there is an optimal thickness ratio to acquire the largest curvature in the bilayer and also suggests that the sign or the magnitude of the curvature can be significantly affected by pre-stretches or small pores in the bilayer. This study may provide important guidelines in fabricating temperature-responsive bilayers with desirable mechanical performance.
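The finite-deformation theory itself is not reproduced here, but the existence of an optimal thickness ratio can be illustrated with the classical small-strain Timoshenko bimetal-strip formula, used below as a stand-in; the mismatch strain, moduli and thicknesses are arbitrary example values.

```python
import numpy as np

def bilayer_curvature(mismatch_strain, t_active, t_passive, E_active, E_passive):
    """Timoshenko bimetal-strip curvature of a bilayer.

    `mismatch_strain` is the differential (swelling) strain between layers;
    thicknesses t_* and moduli E_* describe the two layers.
    """
    m = t_active / t_passive       # thickness ratio
    n = E_active / E_passive       # modulus ratio
    h = t_active + t_passive       # total thickness
    denom = 3 * (1 + m) ** 2 + (1 + m * n) * (m ** 2 + 1 / (m * n))
    return 6 * mismatch_strain * (1 + m) ** 2 / (h * denom)

# Sweep the thickness ratio to locate the maximum-curvature design
# (illustrative values: 10% swelling mismatch, equal moduli, 100 um total).
ratios = np.linspace(0.1, 10, 200)
total = 100e-6
kappa = [bilayer_curvature(0.10, total * r / (1 + r), total / (1 + r), 1.0, 1.0)
         for r in ratios]
best = ratios[int(np.argmax(kappa))]
print(f"optimal thickness ratio ~ {best:.2f}, curvature {max(kappa):.1f} 1/m")
```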
Quantifying Bell nonlocality with the trace distance
NASA Astrophysics Data System (ADS)
Brito, S. G. A.; Amaral, B.; Chaves, R.
2018-02-01
Measurements performed on distant parts of an entangled quantum state can generate correlations incompatible with classical theories respecting the assumption of local causality. This is the phenomenon known as quantum nonlocality that, apart from its fundamental role, can also be put to practical use in applications such as cryptography and distributed computing. Clearly, developing ways of quantifying nonlocality is an important primitive in this scenario. Here, we propose to quantify the nonlocality of a given probability distribution via its trace distance to the set of classical correlations. We show that this measure is a monotone under the free operations of a resource theory and, furthermore, that it can be computed efficiently with a linear program. We put our framework to use in a variety of relevant Bell scenarios also comparing the trace distance to other standard measures in the literature.
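The linear-program formulation mentioned in the abstract can be sketched directly for the simplest (CHSH) Bell scenario: minimize the input-averaged L1 distance between a given behavior and convex mixtures of the 16 local deterministic strategies. The normalization choice and the Popescu-Rohrlich test box below are illustrative assumptions, not the paper's exact conventions.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def deterministic_behaviors():
    """The 16 local deterministic behaviors p(a, b | x, y) of the CHSH scenario."""
    behaviors = []
    for fa in itertools.product((0, 1), repeat=2):      # a = fa[x]
        for fb in itertools.product((0, 1), repeat=2):  # b = fb[y]
            d = np.zeros((2, 2, 2, 2))                  # indexed [x, y, a, b]
            for x, y in itertools.product((0, 1), repeat=2):
                d[x, y, fa[x], fb[y]] = 1.0
            behaviors.append(d.ravel())
    return np.array(behaviors).T                        # shape (16 entries, 16 strategies)

def trace_distance_to_local(p):
    """Minimal input-averaged trace distance from behavior p to the local set.

    Linear program: variables are the weights q over deterministic strategies
    and slacks t >= |p - Dq| entrywise; minimize (1/2)(1/4) * sum(t).
    """
    D = deterministic_behaviors()
    n = D.shape[0]
    c = np.concatenate([np.zeros(16), np.full(n, 1.0 / 8.0)])
    A_ub = np.block([[-D, -np.eye(n)], [D, -np.eye(n)]])
    b_ub = np.concatenate([-p, p])
    A_eq = np.concatenate([np.ones(16), np.zeros(n)]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], method="highs")
    return res.fun

# Popescu-Rohrlich box: a XOR b = x AND y, maximally nonlocal.
pr = np.zeros((2, 2, 2, 2))
for x, y, a, b in itertools.product((0, 1), repeat=4):
    if (a ^ b) == (x & y):
        pr[x, y, a, b] = 0.5
print("distance of PR box to local set:", trace_distance_to_local(pr.ravel()))
```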
NASA Astrophysics Data System (ADS)
Hyun, J. Y.; Yang, Y. C. E.; Tidwell, V. C.; Macknick, J.
2017-12-01
Modeling human behaviors and decisions in water resources management is challenging because of its complexity and its uncertain characteristics, which are affected by both internal factors (such as a stakeholder's beliefs about external information) and external factors (such as future policies and weather/climate forecasts). Stakeholders' decisions about how much water they need are usually not entirely rational in real-world cases, so they are not well suited to a centralized (top-down) modeling approach that assumes everyone in a watershed follows the same order or pursues the same objective. Agent-based modeling (ABM) uses a decentralized (bottom-up) approach that allows each stakeholder to make his/her own decision based on his/her own objective and beliefs about the information acquired. In this study, we develop an ABM which incorporates the psychological human decision process through the theory of risk perception. The theory of risk perception quantifies uncertainties in human behaviors and decisions using two sequential methodologies: Bayesian inference and the cost-loss problem. The developed ABM is coupled with a regulation-based water system model, RiverWare (RW), to evaluate different human decision uncertainties in water resources management. The San Juan River Basin in New Mexico is chosen as the case study area, and we define 19 major irrigation districts as water-use agents whose primary decision is the irrigated area on an annual basis. This decision is affected by three external factors: 1) the upstream precipitation forecast (potential amount of water availability), 2) violation of the downstream minimum flow (required to support ecosystems), and 3) enforcement of a shortage-sharing plan (a policy currently undertaken in the region for drought years). Three beliefs (internal factors) that correspond to these three external factors are also considered in the modeling framework. The objective of this study is to use the two-way coupling between the ABM and RW to mimic how stakeholders' uncertain decisions, made through the theory of risk perception, affect local and basin-wide water uses.
Searching Choices: Quantifying Decision-Making Processes Using Search Engine Data.
Moat, Helen Susannah; Olivola, Christopher Y; Chater, Nick; Preis, Tobias
2016-07-01
When making a decision, humans consider two types of information: information they have acquired through their prior experience of the world, and further information they gather to support the decision in question. Here, we present evidence that data from search engines such as Google can help us model both sources of information. We show that statistics from search engines on the frequency of content on the Internet can help us estimate the statistical structure of prior experience; and, specifically, we outline how such statistics can inform psychological theories concerning the valuation of human lives, or choices involving delayed outcomes. Turning to information gathering, we show that search query data might help measure human information gathering, and it may predict subsequent decisions. Such data enable us to compare information gathered across nations, where analyses suggest, for example, a greater focus on the future in countries with a higher per capita GDP. We conclude that search engine data constitute a valuable new resource for cognitive scientists, offering a fascinating new tool for understanding the human decision-making process. Copyright © 2016 The Authors. Topics in Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
When can social media lead financial markets?
Zheludev, Ilya; Smith, Robert; Aste, Tomaso
2014-02-27
Social media analytics is showing promise for the prediction of financial markets. However, the true value of such data for trading is unclear due to a lack of consensus on which instruments can be predicted and how. Current approaches are based on the evaluation of message volumes and are typically assessed via retrospective (ex-post facto) evaluation of trading strategy returns. In this paper, we present instead a sentiment analysis methodology to quantify and statistically validate which assets could qualify for trading from social media analytics in an ex-ante configuration. We use sentiment analysis techniques and Information Theory measures to demonstrate that social media message sentiment can contain statistically-significant ex-ante information on the future prices of the S&P500 index and a limited set of stocks, in excess of what is achievable using solely message volumes.
A Review of Hydrazine Sensors: The State of the Art
NASA Technical Reports Server (NTRS)
Meneghelli, B. J.
2004-01-01
Several types of sensors have been developed over the past few years that quantify the vapor concentrations of the hydrazines. These sensors are able to detect concentrations as low as 10 parts per billion (ppb) up to several parts per million (ppm). The scope of this review will be focused on those sensors that are most current in the marketplace as either leak detectors or personnel monitors. Some technical information on the theory of operation of each hydrazine detector will also be included. The review will highlight current operations that utilize hydrazine sensors, including the Kennedy Space Center (KSC), the United States Air Force (USAF) at Cape Canaveral Air Station (CCAS), and USAF F-16 facilities. The orientation of the review will be towards giving users usable practical information on hydrazine sensors.
From theory to experimental design-Quantifying a trait-based theory of predator-prey dynamics.
Laubmeier, A N; Wootton, Kate; Banks, J E; Bommarco, Riccardo; Curtsdotter, Alva; Jonsson, Tomas; Roslin, Tomas; Banks, H T
2018-01-01
Successfully applying theoretical models to natural communities and predicting ecosystem behavior under changing conditions is the backbone of predictive ecology. However, the experiments required to test these models are dictated by practical constraints, and models are often opportunistically validated against data for which they were never intended. Alternatively, we can inform and improve experimental design by an in-depth pre-experimental analysis of the model, generating experiments better targeted at testing the validity of a theory. Here, we describe this process for a specific experiment. Starting from food web ecological theory, we formulate a model and design an experiment to optimally test the validity of the theory, supplementing traditional design considerations with model analysis. The experiment itself will be run and described in a separate paper. The theory we test is that trophic population dynamics are dictated by species traits, and we study this in a community of terrestrial arthropods. We depart from the Allometric Trophic Network (ATN) model and hypothesize that including habitat use, in addition to body mass, is necessary to better model trophic interactions. We therefore formulate new terms which account for micro-habitat use as well as intra- and interspecific interference in the ATN model. We design an experiment and an effective sampling regime to test this model and the underlying assumptions about the traits dominating trophic interactions. We arrive at a detailed sampling protocol to maximize information content in the empirical data obtained from the experiment and, relying on theoretical analysis of the proposed model, explore potential shortcomings of our design. Consequently, since this is a "pre-experimental" exercise aimed at improving the links between hypothesis formulation, model construction, experimental design and data collection, we hasten to publish our findings before analyzing data from the actual experiment, thus setting the stage for strong inference.
Giordano, Bruno L.; Kayser, Christoph; Rousselet, Guillaume A.; Gross, Joachim; Schyns, Philippe G.
2016-01-01
We begin by reviewing the statistical framework of information theory as applicable to neuroimaging data analysis. A major factor hindering wider adoption of this framework in neuroimaging is the difficulty of estimating information theoretic quantities in practice. We present a novel estimation technique that combines the statistical theory of copulas with the closed form solution for the entropy of Gaussian variables. This results in a general, computationally efficient, flexible, and robust multivariate statistical framework that provides effect sizes on a common meaningful scale, allows for unified treatment of discrete, continuous, unidimensional and multidimensional variables, and enables direct comparisons of representations from behavioral and brain responses across any recording modality. We validate the use of this estimate as a statistical test within a neuroimaging context, considering both discrete stimulus classes and continuous stimulus features. We also present examples of analyses facilitated by these developments, including application of multivariate analyses to MEG planar magnetic field gradients, and pairwise temporal interactions in evoked EEG responses. We show the benefit of considering the instantaneous temporal derivative together with the raw values of M/EEG signals as a multivariate response, how we can separately quantify modulations of amplitude and direction for vector quantities, and how we can measure the emergence of novel information over time in evoked responses. Open-source Matlab and Python code implementing the new methods accompanies this article. Hum Brain Mapp 38:1541–1573, 2017. © 2016 Wiley Periodicals, Inc. PMID:27860095
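A stripped-down version of the Gaussian-copula idea is easy to state: rank-transform each variable to a standard normal and read the mutual information off the covariance of the transformed data. The sketch below does exactly that for two one-dimensional signals; the bias corrections and multivariate extensions described in the paper are omitted.

```python
import numpy as np
from scipy.stats import norm, rankdata

def copula_normalize(x):
    """Map each variable to a standard normal via its empirical CDF (ranks)."""
    x = np.atleast_2d(x)
    r = np.apply_along_axis(rankdata, -1, x)
    return norm.ppf(r / (x.shape[-1] + 1))

def gaussian_copula_mi(x, y):
    """Lower-bound estimate of the mutual information I(X; Y) in bits,
    obtained from the Gaussian entropy of the copula-transformed variables."""
    gx, gy = copula_normalize(x), copula_normalize(y)
    z = np.vstack([gx, gy])
    cov = np.cov(z)
    kx = gx.shape[0]
    det = np.linalg.det
    mi_nats = 0.5 * np.log(det(cov[:kx, :kx]) * det(cov[kx:, kx:]) / det(cov))
    return mi_nats / np.log(2)

# Example: a noisy linear relationship versus an independent control.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y_dep = 0.7 * x + 0.7 * rng.standard_normal(2000)
y_ind = rng.standard_normal(2000)
print("MI(x, dependent y)  :", gaussian_copula_mi(x, y_dep))
print("MI(x, independent y):", gaussian_copula_mi(x, y_ind))
```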
ERIC Educational Resources Information Center
Cowan, Logan T.; Van Wagenen, Sarah A.; Brown, Brittany A.; Hedin, Riley J.; Seino-Stephan, Yukiko; Hall, P. Cougar; West, Joshua H.
2013-01-01
Objective. To quantify the presence of health behavior theory constructs in iPhone apps targeting physical activity. Methods. This study used a content analysis of 127 apps from Apple's (App Store) "Health & Fitness" category. Coders downloaded the apps and then used an established theory-based instrument to rate each app's inclusion of…
Energy systems theory provides a theoretical basis for defining, measuring, and interpreting the concepts of ecological integrity and ecosystem health. Ecological integrity is defined as an emergent property of ecosystems operating at maximum power that can be quantified using va...
Ecological change points: The strength of density dependence and the loss of history.
Ponciano, José M; Taper, Mark L; Dennis, Brian
2018-05-01
Change points in the dynamics of animal abundances have extensively been recorded in historical time series records. Little attention has been paid to the theoretical dynamic consequences of such change-points. Here we propose a change-point model of stochastic population dynamics. This investigation embodies a shift of attention from the problem of detecting when a change will occur, to another non-trivial puzzle: using ecological theory to understand and predict the post-breakpoint behavior of the population dynamics. The proposed model and the explicit expressions derived here predict and quantify how density dependence modulates the influence of the pre-breakpoint parameters into the post-breakpoint dynamics. Time series transitioning from one stationary distribution to another contain information about where the process was before the change-point, where is it heading and how long it will take to transition, and here this information is explicitly stated. Importantly, our results provide a direct connection of the strength of density dependence with theoretical properties of dynamic systems, such as the concept of resilience. Finally, we illustrate how to harness such information through maximum likelihood estimation for state-space models, and test the model robustness to widely different forms of compensatory dynamics. The model can be used to estimate important quantities in the theory and practice of population recovery. Copyright © 2018 Elsevier Inc. All rights reserved.
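To illustrate the point about density dependence modulating how pre-breakpoint history carries into post-breakpoint dynamics, the sketch below simulates a log-scale stochastic Gompertz model whose growth parameter jumps at a change point; all parameter values are invented for illustration.

```python
import numpy as np

def simulate_gompertz(n, a, b, sigma, x0, change_at=None, new_a=None):
    """Stochastic Gompertz model on the log scale: x[t+1] = a + b*x[t] + noise.

    `b` measures the strength of density dependence (|b| < 1 gives a
    stationary distribution); at `change_at` the growth parameter jumps to
    `new_a`, and the process transitions to a new stationary mean a/(1-b).
    """
    x = np.empty(n)
    x[0] = x0
    rng = np.random.default_rng(42)
    for t in range(1, n):
        a_t = new_a if (change_at is not None and t >= change_at) else a
        x[t] = a_t + b * x[t - 1] + sigma * rng.standard_normal()
    return x

# Stronger density dependence (smaller |b|) erases the pre-break history faster.
for b in (0.3, 0.9):
    x = simulate_gompertz(200, a=1.0, b=b, sigma=0.1, x0=1.0 / (1 - b),
                          change_at=100, new_a=0.5)
    half_life = np.log(0.5) / np.log(b)   # time to lose half the initial offset
    print(f"b = {b}: new stationary mean = {0.5 / (1 - b):.2f}, "
          f"simulated post-break mean = {x[150:].mean():.2f}, "
          f"transition half-life ~ {half_life:.1f} steps")
```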
A comparison of SAR ATR performance with information theoretic predictions
NASA Astrophysics Data System (ADS)
Blacknell, David
2003-09-01
Performance assessment of automatic target detection and recognition algorithms for SAR systems (or indeed any other sensors) is essential if the military utility of the system / algorithm mix is to be quantified. This is a relatively straightforward task if extensive trials data from an existing system is used. However, a crucial requirement is to assess the potential performance of novel systems as a guide to procurement decisions. This task is no longer straightforward since a hypothetical system cannot provide experimental trials data. QinetiQ has previously developed a theoretical technique for classification algorithm performance assessment based on information theory. The purpose of the study presented here has been to validate this approach. To this end, experimental SAR imagery of targets has been collected using the QinetiQ Enhanced Surveillance Radar to allow algorithm performance assessments as a number of parameters are varied. In particular, performance comparisons can be made for (i) resolutions up to 0.1m, (ii) single channel versus polarimetric (iii) targets in the open versus targets in scrubland and (iv) use versus non-use of camouflage. The change in performance as these parameters are varied has been quantified from the experimental imagery whilst the information theoretic approach has been used to predict the expected variation of performance with parameter value. A comparison of these measured and predicted assessments has revealed the strengths and weaknesses of the theoretical technique as will be discussed in the paper.
Data Assimilation at FLUXNET to Improve Models towards Ecological Forecasting (Invited)
NASA Astrophysics Data System (ADS)
Luo, Y.
2009-12-01
Dramatically increased volumes of data from observational and experimental networks such as FLUXNET call for transformation of ecological research to increase its emphasis on quantitative forecasting. Ecological forecasting will also meet the societal need to develop better strategies for natural resource management in a world of ongoing global change. Traditionally, ecological forecasting has been based on process-based models, informed by data in largely ad hoc ways. Although most ecological models incorporate some representation of mechanistic processes, today’s ecological models are generally not adequate to quantify real-world dynamics and provide reliable forecasts with accompanying estimates of uncertainty. A key tool to improve ecological forecasting is data assimilation, which uses data to inform initial conditions and to help constrain a model during simulation to yield results that approximate reality as closely as possible. In an era with dramatically increased availability of data from observational and experimental networks, data assimilation is a key technique that helps convert the raw data into ecologically meaningful products so as to accelerate our understanding of ecological processes, test ecological theory, forecast changes in ecological services, and better serve the society. This talk will use examples to illustrate how data from FLUXNET have been assimilated with process-based models to improve estimates of model parameters and state variables; to quantify uncertainties in ecological forecasting arising from observations, models and their interactions; and to evaluate information contributions of data and model toward short- and long-term forecasting of ecosystem responses to global change.
Resilience: the viewpoint of modern thermodynamics and information theory
NASA Astrophysics Data System (ADS)
Mazzorana, Bruno
2015-04-01
Understanding, qualifying and quantifying resilience as the system's effective performance and reserve capacity is essential for implementing effective and efficient risk mitigation strategies, in particular if possible synergies between different mitigation alternatives, such as active and passive measures, are to be achieved. Relevant progress has recently been made in explaining the phenomenon of adaptation from the standpoint of physics, thereby delineating the difference, in terms of physical properties, between something that is well adapted to its surrounding environment and something that is not (England, 2013). In this context the specific role of the second law of thermodynamics could be clarified (Schneider and Kay, 1994) and the added value of information theory could be illustrated (Ulanowicz, 2009). According to these findings, ecosystem resilience in response to a disturbance is a balancing act between the system's effective performance and its reserve capacity. By extending this line of argument, the universe of discourse encompassing the concept of resilience of socio-ecological systems impacted by natural hazard processes is enriched by relevant implications derived from fundamental notions of modern thermodynamics and information theory. Metrics developed by Ulanowicz (2009) to gauge ecosystem robustness, in terms of the trade-off between the system's effective performance and its beneficial reserve capacity, are reviewed and their transferability to the natural hazard risk research domain is thoroughly discussed. The derived knowledge can be explored to identify priorities for action towards an increased institutional resilience. References: England, J. L. 2013. Statistical physics of self-replication. J. Chem. Phys., 139, 121923. Schneider, E. D., Kay, J. J. 1994. Life as a manifestation of the second law of thermodynamics. Mathematical and Computer Modelling, Vol. 19, No. 6-8. Ulanowicz, R. E. 2009. Increasing entropy, heat death or perpetual harmonies? Int. J. of Design & Nature and Ecodynamics, Vol. 4, No. 2, 83-96.
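Ulanowicz's trade-off between organized throughput (ascendency) and reserve capacity can be computed from a flow matrix in a few lines, as in the hedged sketch below; the three-compartment network and the robustness proxy shown are illustrative, not taken from the cited works.

```python
import numpy as np

def ascendency_metrics(T):
    """Ulanowicz-style information metrics of a flow network.

    T[i, j] is the flow from compartment i to j. Returns total throughput,
    ascendency A (organized power), development capacity C, and the
    reserve (overhead) C - A that buffers the system against disturbance.
    """
    T = np.asarray(T, dtype=float)
    total = T.sum()
    row, col = T.sum(axis=1), T.sum(axis=0)
    i, j = np.nonzero(T)
    flows = T[i, j]
    A = np.sum(flows * np.log2(flows * total / (row[i] * col[j])))
    C = -np.sum(flows * np.log2(flows / total))
    return total, A, C, C - A

# Hypothetical three-compartment flow network (arbitrary flow units).
flows = np.array([
    [0.0, 8.0, 2.0],
    [1.0, 0.0, 6.0],
    [4.0, 1.0, 0.0],
])
total, A, C, reserve = ascendency_metrics(flows)
a = A / C
print(f"throughput={total:.1f}, ascendency={A:.2f}, capacity={C:.2f}, reserve={reserve:.2f}")
print(f"relative ascendency a={a:.2f}, robustness proxy -a*log(a)={-a * np.log(a):.2f}")
```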
A single-pixel X-ray imager concept and its application to secure radiographic inspections
Gilbert, Andrew J.; Miller, Brian W.; Robinson, Sean M.; ...
2017-07-01
Imaging technology is generally considered too invasive for arms control inspections due to the concern that it cannot properly secure sensitive features of the inspected item. But, this same sensitive information, which could include direct information on the form and function of the items under inspection, could be used for robust arms control inspections. The single-pixel X-ray imager (SPXI) is introduced as a method to make such inspections, capturing the salient spatial information of an object in a secure manner while never forming an actual image. We built this method on the theory of compressive sensing and the single pixel optical camera. The performance of the system is quantified using simulated inspections of simple objects. Measures of the robustness and security of the method are introduced and used to determine how robust and secure such an inspection would be. Particularly, it is found that an inspection with low noise (<1%) and high undersampling (>256×) exhibits high robustness and security.
A single-pixel X-ray imager concept and its application to secure radiographic inspections
NASA Astrophysics Data System (ADS)
Gilbert, Andrew J.; Miller, Brian W.; Robinson, Sean M.; White, Timothy A.; Pitts, William Karl; Jarman, Kenneth D.; Seifert, Allen
2017-07-01
Imaging technology is generally considered too invasive for arms control inspections due to the concern that it cannot properly secure sensitive features of the inspected item. However, this same sensitive information, which could include direct information on the form and function of the items under inspection, could be used for robust arms control inspections. The single-pixel X-ray imager (SPXI) is introduced as a method to make such inspections, capturing the salient spatial information of an object in a secure manner while never forming an actual image. The method is built on the theory of compressive sensing and the single pixel optical camera. The performance of the system is quantified using simulated inspections of simple objects. Measures of the robustness and security of the method are introduced and used to determine how robust and secure such an inspection would be. In particular, it is found that an inspection with low noise (<1%) and high undersampling (>256×) exhibits high robustness and security.
A continuum state variable theory to model the size-dependent surface energy of nanostructures.
Jamshidian, Mostafa; Thamburaja, Prakash; Rabczuk, Timon
2015-10-14
We propose a continuum-based state variable theory to quantify the excess surface free energy density throughout a nanostructure. The size-dependent effect exhibited by nanoplates and spherical nanoparticles, i.e. the reduction of surface energy with decreasing nanostructure size, is well captured by our continuum state variable theory. Our constitutive theory is also able to predict the decreasing energetic difference between the surface and interior (bulk) portions of a nanostructure with decreasing nanostructure size.
A multi-fidelity analysis selection method using a constrained discrete optimization formulation
NASA Astrophysics Data System (ADS)
Stults, Ian C.
The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies. When it is considered, it is done so in only a limited fashion, and therefore brings the validity of selections made based on these results into question. Neglecting model uncertainty can potentially cause costly redesigns of concepts later in the design process or can even cause program cancellation. Rather than neglecting it, if one were to instead not only realize the model uncertainty in tools being used but also use this information to select the tools for a contributing analysis, studies could be conducted more efficiently and trust in results could be quantified. Methods for performing this are generally not rigorous or traceable, and in many cases the improvement and additional time spent performing enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method which will minimize the amount of time spent conducting computer simulations while meeting accuracy and concept resolution requirements for results. In many conceptual design programs, only limited data is available for quantifying model uncertainty. Because of this data sparsity, traditional probabilistic means for quantifying uncertainty should be reconsidered. This research proposes to instead quantify model uncertainty using an evidence theory formulation (also referred to as Dempster-Shafer theory) in lieu of the traditional probabilistic approach. Specific weaknesses in using evidence theory for quantifying model uncertainty are identified and addressed for the purposes of the Fidelity Selection Problem. A series of experiments was conducted to address these weaknesses using n-dimensional optimization test functions. These experiments found that model uncertainty present in analyses with 4 or fewer input variables could be effectively quantified using a strategic distribution creation method; if more than 4 input variables exist, a Frontier Finding Particle Swarm Optimization should instead be used. Once model uncertainty in contributing analysis code choices has been quantified, a selection method is required to determine which of these choices should be used in simulations. Because much of the selection done for engineering problems is driven by the physics of the problem, these are poor candidate problems for testing the true fitness of a candidate selection method. Specifically moderate and high dimensional problems' variability can often be reduced to only a few dimensions and scalability often cannot be easily addressed. For these reasons a simple academic function was created for the uncertainty quantification, and a canonical form of the Fidelity Selection Problem (FSP) was created. Fifteen best- and worst-case scenarios were identified in an effort to challenge the candidate selection methods both with respect to the characteristics of the tradeoff between time cost and model uncertainty and with respect to the stringency of the constraints and problem dimensionality. The results from this experiment show that a Genetic Algorithm (GA) was able to consistently find the correct answer, but under certain circumstances, a discrete form of Particle Swarm Optimization (PSO) was able to find the correct answer more quickly. 
To better illustrate how the uncertainty quantification and discrete optimization might be conducted for a "real world" problem, an illustrative example was conducted using gas turbine engines.
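As a concrete point of reference for the evidence-theory formulation mentioned above, the following minimal Python sketch implements Dempster's rule of combination for two basic mass assignments over a small frame of discernment. The frame ("low"/"med"/"high" error bands) and the mass values are hypothetical illustrations, not the thesis's actual data or its strategic distribution creation method.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule; an empty intersection contributes to conflict."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Two hypothetical evidence sources about a code's error band (low/med/high)
LOW, MED, HIGH = "low", "med", "high"
m1 = {frozenset([LOW]): 0.6, frozenset([LOW, MED]): 0.3, frozenset([LOW, MED, HIGH]): 0.1}
m2 = {frozenset([LOW, MED]): 0.5, frozenset([MED]): 0.2, frozenset([LOW, MED, HIGH]): 0.3}
print(dempster_combine(m1, m2))
```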
DOE Office of Scientific and Technical Information (OSTI.GOV)
Del Pozzo, Walter; Nikhef National Institute for Subatomic Physics, Science Park 105, 1098 XG Amsterdam; Veitch, John
Second-generation interferometric gravitational-wave detectors, such as Advanced LIGO and Advanced Virgo, are expected to begin operation by 2015. Such instruments plan to reach sensitivities that will offer the unique possibility to test general relativity in the dynamical, strong-field regime and investigate departures from its predictions, in particular, using the signal from coalescing binary systems. We introduce a statistical framework based on Bayesian model selection in which the Bayes factor between two competing hypotheses measures which theory is favored by the data. Probability density functions of the model parameters are then used to quantify the inference on individual parameters. We also develop a method to combine the information coming from multiple independent observations of gravitational waves, and show how much stronger inference could be. As an introduction and illustration of this framework (and of a practical numerical implementation through the Monte Carlo integration technique of nested sampling), we apply it to gravitational waves from the inspiral phase of coalescing binary systems as predicted by general relativity and a very simple alternative theory in which the graviton has a nonzero mass. This method can (and should) be extended to more realistic and physically motivated theories.
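The point about combining multiple independent observations can be illustrated with a minimal sketch: for independent events the Bayes factors multiply, so the log Bayes factors simply add. The numerical values below are hypothetical placeholders, not results from the paper.

```python
import numpy as np

# Hypothetical log Bayes factors ln B (general relativity vs. massive-graviton model),
# one per independently observed coalescence event.
log_bayes_factors = np.array([0.4, 1.1, -0.2, 0.8, 0.6])

# For independent events the joint Bayes factor is the product of the
# individual ones, i.e. the log Bayes factors simply add.
cumulative_lnB = np.cumsum(log_bayes_factors)
for n_events, lnB in enumerate(cumulative_lnB, start=1):
    print(f"after {n_events} events: ln B = {lnB:+.2f}, odds = {np.exp(lnB):.2f}")
```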
Analysis of prescription database extracted from standard textbooks of traditional Dai medicine
2012-01-01
Background Traditional Dai Medicine (TDM) is one of the four major ethnomedicines of China. In 2007 a group of experts produced a set of seven Dai medical textbooks on this subject. The first two were selected as the main data source to analyse well-recognized prescriptions. Objective To quantify patterns of prescriptions, common ingredients, indications and usages of TDM. Methods A relational database linking the prescriptions, ingredients, herb names, indications, and usages was set up. Frequencies of combination patterns and common ingredients were tabulated. Results A total of 200 prescriptions and 402 herbs were compiled. Prescriptions based on "wind" disorders, a detoxification theory that most commonly deals with symptoms of digestive system diseases, accounted for over one third of all prescriptions. The major preparation methods mostly used roots and whole herbs. Conclusion The information extracted from the relational database may be useful for understanding symptomatic treatments. Antidote and detoxification theory deserves further research. PMID:22931752
Robust estimation of microbial diversity in theory and in practice
Haegeman, Bart; Hamelin, Jérôme; Moriarty, John; Neal, Peter; Dushoff, Jonathan; Weitz, Joshua S
2013-01-01
Quantifying diversity is of central importance for the study of structure, function and evolution of microbial communities. The estimation of microbial diversity has received renewed attention with the advent of large-scale metagenomic studies. Here, we consider what the diversity observed in a sample tells us about the diversity of the community being sampled. First, we argue that one cannot reliably estimate the absolute and relative number of microbial species present in a community without making unsupported assumptions about species abundance distributions. The reason for this is that sample data do not contain information about the number of rare species in the tail of species abundance distributions. We illustrate the difficulty in comparing species richness estimates by applying Chao's estimator of species richness to a set of in silico communities: they are ranked incorrectly in the presence of large numbers of rare species. Next, we extend our analysis to a general family of diversity metrics (‘Hill diversities'), and construct lower and upper estimates of diversity values consistent with the sample data. The theory generalizes Chao's estimator, which we retrieve as the lower estimate of species richness. We show that Shannon and Simpson diversity can be robustly estimated for the in silico communities. We analyze nine metagenomic data sets from a wide range of environments, and show that our findings are relevant for empirically-sampled communities. Hence, we recommend the use of Shannon and Simpson diversity rather than species richness in efforts to quantify and compare microbial diversity. PMID:23407313
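A minimal sketch of the quantities discussed above: Chao's richness estimator and the Hill diversities of order q (q → 1 recovering the exponential of Shannon entropy, q = 2 the inverse Simpson index), computed for a hypothetical abundance vector. It is not the authors' full lower/upper-estimate construction.

```python
import numpy as np

def chao1(counts):
    """Chao1 lower-bound estimate of species richness from sample counts."""
    counts = np.asarray(counts)
    s_obs = np.sum(counts > 0)
    f1 = np.sum(counts == 1)   # singletons
    f2 = np.sum(counts == 2)   # doubletons
    if f2 > 0:
        return s_obs + f1 * f1 / (2.0 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0   # bias-corrected form when f2 == 0

def hill_diversity(counts, q):
    """Hill diversity of order q (q=0 richness, q->1 exp(Shannon), q=2 inverse Simpson)."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    if np.isclose(q, 1.0):
        return np.exp(-np.sum(p * np.log(p)))
    return np.sum(p ** q) ** (1.0 / (1.0 - q))

abundances = [120, 60, 30, 15, 8, 4, 2, 1, 1, 1]   # hypothetical sample
print("Chao1 richness:", chao1(abundances))
print("Hill q=1 (Shannon):", hill_diversity(abundances, 1))
print("Hill q=2 (Simpson):", hill_diversity(abundances, 2))
```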
The research on user behavior evaluation method for network state
NASA Astrophysics Data System (ADS)
Zhang, Chengyuan; Xu, Haishui
2017-08-01
Based on the correlation between user behavior and the network's running state, this paper proposes a method for evaluating user behavior in terms of network state. Drawing on analysis and evaluation methods from other fields of study, we introduce the theory and tools of data mining. Using the network status information provided by the trusted network view, the user behavior data and the network state data are analysed. Finally, we construct user behavior evaluation indices and weights, and on this basis we can accurately quantify how the specific behaviors of different users influence changes in the network's running state, so as to provide a basis for user behavior control decisions.
Learning and Generalization under Ambiguity: An fMRI Study
Chumbley, J. R.; Flandin, G.; Bach, D. R.; Daunizeau, J.; Fehr, E.; Dolan, R. J.; Friston, K. J.
2012-01-01
Adaptive behavior often exploits generalizations from past experience by applying them judiciously in new situations. This requires a means of quantifying the relative importance of prior experience and current information, so they can be balanced optimally. In this study, we ask whether the brain generalizes in an optimal way. Specifically, we used Bayesian learning theory and fMRI to test whether neuronal responses reflect context-sensitive changes in ambiguity or uncertainty about experience-dependent beliefs. We found that the hippocampus expresses clear ambiguity-dependent responses that are associated with an augmented rate of learning. These findings suggest candidate neuronal systems that may be involved in aberrations of generalization, such as over-confidence. PMID:22275857
Neutron crosstalk between liquid scintillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verbeke, J. M.; Prasad, M. K.; Snyderman, N. J.
2015-05-01
We propose a method to quantify the fractions of neutrons scattering between liquid scintillators. Using a spontaneous fission source, this method can be utilized to quickly characterize an array of liquid scintillators in terms of crosstalk. The point model theory due to Feynman is corrected to account for these multiple scatterings. Using spectral information measured by the liquid scintillators, fractions of multiple scattering can be estimated, and mass reconstruction of fissile materials under investigation can be improved. Monte Carlo simulations of mono-energetic neutron sources were performed to estimate neutron crosstalk. A californium source in an array of liquid scintillators was modeled to illustrate the improvement of the mass reconstruction.
NASA Technical Reports Server (NTRS)
Gedeon, D.; Wood, J. G.
1996-01-01
A number of wire mesh and metal felt test samples, with a range of porosities, yield generic correlations for friction factor, Nusselt number, enhanced axial conduction ratio, and overall heat flux ratio. This information is directed primarily toward Stirling cycle regenerator modelers, but will be of use to anyone seeking to better model fluid flow through these porous materials. Behind these results lies an oscillating-flow test rig, which measures pumping dissipation and thermal energy transport in sample matrices, and several stages of data-reduction software, which correlate instantaneous values for the above dimensionless groups. Within the software, a theoretical model recovers instantaneous quantities from cycle-averaged measurables using standard parameter estimation techniques.
Learning and generalization under ambiguity: an fMRI study.
Chumbley, J R; Flandin, G; Bach, D R; Daunizeau, J; Fehr, E; Dolan, R J; Friston, K J
2012-01-01
Adaptive behavior often exploits generalizations from past experience by applying them judiciously in new situations. This requires a means of quantifying the relative importance of prior experience and current information, so they can be balanced optimally. In this study, we ask whether the brain generalizes in an optimal way. Specifically, we used Bayesian learning theory and fMRI to test whether neuronal responses reflect context-sensitive changes in ambiguity or uncertainty about experience-dependent beliefs. We found that the hippocampus expresses clear ambiguity-dependent responses that are associated with an augmented rate of learning. These findings suggest candidate neuronal systems that may be involved in aberrations of generalization, such as over-confidence.
Structure and information in spatial segregation
2017-01-01
Ethnoracial residential segregation is a complex, multiscalar phenomenon with immense moral and economic costs. Modeling the structure and dynamics of segregation is a pressing problem for sociology and urban planning, but existing methods have limitations. In this paper, we develop a suite of methods, grounded in information theory, for studying the spatial structure of segregation. We first advance existing profile and decomposition methods by posing two related regionalization methods, which allow for profile curves with nonconstant spatial scale and decomposition analysis with nonarbitrary areal units. We then formulate a measure of local spatial scale, which may be used for both detailed, within-city analysis and intercity comparisons. These methods highlight detailed insights in the structure and dynamics of urban segregation that would be otherwise easy to miss or difficult to quantify. They are computationally efficient, applicable to a broad range of study questions, and freely available in open source software. PMID:29078323
Li, Wen-Jie; Zhang, Shi-Huang; Wang, Hui-Min
2011-12-01
Ecosystem services evaluation is a hot topic in current ecosystem management, and has a close link with human welfare. This paper summarizes the research progress on the evaluation of ecosystem services based on geographic information system (GIS) and remote sensing (RS) technology, which can be reduced to the following three characteristics: ecological economics theory is widely applied as a key method in quantifying ecosystem services; GIS and RS technology play a key role in multi-source data acquisition, spatiotemporal analysis, and integrated platforms; and ecosystem mechanism models have become a powerful tool for understanding the relationships between natural phenomena and human activities. Addressing the present research status and its inadequacies, this paper puts forward an "Assembly Line" framework, a distributed framework with scalable characteristics, and discusses the future development trend of integration research on ecosystem services evaluation based on GIS and RS technologies.
Structure and information in spatial segregation.
Chodrow, Philip S
2017-10-31
Ethnoracial residential segregation is a complex, multiscalar phenomenon with immense moral and economic costs. Modeling the structure and dynamics of segregation is a pressing problem for sociology and urban planning, but existing methods have limitations. In this paper, we develop a suite of methods, grounded in information theory, for studying the spatial structure of segregation. We first advance existing profile and decomposition methods by posing two related regionalization methods, which allow for profile curves with nonconstant spatial scale and decomposition analysis with nonarbitrary areal units. We then formulate a measure of local spatial scale, which may be used for both detailed, within-city analysis and intercity comparisons. These methods highlight detailed insights in the structure and dynamics of urban segregation that would be otherwise easy to miss or difficult to quantify. They are computationally efficient, applicable to a broad range of study questions, and freely available in open source software. Published under the PNAS license.
Applying information theory to small groups assessment: emotions and well-being at work.
García-Izquierdo, Antonio León; Moreno, Blanca; García-Izquierdo, Mariano
2010-05-01
This paper explores and analyzes the relations between emotions and well-being in a sample of aviation personnel, namely passenger crew (flight attendants). There is increasing interest in studying the influence of emotions and their role as psychosocial factors in the work environment, as they are able to act as facilitators or shock absorbers. Testing theoretical models with traditional parametric techniques requires a large sample size for efficient estimation of the coefficients that quantify the relations between variables. Since our available sample is small, the size most common in European enterprises, we used the maximum entropy principle to explore the emotions that are involved in psychosocial risks. The analyses show that this method takes advantage of the limited information available and guarantees an optimal estimation, the results of which are coherent with theoretical models and numerous empirical studies on emotions and well-being.
Spatial Uncertainty Modeling of Fuzzy Information in Images for Pattern Classification
Pham, Tuan D.
2014-01-01
The modeling of the spatial distribution of image properties is important for many pattern recognition problems in science and engineering. Mathematical methods are needed to quantify the variability of this spatial distribution based on which a decision of classification can be made in an optimal sense. However, image properties are often subject to uncertainty due to both incomplete and imprecise information. This paper presents an integrated approach for estimating the spatial uncertainty of vagueness in images using the theory of geostatistics and the calculus of probability measures of fuzzy events. Such a model for the quantification of spatial uncertainty is utilized as a new image feature extraction method, based on which classifiers can be trained to perform the task of pattern recognition. Applications of the proposed algorithm to the classification of various types of image data suggest the usefulness of the proposed uncertainty modeling technique for texture feature extraction. PMID:25157744
Integrated Data Analysis for Fusion: A Bayesian Tutorial for Fusion Diagnosticians
NASA Astrophysics Data System (ADS)
Dinklage, Andreas; Dreier, Heiko; Fischer, Rainer; Gori, Silvio; Preuss, Roland; Toussaint, Udo von
2008-03-01
Integrated Data Analysis (IDA) offers a unified way of combining information relevant to fusion experiments. In doing so, IDA addresses typical issues arising in fusion data analysis. In IDA, all information is consistently formulated as probability density functions quantifying uncertainties in the analysis within Bayesian probability theory. For a single diagnostic, IDA allows the identification of faulty measurements and improvements in the setup. For a set of diagnostics, IDA gives joint error distributions allowing the comparison and integration of different diagnostics' results. Validation of physics models can be performed by model comparison techniques. Typical data analysis applications benefit from IDA's capabilities of nonlinear error propagation, the inclusion of systematic effects, and the comparison of different physics models. Applications range from outlier detection and background discrimination to model assessment and the design of diagnostics. In order to cope with next-step fusion device requirements, appropriate techniques are explored for fast analysis applications.
Jenkinson, Garrett; Abante, Jordi; Feinberg, Andrew P; Goutsias, John
2018-03-07
DNA methylation is a stable form of epigenetic memory used by cells to control gene expression. Whole genome bisulfite sequencing (WGBS) has emerged as a gold-standard experimental technique for studying DNA methylation by producing high resolution genome-wide methylation profiles. Statistical modeling and analysis is employed to computationally extract and quantify information from these profiles in an effort to identify regions of the genome that demonstrate crucial or aberrant epigenetic behavior. However, the performance of most currently available methods for methylation analysis is hampered by their inability to directly account for statistical dependencies between neighboring methylation sites, thus ignoring significant information available in WGBS reads. We present a powerful information-theoretic approach for genome-wide modeling and analysis of WGBS data based on the 1D Ising model of statistical physics. This approach takes into account correlations in methylation by utilizing a joint probability model that encapsulates all information available in WGBS methylation reads and produces accurate results even when applied on single WGBS samples with low coverage. Using the Shannon entropy, our approach provides a rigorous quantification of methylation stochasticity in individual WGBS samples genome-wide. Furthermore, it utilizes the Jensen-Shannon distance to evaluate differences in methylation distributions between a test and a reference sample. Differential performance assessment using simulated and real human lung normal/cancer data demonstrate a clear superiority of our approach over DSS, a recently proposed method for WGBS data analysis. Critically, these results demonstrate that marginal methods become statistically invalid when correlations are present in the data. This contribution demonstrates clear benefits and the necessity of modeling joint probability distributions of methylation using the 1D Ising model of statistical physics and of quantifying methylation stochasticity using concepts from information theory. By employing this methodology, substantial improvement of DNA methylation analysis can be achieved by effectively taking into account the massive amount of statistical information available in WGBS data, which is largely ignored by existing methods.
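A minimal sketch of the two information-theoretic quantities named above, Shannon entropy and the Jensen-Shannon distance, applied to hypothetical discrete methylation-level distributions; the full method additionally fits a 1D Ising joint probability model to WGBS reads, which is not reproduced here.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def jensen_shannon_distance(p, q):
    """Square root of the Jensen-Shannon divergence (base 2), a metric in [0, 1]."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    jsd = shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))
    return np.sqrt(jsd)

# Hypothetical distributions of the number of methylated CpGs (0..3) in a region,
# for a test sample and a reference sample.
test      = [0.10, 0.20, 0.30, 0.40]
reference = [0.40, 0.30, 0.20, 0.10]
print("entropy(test) =", shannon_entropy(test), "bits")
print("JS distance   =", jensen_shannon_distance(test, reference))
```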
Hansen, Scott K.; Berkowitz, Brian; Vesselinov, Velimir V.; ...
2016-12-01
Path reversibility and radial symmetry are often assumed in push-pull tracer test analysis. In reality, heterogeneous flow fields mean that both assumptions are idealizations. In this paper, to understand their impact, we perform a parametric study which quantifies the scattering effects of ambient flow, local-scale dispersion, and velocity field heterogeneity on push-pull breakthrough curves and compares them to the effects of mobile-immobile mass transfer (MIMT) processes including sorption and diffusion into secondary porosity. We identify specific circumstances in which MIMT overwhelmingly determines the breakthrough curve, which may then be considered uninformative about drift and local-scale dispersion. Assuming path reversibility, we develop a continuous-time-random-walk-based interpretation framework which is flow-field-agnostic and well suited to quantifying MIMT. Adopting this perspective, we show that the radial flow assumption is often harmless: to the extent that solute paths are reversible, the breakthrough curve is uninformative about velocity field heterogeneity. Our interpretation method determines a mapping function (i.e., subordinator) from travel time in the absence of MIMT to travel time in its presence. A mathematical theory allowing this function to be directly “plugged into” an existing Laplace-domain transport model to incorporate MIMT is presented and demonstrated. Algorithms implementing the calibration are presented and applied to interpretation of data from a push-pull test performed in a heterogeneous environment. A successful four-parameter fit is obtained, of comparable fidelity to one obtained using a million-node 3-D numerical model. In conclusion, we demonstrate analytically and numerically how push-pull tests quantifying MIMT are sensitive to remobilization, but not immobilization, kinetics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Scott K.; Berkowitz, Brian; Vesselinov, Velimir V.
Path reversibility and radial symmetry are often assumed in push-pull tracer test analysis. In reality, heterogeneous flow fields mean that both assumptions are idealizations. In this paper, to understand their impact, we perform a parametric study which quantifies the scattering effects of ambient flow, local-scale dispersion, and velocity field heterogeneity on push-pull breakthrough curves and compares them to the effects of mobile-immobile mass transfer (MIMT) processes including sorption and diffusion into secondary porosity. We identify specific circumstances in which MIMT overwhelmingly determines the breakthrough curve, which may then be considered uninformative about drift and local-scale dispersion. Assuming path reversibility, we develop a continuous-time-random-walk-based interpretation framework which is flow-field-agnostic and well suited to quantifying MIMT. Adopting this perspective, we show that the radial flow assumption is often harmless: to the extent that solute paths are reversible, the breakthrough curve is uninformative about velocity field heterogeneity. Our interpretation method determines a mapping function (i.e., subordinator) from travel time in the absence of MIMT to travel time in its presence. A mathematical theory allowing this function to be directly “plugged into” an existing Laplace-domain transport model to incorporate MIMT is presented and demonstrated. Algorithms implementing the calibration are presented and applied to interpretation of data from a push-pull test performed in a heterogeneous environment. A successful four-parameter fit is obtained, of comparable fidelity to one obtained using a million-node 3-D numerical model. In conclusion, we demonstrate analytically and numerically how push-pull tests quantifying MIMT are sensitive to remobilization, but not immobilization, kinetics.
Memory in Microbes: Quantifying History-Dependent Behavior in a Bacterium
Bischofs, Ilka; Price, Gavin; Keasling, Jay; Arkin, Adam P.
2008-01-01
Memory is usually associated with higher organisms rather than bacteria. However, evidence is mounting that many regulatory networks within bacteria are capable of complex dynamics and multi-stable behaviors that have been linked to memory in other systems. Moreover, it is recognized that bacteria that have experienced different environmental histories may respond differently to current conditions. These “memory” effects may be more than incidental to the regulatory mechanisms controlling acclimation or to the status of the metabolic stores. Rather, they may be regulated by the cell and confer fitness to the organism in the evolutionary game it participates in. Here, we propose that history-dependent behavior is a potentially important manifestation of memory, worth classifying and quantifying. To this end, we develop an information-theory based conceptual framework for measuring both the persistence of memory in microbes and the amount of information about the past encoded in history-dependent dynamics. This method produces a phenomenological measure of cellular memory without regard to the specific cellular mechanisms encoding it. We then apply this framework to a strain of Bacillus subtilis engineered to report on commitment to sporulation and degradative enzyme (AprE) synthesis and estimate the capacity of these systems and growth dynamics to ‘remember’ 10 distinct cell histories prior to application of a common stressor. The analysis suggests that B. subtilis remembers, both in short and long term, aspects of its cell history, and that this memory is distributed differently among the observables. While this study does not examine the mechanistic bases for memory, it presents a framework for quantifying memory in cellular behaviors and is thus a starting point for studying new questions about cellular regulation and evolutionary strategy. PMID:18324309
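A minimal sketch of the kind of quantity such a framework targets: the mutual information, in bits, between a discrete cell-history label and an observed response, estimated from a contingency table. The table below is hypothetical and much smaller than the 10-history experiment described in the paper.

```python
import numpy as np

def mutual_information_bits(joint_counts):
    """Mutual information (bits) between history and response from a contingency table."""
    joint = np.asarray(joint_counts, dtype=float)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal over histories
    py = p.sum(axis=0, keepdims=True)   # marginal over responses
    nz = p > 0
    return np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz]))

# Hypothetical table: rows = 3 nutrient histories, columns = sporulate / not sporulate
counts = np.array([[80, 20],
                   [50, 50],
                   [10, 90]])
print("I(history; response) =", mutual_information_bits(counts), "bits")
```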
Random Matrix Theory and Econophysics
NASA Astrophysics Data System (ADS)
Rosenow, Bernd
2000-03-01
Random Matrix Theory (RMT) [1] is used in many branches of physics as a ``zero information hypothesis''. It describes the generic behavior of different classes of systems, while deviations from its universal predictions allow one to identify system-specific properties. We use methods of RMT to analyze the cross-correlation matrix C of stock price changes [2] of the largest 1000 US companies. In addition to its scientific interest, the study of correlations between the returns of different stocks is also of practical relevance in quantifying the risk of a given stock portfolio. We find [3,4] that the statistics of most of the eigenvalues of the spectrum of C agree with the predictions of RMT, while there are deviations for some of the largest eigenvalues. We interpret these deviations as a system-specific property, e.g. containing genuine information about correlations in the stock market. We demonstrate that C shares universal properties with the Gaussian orthogonal ensemble of random matrices. Furthermore, we analyze the eigenvectors of C through their inverse participation ratio and find eigenvectors with large ratios at both edges of the eigenvalue spectrum - a situation reminiscent of localization theory results. This work was done in collaboration with V. Plerou, P. Gopikrishnan, T. Guhr, L.A.N. Amaral, and H. E. Stanley and is related to recent work of Laloux et al. 1. T. Guhr, A. Müller Groeling, and H.A. Weidenmüller, ``Random Matrix Theories in Quantum Physics: Common Concepts'', Phys. Rep. 299, 190 (1998). 2. See, e.g. R.N. Mantegna and H.E. Stanley, Econophysics: Correlations and Complexity in Finance (Cambridge University Press, Cambridge, England, 1999). 3. V. Plerou, P. Gopikrishnan, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Universal and Nonuniversal Properties of Cross Correlations in Financial Time Series'', Phys. Rev. Lett. 83, 1471 (1999). 4. V. Plerou, P. Gopikrishnan, T. Guhr, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Random Matrix Theory Analysis of Diffusion in Stock Price Dynamics'', preprint.
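A minimal sketch of the two diagnostics described above: comparing the eigenvalue spectrum of an empirical correlation matrix against the Marchenko-Pastur bulk predicted by RMT, and computing the inverse participation ratio of the eigenvectors. Pure Gaussian noise is used as a stand-in for return data, so all eigenvalues should fall inside the RMT bulk; real market data would show deviations at the edges.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 1000                       # hypothetical: 200 "stocks", 1000 return observations
returns = rng.standard_normal((T, N))  # pure-noise benchmark

C = np.corrcoef(returns, rowvar=False)          # N x N correlation matrix
eigvals, eigvecs = np.linalg.eigh(C)            # ascending eigenvalues

# Marchenko-Pastur bounds for a random correlation matrix with Q = T/N
Q = T / N
lam_min = (1 - np.sqrt(1 / Q)) ** 2
lam_max = (1 + np.sqrt(1 / Q)) ** 2
print(f"RMT bulk: [{lam_min:.3f}, {lam_max:.3f}]; "
      f"largest empirical eigenvalue: {eigvals[-1]:.3f}")

# Inverse participation ratio: ~1/N for extended eigenvectors, O(1) for localized ones
ipr = np.sum(eigvecs ** 4, axis=0)
print("IPR of largest-eigenvalue eigenvector:", ipr[-1], " (1/N =", 1 / N, ")")
```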
ERIC Educational Resources Information Center
Hanson, Janet; Bangert, Arthur; Ruff, William
2016-01-01
According to school growth mindset theory, a school's organizational structure influences teachers' beliefs in their collective ability to help all students grow and learn, including those from diverse cultural, religious, identity, and socioeconomic demographics. The implicit theory of growth mindset has been quantified for a school's culture on…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Zhange; Higa, Kenneth; Han, Kee Sung
The presence of lithium hexafluorophosphate (LiPF6) ion pairs in carbonate-based electrolyte solutions is widely accepted in the field of battery electrolyte research and is expected to affect solution transport properties. No existing techniques are capable of directly quantifying salt dissociation in these solutions. Previous publications by others have provided estimates of dissociation degrees using dilute solution theory and pulsed field gradient nuclear magnetic resonance spectroscopy (PFG-NMR) measurements of self-diffusivity. However, the behavior of a concentrated electrolyte solution can deviate significantly from dilute solution theory predictions. This paper, for the first time, instead uses Onsager–Stefan–Maxwell concentrated solution theory and the generalized Darken relation with PFG-NMR measurements to quantify the degrees of dissociation in electrolyte solutions (LiPF6 in ethylene carbonate/diethyl carbonate, 1:1 by weight). At LiPF6 concentrations ranging from 0.1 M to 1.5 M, the salt dissociation degree is found to range from 61% to 37%. Finally, transport properties are then calculated through concentrated solution theory with corrections for these significant levels of ion pairing.
Evaluating Transport Properties and Ionic Dissociation of LiPF6 in Concentrated Electrolyte
Feng, Zhange; Higa, Kenneth; Han, Kee Sung; ...
2017-08-17
The presence of lithium hexafluorophosphate (LiPF6) ion pairs in carbonate-based electrolyte solutions is widely accepted in the field of battery electrolyte research and is expected to affect solution transport properties. No existing techniques are capable of directly quantifying salt dissociation in these solutions. Previous publications by others have provided estimates of dissociation degrees using dilute solution theory and pulsed field gradient nuclear magnetic resonance spectroscopy (PFG-NMR) measurements of self-diffusivity. However, the behavior of a concentrated electrolyte solution can deviate significantly from dilute solution theory predictions. This paper, for the first time, instead uses Onsager–Stefan–Maxwell concentrated solution theory and the generalized Darken relation with PFG-NMR measurements to quantify the degrees of dissociation in electrolyte solutions (LiPF6 in ethylene carbonate/diethyl carbonate, 1:1 by weight). At LiPF6 concentrations ranging from 0.1 M to 1.5 M, the salt dissociation degree is found to range from 61% to 37%. Finally, transport properties are then calculated through concentrated solution theory with corrections for these significant levels of ion pairing.
Evaluating Transport Properties and Ionic Dissociation of LiPF6 in Concentrated Electrolyte
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Zhange; Higa, Kenneth; Han, Kee Sung
2017-01-01
The presence of lithium hexafluorophosphate (LiPF6) ion pairs in carbonate-based electrolyte solutions is widely accepted in the field of battery electrolyte research and is expected to affect solution transport properties. No existing techniques are capable of directly quantifying salt dissociation in these solutions. Previous publications by others have provided estimates of dissociation degrees using dilute solution theory and pulsed field gradient nuclear magnetic resonance spectroscopy (PFG-NMR) measurements of self-diffusivity. However, the behavior of a concentrated electrolyte solution can deviate significantly from dilute solution theory predictions. This work, for the first time, instead uses Onsager–Stefan–Maxwell concentrated solution theory and the generalized Darken relation with PFG-NMR measurements to quantify the degrees of dissociation in electrolyte solutions (LiPF6 in ethylene carbonate/diethyl carbonate, 1:1 by weight). At LiPF6 concentrations ranging from 0.1 M to 1.5 M, the salt dissociation degree is found to range from 61% to 37%. Transport properties are then calculated through concentrated solution theory with corrections for these significant levels of ion pairing.
Evaluation of research in biomedical ontologies
Dumontier, Michel; Gkoutos, Georgios V.
2013-01-01
Ontologies are now pervasive in biomedicine, where they serve as a means to standardize terminology, to enable access to domain knowledge, to verify data consistency and to facilitate integrative analyses over heterogeneous biomedical data. For this purpose, research on biomedical ontologies applies theories and methods from diverse disciplines such as information management, knowledge representation, cognitive science, linguistics and philosophy. Depending on the application in which ontologies are being used, the evaluation of research in biomedical ontologies must follow different strategies. Here, we provide a classification of research problems in which ontologies are being applied, focusing on the use of ontologies in basic and translational research, and we demonstrate how research results in biomedical ontologies can be evaluated. The evaluation strategies depend on the desired application and measure the success of using an ontology for a particular biomedical problem. For many applications, the success can be quantified, thereby facilitating the objective evaluation and comparison of research in biomedical ontology. The objective, quantifiable comparison of research results based on scientific applications opens up the possibility for systematically improving the utility of ontologies in biomedical research. PMID:22962340
Neural basis for generalized quantifier comprehension.
McMillan, Corey T; Clark, Robin; Moore, Peachie; Devita, Christian; Grossman, Murray
2005-01-01
Generalized quantifiers like "all cars" are semantically well understood, yet we know little about their neural representation. Our model of quantifier processing includes a numerosity device, operations that combine number elements and working memory. Semantic theory posits two types of quantifiers: first-order quantifiers identify a number state (e.g. "at least 3") and higher-order quantifiers additionally require maintaining a number state actively in working memory for comparison with another state (e.g. "less than half"). We used BOLD fMRI to test the hypothesis that all quantifiers recruit inferior parietal cortex associated with numerosity, while only higher-order quantifiers recruit prefrontal cortex associated with executive resources like working memory. Our findings showed that first-order and higher-order quantifiers both recruit right inferior parietal cortex, suggesting that a numerosity component contributes to quantifier comprehension. Moreover, only probes of higher-order quantifiers recruited right dorsolateral prefrontal cortex, suggesting involvement of executive resources like working memory. We also observed activation of thalamus and anterior cingulate that may be associated with selective attention. Our findings are consistent with a large-scale neural network centered in frontal and parietal cortex that supports comprehension of generalized quantifiers.
Global financial indices and twitter sentiment: A random matrix theory approach
NASA Astrophysics Data System (ADS)
García, A.
2016-11-01
We use a Random Matrix Theory (RMT) approach to analyze the correlation matrix structure of a collection of public tweets and the corresponding return time series associated with 20 global financial indices along 7 trading months of 2014. In order to quantify the collection of tweets, we constructed daily polarity time series from public tweets via sentiment analysis. The results from the RMT analysis support the existence of true correlations between financial indices, polarities, and the mixture of them. Moreover, we found a good agreement between the temporal behavior of the extreme eigenvalues of both empirical data sets, and similar results were found when computing the inverse participation ratio, which provides evidence of the emergence of common factors in global financial information whether we use the return or polarity data as a source. In addition, we found a very strong indication that polarity Granger-causes returns of an Indonesian index for a long range of lag trading days, whereas for Israel, South Korea, Australia, and Japan, the predictive information of returns is also present but with a weaker indication. Our results suggest that incorporating polarity as a financial indicator may open up new insights to understand the collective and even individual behavior of global financial indices.
Assessment of Uncertainties Related to Seismic Hazard Using Fuzzy Analysis
NASA Astrophysics Data System (ADS)
Jorjiashvili, N.; Yokoi, T.; Javakhishvili, Z.
2013-05-01
Seismic hazard analysis has become a very important issue in the last few decades. Recently, improved technologies and newly available data have helped many scientists to understand where and why earthquakes happen, the physics of earthquakes, etc. They have begun to understand the role of uncertainty in seismic hazard analysis. However, there is still a significant problem of how to handle existing uncertainty. The same lack of information makes it difficult to quantify uncertainty accurately. Usually attenuation curves are obtained in a statistical way: regression analysis. Statistical and probabilistic analyses show overlapping results for the site coefficients. This overlapping takes place not only at the border between two neighboring classes, but also among more than three classes. Although the analysis starts from classifying sites using geological terms, these site coefficients are not classified at all. In the present study, this problem is addressed using fuzzy set theory. Using membership functions, the ambiguities at the border between neighboring classes can be avoided. Fuzzy set theory is applied to southern California in the conventional way. In this study, the standard deviations that show variations between each site class obtained by fuzzy set theory and by the classical approach are compared. The results of this analysis show that, when data are insufficient for hazard assessment, site classification based on fuzzy set theory yields smaller standard deviations than the classical approach, which is direct evidence of reduced uncertainty.
Garcia-Ramos, Camille; Lin, Jack J; Kellermann, Tanja S; Bonilha, Leonardo; Prabhakaran, Vivek; Hermann, Bruce P
2016-01-01
The recent revision of the classification of the epilepsies released by the ILAE Commission on Classification and Terminology (2005–2009) has been a major development in the field. Papers in this section of the special issue were charged with examining the relevance of other techniques and approaches to examining, categorizing and classifying cognitive and behavioral comorbidities. In that light, we investigate the applicability of graph theory to understanding the impact of epilepsy on cognition compared to controls, and then the patterns of cognitive development in normally developing children, which would set the stage for prospective comparisons of children with epilepsy and controls. The overall goal is to examine the potential utility of other analytic tools and approaches to conceptualize the cognitive comorbidities in epilepsy. Given that the major cognitive domains representing cognitive function are interdependent, the associations between the neuropsychological abilities underlying these domains can be referred to as a cognitive network. Therefore, the architecture of this cognitive network can be quantified and assessed using graph theory methods, rendering a novel approach to the characterization of cognitive status. In this article we provide fundamental information about graph theory procedures, followed by application of these techniques to a cross-sectional analysis of neuropsychological data in children with epilepsy compared to controls, and conclude with a prospective analysis of neuropsychological development in younger and older healthy controls. PMID:27017326
Information gains from cosmological probes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grandis, S.; Seehars, S.; Refregier, A.
In light of the growing number of cosmological observations, it is important to develop versatile tools to quantify the constraining power and consistency of cosmological probes. Originally motivated from information theory, we use the relative entropy to compute the information gained by Bayesian updates in units of bits. This measure quantifies both the improvement in precision and the 'surprise', i.e. the tension arising from shifts in central values. Our starting point is a WMAP9 prior which we update with observations of the distance ladder, supernovae (SNe), baryon acoustic oscillations (BAO), and weak lensing as well as the 2015 Planck release. We consider the parameters of the flat ΛCDM concordance model and some of its extensions which include curvature and Dark Energy equation of state parameter w. We find that, relative to WMAP9 and within these model spaces, the probes that have provided the greatest gains are Planck (10 bits), followed by BAO surveys (5.1 bits) and SNe experiments (3.1 bits). The other cosmological probes, including weak lensing (1.7 bits) and H0 measures (1.7 bits), have contributed information but at a lower level. Furthermore, we do not find any significant surprise when updating the constraints of WMAP9 with any of the other experiments, meaning that they are consistent with WMAP9. However, when we choose Planck15 as the prior, we find that, accounting for the full multi-dimensionality of the parameter space, the weak lensing measurements of CFHTLenS produce a large surprise of 4.4 bits which is statistically significant at the 8σ level. We discuss how the relative entropy provides a versatile and robust framework to compare cosmological probes in the context of current and future surveys.
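A minimal sketch of the underlying computation: the relative entropy (Kullback-Leibler divergence) between two multivariate Gaussian approximations of a posterior and a prior, expressed in bits. The two-parameter means and covariances below are hypothetical placeholders, not the WMAP9/Planck constraints used in the paper.

```python
import numpy as np

def gaussian_relative_entropy_bits(mu_post, cov_post, mu_prior, cov_prior):
    """D_KL(posterior || prior) for two multivariate Gaussians, in bits."""
    k = len(mu_post)
    inv_prior = np.linalg.inv(cov_prior)
    diff = np.asarray(mu_post) - np.asarray(mu_prior)
    kl_nats = 0.5 * (np.trace(inv_prior @ cov_post)
                     + diff @ inv_prior @ diff
                     - k
                     + np.log(np.linalg.det(cov_prior) / np.linalg.det(cov_post)))
    return kl_nats / np.log(2)

# Hypothetical 2-parameter example: the update shrinks the errors and
# shifts the central values slightly.
mu_prior, cov_prior = np.array([0.30, 0.85]), np.diag([0.02**2, 0.04**2])
mu_post,  cov_post  = np.array([0.31, 0.83]), np.diag([0.01**2, 0.02**2])
print("information gain:",
      gaussian_relative_entropy_bits(mu_post, cov_post, mu_prior, cov_prior), "bits")
```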
NASA Astrophysics Data System (ADS)
Thelen, Brian J.; Xique, Ismael J.; Burns, Joseph W.; Goley, G. Steven; Nolan, Adam R.; Benson, Jonathan W.
2017-04-01
In Bayesian decision theory, there has been a great amount of research into theoretical frameworks and information-theoretic quantities that can be used to provide lower and upper bounds for the Bayes error. These include well-known bounds such as Chernoff, Bhattacharyya, and J-divergence. Part of the challenge of utilizing these various metrics in practice is (i) whether they are "loose" or "tight" bounds, (ii) how they might be estimated via either parametric or non-parametric methods, and (iii) how accurate the estimates are for limited amounts of data. In general what is desired is a methodology for generating relatively tight lower and upper bounds, and then an approach to estimate these bounds efficiently from data. In this paper, we explore the so-called triangle divergence, which has been around for a while but was recently made more prominent in research on non-parametric estimation of information metrics. Part of this work is motivated by applications for quantifying fundamental information content in SAR/LIDAR data, and to help in this, we have developed a flexible multivariate modeling framework based on multivariate Gaussian copula models which can be combined with the triangle divergence framework to quantify this information and provide approximate bounds on Bayes error. In this paper we present an overview of the bounds, including those based on triangle divergence, and verify that under a number of multivariate models, the upper and lower bounds derived from triangle divergence are significantly tighter than the other common bounds, and oftentimes dramatically so. We also propose some simple but effective means for computing the triangle divergence using Monte Carlo methods, and then discuss estimation of the triangle divergence from empirical data based on Gaussian copula models.
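A minimal sketch of the Monte Carlo route mentioned above for the triangle divergence, T(p, q) = ∫ (p − q)²/(p + q) dx, estimated by sampling from the equal-weight mixture of the two densities. The two Gaussian class-conditional densities are hypothetical stand-ins; the paper's Gaussian-copula models and the resulting Bayes-error bounds are not reproduced here.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

# Two hypothetical class-conditional densities (e.g. target vs. clutter features)
p = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.3], [0.3, 1.0]])
q = multivariate_normal(mean=[1.0, 0.5], cov=[[1.2, 0.0], [0.0, 0.8]])

def triangle_divergence_mc(p, q, n=200_000):
    """Monte Carlo estimate of T(p,q) = integral of (p-q)^2 / (p+q),
    sampling x from the equal-weight mixture m = (p+q)/2."""
    from_p = rng.random(n) < 0.5
    x = np.where(from_p[:, None],
                 p.rvs(n, random_state=rng),
                 q.rvs(n, random_state=rng))
    fp, fq = p.pdf(x), q.pdf(x)
    # (p-q)^2/(p+q) divided by the mixture density simplifies to 2*((p-q)/(p+q))^2
    return np.mean(2.0 * ((fp - fq) / (fp + fq)) ** 2)

print("triangle divergence ≈", triangle_divergence_mc(p, q))
```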
NASA Astrophysics Data System (ADS)
Yang, Jing; Youssef, Mostafa; Yildiz, Bilge
2018-01-01
In this work, we quantify oxygen self-diffusion in monoclinic-phase zirconium oxide as a function of temperature and oxygen partial pressure. A migration barrier of each type of oxygen defect was obtained by first-principles calculations. Random walk theory was used to quantify the diffusivities of oxygen interstitials by using the calculated migration barriers. Kinetic Monte Carlo simulations were used to calculate diffusivities of oxygen vacancies by distinguishing the threefold- and fourfold-coordinated lattice oxygen. By combining the equilibrium defect concentrations obtained in our previous work together with the herein calculated diffusivity of each defect species, we present the resulting oxygen self-diffusion coefficients and the corresponding atomistically resolved transport mechanisms. The predicted effective migration barriers and diffusion prefactors are in reasonable agreement with the experimentally reported values. This work provides insights into oxygen diffusion engineering in ZrO2-related devices and parametrization for continuum transport modeling.
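A minimal sketch of the random-walk step behind such diffusivities: a thermally activated hop model D = g a² ν exp(−Em/kBT). The barrier, attempt frequency, jump distance, and geometric factor below are illustrative assumptions, not the paper's first-principles values.

```python
import numpy as np

KB_EV = 8.617333262e-5          # Boltzmann constant, eV/K

def hop_diffusivity(barrier_ev, attempt_hz, jump_ang, temperature_k, geometry=1.0 / 6.0):
    """Random-walk diffusivity D = g * a^2 * nu * exp(-Em / kB T) for uncorrelated
    hops of length a (Angstrom) with attempt frequency nu (Hz) and barrier Em (eV)."""
    a_cm = jump_ang * 1e-8      # Angstrom -> cm
    return geometry * a_cm**2 * attempt_hz * np.exp(-barrier_ev / (KB_EV * temperature_k))

# Hypothetical numbers, for illustration only:
for T in (800, 1000, 1200):
    D = hop_diffusivity(barrier_ev=0.9, attempt_hz=1e13, jump_ang=2.8, temperature_k=T)
    print(f"T = {T:4d} K  ->  D ≈ {D:.3e} cm^2/s")
```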
Ince, Robin A A; Giordano, Bruno L; Kayser, Christoph; Rousselet, Guillaume A; Gross, Joachim; Schyns, Philippe G
2017-03-01
We begin by reviewing the statistical framework of information theory as applicable to neuroimaging data analysis. A major factor hindering wider adoption of this framework in neuroimaging is the difficulty of estimating information theoretic quantities in practice. We present a novel estimation technique that combines the statistical theory of copulas with the closed form solution for the entropy of Gaussian variables. This results in a general, computationally efficient, flexible, and robust multivariate statistical framework that provides effect sizes on a common meaningful scale, allows for unified treatment of discrete, continuous, unidimensional and multidimensional variables, and enables direct comparisons of representations from behavioral and brain responses across any recording modality. We validate the use of this estimate as a statistical test within a neuroimaging context, considering both discrete stimulus classes and continuous stimulus features. We also present examples of analyses facilitated by these developments, including application of multivariate analyses to MEG planar magnetic field gradients, and pairwise temporal interactions in evoked EEG responses. We show the benefit of considering the instantaneous temporal derivative together with the raw values of M/EEG signals as a multivariate response, how we can separately quantify modulations of amplitude and direction for vector quantities, and how we can measure the emergence of novel information over time in evoked responses. Open-source Matlab and Python code implementing the new methods accompanies this article. Hum Brain Mapp 38:1541-1573, 2017. © 2016 The Authors. Human Brain Mapping published by Wiley Periodicals, Inc.
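A minimal sketch of the core estimator described above: each variable is copula-normalized (empirical ranks mapped to standard-normal scores) and mutual information is then computed from the Gaussian closed form, giving a lower-bound estimate in bits. The stimulus/response series below are synthetic; the published toolboxes accompanying the article handle the multivariate and discrete cases omitted here.

```python
import numpy as np
from scipy.stats import norm, rankdata

def copnorm(x):
    """Copula-normalization: map empirical ranks to standard-normal scores."""
    return norm.ppf(rankdata(x) / (len(x) + 1))

def gaussian_copula_mi_bits(x, y):
    """Gaussian-copula estimate (a lower bound) of mutual information, in bits."""
    gx, gy = copnorm(x), copnorm(y)
    rho = np.corrcoef(gx, gy)[0, 1]
    return -0.5 * np.log2(1.0 - rho ** 2)

# Hypothetical example: a nonlinearly related stimulus feature and response
rng = np.random.default_rng(2)
stim = rng.standard_normal(2000)
resp = np.tanh(stim) + 0.5 * rng.standard_normal(2000)
print("MI(stim; resp) ≈", gaussian_copula_mi_bits(stim, resp), "bits")
```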
Devine, Sean D
2016-02-01
Replication can be envisaged as a computational process that is able to generate and maintain order far-from-equilibrium. Replication processes, can self-regulate, as the drive to replicate can counter degradation processes that impact on a system. The capability of replicated structures to access high quality energy and eject disorder allows Landauer's principle, in conjunction with Algorithmic Information Theory, to quantify the entropy requirements to maintain a system far-from-equilibrium. Using Landauer's principle, where destabilising processes, operating under the second law of thermodynamics, change the information content or the algorithmic entropy of a system by ΔH bits, replication processes can access order, eject disorder, and counter the change without outside interventions. Both diversity in replicated structures, and the coupling of different replicated systems, increase the ability of the system (or systems) to self-regulate in a changing environment as adaptation processes select those structures that use resources more efficiently. At the level of the structure, as selection processes minimise the information loss, the irreversibility is minimised. While each structure that emerges can be said to be more entropically efficient, as such replicating structures proliferate, the dissipation of the system as a whole is higher than would be the case for inert or simpler structures. While a detailed application to most real systems would be difficult, the approach may well be useful in understanding incremental changes to real systems and provide broad descriptions of system behaviour. Copyright © 2016 The Author. Published by Elsevier Ireland Ltd.. All rights reserved.
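A minimal sketch of the Landauer accounting invoked above: erasing (or ejecting) ΔH bits of disorder costs at least kB T ln 2 joules per bit. The temperature and bit count are arbitrary illustrative values.

```python
import numpy as np

KB = 1.380649e-23        # Boltzmann constant, J/K

def landauer_cost_joules(delta_h_bits, temperature_k=300.0):
    """Minimum free-energy cost (Landauer bound) to erase delta_h_bits of entropy."""
    return delta_h_bits * KB * temperature_k * np.log(2.0)

# Hypothetical: a replicating structure must eject 1e6 bits of disorder per cycle
print(landauer_cost_joules(1e6), "J at 300 K")
```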
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porse, Sean L.; Wade, Sarah; Hovorka, Susan D.
Risk communication literature suggests that for a number of reasons, the public may perceive a risk to be greater than indicated by its statistical probability. Public concern over risk can lead to significant and costly delays in project permitting and operations. Considering these theories, media coverage of CO₂-related well blowouts in 2013 gave rise to the questions: What is the risk of CO₂ well blowouts associated with CCUS through CO₂ EOR? What is the potential public perception of those risks? What information could be used to respond to public concern? To address these questions, this study aims to: 1) provide a framework for understanding the nature of onshore well blowouts, 2) quantify the incidence of such events for three specific geographic regions of Texas, 3) relate this data to CCUS and findings from other studies, and 4) explore the potential implications for public perception of this risk associated with CCUS projects. While quantifying answers to these questions proved to be challenging, the results from this study suggest that (1) the perceived risk of CO₂ well blowouts may exceed the statistical risk and (2) information that could be used to address this gap could be made more readily available to the greater benefit of industry and stakeholders who support the development of CCUS as an option for addressing anthropogenic CO₂ emissions. The study also suggests approaches to best conduct such data inquiries.
NASA Astrophysics Data System (ADS)
Bhaskar, Ankush; Ramesh, Durbha Sai; Vichare, Geeta; Koganti, Triven; Gurubaran, S.
2017-12-01
Identification and quantification of possible drivers of recent global temperature variability remains a challenging task. This important issue is addressed adopting a non-parametric information theory technique, the Transfer Entropy and its normalized variant. It distinctly quantifies actual information exchanged along with the directional flow of information between any two variables with no bearing on their common history or inputs, unlike correlation, mutual information etc. Measurements of greenhouse gases: CO2, CH4 and N2O; volcanic aerosols; solar activity: UV radiation, total solar irradiance (TSI) and cosmic ray flux (CR); El Niño Southern Oscillation (ENSO) and Global Mean Temperature Anomaly (GMTA) made during 1984-2005 are utilized to distinguish driving and responding signals of global temperature variability. Estimates of their relative contributions reveal that CO2 (~24%), CH4 (~19%) and volcanic aerosols (~23%) are the primary contributors to the observed variations in GMTA. While UV (~9%) and ENSO (~12%) act as secondary drivers of variations in the GMTA, the remaining play a marginal role in the observed recent global temperature variability. Interestingly, ENSO and GMTA mutually drive each other at varied time lags. This study assists future modelling efforts in climate science.
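A minimal sketch of a histogram-based transfer entropy estimate at lag 1, TE(Y → X) = Σ p(x′, x, y) log[ p(x′ | x, y) / p(x′ | x) ], applied to a synthetic driver/response pair. The number of bins, the lag, and the series are assumptions; the study's normalized variant is not implemented.

```python
import numpy as np

def transfer_entropy_bits(source, target, n_bins=4):
    """Histogram estimate (bits) of transfer entropy TE(source -> target) at lag 1."""
    def discretize(z):
        edges = np.quantile(z, np.linspace(0, 1, n_bins + 1)[1:-1])
        return np.digitize(z, edges)           # bin labels 0 .. n_bins-1
    y = discretize(np.asarray(source))
    x = discretize(np.asarray(target))
    xt1, xt, yt = x[1:], x[:-1], y[:-1]

    # Joint counts p(x_{t+1}, x_t, y_t); axes: (x', x, y)
    joint = np.zeros((n_bins, n_bins, n_bins))
    np.add.at(joint, (xt1, xt, yt), 1)
    p_xyz = joint / joint.sum()
    p_xy = p_xyz.sum(axis=0, keepdims=True)    # p(x_t, y_t)
    p_xx = p_xyz.sum(axis=2, keepdims=True)    # p(x_{t+1}, x_t)
    p_x = p_xyz.sum(axis=(0, 2), keepdims=True)  # p(x_t)

    nz = p_xyz > 0
    num = p_xyz * p_x
    den = p_xy * p_xx
    return np.sum(p_xyz[nz] * np.log2(num[nz] / den[nz]))

# Hypothetical toy series: "driver" leads "response" by one step plus noise
rng = np.random.default_rng(3)
driver = rng.standard_normal(5000)
response = 0.8 * np.roll(driver, 1) + 0.6 * rng.standard_normal(5000)
print("TE(driver -> response) ≈", transfer_entropy_bits(driver, response), "bits")
print("TE(response -> driver) ≈", transfer_entropy_bits(response, driver), "bits")
```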
Similarity of Symbol Frequency Distributions with Heavy Tails
NASA Astrophysics Data System (ADS)
Gerlach, Martin; Font-Clos, Francesc; Altmann, Eduardo G.
2016-04-01
Quantifying the similarity between symbolic sequences is a traditional problem in information theory which requires comparing the frequencies of symbols in different sequences. In numerous modern applications, ranging from DNA over music to texts, the distribution of symbol frequencies is characterized by heavy-tailed distributions (e.g., Zipf's law). The large number of low-frequency symbols in these distributions poses major difficulties to the estimation of the similarity between sequences; e.g., they hinder an accurate finite-size estimation of entropies. Here, we show analytically how the systematic (bias) and statistical (fluctuations) errors in these estimations depend on the sample size N and on the exponent γ of the heavy-tailed distribution. Our results are valid for the Shannon entropy (α = 1), its corresponding similarity measures (e.g., the Jensen-Shannon divergence), and also for measures based on the generalized entropy of order α. For small α's, including α = 1, the errors decay slower than the 1/N decay observed in short-tailed distributions. For α larger than a critical value α* = 1 + 1/γ ≤ 2, the 1/N decay is recovered. We show the practical significance of our results by quantifying the evolution of the English language over the last two centuries using a complete α spectrum of measures. We find that frequent words change more slowly than less frequent words and that α = 2 provides the most robust measure to quantify language change.
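A minimal simulation sketch of the finite-size effect described above: the bias of the plug-in (maximum-likelihood) Shannon entropy estimator under a Zipf distribution with γ = 1 decays noticeably slower than 1/N. The vocabulary size and sample sizes are illustrative choices, not the paper's corpus.

```python
import numpy as np

rng = np.random.default_rng(4)

def zipf_distribution(n_symbols, gamma):
    """Normalized Zipf (power-law) distribution p_k proportional to k^(-gamma)."""
    w = np.arange(1, n_symbols + 1, dtype=float) ** (-gamma)
    return w / w.sum()

def plugin_entropy_bits(sample, n_symbols):
    """Plug-in (maximum-likelihood) Shannon entropy estimate in bits."""
    counts = np.bincount(sample, minlength=n_symbols)
    freq = counts[counts > 0] / counts.sum()
    return -np.sum(freq * np.log2(freq))

p = zipf_distribution(n_symbols=10_000, gamma=1.0)
true_h = -np.sum(p * np.log2(p))

for n in (10**3, 10**4, 10**5, 10**6):
    est = np.mean([plugin_entropy_bits(rng.choice(len(p), size=n, p=p), len(p))
                   for _ in range(5)])
    print(f"N = {n:>7d}: bias ≈ {est - true_h:+.3f} bits")
```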
Interspecific competition in plants: how well do current methods answer fundamental questions?
Connolly, J; Wayne, P; Bazzaz, F A
2001-02-01
Accurately quantifying and interpreting the processes and outcomes of competition among plants is essential for evaluating theories of plant community organization and evolution. We argue that many current experimental approaches to quantifying competitive interactions introduce size bias, which may significantly impact the quantitative and qualitative conclusions drawn from studies. Size bias generally arises when estimates of competitive ability are erroneously influenced by the initial size of competing individuals. We employ a series of quantitative thought experiments to demonstrate the potential for size bias in analysis of four traditional experimental designs (pairwise, replacement series, additive series, and response surfaces) either when only final measurements are available or when both initial and final measurements are collected. We distinguish three questions relevant to describing competitive interactions: Which species dominates? Which species gains? and How do species affect each other? The choice of experimental design and measurements greatly influences the scope of inference permitted. Conditions under which the latter two questions can give biased information are tabulated. We outline a new approach to characterizing competition that avoids size bias and that improves the concordance between research question and experimental design. The implications of the choice of size metrics used to quantify both the initial state and the responses of elements in interspecific mixtures are discussed. The relevance of size bias in competition studies with organisms other than plants is also discussed.
Beyond DNA: integrating inclusive inheritance into an extended theory of evolution.
Danchin, Étienne; Charmantier, Anne; Champagne, Frances A; Mesoudi, Alex; Pujol, Benoit; Blanchet, Simon
2011-06-17
Many biologists are calling for an 'extended evolutionary synthesis' that would 'modernize the modern synthesis' of evolution. Biological information is typically considered as being transmitted across generations by the DNA sequence alone, but accumulating evidence indicates that both genetic and non-genetic inheritance, and the interactions between them, have important effects on evolutionary outcomes. We review the evidence for such effects of epigenetic, ecological and cultural inheritance and parental effects, and outline methods that quantify the relative contributions of genetic and non-genetic heritability to the transmission of phenotypic variation across generations. These issues have implications for diverse areas, from the question of missing heritability in human complex-trait genetics to the basis of major evolutionary transitions.
Optimal and robust control of transition
NASA Technical Reports Server (NTRS)
Bewley, T. R.; Agarwal, R.
1996-01-01
Optimal and robust control theories are used to determine feedback control rules that effectively stabilize a linearly unstable flow in a plane channel. Wall transpiration (unsteady blowing/suction) with zero net mass flux is used as the control. Control algorithms are considered that depend both on full flowfield information and on estimates of that flowfield based on wall skin-friction measurements only. The development of these control algorithms accounts for modeling errors and measurement noise in a rigorous fashion; these disturbances are considered in both a structured (Gaussian) and unstructured ('worst case') sense. The performance of these algorithms is analyzed in terms of the eigenmodes of the resulting controlled systems, and the sensitivity of individual eigenmodes to both control and observation is quantified.
NASA Astrophysics Data System (ADS)
Stanley, H. E.; Gabaix, Xavier; Gopikrishnan, Parameswaran; Plerou, Vasiliki
2007-08-01
One challenge of economics is that the systems it treats have no perfect metronome in time and no perfect spatial architecture, crystalline or otherwise. Nonetheless, as if by magic, out of nothing but randomness one finds remarkably fine-tuned processes in time. We present an overview of recent research joining practitioners of economic theory and statistical physics to try to better understand puzzles regarding economic fluctuations. One of these puzzles is how to describe outliers, phenomena that lie outside of patterns of statistical regularity. We review evidence consistent with the possibility that such outliers may not exist. This possibility is supported by recent analysis of databases containing information about each trade of every stock.
Measuring the potential utility of seasonal climate predictions
NASA Astrophysics Data System (ADS)
Tippett, Michael K.; Kleeman, Richard; Tang, Youmin
2004-11-01
Variation of sea surface temperature (SST) on seasonal-to-interannual time-scales leads to changes in seasonal weather statistics and seasonal climate anomalies. Relative entropy, an information theory measure of utility, is used to quantify the impact of SST variations on seasonal precipitation compared to natural variability. An ensemble of general circulation model (GCM) simulations is used to estimate this quantity in three regions where tropical SST has a large impact on precipitation: South Florida, the Nordeste of Brazil and Kenya. We find the yearly variation of relative entropy is strongly correlated with shifts in ensemble mean precipitation and weakly correlated with ensemble variance. Relative entropy is also found to be related to measures of the ability of the GCM to reproduce observations.
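For readers unfamiliar with relative entropy as a measure of forecast utility, the following is a minimal sketch of how it is commonly computed under a Gaussian approximation to the forecast and climatological distributions, split into the usual "signal" (mean shift) and "dispersion" (variance change) terms. The ensemble values and sizes are hypothetical, and the sketch does not reproduce the GCM-based analysis described above.

```python
import numpy as np

def gaussian_relative_entropy(forecast, climatology):
    """KL divergence D(forecast || climatology) under a Gaussian fit to each sample,
    decomposed into a 'signal' term (mean shift) and a 'dispersion' term (variance change)."""
    mu_f, var_f = np.mean(forecast), np.var(forecast, ddof=1)
    mu_c, var_c = np.mean(climatology), np.var(climatology, ddof=1)
    signal = 0.5 * (mu_f - mu_c) ** 2 / var_c
    dispersion = 0.5 * (np.log(var_c / var_f) + var_f / var_c - 1.0)
    return signal + dispersion

# Hypothetical ensembles of seasonal precipitation anomalies (mm/day)
rng = np.random.default_rng(0)
climatology = rng.normal(0.0, 1.0, size=500)   # natural variability
forecast = rng.normal(0.8, 0.7, size=40)       # SST-forced ensemble
print(gaussian_relative_entropy(forecast, climatology))
```

Larger values indicate that the SST-forced ensemble differs more strongly from climatology, i.e., higher potential utility of the prediction.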
The use of predictive models to optimize risk of decisions.
Baranyi, József; Buss da Silva, Nathália
2017-01-02
The purpose of this paper is to set up a mathematical framework that risk assessors and regulators could use to quantify the "riskiness" of a particular recommendation (choice/decision). The mathematical theory introduced here can be used for decision support systems. We point out that efficient use of predictive models in decision making for food microbiology needs to consider three major points: (1) the uncertainty and variability of the information on which the decision is to be made; (2) the validity of the predictive models aiding the assessor; and (3) the cost generated by the difference between the a priori choice and the a posteriori outcome. Copyright © 2016 Elsevier B.V. All rights reserved.
Daemonic ergotropy: enhanced work extraction from quantum correlations
NASA Astrophysics Data System (ADS)
Francica, Gianluca; Goold, John; Plastina, Francesco; Paternostro, Mauro
2017-03-01
We investigate how the presence of quantum correlations can influence work extraction in closed quantum systems, establishing a new link between the field of quantum non-equilibrium thermodynamics and the one of quantum information theory. We consider a bipartite quantum system and we show that it is possible to optimize the process of work extraction, thanks to the correlations between the two parts of the system, by using an appropriate feedback protocol based on the concept of ergotropy. We prove that the maximum gain in the extracted work is related to the existence of quantum correlations between the two parts, quantified by either quantum discord or, for pure states, entanglement. We then illustrate our general findings on a simple physical situation consisting of a qubit system.
Granger-causality maps of diffusion processes.
Wahl, Benjamin; Feudel, Ulrike; Hlinka, Jaroslav; Wächter, Matthias; Peinke, Joachim; Freund, Jan A
2016-02-01
Granger causality is a statistical concept devised to reconstruct and quantify predictive information flow between stochastic processes. Although the general concept can be formulated model-free it is often considered in the framework of linear stochastic processes. Here we show how local linear model descriptions can be employed to extend Granger causality into the realm of nonlinear systems. This novel treatment results in maps that resolve Granger causality in regions of state space. Through examples we provide a proof of concept and illustrate the utility of these maps. Moreover, by integration we convert the local Granger causality into a global measure that yields a consistent picture for a global Ornstein-Uhlenbeck process. Finally, we recover invariance transformations known from the theory of autoregressive processes.
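As a point of reference for the abstract above, the following is a minimal sketch of ordinary (global, linear) Granger causality estimated from autoregressive fits; the local-linear, state-space-resolved extension described in the paper is not reproduced, and the coupled time series are hypothetical.

```python
import numpy as np

def granger_causality(x, y, p=2):
    """Linear Granger causality from x to y with lag order p:
    log ratio of residual variances of the restricted (y-only) and full models."""
    n = len(y)
    Y_lags = np.array([y[t - p:t][::-1] for t in range(p, n)])   # past of y
    X_lags = np.array([x[t - p:t][::-1] for t in range(p, n)])   # past of x
    target = y[p:]
    ones = np.ones((n - p, 1))
    A_restricted = np.hstack([ones, Y_lags])
    A_full = np.hstack([ones, Y_lags, X_lags])
    res_r = target - A_restricted @ np.linalg.lstsq(A_restricted, target, rcond=None)[0]
    res_f = target - A_full @ np.linalg.lstsq(A_full, target, rcond=None)[0]
    return np.log(np.var(res_r) / np.var(res_f))

# Hypothetical coupled processes: x drives y with a one-step delay
rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
y = np.zeros_like(x)
for t in range(1, len(x)):
    y[t] = 0.5 * y[t - 1] + 0.6 * x[t - 1] + 0.1 * rng.standard_normal()
print(granger_causality(x, y))   # clearly positive
print(granger_causality(y, x))   # near zero
```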
Nonlinear system theory: another look at dependence.
Wu, Wei Biao
2005-10-04
Based on the nonlinear system theory, we introduce previously undescribed dependence measures for stationary causal processes. Our physical and predictive dependence measures quantify the degree of dependence of outputs on inputs in physical systems. The proposed dependence measures provide a natural framework for a limit theory for stationary processes. In particular, under conditions with quite simple forms, we present limit theorems for partial sums, empirical processes, and kernel density estimates. The conditions are mild and easily verifiable because they are directly related to the data-generating mechanisms.
Shen, Minxue; Cui, Yuanwu; Hu, Ming; Xu, Linyong
2017-01-13
The study aimed to validate a scale to assess the severity of the "Yin deficiency, intestine heat" pattern of functional constipation based on modern test theory. Pooled longitudinal data of 237 patients with the "Yin deficiency, intestine heat" pattern of constipation from a prospective cohort study were used to validate the scale. Exploratory factor analysis was used to examine the common factors of items. A multidimensional item response model was used to assess the scale in the presence of multidimensionality. Cronbach's alpha ranged from 0.79 to 0.89, and the split-half reliability ranged from 0.67 to 0.79 at different measurements. Exploratory factor analysis identified two common factors, and all items had cross factor loadings. The bidimensional model had better goodness of fit than the unidimensional model. The multidimensional item response model showed that all items had moderate to high discrimination parameters. The parameters indicated that the first latent trait signified intestine heat, while the second trait characterized Yin deficiency. The information function showed that items demonstrated the highest discrimination power among patients with moderate to high levels of disease severity. Multidimensional item response theory provides a useful and rational approach to validating scales for assessing the severity of patterns in traditional Chinese medicine.
The chapter provides qualitative information on the magnitude of industrial sources of methane and, where possible, provides information to allow the reader to quantify methane emissions. One difficulty in quantifying methane emissions from industry is the inconsistent treatment ...
Development of Turbulent Biological Closure Parameterizations
2011-09-30
LONG-TERM GOAL: The long-term goals of this project are: (1) to develop a theoretical framework to quantify turbulence-induced NPZ interactions; and (2) to apply the theory to develop parameterizations to be used in realistic coupled physical-biological environmental numerical models. OBJECTIVES: Connect the Goodman and Robinson (2008) statistically based pdf theory to Advection Diffusion Reaction (ADR) modeling of NPZ interaction.
Infiltration on sloping terrain and its role on runoff generation and slope stability
NASA Astrophysics Data System (ADS)
Loáiciga, Hugo A.; Johnson, J. Michael
2018-06-01
A modified Green-and-Ampt model is formulated to quantify infiltration on sloping terrain underlain by homogeneous soil wetted by surficial water application. This paper's theory for quantifying infiltration relies on the mathematical statement of the coupled partial differential equations (pdes) governing infiltration and runoff. These pdes are solved by employing an explicit finite-difference numerical method that yields the infiltration, the infiltration rate, the depth to the wetting front, the rate of runoff, and the depth of runoff everywhere on the slope during external wetting. Data inputs consist of a water application rate or the rainfall hyetograph of a storm of arbitrary duration, soil hydraulic characteristics and antecedent moisture, and the slope's hydraulic and geometric characteristics. The presented theory predicts the effect an advancing wetting front has on slope stability with respect to translational sliding. This paper's theory also develops the 1D pde governing suspended sediment transport and slope degradation caused by runoff influenced by infiltration. Three examples illustrate the application of the developed theory to calculate infiltration and runoff on a slope and their role on the stability of cohesive and cohesionless soils forming sloping terrain.
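The paper's coupled infiltration-runoff formulation on a slope is not reproduced here; the following is a minimal sketch of the classical Green-Ampt infiltration-capacity update on level ground with explicit time stepping, which conveys the basic mechanism. All parameter values (hydraulic conductivity, wetting-front suction, moisture deficit, rainfall rate) are hypothetical.

```python
import numpy as np

def green_ampt(Ks, psi_f, d_theta, rain_rate, dt, n_steps):
    """Explicit time stepping of the classical Green-Ampt model (level ground).
    Returns cumulative infiltration F(t) and runoff depth R(t); lengths in m, rates in m/s."""
    F = 1e-6          # cumulative infiltration (small seed avoids division by zero)
    R = 0.0           # cumulative runoff depth
    F_hist, R_hist = [], []
    for _ in range(n_steps):
        f_cap = Ks * (1.0 + psi_f * d_theta / F)   # infiltration capacity
        f = min(rain_rate, f_cap)                  # actual infiltration rate
        F += f * dt
        R += max(rain_rate - f, 0.0) * dt          # rainfall excess becomes runoff
        F_hist.append(F)
        R_hist.append(R)
    return np.array(F_hist), np.array(R_hist)

# Hypothetical sandy-loam parameters and a 2-hour storm at 20 mm/h
F, R = green_ampt(Ks=1.0e-6, psi_f=0.11, d_theta=0.25,
                  rain_rate=20e-3 / 3600, dt=10.0, n_steps=720)
print(F[-1], R[-1])   # wetting-front depth would follow as F / d_theta
```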
Task Uncertainty Can Account for Mixing and Switch Costs in Task-Switching
Rennie, Jaime L.
2015-01-01
Cognitive control is required in situations that involve uncertainty or change, such as when resolving conflict, selecting responses and switching tasks. Recently, it has been suggested that cognitive control can be conceptualised as a mechanism which prioritises goal-relevant information to deal with uncertainty. This hypothesis has been supported using a paradigm that requires conflict resolution. In this study, we examine whether cognitive control during task switching is also consistent with this notion. We used information theory to quantify the level of uncertainty in different trial types during a cued task-switching paradigm. We test the hypothesis that differences in uncertainty between task repeat and task switch trials can account for typical behavioural effects in task-switching. Increasing uncertainty was associated with less efficient performance (i.e., slower and less accurate), particularly on switch trials and trials that afford little opportunity for advance preparation. Interestingly, both mixing and switch costs were associated with a common episodic control process. These results support the notion that cognitive control may be conceptualised as an information processor that serves to resolve uncertainty in the environment. PMID:26107646
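As an illustration of how information theory quantifies trial uncertainty, the sketch below computes Shannon entropy over hypothetical distributions of possible trial types; the specific probabilities and their mapping onto the cued task-switching paradigm are assumptions, not values from the study.

```python
import numpy as np

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical trial-type distributions in a cued task-switching block
print(shannon_entropy([1.0]))        # 0 bits: single-task block, no task uncertainty
print(shannon_entropy([0.5, 0.5]))   # 1 bit: repeat vs switch equally likely
print(shannon_entropy([0.75, 0.25])) # < 1 bit: repeats more frequent than switches
```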
Confabulation in Alzheimer's disease and amnesia: a qualitative account and a new taxonomy.
La Corte, Valentina; Serra, Mara; Attali, Eve; Boissé, Marie-Françoise; Dalla Barba, Gianfranco
2010-11-01
Clinical and experimental observations have shown that patients who confabulate, especially but not exclusively when provoked by specific questions, retrieve personal habits, repeated events or over-learned information and mistake them for actually experienced, specific, unique events. Accordingly, the aim of this study is to characterize and quantify the relative contribution of this type of confabulation, which we refer to as Habits Confabulation (HC), to confabulations produced by 10 mild Alzheimer's disease (AD) patients and 8 confabulating amnesics (CA) of various etiologies. On the Confabulation Battery (Dalla Barba, 1993a; Dalla Barba & Decaix, 2009), a set of questions involving the retrieval of various kinds of semantic and episodic information, patients produced a total of 424 confabulations. HC accounted for 42% and 62% of confabulations in AD patients and CA, respectively. This result indicates that, regardless of the clinical diagnosis, the brain pathology, or the lesion site, confabulation largely reflects the individuals' tendency to consider habits, routines, and over-learned information as unique episodes. These results are discussed in the framework of the Memory Consciousness and Temporality Theory (Dalla Barba, 2002).
Sang, Xiahan; LeBeau, James M
2014-03-01
We report the development of revolving scanning transmission electron microscopy--RevSTEM--a technique that enables characterization and removal of sample drift distortion from atomic resolution images without the need for a priori crystal structure information. To measure and correct the distortion, we acquire an image series while rotating the scan coordinate system between successive frames. Through theory and experiment, we show that the revolving image series captures the information necessary to analyze sample drift rate and direction. At atomic resolution, we quantify the image distortion using the projective standard deviation, a rapid, real-space method to directly measure lattice vector angles. By fitting these angles to a physical model, we show that the refined drift parameters provide the input needed to correct distortion across the series. We demonstrate that RevSTEM simultaneously removes the need for a priori structure information to correct distortion, leads to a dramatically improved signal-to-noise ratio, and enables picometer precision and accuracy regardless of drift rate. Copyright © 2013 Elsevier B.V. All rights reserved.
Combining disparate data for decision making
NASA Astrophysics Data System (ADS)
Gettings, M. E.
2010-12-01
Combining information of disparate types from multiple data or model sources is a fundamental task in decision making theory. Procedures for combining and utilizing quantitative data with uncertainties are well-developed in several approaches, but methods for including qualitative and semi-quantitative data are much less so. Possibility theory offers an approach to treating all three data types in an objective and repeatable way. In decision making, biases are frequently present in several forms, including those arising from data quality, data spatial and temporal distribution, and the analyst's knowledge and beliefs as to which data or models are most important. The latter bias is particularly evident in the case of qualitative data and there are numerous examples of analysts feeling that a qualitative dataset is more relevant than a quantified one. Possibility theory and fuzzy logic now provide fairly general rules for quantifying qualitative and semi-quantitative data in ways that are repeatable and minimally biased. Once a set of quantified data and/or model layers is obtained, there are several methods of combining them to obtain insight useful in decision making. These include: various combinations of layers using formal fuzzy logic (for example, layer A and (layer B or layer C) but not layer D); connecting the layers with varying influence links in a Fuzzy Cognitive Map; and using the set of layers for the universe of discourse for agent based model simulations. One example of logical combinations that have proven useful is the definition of possible habitat for valley fever fungus (Coccidioides sp.) using variables such as soil type, altitude, aspect, moisture and temperature. A second example is the delineation of the lithology and possible mineralization of several areas beneath basin fill in southern Arizona. A Fuzzy Cognitive Map example is the impacts of development and operation of a hypothetical mine in an area adjacent to a city. In this model variables such as water use, environmental quality measures (visual and geochemical), deposit quality, rate of development, and commodity price combine in complex ways to yield frequently counter-intuitive results. By varying the interaction strengths linking the variables, insight into the complex interactions of the system can be gained. An example using agent-based modeling is a model designed to test the hypothesis that new valley fever fungus sites could be established from existing sites by wind transport of fungal spores. The variables include layers simulating precipitation, temperature, soil moisture, and soil chemistry based on historical climate records and studies of known valley fever habitat. Numerous agent-based model runs show that the system is self organizing to the extent that there will be new sites established by wind transport over decadal scales. Possibility theory provides a framework for gaining insight into the interaction of known or suspected variables in a complex system. Once the data layers are quantified into possibility functions, varying hypotheses of the relative importance of variables and processes can be obtained by repeated combinations with varying weights. This permits an evaluation of the effects of various data layers, their uncertainties, and biases from the layers, all of which improve the objectivity of decision making.
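The logical layer combination quoted in the abstract ("layer A and (layer B or layer C) but not layer D") can be sketched with standard min/max fuzzy operators, as below; the grid of possibility layers is hypothetical, and the full possibility-theoretic workflow (quantification of qualitative data, Fuzzy Cognitive Maps, agent-based runs) is not reproduced.

```python
import numpy as np

# Hypothetical possibility layers on a common grid, values in [0, 1]
rng = np.random.default_rng(2)
A, B, C, D = (rng.random((4, 4)) for _ in range(4))

def fuzzy_and(*layers):   # standard intersection: pointwise minimum
    return np.minimum.reduce(layers)

def fuzzy_or(*layers):    # standard union: pointwise maximum
    return np.maximum.reduce(layers)

def fuzzy_not(layer):     # standard complement
    return 1.0 - layer

# "layer A and (layer B or layer C) but not layer D"
habitat_possibility = fuzzy_and(A, fuzzy_or(B, C), fuzzy_not(D))
print(habitat_possibility)
```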
Information on Quantifiers and Argument Structure in English Learner's Dictionaries.
ERIC Educational Resources Information Center
Lee, Thomas Hun-tak
1993-01-01
Lexicographers have been arguing for the inclusion of abstract and complex grammatical information in dictionaries. This paper examines the extent to which information about quantifiers and the argument structure of verbs is encoded in English learner's dictionaries. The Oxford Advanced Learner's Dictionary (1989), the Longman Dictionary of…
Kruse, Clemens Scott; DeShazo, Jonathan; Kim, Forest; Fulton, Lawrence
2014-05-23
The Health Information Technology for Economic and Clinical Health Act (HITECH) allocated $19.2 billion to incentivize adoption of the electronic health record (EHR). Since 2009, Meaningful Use Criteria have dominated information technology (IT) strategy. Health care organizations have struggled to meet expectations and avoid penalties to reimbursements from the Center for Medicare and Medicaid Services (CMS). Organizational theories attempt to explain factors that influence organizational change, and many theories address changes in organizational strategy. However, due to the complexities of the health care industry, existing organizational theories fall short of demonstrating association with significant health care IT implementations. There is no organizational theory for health care that identifies, groups, and analyzes both internal and external factors of influence for large health care IT implementations like adoption of the EHR. The purpose of this systematic review is to identify a full-spectrum of both internal organizational and external environmental factors associated with the adoption of health information technology (HIT), specifically the EHR. The result is a conceptual model that is commensurate with the complexity of with the health care sector. We performed a systematic literature search in PubMed (restricted to English), EBSCO Host, and Google Scholar for both empirical studies and theory-based writing from 1993-2013 that demonstrated association between influential factors and three modes of HIT: EHR, electronic medical record (EMR), and computerized provider order entry (CPOE). We also looked at published books on organizational theories. We made notes and noted trends on adoption factors. These factors were grouped as adoption factors associated with various versions of EHR adoption. The resulting conceptual model summarizes the diversity of independent variables (IVs) and dependent variables (DVs) used in articles, editorials, books, as well as quantitative and qualitative studies (n=83). As of 2009, only 16.30% (815/4999) of nonfederal, acute-care hospitals had adopted a fully interoperable EHR. From the 83 articles reviewed in this study, 16/83 (19%) identified internal organizational factors and 9/83 (11%) identified external environmental factors associated with adoption of the EHR, EMR, or CPOE. The conceptual model for EHR adoption associates each variable with the work that identified it. Commonalities exist in the literature for internal organizational and external environmental factors associated with the adoption of the EHR and/or CPOE. The conceptual model for EHR adoption associates internal and external factors, specific to the health care industry, associated with adoption of the EHR. It becomes apparent that these factors have some level of association, but the association is not consistently calculated individually or in combination. To better understand effective adoption strategies, empirical studies should be performed from this conceptual model to quantify the positive or negative effect of each factor.
Quantifying quantum coherence with quantum Fisher information.
Feng, X N; Wei, L F
2017-11-14
Quantum coherence is an old but perennially important concept in quantum mechanics, and it is now regarded as a necessary resource for quantum information processing and quantum metrology. However, the question of how to quantify quantum coherence has received attention only recently (see, e.g., Baumgratz et al., PRL 113, 140401 (2014)). In this paper we verify that the well-known quantum Fisher information (QFI) can be utilized to quantify quantum coherence, as it satisfies monotonicity under typical incoherent operations and convexity under mixing of quantum states. Differing from most of the purely axiomatic methods, quantifying quantum coherence by QFI is experimentally testable, as the bound of the QFI is practically measurable. The validity of our proposal is specifically demonstrated with the typical phase-damping and depolarizing evolution processes of a generic single-qubit state, and also by comparing it with other quantifying methods proposed previously.
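A minimal sketch of the quantities involved, assuming the standard spectral formula for the QFI with respect to a fixed generator and the usual l1 norm of coherence; the dephased single-qubit state and the choice of sigma_z as generator are illustrative assumptions, not the paper's specific examples.

```python
import numpy as np

def quantum_fisher_information(rho, H):
    """QFI of state rho w.r.t. generator H via the spectral formula
    F_Q = 2 * sum_{k,l} (lam_k - lam_l)^2 / (lam_k + lam_l) * |<k|H|l>|^2."""
    lam, vecs = np.linalg.eigh(rho)
    Hm = vecs.conj().T @ H @ vecs       # H expressed in the eigenbasis of rho
    F = 0.0
    for k in range(len(lam)):
        for l in range(len(lam)):
            if lam[k] + lam[l] > 1e-12:
                F += 2.0 * (lam[k] - lam[l]) ** 2 / (lam[k] + lam[l]) * abs(Hm[k, l]) ** 2
    return F

def l1_coherence(rho):
    """l1 norm of coherence: sum of absolute values of off-diagonal elements."""
    return np.sum(np.abs(rho - np.diag(np.diag(rho))))

# Hypothetical example: a |+> state partially dephased in the computational basis
p = 0.3                                   # dephasing strength
psi = np.array([[0.5, 0.5], [0.5, 0.5]])  # |+><+|
rho = (1 - p) * psi + p * np.diag(np.diag(psi))
sigma_z = np.diag([1.0, -1.0])            # phase-rotation generator
print(quantum_fisher_information(rho, sigma_z), l1_coherence(rho))
```

For this state both quantities shrink as the dephasing strength p grows, illustrating how the QFI tracks the loss of coherence.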
Bridging scales in the evolution of infectious disease life histories: theory.
Day, Troy; Alizon, Samuel; Mideo, Nicole
2011-12-01
A significant goal of recent theoretical research on pathogen evolution has been to develop theory that bridges within- and between-host dynamics. The main approach used to date is one that nests within-host models of pathogen replication in models for the between-host spread of infectious diseases. Although this provides an elegant approach, it nevertheless suffers from some practical difficulties. In particular, the information required to satisfactorily model the mechanistic details of the within-host dynamics is not often available. Here, we present a theoretical approach that circumvents these difficulties by quantifying the relevant within-host factors in an empirically tractable way. The approach is closely related to quantitative genetic models for function-valued traits, and it also allows for the prediction of general characteristics of disease life history, including the timing of virulence, transmission, and host recovery. In a companion paper, we illustrate the approach by applying it to data from a model system of malaria. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.
Boulder Dislodgement by Tsunamis and Storms: Version 2.0
NASA Astrophysics Data System (ADS)
Weiss, Robert
2016-04-01
In the past, boulder dislodgement by tsunami and storm waves has been treated with a simple threshold criterion in which a boulder moves if the sum of the forces acting on it is larger than zero. Impulse theory teaches us, however, that this criterion is not sufficient to explain particle dislodgement. We employ an adapted version of Newton's Second Law of Motion (NSLM) to capture the essence of impulse theory, namely that the sum of the forces has to exceed a certain threshold for a certain period of time. Furthermore, a classical assumption is to consider linear waves. However, as waves travel toward the shore, they are altered by non-linear processes. We employ the TRIADS model to quantify that change and how it impacts boulder dislodgement. We present the results of the coupled model (adapted NSLM and TRIADS model). The results project a more complex picture of boulder transport by storms and tsunamis. The following question arises: What information do we actually invert, and what does it tell us about the causative event?
Uncovering Randomness and Success in Society
Jalan, Sarika; Sarkar, Camellia; Madhusudanan, Anagha; Dwivedi, Sanjiv Kumar
2014-01-01
An understanding of how individuals shape and impact the evolution of society is vastly limited due to the unavailability of large-scale reliable datasets that can simultaneously capture information regarding individual movements and social interactions. We believe that the popular Indian film industry, “Bollywood”, can provide a social network apt for such a study. Bollywood provides massive amounts of real, unbiased data that spans more than 100 years, and hence this network has been used as a model for the present paper. The nodes which maintain a moderate degree or widely cooperate with the other nodes of the network tend to be more fit (measured as the success of the node in the industry) in comparison to the other nodes. The analysis carried forth in the current work, using a conjoined framework of complex network theory and random matrix theory, aims to quantify the elements that determine the fitness of an individual node and the factors that contribute to the robustness of a network. The authors of this paper believe that the method of study used in the current paper can be extended to study various other industries and organizations. PMID:24533073
Schäfer, Sascha; Liang, Wenxi; Zewail, Ahmed H
2011-12-07
Recent studies in ultrafast electron crystallography (UEC) using a reflection diffraction geometry have enabled the investigation of a wide range of phenomena on the femtosecond and picosecond time scales. In all these studies, the analysis of the diffraction patterns and their temporal change after excitation was performed within the kinematical scattering theory. In this contribution, we address the question, to what extent dynamical scattering effects have to be included in order to obtain quantitative information about structural dynamics. We discuss different scattering regimes and provide diffraction maps that describe all essential features of scatterings and observables. The effects are quantified by dynamical scattering simulations and examined by direct comparison to the results of ultrafast electron diffraction experiments on an in situ prepared Ni(100) surface, for which structural dynamics can be well described by a two-temperature model. We also report calculations for graphite surfaces. The theoretical framework provided here allows for further UEC studies of surfaces especially at larger penetration depths and for those of heavy-atom materials. © 2011 American Institute of Physics
Random matrix theory and fund of funds portfolio optimisation
NASA Astrophysics Data System (ADS)
Conlon, T.; Ruskin, H. J.; Crane, M.
2007-08-01
The proprietary nature of hedge fund investing means that it is common practice for managers to release minimal information about their returns. The construction of a fund of hedge funds portfolio requires a correlation matrix, which often has to be estimated using a relatively small sample of monthly returns data, and this induces noise. In this paper, random matrix theory (RMT) is applied to a cross-correlation matrix C constructed using hedge fund returns data. The analysis reveals a number of eigenvalues that deviate from the spectrum suggested by RMT. The components of the deviating eigenvectors are found to correspond to distinct groups of strategies that are applied by hedge fund managers. The inverse participation ratio is used to quantify the number of components that participate in each eigenvector. Finally, the correlation matrix is cleaned by separating the noisy part from the non-noisy part of C. This technique is found to greatly reduce the difference between the predicted and realised risk of a portfolio, leading to an improved risk profile for a fund of hedge funds.
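A minimal sketch of the eigenvalue comparison against the Marchenko-Pastur edge, using synthetic return data in place of proprietary hedge fund returns; the cleaning step of the paper is not reproduced, and the single "strategy factor" injected into the data is purely illustrative.

```python
import numpy as np

# Hypothetical monthly returns for N funds over T months (T/N modest, so noise matters)
rng = np.random.default_rng(3)
N, T = 50, 120
returns = rng.standard_normal((T, N))
returns[:, :10] += 0.8 * rng.standard_normal((T, 1))   # one common "strategy" factor

C = np.corrcoef(returns, rowvar=False)                  # N x N correlation matrix
eigvals, eigvecs = np.linalg.eigh(C)

# Marchenko-Pastur upper edge for a purely random correlation matrix
q = T / N
lambda_max = (1.0 + 1.0 / np.sqrt(q)) ** 2
deviating = eigvals[eigvals > lambda_max]               # information-bearing modes
print(f"MP edge: {lambda_max:.2f}, deviating eigenvalues: {deviating}")

# Inverse participation ratio of the top eigenvector: how many funds participate
v = eigvecs[:, -1]
ipr = np.sum(v ** 4)
print(f"IPR of largest mode: {ipr:.3f} (participation ~ {1.0 / ipr:.0f} funds)")
```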
Generalization of information-based concepts in forecast verification
NASA Astrophysics Data System (ADS)
Tödter, J.; Ahrens, B.
2012-04-01
This work deals with information-theoretical methods in probabilistic forecast verification. Recent findings concerning the Ignorance Score are briefly reviewed, and then the generalization to continuous forecasts is shown. For ensemble forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are the prominent verification measures for probabilistic forecasts. Particularly, their decompositions into measures quantifying the reliability, resolution and uncertainty of the forecasts are attractive. Information theory sets up the natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can also be generalized to a ranked version (RIGN). Here, the IGN, its generalizations and decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The applicability and usefulness of the conceptually appealing CRIGN is illustrated, together with an algorithm to evaluate its components reliability, resolution, and uncertainty for ensemble-generated forecasts. This is also directly applicable to the more traditional CRPS.
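A minimal sketch contrasting the Brier Score and the Ignorance Score for binary-event probability forecasts; the forecast-outcome pairs are illustrative, and the decompositions and ranked/continuous generalizations discussed above are not implemented.

```python
import numpy as np

def brier_score(p_forecast, outcomes):
    """Mean squared difference between forecast probability and binary outcome."""
    p, o = np.asarray(p_forecast), np.asarray(outcomes)
    return np.mean((p - o) ** 2)

def ignorance_score(p_forecast, outcomes, eps=1e-12):
    """Mean negative log2 probability assigned to the outcome that occurred."""
    p, o = np.asarray(p_forecast), np.asarray(outcomes)
    p_event = np.where(o == 1, p, 1.0 - p)
    return -np.mean(np.log2(np.clip(p_event, eps, 1.0)))

# Illustrative forecasts of a binary event (e.g. above-median seasonal rainfall)
p = np.array([0.9, 0.7, 0.2, 0.6, 0.1])
o = np.array([1,   1,   0,   0,   0  ])
print(brier_score(p, o), ignorance_score(p, o))
```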
Dynamics of social contagions with memory of nonredundant information
NASA Astrophysics Data System (ADS)
Wang, Wei; Tang, Ming; Zhang, Hai-Feng; Lai, Ying-Cheng
2015-07-01
A key ingredient in social contagion dynamics is reinforcement, as adopting a certain social behavior requires verification of its credibility and legitimacy. Memory of nonredundant information plays an important role in reinforcement, which so far has eluded theoretical analysis. We first propose a general social contagion model with reinforcement derived from nonredundant information memory. Then, we develop a unified edge-based compartmental theory to analyze this model, and a remarkable agreement with numerics is obtained on some specific models. We use a spreading threshold model as a specific example to understand the memory effect, in which each individual adopts a social behavior only when the cumulative pieces of information that the individual received from his or her neighbors exceeds an adoption threshold. Through analysis and numerical simulations, we find that the memory characteristic markedly affects the dynamics as quantified by the final adoption size. Strikingly, we uncover a transition phenomenon in which the dependence of the final adoption size on some key parameters, such as the transmission probability, can change from being discontinuous to being continuous. The transition can be triggered by proper parameters and structural perturbations to the system, such as decreasing individuals' adoption threshold, increasing initial seed size, or enhancing the network heterogeneity.
Non-ad-hoc decision rule for the Dempster-Shafer method of evidential reasoning
NASA Astrophysics Data System (ADS)
Cheaito, Ali; Lecours, Michael; Bosse, Eloi
1998-03-01
This paper is concerned with the fusion of identity information through the use of statistical analysis rooted in the Dempster-Shafer theory of evidence to provide automatic identification aboard a platform. An identity information process for a baseline Multi-Source Data Fusion (MSDF) system is defined. The MSDF system is applied to information sources which include a number of radars, IFF systems, an ESM system, and a remote track source. We use a comprehensive Platform Data Base (PDB) containing all the possible identity values that the potential target may take, and we use fuzzy logic strategies which enable the fusion of subjective attribute information from sensors and the PDB to make the derivation of target identity quicker, more precise, and accompanied by statistically quantifiable measures of confidence. The conventional Dempster-Shafer theory lacks a formal basis upon which decisions can be made in the face of ambiguity. We define a non-ad-hoc decision rule based on the expected utility interval for pruning the 'unessential' propositions which would otherwise overload real-time data fusion systems. An example has been selected to demonstrate the implementation of our modified Dempster-Shafer method of evidential reasoning.
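A minimal sketch of Dempster's rule of combination on a small identity frame; the sensor mass assignments are hypothetical, and the paper's expected-utility-interval pruning rule is not implemented.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions defined on frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b                     # mass assigned to contradictory pairs
    if conflict >= 1.0:
        raise ValueError("total conflict; combination undefined")
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Hypothetical identity evidence from two sensors over the frame {friend, hostile, neutral}
F, H, N = "friend", "hostile", "neutral"
m_radar = {frozenset({F}): 0.6, frozenset({F, N}): 0.3, frozenset({F, H, N}): 0.1}
m_esm   = {frozenset({F}): 0.5, frozenset({H}): 0.2, frozenset({F, H, N}): 0.3}
fused = dempster_combine(m_radar, m_esm)
for A, v in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(A), round(v, 3))
```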
Shaffer, Howard J; LaBrie, Richard A; LaPlante, Debi
2004-03-01
Exposure and adaptation models provide competing perspectives of the environmental influence on the development of addictive disorders. Exposure theory suggests that the presence of environmental toxins (e.g., casinos) increases the likelihood of related disease (e.g., gambling-related disorders). Adaptation theory proposes that new environmental toxins initially increase adverse reactions; subsequently, symptoms diminish as individuals adapt to such toxins and acquire resistance. The authors describe a new public health regional exposure model (REM) that provides a tool to gather empirical evidence in support of either model. This article demonstrates how the strategic REM, modified to examine gambling exposure, uses standardized indices of exposure to social phenomena at the regional level to quantify social constructs.
Borowska, Alicja; Szwaczkowski, Tomasz; Kamiński, Stanisław; Hering, Dorota M; Kordan, Władysław; Lecewicz, Marek
2018-05-01
Use of information theory can be an alternative statistical approach to detect genome regions and candidate genes that are associated with livestock traits. The aim of this study was to verify the validity of SNP effects on some semen quality variables of bulls using entropy analysis. Records from 288 Holstein-Friesian bulls from one AI station were included. The following semen quality variables were analyzed: CASA kinematic variables of sperm (total motility, average path velocity, straight line velocity, curvilinear velocity, amplitude of lateral head displacement, beat cross frequency, straightness, linearity), sperm membrane integrity (plasmalemma, mitochondrial function), and sperm ATP content. Molecular data included 48,192 SNPs. After filtering (call rate = 0.95 and MAF = 0.05), 34,794 SNPs were included in the entropy analysis. The entropy and conditional entropy were estimated for each SNP. Conditional entropy quantifies the remaining uncertainty about the values of the variable given knowledge of the SNP. The most informative SNPs for each variable were determined. The computations were performed using the R statistical package. A majority of the loci had relatively small contributions. The most informative SNPs for all variables were mainly located on chromosomes 3, 4, 5 and 16. The results from the study indicate that important genome regions and candidate genes that determine semen quality variables in bulls are located on a number of chromosomes. Some detected SNP clusters were located in RNA genes (U6 and 5S_rRNA) for all analyzed variables. Associations between the PARK2 and GALNT13 genes and some semen characteristics were also detected. Copyright © 2018 Elsevier B.V. All rights reserved.
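A minimal sketch of the entropy and conditional-entropy calculation for a single SNP and a binned trait (the study used R; Python is used here purely for illustration); the genotype and trait values are simulated, not from the bull dataset.

```python
import numpy as np
from collections import Counter

def entropy(values):
    """Shannon entropy in bits of a discrete sample."""
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def conditional_entropy(y, x):
    """H(Y | X): remaining uncertainty about trait Y once genotype X is known."""
    y, x = np.asarray(y), np.asarray(x)
    h = 0.0
    for g in np.unique(x):
        mask = x == g
        h += mask.mean() * entropy(y[mask])
    return h

# Hypothetical data: genotypes (0/1/2 alternative-allele counts) and a binned trait
rng = np.random.default_rng(4)
genotype = rng.integers(0, 3, size=288)
trait_bin = (rng.random(288) < 0.3 + 0.15 * genotype).astype(int)  # weak association
print(entropy(trait_bin), conditional_entropy(trait_bin, genotype))
```

An informative SNP is one for which the conditional entropy drops noticeably below the unconditional entropy of the trait.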
Decoherence estimation in quantum theory and beyond
NASA Astrophysics Data System (ADS)
Pfister, Corsin
The quantum physics literature provides many different characterizations of decoherence. Most of them have in common that they describe decoherence as a kind of influence on a quantum system upon interacting with another system. In the spirit of quantum information theory, we adopt a particular viewpoint on decoherence which describes it as the loss of information into a system that is possibly controlled by an adversary. We use a quantitative framework for decoherence that builds on operational characterizations of the min-entropy that have been developed in the quantum information literature. It characterizes decoherence as an influence on quantum channels that reduces their suitability for a variety of quantifiable tasks such as the distribution of secret cryptographic keys of a certain length or the distribution of a certain number of maximally entangled qubit pairs. This allows for a quantitative and operational characterization of decoherence via operational characterizations of the min-entropy. In this thesis, we present a series of results about the estimation of the min-entropy, subdivided into three parts. The first part concerns the estimation of a quantum adversary's uncertainty about classical information--expressed by the smooth min-entropy--as it is done in protocols for quantum key distribution (QKD). We analyze this form of min-entropy estimation in detail and find that some of the more recently suggested QKD protocols have previously unnoticed security loopholes. We show that the specifics of the sifting subroutine of a QKD protocol are crucial for security by pointing out mistakes in the security analyses in the literature and by presenting eavesdropping attacks on those problematic protocols. We provide solutions to the identified problems and present a formalized analysis of the min-entropy estimate that incorporates the sifting stage of QKD protocols. In the second part, we extend ideas from QKD to a protocol that allows one to estimate an adversary's uncertainty about quantum information, expressed by the fully quantum smooth min-entropy. Roughly speaking, we show that a protocol resembling the parallel execution of two QKD protocols can be used to lower bound the min-entropy of some unmeasured qubits. We explain how this result may influence the ongoing search for protocols for entanglement distribution. The third part is dedicated to the development of a framework that allows the estimation of decoherence even in experiments that cannot be correctly described by quantum theory. Inspired by an equivalent formulation of the min-entropy that relates it to the fidelity with a maximally entangled state, we define a decoherence quantity for a very general class of probabilistic theories that reduces to the min-entropy in the special case of quantum theory. This entails a definition of maximal entanglement for generalized probabilistic theories. Using techniques from semidefinite and linear programming, we show how bounds on this quantity can be estimated through Bell-type experiments. This makes it possible to test models for decoherence that cannot be described by quantum theory. As an example application, we devise an experimental test of a model for gravitational decoherence that has been suggested in the literature.
Garcia-Ramos, Camille; Lin, Jack J; Kellermann, Tanja S; Bonilha, Leonardo; Prabhakaran, Vivek; Hermann, Bruce P
2016-11-01
The recent revision of the classification of the epilepsies released by the ILAE Commission on Classification and Terminology (2005-2009) has been a major development in the field. Papers in this section of the special issue explore the relevance of other techniques to examine, categorize, and classify cognitive and behavioral comorbidities in epilepsy. In this review, we investigate the applicability of graph theory to understand the impact of epilepsy on cognition compared with controls and, then, the patterns of cognitive development in normally developing children which would set the stage for prospective comparisons of children with epilepsy and controls. The overall goal is to examine the potential utility of this analytic tool and approach to conceptualize the cognitive comorbidities in epilepsy. Given that the major cognitive domains representing cognitive function are interdependent, the associations between neuropsychological abilities underlying these domains can be referred to as a cognitive network. Therefore, the architecture of this cognitive network can be quantified and assessed using graph theory methods, rendering a novel approach to the characterization of cognitive status. We first provide fundamental information about graph theory procedures, followed by application of these techniques to cross-sectional analysis of neuropsychological data in children with epilepsy compared with that of controls, concluding with prospective analysis of neuropsychological development in younger and older healthy controls. This article is part of a Special Issue entitled "The new approach to classification: Rethinking cognition and behavior in epilepsy". Copyright © 2016 Elsevier Inc. All rights reserved.
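A minimal sketch of how a cognitive network can be built from inter-test correlations and summarized with basic graph-theory metrics, assuming the networkx library is available; the test scores and the correlation threshold are hypothetical choices, not those used in the review.

```python
import numpy as np
import networkx as nx

# Hypothetical scores of 100 children on 8 neuropsychological tests (shared factor added)
rng = np.random.default_rng(5)
scores = rng.standard_normal((100, 8)) + rng.standard_normal((100, 1))
corr = np.corrcoef(scores, rowvar=False)

# Keep edges whose absolute correlation exceeds a (hypothetical) threshold
threshold = 0.5
adj = (np.abs(corr) > threshold).astype(int)
np.fill_diagonal(adj, 0)

G = nx.from_numpy_array(adj)
print("density:", nx.density(G))
print("average clustering:", nx.average_clustering(G))
print("degree centrality:", nx.degree_centrality(G))
```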
Lu, Ruipeng; Mucaki, Eliseos J; Rogan, Peter K
2017-03-17
Data from ChIP-seq experiments can be used to derive the genome-wide binding specificities of transcription factors (TFs) and other regulatory proteins. We analyzed 765 ENCODE ChIP-seq peak datasets of 207 human TFs with a novel motif discovery pipeline based on recursive, thresholded entropy minimization. This approach, while obviating the need to compensate for skewed nucleotide composition, distinguishes true binding motifs from noise, quantifies the strengths of individual binding sites based on computed affinity and detects adjacent cofactor binding sites that coordinate with the targets of primary, immunoprecipitated TFs. We obtained contiguous and bipartite information theory-based position weight matrices (iPWMs) for 93 sequence-specific TFs, discovered 23 cofactor motifs for 127 TFs and revealed six high-confidence novel motifs. The reliability and accuracy of these iPWMs were determined via four independent validation methods, including the detection of experimentally proven binding sites, explanation of effects of characterized SNPs, comparison with previously published motifs and statistical analyses. We also predict previously unreported TF coregulatory interactions (e.g. TF complexes). These iPWMs constitute a powerful tool for predicting the effects of sequence variants in known binding sites, performing mutation analysis on regulatory SNPs and predicting previously unrecognized binding sites and target genes. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
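A minimal sketch of an information-theoretic position weight matrix and site scoring under a uniform background assumption; the aligned sites and the imaginary factor are hypothetical, and the recursive thresholded entropy-minimization pipeline itself is not reproduced.

```python
import numpy as np

BASES = "ACGT"

def build_pwm(sites, pseudocount=0.5):
    """Information-theoretic PWM: log2( f(b, i) / background ) per base and position."""
    L = len(sites[0])
    counts = np.full((4, L), pseudocount)
    for s in sites:
        for i, b in enumerate(s):
            counts[BASES.index(b), i] += 1
    freqs = counts / counts.sum(axis=0)
    return np.log2(freqs / 0.25)            # uniform background assumed

def score_site(pwm, site):
    """Sum of per-position weights, in bits, for a candidate binding site."""
    return sum(pwm[BASES.index(b), i] for i, b in enumerate(site))

# Hypothetical aligned binding sites for an imaginary factor
sites = ["TGACGTCA", "TGACGTCA", "TGACGTAA", "TGATGTCA", "TTACGTCA"]
pwm = build_pwm(sites)
print(round(score_site(pwm, "TGACGTCA"), 2), round(score_site(pwm, "AAAAAAAA"), 2))
```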
Fluid Registration of Diffusion Tensor Images Using Information Theory
Chiang, Ming-Chang; Leow, Alex D.; Klunder, Andrea D.; Dutton, Rebecca A.; Barysheva, Marina; Rose, Stephen E.; McMahon, Katie L.; de Zubicaray, Greig I.; Toga, Arthur W.; Thompson, Paul M.
2008-01-01
We apply an information-theoretic cost metric, the symmetrized Kullback-Leibler (sKL) divergence, or J-divergence, to fluid registration of diffusion tensor images. The difference between diffusion tensors is quantified based on the sKL-divergence of their associated probability density functions (PDFs). Three-dimensional DTI data from 34 subjects were fluidly registered to an optimized target image. To allow large image deformations but preserve image topology, we regularized the flow with a large-deformation diffeomorphic mapping based on the kinematics of a Navier-Stokes fluid. A driving force was developed to minimize the J-divergence between the deforming source and target diffusion functions, while reorienting the flowing tensors to preserve fiber topography. In initial experiments, we showed that the sKL-divergence based on full diffusion PDFs is adaptable to higher-order diffusion models, such as high angular resolution diffusion imaging (HARDI). The sKL-divergence was sensitive to subtle differences between two diffusivity profiles, showing promise for nonlinear registration applications and multisubject statistical analysis of HARDI data. PMID:18390342
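A minimal sketch of the symmetrized KL (J) divergence between the zero-mean Gaussian displacement PDFs defined by two diffusion tensors, using the closed form for Gaussians; the tensors are illustrative, and the fluid registration machinery itself is not reproduced.

```python
import numpy as np

def skl_divergence(D1, D2):
    """Symmetrized KL (J) divergence between zero-mean Gaussians with covariances
    D1 and D2 (3x3 diffusion tensors): J = 0.5 * [tr(D2^-1 D1) + tr(D1^-1 D2)] - d."""
    d = D1.shape[0]
    return 0.5 * (np.trace(np.linalg.solve(D2, D1)) +
                  np.trace(np.linalg.solve(D1, D2))) - d

# Hypothetical tensors: isotropic vs prolate (fiber-like) diffusion, in 1e-3 mm^2/s
D_iso = np.diag([0.7, 0.7, 0.7])
D_fib = np.diag([1.7, 0.3, 0.3])
J = skl_divergence(D_iso, D_fib)
print(J, np.sqrt(J))   # the square root is often used as the distance
```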
Gaussian process tomography for soft x-ray spectroscopy at WEST without equilibrium information
NASA Astrophysics Data System (ADS)
Wang, T.; Mazon, D.; Svensson, J.; Li, D.; Jardin, A.; Verdoolaege, G.
2018-06-01
Gaussian process tomography (GPT) is a recently developed tomography method based on the Bayesian probability theory [J. Svensson, JET Internal Report EFDA-JET-PR(11)24, 2011 and Li et al., Rev. Sci. Instrum. 84, 083506 (2013)]. By modeling the soft X-ray (SXR) emissivity field in a poloidal cross section as a Gaussian process, the Bayesian SXR tomography can be carried out in a robust and extremely fast way. Owing to the short execution time of the algorithm, GPT is an important candidate for providing real-time reconstructions with a view to impurity transport and fast magnetohydrodynamic control. In addition, the Bayesian formalism allows quantifying uncertainty on the inferred parameters. In this paper, the GPT technique is validated using a synthetic data set expected from the WEST tokamak, and the results are shown of its application to the reconstruction of SXR emissivity profiles measured on Tore Supra. The method is compared with the standard algorithm based on minimization of the Fisher information.
NASA Astrophysics Data System (ADS)
Li, Haifeng; Zhu, Qing; Yang, Xiaoxia; Xu, Linrong
2012-10-01
Typical characteristics of remote sensing applications are concurrent tasks, such as those found in disaster rapid response. The existing approach to composing a geographical information processing service chain searches for an optimal solution for each chain in what can be deemed a "selfish" way. This leads to conflicts amongst concurrent tasks and decreases the performance of all service chains. In this study, a non-cooperative game-based mathematical model to analyse the competitive relationships between tasks is proposed. A best response function is used to ensure that each task maintains utility optimisation by considering the composition strategies of other tasks and quantifying conflicts between tasks. Based on this, an iterative algorithm that converges to a Nash equilibrium is presented, the aim being to provide good convergence and maximise the utility of all tasks under concurrent task conditions. Theoretical analyses and experiments showed that the newly proposed method, when compared to existing service composition methods, has better practical utility in all tasks.
NASA Astrophysics Data System (ADS)
Johnson, J.; Verrill, N.; Horton, D.; Wing, S.
2017-12-01
Since the beginning of NOAA and NASA's Geostationary Operational Environmental Satellite (GOES) program in 1975, GOES satellites have been monitoring the geomagnetic field at geosynchronous orbit with onboard magnetometers. Using this GOES magnetometer data, we develop a state variable which characterizes the stretching of the near-Earth magnetotail by mapping the data to a central location within the magnetotail at geosynchronous distance (≈6.6 RE). Because the stretching of the magnetotail is thought to be related to the occurrence of substorms, we then assess the transfer entropy between the measure of tail stretching and substorm onsets in order to quantify the information content of our state variable with regards to substorms. Our results support the idea that stretching in the magnetotail precedes substorms and that the relationship is causal, which can be useful for magnetospheric activity and substorm predictions. We are currently assessing how well magnetic field measurements at geosynchronous orbit characterize tail stretching and their usefulness for predictions.
Experimental verification of an indefinite causal order
Rubino, Giulia; Rozema, Lee A.; Feix, Adrien; Araújo, Mateus; Zeuner, Jonas M.; Procopio, Lorenzo M.; Brukner, Časlav; Walther, Philip
2017-01-01
Investigating the role of causal order in quantum mechanics has recently revealed that the causal relations of events may not be a priori well defined in quantum theory. Although this has triggered a growing interest on the theoretical side, creating processes without a causal order is an experimental challenge. We report the first decisive demonstration of a process with an indefinite causal order. To do this, we quantify how incompatible our setup is with a definite causal order by measuring a “causal witness.” This mathematical object incorporates a series of measurements that are designed to yield a certain outcome only if the process under examination is not consistent with any well-defined causal order. In our experiment, we perform a measurement in a superposition of causal orders—without destroying the coherence—to acquire information both inside and outside of a “causally nonordered process.” Using this information, we experimentally determine a causal witness, demonstrating by almost 7 SDs that the experimentally implemented process does not have a definite causal order. PMID:28378018
Space Vehicle Guidance, Navigation, Control, and Estimation Operations Technologies
2018-03-29
Second-order (quasi) relative orbital elements are explored: the amplitude and angular position around the ellipse, and the out-of-plane amplitude and angular position. These elements are explicitly relatable to the six rectangular coordinates. One theory uses the expanded solution form and introduces several instantaneous ellipses. In each case, the theory quantifies distortion of the first-order relative orbital elements when including second-order effects. The new variables are ...
The Physics of Open Ended Evolution
NASA Astrophysics Data System (ADS)
Adams, Alyssa M.
What makes living systems different than non-living ones? Unfortunately this question is impossible to answer, at least currently. Instead, we must face computationally tangible questions based on our current understanding of physics, computation, information, and biology. Yet we have few insights into how living systems might quantifiably differ from their non-living counterparts, as in a mathematical foundation to explain away our observations of biological evolution, emergence, innovation, and organization. The development of a theory of living systems, if at all possible, demands a mathematical understanding of how data generated by complex biological systems changes over time. In addition, this theory ought to be broad enough as to not be constrained to an Earth-based biochemistry. In this dissertation, the philosophy of studying living systems from the perspective of traditional physics is first explored as a motivating discussion for subsequent research. Traditionally, we have often thought of the physical world from a bottom-up approach: things happening on a smaller scale aggregate into things happening on a larger scale. In addition, the laws of physics are generally considered static over time. Research suggests that biological evolution may follow dynamic laws that (at least in part) change as a function of the state of the system. Of the three featured research projects, cellular automata (CA) are used as a model to study certain aspects of living systems in two of them. These aspects include self-reference, open-ended evolution, local physical universality, subjectivity, and information processing. Open-ended evolution and local physical universality are attributed to the vast amount of innovation observed throughout biological evolution. Biological systems may distinguish themselves in terms of information processing and storage, not outside the theory of computation. The final research project concretely explores real-world phenomenon by means of mapping dominance hierarchies in the evolution of video game strategies. Though the main question of how life differs from non-life remains unanswered, the mechanisms behind open-ended evolution and physical universality are revealed.
Asymmetry and coherence weight of quantum states
NASA Astrophysics Data System (ADS)
Bu, Kaifeng; Anand, Namit; Singh, Uttam
2018-03-01
The asymmetry of quantum states is an important resource in quantum information processing tasks such as quantum metrology and quantum communication. In this paper, we introduce the notion of asymmetry weight—an operationally motivated asymmetry quantifier in the resource theory of asymmetry. We study the convexity and monotonicity properties of asymmetry weight and focus on its interplay with the corresponding semidefinite programming (SDP) forms along with its connection to other asymmetry measures. Since the SDP form of asymmetry weight is closely related to asymmetry witnesses, we find that the asymmetry weight can be regarded as a (state-dependent) asymmetry witness. Moreover, some specific entanglement witnesses can be viewed as a special case of an asymmetry witness—which indicates a potential connection between asymmetry and entanglement. We also provide an operationally meaningful coherence measure, which we term coherence weight, and investigate its relationship to other coherence measures like the robustness of coherence and the l1 norm of coherence. In particular, we show that for Werner states in any dimension d all three coherence quantifiers, namely, the coherence weight, the robustness of coherence, and the l1 norm of coherence, are equal and are given by a single letter formula.
Uncertainty Estimation in Tsunami Initial Condition From Rapid Bayesian Finite Fault Modeling
NASA Astrophysics Data System (ADS)
Benavente, R. F.; Dettmer, J.; Cummins, P. R.; Urrutia, A.; Cienfuegos, R.
2017-12-01
It is well known that kinematic rupture models for a given earthquake can present discrepancies even when similar datasets are employed in the inversion process. While quantifying this variability can be critical when making early estimates of the earthquake and triggered tsunami impact, "most likely models" are normally used for this purpose. In this work, we quantify the uncertainty of the tsunami initial condition for the great Illapel earthquake (Mw = 8.3, 2015, Chile). We focus on utilizing data and inversion methods that are suitable to rapid source characterization yet provide meaningful and robust results. Rupture models from teleseismic body and surface waves as well as W-phase are derived and accompanied by Bayesian uncertainty estimates from linearized inversion under positivity constraints. We show that robust and consistent features about the rupture kinematics appear when working within this probabilistic framework. Moreover, by using static dislocation theory, we translate the probabilistic slip distributions into seafloor deformation which we interpret as a tsunami initial condition. After considering uncertainty, our probabilistic seafloor deformation models obtained from different data types appear consistent with each other providing meaningful results. We also show that selecting just a single "representative" solution from the ensemble of initial conditions for tsunami propagation may lead to overestimating information content in the data. Our results suggest that rapid, probabilistic rupture models can play a significant role during emergency response by providing robust information about the extent of the disaster.
NASA Astrophysics Data System (ADS)
Valous, N. A.; Delgado, A.; Drakakis, K.; Sun, D.-W.
2014-02-01
The study of plant tissue parenchyma's intercellular air spaces contributes to the understanding of anatomy and physiology. This is challenging due to the difficulty of making direct measurements of the pore space and the complex mosaic of parenchymatous tissue. The architectural complexity of pore space has shown that single geometrical measurements are not sufficient for characterization. The inhomogeneity of the distribution depends not only on the percentage content of the phase, but also on how the phase fills the space. The lacunarity morphometric, as a multiscale measure, provides information about the distribution of gaps that corresponds to the degree of spatial organization in the parenchyma. Additionally, modern theories have suggested strategies where the focus has shifted from the study of averages and histograms to the study of patterns in data fluctuations. Detrended fluctuation analysis provides information on the correlation properties of the parenchyma at different spatial scales. The aim is to quantify, with the aid of the aforementioned metrics, the mesostructural changes that occur after one cycle of freezing and thawing in the void phase of pome fruit parenchymatous tissue, acquired with X-ray microcomputed tomography. Complex systems methods provide numerical indices and detailed insights regarding the freezing-induced modifications to the arrangement of cells and voids. These structural changes have the potential to lead to physiological disorders. The work can further stimulate interest in the analysis of internal plant tissue structures coupled with other physico-chemical processes or phenomena.
Porse, Sean L.; Wade, Sarah; Hovorka, Susan D.
2014-12-31
Risk communication literature suggests that for a number of reasons, the public may perceive a risk to be greater than indicated by its statistical probability. Public concern over risk can lead to significant and costly delays in project permitting and operations. Considering these theories, media coverage of CO₂-related well blowouts in 2013 gave rise to the questions: What is the risk of CO₂ well blowouts associated with CCUS through CO₂ EOR? What is the potential public perception of those risks? What information could be used to respond to public concern? To address these questions, this study aims to: 1) provide a framework for understanding the nature of onshore well blowouts, 2) quantify the incidence of such events for three specific geographic regions of Texas, 3) relate this data to CCUS and findings from other studies, and 4) explore the potential implications for public perception of this risk associated with CCUS projects. While quantifying answers to these questions proved to be challenging, the results from this study suggest that (1) the perceived risk of CO₂ well blowouts may exceed the statistical risk and (2) information that could be used to address this gap could be made more readily available to the greater benefit of industry and stakeholders who support the development of CCUS as an option for addressing anthropogenic CO₂ emissions. The study also suggests approaches to best conduct such data inquiries.
Adjoint-Based Uncertainty Quantification with MCNP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seifried, Jeffrey E.
2011-09-01
This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.
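The backbone of adjoint-based uncertainty propagation is the first-order "sandwich rule", var(R) ≈ S C S^T, with S the sensitivity profile of a response R to the nuclear data and C the data covariance. A generic numerical sketch with toy numbers (unrelated to MCNP6 or the LIFE blanket) is:

    import numpy as np

    # Toy sensitivity vector: relative change of the response per relative change
    # of each of 5 nuclear-data parameters (hypothetical values).
    S = np.array([[0.8, -0.3, 0.1, 0.05, -0.2]])

    # Toy relative covariance matrix of the nuclear data (symmetric positive semidefinite).
    C = np.diag([1e-4, 4e-4, 1e-4, 2.5e-5, 9e-4])

    rel_var = S @ C @ S.T                  # first-order "sandwich rule"
    rel_unc = float(np.sqrt(rel_var))      # relative standard deviation of the response
    print(f"relative uncertainty ~ {100 * rel_unc:.2f} %")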
Using phase for radar scatterer classification
NASA Astrophysics Data System (ADS)
Moore, Linda J.; Rigling, Brian D.; Penno, Robert P.; Zelnio, Edmund G.
2017-04-01
Traditional synthetic aperture radar (SAR) systems tend to discard phase information of formed complex radar imagery prior to automatic target recognition (ATR). This practice has historically been driven by available hardware storage, processing capabilities, and data link capacity. Recent advances in high performance computing (HPC) have enabled extremely dense storage and processing solutions. Therefore, previous motives for discarding radar phase information in ATR applications have been mitigated. First, we characterize the value of phase in one-dimensional (1-D) radar range profiles with respect to the ability to correctly estimate target features, which are currently employed in ATR algorithms for target discrimination. These features correspond to physical characteristics of targets through radio frequency (RF) scattering phenomenology. Physics-based electromagnetic scattering models developed from the geometrical theory of diffraction are utilized for the information analysis presented here. Information is quantified by the error of target parameter estimates from noisy radar signals when phase is either retained or discarded. Operating conditions (OCs) of signal-to-noise ratio (SNR) and bandwidth are considered. Second, we investigate the value of phase in 1-D radar returns with respect to the ability to correctly classify canonical targets. Classification performance is evaluated via logistic regression for three targets (sphere, plate, tophat). Phase information is demonstrated to improve radar target classification rates, particularly at low SNRs and low bandwidths.
Discovery of Empirical Components by Information Theory
2016-08-10
Report AFRL-AFOSR-VA-TR-2016-0289 (Amit Singer, Trustees of Princeton University; performance period 15 Feb 2013 to 14 Feb 2016). The methods developed draw not only from traditional linear algebra based numerical analysis or approximation theory, but also from information theory and graph theory.
Nonlinear system theory: Another look at dependence
Wu, Wei Biao
2005-01-01
Based on nonlinear system theory, we introduce previously undescribed dependence measures for stationary causal processes. Our physical and predictive dependence measures quantify the degree of dependence of outputs on inputs in physical systems. The proposed dependence measures provide a natural framework for a limit theory for stationary processes. In particular, under conditions with quite simple forms, we present limit theorems for partial sums, empirical processes, and kernel density estimates. The conditions are mild and easily verifiable because they are directly related to the data-generating mechanisms. PMID:16179388
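To make the coupling construction behind the physical dependence measure concrete, here is a small Monte Carlo sketch of our own (not from the paper): for a causal recursion driven by i.i.d. innovations, the measure delta_p(t) compares X_t with the coupled output obtained after replacing the innovation at time 0 by an independent copy.

    import numpy as np

    rng = np.random.default_rng(2)

    def run(eps, a=0.6):
        """Simple nonlinear causal recursion X_t = a*|X_{t-1}| + eps_t."""
        x = np.empty(len(eps))
        x[0] = eps[0]
        for t in range(1, len(eps)):
            x[t] = a * abs(x[t - 1]) + eps[t]
        return x

    T, n_rep, p = 60, 5000, 2
    delta = np.zeros(T)
    for _ in range(n_rep):
        eps = rng.normal(size=T)
        eps_star = eps.copy()
        eps_star[0] = rng.normal()          # replace the innovation at time 0
        x, x_star = run(eps), run(eps_star)
        delta += np.abs(x - x_star) ** p
    delta = (delta / n_rep) ** (1.0 / p)    # Monte Carlo estimate of delta_p(t)
    # delta decays geometrically in t, reflecting short-range dependence of this process.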
Spectral Entropies as Information-Theoretic Tools for Complex Network Comparison
NASA Astrophysics Data System (ADS)
De Domenico, Manlio; Biamonte, Jacob
2016-10-01
Any physical system can be viewed from the perspective that information is implicitly represented in its state. However, the quantification of this information when it comes to complex networks has remained largely elusive. In this work, we use techniques inspired by quantum statistical mechanics to define an entropy measure for complex networks and to develop a set of information-theoretic tools, based on network spectral properties, such as Rényi q entropy, generalized Kullback-Leibler and Jensen-Shannon divergences, the latter allowing us to define a natural distance measure between complex networks. First, we show that by minimizing the Kullback-Leibler divergence between an observed network and a parametric network model, inference of model parameter(s) by means of maximum-likelihood estimation can be achieved and model selection can be performed with appropriate information criteria. Second, we show that the information-theoretic metric quantifies the distance between pairs of networks and we can use it, for instance, to cluster the layers of a multilayer system. By applying this framework to networks corresponding to sites of the human microbiome, we perform hierarchical cluster analysis and recover with high accuracy existing community-based associations. Our results imply that spectral-based statistical inference in complex networks results in demonstrably superior performance as well as a conceptual backbone, filling a gap towards a network information theory.
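A minimal sketch of the spectral-entropy machinery in Python, using networkx and scipy; the inverse temperature beta, the example graphs and the square-root Jensen-Shannon distance are illustrative assumptions following the general von Neumann entropy construction, not the authors' exact definitions:

    import numpy as np
    import networkx as nx
    from scipy.linalg import expm

    def density_matrix(G, beta=1.0):
        """rho = exp(-beta L) / Tr exp(-beta L), with L the graph Laplacian."""
        L = nx.laplacian_matrix(G).toarray().astype(float)
        R = expm(-beta * L)
        return R / np.trace(R)

    def von_neumann_entropy(rho):
        lam = np.linalg.eigvalsh(rho)
        lam = lam[lam > 1e-12]
        return float(-np.sum(lam * np.log(lam)))

    def js_divergence(rho1, rho2):
        mix = 0.5 * (rho1 + rho2)
        return von_neumann_entropy(mix) - 0.5 * (von_neumann_entropy(rho1)
                                                 + von_neumann_entropy(rho2))

    G1 = nx.erdos_renyi_graph(50, 0.1, seed=0)
    G2 = nx.watts_strogatz_graph(50, 4, 0.1, seed=0)
    r1, r2 = density_matrix(G1), density_matrix(G2)
    distance = np.sqrt(js_divergence(r1, r2))   # square root of the JS divergence as a distance
    print(von_neumann_entropy(r1), distance)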
A Gaussian Approximation Approach for Value of Information Analysis.
Jalal, Hawre; Alarid-Escudero, Fernando
2018-02-01
Most decisions are associated with uncertainty. Value of information (VOI) analysis quantifies the opportunity loss associated with choosing a suboptimal intervention based on current imperfect information. VOI can inform the value of collecting additional information, resource allocation, research prioritization, and future research designs. However, in practice, VOI remains underused due to many conceptual and computational challenges associated with its application. Expected value of sample information (EVSI) is rooted in Bayesian statistical decision theory and measures the value of information from a finite sample. The past few years have witnessed a dramatic growth in computationally efficient methods to calculate EVSI, including metamodeling. However, little research has been done to simplify the experimental data collection step inherent to all EVSI computations, especially for correlated model parameters. This article proposes a general Gaussian approximation (GA) of the traditional Bayesian updating approach based on the original work by Raiffa and Schlaifer to compute EVSI. The proposed approach uses a single probabilistic sensitivity analysis (PSA) data set and involves 2 steps: 1) a linear metamodel step to compute the EVSI on the preposterior distributions and 2) a GA step to compute the preposterior distribution of the parameters of interest. The proposed approach is efficient and can be applied for a wide range of data collection designs involving multiple non-Gaussian parameters and unbalanced study designs. Our approach is particularly useful when the parameters of an economic evaluation are correlated or interact.
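To give a flavour of the metamodeling step on which the GA approach builds, here is a deliberately simplified sketch that estimates the expected value of partial perfect information (EVPPI) for a single parameter from one PSA data set via a linear metamodel; the toy net-benefit model is our own, and the Gaussian-approximation preposterior step of the article is omitted.

    import numpy as np

    rng = np.random.default_rng(3)
    n_psa = 10_000

    # Toy PSA sample: one uncertain parameter and the net benefit of two strategies.
    theta = rng.normal(0.0, 1.0, n_psa)
    nb = np.column_stack([
        1000 + 300 * theta + rng.normal(0, 200, n_psa),   # strategy A
        1100 + 100 * theta + rng.normal(0, 200, n_psa),   # strategy B
    ])

    # Linear metamodel: conditional expected net benefit given theta.
    X = np.column_stack([np.ones(n_psa), theta])
    fitted = X @ np.linalg.lstsq(X, nb, rcond=None)[0]

    evppi = fitted.max(axis=1).mean() - nb.mean(axis=0).max()
    print(f"EVPPI (linear metamodel estimate): {evppi:.1f}")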
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tiana-Alsina, J.; Torrent, M. C.; Masoller, C.
Low-frequency fluctuations (LFFs) represent a dynamical instability that occurs in semiconductor lasers when they are operated near the lasing threshold and subject to moderate optical feedback. LFFs consist of sudden power dropouts followed by gradual, stepwise recoveries. We analyze experimental time series of intensity dropouts and quantify the complexity of the underlying dynamics employing two tools from information theory, namely, Shannon's entropy and the Martin, Plastino, and Rosso statistical complexity measure. These measures are computed using a method based on ordinal patterns, by which the relative length and ordering of consecutive interdropout intervals (i.e., the time intervals between consecutive intensity dropouts) are analyzed, disregarding the precise timing of the dropouts and the absolute durations of the interdropout intervals. We show that this methodology is suitable for quantifying subtle characteristics of the LFFs, and in particular the transition to fully developed chaos that takes place when the laser's pump current is increased. Our method shows that the statistical complexity of the laser does not increase continuously with the pump current, but levels off before reaching the coherence collapse regime. This behavior coincides with that of the first- and second-order correlations of the interdropout intervals, suggesting that these correlations, and not the chaotic behavior, are what determine the level of complexity of the laser's dynamics. These results hold for two different dynamical regimes, namely, sustained LFFs and coexistence between LFFs and steady-state emission.
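As a concrete illustration of the ordinal-pattern machinery (Bandt-Pompe symbolization, normalized Shannon entropy and the Jensen-Shannon-based MPR statistical complexity), the following generic Python fragment operates on a synthetic series and is not the authors' code; the embedding dimension is an arbitrary choice:

    import numpy as np
    from itertools import permutations
    from math import factorial

    def ordinal_distribution(x, d=4):
        """Probability of each ordinal pattern of embedding dimension d."""
        patterns = {p: 0 for p in permutations(range(d))}
        for i in range(len(x) - d + 1):
            patterns[tuple(np.argsort(x[i:i + d]))] += 1
        p = np.array(list(patterns.values()), dtype=float)
        return p / p.sum()

    def shannon(p):
        q = p[p > 0]
        return -np.sum(q * np.log(q))

    def mpr_complexity(p):
        n = len(p)
        u = np.full(n, 1.0 / n)                       # uniform reference distribution
        H = shannon(p) / np.log(n)                    # normalized permutation entropy
        js = shannon(0.5 * (p + u)) - 0.5 * (shannon(p) + shannon(u))
        # Normalize the JS divergence by its maximum (delta distribution vs uniform).
        delta = np.zeros(n); delta[0] = 1.0
        js_max = shannon(0.5 * (delta + u)) - 0.5 * shannon(u)
        return (js / js_max) * H                      # C = Q * H

    rng = np.random.default_rng(4)
    x = rng.normal(size=5000)                         # white noise: high entropy, low complexity
    p = ordinal_distribution(x, d=4)
    print(shannon(p) / np.log(factorial(4)), mpr_complexity(p))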
Dimitriadis, Stavros I.; Salis, Christos; Tarnanas, Ioannis; Linden, David E.
2017-01-01
The human brain is a large-scale system of functionally connected brain regions. This system can be modeled as a network, or graph, by dividing the brain into a set of regions, or "nodes," and quantifying the strength of the connections between nodes, or "edges," as the temporal correlation in their patterns of activity. Network analysis, a part of graph theory, provides a set of summary statistics that can be used to describe complex brain networks in a meaningful way. The large-scale organization of the brain has features of complex networks that can be quantified using network measures from graph theory. The adaptation of both bivariate (mutual information) and multivariate (Granger causality) connectivity estimators to quantify the synchronization between multichannel recordings yields a fully connected, weighted, (a)symmetric functional connectivity graph (FCG), representing the associations among all brain areas. The aforementioned procedure leads to an extremely dense network of tens up to a few hundreds of weights. Therefore, this FCG must be filtered so that the "true" connectivity pattern can emerge. Here, we compared a large number of well-known topological thresholding techniques with the novel proposed data-driven scheme based on orthogonal minimal spanning trees (OMSTs). OMSTs filter brain connectivity networks by optimizing the trade-off between the global efficiency of the network and the cost of preserving its wiring. We demonstrated the proposed method in a large EEG database (N = 101 subjects) with eyes-open (EO) and eyes-closed (EC) tasks by adopting a time-varying approach, with the main goal of extracting features that can fully distinguish each subject from the rest of the set. Additionally, the reliability of the proposed scheme was estimated in a second case study of fMRI resting-state activity with multiple scans. Our results clearly demonstrated that the proposed thresholding scheme outperformed a large list of thresholding schemes in terms of the recognition accuracy of each subject compared to the rest of the cohort (EEG). Additionally, the proposed topological filtering scheme improved the reliability of the network metrics derived from the static fMRI networks. Overall, the proposed algorithm could be used across neuroimaging and multimodal studies as a common, computationally efficient, standardized tool for the many neuroscientists and physicists working on a wide range of projects. PMID:28491032
NASA Astrophysics Data System (ADS)
Qi, D.; Majda, A.
2017-12-01
A low-dimensional reduced-order statistical closure model is developed for quantifying the uncertainty in statistical sensitivity and intermittency in the principal model directions with largest variability in high-dimensional turbulent systems and turbulent transport models. Imperfect model sensitivity is improved through a recent mathematical strategy for calibrating model errors in a training phase, where information theory and linear statistical response theory are combined in a systematic fashion to achieve optimal model performance. The reduced-order method builds on a self-consistent mathematical framework for general systems with quadratic nonlinearity, in which crucial high-order statistics are approximated by a systematic model calibration procedure. Model efficiency is improved through additional damping and noise corrections that replace the expensive energy-conserving nonlinear interactions. Model errors due to the imperfect nonlinear approximation are corrected by tuning the model parameters using linear response theory with an information metric in a training phase before prediction. A statistical energy principle is adopted to introduce a global scaling factor that characterizes the higher-order moments in a consistent way and improves model sensitivity. Stringent test models of barotropic and baroclinic turbulence are used to demonstrate the feasibility of the reduced-order methods. Principal statistical responses in mean and variance can be captured by the reduced-order models with accuracy and efficiency. In addition, the reduced-order models are used to capture the crucial passive tracer field that is advected by the baroclinic turbulent flow. It is demonstrated that crucial statistical quantities, such as the tracer spectrum and the fat tails in the tracer probability density functions at the most important large scales, can be captured efficiently and accurately using the reduced-order tracer model in various dynamical regimes of the flow field with distinct statistical structures.
Combining complex networks and data mining: Why and how
NASA Astrophysics Data System (ADS)
Zanin, M.; Papo, D.; Sousa, P. A.; Menasalvas, E.; Nicchi, A.; Kubik, E.; Boccaletti, S.
2016-05-01
The increasing power of computer technology does not dispense with the need to extract meaningful information out of data sets of ever growing size, and indeed typically exacerbates the complexity of this task. To tackle this general problem, two methods have emerged, at chronologically different times, that are now commonly used in the scientific community: data mining and complex network theory. Not only do complex network analysis and data mining share the same general goal, that of extracting information from complex systems to ultimately create a new compact quantifiable representation, but they also often address similar problems too. In the face of that, a surprisingly low number of researchers turn out to resort to both methodologies. One may then be tempted to conclude that these two fields are either largely redundant or totally antithetic. The starting point of this review is that this state of affairs should be put down to contingent rather than conceptual differences, and that these two fields can in fact advantageously be used in a synergistic manner. An overview of both fields is first provided, some fundamental concepts of which are illustrated. A variety of contexts in which complex network theory and data mining have been used in a synergistic manner are then presented. Contexts in which the appropriate integration of complex network metrics can lead to improved classification rates with respect to classical data mining algorithms and, conversely, contexts in which data mining can be used to tackle important issues in complex network theory applications are illustrated. Finally, ways to achieve a tighter integration between complex networks and data mining, and open lines of research are discussed.
Severtson, Dolores J; Baumann, Linda C; Brown, Roger L
2006-04-01
The common sense model (CSM) shows how people process information to construct representations, or mental models, that guide responses to health threats. We applied the CSM to understand how people responded to information about arsenic-contaminated well water. Constructs included external information (arsenic level and information use), experience (perceived water quality and arsenic-related health effects), representations, safety judgments, opinions about policies to mitigate environmental arsenic, and protective behavior. Of 649 surveys mailed to private well users with arsenic levels exceeding the maximum contaminant level, 545 (84%) were analyzed. Structural equation modeling quantified CSM relationships. Both external information and experience had substantial effects on behavior. Participants who identified a water problem were more likely to reduce exposure to arsenic. However, about 60% perceived good water quality and 60% safe water. Participants with higher arsenic levels selected higher personal safety thresholds and 20% reported a lower arsenic level than indicated by their well test. These beliefs would support judgments of safe water. A variety of psychological and contextual factors may explain judgments of safe water when information suggested otherwise. Information use had an indirect effect on policy beliefs through understanding environmental causes of arsenic. People need concrete information about environmental risk at both personal and environmental-systems levels to promote a comprehensive understanding and response. The CSM explained responses to arsenic information and may have application to other environmental risks.
NASA Astrophysics Data System (ADS)
Song, Y.; Yao, Q.; Wang, G.; Yang, X.; Mayes, M. A.
2017-12-01
Increasing evidence indicates that soil organic matter (SOM) decomposition and stabilization is a continuum process controlled by both microbial functions and their interactions with minerals (known as the microbial efficiency-matrix stabilization (MEMS) theory). Our metagenomics analysis of soil samples from both P-deficit and P-fertilization sites in Panama has demonstrated that community-level enzyme functions can adapt to maximize the acquisition of limiting nutrients and minimize the energy demand for foraging (known as optimal foraging theory). This optimization scheme can mitigate the imbalance in C/P ratio between the soil substrate and the microbial community and relieve the P limitation on microbial carbon use efficiency over time. Dynamic allocation of multiple enzyme groups and their interaction with microbial/substrate stoichiometry has rarely been considered in biogeochemical models because of the difficulties in identifying microbial functional groups and quantifying the change in enzyme expression in response to soil nutrient availability. This study aims to represent the omics-informed optimal foraging theory in the Continuum Microbial ENzyme Decomposition model (CoMEND), which was developed to represent the continuum SOM decomposition process following the MEMS theory. The SOM pools in the model are classified based on soil chemical composition (i.e., carbohydrates, lignin, N-rich SOM and P-rich SOM) and the degree of SOM depolymerization. The enzyme functional groups for decomposition of each SOM pool and for N/P mineralization are identified by the relative composition of gene copy numbers. The responses of microbial activities and SOM decomposition to nutrient availability are simulated by optimizing the allocation of enzyme functional groups following the optimal foraging theory. The modeled dynamic enzyme allocation in response to P availability is evaluated against the metagenomics data measured from P-addition and P-deficit soil samples at the Panama sites. The implementation of dynamic enzyme allocation in response to nutrient availability in the CoMEND model enables us to capture the varying microbial C/P ratio and soil carbon dynamics in response to shifting nutrient constraints over time in tropical soils.
Wu, Wenjie; Wu, Zemin; Rong, Chunying; Lu, Tian; Huang, Ying; Liu, Shubin
2015-07-23
The electrophilic aromatic substitution for nitration, halogenation, sulfonation, and acylation is a vastly important category of chemical transformation. Its reactivity and regioselectivity are predominantly determined by the nucleophilicity of the carbon atoms on the aromatic ring, which in turn is strongly influenced by the group already attached to the aromatic ring. In this work, taking advantage of recent developments in quantifying nucleophilicity (electrophilicity) with descriptors from the information-theoretic approach in density functional reactivity theory, we examine the reactivity properties of this reaction system from three perspectives. These include scaling patterns of information-theoretic quantities such as Shannon entropy, Fisher information, Ghosh-Berkowitz-Parr entropy and information gain at both molecular and atomic levels, quantitative predictions of the barrier height with both Hirshfeld charge and information gain, and energetic decomposition analyses of the barrier height for the reactions. To that end, we focused in this work on the identity reaction of the monosubstituted-benzene molecule reacting with hydrogen fluoride using boron trifluoride as the catalyst in the gas phase. We also considered 19 substituting groups, 9 of which are ortho/para directing and the other 9 meta directing, besides the case of R = -H. Scaling patterns for these information-theoretic quantities similar to those found for stable species elsewhere were disclosed for these reaction systems. We also unveiled novel scaling patterns for information gain at the atomic level. The barrier height of the reactions can reliably be predicted by using both the Hirshfeld charge and the information gain at the regioselective carbon atom. The ensuing energy decomposition analysis yields an unambiguous picture of the origin of the barrier height, where we show that it is the electrostatic interaction that plays the dominant role, while the roles played by exchange-correlation and steric effects are minor but indispensable. Results obtained in this work should shed new light on the factors governing the reactivity of this class of reactions and assist ongoing efforts to design new and more efficient catalysts for such transformations.
Biomass accessibility analysis using electron tomography
Hinkle, Jacob D.; Ciesielski, Peter N.; Gruchalla, Kenny; ...
2015-12-25
Substrate accessibility to catalysts has been a dominant theme in theories of biomass deconstruction. Furthermore, current methods of quantifying accessibility do not elucidate mechanisms for increased accessibility due to changes in microstructure following pretreatment.
NASA Astrophysics Data System (ADS)
Wang, Duan; Podobnik, Boris; Horvatić, Davor; Stanley, H. Eugene
2011-04-01
We propose a modified time lag random matrix theory in order to study time-lag cross correlations in multiple time series. We apply the method to 48 world indices, one for each of 48 different countries. We find long-range power-law cross correlations in the absolute values of returns that quantify risk, and find that they decay much more slowly than cross correlations between the returns. The magnitude of the cross correlations constitutes “bad news” for international investment managers who may believe that risk is reduced by diversifying across countries. We find that when a market shock is transmitted around the world, the risk decays very slowly. We explain these time-lag cross correlations by introducing a global factor model (GFM) in which all index returns fluctuate in response to a single global factor. For each pair of individual time series of returns, the cross correlations between returns (or magnitudes) can be modeled with the autocorrelations of the global factor returns (or magnitudes). We estimate the global factor using principal component analysis, which minimizes the variance of the residuals after removing the global trend. Using random matrix theory, a significant fraction of the world index cross correlations can be explained by the global factor, which supports the utility of the GFM. We demonstrate applications of the GFM in forecasting risks at the world level, and in finding uncorrelated individual indices. We find ten indices that are practically uncorrelated with the global factor and with the remainder of the world indices, which is relevant information for world managers in reducing their portfolio risk. Finally, we argue that this general method can be applied to a wide range of phenomena in which time series are measured, ranging from seismology and physiology to atmospheric geophysics.
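A compact sketch of the global-factor construction (first principal component of standardized returns, followed by per-index regressions on the estimated factor) applied to synthetic data, under our own simplifying assumptions:

    import numpy as np

    rng = np.random.default_rng(5)
    T, n_idx = 2000, 48

    # Synthetic daily returns: one common global factor plus idiosyncratic noise.
    global_factor = rng.standard_t(df=4, size=T)
    loadings = rng.uniform(0.3, 1.0, n_idx)
    returns = np.outer(global_factor, loadings) + rng.normal(0, 1.0, (T, n_idx))

    # Estimate the global factor as the first principal component of standardized returns.
    Z = (returns - returns.mean(0)) / returns.std(0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    factor_hat = Z @ Vt[0]                       # scores on the first principal component

    # Regress each index on the estimated factor and inspect residual cross correlations.
    beta = np.array([np.polyfit(factor_hat, Z[:, i], 1)[0] for i in range(n_idx)])
    residuals = Z - np.outer(factor_hat, beta)
    resid_corr = np.corrcoef(residuals.T)
    off_diag = resid_corr[~np.eye(n_idx, dtype=bool)]
    print(f"mean |residual cross correlation| = {np.abs(off_diag).mean():.3f}")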
NASA Astrophysics Data System (ADS)
Allegra, Michele; Giorda, Paolo; Lloyd, Seth
2016-04-01
Assessing the role of interference in natural and artificial quantum dynamical processes is a crucial task in quantum information theory. To this end, an appropriate formalism is provided by the decoherent histories framework. While this approach has been deeply explored from different theoretical perspectives, it still lacks a comprehensive set of tools able to concisely quantify the amount of coherence developed by a given dynamics. In this paper, we introduce and test different measures of the (average) coherence present in dissipative (Markovian) quantum evolutions, at various time scales and for different levels of environmentally induced decoherence. In order to show the effectiveness of the introduced tools, we apply them to a paradigmatic quantum process where the role of coherence is being hotly debated: exciton transport in photosynthetic complexes. To pinpoint the essential features that may determine the performance of the transport, we focus on a relevant trimeric subunit of the Fenna-Matthews-Olson complex and we use a simplified (Haken-Strobl) model for the system-bath interaction. Our analysis illustrates how the high efficiency of environmentally assisted transport can be traced back to a quantum recoil-avoiding effect on the exciton dynamics that preserves and sustains the benefits of the initial fast quantum delocalization of the exciton over the network. Indeed, for intermediate levels of decoherence, the bath is seen to selectively kill the negative interference between different exciton pathways, while retaining the initial positive one. The concepts and tools developed here show how the decoherent histories approach can be used to quantify the relation between coherence and efficiency in quantum dynamical processes.
NASA Astrophysics Data System (ADS)
Rohmer, J.; Dewez, D.
2014-09-01
Over the last decade, many cliff erosion studies have focused on frequency-size statistics using inventories of sea cliff retreat sizes. By comparison, only a few paid attention to quantifying the spatial and temporal organisation of erosion scars over a cliff face. Yet, this spatial organisation carries essential information about the external processes and the environmental conditions that promote or initiate sea-cliff instabilities. In this article, we use summary statistics of spatial point process theory as a tool to examine the spatial and temporal pattern of a rockfall inventory recorded with repeated terrestrial laser scanning surveys at the chalk coastal cliff site of Mesnil-Val (Normandy, France). Results show that: (1) the spatial density of erosion scars is specifically conditioned alongshore by the distance to an engineered concrete groin, with an exponential-like decreasing trend, and vertically focused both at wave breaker height and on strong lithological contrasts; (2) small erosion scars (10⁻³ to 10⁻² m³) aggregate in clusters within a radius of 5 to 10 m, which suggests some sort of attraction or focused causative process, and disperse above this critical distance; (3) on the contrary, larger erosion scars (10⁻² to 10¹ m³) tend to disperse above a radius of 1 to 5 m, possibly due to the spreading of successive failures across the cliff face; (4) large scars occur significantly, albeit moderately, where previous large rockfalls occurred during the preceding winter; (5) this temporal trend is not apparent for small events. In conclusion, this study shows, with a worked example, how spatial point process summary statistics are a tool to test and quantify the significance of geomorphological observation organisation.
NASA Astrophysics Data System (ADS)
Rohmer, J.; Dewez, T.
2015-02-01
Over the last decade, many cliff erosion studies have focused on frequency-size statistics using inventories of sea cliff retreat sizes. By comparison, only a few paid attention to quantifying the spatial and temporal organisation of erosion scars over a cliff face. Yet, this spatial organisation carries essential information about the external processes and the environmental conditions that promote or initiate sea-cliff instabilities. In this article, we use summary statistics of spatial point process theory as a tool to examine the spatial and temporal pattern of a rockfall inventory recorded with repeated terrestrial laser scanning surveys at the chalk coastal cliff site of Mesnil-Val (Normandy, France). Results show that: (1) the spatial density of erosion scars is specifically conditioned alongshore by the distance to an engineered concrete groyne, with an exponential-like decreasing trend, and vertically focused both at wave breaker height and on strong lithological contrasts; (2) small erosion scars (10⁻³ to 10⁻² m³) aggregate in clusters within a radius of 5 to 10 m, which suggests some sort of attraction or focused causative process, and disperse above this critical distance; (3) on the contrary, larger erosion scars (10⁻² to 10¹ m³) tend to disperse above a radius of 1 to 5 m, possibly due to the spreading of successive failures across the cliff face; (4) large scars occur significantly, albeit moderately, where previous large rockfalls occurred during the preceding winter; (5) this temporal trend is not apparent for small events. In conclusion, this study shows, with a worked example, how spatial point process summary statistics are a tool to test and quantify the significance of geomorphological observation organisation.
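For readers who want to experiment with point-process summary statistics of the kind used above, a bare-bones Ripley's K estimate for points in a rectangular window could look like the sketch below; it ignores the edge corrections a real analysis would include, and the data are synthetic:

    import numpy as np

    def ripley_k(points, radii, area):
        """Naive Ripley's K (no edge correction) for 2-D points in a window of given area."""
        n = len(points)
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)                 # exclude self-pairs
        lam = n / area                              # intensity (points per unit area)
        return np.array([(d < r).sum() / (lam * n) for r in radii])

    rng = np.random.default_rng(6)
    width, height = 100.0, 20.0                     # e.g. metres along and up a cliff face
    pts = rng.uniform([0, 0], [width, height], size=(300, 2))
    radii = np.linspace(1, 10, 10)
    K = ripley_k(pts, radii, width * height)
    # Under complete spatial randomness K(r) ~ pi r^2; values above indicate clustering,
    # values below indicate dispersion.
    print(np.round(K / (np.pi * radii ** 2), 2))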
Analyzing the management and disturbance in European forest based on self-thinning theory
NASA Astrophysics Data System (ADS)
Yan, Y.; Gielen, B.; Schelhaas, M.; Mohren, F.; Luyssaert, S.; Janssens, I. A.
2012-04-01
There is increasing awareness that natural and anthropogenic disturbance in forests affects the exchange of CO₂, H₂O and energy between the ecosystem and the atmosphere. Consequently, quantification of land use and disturbance intensity is one of the next steps needed to improve our understanding of the carbon cycle, its interactions with the atmosphere and its main drivers at the local as well as the global level. The conventional NPP-based approaches to quantify the intensity of land management are limited because they lack a sound ecological basis. Here we apply a new way of characterising the degree of management and disturbance in forests using self-thinning theory and observations of diameter at breast height (DBH) and stand density. We used plot-level information on dominant tree species, diameter at breast height, stand density and soil type from the French national forest inventory from 2005 to 2010. Stand density and diameter at breast height were used to parameterize the intercept of the self-thinning relationship and combined with the theoretical slope to obtain an upper boundary for stand productivity given its density. Subsequently, we tested the sensitivity of the self-thinning relationship to tree species, soil type, climate and other environmental characteristics. We could find statistical differences in the self-thinning relationship between species and soil types, mainly due to the large uncertainty of the parameter estimates. Deviation from the theoretical self-thinning line, defined as DBH = αN^(-3/4), was used as a proxy for disturbance, allowing spatially explicit maps of forest disturbance over France to be produced. The same framework was used to quantify the density-DBH trajectory of even-aged stand management of beech and oak over France. These trajectories will be used as a driver of forest management in the land surface model ORCHIDEE.
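A minimal numerical sketch of the deviation-from-self-thinning idea on synthetic inventory data; fitting the boundary intercept α through an upper quantile, and the toy numbers, are our own assumptions rather than the authors' procedure:

    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic inventory plots: stand density N (stems/ha) and mean DBH (cm), toy values.
    N = rng.uniform(200, 3000, 500)
    dbh = 5000.0 * N ** (-0.75) * np.exp(rng.normal(-0.3, 0.25, 500))   # stands below the boundary

    # Self-thinning relation from the abstract: DBH = alpha * N**(-3/4).
    # Fit the intercept with the theoretical slope fixed, taking an upper quantile
    # so the line acts as an upper boundary of DBH for a given stand density.
    log_alpha = np.quantile(np.log(dbh) + 0.75 * np.log(N), 0.95)

    # Deviation from the boundary as a proxy for management / disturbance intensity.
    deviation = log_alpha - 0.75 * np.log(N) - np.log(dbh)
    print(f"median log-deviation from the self-thinning boundary: {np.median(deviation):.2f}")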
Redundant information encoding in primary motor cortex during natural and prosthetic motor control.
So, Kelvin; Ganguly, Karunesh; Jimenez, Jessica; Gastpar, Michael C; Carmena, Jose M
2012-06-01
Redundant encoding of information facilitates reliable distributed information processing. To explore this hypothesis in the motor system, we applied concepts from information theory to quantify the redundancy of movement-related information encoded in the macaque primary motor cortex (M1) during natural and neuroprosthetic control. Two macaque monkeys were trained to perform a delay center-out reaching task controlling a computer cursor under natural arm movement (manual control, 'MC'), and using a brain-machine interface (BMI) via volitional control of neural ensemble activity (brain control, 'BC'). During MC, we found neurons in contralateral M1 to contain higher and more redundant information about target direction than ipsilateral M1 neurons, consistent with the laterality of movement control. During BC, we found that the M1 neurons directly incorporated into the BMI ('direct' neurons) contained the highest and most redundant target information compared to neurons that were not incorporated into the BMI ('indirect' neurons). This effect was even more significant when comparing to M1 neurons of the opposite hemisphere. Interestingly, when we retrained the BMI to use ipsilateral M1 activity, we found that these neurons were more redundant and contained higher information than contralateral M1 neurons, even though ensembles from this hemisphere were previously less redundant during natural arm movement. These results indicate that ensembles most associated to movement contain highest redundancy and information encoding, which suggests a role for redundancy in proficient natural and prosthetic motor control.
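To illustrate how redundancy between two neurons about target direction can be quantified, here is a generic plug-in (histogram) mutual-information sketch on synthetic spike counts; it is not the authors' estimator and ignores sampling-bias corrections:

    import numpy as np

    def mutual_information(counts):
        """Plug-in mutual information (bits) from a joint count table of two discrete variables."""
        p = counts / counts.sum()
        px = p.sum(axis=1, keepdims=True)
        py = p.sum(axis=0, keepdims=True)
        nz = p > 0
        return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

    def joint(a, b):
        table = np.zeros((a.max() + 1, b.max() + 1))
        np.add.at(table, (a, b), 1)
        return table

    rng = np.random.default_rng(8)
    n_trials, n_dirs = 5000, 8
    target = rng.integers(0, n_dirs, n_trials)                  # reach direction per trial
    rate = 2.0 + np.cos(2 * np.pi * target / n_dirs)            # shared directional tuning
    n1 = rng.poisson(rate)                                      # spike counts, neuron 1
    n2 = rng.poisson(rate)                                      # spike counts, neuron 2 (same tuning)

    I1 = mutual_information(joint(n1, target))
    I2 = mutual_information(joint(n2, target))
    # Joint information of the pair about the target: encode the pair as a single symbol.
    pair = n1 * (n2.max() + 1) + n2
    I12 = mutual_information(joint(pair, target))
    redundancy = I1 + I2 - I12          # > 0 means the two neurons carry overlapping information
    print(I1, I2, I12, redundancy)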
Brodski-Guerniero, Alla; Naumer, Marcus J; Moliadze, Vera; Chan, Jason; Althen, Heike; Ferreira-Santos, Fernando; Lizier, Joseph T; Schlitt, Sabine; Kitzerow, Janina; Schütz, Magdalena; Langer, Anne; Kaiser, Jochen; Freitag, Christine M; Wibral, Michael
2018-04-04
The neurophysiological underpinnings of the nonsocial symptoms of autism spectrum disorder (ASD) which include sensory and perceptual atypicalities remain poorly understood. Well-known accounts of less dominant top-down influences and more dominant bottom-up processes compete to explain these characteristics. These accounts have been recently embedded in the popular framework of predictive coding theory. To differentiate between competing accounts, we studied altered information dynamics in ASD by quantifying predictable information in neural signals. Predictable information in neural signals measures the amount of stored information that is used for the next time step of a neural process. Thus, predictable information limits the (prior) information which might be available for other brain areas, for example, to build predictions for upcoming sensory information. We studied predictable information in neural signals based on resting-state magnetoencephalography (MEG) recordings of 19 ASD patients and 19 neurotypical controls aged between 14 and 27 years. Using whole-brain beamformer source analysis, we found reduced predictable information in ASD patients across the whole brain, but in particular in posterior regions of the default mode network. In these regions, epoch-by-epoch predictable information was positively correlated with source power in the alpha and beta frequency range as well as autocorrelation decay time. Predictable information in precuneus and cerebellum was negatively associated with nonsocial symptom severity, indicating a relevance of the analysis of predictable information for clinical research in ASD. Our findings are compatible with the assumption that use or precision of prior knowledge is reduced in ASD patients. © 2018 Wiley Periodicals, Inc.
Optimal Signal Processing in Small Stochastic Biochemical Networks
Ziv, Etay; Nemenman, Ilya; Wiggins, Chris H.
2007-01-01
We quantify the influence of the topology of a transcriptional regulatory network on its ability to process environmental signals. By posing the problem in terms of information theory, we do this without specifying the function performed by the network. Specifically, we study the maximum mutual information between the input (chemical) signal and the output (genetic) response attainable by the network in the context of an analytic model of particle number fluctuations. We perform this analysis for all biochemical circuits, including various feedback loops, that can be built out of 3 chemical species, each under the control of one regulator. We find that a generic network, constrained to low molecule numbers and reasonable response times, can transduce more information than a simple binary switch and, in fact, manages to achieve close to the optimal information transmission fidelity. These high-information solutions are robust to tenfold changes in most of the networks' biochemical parameters; moreover they are easier to achieve in networks containing cycles with an odd number of negative regulators (overall negative feedback) due to their decreased molecular noise (a result which we derive analytically). Finally, we demonstrate that a single circuit can support multiple high-information solutions. These findings suggest a potential resolution of the “cross-talk” phenomenon as well as the previously unexplained observation that transcription factors that undergo proteolysis are more likely to be auto-repressive. PMID:17957259
Wang, Jianzhou; Niu, Tong; Wang, Rui
2017-03-02
Worsening atmospheric pollution increases the need for air quality early warning systems (EWSs). Although numerous researchers have investigated EWSs in both theory and practice, studies concerning the quantification of uncertain information and comprehensive evaluation are still lacking, which impedes further development in the area. In this paper, a comprehensive warning system is proposed that consists of two indispensable modules, namely effective forecasting and scientific evaluation. For the forecasting module, a novel hybrid model combining data preprocessing and numerical optimization is developed to provide effective forecasts of air pollutant concentrations. In particular, to further enhance the accuracy and robustness of the warning system, interval forecasting is implemented to quantify the uncertainties of the forecasts, which, together with point forecasting, can provide significant risk signals for decision-makers. For the evaluation module, a cloud model based on probability and fuzzy set theory is developed to perform comprehensive evaluations of air quality, realizing the transformation between qualitative concepts and quantitative data. To verify the effectiveness and efficiency of the warning system, extensive simulations based on air pollutant data from Dalian, China, were carried out, illustrating that the warning system is not only high-performing but also widely applicable.
Yu, Min; Doak, Peter; Tamblyn, Isaac; Neaton, Jeffrey B
2013-05-16
Functional hybrid interfaces between organic molecules and semiconductors are central to many emerging information and solar energy conversion technologies. Here we demonstrate a general, empirical parameter-free approach for computing and understanding frontier orbital energies - or redox levels - of a broad class of covalently bonded organic-semiconductor surfaces. We develop this framework in the context of specific density functional theory (DFT) and many-body perturbation theory calculations, within the GW approximation, of an exemplar interface, thiophene-functionalized silicon (111). Through detailed calculations taking into account structural and binding energetics of mixed-monolayers consisting of both covalently attached thiophene and hydrogen, chlorine, methyl, and other passivating groups, we quantify the impact of coverage, nonlocal polarization, and interface dipole effects on the alignment of the thiophene frontier orbital energies with the silicon band edges. For thiophene adsorbate frontier orbital energies, we observe significant corrections to standard DFT (∼1 eV), including large nonlocal electrostatic polarization effects (∼1.6 eV). Importantly, both results can be rationalized from knowledge of the electronic structure of the isolated thiophene molecule and silicon substrate systems. Silicon band edge energies are predicted to vary by more than 2.5 eV, while molecular orbital energies stay similar, with the different functional groups studied, suggesting the prospect of tuning energy alignment over a wide range for photoelectrochemistry and other applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liangzhe Zhang; Anthony D. Rollett; Timothy Bartel
2012-02-01
A calibrated Monte Carlo (cMC) approach, which quantifies grain boundary kinetics within a generic setting, is presented. The influence of misorientation is captured by adding a scaling coefficient in the spin flipping probability equation, while the contribution of different driving forces is weighted using a partition function. The calibration process relies on the established parametric links between Monte Carlo (MC) and sharp-interface models. The cMC algorithm quantifies microstructural evolution under complex thermomechanical environments and remedies some of the difficulties associated with conventional MC models. After validation, the cMC approach is applied to quantify the texture development of polycrystalline materials with influences of misorientation and inhomogeneous bulk energy across grain boundaries. The results are in good agreement with theory and experiments.
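A toy two-dimensional Potts-type Monte Carlo sweep with a misorientation-dependent scaling coefficient on the spin-flip probability, in the spirit of (but not identical to) the cMC approach described above; the mobility function and all parameters are schematic assumptions:

    import numpy as np

    rng = np.random.default_rng(9)
    L, Q, kT = 64, 16, 0.5
    spins = rng.integers(0, Q, size=(L, L))          # grain orientation IDs on a square lattice

    def mobility(q1, q2):
        """Hypothetical misorientation-dependent scaling of the flip probability."""
        d = abs(q1 - q2)
        theta = min(d, Q - d) / (Q / 2)              # pseudo misorientation in [0, 1]
        return theta ** 0.5                          # low-angle boundaries move more slowly

    def site_energy(s, i, j, q):
        nbrs = [s[(i+1) % L, j], s[(i-1) % L, j], s[i, (j+1) % L], s[i, (j-1) % L]]
        return sum(1 for n in nbrs if n != q)        # unlike-neighbour (boundary) energy

    def mc_sweep(s):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            old = s[i, j]
            new = rng.integers(0, Q)
            dE = site_energy(s, i, j, new) - site_energy(s, i, j, old)
            p = min(1.0, np.exp(-dE / kT)) * mobility(old, new)   # scaled flip probability
            if rng.random() < p:
                s[i, j] = new
        return s

    for _ in range(10):                              # a few sweeps of grain coarsening
        spins = mc_sweep(spins)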
ERIC Educational Resources Information Center
MACCIA, ELIZABETH S.; AND OTHERS
An annotated bibliography of 20 items and a discussion of its significance were presented to describe current utilization of subject theories in the construction of an educational theory. Also, a theory model was used to demonstrate construction of a scientific educational theory. The theory model incorporated set theory (S), information theory…
NASA Technical Reports Server (NTRS)
1976-01-01
The feasibility of systematically quantifying the economic benefits of secondary applications of NASA related R and D was investigated. Based upon the tools of economic theory and econometric analysis, a set of empirical methods was developed and selected applications were made to demonstrate their workability. Analyses of the technological developments related to integrated circuits, cryogenic insulation, gas turbines, and computer programs for structural analysis indicated substantial secondary benefits accruing from NASA's R and D in these areas.
Quantifying the benefits to the national economy from secondary applications of NASA technology
NASA Technical Reports Server (NTRS)
1976-01-01
The feasibility of systematically quantifying the economic benefits of secondary applications of NASA related R and D is investigated. Based upon the tools of economic theory and econometric analysis, it develops a set of empirical methods and makes selected applications to demonstrate their workability. Analyses of the technological developments related to integrated circuits, cryogenic insulation, gas turbines, and computer programs for structural analysis indicated substantial secondary benefits accruing from NASA's R and D in these areas.
NASA Astrophysics Data System (ADS)
Desai, Parth Rakesh; Sinha, Shayandev; Das, Siddhartha
2018-03-01
We employ molecular dynamics (MD) simulations and develop scaling theories to quantify the equilibrium behavior of polyelectrolyte (PE) brush bilayers (BBLs) in the weakly interpenetrated regime, which is characterized by d0
Negations in syllogistic reasoning: evidence for a heuristic-analytic conflict.
Stupple, Edward J N; Waterhouse, Eleanor F
2009-08-01
An experiment utilizing response time measures was conducted to test dominant processing strategies in syllogistic reasoning with the expanded quantifier set proposed by Roberts (2005). Through adding negations to existing quantifiers it is possible to change problem surface features without altering logical validity. Biases based on surface features such as atmosphere, matching, and the probability heuristics model (PHM; Chater & Oaksford, 1999; Wetherick & Gilhooly, 1995) would not be expected to show variance in response latencies, but participant responses should be highly sensitive to changes in the surface features of the quantifiers. In contrast, according to analytic accounts such as mental models theory and mental logic (e.g., Johnson-Laird & Byrne, 1991; Rips, 1994) participants should exhibit increased response times for negated premises, but not be overly impacted upon by the surface features of the conclusion. Data indicated that the dominant response strategy was based on a matching heuristic, but also provided evidence of a resource-demanding analytic procedure for dealing with double negatives. The authors propose that dual-process theories offer a stronger account of these data whereby participants employ competing heuristic and analytic strategies and fall back on a heuristic response when analytic processing fails.
Demystifying theory and its use in improvement
Davidoff, Frank; Dixon-Woods, Mary; Leviton, Laura; Michie, Susan
2015-01-01
The role and value of theory in improvement work in healthcare has been seriously underrecognised. We join others in proposing that more informed use of theory can strengthen improvement programmes and facilitate the evaluation of their effectiveness. Many professionals, including improvement practitioners, are unfortunately mystified—and alienated—by theory, which discourages them from using it in their work. In an effort to demystify theory we make the point in this paper that, far from being discretionary or superfluous, theory (‘reason-giving’), both informal and formal, is intimately woven into virtually all human endeavour. We explore the special characteristics of grand, mid-range and programme theory; consider the consequences of misusing theory or failing to use it; review the process of developing and applying programme theory; examine some emerging criteria of ‘good’ theory; and emphasise the value, as well as the challenge, of combining informal experience-based theory with formal, publicly developed theory. We conclude that although informal theory is always at work in improvement, practitioners are often not aware of it or do not make it explicit. The germane issue for improvement practitioners, therefore, is not whether they use theory but whether they make explicit the particular theory or theories, informal and formal, they actually use. PMID:25616279
NASA Technical Reports Server (NTRS)
Mcruer, D. T.; Clement, W. F.; Allen, R. W.
1980-01-01
Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correcting the sources of human error requires that one attempt to reconstruct the underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing the propagation of human error.
Fouks, J D; Besnard, S; Signac, L; Meurice, J C; Neau, J P; Paquereau, J
2004-04-01
The present paper exposes algorithmic results providing a vision about sleep functions which complements biological theory and experiments. Derived from the algorithmic theory of information, the theory of adaptation aims at quantifying how an inherited or acquired piece of knowledge helps individuals to survive. It gives a scale of complexity for survival problems and proves that some of them can only be solved by a dynamical management of memory associating continuous learning and forgetting methods. In this paper we explain how a virtual robot "Picota" has been designed to simulate the behavior of a living hen. In order to survive in its synthetical environment, our robot must recognize good seeds from bad ones, and should take rest during night periods. Within this frame, and facing the rapid evolution of to-be-recognized forms, the best way to equilibrate the energetic needs of the robot and ensure survival is to use the nightly rest to reorganize the pieces of data acquired during the daily learning, and to trash the less useful ones. Thanks to this time sharing, the same circuits can be used for both daily learning and nightly forgetting and thus costs are lower; however, this also forces the system to "paralyse" the virtual robot, and therefore the night algorithm is reminiscent of paradoxical (REM) sleep. The algorithm of the robot takes advantage of the alternation between wakefulness or activity and the rest period. This diagram quite accurately recalls the REM period. In the future, the convergence between the neurophysiology of sleep and the theory of complexity may give us a new line of research in order to elucidate sleep functions.
A computational framework for modeling targets as complex adaptive systems
NASA Astrophysics Data System (ADS)
Santos, Eugene; Santos, Eunice E.; Korah, John; Murugappan, Vairavan; Subramanian, Suresh
2017-05-01
Modeling large military targets is a challenge as they can be complex systems encompassing myriad combinations of human, technological, and social elements that interact, leading to complex behaviors. Moreover, such targets have multiple components and structures, extending across multiple spatial and temporal scales, and are in a state of change, either in response to events in the environment or changes within the system. Complex adaptive system (CAS) theory can help in capturing the dynamism, interactions, and more importantly various emergent behaviors, displayed by the targets. However, a key stumbling block is incorporating information from various intelligence, surveillance and reconnaissance (ISR) sources, while dealing with the inherent uncertainty, incompleteness and time criticality of real world information. To overcome these challenges, we present a probabilistic reasoning network based framework called complex adaptive Bayesian Knowledge Base (caBKB). caBKB is a rigorous, overarching and axiomatic framework that models two key processes, namely information aggregation and information composition. While information aggregation deals with the union, merger and concatenation of information and takes into account issues such as source reliability and information inconsistencies, information composition focuses on combining information components where such components may have well defined operations. Since caBKBs can explicitly model the relationships between information pieces at various scales, it provides unique capabilities such as the ability to de-aggregate and de-compose information for detailed analysis. Using a scenario from the Network Centric Operations (NCO) domain, we will describe how our framework can be used for modeling targets with a focus on methodologies for quantifying NCO performance metrics.
Einav, Liran; Finkelstein, Amy; Schrimpf, Paul
2009-01-01
Much of the extensive empirical literature on insurance markets has focused on whether adverse selection can be detected. Once detected, however, there has been little attempt to quantify its welfare cost, or to assess whether and what potential government interventions may reduce these costs. To do so, we develop a model of annuity contract choice and estimate it using data from the U.K. annuity market. The model allows for private information about mortality risk as well as heterogeneity in preferences over different contract options. We focus on the choice of length of guarantee among individuals who are required to buy annuities. The results suggest that asymmetric information along the guarantee margin reduces welfare relative to a first best symmetric information benchmark by about £127 million per year, or about 2 percent of annuitized wealth. We also find that by requiring that individuals choose the longest guarantee period allowed, mandates could achieve the first-best allocation. However, we estimate that other mandated guarantee lengths would have detrimental effects on welfare. Since determining the optimal mandate is empirically difficult, our findings suggest that achieving welfare gains through mandatory social insurance may be harder in practice than simple theory may suggest. PMID:20592943
Revealing Relationships among Relevant Climate Variables with Information Theory
NASA Technical Reports Server (NTRS)
Knuth, Kevin H.; Golera, Anthony; Curry, Charles T.; Huyser, Karen A.; Wheeler, Kevin R.; Rossow, William B.
2005-01-01
The primary objective of the NASA Earth-Sun Exploration Technology Office is to understand the observed Earth climate variability, thus enabling the determination and prediction of the climate's response to both natural and human-induced forcing. We are currently developing a suite of computational tools that will allow researchers to calculate, from data, a variety of information-theoretic quantities such as mutual information, which can be used to identify relationships among climate variables, and transfer entropy, which indicates the possibility of causal interactions. Our tools estimate these quantities along with their associated error bars, the latter of which is critical for describing the degree of uncertainty in the estimates. This work is based upon optimal binning techniques that we have developed for piecewise-constant, histogram-style models of the underlying density functions. Two useful side benefits have already been discovered. The first allows a researcher to determine whether there exist sufficient data to estimate the underlying probability density. The second permits one to determine an acceptable degree of round-off when compressing data for efficient transfer and storage. We also demonstrate how mutual information and transfer entropy can be applied so as to allow researchers not only to identify relations among climate variables, but also to characterize and quantify their possible causal interactions.
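As a concrete illustration of the kind of quantity described above, the sketch below estimates the mutual information between two variables from a piecewise-constant (histogram) model and attaches a bootstrap error bar. It is a minimal stand-in, not the optimal-binning estimator of the toolkit: the bin count, the synthetic "climate" series, and the bootstrap settings are all illustrative assumptions.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (piecewise-constant) estimate of I(X;Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def bootstrap_mi(x, y, bins=16, n_boot=200, seed=0):
    """Return the MI estimate and a bootstrap standard error."""
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        reps.append(mutual_information(x[idx], y[idx], bins))
    return mutual_information(x, y, bins), float(np.std(reps))

# Toy example: two correlated series standing in for climate variables.
rng = np.random.default_rng(1)
x = rng.normal(size=5000)
y = 0.7 * x + rng.normal(scale=0.7, size=5000)
mi, err = bootstrap_mi(x, y)
print(f"I(X;Y) = {mi:.3f} +/- {err:.3f} bits")
```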
Neural correlates of distraction and conflict resolution for nonverbal auditory events.
Stewart, Hannah J; Amitay, Sygal; Alain, Claude
2017-05-09
In everyday situations auditory selective attention requires listeners to suppress task-irrelevant stimuli and to resolve conflicting information in order to make appropriate goal-directed decisions. Traditionally, these two processes (i.e. distractor suppression and conflict resolution) have been studied separately. In the present study we measured neuroelectric activity while participants performed a new paradigm in which both processes are quantified. In separate blocks of trials, participants indicated whether two sequential tones shared the same pitch or location, depending on the block's instruction. For the distraction measure, a positive component peaking at ~250 ms was found - a distraction positivity. Brain electrical source analysis of this component suggests different generators when listeners attended to frequency and location, with the distraction by location more posterior than the distraction by frequency, providing support for the dual-pathway theory. For the conflict resolution measure, a negative frontocentral component (270-450 ms) was found, which showed similarities with those observed in prior studies of auditory and visual conflict resolution tasks. The timing and distribution are consistent with two distinct neural processes, with suppression of task-irrelevant information occurring before conflict resolution. This new paradigm may prove useful in clinical populations to assess impairments in filtering out task-irrelevant information and/or resolving conflicting information.
Information Foraging Theory: A Framework for Intelligence Analysis
2014-11-01
oceanographic information, human intelligence (HUMINT), open-source intelligence (OSINT), and information provided by other governmental departments [1][5...]. Acronyms: HUMINT, Human Intelligence; IFT, Information Foraging Theory; LSA, Latent Semantic Similarity; MVT, Marginal Value Theorem; OFT, Optimal Foraging Theory; OSINT, Open-Source Intelligence.
Quantifying highly efficient incoherent energy transfer in perylene-based multichromophore arrays.
Webb, James E A; Chen, Kai; Prasad, Shyamal K K; Wojciechowski, Jonathan P; Falber, Alexander; Thordarson, Pall; Hodgkiss, Justin M
2016-01-21
Multichromophore perylene arrays were designed and synthesized to have extremely efficient resonance energy transfer. Using broadband ultrafast photoluminescence and transient absorption spectroscopies, transfer timescales of approximately 1 picosecond were resolved, corresponding to efficiencies of up to 99.98%. The broadband measurements also revealed spectra corresponding to incoherent transfer between localized states. Polarization resolved spectroscopy was used to measure the dipolar angles between donor and acceptor chromophores, thereby enabling geometric factors to be fixed when assessing the validity of Förster theory in this regime. Förster theory was found to predict the correct magnitude of transfer rates, with measured ∼2-fold deviations consistent with the breakdown of the point-dipole approximation at close approach. The materials presented, along with the novel methods for quantifying ultrahigh energy transfer efficiencies, will be valuable for applications demanding extremely efficient energy transfer, including fluorescent solar concentrators, optical gain, and photonic logic devices.
Ko, Linda K; Turner-McGrievy, Gabrielle M; Campbell, Marci K
2014-04-01
Podcasting is an emerging technology, and previous interventions have shown promising results using theory-based podcast for weight loss among overweight and obese individuals. This study investigated whether constructs of social cognitive theory and information processing theories (IPTs) mediate the effect of a podcast intervention on weight loss among overweight individuals. Data are from Pounds off Digitally, a study testing the efficacy of two weight loss podcast interventions (control podcast and theory-based podcast). Path models were constructed (n = 66). The IPTs, elaboration likelihood model, information control theory, and cognitive load theory mediated the effect of a theory-based podcast on weight loss. The intervention was significantly associated with all IPTs. Information control theory and cognitive load theory were related to elaboration, and elaboration was associated with weight loss. Social cognitive theory constructs did not mediate weight loss. Future podcast interventions grounded in theory may be effective in promoting weight loss.
Combining a dispersal model with network theory to assess habitat connectivity.
Lookingbill, Todd R; Gardner, Robert H; Ferrari, Joseph R; Keller, Cherry E
2010-03-01
Assessing the potential for threatened species to persist and spread within fragmented landscapes requires the identification of core areas that can sustain resident populations and dispersal corridors that can link these core areas with isolated patches of remnant habitat. We developed a set of GIS tools, simulation methods, and network analysis procedures to assess potential landscape connectivity for the Delmarva fox squirrel (DFS; Sciurus niger cinereus), an endangered species inhabiting forested areas on the Delmarva Peninsula, USA. Information on the DFS's life history and dispersal characteristics, together with data on the composition and configuration of land cover on the peninsula, were used as input data for an individual-based model to simulate dispersal patterns of millions of squirrels. Simulation results were then assessed using methods from graph theory, which quantifies habitat attributes associated with local and global connectivity. Several bottlenecks to dispersal were identified that were not apparent from simple distance-based metrics, highlighting specific locations for landscape conservation, restoration, and/or squirrel translocations. Our approach links simulation models, network analysis, and available field data in an efficient and general manner, making these methods useful and appropriate for assessing the movement dynamics of threatened species within landscapes being altered by human and natural disturbances.
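A toy sketch of the graph-theoretic step described above, assuming patch locations and a maximum dispersal distance are already available (here both are made up): it builds a patch network with networkx, counts connected habitat clusters, and flags potential bottlenecks via bridge edges and betweenness centrality, rather than the study's individual-based dispersal simulations.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical habitat patches with coordinates (km); in the study these would
# come from land-cover data and the individual-based dispersal model.
coords = {i: rng.uniform(0, 50, size=2) for i in range(40)}
max_dispersal_km = 8.0  # illustrative threshold, not a DFS parameter

G = nx.Graph()
G.add_nodes_from(coords)
for i in coords:
    for j in coords:
        if i < j and np.linalg.norm(coords[i] - coords[j]) <= max_dispersal_km:
            G.add_edge(i, j)

# Global connectivity: clusters of mutually reachable patches.
components = list(nx.connected_components(G))
print(f"{len(components)} connected habitat clusters")

# Potential bottlenecks: edges whose removal disconnects a cluster (bridges),
# and patches lying on many shortest dispersal paths (high betweenness).
bridges = list(nx.bridges(G))
betweenness = nx.betweenness_centrality(G)
top = sorted(betweenness, key=betweenness.get, reverse=True)[:5]
print("bridge edges:", bridges)
print("highest-betweenness patches:", top)
```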
Individual differences in personality change across the adult life span.
Schwaba, Ted; Bleidorn, Wiebke
2018-06-01
A precise and comprehensive description of personality continuity and change across the life span is the bedrock upon which theories of personality development are built. Little research has quantified the degree to which individuals deviate from mean-level developmental trends. In this study, we addressed this gap by examining individual differences in personality trait change across the life span. Data came from a nationally representative sample of 9,636 Dutch participants who provided Big Five self-reports at five assessment waves across 7 years. We divided our sample into 14 age groups (ages 16-84 at initial measurement) and estimated latent growth curve models to describe individual differences in personality change across the study period for each trait and age group. Across the adult life span, individual differences in personality change were small but significant until old age. For Openness, Conscientiousness, Extraversion, and Agreeableness, individual differences in change were most pronounced in emerging adulthood and decreased throughout midlife and old age. For Emotional Stability, individual differences in change were relatively consistent across the life span. These results inform theories of life span development and provide future directions for research on the causes and conditions of personality change. © 2017 Wiley Periodicals, Inc.
Eigenpairs of Toeplitz and Disordered Toeplitz Matrices with a Fisher-Hartwig Symbol
NASA Astrophysics Data System (ADS)
Movassagh, Ramis; Kadanoff, Leo P.
2017-05-01
Toeplitz matrices have entries that are constant along diagonals. They model directed transport, are at the heart of correlation function calculations of the two-dimensional Ising model, and have applications in quantum information science. We derive their eigenvalues and eigenvectors when the symbol is singular Fisher-Hartwig. We then add diagonal disorder and study the resulting eigenpairs. We find that there is a "bulk" behavior that is well captured by second order perturbation theory of non-Hermitian matrices. The non-perturbative behavior is classified into two classes: Runaways type I leave the complex-valued spectrum and become completely real because of eigenvalue attraction. Runaways type II leave the bulk and move very rapidly in response to perturbations. These have high condition numbers and can be predicted. Localization of the eigenvectors are then quantified using entropies and inverse participation ratios. Eigenvectors corresponding to Runaways type II are most localized (i.e., super-exponential), whereas Runaways type I are less localized than the unperturbed counterparts and have most of their probability mass in the interior with algebraic decays. The results are corroborated by applying free probability theory and various other supporting numerical studies.
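A small numerical sketch in the spirit of this analysis: build a banded non-Hermitian Toeplitz matrix, add weak diagonal disorder, and quantify eigenvector localization with the inverse participation ratio. The symbol and disorder strength below are illustrative choices, not the Fisher-Hartwig symbol or perturbation regime studied in the paper.

```python
import numpy as np
from scipy.linalg import toeplitz, eig

n = 200
# Illustrative banded, non-Hermitian Toeplitz matrix:
# 1 on the first subdiagonal, 0.3 on the first superdiagonal.
col = np.zeros(n); col[1] = 1.0
row = np.zeros(n); row[1] = 0.3
T = toeplitz(col, row)

rng = np.random.default_rng(0)
D = np.diag(0.05 * rng.standard_normal(n))  # weak diagonal disorder

vals, vecs = eig(T + D)

# Inverse participation ratio: ~1/n for extended states, ~1 for localized ones.
p = np.abs(vecs) ** 2
p /= p.sum(axis=0)
ipr = (p ** 2).sum(axis=0)

print("most localized eigenvalue :", vals[np.argmax(ipr)], "IPR =", ipr.max())
print("least localized eigenvalue:", vals[np.argmin(ipr)], "IPR =", ipr.min())
```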
Harold, Meredith Poore; Barlow, Steven M.
2012-01-01
The vocalizations and jaw kinematics of 30 infants aged 6–8 months were recorded using a Motion Analysis System and audiovisual technologies. This study represents the first attempt to determine the effect of play environment on infants' rate of vocalization and jaw movement. Four play conditions were compared: watching videos, social contingent reinforcement and vocal modeling with an adult, playing alone with small toys, and playing alone with large toys. The fewest vocalizations and the least spontaneous movement were observed when infants were watching videos or interacting with an adult. Infants vocalized most when playing with large toys. The small toys, which naturally elicited gross motor movement (e.g., waving, banging, shaking), prompted fewer vocalizations. This study was also the first to quantify the kinematics of vocalized and non-vocalized jaw movements of 6–8 month-old infants. Jaw kinematics did not differentiate infants who produced canonical syllables from those who did not. All infants produced many jaw movements without vocalization. However, during vocalization, infants were unlikely to move their jaw. This contradicts current theories that infant protophonic vocalizations are jaw dominant. Results of the current study can inform socio-linguistic and kinematic theories of canonical babbling. PMID:23261792
Utility and limitations of measures of health inequities: a theoretical perspective
Alonge, Olakunle; Peters, David H.
2015-01-01
Summary box What is already known on this subject? Various measures have been used in quantifying health inequities among populations in recent times; most of these measures were derived to capture the socioeconomic inequalities in health. These different measures do not always lend themselves to common interpretation by policy makers and health managers because they each reflect limited aspects of the concept of health inequities. What does this study add? To inform a more appropriate application of the different measures currently used in quantifying health inequities, this article explicates common theories underlying the definition of health inequities and uses this understanding to show the utility and limitations of these different measures. It also suggests some key features of an ideal indicator based on the conceptual understanding, with the hope of influencing future efforts in developing more robust measures of health inequities. The article also provides a conceptual ‘product label’ for the common measures of health inequities to guide users and ‘consumers’ in making more robust inferences and conclusions. This paper examines common approaches for quantifying health inequities and assesses the extent to which they incorporate key theories necessary for explicating the definition of health inequity. The first theoretical analysis examined the distinction between inter-individual and inter-group health inequalities as measures of health inequities. The second analysis considered the notion of fairness in health inequalities from different philosophical perspectives. To understand the extent to which different measures of health inequities incorporate these theoretical explanations, four criteria were used to assess each measure: 1) Does the indicator demonstrate inter-group or inter-individual health inequalities or both; 2) Does it reflect health inequalities in relation to socioeconomic position; 3) Is it sensitive to the absolute transfer of health (outcomes, services, or both) or income/wealth between groups; 4) Could it be used to capture inequalities in relation to other population groupings (other than socioeconomic status)? The measures assessed include: before and after measures within only the disadvantaged population, range, Gini coefficient, Pseudo-Gini coefficient, index of dissimilarity, concentration index, slope and relative indices of inequality, and regression techniques. None of these measures satisfied all the four criteria, except the range. Whereas each measure quantifies a different perspective in health inequities, using a measure within only the disadvantaged population does not measure health inequities in a meaningful way, even using before and after changes. For a more complete assessment of how programs affect health inequities, it may be useful to use more than one measure. PMID:26361347
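One of the measures assessed above, the concentration index, reduces to a short computation once individuals (or groups) are ranked by socioeconomic position. The sketch below uses the standard covariance formula on made-up prevalence data and, as the article argues, captures only the socioeconomic-gradient aspect of inequity.

```python
import numpy as np

def concentration_index(health, ses):
    """Concentration index: 2*cov(h, r)/mean(h), with r the fractional
    socioeconomic rank (0 = poorest, 1 = richest). Negative values mean the
    health problem is concentrated among the poor."""
    order = np.argsort(ses)
    h = np.asarray(health, dtype=float)[order]
    n = len(h)
    r = (np.arange(n) + 0.5) / n          # fractional rank by SES
    return 2.0 * np.cov(h, r, bias=True)[0, 1] / h.mean()

# Illustrative data: illness prevalence decreasing with income.
income  = np.array([100, 200, 300, 400, 500, 600, 700, 800, 900, 1000])
illness = np.array([0.30, 0.28, 0.25, 0.22, 0.20, 0.17, 0.15, 0.12, 0.10, 0.08])
print(f"concentration index = {concentration_index(illness, income):+.3f}")
```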
Thermalization and confinement in strongly coupled gauge theories
NASA Astrophysics Data System (ADS)
Ishii, Takaaki; Kiritsis, Elias; Rosen, Christopher
2016-11-01
Quantum field theories of strongly interacting matter sometimes have a useful holographic description in terms of the variables of a gravitational theory in higher dimensions. This duality maps time dependent physics in the gauge theory to time dependent solutions of the Einstein equations in the gravity theory. In order to better understand the process by which "real world" theories such as QCD behave out of thermodynamic equilibrium, we study time dependent perturbations to states in a model of a confining, strongly coupled gauge theory via holography. Operationally, this involves solving a set of non-linear Einstein equations supplemented with specific time dependent boundary conditions. The resulting solutions allow one to comment on the timescale by which the perturbed states thermalize, as well as to quantify the properties of the final state as a function of the perturbation parameters. We comment on the influence of the dual gauge theory's confinement scale on these results, as well as the appearance of a previously anticipated universal scaling regime in the "abrupt quench" limit.
Toward Question-Asking Machines: The Logic of Questions and the Inquiry Calculus
NASA Technical Reports Server (NTRS)
Knuth, Kevin H.
2005-01-01
For over a century, the study of logic has focused on the algebra of logical statements. This work, first performed by George Boole, has led to the development of modern computers, and was shown by Richard T. Cox to be the foundation of Bayesian inference. Meanwhile the logic of questions has been much neglected. For our computing machines to be truly intelligent, they need to be able to ask relevant questions. In this paper I will show how the Boolean lattice of logical statements gives rise to the free distributive lattice of questions thus defining their algebra. Furthermore, there exists a quantity analogous to probability, called relevance, which quantifies the degree to which one question answers another. I will show that relevance is not only a natural generalization of information theory, but also forms its foundation.
Observable measure of quantum coherence in finite dimensional systems.
Girolami, Davide
2014-10-24
Quantum coherence is the key resource for quantum technology, with applications in quantum optics, information processing, metrology, and cryptography. Yet, there is no universally efficient method for quantifying coherence either in theoretical or in experimental practice. I introduce a framework for measuring quantum coherence in finite dimensional systems. I define a theoretical measure which satisfies the reliability criteria established in the context of quantum resource theories. Then, I present an experimental scheme implementable with current technology which evaluates the quantum coherence of an unknown state of a d-dimensional system by performing two programmable measurements on an ancillary qubit, in place of the O(d2) direct measurements required by full state reconstruction. The result yields a benchmark for monitoring quantum effects in complex systems, e.g., certifying nonclassicality in quantum protocols and probing the quantum behavior of biological complexes.
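For orientation, the sketch below evaluates a standard coherence quantifier from the same resource-theory literature, the l1-norm of coherence in a fixed basis; it is not the paper's observable, ancilla-assisted measure, just an easily computed point of comparison on two textbook qubit states.

```python
import numpy as np

def l1_coherence(rho):
    """l1-norm of coherence in the computational basis: sum of |off-diagonal|
    elements. A standard resource-theory quantifier (not the paper's
    ancilla-assisted observable measure)."""
    return float(np.abs(rho).sum() - np.trace(np.abs(rho)).real)

# Maximally coherent qubit state |+><+| versus the maximally mixed state.
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
mixed = np.eye(2, dtype=complex) / 2
print(l1_coherence(plus))   # 1.0
print(l1_coherence(mixed))  # 0.0
```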
Blauch, A J; Schiano, J L; Ginsberg, M D
2000-06-01
The performance of a nuclear resonance detection system can be quantified using binary detection theory. Within this framework, signal averaging increases the probability of a correct detection and decreases the probability of a false alarm by reducing the variance of the noise in the average signal. In conjunction with signal averaging, we propose another method based on feedback control concepts that further improves detection performance. By maximizing the nuclear resonance signal amplitude, feedback raises the probability of correct detection. Furthermore, information generated by the feedback algorithm can be used to reduce the probability of false alarm. We discuss the advantages afforded by feedback that cannot be obtained using signal averaging. As an example, we show how this method is applicable to the detection of explosives using nuclear quadrupole resonance. Copyright 2000 Academic Press.
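A minimal sketch of the binary-detection framing, assuming a known signal amplitude in white Gaussian noise: averaging N scans shrinks the noise standard deviation by sqrt(N), which raises the detection probability at a fixed false-alarm rate. The amplitude, noise level, and false-alarm target below are illustrative, not NQR system parameters.

```python
import numpy as np
from scipy.stats import norm

def detection_probability(amplitude, sigma, n_avg, p_false_alarm):
    """P_D for a threshold detector of a known amplitude in Gaussian noise,
    when n_avg scans are averaged (noise std shrinks by sqrt(n_avg))."""
    noise_std = sigma / np.sqrt(n_avg)
    threshold = norm.isf(p_false_alarm) * noise_std   # fixes P_FA
    return norm.sf((threshold - amplitude) / noise_std)

# Illustrative numbers: weak signal, unit noise, P_FA held at 1e-3.
for n in (1, 4, 16, 64):
    pd = detection_probability(amplitude=0.5, sigma=1.0, n_avg=n, p_false_alarm=1e-3)
    print(f"N = {n:3d} averages -> P_D = {pd:.3f} at P_FA = 1e-3")
```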
Rodríguez-Martín, Beatriz; Martínez-Andrés, María; Cervera-Monteagudo, Beatriz; Notario-Pacheco, Blanca; Martínez-Vizcaíno, Vicente
2013-06-28
The quality of care in nursing homes is weakly defined, and its assessment has traditionally focused on quantifying nursing home outputs and on comparing nursing homes' resources. The point of view of clients has rarely been taken into account. The aim of this study was to ascertain what "quality of care" means for residents of nursing homes. Grounded theory was used to design and analyze a qualitative study based on in-depth interviews with a theoretical sample of 20 persons aged over 65 years with no cognitive impairment and eight proxy informants of residents with cognitive impairment, institutionalized at a public nursing home in Spain. Our analysis revealed that participants perceived the quality of care in two ways: as aspects related to the persons providing care and as institutional aspects of the care process. All participants agreed that the aspects related to the persons providing care were a pillar of quality, one that, in turn, embodied a series of emotional and technical professional competences. Regarding the institutional aspects of the care process, participants laid emphasis on round-the-clock access to health care services and on professionals' job stability. This paper includes the perspectives of nursing home residents, which are largely absent from the literature. Incorporating residents' standpoints as a complement to traditional institutional criteria would furnish health providers and funding agencies with key information when designing action plans and interventions aimed at achieving excellence in health care.
Demystifying theory and its use in improvement.
Davidoff, Frank; Dixon-Woods, Mary; Leviton, Laura; Michie, Susan
2015-03-01
The role and value of theory in improvement work in healthcare has been seriously underrecognised. We join others in proposing that more informed use of theory can strengthen improvement programmes and facilitate the evaluation of their effectiveness. Many professionals, including improvement practitioners, are unfortunately mystified-and alienated-by theory, which discourages them from using it in their work. In an effort to demystify theory we make the point in this paper that, far from being discretionary or superfluous, theory ('reason-giving'), both informal and formal, is intimately woven into virtually all human endeavour. We explore the special characteristics of grand, mid-range and programme theory; consider the consequences of misusing theory or failing to use it; review the process of developing and applying programme theory; examine some emerging criteria of 'good' theory; and emphasise the value, as well as the challenge, of combining informal experience-based theory with formal, publicly developed theory. We conclude that although informal theory is always at work in improvement, practitioners are often not aware of it or do not make it explicit. The germane issue for improvement practitioners, therefore, is not whether they use theory but whether they make explicit the particular theory or theories, informal and formal, they actually use. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Shapes, scents and sounds: quantifying the full multi-sensory basis of conceptual knowledge.
Hoffman, Paul; Lambon Ralph, Matthew A
2013-01-01
Contemporary neuroscience theories assume that concepts are formed through experience in multiple sensory-motor modalities. Quantifying the contribution of each modality to different object categories is critical to understanding the structure of the conceptual system and to explaining category-specific knowledge deficits. Verbal feature listing is typically used to elicit this information but has a number of drawbacks: sensory knowledge often cannot easily be translated into verbal features and many features are experienced in multiple modalities. Here, we employed a more direct approach in which subjects rated their knowledge of objects in each sensory-motor modality separately. Compared with these ratings, feature listing over-estimated the importance of visual form and functional knowledge and under-estimated the contributions of other sensory channels. An item's sensory rating proved to be a better predictor of lexical-semantic processing speed than the number of features it possessed, suggesting that ratings better capture the overall quantity of sensory information associated with a concept. Finally, the richer, multi-modal rating data not only replicated the sensory-functional distinction between animals and non-living things but also revealed novel distinctions between different types of artefact. Hierarchical cluster analyses indicated that mechanical devices (e.g., vehicles) were distinct from other non-living objects because they had strong sound and motion characteristics, making them more similar to animals in this respect. Taken together, the ratings align with neuroscience evidence in suggesting that a number of distinct sensory processing channels make important contributions to object knowledge. Multi-modal ratings for 160 objects are provided as supplementary materials. Copyright © 2012 Elsevier Ltd. All rights reserved.
Krstacic, Goran; Krstacic, Antonija; Smalcelj, Anton; Milicic, Davor; Jembrek-Gostovic, Mirjana
2007-04-01
Dynamic analysis techniques may quantify abnormalities in heart rate variability (HRV) based on nonlinear and fractal analysis (chaos theory). The article emphasizes the clinical and prognostic significance of dynamic changes in short time series from patients with coronary heart disease (CHD) during the exercise electrocardiogram (ECG) test. Subjects were included after complete cardiovascular diagnostic workup. Series of R-R and ST-T intervals were obtained from digitally sampled exercise ECG data. The rescaled range analysis method determined the fractal dimension of the intervals. To quantify the fractal long-range correlation properties of heart rate variability, the detrended fluctuation analysis technique was used. Approximate entropy (ApEn) was applied to quantify the regularity and complexity of the time series, as well as the unpredictability of their fluctuations. It was found that the short-term fractal scaling exponent (alpha(1)) is significantly lower in patients with CHD (0.93 +/- 0.07 vs 1.09 +/- 0.04; P < 0.001). The patients with CHD had a higher fractal dimension in each exercise test program separately, as well as in the exercise program as a whole. ApEn was significantly lower in the CHD group in both R-R and ST-T ECG intervals (P < 0.001). The nonlinear dynamic methods could therefore have clinical and prognostic applicability also in short ECG series. Dynamic analysis based on chaos theory during the exercise ECG test points to multifractal time series in CHD patients, who lose the normal fractal characteristics and regularity of HRV. Nonlinear analysis techniques may complement traditional ECG analysis.
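One of the quantifiers used above, approximate entropy, is compact enough to sketch directly; the implementation below follows the common definition with the usual defaults (m = 2, r = 0.2 times the standard deviation), which are assumptions rather than the study's exact settings.

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D series (common defaults:
    m = 2, r = 0.2 * std). Lower values indicate more regular dynamics."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])           # template vectors
        # Chebyshev distance between all pairs of template vectors.
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (dist <= r).mean(axis=1)                             # match fractions
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 1000))
irregular = rng.standard_normal(1000)
print("ApEn sine :", round(approximate_entropy(regular), 3))
print("ApEn noise:", round(approximate_entropy(irregular), 3))
```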
Covariant information-density cutoff in curved space-time.
Kempf, Achim
2004-06-04
In information theory, the link between continuous information and discrete information is established through well-known sampling theorems. Sampling theory explains, for example, how frequency-filtered music signals are reconstructible perfectly from discrete samples. In this Letter, sampling theory is generalized to pseudo-Riemannian manifolds. This provides a new set of mathematical tools for the study of space-time at the Planck scale: theories formulated on a differentiable space-time manifold can be equivalent to lattice theories. There is a close connection to generalized uncertainty relations which have appeared in string theory and other studies of quantum gravity.
Khadem, Ali; Hossein-Zadeh, Gholam-Ali; Khorrami, Anahita
2016-03-01
The majority of previous functional/effective connectivity studies conducted on autistic patients converged on the underconnectivity theory of ASD: "long-range underconnectivity and sometimes short-range overconnectivity". However, to the best of our knowledge the total (linear and nonlinear) predictive information transfers (PITs) of autistic patients have not been investigated yet. Also, EEG data have rarely been used for exploring the information processing deficits in autistic subjects. This study is aimed at comparing the total (linear and nonlinear) PITs of autistic and typically developing healthy youths during human face processing by using EEG data. The ERPs of 12 autistic youths and 19 age-matched healthy control (HC) subjects were recorded while they were watching upright and inverted human face images. The PITs among EEG channels were quantified using two measures separately: transfer entropy with self-prediction optimality (TESPO), and modified transfer entropy with self-prediction optimality (MTESPO). Afterwards, directed differential connectivity graphs (dDCGs) were constructed to characterize the significant changes in the estimated PITs of autistic subjects compared with HC ones. Using both TESPO and MTESPO, a long-range reduction of PITs in the ASD group during face processing was revealed (particularly from frontal to right temporal channels). Also, the orientation of the face images (upright or inverted) did not appear to significantly modulate the binary pattern of the PIT-based dDCGs. Moreover, compared with TESPO, the results of MTESPO were more compatible with the underconnectivity theory of ASD in the sense that MTESPO showed no long-range increase in PIT. It is also noteworthy that, to the best of our knowledge, this is the first time a version of MTE has been applied to patients (here, ASD) and its first use for EEG data analysis.
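A bare-bones, binned estimate of the underlying quantity, transfer entropy with a single-sample history, is sketched below on synthetic coupled signals; the TESPO/MTESPO variants used in the study add self-prediction optimality and other refinements that are not reproduced here.

```python
import numpy as np

def transfer_entropy(x, y, bins=8):
    """Binned transfer entropy TE(X -> Y) in bits with history length 1:
    sum over p(y1, y0, x0) * log2[ p(y1 | y0, x0) / p(y1 | y0) ]."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    y1, y0, x0 = yd[1:], yd[:-1], xd[:-1]

    joint = np.zeros((bins, bins, bins))
    np.add.at(joint, (y1, y0, x0), 1.0)
    joint /= joint.sum()

    p_y0x0 = joint.sum(axis=0)                   # p(y0, x0)
    p_y1y0 = joint.sum(axis=2)                   # p(y1, y0)
    p_y0 = joint.sum(axis=(0, 2))                # p(y0)

    te = 0.0
    for i, j, k in zip(*np.nonzero(joint)):
        num = joint[i, j, k] / p_y0x0[j, k]      # p(y1 | y0, x0)
        den = p_y1y0[i, j] / p_y0[j]             # p(y1 | y0)
        te += joint[i, j, k] * np.log2(num / den)
    return te

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
y = np.empty_like(x)
y[0] = 0.0
for t in range(1, len(x)):                       # y is driven by past x
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
print(f"TE(X->Y) = {transfer_entropy(x, y):.3f} bits")
print(f"TE(Y->X) = {transfer_entropy(y, x):.3f} bits")
```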
NASA Astrophysics Data System (ADS)
Stillwater, Tai
A large body of evidence suggests that drivers who receive real-time fuel economy information can increase their vehicle fuel economy by 5%, a process commonly known as ecodriving. However, few studies have directly addressed the human side of the feedback, that is, why drivers would (or would not) be motivated to change their behavior and how to design feedback devices to maximize the motivation to ecodrive. This dissertation approaches the question using a mixed qualitative and quantitative approach to explore driver responses and psychology as well as to quantify the process of behavior change. The first chapter discusses the use of mile-per-gallon fuel economy as a metric for driver feedback and finds that an alternative energy economy metric is superior for real-time feedback. The second chapter reviews behavioral theories and proposes a number of practical solutions for the ecodriving context. In the third chapter the theory of planned behavior is tested against driver responses to an existing feedback system available in the 2008 model Toyota Prius. The fourth chapter presents a novel feedback design based on behavioral theories and drivers' responses to the feedback. Finally, chapter five presents the quantitative results of a natural-driving study of fuel economy feedback. The dissertation findings suggest that behavior theories such as the Theory of Planned Behavior can provide important improvements to existing feedback designs. In addition, a careful analysis of vehicle energy flows indicates that the mile-per-gallon metric is deeply flawed as a real-time feedback metric and should be replaced. Chapters 2 and 3 conclude that behavior theories have both a theoretical and a highly practical role in feedback design, although the driving context requires just as much care in their application. Chapters 4 and 5 find that a theory-inspired interface provides drivers with engaging and motivating feedback, and that integrating personal goals into the feedback is the most motivating theory-based addition. Finally, the behavioral model results in chapter 5 suggest that driver goals not only influence in-vehicle energy use, but are themselves flexible constructs that can be directly influenced by energy feedback.
The potential role of sea spray droplets in facilitating air-sea gas transfer
NASA Astrophysics Data System (ADS)
Andreas, E. L.; Vlahos, P.; Monahan, E. C.
2016-05-01
For over 30 years, air-sea interaction specialists have been evaluating and parameterizing the role of whitecap bubbles in air-sea gas exchange. To our knowledge, no one, however, has studied the mirror image process of whether sea spray droplets can facilitate air-sea gas exchange. We are therefore using theory, data analysis, and numerical modeling to quantify the role of spray on air-sea gas transfer. In this, our first formal work on this subject, we seek the rate-limiting step in spray-mediated gas transfer by evaluating the three time scales that govern the exchange: τ_air, which quantifies the rate of transfer between the atmospheric gas reservoir and the surface of the droplet; τ_int, which quantifies the exchange rate across the air-droplet interface; and τ_aq, which quantifies gas mixing within the aqueous solution droplet.
Finite-block-length analysis in classical and quantum information theory.
Hayashi, Masahito
2017-01-01
Coding technology is used in several information processing tasks. In particular, when noise during transmission disturbs communications, coding technology is employed to protect the information. However, there are two types of coding technology: coding in classical information theory and coding in quantum information theory. Although the physical media used to transmit information ultimately obey quantum mechanics, we need to choose the type of coding depending on the kind of information device, classical or quantum, that is being used. In both branches of information theory, there are many elegant theoretical results under the ideal assumption that an infinitely large system is available. In a realistic situation, we need to account for finite size effects. The present paper reviews finite size effects in classical and quantum information theory with respect to various topics, including applied aspects.
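One concrete finite-size result of the kind such reviews cover is the second-order (normal) approximation to the best achievable rate over a binary symmetric channel, R(n, eps) ≈ C - sqrt(V/n) Q^{-1}(eps), with higher-order terms omitted; the crossover probability and block lengths in the sketch are illustrative.

```python
import numpy as np
from scipy.stats import norm

def bsc_normal_approximation(p, n, eps):
    """Second-order (normal) approximation to the best coding rate over a
    binary symmetric channel with crossover p, block length n, error eps:
        R(n, eps) ~ C - sqrt(V/n) * Q^{-1}(eps)
    with capacity C = 1 - h(p) and dispersion V = p(1-p) * log2((1-p)/p)^2."""
    h = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    capacity = 1 - h
    dispersion = p * (1 - p) * np.log2((1 - p) / p) ** 2
    return capacity - np.sqrt(dispersion / n) * norm.isf(eps)

p, eps = 0.11, 1e-3
capacity = 1 - (-p * np.log2(p) - (1 - p) * np.log2(1 - p))
for n in (100, 1000, 10000, 100000):
    rate = bsc_normal_approximation(p, n, eps)
    print(f"n = {n:6d}: rate ~ {rate:.3f} (capacity {capacity:.3f})")
```

The gap between the finite-block rate and capacity shrinks like 1/sqrt(n), which is the basic finite size effect the review is concerned with.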
Processing of Numerical and Proportional Quantifiers
ERIC Educational Resources Information Center
Shikhare, Sailee; Heim, Stefan; Klein, Elise; Huber, Stefan; Willmes, Klaus
2015-01-01
Quantifier expressions like "many" and "at least" are part of a rich repository of words in language representing magnitude information. The role of numerical processing in comprehending quantifiers was studied in a semantic truth value judgment task, asking adults to quickly verify sentences about visual displays using…
Evolutionary significance of ageing in the wild.
Kowald, Axel; Kirkwood, Thomas B L
2015-11-01
Human lifespan has risen dramatically over the last 150 years, leading to a significant increase in the fraction of aged people in the population. Until recently it was believed that this contrasted strongly with the situation in wild populations of animals, where the likelihood of encountering demonstrably senescent individuals was believed to be negligible. Over the recent years, however, a series of field studies has appeared that shows ageing can also be observed for many species in the wild. We discuss here the relevance of this finding for the different evolutionary theories of ageing, since it has been claimed that ageing in the wild is incompatible with the so-called non-adaptive (non-programmed) theories, i.e. those in which ageing is presumed not to offer a direct selection benefit. We show that a certain proportion of aged individuals in the population is fully compatible with the antagonistic pleiotropy and the disposable soma theories, while it is difficult to reconcile with the mutation accumulation theory. We also quantify the costs of ageing using life history data from recent field studies and a range of possible metrics. We discuss the merits and problems of the different metrics and also introduce a new metric, yearly death toll, that aims directly at quantifying the deaths caused by the ageing process. Copyright © 2015 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Harper, Marc Allen
2009-01-01
This work attempts to explain the relationships between natural selection, information theory, and statistical inference. In particular, a geometric formulation of information theory known as information geometry and its deep connections to evolutionary game theory inform the role of natural selection in evolutionary processes. The goals of this…
NASA Astrophysics Data System (ADS)
Dall'Arno, Michele; Brandsen, Sarah; Tosini, Alessandro; Buscemi, Francesco; Vedral, Vlatko
2017-07-01
A paramount topic in quantum foundations, rooted in the study of the Einstein-Podolsky-Rosen (EPR) paradox and Bell inequalities, is that of characterizing quantum theory in terms of the spacelike correlations it allows. Here, we show that to focus only on spacelike correlations is not enough: we explicitly construct a toy model theory that, while not contradicting classical and quantum theories at the level of spacelike correlations, still displays an anomalous behavior in its timelike correlations. We call this anomaly, quantified in terms of a specific communication game, the "hypersignaling" phenomenon. We hence conclude that the "principle of quantumness," if it exists, cannot be found in spacelike correlations alone: nontrivial constraints need to be imposed also on timelike correlations, in order to exclude hypersignaling theories.
Elliptic genera and 3d gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benjamin, Nathan; Cheng, Miranda C. N.; Kachru, Shamit
Here, we describe general constraints on the elliptic genus of a 2d supersymmetric conformal field theory which has a gravity dual with large radius in Planck units. We give examples of theories which do and do not satisfy the bounds we derive, by describing the elliptic genera of symmetric product orbifolds of K3, product manifolds, certain simple families of Calabi–Yau hypersurfaces, and symmetric products of the "Monster CFT". We discuss the distinction between theories with supergravity duals and those whose duals have strings at the scale set by the AdS curvature. Under natural assumptions, we attempt to quantify the fraction of (2,2) supersymmetric conformal theories which admit a weakly curved gravity description, at large central charge.
Measuring the hierarchy of feedforward networks
NASA Astrophysics Data System (ADS)
Corominas-Murtra, Bernat; Rodríguez-Caso, Carlos; Goñi, Joaquín; Solé, Ricard
2011-03-01
In this paper we explore the concept of hierarchy as a quantifiable descriptor of ordered structures, starting from three conditions that a hierarchical structure must satisfy: order, predictability, and pyramidal structure. According to these principles, we define a hierarchical index using concepts from graph and information theory. This estimator quantifies the hierarchical character of any system that can be abstracted as a feedforward causal graph, i.e., a directed acyclic graph defined on a single connected structure. Our hierarchical index balances the predictability and pyramidal conditions through two entropies: one for the onward flow and the other for the backward reversion. We show how this index identifies hierarchical, antihierarchical, and nonhierarchical structures. Our formalism reveals that, under the defined conditions for a hierarchical structure, feedforward trees and inverted tree graphs emerge as the only causal structures of maximally hierarchical and antihierarchical systems respectively. Conversely, null values of the hierarchical index arise in a number of different network configurations: from linear chains, due to their lack of pyramidal structure, to fully connected feedforward graphs, where the diversity of onward pathways is canceled by the uncertainty (lack of predictability) when going backward. Some illustrative examples are provided to distinguish among these three types of causal graphs.
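A deliberately simplified toy in the spirit of this index (and not the authors' estimator) is sketched below: it compares the average branching uncertainty of a directed acyclic graph traversed forward versus backward, so that a feedforward tree scores +1, its inversion -1, and a linear chain 0.

```python
import math
import networkx as nx

def branching_entropy(degrees):
    """Mean log2(branching factor) over nodes that have somewhere to go."""
    ds = [d for _, d in degrees if d > 0]
    return sum(math.log2(d) for d in ds) / len(ds) if ds else 0.0

def toy_hierarchy_index(G):
    """Toy proxy (not the published estimator): forward branching uncertainty
    minus backward branching uncertainty, normalized. Positive ~ hierarchical
    (diverse going down, predictable going up), negative ~ antihierarchical,
    zero ~ neither."""
    h_fwd = branching_entropy(G.out_degree())
    h_bwd = branching_entropy(G.in_degree())
    total = h_fwd + h_bwd
    return (h_fwd - h_bwd) / total if total else 0.0

tree = nx.balanced_tree(2, 3, create_using=nx.DiGraph)   # edges root -> leaves
chain = nx.path_graph(8, create_using=nx.DiGraph)
print(toy_hierarchy_index(tree))              # +1 (maximally hierarchical)
print(toy_hierarchy_index(tree.reverse()))    # -1 (antihierarchical)
print(toy_hierarchy_index(chain))             #  0 (no pyramidal structure)
```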
Physico-Chemical and Structural Interpretation of Discrete Derivative Indices on N-Tuples Atoms
Martínez-Santiago, Oscar; Marrero-Ponce, Yovani; Barigye, Stephen J.; Le Thi Thu, Huong; Torres, F. Javier; Zambrano, Cesar H.; Muñiz Olite, Jorge L.; Cruz-Monteagudo, Maykel; Vivas-Reyes, Ricardo; Vázquez Infante, Liliana; Artiles Martínez, Luis M.
2016-01-01
This report examines the interpretation of the Graph Derivative Indices (GDIs) from three different perspectives (i.e., in structural, steric and electronic terms). It is found that the individual vertex frequencies may be expressed in terms of the geometrical and electronic reactivity of the atoms and bonds, respectively. On the other hand, it is demonstrated that the GDIs are sensitive to progressive structural modifications in terms of size, ramification, electronic richness, conjugation effects and molecular symmetry. Moreover, it is observed that the GDIs quantify the interaction capacity among molecules and codify information on the activation entropy. A structure-property relationship study reveals a direct correspondence between the individual frequencies of atoms and Hückel's Free Valence, as well as between the atomic GDIs and the chemical shift in NMR, which collectively validates the theory that these indices codify steric and electronic information of the atoms in a molecule. Taking into consideration the regularity and coherence found in experiments performed with the GDIs, it is possible to say that GDIs have a plausible interpretation in structural and physicochemical terms. PMID:27240357
Probing the holographic principle using dynamical gauge effects from open spin-orbit coupling
NASA Astrophysics Data System (ADS)
Zhao, Jianshi; Price, Craig; Liu, Qi; Gemelke, Nathan
2016-05-01
Dynamical gauge fields result from locally defined symmetries and an effective over-labeling of quantum states. Coupling atoms weakly to a reservoir of laser modes can create an effective dynamical gauge field purely due to the disregard of information in the optical states. Here we report measurements revealing effects of open spin-orbit coupling in a system where an effective model can be formed from a non-abelian SU(2) × U(1) field theory following the Yang-Mills construct. Forming a close analogy to dynamical gauge effects in quantum chromodynamics, we extract a measure of atomic motion which reveals the analog of a closing mass gap for the relevant gauge boson, shedding light on long-standing open problems in gauge-fixing scale anomalies. Using arguments following the holographic principle, we measure scaling relations which can be understood by quantifying the information present in the local potential. New prospects for using these techniques to develop fractionalization of multi-particle and macroscopic systems with dissipative and non-abelian gauge fields will also be discussed. We acknowledge support from NSF Award No. 1068570, and the Charles E. Kaufman Foundation.
Loce, R P; Jodoin, R E
1990-09-10
Using the tools of Fourier analysis, a sampling requirement is derived that assures that sufficient information is contained within the samples of a distribution to calculate accurately geometric moments of that distribution. The derivation follows the standard textbook derivation of the Whittaker-Shannon sampling theorem, which is used for reconstruction, but further insight leads to a coarser minimum sampling interval for moment determination. The need for fewer samples to determine moments agrees with intuition since less information should be required to determine a characteristic of a distribution compared with that required to construct the distribution. A formula for calculation of the moments from these samples is also derived. A numerical analysis is performed to quantify the accuracy of the calculated first moment for practical nonideal sampling conditions. The theory is applied to a high speed laser beam position detector, which uses the normalized first moment to measure raster line positional accuracy in a laser printer. The effects of the laser irradiance profile, sampling aperture, number of samples acquired, quantization, and noise are taken into account.
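A numerical illustration of the application mentioned at the end: the normalized first moment (centroid) of a beam computed directly from discrete samples. The Gaussian profile, beam width, and sampling interval below are stand-ins for the real irradiance profile and detector geometry.

```python
import numpy as np

def centroid_from_samples(samples, positions):
    """Normalized first moment (centroid) computed directly from samples."""
    samples = np.asarray(samples, dtype=float)
    return float(np.sum(positions * samples) / np.sum(samples))

# Stand-in for a laser irradiance profile: Gaussian beam centered at 3.7 um,
# 1/e^2 radius 10 um, sampled every 4 um by a detector array.
true_center, w = 3.7, 10.0
x = np.arange(-40.0, 44.0, 4.0)
profile = np.exp(-2 * (x - true_center) ** 2 / w ** 2)

est = centroid_from_samples(profile, x)
print(f"true center = {true_center:.3f} um, estimated = {est:.3f} um")
```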
Suppressed neural complexity during ketamine- and propofol-induced unconsciousness.
Wang, Jisung; Noh, Gyu-Jeong; Choi, Byung-Moon; Ku, Seung-Woo; Joo, Pangyu; Jung, Woo-Sung; Kim, Seunghwan; Lee, Heonsoo
2017-07-13
Ketamine and propofol have distinctively different molecular mechanisms of action and neurophysiological features, although both induce loss of consciousness. Therefore, identifying a common feature of ketamine- and propofol-induced unconsciousness would provide insight into the underlying mechanism of losing consciousness. In this study we search for a common feature by applying the concept of type-II complexity, and argue that neural complexity is essential for a brain to maintain consciousness. To test this hypothesis, we show that complexity is suppressed during loss of consciousness induced by ketamine or propofol. We analyzed the randomness (type-I complexity) and complexity (type-II complexity) of electroencephalogram (EEG) signals before and after bolus injection of ketamine or propofol. For the analysis, we use Mean Information Gain (MIG) and Fluctuation Complexity (FC), which are information-theory-based measures that quantify disorder and complexity of dynamics respectively. Both ketamine and propofol reduced the complexity of the EEG signal, but ketamine increased the randomness of the signal and propofol decreased it. The finding supports our claim and suggests EEG complexity as a candidate for a consciousness indicator. Copyright © 2017 Elsevier B.V. All rights reserved.
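The two measures can be sketched compactly for a binary symbolization with single-symbol history, following their common definitions (MIG as the conditional entropy of the next symbol, FC as the mean squared net information gain); the median-threshold binarization and toy signal below are illustrative simplifications, not the study's EEG pipeline.

```python
import numpy as np

def symbol_pair_probs(sym, n_symbols):
    """Joint probability p(i, j) of consecutive symbols in a sequence."""
    joint = np.zeros((n_symbols, n_symbols))
    np.add.at(joint, (sym[:-1], sym[1:]), 1.0)
    return joint / joint.sum()

def mean_information_gain(sym, n_symbols=2):
    """MIG: conditional entropy H(next | current) in bits (disorder measure)."""
    pij = symbol_pair_probs(sym, n_symbols)
    pi = pij.sum(axis=1, keepdims=True)
    nz = pij > 0
    return float(-np.sum(pij[nz] * np.log2((pij / pi)[nz])))

def fluctuation_complexity(sym, n_symbols=2):
    """FC: mean squared net information gain, sum p(i,j) * log2(p(i)/p(j))^2."""
    pij = symbol_pair_probs(sym, n_symbols)
    p = pij.sum(axis=1)
    nz = pij > 0
    ratio = np.log2(np.outer(p, 1.0 / p))
    return float(np.sum(pij[nz] * ratio[nz] ** 2))

# Illustrative use: binarize a signal at its median, then compute both measures.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 30 * np.pi, 4000)) + 0.3 * rng.standard_normal(4000)
symbols = (signal > np.median(signal)).astype(int)
print("MIG:", round(mean_information_gain(symbols), 3))
print("FC :", round(fluctuation_complexity(symbols), 3))
```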
Information theoretical assessment of visual communication with subband coding
NASA Astrophysics Data System (ADS)
Rahman, Zia-ur; Fales, Carl L.; Huck, Friedrich O.
1994-09-01
A well-designed visual communication channel is one which transmits the most information about a radiance field with the fewest artifacts. The role of image processing, encoding and restoration is to improve the quality of visual communication channels by minimizing the error in the transmitted data. Conventionally this role has been analyzed strictly in the digital domain neglecting the effects of image-gathering and image-display devices on the quality of the image. This results in the design of a visual communication channel which is `suboptimal.' We propose an end-to-end assessment of the imaging process which incorporates the influences of these devices in the design of the encoder and the restoration process. This assessment combines Shannon's communication theory with Wiener's restoration filter and with the critical design factors of the image gathering and display devices, thus providing the metrics needed to quantify and optimize the end-to-end performance of the visual communication channel. Results show that the design of the image-gathering device plays a significant role in determining the quality of the visual communication channel and in designing the analysis filters for subband encoding.
ERIC Educational Resources Information Center
Grenn, Michael W.
2013-01-01
This dissertation introduces a theory of information quality to explain macroscopic behavior observed in the systems engineering process. The theory extends principles of Shannon's mathematical theory of communication [1948] and statistical mechanics to information development processes concerned with the flow, transformation, and meaning of…
A quantum Rosetta Stone for the information paradox
NASA Astrophysics Data System (ADS)
Pando Zayas, Leopoldo A.
2014-11-01
The black hole information loss paradox epitomizes the contradictions between general relativity and quantum field theory. The AdS/conformal field theory (CFT) correspondence provides an implicit answer for the information loss paradox in black hole physics by equating a gravity theory with an explicitly unitary field theory. Gravitational collapse in asymptotically AdS spacetimes is generically turbulent. Given that the mechanism to read out the information about correlations functions in the field theory side is plagued by deterministic classical chaos, we argue that quantum chaos might provide the true Rosetta Stone for answering the information paradox in the context of the AdS/CFT correspondence.
Tackling Information Asymmetry in Networks: A New Entropy-Based Ranking Index
NASA Astrophysics Data System (ADS)
Barucca, Paolo; Caldarelli, Guido; Squartini, Tiziano
2018-06-01
Information is a valuable asset in socio-economic systems, a significant part of which is embedded in the network of connections between agents. The different interlinkage patterns that agents establish may, in fact, lead to asymmetries in knowledge of the network structure; since this implies differing abilities to quantify relevant systemic properties (e.g. the risk of contagion in a network of liabilities), agents capable of better estimating (otherwise) inaccessible network properties ultimately have a competitive advantage. In this paper, we address the issue of quantifying the information asymmetry of nodes: to this aim, we define a novel index, InfoRank, intended to rank nodes according to their information content. To do so, each node's ego-network is enforced as a constraint in an entropy-maximization problem, and the resulting uncertainty reduction is used to quantify the node-specific accessible information. We then test the performance of our ranking procedure in terms of reconstruction accuracy and show that it outperforms other centrality measures in identifying the "most informative" nodes. Finally, we discuss the socio-economic implications of network information asymmetry.
Quantitative approaches for assessing ecological and community resilience
Scholars from many different intellectual disciplines have attempted to measure, estimate, or quantify resilience. However, there is growing concern that lack of clarity on the operationalization of the concept will limit its application. In this paper, we discuss the theory, res...
Enhancing quantitative approaches for assessing community resilience
Scholars from many different intellectual disciplines have attempted to measure, estimate, or quantify resilience. However, there is growing concern that lack of clarity on the operationalization of the concept will limit its application. In this paper, we discuss the theory, res...
Does the planck mass run on the cosmological-horizon scale?
Robbers, Georg; Afshordi, Niayesh; Doran, Michael
2008-03-21
Einstein's theory of general relativity contains a universal value of the Planck mass. However, one may envisage that in alternative theories of gravity the effective value of the Planck mass (or Newton's constant), which quantifies the coupling of matter to metric perturbations, can run on the cosmological-horizon scale. In this Letter, we study the consequences of a glitch in the Planck mass from subhorizon to superhorizon scales. We show that current cosmological observations severely constrain this glitch to less than 1.2%.
Ray-optical theory of broadband partially coherent emission
NASA Astrophysics Data System (ADS)
Epstein, Ariel; Tessler, Nir; Einziger, Pinchas D.
2013-04-01
We present a rigorous formulation of the effects of spectral broadening on emission of partially coherent source ensembles embedded in multilayered formations with arbitrarily shaped interfaces, provided geometrical optics is valid. The resulting ray-optical theory, applicable to a variety of optical systems from terahertz lenses to photovoltaic cells, quantifies the fundamental interplay between bandwidth and layer dimensions, and sheds light on common practices in optical analysis of statistical fields, e.g., disregarding multiple reflections or neglecting interference cross terms.
A Mathematical Theory of System Information Flow
2016-06-27
AFRL-AFOSR-VA-TR-2016-0232. Final report by Michael Mislove, Administrators of the Tulane Educational Fund; report date 17-06-2016, covering 27 Mar 2013 - 31 Mar 2016. The project studied systems using techniques from information theory, domain theory and other areas of mathematics and computer science. Over time, the focus shifted...
Creativity, information, and consciousness: The information dynamics of thinking.
Wiggins, Geraint A
2018-05-07
This paper presents a theory of the basic operation of mind, Information Dynamics of Thinking, which is intended for computational implementation and thence empirical testing. It is based on the information theory of Shannon, and treats the mind/brain as an information processing organ that aims to be information-efficient, in that it predicts its world, so as to use information efficiently, and regularly re-represents it, so as to store information efficiently. The theory is presented in context of a background review of various research areas that impinge upon its development. Consequences of the theory and testable hypotheses arising from it are discussed. Copyright © 2018. Published by Elsevier B.V.
Food webs for parasitologists: a review.
Sukhdeo, Michael V K
2010-04-01
This review examines the historical origins of food web theory and explores the reasons why parasites have traditionally been left out of food web studies. Current paradigms may still be an impediment because, despite several attempts, it remains virtually impossible to retrofit parasites into food web theory in any satisfactory manner. It seems clear that parasitologists must return to first principles to solve how best to incorporate parasites into ecological food webs, and a first step in changing paradigms will be to include parasites in the classic ecological patterns that inform food web theory. The limitations of current food web models are discussed with respect to their logistic exclusion of parasites, and the traditional matrix approach in food web studies is critically examined. The well-known energetic perspective on ecosystem organization is presented as a viable alternative to the matrix approach because it provides an intellectually powerful theoretical paradigm for generating testable hypotheses on true food web structure. This review proposes that to make significant contributions to the food web debate, parasitologists must work from the standpoint of natural history to elucidate patterns of biomass, species abundance, and interaction strengths in real food webs, and these will provide the basis for more realistic models that incorporate parasite dynamics into the overall functional dynamics of the whole web. A general conclusion is that only by quantifying the effects of parasites in terms of energy flows (or biomass) will we be able to correctly place parasites into food webs.
Optimization of pressure gauge locations for water distribution systems using entropy theory.
Yoo, Do Guen; Chang, Dong Eil; Jun, Hwandon; Kim, Joong Hoon
2012-12-01
It is essential to select the optimal pressure gauge location for effective management and maintenance of water distribution systems. This study proposes an objective and quantified standard for selecting the optimal pressure gauge location by defining the pressure change at other nodes as a result of demand change at a specific node using entropy theory. Two cases are considered in terms of demand change: that in which demand at all nodes shows peak load by using a peak factor and that comprising the demand change of the normal distribution whose average is the base demand. The actual pressure change pattern is determined by using the emitter function of EPANET to reflect the pressure that changes practically at each node. The optimal pressure gauge location is determined by prioritizing the node that processes the largest amount of information it gives to (giving entropy) and receives from (receiving entropy) the whole system according to the entropy standard. The suggested model is applied to one virtual and one real pipe network, and the optimal pressure gauge location combination is calculated by implementing the sensitivity analysis based on the study results. These analysis results support the following two conclusions. Firstly, the installation priority of the pressure gauge in water distribution networks can be determined with a more objective standard through the entropy theory. Secondly, the model can be used as an efficient decision-making guide for gauge installation in water distribution systems.
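As a rough illustration of the giving/receiving entropy idea described above, the sketch below assumes a sensitivity matrix S in which S[i, j] is the magnitude of the pressure change at node j caused by a demand change at node i; in the study this information would come from EPANET runs, which are not reproduced here, so a random placeholder matrix is used.

```python
import numpy as np

def giving_receiving_entropy(S):
    """S[i, j]: |pressure change at node j| caused by a demand change at node i.
    Row-normalized entropies -> 'giving' entropy of node i;
    column-normalized entropies -> 'receiving' entropy of node j."""
    S = np.asarray(S, dtype=float)
    row = S / S.sum(axis=1, keepdims=True)
    col = S / S.sum(axis=0, keepdims=True)
    eps = 1e-12
    giving = -(row * np.log(row + eps)).sum(axis=1)
    receiving = -(col * np.log(col + eps)).sum(axis=0)
    return giving, receiving

# rank candidate gauge nodes by total entropy (giving + receiving)
S = np.random.rand(6, 6) + np.eye(6)          # placeholder sensitivity matrix
g, r = giving_receiving_entropy(S)
print(np.argsort(-(g + r)))                   # highest-entropy nodes first
```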
Critical Theory and Information Studies: A Marcusean Infusion
ERIC Educational Resources Information Center
Pyati, Ajit K.
2006-01-01
In the field of library and information science, also known as information studies, critical theory is often not included in debates about the discipline's theoretical foundations. This paper argues that the critical theory of Herbert Marcuse, in particular, has a significant contribution to make to the field of information studies. Marcuse's…
Client-Controlled Case Information: A General System Theory Perspective
ERIC Educational Resources Information Center
Fitch, Dale
2004-01-01
The author proposes a model for client control of case information via the World Wide Web built on principles of general system theory. It incorporates the client into the design, resulting in an information structure that differs from traditional human services information-sharing practices. Referencing general system theory, the concepts of…
Different methodologies to quantify uncertainties of air emissions.
Romano, Daniela; Bernetti, Antonella; De Lauretis, Riccardo
2004-10-01
Characterization of the uncertainty associated with air emission estimates is of critical importance, especially in the compilation of air emission inventories. In this paper, two different theories are discussed and applied to evaluate the uncertainty of air emissions. In addition to numerical analysis, which is also recommended in the framework of the United Nations Convention on Climate Change guidelines with reference to Monte Carlo and Bootstrap simulation models, fuzzy analysis is also proposed. The methodologies are discussed and applied to an Italian case study. Air concentration values are measured from two electric power plants: a coal plant consisting of two boilers and a fuel oil plant consisting of four boilers; the pollutants considered are sulphur dioxide (SO2), nitrogen oxides (NOx), carbon monoxide (CO) and particulate matter (PM). Monte Carlo, Bootstrap and fuzzy methods have been applied to estimate the uncertainty of these data. Regarding Monte Carlo, the most accurate results are obtained for Gaussian distributions; a good approximation is also observed for other distributions with fairly regular features, whether positively or negatively asymmetrical. Bootstrap, on the other hand, gives a good uncertainty estimation for irregular and asymmetrical distributions. Fuzzy analysis follows a different logic: data are represented as vague and indefinite, in contrast to the traditional conception of neat, certainly classified and exact data. In addition to randomness (stochastic variability) only, fuzzy theory deals with imprecision (vagueness) of data. The fuzzy variance of the data set was calculated; the results cannot be directly compared with empirical data, but the overall performance of the theory is analysed. Fuzzy theory may appear more suitable for qualitative reasoning than for a quantitative estimation of uncertainty, but it is well suited when little information and few measurements are available and when the distributions of the data are not properly known.
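A minimal sketch of the two numerical approaches mentioned above (the fuzzy analysis is omitted), with invented distributions standing in for the activity data, emission factors and stack measurements of the Italian case study.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Monte Carlo propagation: emission = activity * emission factor ---
activity = rng.normal(1.0e6, 5.0e4, size=100_000)         # e.g. t fuel / yr (assumed Gaussian)
ef_so2 = rng.lognormal(np.log(4.0), 0.3, size=100_000)     # kg SO2 / t fuel (assumed lognormal)
emission = activity * ef_so2
print("MC 95% interval:", np.percentile(emission, [2.5, 97.5]))

# --- Bootstrap on a (hypothetical) set of measured concentrations ---
measured = rng.lognormal(np.log(50.0), 0.6, size=40)       # stand-in for stack measurements
boot_means = np.array([rng.choice(measured, size=measured.size, replace=True).mean()
                       for _ in range(10_000)])
print("bootstrap 95% interval for the mean:", np.percentile(boot_means, [2.5, 97.5]))
```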
Inhibitory mechanism of the matching heuristic in syllogistic reasoning.
Tse, Ping Ping; Moreno Ríos, Sergio; García-Madruga, Juan Antonio; Bajo Molina, María Teresa
2014-11-01
A number of heuristic-based hypotheses have been proposed to explain how people solve syllogisms with automatic processes. In particular, the matching heuristic employs the congruency of the quantifiers in a syllogism—by matching the quantifier of the conclusion with those of the two premises. When the heuristic leads to an invalid conclusion, successful solving of these conflict problems requires the inhibition of automatic heuristic processing. Accordingly, if the automatic processing were based on processing the set of quantifiers, no semantic contents would be inhibited. The mental model theory, however, suggests that people reason using mental models, which always involves semantic processing. Therefore, whatever inhibition occurs in the processing implies the inhibition of the semantic contents. We manipulated the validity of the syllogism and the congruency of the quantifier of its conclusion with those of the two premises according to the matching heuristic. A subsequent lexical decision task (LDT) with related words in the conclusion was used to test any inhibition of the semantic contents after each syllogistic evaluation trial. In the LDT, the facilitation effect of semantic priming diminished after correctly solved conflict syllogisms (match-invalid or mismatch-valid), but was intact after no-conflict syllogisms. The results suggest the involvement of an inhibitory mechanism of semantic contents in syllogistic reasoning when there is a conflict between the output of the syntactic heuristic and actual validity. Our results do not support a uniquely syntactic process of syllogistic reasoning but fit with the predictions based on mental model theory. Copyright © 2014 Elsevier B.V. All rights reserved.
Dacia M. Meneguzzo; Mark H. Hansen
2009-01-01
Fragmentation metrics provide a means of quantifying and describing forest fragmentation. The most common method of calculating these metrics is through the use of Geographic Information System software to analyze raster data, such as a satellite or aerial image of the study area; however, the spatial resolution of the imagery has a significant impact on the results....
Integrated Information Increases with Fitness in the Evolution of Animats
Edlund, Jeffrey A.; Chaumont, Nicolas; Hintze, Arend; Koch, Christof; Tononi, Giulio; Adami, Christoph
2011-01-01
One of the hallmarks of biological organisms is their ability to integrate disparate information sources to optimize their behavior in complex environments. How this capability can be quantified and related to the functional complexity of an organism remains a challenging problem, in particular since organismal functional complexity is not well-defined. We present here several candidate measures that quantify information and integration, and study their dependence on fitness as an artificial agent (“animat”) evolves over thousands of generations to solve a navigation task in a simple, simulated environment. We compare the ability of these measures to predict high fitness with more conventional information-theoretic processing measures. As the animat adapts by increasing its “fit” to the world, information integration and processing increase commensurately along the evolutionary line of descent. We suggest that the correlation of fitness with information integration and with processing measures implies that high fitness requires both information processing as well as integration, but that information integration may be a better measure when the task requires memory. A correlation of measures of information integration (but also information processing) and fitness strongly suggests that these measures reflect the functional complexity of the animat, and that such measures can be used to quantify functional complexity even in the absence of fitness data. PMID:22028639
Information theory in systems biology. Part I: Gene regulatory and metabolic networks.
Mousavian, Zaynab; Kavousi, Kaveh; Masoudi-Nejad, Ali
2016-03-01
"A Mathematical Theory of Communication", was published in 1948 by Claude Shannon to establish a framework that is now known as information theory. In recent decades, information theory has gained much attention in the area of systems biology. The aim of this paper is to provide a systematic review of those contributions that have applied information theory in inferring or understanding of biological systems. Based on the type of system components and the interactions between them, we classify the biological systems into 4 main classes: gene regulatory, metabolic, protein-protein interaction and signaling networks. In the first part of this review, we attempt to introduce most of the existing studies on two types of biological networks, including gene regulatory and metabolic networks, which are founded on the concepts of information theory. Copyright © 2015 Elsevier Ltd. All rights reserved.
[New idea of traditional Chinese medicine quality control based on "composition structure" theory].
Liu, Dan; Jia, Xiaobin; Yu, Danhong
2012-03-01
On the road to developing modern Chinese medicine internationally, a key issue is establishing a reasonable, accurate and quantifiable quality evaluation system that complies with the basic theory of Chinese medicine. Based on an overall understanding of the role of traditional Chinese medicine components, the authors suggest that the "structural components" theory should be embedded in such a system, on the view that Chinese medicine exerts multi-target, multi-channel pharmacodynamic effects founded on the specific microscopic structural relationships between components and between the ingredients within each component. At present, the Chinese Pharmacopoeia controls the quality of Chinese medicine mainly by checking single or multiple target ingredients. This approach is divorced from the overall effectiveness of the medicine, so it cannot control quality from the essence of Chinese medicine. Moreover, the Pharmacopoeia controls only a few effective ingredients at a macro-structural level, which is not sufficient to reflect the integrity and systematic nature of the internal microstructure, that is, the structural components of Chinese medicine. For these reasons, the authors propose a new idea for quality control: quantify the ratio (structural) relationships among components and among the ingredients within each component, and set optimal controlling proportions between components and ingredients. The authors also argue for in-depth study of the micro-quantification of multiple components and multiple ingredients when investigating the material basis of Chinese medicine. This could establish a more rational basis for the Chinese medicine quality control system.
A direct passive method for measuring water and contaminant fluxes in porous media
NASA Astrophysics Data System (ADS)
Hatfield, Kirk; Annable, Michael; Cho, Jaehyun; Rao, P. S. C.; Klammler, Harald
2004-12-01
This paper introduces a new direct method for measuring water and contaminant fluxes in porous media. The method uses a passive flux meter (PFM), which is essentially a self-contained permeable unit properly sized to fit tightly in a screened well or boring. The meter is designed to accommodate a mixed medium of hydrophobic and/or hydrophilic permeable sorbents, which retain dissolved organic/inorganic contaminants present in the groundwater flowing passively through the meter. The contaminant mass intercepted and retained on the sorbent is used to quantify cumulative contaminant mass flux. The sorptive matrix is also impregnated with known amounts of one or more water soluble 'resident tracers'. These tracers are displaced from the sorbent at rates proportional to the groundwater flux; hence, in the current meter design, the resident tracers are used to quantify cumulative groundwater flux. Theory is presented and quantitative tools are developed to interpret the water flux from tracers possessing linear and nonlinear elution profiles. The same theory is extended to derive functional relationships useful for quantifying cumulative contaminant mass flux. To validate theory and demonstrate the passive flux meter, results of multiple box-aquifer experiments are presented and discussed. From these experiments, it is seen that accurate water flux measurements are obtained when the tracer used in calculations resides in the meter at levels representing 20 to 70 percent of the initial condition. 2,4-Dimethyl-3-pentanol (DMP) is used as a surrogate groundwater contaminant in the box aquifer experiments. Cumulative DMP fluxes are measured within 5% of known fluxes. The accuracy of these estimates generally increases with the total volume of water intercepted.
Coarse-grained theory of a realistic tetrahedral liquid model
NASA Astrophysics Data System (ADS)
Procaccia, I.; Regev, I.
2012-02-01
Tetrahedral liquids such as water and silica melt show unusual thermodynamic behavior, such as a density maximum and an increase in specific heat when cooled to low temperatures. Previous work had shown that Monte Carlo and mean-field solutions of a lattice model can exhibit these anomalous properties with or without a phase transition, depending on the values of the different terms in the Hamiltonian. Here we use a somewhat different approach, where we start from a very popular empirical model of tetrahedral liquids, the Stillinger-Weber model, and construct a coarse-grained theory which directly quantifies the local structure of the liquid as a function of volume and temperature. We compare the theory to molecular-dynamics simulations and show that the theory can rationalize the simulation results and the anomalous behavior.
Computational Complexity and Human Decision-Making.
Bossaerts, Peter; Murawski, Carsten
2017-12-01
The rationality principle postulates that decision-makers always choose the best action available to them. It underlies most modern theories of decision-making. The principle does not take into account the difficulty of finding the best option. Here, we propose that computational complexity theory (CCT) provides a framework for defining and quantifying the difficulty of decisions. We review evidence showing that human decision-making is affected by computational complexity. Building on this evidence, we argue that most models of decision-making, and metacognition, are intractable from a computational perspective. To be plausible, future theories of decision-making will need to take into account both the resources required for implementing the computations implied by the theory, and the resource constraints imposed on the decision-maker by biology. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Mcruer, D. T.; Clement, W. F.; Allen, R. W.
1981-01-01
Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.
Optimal free descriptions of many-body theories
NASA Astrophysics Data System (ADS)
Turner, Christopher J.; Meichanetzidis, Konstantinos; Papić, Zlatko; Pachos, Jiannis K.
2017-04-01
Interacting bosons or fermions give rise to some of the most fascinating phases of matter, including high-temperature superconductivity, the fractional quantum Hall effect, quantum spin liquids and Mott insulators. Although these systems are promising for technological applications, they also present conceptual challenges, as they require approaches beyond mean-field and perturbation theory. Here we develop a general framework for identifying the free theory that is closest to a given interacting model in terms of their ground-state correlations. Moreover, we quantify the distance between them using the entanglement spectrum. When this interaction distance is small, the optimal free theory provides an effective description of the low-energy physics of the interacting model. Our construction of the optimal free model is non-perturbative in nature; thus, it offers a theoretical framework for investigating strongly correlated systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erol, V.; Netas Telecommunication Inc., Istanbul
Entanglement has been studied extensively for understanding the mysteries of non-classical correlations between quantum systems. In the bipartite case, there are well-known monotones for quantifying entanglement, such as concurrence, relative entropy of entanglement (REE) and negativity, which cannot be increased via local operations. The study of these monotones has been a hot topic in quantum information [1-7], in order to understand the role of entanglement in this discipline. It can be observed that from any arbitrary quantum pure state a mixed state can be obtained. A natural generalization of this observation is to consider local operations and classical communication (LOCC) transformations between general pure states of two parties. Although this question is a little more difficult, a complete solution has been developed using the mathematical framework of majorization theory [8]. In this work, we analyze the relation between the entanglement monotones concurrence and negativity with respect to majorization for general two-level quantum systems of two particles.
Unraveling dynamics of human physical activity patterns in chronic pain conditions
NASA Astrophysics Data System (ADS)
Paraschiv-Ionescu, Anisoara; Buchser, Eric; Aminian, Kamiar
2013-06-01
Chronic pain is a complex, disabling experience that negatively affects cognitive, affective and physical functions as well as behavior. Although the interaction between chronic pain and physical functioning is a well-accepted paradigm in clinical research, understanding how pain affects individuals' daily-life behavior remains a challenging task. Here we develop a methodological framework that allows disruptive, pain-related interferences with real-life physical activity to be objectively documented. The results reveal that meaningful information is contained in the temporal dynamics of activity patterns and that an analytical model based on the theory of bivariate point processes can be used to describe physical activity behavior. The model parameters capture the dynamic interdependence between periods and events and determine a `signature' of the activity pattern. The study is likely to contribute to the clinical understanding of complex pain/disease-related behaviors and to establish a unified mathematical framework to quantify the complex dynamics of various human activities.
Measuring the impact of final demand on global production system based on Markov process
NASA Astrophysics Data System (ADS)
Xing, Lizhi; Guan, Jun; Wu, Shan
2018-07-01
The input-output table is a comprehensive and detailed description of a national economic system, consisting of supply and demand information among the various industrial sectors. Complex network theory, a method for measuring the structure of complex systems, can depict the structural properties of social and economic systems and reveal the complicated relationships between the inner hierarchies and the external macroeconomic functions. This paper measures the degree of globalization of industrial sectors on the global value chain. Firstly, it constructs inter-country input-output network models to reproduce the topological structure of the global economic system. Secondly, it regards the propagation of intermediate goods on the global value chain as a Markov process and introduces counting first passage betweenness to quantify the added processing amount when global final demand stimulates this production system. Thirdly, it analyzes the features of globalization at both the global and the country-sector level.
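The paper's own measure, counting first passage betweenness, is not reproduced here; as background, the sketch below only illustrates the generic Markov-chain reading of an input-output system, in which the fundamental matrix of the chain coincides with the Leontief inverse and shows how a final-demand vector propagates into gross output. The three-sector coefficient matrix is hypothetical.

```python
import numpy as np

# Hypothetical 3-sector technical-coefficient matrix A
# (column j = inputs required per unit output of sector j)
A = np.array([[0.10, 0.30, 0.05],
              [0.20, 0.10, 0.25],
              [0.05, 0.15, 0.10]])

# Fundamental matrix of the absorbing chain / Leontief inverse: L = (I - A)^{-1}
L = np.linalg.inv(np.eye(3) - A)

# Gross output x needed to satisfy a final-demand vector f,
# i.e. how a demand stimulus propagates through the production network.
f = np.array([100.0, 50.0, 80.0])
x = L @ f
print("Leontief inverse:\n", np.round(L, 3))
print("gross output per sector:", np.round(x, 1))
```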
An ultrasound-guided fluorescence tomography system: design and specification
NASA Astrophysics Data System (ADS)
D'Souza, Alisha V.; Flynn, Brendan P.; Kanick, Stephen C.; Torosean, Sason; Davis, Scott C.; Maytin, Edward V.; Hasan, Tayyaba; Pogue, Brian W.
2013-03-01
An ultrasound-guided fluorescence molecular tomography system is under development for in vivo quantification of Protoporphyrin IX (PpIX) during Aminolevulinic Acid Photodynamic Therapy (ALA-PDT) of Basal Cell Carcinoma. The system is designed to combine fiber-based spectral sampling of PpIX fluorescence emission with co-registered ultrasound images to quantify local fluorophore concentration. A single white light source is used to provide an estimate of the bulk optical properties of tissue. Optical data are obtained by sequential illumination with a 633 nm laser source at 4 linear locations, with parallel detection at 5 locations interspersed between the sources. Tissue regions from segmented ultrasound images, optical boundary data, white-light-informed optical properties and diffusion theory are used to estimate the fluorophore concentration in these regions. Our system and methods allow interrogation of both superficial and deep tissue locations up to PpIX concentrations of 0.025 µg/ml.
Structures of Neural Correlation and How They Favor Coding
Franke, Felix; Fiscella, Michele; Sevelev, Maksim; Roska, Botond; Hierlemann, Andreas; da Silveira, Rava Azeredo
2017-01-01
Summary The neural representation of information suffers from “noise”—the trial-to-trial variability in the response of neurons. The impact of correlated noise upon population coding has been debated, but a direct connection between theory and experiment remains tenuous. Here, we substantiate this connection and propose a refined theoretical picture. Using simultaneous recordings from a population of direction-selective retinal ganglion cells, we demonstrate that coding benefits from noise correlations. The effect is appreciable already in small populations, yet it is a collective phenomenon. Furthermore, the stimulus-dependent structure of correlation is key. We develop simple functional models that capture the stimulus-dependent statistics. We then use them to quantify the performance of population coding, which depends upon interplays of feature sensitivities and noise correlations in the population. Because favorable structures of correlation emerge robustly in circuits with noisy, nonlinear elements, they will arise and benefit coding beyond the confines of retina. PMID:26796692
NASA Astrophysics Data System (ADS)
Beltran, Mario A.; Paganin, David M.; Pelliccia, Daniele
2018-05-01
A simple method of phase-and-amplitude extraction is derived that corrects for image blurring induced by partially spatially coherent incident illumination using only a single intensity image as input. The method is based on Fresnel diffraction theory in the high-Fresnel-number regime, merged with the space-frequency description formalism used to quantify partially coherent fields, and assumes that the object under study is composed of a single material. A priori knowledge of the object's complex refractive index and information obtained by characterizing the spatial coherence of the source are required. The algorithm was applied to propagation-based phase-contrast data measured with a laboratory-based micro-focus x-ray source. The blurring due to the finite spatial extent of the source is embedded within the algorithm as a simple correction term to the so-called Paganin algorithm and is also numerically stable in the presence of noise.
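For context, a minimal sketch of the baseline single-material (Paganin-type) retrieval step that the abstract says is being corrected; the partial-coherence correction term itself is not included, and the values of delta, mu, propagation distance and pixel size below are assumed for illustration only.

```python
import numpy as np

def paganin_thickness(intensity, delta, mu, dist, pixel, flat=1.0):
    """Baseline single-material phase retrieval (Paganin-style), without the
    partial-coherence correction discussed in the abstract.
    intensity: propagation-based phase-contrast image; flat: incident intensity."""
    I = np.asarray(intensity, dtype=float) / flat
    ny, nx = I.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel)       # angular spatial frequencies
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=pixel)
    k2 = kx[None, :]**2 + ky[:, None]**2
    filt = 1.0 + dist * delta / mu * k2                # low-pass filter in Fourier space
    retrieved = np.real(np.fft.ifft2(np.fft.fft2(I) / filt))
    return -np.log(np.clip(retrieved, 1e-8, None)) / mu   # projected thickness map

# assumed, illustrative parameters for a weakly absorbing single material
img = 1.0 + 0.01 * np.random.default_rng(2).normal(size=(256, 256))
t = paganin_thickness(img, delta=5e-7, mu=50.0, dist=0.5, pixel=5e-6)
```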
Quantum discord bounds the amount of distributed entanglement.
Chuan, T K; Maillard, J; Modi, K; Paterek, T; Paternostro, M; Piani, M
2012-08-17
The ability to distribute quantum entanglement is a prerequisite for many fundamental tests of quantum theory and numerous quantum information protocols. Two distant parties can increase the amount of entanglement between them by means of quantum communication encoded in a carrier that is sent from one party to the other. Intriguingly, entanglement can be increased even when the exchanged carrier is not entangled with the parties. However, in light of the defining property of entanglement stating that it cannot increase under classical communication, the carrier must be quantum. Here we show that, in general, the increase of relative entropy of entanglement between two remote parties is bounded by the amount of nonclassical correlations of the carrier with the parties as quantified by the relative entropy of discord. We study implications of this bound, provide new examples of entanglement distribution via unentangled states, and put further limits on this phenomenon.
NASA Astrophysics Data System (ADS)
Moslehi, Mahsa; de Barros, Felipe P. J.
2017-01-01
We investigate how the uncertainty stemming from disordered porous media that display long-range correlation in the hydraulic conductivity (K) field propagates to predictions of environmental performance metrics (EPMs). In this study, the EPMs are quantities that are of relevance to risk analysis and remediation, such as peak flux-averaged concentration and early and late arrival times, among others. By using stochastic simulations, we quantify the uncertainty associated with the EPMs for a given disordered spatial structure of the K-field and identify the probability distribution function (PDF) model that best captures the statistics of the EPMs of interest. Results indicate that the probabilistic distribution of the EPMs considered in this study follows a lognormal PDF. Finally, through the use of information theory, we reveal how the persistent/anti-persistent correlation structure of the K-field influences the EPMs and the corresponding uncertainties.
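A minimal sketch of the distribution-fitting step described above: fitting a lognormal PDF to a sample of an environmental performance metric and checking the fit. The sample here is synthetic, not the stochastic-simulation output of the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
peak_conc = rng.lognormal(mean=-1.0, sigma=0.8, size=500)   # synthetic EPM sample

shape, loc, scale = stats.lognorm.fit(peak_conc, floc=0)    # fit lognormal with loc fixed at 0
ks_stat, p_value = stats.kstest(peak_conc, 'lognorm', args=(shape, loc, scale))
print(f"sigma={shape:.2f}, median={scale:.3f}, KS p-value={p_value:.2f}")
```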
Displacement Models for THUNDER Actuators having General Loads and Boundary Conditions
NASA Technical Reports Server (NTRS)
Wieman, Robert; Smith, Ralph C.; Kackley, Tyson; Ounaies, Zoubeida; Bernd, Jeff; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
This paper summarizes techniques for quantifying the displacements generated in THUNDER actuators in response to applied voltages for a variety of boundary conditions and exogenous loads. The PDE (partial differential equations) models for the actuators are constructed in two steps. In the first, previously developed theory quantifying thermal and electrostatic strains is employed to model the actuator shapes which result from the manufacturing process and subsequent repoling. Newtonian principles are then employed to develop PDE models which quantify displacements in the actuator due to voltage inputs to the piezoceramic patch. For this analysis, drive levels are assumed to be moderate so that linear piezoelectric relations can be employed. Finite element methods for discretizing the models are developed and the performance of the discretized models are illustrated through comparison with experimental data.
A Thermodynamically General Theory for Convective Circulations and Vortices
NASA Astrophysics Data System (ADS)
Renno, N. O.
2007-12-01
Convective circulations and vortices are common features of atmospheres that absorb low-entropy energy at higher temperatures than they reject high-entropy energy to space. These circulations range from small to planetary scale and play an important role in the vertical transport of heat, momentum, and tracer species. Thus, the development of theoretical models for convective phenomena is important to our understanding of many basic features of planetary atmospheres. A thermodynamically general theory for convective circulations and vortices is proposed. The theory includes irreversible processes and quantifies the pressure drop between the environment and any point in a convective updraft. The article's main result is that the proposed theory provides an expression for the pressure drop along streamlines or streamtubes that is a generalization of Bernoulli's equation to convective circulations. We speculate that the proposed theory not only explains the intensity, but also sheds light on other basic features of convective circulations and vortices.
Client-controlled case information: a general system theory perspective.
Fitch, Dale
2004-07-01
The author proposes a model for client control of case information via the World Wide Web built on principles of general system theory. It incorporates the client into the design, resulting in an information structure that differs from traditional human services information-sharing practices. Referencing general system theory, the concepts of controller and controlled system, as well as entropy and negentropy, are applied to the information flow and autopoietic behavior as they relate to the boundary-maintaining functions of today's organizations. The author's conclusions synthesize general system theory and human services values to lay the foundation for an information-sharing framework for human services in the 21st century.
Bayesian Methods for Effective Field Theories
NASA Astrophysics Data System (ADS)
Wesolowski, Sarah
Microscopic predictions of the properties of atomic nuclei have reached a high level of precision in the past decade. This progress mandates improved uncertainty quantification (UQ) for a robust comparison of experiment with theory. With the uncertainty from many-body methods under control, calculations are now sensitive to the input inter-nucleon interactions. These interactions include parameters that must be fit to experiment, inducing both uncertainty from the fit and from missing physics in the operator structure of the Hamiltonian. Furthermore, the implementation of the inter-nucleon interactions is not unique, which presents the additional problem of assessing results using different interactions. Effective field theories (EFTs) take advantage of a separation of high- and low-energy scales in the problem to form a power-counting scheme that allows the organization of terms in the Hamiltonian based on their expected contribution to observable predictions. This scheme gives a natural framework for quantification of uncertainty due to missing physics. The free parameters of the EFT, called the low-energy constants (LECs), must be fit to data, but in a properly constructed EFT these constants will be natural-sized, i.e., of order unity. The constraints provided by the EFT, namely the size of the systematic uncertainty from truncation of the theory and the natural size of the LECs, are assumed information even before a calculation is performed or a fit is done. Bayesian statistical methods provide a framework for treating uncertainties that naturally incorporates prior information as well as putting stochastic and systematic uncertainties on an equal footing. For EFT UQ Bayesian methods allow the relevant EFT properties to be incorporated quantitatively as prior probability distribution functions (pdfs). Following the logic of probability theory, observable quantities and underlying physical parameters such as the EFT breakdown scale may be expressed as pdfs that incorporate the prior pdfs. Problems of model selection, such as distinguishing between competing EFT implementations, are also natural in a Bayesian framework. In this thesis we focus on two complementary topics for EFT UQ using Bayesian methods--quantifying EFT truncation uncertainty and parameter estimation for LECs. Using the order-by-order calculations and underlying EFT constraints as prior information, we show how to estimate EFT truncation uncertainties. We then apply the result to calculating truncation uncertainties on predictions of nucleon-nucleon scattering in chiral effective field theory. We apply model-checking diagnostics to our calculations to ensure that the statistical model of truncation uncertainty produces consistent results. A framework for EFT parameter estimation based on EFT convergence properties and naturalness is developed which includes a series of diagnostics to ensure the extraction of the maximum amount of available information from data to estimate LECs with minimal bias. We develop this framework using model EFTs and apply it to the problem of extrapolating lattice quantum chromodynamics results for the nucleon mass. We then apply aspects of the parameter estimation framework to perform case studies in chiral EFT parameter estimation, investigating a possible operator redundancy at fourth order in the chiral expansion and the appropriate inclusion of truncation uncertainty in estimating LECs.
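As a sketch of the truncation-uncertainty bookkeeping described above (the full Bayesian treatment with explicit priors and marginalization is richer than this), the snippet extracts order-by-order expansion coefficients and uses their observed size to set the scale of the first omitted term. The expansion parameter Q, reference scale and order-by-order values are made up for illustration.

```python
import numpy as np

def expansion_coefficients(predictions, y_ref, Q):
    """c_n from order-by-order predictions y_0, y_1, ..., assuming
    y_k = y_ref * sum_{n<=k} c_n Q^n."""
    y = np.asarray(predictions, dtype=float)
    corrections = np.diff(y, prepend=0.0)          # order-by-order shifts
    n = np.arange(len(y))
    return corrections / (y_ref * Q**n)

def truncation_error(predictions, y_ref, Q):
    """Naive first-omitted-term estimate: cbar * y_ref * Q^(k+1) / (1 - Q),
    with cbar taken as the rms of the observed (non-leading) coefficients."""
    c = expansion_coefficients(predictions, y_ref, Q)
    cbar = np.sqrt(np.mean(c[1:]**2)) if len(c) > 1 else abs(c[0])
    k = len(predictions) - 1
    return cbar * abs(y_ref) * Q**(k + 1) / (1.0 - Q)

# made-up order-by-order predictions of some observable with Q ~ 0.3
orders = [10.0, 12.5, 12.1, 12.2]
print(truncation_error(orders, y_ref=10.0, Q=0.3))
```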
The Fallacy of Quantifying Risk
2012-09-01
Defense AT&L, September-October 2012, p. 18. David E. Frick, Ph.D. Frick is a 35-year veteran of the Department of... a key to risk analysis was "choosing the right technique" of quantifying risk. The weakness in this argument stems not from the assertion that one... (of information about the enemy), yet achieving great outcomes. Attempts at quantifying risk are not, in and of themselves, objectionable. Prudence...
The design of patient decision support interventions: addressing the theory-practice gap.
Elwyn, Glyn; Stiel, Mareike; Durand, Marie-Anne; Boivin, Jacky
2011-08-01
Although an increasing number of decision support interventions for patients (including decision aids) are produced, few make explicit use of theory. We argue the importance of using theory to guide design. The aim of this work was to address this theory-practice gap and to examine how a range of selected decision-making theories could inform the design and evaluation of decision support interventions. We reviewed the decision-making literature and selected relevant theories. We assessed their key principles, theoretical pathways and predictions in order to determine how they could inform the design of two core components of decision support interventions, namely, information and deliberation components and to specify theory-based outcome measures. Eight theories were selected: (1) the expected utility theory; (2) the conflict model of decision making; (3) prospect theory; (4) fuzzy-trace theory; (5) the differentiation and consolidation theory; (6) the ecological rationality theory; (7) the rational-emotional model of decision avoidance; and finally, (8) the Attend, React, Explain, Adapt model of affective forecasting. Some theories have strong relevance to the information design (e.g. prospect theory); some are more relevant to deliberation processes (conflict theory, differentiation theory and ecological validity). None of the theories in isolation was sufficient to inform the design of all the necessary components of decision support interventions. It was also clear that most work in theory-building has focused on explaining or describing how humans think rather than on how tools could be designed to help humans make good decisions. It is not surprising therefore that a large theory-practice gap exists as we consider decision support for patients. There was no relevant theory that integrated all the necessary contributions to the task of making good decisions in collaborative interactions. Initiatives such as the International Patient Decision Aids Standards Collaboration influence standards for the design of decision support interventions. However, this analysis points to the need to undertake more work in providing theoretical foundations for these interventions. © 2010 Blackwell Publishing Ltd.
Guo, Huan; Morales-Bayuelo, Alejandro; Xu, Tianlv; Momen, Roya; Wang, Lingling; Yang, Ping; Kirk, Steven R; Jenkins, Samantha
2016-12-05
Currently, the theories that explain and predict the classification of the electronic reorganization due to the torquoselectivity of a ring-opening reaction cannot accommodate the directional character of the reaction pathway; torquoselectivity is a type of stereoselectivity and is therefore dependent on the pathway. In this investigation we therefore introduced new measures from the quantum theory of atoms in molecules and the stress tensor to clearly distinguish and quantify the transition states of the inward (TSIC) and outward (TSOC) conrotations of competitive ring-opening reactions of 3-(trifluoromethyl)cyclobut-1-ene and 1-cyano-1-methylcyclobutene. We find that the metallicity ξ(r_b) of the ring-opening bond does not occur exactly at the transition state in agreement with transition state theory. The vector-based stress tensor response β_σ was used to distinguish the effect of the CN, CH3, and CF3 groups on the TSIC and TSOC paths, consistent with the ellipticity ε, the total local energy density H(r_b) and the stress tensor stiffness S_σ. We determine the directional properties of the TSIC and TSOC ring-opening reactions by constructing a stress tensor U_σ^TS space with trajectories T_σ^TS(s) of length l in real space; a longer l correlates with the lowest density-functional-theory-evaluated total energy barrier and hence is more thermodynamically favored. © 2016 Wiley Periodicals, Inc.
DuFour, Mark R.; May, Cassandra J.; Roseman, Edward F.; Ludsin, Stuart A.; Vandergoot, Christopher S.; Pritt, Jeremy J.; Fraker, Michael E.; Davis, Jeremiah J.; Tyson, Jeffery T.; Miner, Jeffery G.; Marschall, Elizabeth A.; Mayer, Christine M.
2015-01-01
Habitat degradation and harvest have upset the natural buffering mechanism (i.e., portfolio effects) of many large-scale multi-stock fisheries by reducing spawning stock diversity that is vital for generating population stability and resilience. The application of portfolio theory offers a means to guide management activities by quantifying the importance of multi-stock dynamics and suggesting conservation and restoration strategies to improve naturally occurring portfolio effects. Our application of portfolio theory to Lake Erie Sander vitreus (walleye), a large population that is supported by riverine and open-lake reef spawning stocks, has shown that portfolio effects generated by annual inter-stock larval fish production are currently suboptimal when compared to potential buffering capacity. Reduced production from riverine stocks has resulted in a single open-lake reef stock dominating larval production, and in turn, high inter-annual recruitment variability during recent years. Our analyses have shown (1) a weak average correlation between annual river and reef larval production (ρ̄ = 0.24), suggesting that a natural buffering capacity exists in the population, and (2) expanded annual production of larvae (potential recruits) from riverine stocks could stabilize the fishery by dampening inter-annual recruitment variation. Ultimately, our results demonstrate how portfolio theory can be used to quantify the importance of spawning stock diversity and guide management on ecologically relevant scales (i.e., spawning stocks) leading to greater stability and resilience of multi-stock populations and fisheries.
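A rough sketch of the portfolio bookkeeping used in such analyses: the mean pairwise correlation between annual stock-level production series and the dampening of the aggregate coefficient of variation relative to single stocks. The series below are synthetic, not the Lake Erie walleye data.

```python
import numpy as np

rng = np.random.default_rng(4)
years, n_stocks = 30, 4
# synthetic log-production series with weak inter-stock correlation (~0.25)
cov = 0.25 + 0.75 * np.eye(n_stocks)
log_prod = rng.multivariate_normal(np.zeros(n_stocks), cov, size=years)
prod = np.exp(log_prod)

corr = np.corrcoef(prod, rowvar=False)
mean_rho = corr[np.triu_indices(n_stocks, k=1)].mean()

cv = lambda x: x.std() / x.mean()
cv_single = np.mean([cv(prod[:, i]) for i in range(n_stocks)])
cv_portfolio = cv(prod.sum(axis=1))            # aggregate (portfolio) variability
print(f"mean pairwise rho = {mean_rho:.2f}")
print(f"average single-stock CV = {cv_single:.2f}, aggregate CV = {cv_portfolio:.2f}")
```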
NASA Astrophysics Data System (ADS)
Ruiz-Cabello, F. Javier Montes; Maroni, Plinio; Borkovec, Michal
2013-06-01
Force measurements between three types of latex particles of diameters down to 1 μm with sulfate and carboxyl surface functionalities were carried out with the multi-particle colloidal probe technique. The experiments were performed in monovalent electrolyte up to concentrations of about 5 mM. The force profiles could be quantified with the theory of Derjaguin, Landau, Verwey, and Overbeek (DLVO) by invoking non-retarded van der Waals forces and the Poisson-Boltzmann description of double layer forces within the constant regulation approximation. The forces measured in the symmetric systems were used to extract particle and surface properties, namely, the Hamaker constant, surface potentials, and regulation parameters. The regulation parameter is found to be independent of solution composition. With these values at hand, the DLVO theory is capable to accurately predict the measured forces in the asymmetric systems down to distances of 2-3 nm without adjustable parameters. This success indicates that DLVO theory is highly reliable to quantify interaction forces in such systems. However, charge regulation effects are found to be important, and they must be considered to obtain correct description of the forces. The use of the classical constant charge or constant potential boundary conditions may lead to erroneous results. To make reliable predictions of the force profiles, the surface potentials must be extracted from direct force measurements too. For highly charged surfaces, the commonly used electrophoresis techniques are found to yield incorrect estimates of this quantity.
Montes Ruiz-Cabello, F Javier; Maroni, Plinio; Borkovec, Michal
2013-06-21
Force measurements between three types of latex particles of diameters down to 1 μm with sulfate and carboxyl surface functionalities were carried out with the multi-particle colloidal probe technique. The experiments were performed in monovalent electrolyte up to concentrations of about 5 mM. The force profiles could be quantified with the theory of Derjaguin, Landau, Verwey, and Overbeek (DLVO) by invoking non-retarded van der Waals forces and the Poisson-Boltzmann description of double layer forces within the constant regulation approximation. The forces measured in the symmetric systems were used to extract particle and surface properties, namely, the Hamaker constant, surface potentials, and regulation parameters. The regulation parameter is found to be independent of solution composition. With these values at hand, the DLVO theory is capable to accurately predict the measured forces in the asymmetric systems down to distances of 2-3 nm without adjustable parameters. This success indicates that DLVO theory is highly reliable to quantify interaction forces in such systems. However, charge regulation effects are found to be important, and they must be considered to obtain correct description of the forces. The use of the classical constant charge or constant potential boundary conditions may lead to erroneous results. To make reliable predictions of the force profiles, the surface potentials must be extracted from direct force measurements too. For highly charged surfaces, the commonly used electrophoresis techniques are found to yield incorrect estimates of this quantity.
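The two records above fit forces with the full Poisson-Boltzmann description and charge regulation; the sketch below only assembles the textbook DLVO ingredients (non-retarded van der Waals attraction plus a weak-overlap, constant-potential double-layer term in the Derjaguin approximation), so it illustrates the functional form rather than reproducing the fitted profiles. All parameter values are assumed for illustration.

```python
import numpy as np
from scipy.constants import k as kB, e, epsilon_0, N_A

def dlvo_force(h, R_eff, psi1, psi2, ionic_strength, hamaker, T=298.15, eps_r=78.5):
    """Sphere-sphere DLVO force (N) vs surface separation h (m): Derjaguin
    approximation, weak-overlap double layer, non-retarded van der Waals."""
    rho_tot = 2 * N_A * 1000.0 * ionic_strength                 # total ion density, 1:1 salt
    kappa = np.sqrt(rho_tot * e**2 / (eps_r * epsilon_0 * kB * T))  # inverse Debye length
    gamma = lambda psi: np.tanh(e * psi / (4 * kB * T))
    f_dl = (2 * np.pi * R_eff * 64 * kB * T * (rho_tot / 2)
            * gamma(psi1) * gamma(psi2) / kappa * np.exp(-kappa * h))
    f_vdw = -hamaker * R_eff / (6 * h**2)
    return f_dl + f_vdw

h = np.linspace(1e-9, 30e-9, 200)
F = dlvo_force(h, R_eff=0.25e-6, psi1=-0.06, psi2=-0.06,
               ionic_strength=1e-3, hamaker=3.5e-21)
print(F[:3])
```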
USDA-ARS?s Scientific Manuscript database
Spatio-temporal measurements of landform evolution provide the basis for process-based theory formulation and validation. Overtime, field measurement of landforms has increased significantly worldwide, driven primarily by the availability of new surveying technologies. However, there is not a standa...
ESRP approach to using final ecosystem services
The U.S. Environmental Protection Agency has developed the ecosystem Services Research Program (ESRP) as one of its major research efforts. The goal of this program is to create “A comprehensive theory and practice for quantifying ecosystem services so that their value and their...
Cancer biomarker discovery: the entropic hallmark.
Berretta, Regina; Moscato, Pablo
2010-08-18
It is a commonly accepted belief that cancer cells modify their transcriptional state during the progression of the disease. We propose that the progression of cancer cells towards malignant phenotypes can be efficiently tracked using high-throughput technologies that follow the gradual changes observed in the gene expression profiles by employing Shannon's mathematical theory of communication. Methods based on Information Theory can then quantify the divergence of cancer cells' transcriptional profiles from those of normally appearing cells of the originating tissues. The relevance of the proposed methods can be evaluated using microarray datasets available in the public domain, but the method is in principle applicable to other high-throughput methods. Using melanoma and prostate cancer datasets we illustrate how it is possible to employ Shannon Entropy and the Jensen-Shannon divergence to trace the transcriptional changes during progression of the disease. We establish how the variations of these two measures correlate with established biomarkers of cancer progression. The Information Theory measures allow us to identify novel biomarkers for both progressive and relatively more sudden transcriptional changes leading to malignant phenotypes. At the same time, the methodology was able to validate a large number of genes and processes that seem to be implicated in the progression of melanoma and prostate cancer. We thus present a quantitative guiding rule, a new unifying hallmark of cancer: the cancer cell's transcriptome changes lead to measurable observed transitions of Normalized Shannon Entropy values (as measured by high-throughput technologies). At the same time, tumor cells increase their divergence from the normal tissue profile, increasing their disorder via creation of states that we might not directly measure. This unifying hallmark allows, via the Jensen-Shannon divergence, identification of the arrow of time of the processes from the gene expression profiles, and helps to map the phenotypical and molecular hallmarks of specific cancer subtypes. The deep mathematical basis of the approach allows us to suggest that this principle is, hopefully, of general applicability for other diseases.
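A minimal version of the two quantities named above, treating expression profiles as probability distributions; how raw microarray intensities are mapped to probabilities involves normalization choices the abstract does not spell out, so the vectors below are placeholders.

```python
import numpy as np

def normalized_entropy(p):
    """Shannon entropy of a nonnegative profile, normalized to [0, 1]."""
    p = np.asarray(p, dtype=float)
    n = p.size
    p = p / p.sum()
    nz = p[p > 0]
    return -(nz * np.log2(nz)).sum() / np.log2(n)

def jensen_shannon(p, q):
    """Jensen-Shannon divergence (bits) between two normalized profiles."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(np.where(a > 0, a * np.log2(a / b), 0.0))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

normal_profile = np.array([5.0, 4.8, 5.2, 0.5, 0.4])   # hypothetical expression vectors
tumor_profile = np.array([2.0, 6.5, 1.0, 4.0, 3.5])
print(normalized_entropy(normal_profile), normalized_entropy(tumor_profile))
print(jensen_shannon(normal_profile, tumor_profile))   # divergence from the normal tissue profile
```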
Application of the Hard and Soft, Acids and Bases (HSAB) theory to toxicant--target interactions.
Lopachin, Richard M; Gavin, Terrence; Decaprio, Anthony; Barber, David S
2012-02-20
Many chemical toxicants and/or their active metabolites are electrophiles that cause cell injury by forming covalent bonds with nucleophilic targets on biological macromolecules. Covalent reactions between nucleophilic and electrophilic reagents are, however, discriminatory since there is a significant degree of selectivity associated with these interactions. Over the course of the past few decades, the theory of Hard and Soft, Acids and Bases (HSAB) has proven to be a useful tool in predicting the outcome of such reactions. This concept utilizes the inherent electronic characteristic of polarizability to define, for example, reacting electrophiles and nucleophiles as either hard or soft. These HSAB definitions have been successfully applied to chemical-induced toxicity in biological systems. Thus, according to this principle, a toxic electrophile reacts preferentially with biological targets of similar hardness or softness. The soft/hard classification of a xenobiotic electrophile has obvious utility in discerning plausible biological targets and molecular mechanisms of toxicity. The purpose of this perspective is to discuss the HSAB theory of electrophiles and nucleophiles within a toxicological framework. In principle, covalent bond formation can be described by using the properties of their outermost or frontier orbitals. Because these orbital energies for most chemicals can be calculated using quantum mechanical models, it is possible to quantify the relative softness (σ) or hardness (η) of electrophiles or nucleophiles and to subsequently convert this information into useful indices of reactivity. This atomic level information can provide insight into the design of corroborative laboratory research and thereby help investigators discern corresponding molecular sites and mechanisms of toxicant action. The use of HSAB parameters has also been instrumental in the development and identification of potential nucleophilic cytoprotectants that can scavenge toxic electrophiles. Clearly, the difficult task of delineating molecular sites and mechanisms of toxicant action can be facilitated by the application of this quantitative approach.
APPLICATION OF THE HARD AND SOFT, ACIDS AND BASES (HSAB) THEORY TO TOXICANT-TARGET INTERACTIONS
LoPachin, Richard M.; Gavin, Terrence; DeCaprio, Anthony; Barber, David S.
2011-01-01
Many chemical toxicants and/or their active metabolites are electrophiles that cause cell injury by forming covalent bonds with nucleophilic targets on biological macromolecules. Covalent reactions between nucleophilic and electrophilic reagents are however discriminatory, since there is a significant degree of selectivity associated with these interactions. Over the course of the past few decades, the theory of Hard and Soft, Acids and Bases (HSAB) has proven to be a useful tool in predicting the outcome of such reactions. This concept utilizes the inherent electronic characteristic of polarizability to define, for example, reacting electrophiles and nucleophiles as either hard or soft. These HSAB definitions have been successfully applied to chemical-induced toxicity in biological systems. Thus, according to this principle, a toxic electrophile reacts preferentially with biological targets of similar hardness or softness. The soft/hard classification of a xenobiotic electrophile has obvious utility in discerning plausible biological targets and molecular mechanisms of toxicity. The purpose of this Perspective is to discuss the HSAB theory of electrophiles and nucleophiles within a toxicological framework. In principle, covalent bond formation can be described by using the properties of their outermost or frontier orbitals. Because these orbital energies for most chemicals can be calculated using quantum mechanical models, it is possible to quantify the relative softness (σ) or hardness (η) of electrophiles or nucleophiles and to subsequently convert this information into useful indices of reactivity. This atomic level information can provide insight into the design of corroborative laboratory research and thereby help investigators discern corresponding molecular sites and mechanisms of toxicant action. The use of HSAB parameters has also been instrumental in the development and identification of potential nucleophilic cytoprotectants that can scavenge toxic electrophiles. Clearly, the difficult task of delineating molecular sites and mechanisms of toxicant action can be facilitated by the application of this quantitative approach. PMID:22053936
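The hardness and softness indices mentioned above are commonly computed from frontier-orbital energies with Koopmans-style approximations; the sketch below uses those textbook expressions (conventions for factors of two vary across the literature), with placeholder orbital energies rather than values from the papers.

```python
def hsab_descriptors(e_homo, e_lumo):
    """Conceptual-DFT reactivity indices from frontier orbital energies (eV),
    using the Koopmans-style approximations I ~ -E_HOMO, A ~ -E_LUMO."""
    I, A = -e_homo, -e_lumo
    mu = -(I + A) / 2.0          # chemical potential (negative of electronegativity)
    eta = (I - A) / 2.0          # hardness
    sigma = 1.0 / eta            # softness (some authors use 1/(2*eta))
    omega = mu**2 / (2.0 * eta)  # electrophilicity index
    return {"mu": mu, "eta": eta, "sigma": sigma, "omega": omega}

# placeholder orbital energies for a softer vs a harder electrophile (eV)
print(hsab_descriptors(e_homo=-9.0, e_lumo=-3.5))   # small HOMO-LUMO gap -> softer
print(hsab_descriptors(e_homo=-10.5, e_lumo=-0.5))  # large gap -> harder
```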
A Probabilistic Framework for the Validation and Certification of Computer Simulations
NASA Technical Reports Server (NTRS)
Ghanem, Roger; Knio, Omar
2000-01-01
The paper presents a methodology for quantifying, propagating, and managing the uncertainty in the data required to initialize computer simulations of complex phenomena. The purpose of the methodology is to permit the quantitative assessment of a certification level to be associated with the predictions from the simulations, as well as the design of a data acquisition strategy to achieve a target level of certification. The value of a methodology that can address the above issues is obvious, especially in light of the trend in the availability of computational resources, as well as the trend in sensor technology. These two trends make it possible to probe physical phenomena both with physical sensors, as well as with complex models, at previously inconceivable levels. With these new abilities arises the need to develop the knowledge to integrate the information from sensors and computer simulations. This is achieved in the present work by tracing both activities back to a level of abstraction that highlights their commonalities, thus allowing them to be manipulated in a mathematically consistent fashion. In particular, the mathematical theory underlying computer simulations has long been associated with partial differential equations and functional analysis concepts such as Hilbert spaces and orthogonal projections. By relying on a probabilistic framework for the modeling of data, a Hilbert space framework emerges that permits the modeling of coefficients in the governing equations as random variables, or equivalently, as elements in a Hilbert space. This permits the development of an approximation theory for probabilistic problems that parallels that of deterministic approximation theory. According to this formalism, the solution of the problem is identified by its projection on a basis in the Hilbert space of random variables, as opposed to more traditional techniques where the solution is approximated by its first or second-order statistics. The present representation, in addition to capturing significantly more information than the traditional approach, facilitates the linkage between different interacting stochastic systems as is typically observed in real-life situations.
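A minimal non-intrusive illustration of the representation described above: a quantity depending on a Gaussian random input is projected onto probabilists' Hermite polynomials (a polynomial chaos basis) by Gauss-Hermite quadrature. The model function g is a stand-in for a solution functional, not one of the paper's simulations.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coefficients(g, order, n_quad=40):
    """Project g(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials He_n:
    c_n = E[g(xi) He_n(xi)] / n!   (since E[He_n^2] = n!)."""
    x, w = hermegauss(n_quad)
    w = w / np.sqrt(2.0 * np.pi)                 # weights of the standard normal measure
    coeffs = []
    for n in range(order + 1):
        basis = hermeval(x, [0.0] * n + [1.0])   # He_n at the quadrature nodes
        coeffs.append(np.sum(w * g(x) * basis) / factorial(n))
    return np.array(coeffs)

g = lambda xi: np.exp(0.3 * xi)                  # stand-in for a solution functional
c = pce_coefficients(g, order=4)
mean = c[0]
var = sum(factorial(n) * c[n]**2 for n in range(1, 5))
print(c, mean, var)                              # exact mean is exp(0.045) ~ 1.046
```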
Quantifying evolutionary dynamics from variant-frequency time series
NASA Astrophysics Data System (ADS)
Khatri, Bhavin S.
2016-09-01
From Kimura’s neutral theory of protein evolution to Hubbell’s neutral theory of biodiversity, quantifying the relative importance of neutrality versus selection has long been a basic question in evolutionary biology and ecology. With deep sequencing technologies, this question is taking on a new form: given a time-series of the frequency of different variants in a population, what is the likelihood that the observation has arisen due to selection or neutrality? To tackle the 2-variant case, we exploit Fisher’s angular transformation, which despite being discovered by Ronald Fisher a century ago, has remained an intellectual curiosity. We show that, together with a heuristic approach, it provides a simple solution for the transition probability density at short times, including drift, selection and mutation. Our results show that under strong selection and sufficiently frequent sampling these evolutionary parameters can be accurately determined from simulation data, and so they provide a theoretical basis for techniques to detect selection from variant or polymorphism frequency time-series.
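To make the transformation concrete, here is a minimal sketch of one common form of Fisher's angular transformation applied to a variant-frequency series; under phi = 2*arcsin(sqrt(p)), pure Wright-Fisher drift yields per-generation increments with approximately constant variance (about 1/(2N)), which is what makes the transform convenient for short-time transition densities. The sampled frequencies are hypothetical, and this is not the paper's full derivation.

```python
# Minimal sketch of Fisher's angular (arcsine square-root) transformation.
import numpy as np

def angular_transform(freqs):
    p = np.clip(np.asarray(freqs, dtype=float), 0.0, 1.0)
    return 2.0 * np.arcsin(np.sqrt(p))

# Hypothetical sampled frequencies of one variant over consecutive generations:
p_series = [0.10, 0.12, 0.11, 0.15, 0.18, 0.22]
phi = angular_transform(p_series)
print(np.diff(phi))   # increments are roughly homoscedastic under pure drift
```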
Pekala, Ronald J
2016-01-01
Wickramasekera II (2015) has penned a comprehensive and thoughtful review article demonstrating how empathy is intimately involved in the psychology and neurophysiology of hypnosis and the self. Hypnosis is a very "mental" or subjective phenomenon for both the client and the research participant. To better assess the mind of the client/participant during hypnosis, it is my belief that we need to generate more "precise" phenomenological descriptors of the mind during hypnosis and related empathic conditions, as Wickramasekera II (2015) has suggested in his article. Although any phenomenological methodology will have its limits and disadvantages, noetics (as defined in the article below) can help us better understand hypnosis, empathic involvement theory, and the brain/mind/behavior interface. By quantifying the mind in a comprehensive manner, just as the brain is comprehensively quantified via fMRI and qEEG technologies, noetic analysis can help us more precisely assess the mind and relate it to the brain and human behavior and experience.
2014-01-01
Classification confidence, or the informative content of the subsets, is quantified by the Information Divergence. Our approach relates to active learning, semi-supervised learning, and mixed generative/discriminative learning.
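Assuming that "Information Divergence" here refers to the Kullback-Leibler divergence, the following minimal sketch shows how the informative content of a subset's predicted class distribution could be scored against a reference distribution; the distributions are hypothetical.

```python
# Minimal sketch: Kullback-Leibler divergence D(P || Q) between class distributions.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    p = np.asarray(p, dtype=float); q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

reference = [0.5, 0.3, 0.2]                          # hypothetical reference class distribution
print(kl_divergence([0.70, 0.20, 0.10], reference))  # subset that departs strongly from the reference
print(kl_divergence([0.52, 0.29, 0.19], reference))  # subset close to the reference (low divergence)
```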
ERIC Educational Resources Information Center
Wang, Lin
2013-01-01
Background: Cultural-historical activity theory is an important theory in modern psychology. In recent years, it has drawn more attention from related disciplines including information science. Argument: This paper argues that activity theory and domain analysis which uses the theory as one of its bases could bring about some important…
Measures and applications of quantum correlations
NASA Astrophysics Data System (ADS)
Adesso, Gerardo; Bromley, Thomas R.; Cianciaruso, Marco
2016-11-01
Quantum information theory is built upon the realisation that quantum resources like coherence and entanglement can be exploited for novel or enhanced ways of transmitting and manipulating information, such as quantum cryptography, teleportation, and quantum computing. We now know that there is potentially much more than entanglement behind the power of quantum information processing. There exist more general forms of non-classical correlations, stemming from fundamental principles such as the necessary disturbance induced by a local measurement, or the persistence of quantum coherence in all possible local bases. These signatures can be identified and are resilient in almost all quantum states, and have been linked to the enhanced performance of certain quantum protocols over classical ones in noisy conditions. Their presence represents, among other things, one of the most essential manifestations of quantumness in cooperative systems, from the subatomic to the macroscopic domain. In this work we give an overview of the current quest for a proper understanding and characterisation of the frontier between classical and quantum correlations (QCs) in composite states. We focus on various approaches to define and quantify general QCs, based on different yet interlinked physical perspectives, and comment on the operational significance of the ensuing measures for quantum technology tasks such as information encoding, distribution, discrimination and metrology. We then provide a broader outlook of a few applications in which quantumness beyond entanglement looks fit to play a key role.
Information-Theoretic Benchmarking of Land Surface Models
NASA Astrophysics Data System (ADS)
Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong
2016-04-01
Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize information in boundary conditions [1] several years before Gong et al. [2] discovered the particular type of benchmark that makes it possible to *quantify* the amount of information lost by an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. We here extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and boundary conditions, where the boundary conditions describe the time-dependent details of each prediction scenario. The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is *not* dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC). Specifically, we looked at the ability of these models to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of net uncertainty, and model structures contributed about 40%. There was relatively little difference between the different models. 1. G. Abramowitz, R. Leuning, M. Clark, A. Pitman, Evaluating the performance of land surface models. Journal of Climate 21, (2008). 2. W. Gong, H. V. Gupta, D. Yang, K. Sricharan, A. O. Hero, Estimating Epistemic & Aleatory Uncertainties During Hydrologic Modeling: An Information Theoretic Approach. Water Resources Research 49, 2253-2273 (2013). 3. G. S. Nearing, H. V. Gupta, The quantity and quality of information in hydrologic models. Water Resources Research 51, 524-538 (2015). 4. H. V. Gupta, G. S. Nearing, Using models and data to learn: A systems theoretic perspective on the future of hydrological science. Water Resources Research 50(6), 5351-5359 (2014). 5. H. V. Gupta et al., Large-sample hydrology: a need to balance depth with breadth. Hydrology and Earth System Sciences Discussions 10, 9147-9189 (2013).
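A toy sketch of the kind of comparison involved: estimate how much information a benchmark prediction and a model prediction each share with observations, and take the difference as information loss. The histogram-based estimator, the binning, and the synthetic data are illustrative assumptions, not the NLDAS analysis described above.

```python
# Minimal sketch of an information-theoretic benchmark comparison (in bits).
import numpy as np

def mutual_information(x, y, bins=20):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
obs = rng.normal(size=5000)                          # "observed" variable (synthetic)
benchmark = obs + rng.normal(scale=0.3, size=5000)   # statistical benchmark prediction
model = obs + rng.normal(scale=0.8, size=5000)       # imperfect model prediction

# Information lost by the model relative to the benchmark:
print(mutual_information(benchmark, obs) - mutual_information(model, obs))
```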
The Impact of Cognitive Load Theory on Learning Astronomy
NASA Astrophysics Data System (ADS)
Foster, Thomas M.
2010-01-01
Every student is different, which is the challenge of astronomy education research (AER) and teaching astronomy. This difference also provides the greatest goal for education researchers, our GUT: we need to be able to quantify these differences and provide explanatory and predictive theories to curriculum developers and teachers. One educational theory that holds promise is Cognitive Load Theory. Cognitive Load Theory begins with the well-established fact that everyone's working memory can hold 7 ± 2 unique items. This quirk of the human brain is why phone numbers are 7 digits long. This quirk is also why we forget people's names after just meeting them, leave the iron on when we leave the house, and become overwhelmed as students of new material. Once the intricacies of Cognitive Load are understood, it becomes possible to design learning environments to marshal the resources students have and guide them to success. Lessons learned from Cognitive Load Theory can and should be applied to learning astronomy. Classroom-ready ideas will be presented.
Facial patterns in a tropical social wasp correlate with colony membership
NASA Astrophysics Data System (ADS)
Baracchi, David; Turillazzi, Stefano; Chittka, Lars
2016-10-01
Social insects excel in discriminating nestmates from intruders, typically relying on colony odours. Remarkably, some wasp species achieve such discrimination using visual information. However, while it is universally accepted that odours mediate a group level recognition, the ability to recognise colony members visually has been considered possible only via individual recognition by which wasps discriminate `friends' and `foes'. Using geometric morphometric analysis, which is a technique based on a rigorous statistical theory of shape allowing quantitative multivariate analyses on structure shapes, we first quantified facial marking variation of Liostenogaster flavolineata wasps. We then compared this facial variation with that of chemical profiles (generated by cuticular hydrocarbons) within and between colonies. Principal component analysis and discriminant analysis applied to sets of variables containing pure shape information showed that despite appreciable intra-colony variation, the faces of females belonging to the same colony resemble one another more than those of outsiders. This colony-specific variation in facial patterns was on a par with that observed for odours. While the occurrence of face discrimination at the colony level remains to be tested by behavioural experiments, overall our results suggest that, in this species, wasp faces display adequate information that might be potentially perceived and used by wasps for colony level recognition.
Quantifying Contributions to Transport in Ionic Polymers Across Multiple Length Scales
NASA Astrophysics Data System (ADS)
Madsen, Louis
Self-organized polymer membranes conduct mobile species (ions, water, alcohols, etc.) according to a hierarchy of structural motifs that span sub-nm to >10 μm in length scale. In order to comprehensively understand such materials, our group combines multiple types of NMR dynamics and transport measurements (spectroscopy, diffusometry, relaxometry, imaging) with structural information from scattering and microscopy as well as with theories of porous media,1 electrolytic transport, and oriented matter.2 In this presentation, I will discuss quantitative separation of the phenomena that govern transport in polymer membranes, from intermolecular interactions (<= 2 nm),3 to locally ordered polymer nanochannels (a few to 10s of nm),2 to larger polymer domain structures (10s of nm and larger).1 Using this multi-scale information, we seek to give informed feedback on the design of polymer membranes for use in, e.g., efficient batteries, fuel cells, and mechanical actuators. References: [1] J. Hou, J. Li, D. Mountz, M. Hull, and L. A. Madsen. Journal of Membrane Science 448, 292-298 (2013). [2] J. Li, J. K. Park, R. B. Moore, and L. A. Madsen. Nature Materials 10, 507-511 (2011). [3] M. D. Lingwood, Z. Zhang, B. E. Kidd, K. B. McCreary, J. Hou, and L. A. Madsen. Chemical Communications 49, 4283-4285 (2013).
Rare variation facilitates inferences of fine-scale population structure in humans.
O'Connor, Timothy D; Fu, Wenqing; Mychaleckyj, Josyf C; Logsdon, Benjamin; Auer, Paul; Carlson, Christopher S; Leal, Suzanne M; Smith, Joshua D; Rieder, Mark J; Bamshad, Michael J; Nickerson, Deborah A; Akey, Joshua M
2015-03-01
Understanding the genetic structure of human populations has important implications for the design and interpretation of disease mapping studies and reconstructing human evolutionary history. To date, inferences of human population structure have primarily been made with common variants. However, recent large-scale resequencing studies have shown an abundance of rare variation in humans, which may be particularly useful for making inferences of fine-scale population structure. To this end, we used an information theory framework and extensive coalescent simulations to rigorously quantify the informativeness of rare and common variation to detect signatures of fine-scale population structure. We show that rare variation affords unique insights into patterns of recent population structure. Furthermore, to empirically assess our theoretical findings, we analyzed high-coverage exome sequences in 6,515 European and African American individuals. As predicted, rare variants are more informative than common polymorphisms in revealing a distinct cluster of European-American individuals, and subsequent analyses demonstrate that these individuals are likely of Ashkenazi Jewish ancestry. Our results provide new insights into the population structure using rare variation, which will be an important factor to account for in rare variant association studies. © The Author 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
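The abstract does not name the specific statistic used; as one widely used example of quantifying how informative a variant is about population membership, the sketch below implements Rosenberg et al.'s informativeness for assignment (I_n) for a single locus. The allele frequencies are hypothetical.

```python
# Minimal sketch of the informativeness for assignment (I_n) of a single locus.
import numpy as np

def informativeness_for_assignment(allele_freqs):
    """allele_freqs: shape (K populations, J alleles); each row sums to 1."""
    p = np.asarray(allele_freqs, dtype=float)
    K = p.shape[0]
    pbar = p.mean(axis=0)

    def xlogx(x):
        safe = np.where(x > 0, x, 1.0)   # avoid log(0); contributes 0 where x == 0
        return x * np.log(safe)

    return float(np.sum(-xlogx(pbar) + xlogx(p).sum(axis=0) / K))

common = [[0.60, 0.40], [0.40, 0.60]]    # common variant, modest differentiation
rare = [[0.995, 0.005], [0.94, 0.06]]    # rarer variant, skewed toward population 2
print(informativeness_for_assignment(common), informativeness_for_assignment(rare))
```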
Will it Blend? Visualization and Accuracy Evaluation of High-Resolution Fuzzy Vegetation Maps
NASA Astrophysics Data System (ADS)
Zlinszky, A.; Kania, A.
2016-06-01
Instead of assigning every map pixel to a single class, fuzzy classification includes not only the class assigned to each pixel but also the certainty of this class and the alternative possible classes based on fuzzy set theory. The advantages of fuzzy classification for vegetation mapping are well recognized, but the accuracy and uncertainty of fuzzy maps cannot be directly quantified with indices developed for hard-boundary categorizations. The rich information in such a map is impossible to convey with a single map product or accuracy figure. Here we introduce a suite of evaluation indices and visualization products for fuzzy maps generated with ensemble classifiers. We also propose a way of evaluating classwise prediction certainty with "dominance profiles" visualizing the number of pixels in bins according to the probability of the dominant class, also showing the probability of all the other classes. Together, these data products allow a quantitative understanding of the rich information in a fuzzy raster map both for individual classes and in terms of variability in space, and also establish the connection between spatially explicit class certainty and traditional accuracy metrics. These map products are directly comparable to widely used hard-boundary evaluation procedures, support active learning-based iterative classification and can be applied for operational use.
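A minimal sketch of the "dominance profile" idea described above: bin pixels by the probability of their dominant class and keep the mean probability of every class within each bin. The three-class Dirichlet raster is a hypothetical stand-in for a real fuzzy map.

```python
# Minimal sketch of a dominance profile for a fuzzy classification.
import numpy as np

def dominance_profile(probs, n_bins=10):
    """probs: shape (n_pixels, n_classes); each row sums to 1."""
    probs = np.asarray(probs, dtype=float)
    dominant_p = probs.max(axis=1)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    which_bin = np.clip(np.digitize(dominant_p, edges) - 1, 0, n_bins - 1)
    counts = np.bincount(which_bin, minlength=n_bins)
    mean_probs = np.array([probs[which_bin == b].mean(axis=0) if counts[b] else
                           np.zeros(probs.shape[1]) for b in range(n_bins)])
    return counts, mean_probs

rng = np.random.default_rng(1)
fuzzy = rng.dirichlet(alpha=[4, 2, 1], size=1000)   # hypothetical 3-class fuzzy map
counts, mean_probs = dominance_profile(fuzzy)
print(counts)   # number of pixels per dominant-probability bin
```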
Inferring epidemiological parameters from phylogenetic information for the HIV-1 epidemic among MSM
NASA Astrophysics Data System (ADS)
Quax, Rick; van de Vijver, David A. M. C.; Frentz, Dineke; Sloot, Peter M. A.
2013-09-01
The HIV-1 epidemic in Europe is primarily sustained by a dynamic topology of sexual interactions among MSM who have individual immune systems and behavior. This epidemiological process shapes the phylogeny of the virus population. Both fields of epidemic modeling and phylogenetics have a long history; however, it remains difficult to use phylogenetic data to infer epidemiological parameters such as the structure of the sexual network and the per-act infectiousness. This is because phylogenetic data is necessarily incomplete and ambiguous. Here we show that the cluster-size distribution indeed contains information about epidemiological parameters using detailed numerical experiments. We simulate the HIV epidemic among MSM many times using the Monte Carlo method with all parameter values and their ranges taken from literature. For each simulation and the corresponding set of parameter values we calculate the likelihood of reproducing an observed cluster-size distribution. The result is an estimated likelihood distribution of all parameters from the phylogenetic data, in particular the structure of the sexual network, the per-act infectiousness, and the risk behavior reduction upon diagnosis. These likelihood distributions encode the knowledge provided by the observed cluster-size distribution, which we quantify using information theory. Our work suggests that the growing body of genetic data of patients can be exploited to understand the underlying epidemiological process.
The computational neurobiology of learning and reward.
Daw, Nathaniel D; Doya, Kenji
2006-04-01
Following the suggestion that midbrain dopaminergic neurons encode a signal, known as a 'reward prediction error', used by artificial intelligence algorithms for learning to choose advantageous actions, the study of the neural substrates for reward-based learning has been strongly influenced by computational theories. In recent work, such theories have been increasingly integrated into experimental design and analysis. Such hybrid approaches have offered detailed new insights into the function of a number of brain areas, especially the cortex and basal ganglia. In part this is because these approaches enable the study of neural correlates of subjective factors (such as a participant's beliefs about the reward to be received for performing some action) that the computational theories purport to quantify.
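For readers unfamiliar with the quantity being referred to, this is a textbook TD(0) sketch of the reward prediction error; the learning rate, discount factor, and toy transition are assumptions, and the models reviewed in the article are considerably richer.

```python
# Minimal sketch of the temporal-difference reward prediction error and value update.
def td_update(V, s, s_next, reward, alpha=0.1, gamma=0.95):
    delta = reward + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)   # prediction error
    V[s] = V.get(s, 0.0) + alpha * delta                          # value update
    return delta

V = {}
for _ in range(200):                  # repeatedly experience s0 -> s1 with reward 1
    td_update(V, "s0", "s1", reward=1.0)
print(round(V["s0"], 3))              # approaches reward + gamma * V("s1")
```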
Generalized second law of thermodynamics in f(R,T) theory of gravity
NASA Astrophysics Data System (ADS)
Momeni, D.; Moraes, P. H. R. S.; Myrzakulov, R.
2016-07-01
We present a study of the generalized second law of thermodynamics in the scope of the f(R,T) theory of gravity, with R and T representing the Ricci scalar and trace of the energy-momentum tensor, respectively. From the energy-momentum tensor equation for the f(R,T)=R+f(T) case, we calculate the form of the geometric entropy in such a theory. Then, the generalized second law of thermodynamics is quantified and some relations for its obedience in f(R,T) gravity are presented. Those relations depend on some cosmological quantities, as the Hubble and deceleration parameters, and also on the form of f(T).
Magnetic-field-modulated resonant tunneling in ferromagnetic-insulator-nonmagnetic junctions.
Song, Yang; Dery, Hanan
2014-07-25
We present a theory for resonance-tunneling magnetoresistance (MR) in ferromagnetic-insulator-nonmagnetic junctions. The theory sheds light on many of the recent electrical spin injection experiments, suggesting that this MR effect rather than spin accumulation in the nonmagnetic channel corresponds to the electrically detected signal. We quantify the dependence of the tunnel current on the magnetic field by quantum rate equations derived from the Anderson impurity model, with the important addition of impurity spin interactions. Considering the on-site Coulomb correlation, the MR effect is caused by competition between the field, spin interactions, and coupling to the magnetic lead. By extending the theory, we present a basis for operation of novel nanometer-size memories.
Implications of Information Theory for Computational Modeling of Schizophrenia.
Silverstein, Steven M; Wibral, Michael; Phillips, William A
2017-10-01
Information theory provides a formal framework within which information processing and its disorders can be described. However, information theory has rarely been applied to modeling aspects of the cognitive neuroscience of schizophrenia. The goal of this article is to highlight the benefits of an approach based on information theory, including its recent extensions, for understanding several disrupted neural goal functions as well as related cognitive and symptomatic phenomena in schizophrenia. We begin by demonstrating that foundational concepts from information theory-such as Shannon information, entropy, data compression, block coding, and strategies to increase the signal-to-noise ratio-can be used to provide novel understandings of cognitive impairments in schizophrenia and metrics to evaluate their integrity. We then describe more recent developments in information theory, including the concepts of infomax, coherent infomax, and coding with synergy, to demonstrate how these can be used to develop computational models of schizophrenia-related failures in the tuning of sensory neurons, gain control, perceptual organization, thought organization, selective attention, context processing, predictive coding, and cognitive control. Throughout, we demonstrate how disordered mechanisms may explain both perceptual/cognitive changes and symptom emergence in schizophrenia. Finally, we demonstrate that there is consistency between some information-theoretic concepts and recent discoveries in neurobiology, especially involving the existence of distinct sites for the accumulation of driving input and contextual information prior to their interaction. This convergence can be used to guide future theory, experiment, and treatment development.
NASA Astrophysics Data System (ADS)
Dahms, Rainer N.; Oefelein, Joseph C.
2013-09-01
A theory that explains the operating pressures where liquid injection processes transition from exhibiting classical two-phase spray atomization phenomena to single-phase diffusion-dominated mixing is presented. Imaging from a variety of experiments has long shown that under certain conditions, typically when the pressure of the working fluid exceeds the thermodynamic critical pressure of the liquid phase, the presence of discrete two-phase flow processes becomes diminished. Instead, the classical gas-liquid interface is replaced by diffusion-dominated mixing. When and how this transition occurs, however, is not well understood. Modern theory still lacks a physically based model to quantify this transition and the precise mechanisms that lead to it. In this paper, we derive a new model that explains how the transition occurs in multicomponent fluids and present a detailed analysis to quantify it. The model applies a detailed property evaluation scheme based on a modified 32-term Benedict-Webb-Rubin equation of state that accounts for the relevant real-fluid thermodynamic and transport properties of the multicomponent system. This framework is combined with Linear Gradient Theory, which describes the detailed molecular structure of the vapor-liquid interface region. Our analysis reveals that the two-phase interface breaks down not necessarily due to vanishing surface tension forces, but due to thickened interfaces at high subcritical temperatures coupled with an inherent reduction of the mean free molecular path. At a certain point, the combination of reduced surface tension, the thicker interface, and reduced mean free molecular path enters the continuum length scale regime. When this occurs, inter-molecular forces approach those of the multicomponent continuum where transport processes dominate across the interfacial region. This leads to a continuous phase transition from compressed liquid to supercritical mixture states. Based on this theory, a regime diagram for liquid injection is developed that quantifies the conditions under which classical sprays transition to dense-fluid jets. It is shown that the chamber pressure required to support diffusion-dominated mixing dynamics depends on the composition and temperature of the injected liquid and ambient gas. To illustrate the method and analysis, we use conditions typical of diesel engine injection. We also present a companion set of high-speed images to provide experimental validation of the presented theory. The basic theory is quite general and applies to a wide range of modern propulsion and power systems such as liquid rockets, gas turbines, and reciprocating engines. Interestingly, the regime diagram associated with diesel engine injection suggests that classical spray phenomena at typical injection conditions do not occur.
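As a toy illustration of the continuum criterion sketched in this abstract (not the authors' full Linear Gradient Theory analysis), one can compare a hard-sphere estimate of the gas mean free path with an assumed interface thickness via a Knudsen-like number; the molecular diameter, interface thickness, chamber state, and 0.1 threshold are all assumptions.

```python
# Toy sketch: Knudsen-like continuum criterion for a vapor-liquid interface.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(T_K, p_Pa, d_m=3.7e-10):
    """Hard-sphere mean free path; d_m is an assumed effective molecular diameter."""
    return K_B * T_K / (math.sqrt(2.0) * math.pi * d_m ** 2 * p_Pa)

def interface_knudsen(T_K, p_Pa, interface_thickness_m):
    return mean_free_path(T_K, p_Pa) / interface_thickness_m

# Hypothetical chamber state and an assumed ~5 nm thick interface:
Kn = interface_knudsen(T_K=900.0, p_Pa=6.0e6, interface_thickness_m=5e-9)
print(round(Kn, 2),
      "continuum-like (diffusion-dominated) interfacial region" if Kn < 0.1
      else "molecular (classical two-phase) interface")
```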
An Approach to Greater Specificity for Glucocorticoids
Chow, Carson C.; Simons, S. Stoney
2018-01-01
Glucocorticoid steroids are among the most prescribed drugs each year. Nonetheless, the many undesirable side effects, and lack of selectivity, restrict their greater usage. Research to increase glucocorticoid specificity has spanned many years. These efforts have been hampered by the ability of glucocorticoids to both induce and repress gene transcription and also by the lack of success in defining any predictable properties that control glucocorticoid specificity. Correlations of transcriptional specificity have been observed with changes in steroid structure, receptor and chromatin conformation, DNA sequence for receptor binding, and associated cofactors. However, none of these studies have progressed to the point of being able to offer guidance for increased specificity. We summarize here a mathematical theory that allows a novel and quantifiable approach to increase selectivity. The theory applies to all three major actions of glucocorticoid receptors: induction by agonists, induction by antagonists, and repression by agonists. Simple graphical analysis of competition assays involving any two factors (steroid, chemical, peptide, protein, DNA, etc.) yields information (1) about the kinetically described mechanism of action for each factor at that step where the factor acts in the overall reaction sequence and (2) about the relative position of that step where each factor acts. These two pieces of information uniquely provide direction for increasing the specificity of glucocorticoid action. Consideration of all three modes of action indicate that the most promising approach for increased specificity is to vary the concentrations of those cofactors/pharmaceuticals that act closest to the observed end point. The potential for selectivity is even greater when varying cofactors/pharmaceuticals in conjunction with a select class of antagonists. PMID:29593646
2013-01-01
Background The quality of care in nursing homes is weakly defined, and has traditionally focused on quantifying nursing home outputs and on comparing nursing homes’ resources. The point of view of clients has rarely been taken into account. The aim of this study was to ascertain what “quality of care” means for residents of nursing homes. Methods Grounded theory was used to design and analyze a qualitative study based on in-depth interviews with a theoretical sampling including 20 persons aged over 65 years with no cognitive impairment and eight proxy informants of residents with cognitive impairment, institutionalized at a public nursing home in Spain. Results Our analysis revealed that participants perceived the quality of care in two ways, as aspects related to the persons providing care and as institutional aspects of the care process. All participants agreed that aspects related to the persons providing care were a pillar of quality, something that, in turn, embodied a series of emotional and technical professional competences. Regarding the institutional aspects of the care process, participants laid emphasis on round-the-clock access to health care services and on professionals’ job stability. Conclusions This paper includes perspectives of nursing home residents, which are largely absent. Incorporating residents’ standpoints as a complement to traditional institutional criteria would furnish health providers and funding agencies with key information when designing action plans and interventions aimed at achieving excellence in health care. PMID:23809066
Modeling the adiabatic connection in H2.
Peach, Michael J G; Teale, Andrew M; Tozer, David J
2007-06-28
Full configuration interaction (FCI) data are used to quantify the accuracy of approximate adiabatic connection (AC) forms in describing the ground state potential energy curve of H2, within spin-restricted density functional theory (DFT). For each internuclear separation R, accurate properties of the AC are determined from large basis set FCI calculations. The parameters in the approximate AC form are then determined so as to reproduce these FCI values exactly, yielding an exchange-correlation energy expressed entirely in terms of FCI-derived quantities. This is combined with other FCI-derived energy components to give the total electronic energy; comparison with the FCI energy quantifies the accuracy of the AC form. Initial calculations focus on a [1/1]-Padé-based form. The potential energy curve determined using the procedure is a notable improvement over those from existing DFT functionals. The accuracy near equilibrium is quantified by calculating the bond length and vibrational wave numbers; errors in the latter are below 0.5%. The molecule dissociates correctly, which can be traced to the use of virtual orbital eigenvalues in the slope in the noninteracting limit, capturing static correlation. At intermediate R, the potential energy curve exhibits an unphysical barrier, similar to that noted previously using the random phase approximation. Alternative forms of the AC are also considered, paying attention to size extensivity and the behavior in the strong-interaction limit; none provide an accurate potential energy curve for all R, although good accuracy can be achieved near equilibrium. The study demonstrates how data from correlated ab initio calculations can provide valuable information about AC forms and highlight areas where further theoretical progress is required.
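For context, a generic [1/1]-Padé-based adiabatic-connection integrand and its integral take the form below; this is illustrative of the kind of form considered and not necessarily the exact parameterization used in the paper.

```latex
% Generic [1/1]-Pade-based adiabatic-connection form (illustrative):
\begin{align}
  W_\lambda &\approx a + \frac{b\,\lambda}{1 + c\,\lambda}, &
  E_{\mathrm{xc}} &= \int_0^1 W_\lambda \,\mathrm{d}\lambda
    = a + \frac{b}{c}\left[1 - \frac{\ln(1 + c)}{c}\right],
\end{align}
% with the parameters a, b, c fixed, at each internuclear separation R, by
% FCI-derived properties of the adiabatic connection (e.g., its value and
% slope at lambda = 0 together with a further constraint).
```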
NASA Astrophysics Data System (ADS)
Perrier, E. M. A.; Bird, N. R. A.; Rieutord, T. B.
2010-04-01
Quantifying the connectivity of pore networks is a key issue not only for modelling fluid flow and solute transport in porous media but also for assessing the ability of soil ecosystems to filter bacteria, viruses and any type of living microorganism, as well as inert particles, which pose a contamination risk. Straining is the main mechanical component of filtration processes: it is due to size effects, when a given soil retains a conveyed entity larger than the pores through which it is attempting to pass. We postulate that the range of sizes of entities which can be trapped inside soils has to be associated with the large range of scales involved in natural soil structures and that information on the pore size distribution has to be complemented by information on a Critical Filtration Size (CFS) delimiting the transition between percolating and non-percolating regimes in multiscale pore networks. We show that the mass fractal dimensions which are classically used in soil science to quantify scaling laws in observed pore size distributions can also be used to build 3-D multiscale models of pore networks exhibiting such a critical transition. We extend to the 3-D case a new theoretical approach recently developed to address the connectivity of 2-D fractal networks (Bird and Perrier, 2009). Theoretical arguments based on renormalisation functions provide insight into multi-scale connectivity and a first estimation of CFS. Numerical experiments on 3-D prefractal media confirm the qualitative theory. These results open the way towards a new methodology to estimate soil filtration efficiency from the construction of soil structural models to be calibrated on available multiscale data.
Modeling spatial accessibility to parks: a national study.
Zhang, Xingyou; Lu, Hua; Holt, James B
2011-05-09
Parks provide ideal open spaces for leisure-time physical activity and important venues to promote physical activity. The spatial configuration of parks, the number of parks and their spatial distribution across neighborhood areas or local regions, represents the basic park access potential for their residential populations. A new measure of spatial access to parks, population-weighted distance (PWD) to parks, combines the advantages of current park access approaches and incorporates the information processing theory and probability access surface model to more accurately quantify residential population's potential spatial access to parks. The PWD was constructed at the basic level of US census geography - blocks - using US park and population data. This new measure of population park accessibility was aggregated to census tract, county, state and national levels. On average, US residential populations are expected to travel 6.7 miles to access their local neighborhood parks. There are significant differences in the PWD to local parks among states. The District of Columbia and Connecticut have the best access to local neighborhood parks with PWD of 0.6 miles and 1.8 miles, respectively. Alaska, Montana, and Wyoming have the largest PWDs of 62.0, 37.4, and 32.8 miles, respectively. Rural states in the western and Midwestern US have lower neighborhood park access, while urban states have relatively higher park access. The PWD to parks provides a consistent platform for evaluating spatial equity of park access and linking with population health outcomes. It could be an informative evaluation tool for health professionals and policy makers. This new method could be applied to quantify geographic accessibility of other types of services or destinations, such as food, alcohol, and tobacco outlets.
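In its simplest form, the population-weighted distance described above is a population-weighted average of each block's distance to its nearest park; the sketch below shows that aggregation with hypothetical block populations and distances, and omits the probability access surface used in the full method.

```python
# Minimal sketch of a population-weighted distance (PWD) to parks.
import numpy as np

def population_weighted_distance(populations, nearest_park_distances):
    pop = np.asarray(populations, dtype=float)
    dist = np.asarray(nearest_park_distances, dtype=float)
    return float(np.sum(pop * dist) / np.sum(pop))

# Hypothetical census blocks aggregated to one tract (distances in miles):
print(population_weighted_distance([1200, 850, 400], [0.4, 1.1, 3.5]))
```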
Analysis to Quantify Significant Contribution
This Technical Support Document provides information that supports EPA’s analysis to quantify upwind state emissions that significantly contribute to nonattainment or interfere with maintenance of National Ambient Air Quality Standards in downwind states.
Variability of multilevel switching in scaled hybrid RS/CMOS nanoelectronic circuits: theory
NASA Astrophysics Data System (ADS)
Heittmann, Arne; Noll, Tobias G.
2013-07-01
A theory is presented which describes the variability of multilevel switching in scaled hybrid resistive-switching/CMOS nanoelectronic circuits. Variability is quantified in terms of conductance variation using the first two moments derived from the probability density function (PDF) of the RS conductance. For RS, which are based on the electrochemical metallization effect (ECM), this variability is - to some extent - caused by discrete events such as electrochemical reactions, which occur on the atomic scale and at random. The theory shows that the conductance variation depends on the joint interaction between the programming circuit and the resistive switch (RS), and explicitly quantifies the impact of RS device parameters and parameters of the programming circuit on the conductance variance. Using a current mirror as an exemplary programming circuit, an upper limit of 2-4 bits (dependent on the filament surface area) is estimated as the storage capacity exploiting the multilevel capabilities of an ECM cell. The theoretical results were verified by Monte Carlo circuit simulations on a standard circuit simulation environment using an ECM device model which models the filament growth by a Poisson process. Contribution to the Topical Issue “International Semiconductor Conference Dresden-Grenoble - ISCDG 2012”, Edited by Gérard Ghibaudo, Francis Balestra and Simon Deleonibus.
An information theory account of cognitive control.
Fan, Jin
2014-01-01
Our ability to efficiently process information and generate appropriate responses depends on the processes collectively called cognitive control. Despite a considerable focus in the literature on the cognitive control of information processing, neural mechanisms underlying control are still unclear, and have not been characterized by considering the quantity of information to be processed. A novel and comprehensive account of cognitive control is proposed using concepts from information theory, which is concerned with communication system analysis and the quantification of information. This account treats the brain as an information-processing entity where cognitive control and its underlying brain networks play a pivotal role in dealing with conditions of uncertainty. This hypothesis and theory article justifies the validity and properties of such an account and relates experimental findings to the frontoparietal network under the framework of information theory.
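Two of the elementary quantities such an account builds on are the surprise (self-information) of an event and the entropy of a distribution over task conditions; the condition probabilities below are hypothetical.

```python
# Minimal sketch: surprise of an event and entropy of a task-condition distribution.
import math

def surprise_bits(p):
    return -math.log2(p)

def entropy_bits(probs):
    return sum(-p * math.log2(p) for p in probs if p > 0)

conditions = {"congruent": 0.7, "incongruent": 0.2, "no-go": 0.1}   # assumed frequencies
print({k: round(surprise_bits(p), 2) for k, p in conditions.items()})
print(round(entropy_bits(conditions.values()), 2))   # average uncertainty to be resolved
```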
Attenuation of seismic waves in rocks saturated with multiphase fluids: theory and experiments
NASA Astrophysics Data System (ADS)
Tisato, N.; Quintal, B.; Chapman, S.; Podladchikov, Y.; Burg, J. P.
2016-12-01
Although seismic tomography can provide a detailed image of subsurface fluid distribution, the interpretation of the tomographic signals is often controversial and fails to provide a conclusive map of the subsurface saturation. However, tomographic information is important because the upward migration of multiphase fluids through the crust of the Earth can cause hazardous events such as eruptions, explosions, soil pollution and earthquakes. In addition, multiphase fluids, such as hydrocarbons, represent important economic resources. Seismic tomography can be improved considering complex elastic moduli and the attenuation of seismic waves (1/Q) that quantifies the energy lost by propagating elastic waves. In particular, a significant portion of the energy carried by the propagating wave is dissipated in saturated media by the wave-induced-fluid-flow (WIFF) and the wave-induced-gas-exsolution-dissolution (WIGED) mechanism. The latter describes how a propagating wave modifies the thermodynamic equilibrium between different fluid phases causing exsolution and dissolution of gas bubbles in the liquid, which in turn causes a significant frequency-dependent 1/Q and moduli dispersion. The WIGED theory was initially postulated for bubbly magmas but was only recently demonstrated and extended to bubbly water. We report the theory and laboratory experiments that have been performed to confirm the WIGED theory. In particular, we present i) attenuation measurements performed by means of the Broad Band Attenuation Vessel on porous media saturated with water and different gases, and ii) numerical experiments validating the laboratory observations. Then, we extend the theory to fluids and pressure-temperature conditions which are typical of phreatomagmatic and hydrocarbon domains and we compare the propagation of seismic waves in bubble-free and bubble-bearing subsurface domains. This work extends the knowledge of attenuation in rocks saturated with multiphase fluids and emphasizes that the WIGED mechanism is very important to image subsurface gas plumes.
ERIC Educational Resources Information Center
Vrieze, Scott I.
2012-01-01
This article reviews the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in model selection and the appraisal of psychological theory. The focus is on latent variable models, given their growing use in theory testing and construction. Theoretical statistical results in regression are discussed, and more important…
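For reference, the two criteria in their standard forms, with k free parameters, n observations, and maximized log-likelihood logL; the example fit values are hypothetical.

```python
# Minimal sketch: AIC = 2k - 2*logL and BIC = k*ln(n) - 2*logL (lower is better).
import math

def aic(log_likelihood, k):
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    return k * math.log(n) - 2 * log_likelihood

# Two hypothetical latent-variable models fit to the same data (n = 500):
print(aic(-1040.2, k=12), bic(-1040.2, k=12, n=500))
print(aic(-1035.9, k=18), bic(-1035.9, k=18, n=500))   # BIC penalizes the extra parameters more
```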
What is Informal Learning and What are its Antecedents? An Integrative and Meta-Analytic Review
2014-07-01
formal training. Unfortunately, theory and research surrounding informal learning remain fragmented. Given that there has been little systematic ... future-oriented. Applying this framework, the construct domain of informal learning in organizations is articulated. Second, an interactionist theory ... theoretical framework and outline an agenda for future theory development, research, and application of informal learning principles in organizations
ERIC Educational Resources Information Center
Tavani, Herman T.
2002-01-01
Discusses the debate over intellectual property rights for digital media. Topics include why intellectual property should be protected; the evolution of copyright law; fair use doctrine; case studies; the philosophical theories of property, including labor theory, utilitarian theory, and personality theory; natural law theory; the social role of…
Quantifying Complexity in Quantum Phase Transitions via Mutual Information Complex Networks
NASA Astrophysics Data System (ADS)
Valdez, Marc Andrew; Jaschke, Daniel; Vargas, David L.; Carr, Lincoln D.
2017-12-01
We quantify the emergent complexity of quantum states near quantum critical points on regular 1D lattices, via complex network measures based on quantum mutual information as the adjacency matrix, in direct analogy to quantifying the complexity of electroencephalogram or functional magnetic resonance imaging measurements of the brain. Using matrix product state methods, we show that network density, clustering, disparity, and Pearson's correlation obtain the critical point for both quantum Ising and Bose-Hubbard models to a high degree of accuracy in finite-size scaling for three classes of quantum phase transitions, Z2, mean field superfluid to Mott insulator, and a Berezinskii-Kosterlitz-Thouless crossover.
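To illustrate the construction, the sketch below treats a symmetric mutual-information matrix as a weighted adjacency matrix and computes two of the network measures named in the abstract; the small random matrix stands in for mutual information obtained from a matrix product state simulation, and networkx is assumed to be available.

```python
# Minimal sketch: network measures from a mutual-information adjacency matrix.
import numpy as np
import networkx as nx

rng = np.random.default_rng(7)
n = 8
M = rng.random((n, n))
M = (M + M.T) / 2.0                             # symmetric stand-in for a mutual-information matrix
np.fill_diagonal(M, 0.0)

G = nx.from_numpy_array(M)                      # weighted graph on n lattice sites
density = M.sum() / (n * (n - 1))               # average link weight ("network density")
clustering = nx.average_clustering(G, weight="weight")
print(round(density, 3), round(clustering, 3))
```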
Kothari, Anita; Boyko, Jennifer A; Campbell-Davison, Andrea
2015-09-09
Informal knowledge is used in public health practice to make sense of research findings. Although knowledge translation theories highlight the importance of informal knowledge, it is not clear to what extent the same literature provides guidance in terms of how to use it in practice. The objective of this study was to address this gap by exploring what planned action theories suggest in terms of using three types of informal knowledge: local, experiential and expert. We carried out an exploratory secondary analysis of the planned action theories that informed the development of a popular knowledge translation theory. Our sample included twenty-nine (n = 29) papers. We extracted information from these papers about sources of and guidance for using informal knowledge, and then carried out a thematic analysis. We found that theories of planned action provide guidance (including sources of, methods for identifying, and suggestions for use) for using local, experiential and expert knowledge. This study builds on previous knowledge translation related work to provide insight into the practical use of informal knowledge. Public health practitioners can refer to the guidance summarized in this paper to inform their decision-making. Further research about how to use informal knowledge in public health practice is needed given the value being accorded to using informal knowledge in public health decision-making processes.
Millisecond-timescale local network coding in the rat primary somatosensory cortex.
Eldawlatly, Seif; Oweiss, Karim G
2011-01-01
Correlation among neocortical neurons is thought to play an indispensable role in mediating sensory processing of external stimuli. The role of temporal precision in this correlation has been hypothesized to enhance information flow along sensory pathways. Its role in mediating the integration of information at the output of these pathways, however, remains poorly understood. Here, we examined spike timing correlation between simultaneously recorded layer V neurons within and across columns of the primary somatosensory cortex of anesthetized rats during unilateral whisker stimulation. We used Bayesian statistics and information theory to quantify the causal influence between the recorded cells with millisecond precision. For each stimulated whisker, we inferred stable, whisker-specific, dynamic Bayesian networks over many repeated trials, with network similarity of 83.3±6% within whisker, compared to only 50.3±18% across whiskers. These networks further provided information about whisker identity that was approximately 6 times higher than what was provided by the latency to first spike and 13 times higher than what was provided by the spike count of individual neurons examined separately. Furthermore, prediction of individual neurons' precise firing conditioned on knowledge of putative pre-synaptic cell firing was 3 times higher than predictions conditioned on stimulus onset alone. Taken together, these results suggest the presence of a temporally precise network coding mechanism that integrates information across neighboring columns within layer V about vibrissa position and whisking kinetics to mediate whisker movement by motor areas innervated by layer V.
Quantum resource theories in the single-shot regime
NASA Astrophysics Data System (ADS)
Gour, Gilad
2017-06-01
One of the main goals of any resource theory such as entanglement, quantum thermodynamics, quantum coherence, and asymmetry, is to find necessary and sufficient conditions that determine whether one resource can be converted to another by the set of free operations. Here we find such conditions for a large class of quantum resource theories which we call affine resource theories. Affine resource theories include the resource theories of athermality, asymmetry, and coherence, but not entanglement. Remarkably, the necessary and sufficient conditions can be expressed as a family of inequalities between resource monotones (quantifiers) that are given in terms of the conditional min-entropy. The set of free operations is taken to be (1) the maximal set (i.e., consists of all resource nongenerating quantum channels) or (2) the self-dual set of free operations (i.e., consists of all resource nongenerating maps for which the dual map is also resource nongenerating). As an example, we apply our results to quantum thermodynamics with Gibbs preserving operations, and several other affine resource theories. Finally, we discuss the applications of these results to resource theories that are not affine and, along the way, provide the necessary and sufficient conditions that a quantum resource theory consists of a resource destroying map.
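For reference, the conditional min-entropy that the monotones are built from has the standard definition (illustrative notation):

```latex
% Conditional min-entropy of a bipartite state rho_AB (standard definition):
\begin{equation}
  H_{\min}(A|B)_\rho \;=\; \max_{\sigma_B}\,
  \sup\bigl\{\lambda \in \mathbb{R} \,:\, \rho_{AB} \le 2^{-\lambda}\,\mathbb{1}_A \otimes \sigma_B \bigr\},
\end{equation}
% where the maximization runs over density operators sigma_B on subsystem B.
```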
Quantifying and Interpreting Group Differences in Interest Profiles
ERIC Educational Resources Information Center
Armstrong, Patrick Ian; Fouad, Nadya A.; Rounds, James; Hubert, Lawrence
2010-01-01
Research on group differences in interests has often focused on structural hypotheses and mean-score differences in Holland's (1997) theory, with comparatively little research on basic interest measures. Group differences in interest profiles were examined using statistical methods for matching individuals with occupations, the C-index, Q…
Quantifying Emotional Intelligence: The Relationship between Thinking Patterns and Emotional Skills
ERIC Educational Resources Information Center
Cox, Judith E.; Nelson, Darwin B.
2008-01-01
This article explores the relationship between thinking patterns and emotional skills identified by 2 research-derived measures of emotional intelligence that reflect integrative and positive theories of human behavior. Findings suggest implications for planning educational and counseling interventions to facilitate positive growth and future…
The Ecosystem Services Research Program of the EPA Office of Research and Development envisions a comprehensive theory and practice for characterizing, quantifying and valuing ecosystem services and their relationship to human well-being. This vision of future environmental deci...
Towards Information Polycentricity Theory--Investigation of a Hospital Revenue Cycle
ERIC Educational Resources Information Center
Singh, Rajendra
2011-01-01
This research takes steps towards developing a new theory of organizational information management based on the ideas that, first, information creates ordering effects in transactions and, second, that there are multiple centers of authority in organizations. The rationale for developing this theory is the empirical observation that hospitals have…
On long-only information-based portfolio diversification framework
NASA Astrophysics Data System (ADS)
Santos, Raphael A.; Takada, Hellinton H.
2014-12-01
Using the concepts from information theory, it is possible to improve the traditional frameworks for long-only asset allocation. In modern portfolio theory, the investor has two basic procedures: the choice of a portfolio that maximizes its risk-adjusted excess return or the mixed allocation between the maximum Sharpe portfolio and the risk-free asset. In the literature, the first procedure was already addressed using information theory. One contribution of this paper is the consideration of the second procedure in the information theory context. The performance of these approaches was compared with three traditional asset allocation methodologies: the Markowitz's mean-variance, the resampled mean-variance and the equally weighted portfolio. Using simulated and real data, the information theory-based methodologies were verified to be more robust when dealing with the estimation errors.
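For readers less familiar with the second procedure mentioned above, the sketch below computes the classical tangency (maximum Sharpe) portfolio and its mix with the risk-free asset; the expected returns, covariance matrix, and mixing fraction are hypothetical, and the information-theoretic allocation itself is not reproduced here.

```python
# Minimal sketch: tangency (maximum Sharpe) portfolio and its mix with the risk-free asset.
import numpy as np

def tangency_weights(mu, cov, rf):
    """w proportional to inv(cov) @ (mu - rf), normalized to sum to 1 (long-only not enforced)."""
    excess = np.asarray(mu, dtype=float) - rf
    w = np.linalg.solve(np.asarray(cov, dtype=float), excess)
    return w / w.sum()

mu = [0.08, 0.05, 0.11]                 # assumed expected returns
cov = [[0.04, 0.01, 0.00],
       [0.01, 0.02, 0.00],
       [0.00, 0.00, 0.09]]              # assumed covariance matrix
rf = 0.02
w_tan = tangency_weights(mu, cov, rf)
alpha = 0.6                             # fraction allocated to the risky portfolio
print(w_tan, alpha * w_tan, 1 - alpha)  # tangency weights, mixed allocation, risk-free share
```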
The role of ensemble post-processing for modeling the ensemble tail
NASA Astrophysics Data System (ADS)
Van De Vyver, Hans; Van Schaeybroeck, Bert; Vannitsem, Stéphane
2016-04-01
Over the past decades, the numerical weather prediction community has witnessed a paradigm shift from deterministic to probabilistic forecast and state estimation (Buizza and Leutbecher, 2015; Buizza et al., 2008), in an attempt to quantify the uncertainties associated with initial-condition and model errors. An important benefit of a probabilistic framework is the improved prediction of extreme events. However, one may ask to what extent such model estimates contain information on the occurrence probability of extreme events and how this information can be optimally extracted. Different approaches have been proposed and applied to real-world systems which, based on extreme value theory, allow the estimation of extreme-event probabilities conditional on forecasts and state estimates (Ferro, 2007; Friederichs, 2010). Using ensemble predictions generated with a model of low dimensionality, a thorough investigation is presented quantifying the change of predictability of extreme events associated with ensemble post-processing and other influencing factors, including the finite ensemble size, lead time, model assumptions, and the use of different covariates (ensemble mean, maximum, spread, ...) for modeling the tail distribution. Tail modeling is performed by deriving extreme-quantile estimates using a peak-over-threshold representation (generalized Pareto distribution) or quantile regression. Common ensemble post-processing methods aim to improve mostly the ensemble mean and spread of a raw forecast (Van Schaeybroeck and Vannitsem, 2015). Conditional tail modeling, on the other hand, is a post-processing in itself, focusing on the tails only. Therefore, it is unclear how applying ensemble post-processing prior to conditional tail modeling impacts the skill of extreme-event predictions. This work investigates this question in detail. Buizza, Leutbecher, and Isaksen, 2008: Potential use of an ensemble of analyses in the ECMWF Ensemble Prediction System, Q. J. R. Meteorol. Soc. 134: 2051-2066. Buizza and Leutbecher, 2015: The forecast skill horizon, Q. J. R. Meteorol. Soc. 141: 3366-3382. Ferro, 2007: A probability model for verifying deterministic forecasts of extreme events. Weather and Forecasting 22 (5), 1089-1100. Friederichs, 2010: Statistical downscaling of extreme precipitation events using extreme value theory. Extremes 13, 109-132. Van Schaeybroeck and Vannitsem, 2015: Ensemble post-processing using member-by-member approaches: theoretical aspects. Q. J. R. Meteorol. Soc. 141: 807-818.
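A minimal sketch of the peaks-over-threshold step mentioned above, fitting a generalized Pareto distribution to threshold exceedances with scipy; real applications condition the tail parameters on ensemble covariates (mean, spread, ...), whereas this unconditional fit on synthetic data is only illustrative.

```python
# Minimal sketch: peaks-over-threshold tail modeling with a generalized Pareto distribution.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)
errors = rng.gumbel(loc=0.0, scale=1.0, size=5000)   # hypothetical verification data

threshold = np.quantile(errors, 0.95)
exceedances = errors[errors > threshold] - threshold

shape, loc, scale = genpareto.fit(exceedances, floc=0.0)        # GPD fit to the tail
level_99 = threshold + genpareto.ppf(0.99, shape, loc=0.0, scale=scale)
print(round(shape, 3), round(scale, 3), round(level_99, 3))     # tail shape, scale, extreme quantile
```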
Cognitive performance modeling based on general systems performance theory.
Kondraske, George V
2010-01-01
General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).
Harold, Meredith Poore; Barlow, Steven M
2013-02-01
The vocalizations and jaw kinematics of 30 infants aged 6-8 months were recorded using a Motion Analysis System and audiovisual technologies. This study represents the first attempt to determine the effect of play environment on infants' rate of vocalization and jaw movement. Four play conditions were compared: watching videos, social contingent reinforcement and vocal modeling with an adult, playing alone with small toys, and playing alone with large toys. The fewest vocalizations and spontaneous movement were observed when infants were watching videos or interacting with an adult. Infants vocalized most when playing with large toys. The small toys, which naturally elicited gross motor movement (e.g., waving, banging, shaking), educed fewer vocalizations. This study was also the first to quantify the kinematics of vocalized and non-vocalized jaw movements of 6-8 month-old infants. Jaw kinematics did not differentiate infants who produced canonical syllables from those who did not. All infants produced many jaw movements without vocalization. However, during vocalization, infants were unlikely to move their jaw. This contradicts current theories that infant protophonic vocalizations are jaw-dominant. Results of the current study can inform socio-linguistic and kinematic theories of canonical babbling. Copyright © 2012 Elsevier Inc. All rights reserved.
Probing the BSM physics with CMB precision cosmology: an application to supersymmetry
NASA Astrophysics Data System (ADS)
Dalianis, Ioannis; Watanabe, Yuki
2018-02-01
The cosmic history before Big Bang Nucleosynthesis (BBN) is largely determined by physics that operates beyond the Standard Model (BSM) of particle physics, and it is poorly constrained observationally. Ongoing and future precision measurements of the CMB observables can provide us with significant information about the pre-BBN era and hence possibly test the cosmological predictions of different BSM scenarios. Supersymmetry is a particularly well-motivated BSM theory, and different supersymmetry breaking schemes often require different cosmic histories, with specific reheating temperatures or low entropy production, in order to be cosmologically viable. In this paper we quantify the effects of the possible alternative cosmic histories on the n_s and r CMB observables, assuming a generic non-thermal stage after cosmic inflation. We analyze TeV and especially multi-TeV supersymmetry breaking schemes assuming the neutralino and gravitino dark matter scenarios. We complement our analysis by considering the Starobinsky R^2 inflation model to exemplify the improved CMB predictions that a unified description of the early-universe cosmic evolution yields. Our analysis underlines the importance of the CMB precision measurements, which can be viewed, to some extent, as complementary to the laboratory experimental searches for supersymmetry or other BSM theories.
Finke, Kathrin; Schwarzkopf, Wolfgang; Müller, Ulrich; Frodl, Thomas; Müller, Hermann J; Schneider, Werner X; Engel, Rolf R; Riedel, Michael; Möller, Hans-Jürgen; Hennig-Fast, Kristina
2011-11-01
Attention deficit hyperactivity disorder (ADHD) persists frequently into adulthood. The decomposition of endophenotypes by means of experimental neuro-cognitive assessment has the potential to improve diagnostic assessment, evaluation of treatment response, and disentanglement of genetic and environmental influences. We assessed four parameters of attentional capacity and selectivity derived from simple psychophysical tasks (verbal report of briefly presented letter displays) and based on a "theory of visual attention." These parameters are mathematically independent, quantitative measures, and previous studies have shown that they are highly sensitive for subtle attention deficits. Potential reductions of attentional capacity, that is, of perceptual processing speed and working memory storage capacity, were assessed with a whole report paradigm. Furthermore, possible pathologies of attentional selectivity, that is, selection of task-relevant information and bias in the spatial distribution of attention, were measured with a partial report paradigm. A group of 30 unmedicated adult ADHD patients and a group of 30 demographically matched healthy controls were tested. ADHD patients showed significant reductions of working memory storage capacity of a moderate to large effect size. Perceptual processing speed, task-based, and spatial selection were unaffected. The results imply a working memory deficit as an important source of behavioral impairments. The theory of visual attention parameter working memory storage capacity might constitute a quantifiable and testable endophenotype of ADHD.
Pangenesis as a source of new genetic information. The history of a now disproven theory.
Bergman, Gerald
2006-01-01
Evolution is based on natural selection of existing biological phenotypic traits. Natural selection can only eliminate traits. It cannot create new ones, requiring a theory to explain the origin of new genetic information. The theory of pangenesis was a major attempt to explain the source of new genetic information required to produce phenotypic variety. This theory, advocated by Darwin as the main source of genetic variety, has now been empirically disproved. It is currently a theory mainly of interest to science historians.
Sandars, John; Patel, Rakesh S; Goh, Poh Sun; Kokatailo, Patricia K; Lafferty, Natalie
2015-01-01
There is an increasing use of technology for teaching and learning in medical education but often the use of educational theory to inform the design is not made explicit. The educational theories, both normative and descriptive, used by medical educators determine how the technology is intended to facilitate learning and may explain why some interventions with technology may be less effective compared with others. The aim of this study is to highlight the importance of medical educators making explicit the educational theories that inform their design of interventions using technology. The use of illustrative examples of the main educational theories to demonstrate the importance of theories informing the design of interventions using technology. Highlights the use of educational theories for theory-based and realistic evaluations of the use of technology in medical education. An explicit description of the educational theories used to inform the design of an intervention with technology can provide potentially useful insights into why some interventions with technology are more effective than others. An explicit description is also an important aspect of the scholarship of using technology in medical education.
Unified framework for information integration based on information geometry
Oizumi, Masafumi; Amari, Shun-ichi
2016-01-01
Assessment of causal influences is a ubiquitous and important subject across diverse research fields. Drawn from consciousness studies, integrated information is a measure that defines integration as the degree of causal influences among elements. Whereas pairwise causal influences between elements can be quantified with existing methods, quantifying multiple influences among many elements poses two major mathematical difficulties. First, overestimation occurs due to interdependence among influences if each influence is separately quantified in a part-based manner and then simply summed over. Second, it is difficult to isolate causal influences while avoiding noncausal confounding influences. To resolve these difficulties, we propose a theoretical framework based on information geometry for the quantification of multiple causal influences with a holistic approach. We derive a measure of integrated information, which is geometrically interpreted as the divergence between the actual probability distribution of a system and an approximated probability distribution where causal influences among elements are statistically disconnected. This framework provides intuitive geometric interpretations harmonizing various information theoretic measures in a unified manner, including mutual information, transfer entropy, stochastic interaction, and integrated information, each of which is characterized by how causal influences are disconnected. In addition to the mathematical assessment of consciousness, our framework should help to analyze causal relationships in complex systems in a complete and hierarchical manner. PMID:27930289
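As a minimal numerical stand-in for the idea of measuring the divergence between an actual joint distribution and a "disconnected" approximation, the sketch below computes, for a bivariate Gaussian, the KL divergence between the joint distribution and the product of its marginals (which reduces to the mutual information). This is only a toy analogue of the information-geometric construction in the abstract; the correlation value and variable names are assumptions.

```python
import numpy as np

def gaussian_kl(mean0, cov0, mean1, cov1):
    """KL divergence D(N0 || N1) between two multivariate Gaussians."""
    k = len(mean0)
    inv1 = np.linalg.inv(cov1)
    diff = np.asarray(mean1) - np.asarray(mean0)
    return 0.5 * (
        np.trace(inv1 @ cov0)
        + diff @ inv1 @ diff
        - k
        + np.log(np.linalg.det(cov1) / np.linalg.det(cov0))
    )

# Joint distribution of two coupled elements (assumed correlation 0.6).
rho = 0.6
cov_joint = np.array([[1.0, rho], [rho, 1.0]])
# "Disconnected" approximation: same marginals, influences statistically cut.
cov_disconnected = np.diag(np.diag(cov_joint))

divergence = gaussian_kl([0, 0], cov_joint, [0, 0], cov_disconnected)
# For Gaussians this equals the mutual information -0.5 * ln(1 - rho**2).
print(divergence, -0.5 * np.log(1 - rho**2))
```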
A framework for quantifying and optimizing the value of seismic monitoring of infrastructure
NASA Astrophysics Data System (ADS)
Omenzetter, Piotr
2017-04-01
This paper outlines a framework for quantifying and optimizing the value of information from structural health monitoring (SHM) technology deployed on large infrastructure, which may sustain damage in a series of earthquakes (the main shock and the aftershocks). The evolution of the damage state of the infrastructure, without or with SHM, is represented as a time-dependent, stochastic, discrete-state, observable and controllable nonlinear dynamical system. Pre-posterior Bayesian analysis and a decision tree are used for quantifying and optimizing the value of SHM information. An optimality problem is then formulated: how to decide on the adoption of SHM, and how to optimally manage the usage and operations of the possibly damaged infrastructure and its repair schedule using the information from SHM. The objective function to minimize is the expected total cost or risk.
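A minimal numerical sketch of the pre-posterior, decision-tree idea mentioned above: compare the expected cost of the best action without monitoring against the expected cost when the action can be conditioned on an imperfect SHM outcome. The damage probability, costs and detection characteristics below are invented for illustration and are not the paper's model.

```python
# Minimal pre-posterior sketch: expected cost with vs. without SHM information.
p_damaged = 0.2                    # prior probability of a damaged state (assumed)
c_repair, c_failure = 1.0, 20.0    # repair cost and failure cost (assumed units)

# Without SHM: pick the single action with the lower expected cost.
cost_without = min(c_repair, p_damaged * c_failure)

# SHM detection characteristics (assumed): P(alarm | damaged), P(alarm | intact).
p_alarm_damaged, p_alarm_intact = 0.9, 0.1

cost_with = 0.0
for alarm in (True, False):
    like_damaged = p_alarm_damaged if alarm else 1 - p_alarm_damaged
    like_intact = p_alarm_intact if alarm else 1 - p_alarm_intact
    p_outcome = like_damaged * p_damaged + like_intact * (1 - p_damaged)
    post_damaged = like_damaged * p_damaged / p_outcome        # Bayes' rule
    # Optimal action conditional on this SHM outcome (decision-tree branch).
    cost_with += p_outcome * min(c_repair, post_damaged * c_failure)

print(f"value of SHM information = {cost_without - cost_with:.3f} (expected cost units)")
```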
Modeling noisy resonant system response
NASA Astrophysics Data System (ADS)
Weber, Patrick Thomas; Walrath, David Edwin
2017-02-01
In this paper, a theory-based model replicating empirical acoustic resonant signals is presented and studied to understand sources of noise present in acoustic signals. Statistical properties of empirical signals are quantified and a noise amplitude parameter, which models frequency and amplitude-based noise, is created, defined, and presented. This theory-driven model isolates each phenomenon and allows for parameters to be independently studied. Using seven independent degrees of freedom, this model will accurately reproduce qualitative and quantitative properties measured from laboratory data. Results are presented and demonstrate success in replicating qualitative and quantitative properties of experimental data.
A non-asymptotic homogenization theory for periodic electromagnetic structures.
Tsukerman, Igor; Markel, Vadim A
2014-08-08
Homogenization of electromagnetic periodic composites is treated as a two-scale problem and solved by approximating the fields on both scales with eigenmodes that satisfy Maxwell's equations and boundary conditions as accurately as possible. Built into this homogenization methodology is an error indicator whose value characterizes the accuracy of homogenization. The proposed theory allows one to define not only bulk, but also position-dependent material parameters (e.g. in proximity to a physical boundary) and to quantify the trade-off between the accuracy of homogenization and its range of applicability to various illumination conditions.
Applying Information Processing Theory to Supervision: An Initial Exploration
ERIC Educational Resources Information Center
Tangen, Jodi L.; Borders, L. DiAnne
2017-01-01
Although clinical supervision is an educational endeavor (Borders & Brown, [Borders, L. D., 2005]), many scholars neglect theories of learning in working with supervisees. The authors describe 1 learning theory--information processing theory (Atkinson & Shiffrin, 1968, 1971; Schunk, 2016)--and the ways its associated interventions may…
Healey, Emma; Taylor, Natalie; Greening, Sian; Wakefield, Claire E; Warwick, Linda; Williams, Rachel; Tucker, Kathy
2017-12-01
Purpose: Recommendations for BRCA1 and BRCA2 mutation carriers to disseminate information to at-risk relatives pose significant challenges. This study aimed to quantify family dissemination, to explain the differences between fully informed families (all relatives informed verbally or in writing) and partially informed families (at least one relative uninformed), and to identify dissemination barriers. Methods: BRCA1 and BRCA2 mutation carriers identified from four Australian hospitals (n=671) were invited to participate in the study. Distress was measured at consent using the Kessler psychological distress scale (K10). A structured telephone interview was used to assess the informed status of relatives, geographical location of relatives, and dissemination barriers. Family dissemination was quantified, and fully versus partially informed family differences were examined. Dissemination barriers were thematically coded and counted. Results: A total of 165 families participated. Information had been disseminated to 81.1% of relatives. At least one relative had not been informed in 52.7% of families; of the uninformed relatives, 4.3% were first-degree relatives, 27.0% were second-degree relatives, and 62.0% were cousins. Partially informed families were significantly larger than fully informed families, had fewer relatives living in close proximity, and exhibited higher levels of distress. The most commonly recorded barrier to dissemination was loss of contact. Conclusion: Larger, geographically diverse families have greater difficulty disseminating BRCA mutation risk information to all relatives. Understanding these challenges can inform future initiatives for communication, follow-up and support.
Relaxation spectra of binary blends: Extension of the Doi-Edwards theory
NASA Astrophysics Data System (ADS)
Tchesnokov, M. A.; Molenaar, J.; Slot, J. J. M.; Stepanyan, R.
2007-10-01
A molecular model is presented which allows the calculation of the stress relaxation function G for binary blends consisting of two monodisperse samples with arbitrary molecular weights. It extends the Doi-Edwards reptation theory (Doi M. and Edwards S. F., The Theory of Polymer Dynamics (Oxford Press, New York) 1986) to highly polydisperse melts by including constraint release (CR) and thermal fluctuations (CLF), yet making use of the same input parameters. The model reveals an explicit nonlinear dependence of CR frequency in the blend on the blend's molecular weight distribution (MWD). It provides an alternative way to quantify polydisperse systems compared to the widely used "double-reptation" theories. The results of the present model are in a good agreement with the experimental data given in Rubinstein M. and Colby R. H., J. Chem. Phys., 89 (1988) 5291.
Martin, James E.; Solis, Kyle Jameson
2015-08-07
We recently reported two methods of inducing vigorous fluid vorticity in magnetic particle suspensions. The first method employs symmetry-breaking rational fields. These fields are comprised of two orthogonal ac components whose frequencies form a rational number and an orthogonal dc field that breaks the symmetry of the biaxial ac field to create the parity required to induce deterministic vorticity. The second method is based on rational triads, which are fields comprised of three orthogonal ac components whose frequency ratios are rational (e.g., 1 : 2 : 3). For each method a symmetry theory has been developed that enables the prediction of the direction and sign of vorticity as functions of the field frequencies and phases. However, this theory has its limitations. It only applies to those particular phase angles that give rise to fields whose Lissajous plots, or principal 2-d projections thereof, have a high degree of symmetry. Nor can symmetry theory provide a measure of the magnitude of the torque density induced by the field. In this paper a functional of the multiaxial magnetic field is proposed that not only is consistent with all of the predictions of the symmetry theories, but also quantifies the torque density. This functional can be applied to fields whose Lissajous plots lack symmetry and can thus be used to predict a variety of effects and trends that cannot be predicted from the symmetry theories. These trends include the dependence of the magnitude of the torque density on the various frequency ratios, the unexpected reversal of flow with increasing dc field amplitude for certain symmetry-breaking fields, and the existence of off-axis vorticity for rational triads, such as 1 : 3 : 5, that do not have the symmetry required to analyze by symmetry theory. As a result, experimental data are given that show the degree to which this functional is successful in predicting observed trends.
An information theory account of cognitive control
Fan, Jin
2014-01-01
Our ability to efficiently process information and generate appropriate responses depends on the processes collectively called cognitive control. Despite a considerable focus in the literature on the cognitive control of information processing, neural mechanisms underlying control are still unclear, and have not been characterized by considering the quantity of information to be processed. A novel and comprehensive account of cognitive control is proposed using concepts from information theory, which is concerned with communication system analysis and the quantification of information. This account treats the brain as an information-processing entity where cognitive control and its underlying brain networks play a pivotal role in dealing with conditions of uncertainty. This hypothesis and theory article justifies the validity and properties of such an account and relates experimental findings to the frontoparietal network under the framework of information theory. PMID:25228875
From information theory to quantitative description of steric effects.
Alipour, Mojtaba; Safari, Zahra
2016-07-21
Immense efforts have been made in the literature to apply the information theory descriptors for investigating the electronic structure theory of various systems. In the present study, the information theoretic quantities, such as Fisher information, Shannon entropy, Onicescu information energy, and Ghosh-Berkowitz-Parr entropy, have been used to present a quantitative description for one of the most widely used concepts in chemistry, namely the steric effects. Taking the experimental steric scales for the different compounds as benchmark sets, there are reasonable linear relationships between the experimental scales of the steric effects and theoretical values of steric energies calculated from information theory functionals. Perusing the results obtained from the information theoretic quantities with the two representations of electron density and shape function, the Shannon entropy has the best performance for the purpose. On the one hand, the usefulness of considering the contributions of functional groups steric energies and geometries, and on the other hand, dissecting the effects of both global and local information measures simultaneously have also been explored. Furthermore, the utility of the information functionals for the description of steric effects in several chemical transformations, such as electrophilic and nucleophilic reactions and host-guest chemistry, has been analyzed. The functionals of information theory correlate remarkably with the stability of systems and experimental scales. Overall, these findings show that the information theoretic quantities can be introduced as quantitative measures of steric effects and provide further evidences of the quality of information theory toward helping theoreticians and experimentalists to interpret different problems in real systems.
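For orientation, the sketch below evaluates the Shannon entropy S = -∫ ρ(r) ln ρ(r) d³r of a normalized model electron density on a radial grid, the kind of information-theoretic quantity discussed above. The hydrogen-like 1s density and the grid are illustrative assumptions, not the functionals or benchmark sets used in the paper.

```python
import numpy as np

# Radial grid (atomic units) and a normalized hydrogen-like 1s density as a stand-in.
r = np.linspace(1e-6, 20.0, 20000)
dr = r[1] - r[0]
Z = 1.0
rho = (Z**3 / np.pi) * np.exp(-2.0 * Z * r)      # rho(r) for a 1s orbital

weight = 4.0 * np.pi * r**2                      # spherical volume element
norm = np.sum(rho * weight) * dr                 # should be close to 1
shannon = -np.sum(rho * np.log(rho) * weight) * dr   # S = -int rho ln rho d^3r

exact = 3.0 + np.log(np.pi / Z**3)               # closed form for this density
print(f"norm = {norm:.4f}, Shannon entropy S = {shannon:.4f} (exact {exact:.4f})")
```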
Rauscher, Emily A; Hesse, Colin
2014-01-01
Although the importance of being knowledgeable of one's family health history is widely known, very little research has investigated how families communicate about this important topic. This study investigated how young adults seek information from parents about family health history. The authors used the Theory of Motivated Information Management as a framework to understand the process of uncertainty discrepancy and emotion in seeking information about family health history. Results of this study show the Theory of Motivated Information Management to be a good model to explain the process young adults go through in deciding to seek information from parents about family health history. Results also show that emotions other than anxiety can be used with success in the Theory of Motivated Information Management framework.
Properties and relative measure for quantifying quantum synchronization
NASA Astrophysics Data System (ADS)
Li, Wenlin; Zhang, Wenzhao; Li, Chong; Song, Heshan
2017-07-01
Although quantum synchronization phenomena and corresponding measures have been widely discussed recently, it is still an open question how to characterize directly the influence of nonlocal correlation, which is the key distinction for identifying classical and quantum synchronizations. In this paper, we present basic postulates for quantifying quantum synchronization based on the related theory in Mari's work [Phys. Rev. Lett. 111, 103605 (2013), 10.1103/PhysRevLett.111.103605], and we give a general formula of a quantum synchronization measure with clear physical interpretations. By introducing Pearson's parameter, we show that the obvious characteristics of our measure are the relativity and monotonicity. As an example, the measure is applied to describe synchronization among quantum optomechanical systems under a Markovian bath. We also show the potential by quantifying generalized synchronization and discrete variable synchronization with this measure.
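As a classical-signal analogue of the Pearson-type synchronization indicator mentioned above, the sketch below computes a sliding-window Pearson correlation between two noisy oscillator trajectories that drift into phase. It is only meant to show the kind of relative, monotonic measure being discussed; it is not the quantum synchronization measure defined in the paper, and the toy signals are assumptions.

```python
import numpy as np

def sliding_pearson(a, b, window):
    """Pearson correlation of two signals over consecutive windows."""
    out = []
    for start in range(0, len(a) - window, window):
        seg_a, seg_b = a[start:start + window], b[start:start + window]
        out.append(np.corrcoef(seg_a, seg_b)[0, 1])
    return np.array(out)

t = np.linspace(0.0, 50.0, 5000)
rng = np.random.default_rng(5)
# Two oscillators whose phase gap decays in time (toy model of synchronization).
phase_gap = 2.0 * np.exp(-t / 20.0)
x = np.sin(2.0 * np.pi * t) + 0.3 * rng.normal(size=t.size)
y = np.sin(2.0 * np.pi * t + phase_gap) + 0.3 * rng.normal(size=t.size)

corr = sliding_pearson(x, y, window=500)
print(np.round(corr, 2))   # correlation rises toward 1 as the oscillators synchronize
```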
NREL, Johns Hopkins SAIS Develop Method to Quantify Life Cycle Land Use of Electricity from Natural Gas
News Release, October 2, 2017: A case study provides quantifiable information on the life cycle land use of generating electricity from natural gas.
Fellinger, Michael R.; Hector, Louis G.; Trinkle, Dallas R.
2016-10-28
Here, we present an efficient methodology for computing solute-induced changes in lattice parameters and elastic stiffness coefficients Cij of single crystals using density functional theory. We also introduce a solute strain misfit tensor that quantifies how solutes change lattice parameters due to the stress they induce in the host crystal. Solutes modify the elastic stiffness coefficients through volumetric changes and by altering chemical bonds. We compute each of these contributions to the elastic stiffness coefficients separately, and verify that their sum agrees with changes in the elastic stiffness coefficients computed directly using fully optimized supercells containing solutes. Computing the two elastic stiffness contributions separately is more computationally efficient and provides more information on solute effects than the direct calculations. We compute the solute dependence of polycrystalline averaged shear and Young's moduli from the solute dependence of the single-crystal Cij. We then apply this methodology to substitutional Al, B, Cu, Mn, Si solutes and octahedral interstitial C and N solutes in bcc Fe. Comparison with experimental data indicates that our approach accurately predicts solute-induced changes in the lattice parameter and elastic coefficients. The computed data can be used to quantify solute-induced changes in mechanical properties such as strength and ductility, and can be incorporated into mesoscale models to improve their predictive capabilities.
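As context for the polycrystalline averaging step mentioned above, the sketch below computes Voigt-Reuss-Hill estimates of the bulk, shear and Young's moduli of a cubic crystal from its three independent Cij, and shows how a small shift in the Cij translates into a change in the averaged moduli. The numerical values are illustrative placeholders in the right range for bcc Fe, not the paper's computed data, and the VRH average is a standard approximation rather than the authors' exact procedure.

```python
# Voigt-Reuss-Hill averages for a cubic crystal from C11, C12, C44 (in GPa).
def vrh_moduli(c11, c12, c44):
    bulk = (c11 + 2.0 * c12) / 3.0                      # identical in Voigt and Reuss
    g_voigt = (c11 - c12 + 3.0 * c44) / 5.0
    g_reuss = 5.0 * (c11 - c12) * c44 / (4.0 * c44 + 3.0 * (c11 - c12))
    shear = 0.5 * (g_voigt + g_reuss)                   # Hill average
    young = 9.0 * bulk * shear / (3.0 * bulk + shear)
    return bulk, shear, young

# Illustrative elastic constants (assumed values, roughly bcc-Fe-like).
B, G, E = vrh_moduli(c11=240.0, c12=135.0, c44=120.0)
print(f"B = {B:.0f} GPa, G = {G:.0f} GPa, E = {E:.0f} GPa")

# Solute effect: re-evaluate with slightly shifted Cij to see the change in moduli.
B2, G2, E2 = vrh_moduli(c11=238.0, c12=136.0, c44=118.0)
print(f"dG = {G2 - G:.2f} GPa, dE = {E2 - E:.2f} GPa")
```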
Robust quantum network architectures and topologies for entanglement distribution
NASA Astrophysics Data System (ADS)
Das, Siddhartha; Khatri, Sumeet; Dowling, Jonathan P.
2018-01-01
Entanglement distribution is a prerequisite for several important quantum information processing and computing tasks, such as quantum teleportation, quantum key distribution, and distributed quantum computing. In this work, we focus on two-dimensional quantum networks based on optical quantum technologies using dual-rail photonic qubits for the building of a fail-safe quantum internet. We lay out a quantum network architecture for entanglement distribution between distant parties using a Bravais lattice topology, with the technological constraint that quantum repeaters equipped with quantum memories are not easily accessible. We provide a robust protocol for simultaneous entanglement distribution between two distant groups of parties on this network. We also discuss a memory-based quantum network architecture that can be implemented on networks with an arbitrary topology. We examine networks with bow-tie lattice and Archimedean lattice topologies and use percolation theory to quantify the robustness of the networks. In particular, we provide figures of merit on the loss parameter of the optical medium that depend only on the topology of the network and quantify the robustness of the network against intermittent photon loss and intermittent failure of nodes. These figures of merit can be used to compare the robustness of different network topologies in order to determine the best topology in a given real-world scenario, which is critical in the realization of the quantum internet.
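To illustrate the percolation-theory step described above in the simplest possible setting, the sketch below estimates, for a square-lattice network in which each link survives independently with probability p, how often two distant corner nodes remain connected. The lattice size, the loss model and the choice of nodes are assumptions for illustration, not the bow-tie or Archimedean architectures analyzed in the paper.

```python
import numpy as np

def corners_connected(n, p_link, rng):
    """Bond percolation on an n x n grid: are (0,0) and (n-1,n-1) still connected?"""
    parent = list(range(n * n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for r in range(n):
        for c in range(n):
            if c + 1 < n and rng.random() < p_link:   # horizontal link survives
                union(r * n + c, r * n + c + 1)
            if r + 1 < n and rng.random() < p_link:   # vertical link survives
                union(r * n + c, (r + 1) * n + c)
    return find(0) == find(n * n - 1)

rng = np.random.default_rng(1)
for p in (0.3, 0.5, 0.7):
    prob = np.mean([corners_connected(20, p, rng) for _ in range(200)])
    print(f"link survival p = {p:.1f}: corner-to-corner connectivity ~ {prob:.2f}")
```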
NASA Astrophysics Data System (ADS)
Adera, S.; Larsen, L.; Levy, M. C.; Thompson, S. E.
2017-12-01
In the Brazilian rainforest-savanna transition zone, deforestation has the potential to significantly affect rainfall by disrupting rainfall recycling, the process by which regional evapotranspiration contributes to regional rainfall. Understanding rainfall recycling in this region is important not only for sustaining Amazon and Cerrado ecosystems, but also for cattle ranching, agriculture, hydropower generation, and drinking water management. Simulations in previous studies suggest complex, scale-dependent interactions between forest cover connectivity and rainfall. For example, the size and distribution of deforested patches has been found to affect rainfall quantity and spatial distribution. Here we take an empirical approach, using the spatial connectivity of rainfall as an indicator of rainfall recycling, to ask: as forest cover connectivity decreased from 1981 - 2015, how did the spatial connectivity of rainfall change in the Brazilian rainforest-savanna transition zone? We use satellite forest cover and rainfall data covering this period of intensive forest cover loss in the region (forest cover from the Hansen Global Forest Change dataset; rainfall from the Climate Hazards Infrared Precipitation with Stations dataset). Rainfall spatial connectivity is quantified using transfer entropy, a metric from information theory, and summarized using network statistics. Networks of connectivity are quantified for paired deforested and non-deforested regions before deforestation (1981-1995) and during/after deforestation (2001-2015). Analyses reveal a decline in spatial connectivity networks of rainfall following deforestation.
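As a minimal sketch of the information-theoretic connectivity measure described above, the code below estimates the transfer entropy from one binned rainfall series to another using plug-in probabilities from joint histograms. The synthetic series, the equal-frequency binning and the single-step history are simplifying assumptions and not the study's actual estimator or data.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, n_bins=4):
    """Plug-in estimate of TE(X -> Y) in bits with history length 1."""
    # Discretize both series into equal-frequency bins.
    xb = np.digitize(x, np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1]))
    yb = np.digitize(y, np.quantile(y, np.linspace(0, 1, n_bins + 1)[1:-1]))

    triples = Counter(zip(yb[1:], yb[:-1], xb[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(yb[:-1], xb[:-1]))          # (y_t, x_t)
    pairs_yy = Counter(zip(yb[1:], yb[:-1]))           # (y_{t+1}, y_t)
    singles = Counter(yb[:-1])                         # y_t
    n = len(yb) - 1

    te = 0.0
    for (y1, y0, x0), count in triples.items():
        p_joint = count / n
        p_cond_full = count / pairs_yx[(y0, x0)]       # p(y_{t+1} | y_t, x_t)
        p_cond_hist = pairs_yy[(y1, y0)] / singles[y0] # p(y_{t+1} | y_t)
        te += p_joint * np.log2(p_cond_full / p_cond_hist)
    return te

rng = np.random.default_rng(2)
x = rng.gamma(2.0, 1.0, size=3000)                              # "upwind" rainfall proxy
y = 0.6 * np.roll(x, 1) + 0.4 * rng.gamma(2.0, 1.0, size=3000)  # lagged response
print(f"TE(x -> y) ~ {transfer_entropy(x, y):.3f} bits, "
      f"TE(y -> x) ~ {transfer_entropy(y, x):.3f} bits")
```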
Quantifying the value of redundant measurements at GCOS Reference Upper-Air Network sites
Madonna, F.; Rosoldi, M.; Güldner, J.; ...
2014-11-19
The potential for measurement redundancy to reduce uncertainty in atmospheric variables has not been investigated comprehensively for climate observations. We evaluated the usefulness of entropy and mutual correlation concepts, as defined in information theory, for quantifying random uncertainty and redundancy in time series of the integrated water vapour (IWV) and water vapour mixing ratio profiles provided by five highly instrumented GRUAN (GCOS, Global Climate Observing System, Reference Upper-Air Network) stations in 2010–2012. Results show that the random uncertainties on the IWV measured with radiosondes, global positioning system, microwave and infrared radiometers, and Raman lidar measurements differed by less than 8%. Comparisons of time series of IWV content from ground-based remote sensing instruments with in situ soundings showed that microwave radiometers have the highest redundancy with the IWV time series measured by radiosondes and therefore the highest potential to reduce the random uncertainty of the radiosonde time series. Moreover, the random uncertainty of a time series from one instrument can be reduced by ~ 60% by constraining the measurements with those from another instrument. The best reduction of random uncertainty is achieved by conditioning Raman lidar measurements with microwave radiometer measurements. In conclusion, specific instruments are recommended for atmospheric water vapour measurements at GRUAN sites. This approach can be applied to the study of redundant measurements for other climate variables.
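A toy version of the entropy-based redundancy idea above: estimate the Shannon entropy of one IWV-like series, its conditional entropy given a second (redundant) series, and the fractional reduction of random uncertainty obtained by conditioning on the second instrument. The histogram binning and the synthetic "radiosonde" and "radiometer" series are assumptions, not GRUAN data or the paper's estimator.

```python
import numpy as np

def entropies(x, y, n_bins=8):
    """Plug-in Shannon entropy H(X) and conditional entropy H(X|Y) in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=n_bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1)
    p_y = p_xy.sum(axis=0)
    h_x = -np.sum(p_x[p_x > 0] * np.log2(p_x[p_x > 0]))
    h_y = -np.sum(p_y[p_y > 0] * np.log2(p_y[p_y > 0]))
    h_joint = -np.sum(p_xy[p_xy > 0] * np.log2(p_xy[p_xy > 0]))
    return h_x, h_joint - h_y          # H(X), H(X|Y) = H(X,Y) - H(Y)

rng = np.random.default_rng(3)
truth = rng.normal(20.0, 5.0, size=5000)                 # synthetic IWV "truth"
radiosonde = truth + rng.normal(0.0, 1.0, size=5000)     # two instruments observing
radiometer = truth + rng.normal(0.0, 1.2, size=5000)     # the same signal (assumed)

h, h_cond = entropies(radiosonde, radiometer)
print(f"H = {h:.2f} bits, H(X|Y) = {h_cond:.2f} bits, "
      f"uncertainty reduction ~ {100 * (1 - h_cond / h):.0f}%")
```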
NASA Astrophysics Data System (ADS)
Chou, Cheng-Ying; Anastasio, Mark A.
2016-04-01
In propagation-based X-ray phase-contrast (PB XPC) imaging, the measured image contains a mixture of absorption- and phase-contrast. To obtain separate images of the projected absorption and phase (i.e., refractive) properties of a sample, phase retrieval methods can be employed. It has been suggested that phase-retrieval can always improve image quality in PB XPC imaging. However, when objective (task-based) measures of image quality are employed, this is not necessarily true and phase retrieval can be detrimental. In this work, signal detection theory is utilized to quantify the performance of a Hotelling observer (HO) for detecting a known signal in a known background. Two cases are considered. In the first case, the HO acts directly on the measured intensity data. In the second case, the HO acts on either the retrieved phase or absorption image. We demonstrate that the performance of the HO is superior when acting on the measured intensity data. The loss of task-specific information induced by phase-retrieval is quantified by computing the efficiency of the HO as the ratio of the test statistic signal-to-noise ratio (SNR) for the two cases. The effect of the system geometry on this efficiency is systematically investigated. Our findings confirm that phase-retrieval can impair signal detection performance in XPC imaging.
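A minimal numerical sketch of the Hotelling-observer comparison described above: compute the test-statistic SNR from the difference of mean data vectors and the data covariance for two hypothetical data representations, and report the squared SNR ratio as an efficiency. The signal profile, the white-noise intensity model and the toy "retrieval" that discards samples and correlates the noise are all assumptions, not the paper's imaging models.

```python
import numpy as np

def hotelling_snr(delta_signal, covariance):
    """Hotelling observer SNR: sqrt(ds^T K^{-1} ds)."""
    return float(np.sqrt(delta_signal @ np.linalg.solve(covariance, delta_signal)))

n = 50                                                       # detector pixels (toy)
delta = np.exp(-0.5 * ((np.arange(n) - 25) / 4.0) ** 2)      # known signal profile

# Case 1: observer acts directly on the measured intensity data (white noise, assumed).
cov_measured = 0.05 * np.eye(n)

# Case 2: observer acts on a processed image; here the processing is modeled, purely
# for illustration, as discarding half the samples and correlating the noise.
keep = np.arange(0, n, 2)
corr = np.fromfunction(lambda i, j: np.exp(-np.abs(i - j) / 3.0),
                       (keep.size, keep.size))
cov_processed = 0.05 * corr

snr_meas = hotelling_snr(delta, cov_measured)
snr_proc = hotelling_snr(delta[keep], cov_processed)
print(f"SNR(measured) = {snr_meas:.2f}, SNR(processed) = {snr_proc:.2f}, "
      f"efficiency = {(snr_proc / snr_meas) ** 2:.2f}")
```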
Using health psychology to help patients: theories of behaviour change.
Barley, Elizabeth; Lawson, Victoria
2016-09-08
Behaviour change theories and related research evidence highlight the complexity of making and sticking to health-related behaviour changes. These theories make explicit factors that influence behaviour change, such as health beliefs, past behaviour, intention, social influences, perceived control and the context of the behaviour. Nurses can use this information to understand why a particular patient may find making recommended health behaviour changes difficult and to determine factors that may help them. This article outlines five well-established theories of behaviour change: the health belief model, the theory of planned behaviour, the stages of change model, self-determination theory, and temporal self-regulation theory. The evidence for interventions that are informed by these theories is then explored and appraised. The extent and quality of evidence varies depending on the type of behaviour and patients targeted, but evidence from randomised controlled trials indicates that interventions informed by theory can result in behaviour change.
ERIC Educational Resources Information Center
Cerveny, Robert P.
This curriculum guide provides an introduction to Management Information Systems (MIS) concepts and techniques for students preparing to develop MISs in professional settings, and to assist in MIS evaluation. According to the guide, students are exposed to concepts drawn from systems theory, information theory, management theory, data base…
Selecting Organization Development Theory from an HRD Perspective
ERIC Educational Resources Information Center
Lynham, Susan A.; Chermack, Thomas J.; Noggle, Melissa A.
2004-01-01
As is true for human resource development (HRD), the field of organization development (OD) draws from numerous disciplines to inform its theory base. However, the identification and selection of theory to inform improved practice remains a challenge and begs the question of what can be used to inform and guide one in the identification and…
Comment on Gallistel: behavior theory and information theory: some parallels.
Nevin, John A
2012-05-01
In this article, Gallistel proposes information theory as an approach to some enduring problems in the study of operant and classical conditioning. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Augustine, Starrlight
2017-03-01
I appreciated the review of motivational considerations for the set-up of the Dynamic Energy Budget (DEB) theory by Jusup et al. [1, section 2]. The authors refer to useful biological literature when illustrating and justifying concepts of homeostasis and maturity.
Probabilistic and spatially variable niches inferred from demography
Jeffrey M. Diez; Itamar Giladi; Robert Warren; H. Ronald Pulliam
2014-01-01
Summary 1. Mismatches between species distributions and habitat suitability are predicted by niche theory and have important implications for forecasting how species may respond to environmental changes. Quantifying these mismatches is challenging, however, due to the high dimensionality of species niches and the large spatial and temporal variability in population...
System identification principles in studies of forest dynamics.
Rolfe A. Leary
1970-01-01
Shows how it is possible to obtain governing equation parameter estimates on the basis of observed system states. The approach used represents a constructive alternative to regression techniques for models expressed as differential equations. This approach allows scientists to more completely quantify knowledge of forest development processes, to express theories in...
Application of Hierarchy Theory to Cross-Scale Hydrologic Modeling of Nutrient Loads
We describe a model called Regional Hydrologic Modeling for Environmental Evaluation (RHyME2) for quantifying annual nutrient loads in stream networks and watersheds. RHyME2 is a cross-scale statistical and process-based water-quality model. The model ...
The Ecological Research Program (ERP) of the EPA Office of Research and Development has the vision of a comprehensive theory and practice for characterizing, quantifying, and valuing ecosystem services and their relationship to human well-being for environmental decision making. ...
Measuring Intervention Effectiveness: The Benefits of an Item Response Theory Approach
ERIC Educational Resources Information Center
McEldoon, Katherine; Cho, Sun-Joo; Rittle-Johnson, Bethany
2012-01-01
Assessing the effectiveness of educational interventions relies on quantifying differences between interventions groups over time in a between-within design. Binary outcome variables (e.g., correct responses versus incorrect responses) are often assessed. Widespread approaches use percent correct on assessments, and repeated measures analysis of…
Peng, Jifeng; Dabiri, John O; Madden, Peter G; Lauder, George V
2007-02-01
Swimming and flying animals generate unsteady locomotive forces by delivering net momentum into the fluid wake. Hence, swimming and flying forces can be quantified by measuring the momentum of animal wakes. A recently developed model provides an approach to empirically deduce swimming and flying forces based on the measurement of velocity and vortex added-mass in the animal wake. The model is contingent on the identification of the vortex boundary in the wake. This paper demonstrates the application of that method to a case study quantifying the instantaneous locomotive forces generated by the pectoral fins of the bluegill sunfish (Lepomis macrochirus Rafinesque), measured using digital particle image velocimetry (DPIV). The finite-time Lyapunov exponent (FTLE) field calculated from the DPIV data was used to determine the wake vortex boundary, according to recently developed fluid dynamics theory. Momentum of the vortex wake and its added-mass were determined and the corresponding instantaneous locomotive forces were quantified at discrete time points during the fin stroke. The instantaneous forces estimated in this study agree in magnitude with the time-averaged forces quantified for the pectoral fin of the same species swimming in similar conditions and are consistent with the observed global motion of the animals. A key result of this study is its suggestion that the dynamical effect of the vortex wake on locomotion is to replace the real animal fin with an 'effective appendage', whose geometry is dictated by the FTLE field and whose interaction with the surrounding fluid is wholly dictated by inviscid concepts from potential flow theory. Benefits and limitations of this new framework for non-invasive instantaneous force measurement are discussed, and its application to comparative biomechanics and engineering studies is suggested.
Daly, Louise; McCarron, Mary; Higgins, Agnes; McCallion, Philip
2013-02-01
This paper presents a theory explaining the processes used by informal carers of people with dementia to manage alterations to their own, and people with dementia's, relationships with and places within their social worlds. Informal carers provide the majority of care to people with dementia. A great deal of international informal dementia care research is available, much of which elucidates the content, impacts and consequences of the informal caring role and the coping mechanisms that carers use. However, the socially situated experiences and processes integral to informal caring in dementia have not yet been robustly accounted for. A classic grounded theory approach was used as it is designed for research enquiries that aim to generate theory illustrating social patterns of action used to address an identified problem. Thirty interviews were conducted with 31 participants between 2006-2008. The theory was conceptualised from the data using the concurrent methods of theoretical sampling, constant comparative analysis, memo writing and theoretical sensitivity. Informal carers' main concern was identified as 'Living on the fringes', which was stimulated by dementia-related stigma and living a different life. The theory of 'Sustaining Place' explains the social pattern of actions employed by informal carers to manage this problem on behalf of themselves and the person with dementia. The theory of 'Sustaining Place' identifies an imperative for nurses, other formal carers and society to engage in actions to support and enable social connectedness, social inclusion and citizenship for informal carers and people with dementia. 'Sustaining Place' facilitates enhanced understanding of the complex and socially situated nature of informal dementia care through its portrayal of informal carers as social agents and can be used to guide nurses to better support those who live with dementia. © 2012 Blackwell Publishing Ltd.
Theory of mind selectively predicts preschoolers’ knowledge-based selective word learning
Brosseau-Liard, Patricia; Penney, Danielle; Poulin-Dubois, Diane
2015-01-01
Children can selectively attend to various attributes of a model, such as past accuracy or physical strength, to guide their social learning. There is a debate regarding whether a relation exists between theory-of-mind skills and selective learning. We hypothesized that high performance on theory-of-mind tasks would predict preference for learning new words from accurate informants (an epistemic attribute), but not from physically strong informants (a non-epistemic attribute). Three- and 4-year-olds (N = 65) completed two selective learning tasks, and their theory of mind abilities were assessed. As expected, performance on a theory-of-mind battery predicted children’s preference to learn from more accurate informants but not from physically stronger informants. Results thus suggest that preschoolers with more advanced theory of mind have a better understanding of knowledge and apply that understanding to guide their selection of informants. This work has important implications for research on children’s developing social cognition and early learning. PMID:26211504
Identifying and applying psychological theory to setting and achieving rehabilitation goals.
Scobbie, Lesley; Wyke, Sally; Dixon, Diane
2009-04-01
Goal setting is considered to be a fundamental part of rehabilitation; however, theories of behaviour change relevant to goal-setting practice have not been comprehensively reviewed. (i) To identify and discuss specific theories of behaviour change relevant to goal-setting practice in the rehabilitation setting. (ii) To identify 'candidate' theories that offer the most potential to inform clinical practice. The rehabilitation and self-management literature was systematically searched to identify review papers or empirical studies that proposed a specific theory of behaviour change relevant to setting and/or achieving goals in a clinical context. Data from included papers were extracted under the headings of: key constructs, clinical application and empirical support. Twenty-four papers were included in the review which proposed a total of five theories: (i) social cognitive theory, (ii) goal setting theory, (iii) health action process approach, (iv) proactive coping theory, and (v) the self-regulatory model of illness behaviour. The first three of these theories demonstrated most potential to inform clinical practice, on the basis of their capacity to inform interventions that resulted in improved patient outcomes. Social cognitive theory, goal setting theory and the health action process approach are theories of behaviour change that can inform clinicians in the process of setting and achieving goals in the rehabilitation setting. Overlapping constructs within these theories have been identified, and can be applied in clinical practice through the development and evaluation of a goal-setting practice framework.
Answering Aggregation Questions in Contingency Valuation of Rural Transit Benefits
DOT National Transportation Integrated Search
2001-08-01
While the qualitative benefits of transit are relatively well known, quantifying the benefits of transit is still a developing methodology. Quantifying benefits offers improved operational management and planning as well as better information for pol...
Information Theory for Information Science: Antecedents, Philosophy, and Applications
ERIC Educational Resources Information Center
Losee, Robert M.
2017-01-01
This paper provides an historical overview of the theoretical antecedents leading to information theory, specifically those useful for understanding and teaching information science and systems. Information may be discussed in a philosophical manner and at the same time be measureable. This notion of information can thus be the subject of…
On the mechanochemical theory of biological pattern formation with application to vasculogenesis.
Murray, James D
2003-02-01
We first describe the Murray-Oster mechanical theory of pattern formation, the biological basis of which is experimentally well documented. The model quantifies the interaction of cells and the extracellular matrix via the cell-generated forces. The model framework is described in quantitative detail. Vascular endothelial cells, when cultured on gelled basement membrane matrix, rapidly aggregate into clusters while deforming the matrix into a network of cord-like structures tessellating the planar culture. We apply the mechanical theory of pattern formation to this culture system and show that neither strain-biased anisotropic cell traction nor cell migration are necessary for pattern formation: isotropic, strain-stimulated cell traction is sufficient to form the observed patterns. Predictions from the model were confirmed experimentally.
Functional renormalization group and Kohn-Sham scheme in density functional theory
NASA Astrophysics Data System (ADS)
Liang, Haozhao; Niu, Yifei; Hatsuda, Tetsuo
2018-04-01
Deriving an accurate energy density functional is one of the central problems in condensed matter physics, nuclear physics, and quantum chemistry. We propose a novel method to deduce the energy density functional by combining the idea of the functional renormalization group and the Kohn-Sham scheme in density functional theory. The key idea is to solve the renormalization group flow for the effective action decomposed into the mean-field part and the correlation part. Also, we propose a simple practical method to quantify the uncertainty associated with the truncation of the correlation part. By taking the φ^4 theory in zero dimensions as a benchmark, we demonstrate that our method shows extremely fast convergence to the exact result even in the strongly coupled regime.
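The zero-dimensional φ^4 benchmark mentioned above is simple enough to write down directly: the "partition function" is an ordinary integral, so the exact result can be compared against a simple perturbative (Gaussian-plus-one-loop-like) estimate. The action, couplings and comparison below are a generic illustration, not the flow equations of the paper.

```python
import numpy as np
from scipy import integrate

def free_energy(m2, lam):
    """Exact W = -ln Z for the zero-dimensional action S = m2*x^2/2 + lam*x^4/24."""
    integrand = lambda x: np.exp(-(0.5 * m2 * x**2 + lam * x**4 / 24.0))
    z, _ = integrate.quad(integrand, -np.inf, np.inf)
    return -np.log(z)

def free_energy_perturbative(m2, lam):
    """Gaussian result plus the leading perturbative correction."""
    w0 = -0.5 * np.log(2.0 * np.pi / m2)
    return w0 + lam / (8.0 * m2**2)      # uses <x^4> = 3/m2^2 for the Gaussian weight

for lam in (0.1, 1.0, 10.0):
    exact = free_energy(1.0, lam)
    approx = free_energy_perturbative(1.0, lam)
    print(f"lambda = {lam:5.1f}: exact W = {exact:.4f}, perturbative W = {approx:.4f}")
```

The gap between the two columns widens as lambda grows, which is exactly the strong-coupling regime where a method with fast convergence to the exact result is valuable.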
NASA Astrophysics Data System (ADS)
Svensmark, Jens; Tolstikhin, Oleg I.; Madsen, Lars Bojer
2018-03-01
We present the theory of tunneling ionization of molecules with both electronic and nuclear motion treated quantum mechanically. The theory provides partial rates for ionization into the different final states of the molecular ion, including both bound vibrational and dissociative channels. The exact results obtained for a one-dimensional model of H2 and D2 are compared with two approximate approaches, the weak-field asymptotic theory and the Born-Oppenheimer approximation. The validity ranges and compatibility of the approaches are identified formally and illustrated by the calculations. The results quantify that at typical field strengths considered in strong-field physics, it is several orders of magnitude more likely to ionize into bound vibrational ionic channels than into the dissociative channel.
Uncertainty quantification and propagation in nuclear density functional theory
Schunck, N.; McDonnell, J. D.; Higdon, D.; ...
2015-12-23
Nuclear density functional theory (DFT) is one of the main theoretical tools used to study the properties of heavy and superheavy elements, or to describe the structure of nuclei far from stability. While on-going efforts seek to better root nuclear DFT in the theory of nuclear forces, energy functionals remain semi-phenomenological constructions that depend on a set of parameters adjusted to experimental data in finite nuclei. In this study, we review recent efforts to quantify the related uncertainties, and propagate them to model predictions. In particular, we cover the topics of parameter estimation for inverse problems, statistical analysis of model uncertainties and Bayesian inference methods. Illustrative examples are taken from the literature.
Econophysics: from Game Theory and Information Theory to Quantum Mechanics
NASA Astrophysics Data System (ADS)
Jimenez, Edward; Moya, Douglas
2005-03-01
Rationality is the universal invariant among human behavior, the physical laws of the universe, and ordered, complex biological systems. Econophysics is both the use of physical concepts in Finance and Economics, and the use of Information Economics in Physics. In particular, we will show that it is possible to obtain the principles of Quantum Mechanics using Information and Game Theory.
NASA Technical Reports Server (NTRS)
Coddington, Odele; Pilewskie, Peter; Schmidt, K. Sebastian; McBride, Patrick J.; Vukicevic, Tomislava
2013-01-01
This paper presents an approach using the GEneralized Nonlinear Retrieval Analysis (GENRA) tool and general inverse theory diagnostics, including the maximum likelihood solution and the Shannon information content, to investigate the performance of a new spectral technique for the retrieval of cloud optical properties from surface-based transmittance measurements. The cumulative retrieval information over broad ranges in cloud optical thickness (tau), droplet effective radius (r_e), and overhead sun angles is quantified under two conditions known to impact transmitted radiation: the variability in land surface albedo and atmospheric water vapor content. Our conclusions are: (1) the retrieved cloud properties are more sensitive to the natural variability in land surface albedo than to water vapor content; (2) the new spectral technique is more accurate (but still imprecise) than a standard approach, in particular for tau between 5 and 60 and r_e less than approximately 20 µm; and (3) the retrieved cloud properties are dependent on sun angle for clouds of tau from 5 to 10 and r_e less than 10 µm, with maximum sensitivity obtained for an overhead sun.
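For readers unfamiliar with the Shannon information content diagnostic referred to above, the sketch below computes it in the standard linear-Gaussian way, as half the base-2 log-ratio of the prior to posterior covariance determinants for a two-parameter (tau, r_e) retrieval. The Jacobian, covariances and noise level are placeholders chosen for illustration, not the GENRA configuration.

```python
import numpy as np

# Prior covariance of the state (tau, r_e); broad prior knowledge (assumed).
S_a = np.diag([30.0**2, 10.0**2])

# Linearized forward-model Jacobian K (2 spectral channels x 2 state variables)
# and measurement-noise covariance S_e; purely illustrative numbers.
K = np.array([[0.020, 0.004],
              [0.015, 0.012]])
S_e = np.diag([0.01**2, 0.01**2])

# Posterior (retrieval) covariance for a linear-Gaussian problem.
S_post = np.linalg.inv(np.linalg.inv(S_a) + K.T @ np.linalg.inv(S_e) @ K)

# Shannon information content in bits: H = 0.5 * log2(det(S_a) / det(S_post)).
H = 0.5 * np.log2(np.linalg.det(S_a) / np.linalg.det(S_post))
print(f"posterior std devs: {np.sqrt(np.diag(S_post)).round(3)}, "
      f"information content H = {H:.2f} bits")
```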
Learning and Information Approaches for Inference in Dynamic Data-Driven Geophysical Applications
NASA Astrophysics Data System (ADS)
Ravela, S.
2015-12-01
Many geophysical inference problems are characterized by non-linear processes, high-dimensional models and complex uncertainties. A dynamic coupling between models, estimation, and sampling is typically sought to efficiently characterize and reduce uncertainty. This process is however fraught with several difficulties. Among them, the key difficulties are the ability to deal with model errors, the efficacy of uncertainty quantification and data assimilation. In this presentation, we present three key ideas from learning and intelligent systems theory and apply them to two geophysical applications. The first idea is the use of Ensemble Learning to compensate for model error, the second is to develop tractable Information Theoretic Learning to deal with non-Gaussianity in inference, and the third is a Manifold Resampling technique for effective uncertainty quantification. We apply these methods, first, to the development of a cooperative autonomous observing system using sUAS for studying coherent structures. Second, we apply them to the problem of quantifying risk from hurricanes and storm surges in a changing climate. Results indicate that learning approaches can enable new effectiveness in cases where standard approaches to model reduction, uncertainty quantification and data assimilation fail.
A Study towards Building An Optimal Graph Theory Based Model For The Design of Tourism Website
NASA Astrophysics Data System (ADS)
Panigrahi, Goutam; Das, Anirban; Basu, Kajla
2010-10-01
An effective tourism website is key to attracting tourists from different parts of the world. Here we identify the factors that improve the effectiveness of a website by considering it as a graph, where the web pages, including the homepage, are the nodes and the hyperlinks are the edges between the nodes. In this model, the design constraints for building a tourism website are taken into consideration. Our objectives are to build a framework for an effective tourism website that provides an adequate level of information and service, and to enable users to reach the desired page with minimal loading time. An information hierarchy specifying the upper limit on the number of outgoing links per page is also proposed. Following the hierarchy, a web developer can prepare an effective tourism website. Loading time depends on page size and network traffic; we assume network traffic is uniform, so loading time is directly proportional to page size. The approach proceeds by quantifying the link structure of a tourism website, and we also propose a page size distribution pattern for a tourism website.
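A minimal sketch of the graph model described above: pages are nodes with sizes, hyperlinks are directed edges, an out-degree cap stands in for the information hierarchy, and the cost of reaching a page is the summed loading time (taken proportional to page size) along the cheapest click path. The page names, sizes and the out-degree limit are invented for illustration and are not from the paper.

```python
import heapq

# Pages with sizes in KB (loading time assumed proportional to size).
page_size = {"home": 120, "destinations": 200, "hotels": 180, "beach": 90, "booking": 60}
links = {                      # directed hyperlink structure (toy example)
    "home": ["destinations", "hotels", "booking"],
    "destinations": ["beach", "hotels"],
    "hotels": ["booking"],
    "beach": ["booking"],
    "booking": [],
}

MAX_OUT_LINKS = 3              # upper limit from the assumed information hierarchy
assert all(len(v) <= MAX_OUT_LINKS for v in links.values()), "hierarchy violated"

def min_loading_cost(start, target):
    """Dijkstra over pages; stepping to a page costs that page's size."""
    best = {start: page_size[start]}
    heap = [(page_size[start], start)]
    while heap:
        cost, page = heapq.heappop(heap)
        if page == target:
            return cost
        for nxt in links[page]:
            new_cost = cost + page_size[nxt]
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(heap, (new_cost, nxt))
    return float("inf")

print("cheapest path cost to 'booking':", min_loading_cost("home", "booking"), "KB")
```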
Interdisciplinary research on patient-provider communication: a cross-method comparison.
Chou, Wen-Ying Sylvia; Han, Paul; Pilsner, Alison; Coa, Kisha; Greenberg, Larrie; Blatt, Benjamin
2011-01-01
Patient-provider communication, a key aspect of healthcare delivery, has been assessed through multiple methods for purposes of research, education, and quality control. Common techniques include satisfaction ratings and quantitatively- and qualitatively-oriented direct observations. Identifying the strengths and weaknesses of different approaches is critically important in determining the appropriate assessment method for a specific research or practical goal. Analyzing ten videotaped simulated encounters between medical students and Standardized Patients (SPs), this study compared three existing assessment methods through the same data set. Methods included: (1) dichotomized SP ratings on students' communication skills; (2) Roter Interaction Analysis System (RIAS) analysis; and (3) inductive discourse analysis informed by sociolinguistic theories. The large dichotomous contrast between good and poor ratings in (1) was not evidenced in any of the other methods. Following a discussion of strengths and weaknesses of each approach, we pilot-tested a combined assessment done by coders blinded to results of (1)-(3). This type of integrative approach has the potential of adding a quantifiable dimension to qualitative, discourse-based observations. Subjecting the same data set to separate analytic methods provides an excellent opportunity for methodological comparisons with the goal of informing future assessment of clinical encounters.
Jolley, Daniel; Douglas, Karen M
2014-02-01
The current studies explored the social consequences of exposure to conspiracy theories. In Study 1, participants were exposed to a range of conspiracy theories concerning government involvement in significant events such as the death of Diana, Princess of Wales. Results revealed that exposure to information supporting conspiracy theories reduced participants' intentions to engage in politics, relative to participants who were given information refuting conspiracy theories. This effect was mediated by feelings of political powerlessness. In Study 2, participants were exposed to conspiracy theories concerning the issue of climate change. Results revealed that exposure to information supporting the conspiracy theories reduced participants' intentions to reduce their carbon footprint, relative to participants who were given refuting information, or those in a control condition. This effect was mediated by powerlessness with respect to climate change, uncertainty, and disillusionment. Exposure to climate change conspiracy theories also influenced political intentions, an effect mediated by political powerlessness. The current findings suggest that conspiracy theories may have potentially significant social consequences, and highlight the need for further research on the social psychology of conspiracism. © 2012 The British Psychological Society.
Theories of how the school environment impacts on student health: systematic review and synthesis.
Bonell, C P; Fletcher, A; Jamal, F; Wells, H; Harden, A; Murphy, S; Thomas, J
2013-11-01
Public-health interventions informed by theory can be more effective but complex interventions often use insufficiently complex theories. We systematically reviewed theories of how school environments influence health. We included 37 reports drawing on 24 theories. Narrative synthesis summarised and categorised theories. We then produced an integrated theory of school environment influences on student health. This integrated theory could inform complex interventions such as health promoting schools programmes. Using systematic reviews to develop theories of change might be useful for other types of 'complex' public-health interventions addressing risks at the individual and community levels. © 2013 Published by Elsevier Ltd.
Implications of Information Theory for Computational Modeling of Schizophrenia
Wibral, Michael; Phillips, William A.
2017-01-01
Information theory provides a formal framework within which information processing and its disorders can be described. However, information theory has rarely been applied to modeling aspects of the cognitive neuroscience of schizophrenia. The goal of this article is to highlight the benefits of an approach based on information theory, including its recent extensions, for understanding several disrupted neural goal functions as well as related cognitive and symptomatic phenomena in schizophrenia. We begin by demonstrating that foundational concepts from information theory—such as Shannon information, entropy, data compression, block coding, and strategies to increase the signal-to-noise ratio—can be used to provide novel understandings of cognitive impairments in schizophrenia and metrics to evaluate their integrity. We then describe more recent developments in information theory, including the concepts of infomax, coherent infomax, and coding with synergy, to demonstrate how these can be used to develop computational models of schizophrenia-related failures in the tuning of sensory neurons, gain control, perceptual organization, thought organization, selective attention, context processing, predictive coding, and cognitive control. Throughout, we demonstrate how disordered mechanisms may explain both perceptual/cognitive changes and symptom emergence in schizophrenia. Finally, we demonstrate that there is consistency between some information-theoretic concepts and recent discoveries in neurobiology, especially involving the existence of distinct sites for the accumulation of driving input and contextual information prior to their interaction. This convergence can be used to guide future theory, experiment, and treatment development. PMID:29601053
Jayanti, R K
2001-01-01
Consumer information-processing theory provides a useful framework for policy makers concerned with regulating information provided by managed care organizations. The assumptions that consumers are rational information processors and that providing more information is better are questioned in this paper. Consumer research demonstrates that when faced with an uncertain decision, consumers adopt simplifying strategies leading to sub-optimal choices. A discussion on how consumers process risk information and the effects of various informational formats on decision outcomes is provided. Categorization theory is used to propose guidelines with regard to providing effective information to consumers choosing among competing managed care plans. Public policy implications borne out of consumer information-processing theory conclude the article.
The evolving Planck mass in classically scale-invariant theories
NASA Astrophysics Data System (ADS)
Kannike, K.; Raidal, M.; Spethmann, C.; Veermäe, H.
2017-04-01
We consider classically scale-invariant theories with non-minimally coupled scalar fields, where the Planck mass and the hierarchy of physical scales are dynamically generated. The classical theories possess a fixed point, where scale invariance is spontaneously broken. In these theories, however, the Planck mass becomes unstable in the presence of explicit sources of scale invariance breaking, such as non-relativistic matter and cosmological constant terms. We quantify the constraints on such classical models from Big Bang Nucleosynthesis that lead to an upper bound on the non-minimal coupling and require trans-Planckian field values. We show that quantum corrections to the scalar potential can stabilise the fixed point close to the minimum of the Coleman-Weinberg potential. The time-averaged motion of the evolving fixed point is strongly suppressed, thus the limits on the evolving gravitational constant from Big Bang Nucleosynthesis and other measurements do not presently constrain this class of theories. Field oscillations around the fixed point, if not damped, contribute to the dark matter density of the Universe.
Snapshots of Informed Learning: LIS and Beyond
ERIC Educational Resources Information Center
Hughes, Hilary; Bruce, Christine
2013-01-01
Responding to the need for innovative LIS curriculum and pedagogy, grounded in both information and learning theory, this paper introduces the theory and practice of "informed learning" [3]. After explaining how informed learning originated within the LIS discipline we outline the principles and characteristics of informed learning. Then…
Enhancing quantitative approaches for assessing community resilience
Chuang, W. C.; Garmestani, A.S.; Eason, T. N.; Spanbauer, T. L.; Fried-Peterson, H. B.; Roberts, C.P.; Sundstrom, Shana M.; Burnett, J.L.; Angeler, David G.; Chaffin, Brian C.; Gunderson, L.; Twidwell, Dirac; Allen, Craig R.
2018-01-01
Scholars from many different intellectual disciplines have attempted to measure, estimate, or quantify resilience. However, there is growing concern that lack of clarity on the operationalization of the concept will limit its application. In this paper, we discuss the theory, research development and quantitative approaches in ecological and community resilience. Upon noting the lack of methods that quantify the complexities of the linked human and natural aspects of community resilience, we identify several promising approaches within the ecological resilience tradition that may be useful in filling these gaps. Further, we discuss the challenges for consolidating these approaches into a more integrated perspective for managing social-ecological systems.
Sharing resources, coordinating response: deploying and operating incident management systems
DOT National Transportation Integrated Search
1998-10-01
This report describes and, where possible, quantifies the value of information and information services for transportation agencies. It evaluates the various means of accessing information and looks at the important role of the information profession...
Theory of plasma contractors for electrodynamic tethered satellite systems
NASA Technical Reports Server (NTRS)
Parks, D. E.; Katz, I.
1986-01-01
Recent data from ground and space experiments indicate that plasma releases from an object dramatically reduce the sheath impedance between the object and the ambient plasma surrounding it. Available data are in qualitative accord with the theory developed to quantify the flow of current in the sheath. Electron transport in the theory is based on a fluid model of a collisionless plasma with an effective collision frequency comparable to frequencies of plasma oscillations. The theory leads to low effective impedances varying inversely with the square root of the injected plasma density. Supporting such a low impedance mode of operation using an argon plasma source, for example, requires that only one argon ion be injected for every thirty electrons extracted from the ambient plasma. The required plasma flow rates are quite low; to extract one ampere of electron current requires a mass flow rate of about one gram of argon per day.
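As a back-of-the-envelope check on the quoted figure (our own arithmetic, not taken from the report), injecting one argon ion for every thirty extracted electrons at a current of 1 A implies an ion rate and mass flow of roughly

```latex
\dot{N}_{\mathrm{Ar^{+}}} = \frac{I}{30\,e}
  = \frac{1\ \mathrm{A}}{30 \times 1.602\times10^{-19}\ \mathrm{C}}
  \approx 2.1\times10^{17}\ \mathrm{s^{-1}},
\qquad
\dot{m} = \dot{N}_{\mathrm{Ar^{+}}}\, m_{\mathrm{Ar}}
  \approx 2.1\times10^{17}\ \mathrm{s^{-1}} \times 6.6\times10^{-26}\ \mathrm{kg}
  \approx 1.4\times10^{-8}\ \mathrm{kg\,s^{-1}}
  \approx 1.2\ \mathrm{g\,day^{-1}},
```

which is consistent with the stated one gram of argon per day.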
ERIC Educational Resources Information Center
Kerr, Paulette A.
2010-01-01
This research was conducted to investigate the relationships between conceptions and practice of information literacy in academic libraries. To create a structure for the investigation, the research adopted the framework of Argyris and Schon (1974) in which professional practice is examined via theories of action, namely espoused theories and…
An information theory framework for dynamic functional domain connectivity.
Vergara, Victor M; Miller, Robyn; Calhoun, Vince
2017-06-01
Dynamic functional network connectivity (dFNC) analyzes time evolution of coherent activity in the brain. In this technique, dynamic changes are considered for the whole brain. This paper proposes an information theory framework to measure information flowing among subsets of functional networks called functional domains. Our method aims at estimating bits of information contained and shared among domains. The succession of dynamic functional states is estimated at the domain level. Information quantity is based on the probabilities of observing each dynamic state. Mutual information measurement is then obtained from probabilities across domains. Thus, we named this value the cross domain mutual information (CDMI). Strong CDMIs were observed in relation to the subcortical domain. Domains related to sensorial input, motor control and cerebellum form another CDMI cluster. Information flow among other domains was seldom found. Other methods of dynamic connectivity focus on whole brain dFNC matrices. In the current framework, information theory is applied to states estimated from pairs of multi-network functional domains. In this context, we apply information theory to measure information flow across functional domains. Identified CDMI clusters point to known information pathways in the basal ganglia and also among areas of sensorial input, patterns found in static functional connectivity. In contrast, CDMI across brain areas of higher level cognitive processing follows a different pattern that indicates scarce information sharing. These findings show that employing information theory to formally measure information flow through brain domains reveals additional features of functional connectivity. Copyright © 2017 Elsevier B.V. All rights reserved.
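A minimal sketch of the core computation the abstract describes, estimating mutual information (in bits) between two aligned sequences of discrete dynamic-state labels; the function name and toy state sequences are illustrative, not the authors' code or data.

```python
import numpy as np
from collections import Counter

def mutual_information(states_a, states_b):
    """Mutual information (bits) between two aligned sequences of discrete
    state labels, estimated from empirical joint frequencies."""
    n = len(states_a)
    p_a = Counter(states_a)
    p_b = Counter(states_b)
    p_ab = Counter(zip(states_a, states_b))
    mi = 0.0
    for (a, b), c_ab in p_ab.items():
        p_joint = c_ab / n
        mi += p_joint * np.log2(p_joint / ((p_a[a] / n) * (p_b[b] / n)))
    return mi

# Toy example: dynamic-state labels for two functional domains over time.
subcortical = [0, 0, 1, 1, 2, 2, 1, 0, 0, 1]
sensorimotor = [0, 0, 1, 1, 2, 1, 1, 0, 1, 1]
print(f"CDMI estimate: {mutual_information(subcortical, sensorimotor):.3f} bits")
```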
ERIC Educational Resources Information Center
Hommers, Wilfried; Lee, Wha-Yong
2010-01-01
In order to unify two major theories of moral judgment, a novel task is employed which combines elements of Kohlberg's stage theory and of the theory of information integration. In contrast to the format of Kohlberg's moral judgment interview, a nonverbal and quantitative response which makes low demands on verbal facility was used. Moral…
Thermodynamical transcription of density functional theory with minimum Fisher information
NASA Astrophysics Data System (ADS)
Nagy, Á.
2018-03-01
Ghosh, Berkowitz and Parr designed a thermodynamical transcription of the ground-state density functional theory and introduced a local temperature that varies from point to point. The theory, however, is not unique because the kinetic energy density is not uniquely defined. Here we derive the expression of the phase-space Fisher information in the GBP theory taking the inverse temperature as the Fisher parameter. It is proved that this Fisher information takes its minimum for the case of constant temperature. This result is consistent with the recently proven theorem that the phase-space Shannon information entropy attains its maximum at constant temperature.
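For orientation, the textbook parametric Fisher information that this kind of analysis builds on is, for a distribution p(x; θ) and parameter θ,

```latex
I(\theta) \;=\; \int p(x;\theta)\left[\frac{\partial \ln p(x;\theta)}{\partial \theta}\right]^{2} dx ,
```

with the local inverse temperature playing the role of θ in the phase-space version discussed above; the paper's exact phase-space expression is not reproduced here.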
Clinical outcome measurement: Models, theory, psychometrics and practice.
McClimans, Leah; Browne, John; Cano, Stefan
In the last decade much has been made of the role that models play in the epistemology of measurement. Specifically, philosophers have been interested in the role of models in producing measurement outcomes. This discussion has proceeded largely within the context of the physical sciences, with notable exceptions considering measurement in economics. However, models also play a central role in the methods used to develop instruments that purport to quantify psychological phenomena. These methods fall under the umbrella term 'psychometrics'. In this paper, we focus on Clinical Outcome Assessments (COAs) and discuss two measurement theories and their associated models: Classical Test Theory (CTT) and Rasch Measurement Theory. We argue that models have an important role to play in coordinating theoretical terms with empirical content, but to do so they must serve: 1) as a representation of the measurement interaction; and 2) in conjunction with a theory of the attribute in which we are interested. We conclude that Rasch Measurement Theory is a more promising approach than CTT in these regards despite the latter's popularity with health outcomes researchers. Copyright © 2017. Published by Elsevier Ltd.
Establishing a research agenda for scientific and technical information (STI) - Focus on the user
NASA Technical Reports Server (NTRS)
Pinelli, Thomas E.
1992-01-01
This report addresses the relationship between library science and information science theory and practice, between the development of conceptual understanding, and the practical competence of information professionals. Consideration is given to the concept of research, linking theory with practice, and the reality of theory based practice. Attention is given to the need for research and research priorities, focus on the user and information-seeking behavior, and a user-oriented research agenda for STI.
The Philosophy of Information as an Underlying and Unifying Theory of Information Science
ERIC Educational Resources Information Center
Tomic, Taeda
2010-01-01
Introduction: Philosophical analyses of the theoretical principles underlying these sub-domains reveal the philosophy of information as an underlying meta-theory of information science. Method: Conceptual research on the knowledge sub-domains in information science and philosophy, and analysis of their mutual connection. Analysis: Similarities between…
Guan, Yanpeng; Wang, Enzhi; Liu, Xiaoli; Wang, Sijing; Luan, Hebing
2017-08-03
We have attempted a multiscale, quantified characterization method for the contacts in three-dimensional granular material made of spherical particles, particularly in cemented granular material. Particle contact is defined as a type of surface contact with voids in its surroundings, rather than a point contact. Macro contact is a particle contact set satisfying the restrictive condition of a two-dimensional manifold with a boundary. On the basis of graph theory, two dual geometrical systems are abstracted from the granular pack. The face and the face set, which satisfies the two-dimensional manifold with a boundary in the solid cell system, are extracted to characterize the particle contact and the macro contact, respectively. This characterization method is utilized to improve the post-processing in DEM (Discrete Element Method) from a micro perspective to describe the macro effect of the cemented granular material made of spherical particles. Since the crack has the same shape as its corresponding contact, this method is adopted to characterize the crack and realize its visualization. The complete failure route of the sample can be determined by a graph theory algorithm. The contact force is assigned as the weight of the face characterizing the particle contact. Since force vectors can be added, the macro contact force can be obtained by summing the weights of its corresponding faces.
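As a simplified illustration of the graph-theoretic grouping idea only (particles as nodes, pairwise contacts as weighted edges, macro contacts approximated here by connected components; the paper's dual cell systems and face sets are not reproduced), a post-processing step might look like the following, with hypothetical contact data.

```python
import networkx as nx

# Hypothetical post-processed DEM output: pairs of particle IDs in contact,
# each contact carrying a contact-force magnitude.
contacts = [(1, 2, 5.0), (2, 3, 4.2), (3, 1, 3.9), (7, 8, 1.1)]

G = nx.Graph()
G.add_weighted_edges_from(contacts, weight="force")

# Each connected component of the contact graph is treated here as one
# "macro contact"; its force is the sum of the member contact forces.
for component in nx.connected_components(G):
    sub = G.subgraph(component)
    total_force = sum(f for _, _, f in sub.edges(data="force"))
    print(sorted(component), "total contact force:", total_force)
```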
Evidence-based Sensor Tasking for Space Domain Awareness
NASA Astrophysics Data System (ADS)
Jaunzemis, A.; Holzinger, M.; Jah, M.
2016-09-01
Space Domain Awareness (SDA) is the actionable knowledge required to predict, avoid, deter, operate through, recover from, and/or attribute cause to the loss and/or degradation of space capabilities and services. A main purpose for SDA is to provide decision-making processes with a quantifiable and timely body of evidence of behavior(s) attributable to specific space threats and/or hazards. To fulfill the promise of SDA, it is necessary for decision makers and analysts to pose specific hypotheses that may be supported or refuted by evidence, some of which may only be collected using sensor networks. While Bayesian inference may support some of these decision-making needs, it does not adequately capture ambiguity in supporting evidence; i.e., it struggles to rigorously quantify 'known unknowns' for decision makers. Over the past 40 years, evidential reasoning approaches such as Dempster-Shafer theory have been developed to address problems with ambiguous bodies of evidence. This paper applies mathematical theories of evidence using Dempster-Shafer expert systems to address the following critical issues: 1) How decision makers can pose critical decision criteria as rigorous, testable hypotheses, 2) How to interrogate these hypotheses to reduce ambiguity, and 3) How to task a network of sensors to gather evidence for multiple competing hypotheses. This theory is tested using a simulated sensor tasking scenario balancing search versus track responsibilities.
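A minimal sketch of the evidence-combination step that Dempster-Shafer expert systems rely on (illustrative only; the paper's sensor-tasking machinery is far richer): Dempster's rule combining two basic probability assignments over the same frame of discernment. The hypothesis labels and masses below are invented for the example.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset focal elements
    to masses) with Dempster's rule, normalizing out the conflict."""
    combined = {}
    conflict = 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

# Toy frame: an object is either "A" (active payload) or "D" (debris).
m_sensor1 = {frozenset("A"): 0.6, frozenset("AD"): 0.4}   # partly ambiguous
m_sensor2 = {frozenset("D"): 0.3, frozenset("AD"): 0.7}
fused, k = dempster_combine(m_sensor1, m_sensor2)
print(fused, "conflict:", k)
```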
A stochastic approach for quantifying immigrant integration: the Spanish test case
NASA Astrophysics Data System (ADS)
Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia
2014-10-01
We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999-2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following a diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build.
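A hedged sketch of how the diffusive-versus-ballistic distinction could be checked numerically (the data and function names are placeholders, not the study's code): fit the scaling exponent of a quantifier's spread against immigrant density on a log-log scale; an exponent near 0.5 suggests diffusive behavior, one near 1.0 ballistic behavior.

```python
import numpy as np

def scaling_exponent(density, spread):
    """Least-squares slope of log(spread) versus log(density)."""
    slope, _ = np.polyfit(np.log(density), np.log(spread), 1)
    return slope

# Synthetic illustration: a diffusive quantifier grows like density**0.5.
density = np.linspace(0.01, 0.2, 50)
spread = 2.0 * density ** 0.5 * (1 + 0.02 * np.random.randn(50))
print(f"estimated exponent: {scaling_exponent(density, spread):.2f}")
```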
Social Contagion, Adolescent Sexual Behavior, and Pregnancy: A Nonlinear Dynamic EMOSA Model.
ERIC Educational Resources Information Center
Rodgers, Joseph Lee; Rowe, David C.; Buster, Maury
1998-01-01
Expands an existing nonlinear dynamic epidemic model of onset of social activities (EMOSA), motivated by social contagion theory, to quantify the likelihood of pregnancy for adolescent girls of different sexuality statuses. Compares five sexuality/pregnancy models to explain variance in national prevalence curves. Finds that adolescent girls have…
Development of a 15-Item Scale to Measure Parental Perceptions of Their Neighborhood
ERIC Educational Resources Information Center
Quigg, Robin; Gray, Andrew; Reeder, Anthony Ivor; Holt, Alec; Waters, Debra L.
2015-01-01
Socioecological theory suggests that there are a range of influences that affect the physical activity levels of children, including parents' perceptions of the neighborhood. A questionnaire instrument to quantify parental neighborhood perceptions was developed for the Location of Children's Activity in Their Environment study as a potential…
Investigating Flow Experience and Scientific Practices during a Mobile Serious Educational Game
ERIC Educational Resources Information Center
Bressler, Denise M.; Bodzin, Alec M.
2016-01-01
Mobile serious educational games (SEGs) show promise for promoting scientific practices and high engagement. Researchers have quantified this engagement according to flow theory. This study investigated whether a mobile SEG promotes flow experience and scientific practices with eighth-grade urban students. Students playing the game (n = 59) were…
Must We Employ Behavioristic Theory to Have Students Evaluate Us as Teachers?
ERIC Educational Resources Information Center
Helwig, Carl
The recent resurgence of judging teacher effectiveness is part of a revival of behavioristic attempts to find, empirically, universal criteria for identifying the "good teacher" or "good teaching." Defenders of behaviorist psychology argue that any "educational objectives" which cannot be quantified are not "real educational objectives."…
Vocalization Subsystem Responses to a Temporarily Induced Unilateral Vocal Fold Paralysis
ERIC Educational Resources Information Center
Croake, Daniel J.; Andreatta, Richard D.; Stemple, Joseph C.
2018-01-01
Purpose: The purpose of this study is to quantify the interactions of the 3 vocalization subsystems of respiration, phonation, and resonance before, during, and after a perturbation to the larynx (temporarily induced unilateral vocal fold paralysis) in 10 vocally healthy participants. Using dynamic systems theory as a guide, we hypothesized that…
The Ecological Research Program (ERP) of the EPA Office of Research and Development has the vision of a comprehensive theory and practice for characterizing, quantifying, and valuing ecosystem services, and their relationship to human well-being for environmental decision making....
Birken, Sarah A; DiMartino, Lisa D; Kirk, Meredith A; Lee, Shoou-Yih D; McClelland, Mark; Albert, Nancy M
2016-01-04
The theory of middle managers' role in implementing healthcare innovations hypothesized that middle managers influence implementation effectiveness by fulfilling the following four roles: diffusing information, synthesizing information, mediating between strategy and day-to-day activities, and selling innovation implementation. The theory also suggested several activities in which middle managers might engage to fulfill the four roles. The extent to which the theory aligns with middle managers' experience in practice is unclear. We surveyed middle managers (n = 63) who attended a nursing innovation summit to (1) assess alignment between the theory and middle managers' experience in practice and (2) elaborate on the theory with examples from middle managers' experience overseeing innovation implementation in practice. Middle managers rated all four of the theory's hypothesized roles as "extremely important" but ranked diffusing and synthesizing information as the most important and selling innovation implementation as the least important. They reported engaging in several activities that were consistent with the theory's hypothesized roles and activities, such as diffusing information via meetings and training. They also reported engaging in activities not described in the theory, such as appraising employee performance. Middle managers' experience aligned well with the theory and expanded definitions of the roles and activities that it hypothesized. Future studies should assess the relationship between hypothesized roles and the effectiveness with which innovations are implemented in practice. If evidence supports the theory, it should be leveraged to promote the fulfillment of the hypothesized roles among middle managers; doing so may promote innovation implementation.
Quantum information processing by a continuous Maxwell demon
NASA Astrophysics Data System (ADS)
Stevens, Josey; Deffner, Sebastian
Quantum computing is believed to be fundamentally superior to classical computing; however, quantifying the specific thermodynamic advantage has been elusive. Experimentally motivated, we generalize previous minimal models of discrete demons to continuous state space. Analyzing our model allows one to quantify the thermodynamic resources necessary to process quantum information. By further invoking the semi-classical limit we compare the quantum demon with its classical analogue. Finally, this model also serves as a starting point to study open quantum systems.
An Integrative Behavioral Model of Information Security Policy Compliance
Kim, Sang Hoon; Yang, Kyung Hoon; Park, Sunyoung
2014-01-01
The authors found the behavioral factors that influence the organization members' compliance with the information security policy in organizations on the basis of neutralization theory, the theory of planned behavior, and protection motivation theory. According to the theory of planned behavior, members' attitudes towards compliance, as well as normative belief and self-efficacy, were believed to determine the intention to comply with the information security policy. Neutralization theory, a prominent theory in criminology, could be expected to provide an explanation for information system security policy violations. Based on protection motivation theory, it was inferred that the expected efficacy could have an impact on compliance intentions. By the above logical reasoning, the integrative behavioral model and eight hypotheses could be derived. Data were collected by conducting a survey; 194 out of 207 questionnaires were available. The causal model was tested using PLS. The reliability, validity, and model fit were found to be statistically significant. The results of the hypotheses tests showed that seven of the eight hypotheses were acceptable. The theoretical implications of this study are as follows: (1) the study is expected to serve as a baseline for future research about organization members' compliance with the information security policy, (2) the study attempted an interdisciplinary approach by combining psychology and information system security research, and (3) the study suggested concrete operational definitions of influencing factors for information security policy compliance through a comprehensive theoretical review. Also, the study has some practical implications. First, it can provide the guideline to support the successful execution of the strategic establishment for the implementation of information system security policies in organizations. Second, it shows that education and training programs that suppress members' neutralization intention to violate the information security policy should be emphasized. PMID:24971373
ERIC Educational Resources Information Center
Pettersson, Rune
2014-01-01
Information design has practical and theoretical components. As an academic discipline we may view information design as a combined discipline, a practical theory, or as a theoretical practice. So far information design has incorporated facts, influences, methods, practices, principles, processes, strategies, and tools from a large number of…
Information Uncertainty to Compare Qualitative Reasoning Security Risk Assessment Results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavez, Gregory M; Key, Brian P; Zerkle, David K
2009-01-01
The security risk associated with malevolent acts such as those of terrorism is often void of the historical data required for a traditional PRA. Most information available to conduct security risk assessments for these malevolent acts is obtained from subject matter experts as subjective judgements. Qualitative reasoning approaches such as approximate reasoning and evidential reasoning are useful for modeling the predicted risk from information provided by subject matter experts. Absent from these approaches is a consistent means to compare the security risk assessment results. Associated with each predicted risk reasoning result is a quantifiable amount of information uncertainty which can be measured and used to compare the results. This paper explores using entropy measures to quantify the information uncertainty associated with conflict and non-specificity in the predicted reasoning results. The measured quantities of conflict and non-specificity can ultimately be used to compare qualitative reasoning results, which is important in triage studies and ultimately resource allocation. Straightforward extensions of previous entropy measures are presented here to quantify the non-specificity and conflict associated with security risk assessment results obtained from qualitative reasoning models.
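For concreteness, a generic sketch of two uncertainty measures commonly used for a basic probability assignment in the Dempster-Shafer literature: generalized Hartley non-specificity and a Shannon-style conflict (dissonance) measure. These are standard choices, not necessarily the exact extensions developed in the report, and the risk-level masses below are invented.

```python
import numpy as np

def non_specificity(m):
    """Generalized Hartley measure: sum of m(A) * log2(|A|)."""
    return sum(w * np.log2(len(A)) for A, w in m.items())

def dissonance(m):
    """Yager-style conflict measure: -sum of m(A) * log2(Pl(A)),
    where Pl(A) is the plausibility of focal element A."""
    def plausibility(A):
        return sum(w for B, w in m.items() if A & B)
    return -sum(w * np.log2(plausibility(A)) for A, w in m.items())

# Toy assessment over risk levels {low, medium, high} from expert judgement.
m = {frozenset({"low"}): 0.2,
     frozenset({"medium", "high"}): 0.5,
     frozenset({"low", "medium", "high"}): 0.3}
print("non-specificity:", round(non_specificity(m), 3), "bits")
print("conflict (dissonance):", round(dissonance(m), 3), "bits")
```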
Information Theoretic Characterization of Physical Theories with Projective State Space
NASA Astrophysics Data System (ADS)
Zaopo, Marco
2015-08-01
Probabilistic theories are a natural framework to investigate the foundations of quantum theory and possible alternative or deeper theories. In a generic probabilistic theory, states of a physical system are represented as vectors of outcome probabilities and state spaces are convex cones. In this picture the physics of a given theory is related to the geometric shape of the cone of states. In quantum theory, for instance, the shape of the cone of states corresponds to a projective space over complex numbers. In this paper we investigate geometric constraints on the state space of a generic theory imposed by the following information theoretic requirements: every non-completely-mixed state of a system is perfectly distinguishable from some other state in a single-shot measurement; information capacity of physical systems is conserved under making mixtures of states. These assumptions guarantee that a generic physical system satisfies a natural principle asserting that the more a state of the system is mixed, the less information can be stored in the system using that state as a logical value. We show that all theories satisfying the above assumptions are such that the shape of their cones of states is that of a projective space over a generic field of numbers. Remarkably, these theories constitute generalizations of quantum theory where the superposition principle holds with coefficients pertaining to a generic field of numbers in place of complex numbers. If the field of numbers is trivial and contains only one element, we obtain classical theory. This result shows that the superposition principle is quite common among probabilistic theories, while its absence points to either classical theory or an implausible theory.
Displacive transformation of virus protein crystal
NASA Astrophysics Data System (ADS)
Celotto, S.; Pond, R. C.
2003-10-01
A crystalline protein undergoes a displacive transformation in the T-even bacteriophage. In the present work, the transformation mechanism is modelled in terms of interfacial dislocations whose motion gives rise to the observed deformation. The topological properties (Burgers vector, b, and 'overlap' step height, h) of the dislocations involved are defined rigorously and a recent theory is used that quantifies the diffusional flux arising due to their movement. The circumstance under which passage of transformation dislocations is diffusionless is identified. Thus, dislocation modelling is used successfully to describe a diffusionless displacive transformation in a process where the phenomenological theory of martensite crystallography cannot be applied.
A nonlinear dynamical system for combustion instability in a pulse model combustor
NASA Astrophysics Data System (ADS)
Takagi, Kazushi; Gotoda, Hiroshi
2016-11-01
We theoretically and numerically study the bifurcation phenomena of a nonlinear dynamical system describing combustion instability in a pulse model combustor, on the basis of dynamical system theory and complex network theory. The dynamical behavior of pressure fluctuations undergoes a significant transition from steady state to deterministic chaos via the period-doubling cascade known as the Feigenbaum scenario as the characteristic flow time decreases. The recurrence plot and recurrence network analyses adopted in this study can quantify significant changes in the dynamic behavior of combustion instability that cannot be captured in the bifurcation diagram.
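A minimal sketch of the recurrence-plot construction referred to above (the generic definition, not the authors' code): threshold pairwise distances between delay-embedded state vectors and report the recurrence rate. The embedding parameters, threshold, and surrogate signal are illustrative.

```python
import numpy as np

def recurrence_matrix(x, dim=3, delay=1, eps=0.1):
    """Binary recurrence matrix of a scalar time series after time-delay
    embedding: R[i, j] = 1 if embedded states i and j are closer than eps."""
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (dists < eps).astype(int)

# Toy pressure-fluctuation surrogate: a noisy periodic signal.
t = np.linspace(0, 20 * np.pi, 600)
p = np.sin(t) + 0.05 * np.random.randn(t.size)
R = recurrence_matrix(p, eps=0.3)
print("recurrence rate:", R.mean())
```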
A non-asymptotic homogenization theory for periodic electromagnetic structures
Tsukerman, Igor; Markel, Vadim A.
2014-01-01
Homogenization of electromagnetic periodic composites is treated as a two-scale problem and solved by approximating the fields on both scales with eigenmodes that satisfy Maxwell's equations and boundary conditions as accurately as possible. Built into this homogenization methodology is an error indicator whose value characterizes the accuracy of homogenization. The proposed theory allows one to define not only bulk, but also position-dependent material parameters (e.g. in proximity to a physical boundary) and to quantify the trade-off between the accuracy of homogenization and its range of applicability to various illumination conditions. PMID:25104912
Expanding the universe of categorical syllogisms: a challenge for reasoning researchers.
Roberts, Maxwell J
2005-11-01
Syllogistic reasoning, in which people identify conclusions from quantified premise pairs, remains a benchmark task whose patterns of data must be accounted for by general theories of deductive reasoning. However, psychologists have confined themselves to administering only the 64 premise pairs historically identified by Aristotle. By utilizing all combinations of negations, the present article identifies an expanded set of 576 premise pairs and gives the valid conclusions that they support. Many of these have interesting properties, and the identification of predictions and their verification will be an important next step for all proponents of such theories.
Hudson, Nicholas J; Naval-Sánchez, Marina; Porto-Neto, Laercio; Pérez-Enciso, Miguel; Reverter, Antonio
2018-06-05
Asian and European wild boars were independently domesticated ca. 10,000 years ago. Since the 17th century, Chinese breeds have been imported to Europe to improve the genetics of European animals by introgression of favourable alleles, resulting in a complex mosaic of haplotypes. To interrogate the structure of these haplotypes further, we have run a new haplotype segregation analysis based on information theory, namely compression efficiency (CE). We applied the approach to sequence data from individuals from each phylogeographic region (n = 23 from Asia and Europe) including a number of major pig breeds. Our genome-wide CE is able to discriminate the breeds in a manner reflecting phylogeography. Furthermore, 24,956 non-overlapping sliding windows (each comprising 1,000 consecutive SNPs) were quantified for extent of haplotype sharing within and between Asia and Europe. The genome-wide distribution of extent of haplotype sharing was quite different between groups. Unlike that of the European pigs, the Asian pigs' haplotype sharing approximates a normal distribution. In line with this, we found the European breeds possessed a number of genomic windows of dramatically higher haplotype sharing than the Asian breeds. Our CE analysis of sliding windows captures some of the genomic regions reported to contain signatures of selection in domestic pigs. Prominent among these regions, we highlight the role of a gene encoding the mitochondrial enzyme LACTB, which has been associated with obesity, and the gene encoding MYOG, a fundamental transcriptional regulator of myogenesis. The origin of these regions likely reflects either a population bottleneck in European animals, or selective targets on commercial phenotypes reducing allelic diversity in particular genes and/or regulatory regions.
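As a rough illustration of the compression-efficiency idea (the paper's exact CE definition and window handling are not reproduced; the 0/1-coded haplotypes below are synthetic), a general-purpose compressor shrinks jointly compressed shared haplotypes more than unrelated ones, because shared sequence adds little new information.

```python
import random
import zlib

random.seed(1)

def compression_efficiency(s: bytes) -> float:
    """Fraction of the input removed by compression (0 = incompressible)."""
    return 1.0 - len(zlib.compress(s, 9)) / len(s)

# Hypothetical 0/1-coded SNP haplotypes within one 1,000-SNP window.
hap_a = bytes(random.choice(b"01") for _ in range(1000))
hap_b = hap_a                                              # fully shared haplotype
hap_c = bytes(random.choice(b"01") for _ in range(1000))   # unrelated haplotype

print("A alone:      ", round(compression_efficiency(hap_a), 3))
print("A + B jointly:", round(compression_efficiency(hap_a + hap_b), 3))  # higher: B adds no new information
print("A + C jointly:", round(compression_efficiency(hap_a + hap_c), 3))  # lower: C adds new information
```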
NASA Astrophysics Data System (ADS)
Schwabe, O.; Shehab, E.; Erkoyuncu, J.
2015-08-01
The lack of defensible methods for quantifying cost estimate uncertainty over the whole product life cycle of aerospace innovations such as propulsion systems or airframes poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates based on a literature review, an evaluation of publicly funded projects such as those in the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, ESA, and various commercial companies. The metrics are categorized based on their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-art) and those suggested for future exploration (state-of-future). Insights gained were that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of available relevant information, as defined by technical and cost readiness level, and the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and number of whole product life cycle phases can hereby be considered as defining multi-dimensional probability fields admitting various uncertainty quantification metric families with identifiable thresholds for transitioning between them. The key research gaps identified were the lack of theoretically grounded guidance for selecting uncertainty quantification metrics and the lack of practical alternatives to metrics based on the Central Limit Theorem. An innovative uncertainty quantification framework consisting of a set-theory-based typology, a data library, a classification system, and a corresponding input-output model is put forward to address this research gap as the basis for future work in this field.
From brain topography to brain topology: relevance of graph theory to functional neuroscience.
Minati, Ludovico; Varotto, Giulia; D'Incerti, Ludovico; Panzica, Ferruccio; Chan, Dennis
2013-07-10
Although several brain regions show significant specialization, higher functions such as cross-modal information integration, abstract reasoning and conscious awareness are viewed as emerging from interactions across distributed functional networks. Analytical approaches capable of capturing the properties of such networks can therefore enhance our ability to make inferences from functional MRI, electroencephalography and magnetoencephalography data. Graph theory is a branch of mathematics that focuses on the formal modelling of networks and offers a wide range of theoretical tools to quantify specific features of network architecture (topology) that can provide information complementing the anatomical localization of areas responding to given stimuli or tasks (topography). Explicit modelling of the architecture of axonal connections and interactions among areas can furthermore reveal peculiar topological properties that are conserved across diverse biological networks, and highly sensitive to disease states. The field is evolving rapidly, partly fuelled by computational developments that enable the study of connectivity at fine anatomical detail and the simultaneous interactions among multiple regions. Recent publications in this area have shown that graph-based modelling can enhance our ability to draw causal inferences from functional MRI experiments, and support the early detection of disconnection and the modelling of pathology spread in neurodegenerative disease, particularly Alzheimer's disease. Furthermore, neurophysiological studies have shown that network topology has a profound link to epileptogenesis and that connectivity indices derived from graph models aid in modelling the onset and spread of seizures. Graph-based analyses may therefore significantly help understand the bases of a range of neurological conditions. This review is designed to provide an overview of graph-based analyses of brain connectivity and their relevance to disease aimed principally at general neuroscientists and clinicians.
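A small, hedged sketch of the kind of topological summary the review discusses (generic graph metrics on a synthetic, thresholded connectivity matrix; not tied to any real dataset or specific study).

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Hypothetical functional connectivity matrix for 20 regions (symmetric).
corr = np.abs(rng.normal(0.3, 0.2, size=(20, 20)))
corr = (corr + corr.T) / 2
np.fill_diagonal(corr, 0)

# Threshold into a binary adjacency matrix and build the graph.
G = nx.from_numpy_array((corr > 0.4).astype(int))

print("average clustering coefficient:", round(nx.average_clustering(G), 3))
if nx.is_connected(G):
    print("characteristic path length:",
          round(nx.average_shortest_path_length(G), 3))
print("degree distribution:", sorted(d for _, d in G.degree()))
```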
Swannell, Ellen R; Brown, Christopher A; Jones, Anthony K P; Brown, Richard J
2016-03-01
Theory suggests that as activation of pain concepts in memory increases, so too does subsequent pain perception. Previously, researchers have found that activating pain concepts in memory increases pain perception of subsequent painful stimuli, relative to neutral information. However, they have not attempted to quantify the nature of the association between information studied and ensuing pain perception. We subliminally presented words that had either a low or high degree of association to the word 'pain,' although this was only partially successful and some words were consciously perceived. Participants then received randomized laser heat stimuli, delivered at 1 of 3 intensity levels (low, moderate, high), and we measured the effect of this on behavioral and electrophysiological measures of pain. Participants (N = 27) rated moderate- and high-intensity laser stimuli as more painful after viewing high relative to low associates of pain; these effects remained present when we controlled for measures of mood, anxiety, and physical symptom reporting. Similar effects were observed physiologically, with higher stimulus-preceding negativity after high relative to low associates, and greater amplitudes of the N2 component of the laser-evoked potential after presentation of high associates in the moderate and high laser intensity conditions. These data support activation-based models of the effects of memory on pain perception. Consistent with current theories of memory and pain, we found that high, relative to low, activation of pain concepts in memory increased psychological and physiological responses to laser-induced pain. The effect remained regardless of whether participants showed conscious awareness of activation. Theoretical and clinical implications are discussed. Copyright © 2016 American Pain Society. Published by Elsevier Inc. All rights reserved.
Haeufle, D F B; Günther, M; Wunner, G; Schmitt, S
2014-01-01
In biomechanics and biorobotics, muscles are often associated with reduced movement control effort and simplified control compared to technical actuators. This is based on evidence that the nonlinear muscle properties positively influence movement control. It remains open, however, how to quantify the simplicity aspect of control effort and compare it between systems. Physical measures, such as energy consumption, stability, or jerk, have already been applied to compare biological and technical systems. Here a physical measure of control effort based on information entropy is presented. The idea is that control is simpler if a specific movement is generated with less processed sensor information, depending on the control scheme and the physical properties of the systems being compared. By calculating the Shannon information entropy of all sensor signals required for control, an information cost function can be formulated that allows the comparison of models of biological and technical control systems. Applied, as an example, to (bio-)mechanical models of hopping, the method reveals that the information required to generate hopping with a muscle driven by a simple reflex control scheme is only I=32 bits, versus I=660 bits with a DC motor and a proportional-differential controller. This approach to quantifying control effort captures the simplicity of a control scheme and can be used to compare completely different actuators and control approaches.
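A hedged sketch of the entropy-based cost idea (the quantization, signal names, and synthetic data are illustrative; the paper's exact formulation is not reproduced): quantize each sensor signal used by a controller, compute its Shannon entropy, and sum over the signals the control scheme requires.

```python
import numpy as np

def signal_entropy(x, n_bins=16):
    """Shannon entropy (bits) of a signal after uniform quantization."""
    counts, _ = np.histogram(x, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def information_cost(signals, n_bins=16):
    """Total bits of processed sensor information for one control scheme."""
    return sum(signal_entropy(s, n_bins) for s in signals)

# Toy comparison: a reflex controller using one muscle-length signal versus
# a PD controller using position and velocity signals (synthetic data).
t = np.linspace(0, 2, 2000)
length = 0.1 * np.sin(8 * t)
position, velocity = np.cos(8 * t), -8 * np.sin(8 * t)

print("reflex scheme:", round(information_cost([length]), 2), "bits")
print("PD scheme:    ", round(information_cost([position, velocity]), 2), "bits")
```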
Quantification of network structural dissimilarities.
Schieber, Tiago A; Carpi, Laura; Díaz-Guilera, Albert; Pardalos, Panos M; Masoller, Cristina; Ravetti, Martín G
2017-01-09
Identifying and quantifying dissimilarities among graphs is a fundamental and challenging problem of practical importance in many fields of science. Current methods of network comparison either extract only partial information or are computationally very demanding. Here we propose an efficient and precise measure for network comparison, which is based on quantifying differences among distance probability distributions extracted from the networks. Extensive experiments on synthetic and real-world networks show that this measure returns non-zero values only when the graphs are non-isomorphic. Most importantly, the measure proposed here can identify and quantify structural topological differences that have a practical impact on the information flow through the network, such as the presence or absence of critical links that connect or disconnect connected components.
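A minimal sketch in the spirit of the measure described (only the distance-distribution comparison, not the full dissimilarity defined in the paper; graph choices and function names are illustrative): extract each graph's distribution of shortest-path lengths and compare them with the Jensen-Shannon distance.

```python
import numpy as np
import networkx as nx
from scipy.spatial.distance import jensenshannon

def distance_distribution(G, max_dist):
    """Probability distribution of finite shortest-path lengths in G."""
    counts = np.zeros(max_dist + 1)
    for _, lengths in nx.shortest_path_length(G):
        for d in lengths.values():
            if d > 0:
                counts[d] += 1
    return counts / counts.sum()

G1 = nx.connected_watts_strogatz_graph(100, 4, 0.05, seed=1)  # near-regular ring
G2 = nx.connected_watts_strogatz_graph(100, 4, 0.50, seed=1)  # strongly rewired

max_d = max(nx.diameter(G1), nx.diameter(G2))
p1, p2 = distance_distribution(G1, max_d), distance_distribution(G2, max_d)
print("JS distance between distance distributions:",
      round(jensenshannon(p1, p2, base=2), 3))
```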
2014-05-01
Information, Understanding, and Influence: An Agency Theory Strategy for Air Base Communications and Cyberspace Support (Distribution A: approved for public release, distribution unlimited). ... present communications and cyberspace support organizations. Next, it introduces a strategy based on this analysis to bring information, understanding