Basic research on design analysis methods for rotorcraft vibrations
NASA Technical Reports Server (NTRS)
Hanagud, S.
1991-01-01
The objective of the present work was to develop a method for identifying physically plausible finite element system models of airframe structures from test data. The assumed models were based on linear elastic behavior with general (nonproportional) damping. Physical plausibility of the identified system matrices was insured by restricting the identification process to designated physical parameters only and not simply to the elements of the system matrices themselves. For example, in a large finite element model the identified parameters might be restricted to the moduli for each of the different materials used in the structure. In the case of damping, a restricted set of damping values might be assigned to finite elements based on the material type and on the fabrication processes used. In this case, different damping values might be associated with riveted, bolted and bonded elements. The method itself is developed first, and several approaches are outlined for computing the identified parameter values. The method is applied first to a simple structure for which the 'measured' response is actually synthesized from an assumed model. Both stiffness and damping parameter values are accurately identified. The true test, however, is the application to a full-scale airframe structure. In this case, a NASTRAN model and actual measured modal parameters formed the basis for the identification of a restricted set of physically plausible stiffness and damping parameters.
NASA Astrophysics Data System (ADS)
Karmalkar, A.; Sexton, D.; Murphy, J.
2017-12-01
We present exploratory work towards developing an efficient strategy to select variants of a state-of-the-art but expensive climate model suitable for climate projection studies. The strategy combines information from a set of idealized perturbed parameter ensemble (PPE) and CMIP5 multi-model ensemble (MME) experiments, and uses two criteria as basis to select model variants for a PPE suitable for future projections: a) acceptable model performance at two different timescales, and b) maintaining diversity in model response to climate change. We demonstrate that there is a strong relationship between model errors at weather and climate timescales for a variety of key variables. This relationship is used to filter out parts of parameter space that do not give credible simulations of historical climate, while minimizing the impact on ranges in forcings and feedbacks that drive model responses to climate change. We use statistical emulation to explore the parameter space thoroughly, and demonstrate that about 90% can be filtered out without affecting diversity in global-scale climate change responses. This leads to identification of plausible parts of parameter space from which model variants can be selected for projection studies.
Adaptive selection and validation of models of complex systems in the presence of uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell-Maupin, Kathryn; Oden, J. T.
This study describes versions of OPAL, the Occam-Plausibility Algorithm in which the use of Bayesian model plausibilities is replaced with information theoretic methods, such as the Akaike Information Criterion and the Bayes Information Criterion. Applications to complex systems of coarse-grained molecular models approximating atomistic models of polyethylene materials are described. All of these model selection methods take into account uncertainties in the model, the observational data, the model parameters, and the predicted quantities of interest. A comparison of the models chosen by Bayesian model selection criteria and those chosen by the information-theoretic criteria is given.
Adaptive selection and validation of models of complex systems in the presence of uncertainty
Farrell-Maupin, Kathryn; Oden, J. T.
2017-08-01
This study describes versions of OPAL, the Occam-Plausibility Algorithm in which the use of Bayesian model plausibilities is replaced with information theoretic methods, such as the Akaike Information Criterion and the Bayes Information Criterion. Applications to complex systems of coarse-grained molecular models approximating atomistic models of polyethylene materials are described. All of these model selection methods take into account uncertainties in the model, the observational data, the model parameters, and the predicted quantities of interest. A comparison of the models chosen by Bayesian model selection criteria and those chosen by the information-theoretic criteria is given.
Synek, Alexander; Pahr, Dieter H
2018-06-01
A micro-finite element-based method to estimate the bone loading history based on bone architecture was recently presented in the literature. However, a thorough investigation of the parameter sensitivity and plausibility of this method to predict joint loads is still missing. The goals of this study were (1) to analyse the parameter sensitivity of the joint load predictions at one proximal femur and (2) to assess the plausibility of the results by comparing load predictions of ten proximal femora to in vivo hip joint forces measured with instrumented prostheses (available from www.orthoload.com ). Joint loads were predicted by optimally scaling the magnitude of four unit loads (inclined [Formula: see text] to [Formula: see text] with respect to the vertical axis) applied to micro-finite element models created from high-resolution computed tomography scans ([Formula: see text]m voxel size). Parameter sensitivity analysis was performed by varying a total of nine parameters and showed that predictions of the peak load directions (range 10[Formula: see text]-[Formula: see text]) are more robust than the predicted peak load magnitudes (range 2344.8-4689.5 N). Comparing the results of all ten femora with the in vivo loading data of ten subjects showed that peak loads are plausible both in terms of the load direction (in vivo: [Formula: see text], predicted: [Formula: see text]) and magnitude (in vivo: [Formula: see text], predicted: [Formula: see text]). Overall, this study suggests that micro-finite element-based joint load predictions are both plausible and robust in terms of the predicted peak load direction, but predicted load magnitudes should be interpreted with caution.
Framework for Uncertainty Assessment - Hanford Site-Wide Groundwater Flow and Transport Modeling
NASA Astrophysics Data System (ADS)
Bergeron, M. P.; Cole, C. R.; Murray, C. J.; Thorne, P. D.; Wurstner, S. K.
2002-05-01
Pacific Northwest National Laboratory is in the process of development and implementation of an uncertainty estimation methodology for use in future site assessments that addresses parameter uncertainty as well as uncertainties related to the groundwater conceptual model. The long-term goals of the effort are development and implementation of an uncertainty estimation methodology for use in future assessments and analyses being made with the Hanford site-wide groundwater model. The basic approach in the framework developed for uncertainty assessment consists of: 1) Alternate conceptual model (ACM) identification to identify and document the major features and assumptions of each conceptual model. The process must also include a periodic review of the existing and proposed new conceptual models as data or understanding become available. 2) ACM development of each identified conceptual model through inverse modeling with historical site data. 3) ACM evaluation to identify which of conceptual models are plausible and should be included in any subsequent uncertainty assessments. 4) ACM uncertainty assessments will only be carried out for those ACMs determined to be plausible through comparison with historical observations and model structure identification measures. The parameter uncertainty assessment process generally involves: a) Model Complexity Optimization - to identify the important or relevant parameters for the uncertainty analysis; b) Characterization of Parameter Uncertainty - to develop the pdfs for the important uncertain parameters including identification of any correlations among parameters; c) Propagation of Uncertainty - to propagate parameter uncertainties (e.g., by first order second moment methods if applicable or by a Monte Carlo approach) through the model to determine the uncertainty in the model predictions of interest. 5)Estimation of combined ACM and scenario uncertainty by a double sum with each component of the inner sum (an individual CCDF) representing parameter uncertainty associated with a particular scenario and ACM and the outer sum enumerating the various plausible ACM and scenario combinations in order to represent the combined estimate of uncertainty (a family of CCDFs). A final important part of the framework includes identification, enumeration, and documentation of all the assumptions, which include those made during conceptual model development, required by the mathematical model, required by the numerical model, made during the spatial and temporal descretization process, needed to assign the statistical model and associated parameters that describe the uncertainty in the relevant input parameters, and finally those assumptions required by the propagation method. Pacific Northwest National Laboratory is operated for the U.S. Department of Energy under Contract DE-AC06-76RL01830.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldstein, Peter
2014-01-24
This report describes the sensitivity of predicted nuclear fallout to a variety of model input parameters, including yield, height of burst, particle and activity size distribution parameters, wind speed, wind direction, topography, and precipitation. We investigate sensitivity over a wide but plausible range of model input parameters. In addition, we investigate a specific example with a relatively narrow range to illustrate the potential for evaluating uncertainties in predictions when there are more precise constraints on model parameters.
NASA Astrophysics Data System (ADS)
Zhang, Jiaxin; Shields, Michael D.
2018-01-01
This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation are retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and associated probabilities that each method is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied for uncertainty analysis of plate buckling strength where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell, Kathryn, E-mail: kfarrell@ices.utexas.edu; Oden, J. Tinsley, E-mail: oden@ices.utexas.edu; Faghihi, Danial, E-mail: danial@ices.utexas.edu
A general adaptive modeling algorithm for selection and validation of coarse-grained models of atomistic systems is presented. A Bayesian framework is developed to address uncertainties in parameters, data, and model selection. Algorithms for computing output sensitivities to parameter variances, model evidence and posterior model plausibilities for given data, and for computing what are referred to as Occam Categories in reference to a rough measure of model simplicity, make up components of the overall approach. Computational results are provided for representative applications.
Compact continuum brain model for human electroencephalogram
NASA Astrophysics Data System (ADS)
Kim, J. W.; Shin, H.-B.; Robinson, P. A.
2007-12-01
A low-dimensional, compact brain model has recently been developed based on physiologically based mean-field continuum formulation of electric activity of the brain. The essential feature of the new compact model is a second order time-delayed differential equation that has physiologically plausible terms, such as rapid corticocortical feedback and delayed feedback via extracortical pathways. Due to its compact form, the model facilitates insight into complex brain dynamics via standard linear and nonlinear techniques. The model successfully reproduces many features of previous models and experiments. For example, experimentally observed typical rhythms of electroencephalogram (EEG) signals are reproduced in a physiologically plausible parameter region. In the nonlinear regime, onsets of seizures, which often develop into limit cycles, are illustrated by modulating model parameters. It is also shown that a hysteresis can occur when the system has multiple attractors. As a further illustration of this approach, power spectra of the model are fitted to those of sleep EEGs of two subjects (one with apnea, the other with narcolepsy). The model parameters obtained from the fittings show good matches with previous literature. Our results suggest that the compact model can provide a theoretical basis for analyzing complex EEG signals.
NASA Astrophysics Data System (ADS)
Farrell, Kathryn; Oden, J. Tinsley; Faghihi, Danial
2015-08-01
A general adaptive modeling algorithm for selection and validation of coarse-grained models of atomistic systems is presented. A Bayesian framework is developed to address uncertainties in parameters, data, and model selection. Algorithms for computing output sensitivities to parameter variances, model evidence and posterior model plausibilities for given data, and for computing what are referred to as Occam Categories in reference to a rough measure of model simplicity, make up components of the overall approach. Computational results are provided for representative applications.
Selection, calibration, and validation of models of tumor growth.
Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C
2016-11-01
This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth. First, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous, macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes, and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspects of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors is presented, which leads to many potentially predictive models. Then representative classes are identified which provide a starting point for the implementation of OPAL, the Occam Plausibility Algorithm (OPAL) which enables the modeler to select the most plausible models (for given data) and to determine if the model is a valid tool for predicting tumor growth and morphology ( in vivo ). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-fields models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory animals while demonstrating successful implementations of OPAL.
Simulation-based sensitivity analysis for non-ignorably missing data.
Yin, Peng; Shi, Jian Q
2017-01-01
Sensitivity analysis is popular in dealing with missing data problems particularly for non-ignorable missingness, where full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) may depend on assumptions or parameters (input) about missing data, i.e. missing data mechanism. We call models with the problem of uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice we need to define some simple and interpretable statistical quantities to assess the sensitivity models and make evidence based analysis. We propose a novel approach in this paper on attempting to investigate the possibility of each missing data mechanism model assumption, by comparing the simulated datasets from various MNAR models with the observed data non-parametrically, using the K-nearest-neighbour distances. Some asymptotic theory has also been provided. A key step of this method is to plug in a plausibility evaluation system towards each sensitivity parameter, to select plausible values and reject unlikely values, instead of considering all proposed values of sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper including meta-analysis model with publication bias, analysis of incomplete longitudinal data and mean estimation with non-ignorable missing data.
Using Dirichlet Priors to Improve Model Parameter Plausibility
ERIC Educational Resources Information Center
Rai, Dovan; Gong, Yue; Beck, Joseph E.
2009-01-01
Student modeling is a widely used approach to make inference about a student's attributes like knowledge, learning, etc. If we wish to use these models to analyze and better understand student learning there are two problems. First, a model's ability to predict student performance is at best weakly related to the accuracy of any one of its…
Model-based recovery of histological parameters from multispectral images of the colon
NASA Astrophysics Data System (ADS)
Hidovic-Rowe, Dzena; Claridge, Ela
2005-04-01
Colon cancer alters the macroarchitecture of the colon tissue. Common changes include angiogenesis and the distortion of the tissue collagen matrix. Such changes affect the colon colouration. This paper presents the principles of a novel optical imaging method capable of extracting parameters depicting histological quantities of the colon. The method is based on a computational, physics-based model of light interaction with tissue. The colon structure is represented by three layers: mucosa, submucosa and muscle layer. Optical properties of the layers are defined by molar concentration and absorption coefficients of haemoglobins; the size and density of collagen fibres; the thickness of the layer and the refractive indexes of collagen and the medium. Using the entire histologically plausible ranges for these parameters, a cross-reference is created computationally between the histological quantities and the associated spectra. The output of the model was compared to experimental data acquired in vivo from 57 histologically confirmed normal and abnormal tissue samples and histological parameters were extracted. The model produced spectra which match well the measured data, with the corresponding spectral parameters being well within histologically plausible ranges. Parameters extracted for the abnormal spectra showed the increase in blood volume fraction and changes in collagen pattern characteristic of the colon cancer. The spectra extracted from multi-spectral images of ex-vivo colon including adenocarcinoma show the characteristic features associated with normal and abnormal colon tissue. These findings suggest that it should be possible to compute histological quantities for the colon from the multi-spectral images.
Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha
2007-01-01
Background The estimation of health impacts involves often uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with model, and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and parameter uncertainties (iii) exposure-response coefficients for different mortality outcomes, and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. Results The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in the fine particle impact assessment when compared with other uncertainties. Conclusion When estimating life-expectancy, the estimates used for cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results. PMID:17714598
Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha
2007-08-23
The estimation of health impacts involves often uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with model, and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and parameter uncertainties (iii) exposure-response coefficients for different mortality outcomes, and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in the fine particle impact assessment when compared with other uncertainties. When estimating life-expectancy, the estimates used for cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results.
Multiple regimes of robust patterns between network structure and biodiversity
NASA Astrophysics Data System (ADS)
Jover, Luis F.; Flores, Cesar O.; Cortez, Michael H.; Weitz, Joshua S.
2015-12-01
Ecological networks such as plant-pollinator and host-parasite networks have structured interactions that define who interacts with whom. The structure of interactions also shapes ecological and evolutionary dynamics. Yet, there is significant ongoing debate as to whether certain structures, e.g., nestedness, contribute positively, negatively or not at all to biodiversity. We contend that examining variation in life history traits is key to disentangling the potential relationship between network structure and biodiversity. Here, we do so by analyzing a dynamic model of virus-bacteria interactions across a spectrum of network structures. Consistent with prior studies, we find plausible parameter domains exhibiting strong, positive relationships between nestedness and biodiversity. Yet, the same model can exhibit negative relationships between nestedness and biodiversity when examined in a distinct, plausible region of parameter space. We discuss steps towards identifying when network structure could, on its own, drive the resilience, sustainability, and even conservation of ecological communities.
Multiple regimes of robust patterns between network structure and biodiversity
Jover, Luis F.; Flores, Cesar O.; Cortez, Michael H.; Weitz, Joshua S.
2015-01-01
Ecological networks such as plant-pollinator and host-parasite networks have structured interactions that define who interacts with whom. The structure of interactions also shapes ecological and evolutionary dynamics. Yet, there is significant ongoing debate as to whether certain structures, e.g., nestedness, contribute positively, negatively or not at all to biodiversity. We contend that examining variation in life history traits is key to disentangling the potential relationship between network structure and biodiversity. Here, we do so by analyzing a dynamic model of virus-bacteria interactions across a spectrum of network structures. Consistent with prior studies, we find plausible parameter domains exhibiting strong, positive relationships between nestedness and biodiversity. Yet, the same model can exhibit negative relationships between nestedness and biodiversity when examined in a distinct, plausible region of parameter space. We discuss steps towards identifying when network structure could, on its own, drive the resilience, sustainability, and even conservation of ecological communities. PMID:26632996
NASA Astrophysics Data System (ADS)
Badawy, B.; Fletcher, C. G.
2017-12-01
The parameterization of snow processes in land surface models is an important source of uncertainty in climate simulations. Quantifying the importance of snow-related parameters, and their uncertainties, may therefore lead to better understanding and quantification of uncertainty within integrated earth system models. However, quantifying the uncertainty arising from parameterized snow processes is challenging due to the high-dimensional parameter space, poor observational constraints, and parameter interaction. In this study, we investigate the sensitivity of the land simulation to uncertainty in snow microphysical parameters in the Canadian LAnd Surface Scheme (CLASS) using an uncertainty quantification (UQ) approach. A set of training cases (n=400) from CLASS is used to sample each parameter across its full range of empirical uncertainty, as determined from available observations and expert elicitation. A statistical learning model using support vector regression (SVR) is then constructed from the training data (CLASS output variables) to efficiently emulate the dynamical CLASS simulations over a much larger (n=220) set of cases. This approach is used to constrain the plausible range for each parameter using a skill score, and to identify the parameters with largest influence on the land simulation in CLASS at global and regional scales, using a random forest (RF) permutation importance algorithm. Preliminary sensitivity tests indicate that snow albedo refreshment threshold and the limiting snow depth, below which bare patches begin to appear, have the highest impact on snow output variables. The results also show a considerable reduction of the plausible ranges of the parameters values and hence reducing their uncertainty ranges, which can lead to a significant reduction of the model uncertainty. The implementation and results of this study will be presented and discussed in details.
Dallmann, André; Ince, Ibrahim; Meyer, Michaela; Willmann, Stefan; Eissing, Thomas; Hempel, Georg
2017-11-01
In the past years, several repositories for anatomical and physiological parameters required for physiologically based pharmacokinetic modeling in pregnant women have been published. While providing a good basis, some important aspects can be further detailed. For example, they did not account for the variability associated with parameters or were lacking key parameters necessary for developing more detailed mechanistic pregnancy physiologically based pharmacokinetic models, such as the composition of pregnancy-specific tissues. The aim of this meta-analysis was to provide an updated and extended database of anatomical and physiological parameters in healthy pregnant women that also accounts for changes in the variability of a parameter throughout gestation and for the composition of pregnancy-specific tissues. A systematic literature search was carried out to collect study data on pregnancy-related changes of anatomical and physiological parameters. For each parameter, a set of mathematical functions was fitted to the data and to the standard deviation observed among the data. The best performing functions were selected based on numerical and visual diagnostics as well as based on physiological plausibility. The literature search yielded 473 studies, 302 of which met the criteria to be further analyzed and compiled in a database. In total, the database encompassed 7729 data. Although the availability of quantitative data for some parameters remained limited, mathematical functions could be generated for many important parameters. Gaps were filled based on qualitative knowledge and based on physiologically plausible assumptions. The presented results facilitate the integration of pregnancy-dependent changes in anatomy and physiology into mechanistic population physiologically based pharmacokinetic models. Such models can ultimately provide a valuable tool to investigate the pharmacokinetics during pregnancy in silico and support informed decision making regarding optimal dosing regimens in this vulnerable special population.
NASA Astrophysics Data System (ADS)
Maldonado, Solvey; Findeisen, Rolf
2010-06-01
The modeling, analysis, and design of treatment therapies for bone disorders based on the paradigm of force-induced bone growth and adaptation is a challenging task. Mathematical models provide, in comparison to clinical, medical and biological approaches an structured alternative framework to understand the concurrent effects of the multiple factors involved in bone remodeling. By now, there are few mathematical models describing the appearing complex interactions. However, the resulting models are complex and difficult to analyze, due to the strong nonlinearities appearing in the equations, the wide range of variability of the states, and the uncertainties in parameters. In this work, we focus on analyzing the effects of changes in model structure and parameters/inputs variations on the overall steady state behavior using systems theoretical methods. Based on an briefly reviewed existing model that describes force-induced bone adaptation, the main objective of this work is to analyze the stationary behavior and to identify plausible treatment targets for remodeling related bone disorders. Identifying plausible targets can help in the development of optimal treatments combining both physical activity and drug-medication. Such treatments help to improve/maintain/restore bone strength, which deteriorates under bone disorder conditions, such as estrogen deficiency.
Two Strain Dengue Model with Temporary Cross Immunity and Seasonality
NASA Astrophysics Data System (ADS)
Aguiar, Maíra; Ballesteros, Sebastien; Stollenwerk, Nico
2010-09-01
Models on dengue fever epidemiology have previously shown critical fluctuations with power law distributions and also deterministic chaos in some parameter regions due to the multi-strain structure of the disease pathogen. In our first model including well known biological features, we found a rich dynamical structure including limit cycles, symmetry breaking bifurcations, torus bifurcations, coexisting attractors including isola solutions and deterministic chaos (as indicated by positive Lyapunov exponents) in a much larger parameter region, which is also biologically more plausible than the previous results of other researches. Based on these findings we will investigate the model structures further including seasonality.
Two Strain Dengue Model with Temporary Cross Immunity and Seasonality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguiar, Maira; Ballesteros, Sebastien; Stollenwerk, Nico
Models on dengue fever epidemiology have previously shown critical fluctuations with power law distributions and also deterministic chaos in some parameter regions due to the multi-strain structure of the disease pathogen. In our first model including well known biological features, we found a rich dynamical structure including limit cycles, symmetry breaking bifurcations, torus bifurcations, coexisting attractors including isola solutions and deterministic chaos (as indicated by positive Lyapunov exponents) in a much larger parameter region, which is also biologically more plausible than the previous results of other researches. Based on these findings we will investigate the model structures further including seasonality.
SOME USES OF MODELS OF QUANTITATIVE GENETIC SELECTION IN SOCIAL SCIENCE.
Weight, Michael D; Harpending, Henry
2017-01-01
The theory of selection of quantitative traits is widely used in evolutionary biology, agriculture and other related fields. The fundamental model known as the breeder's equation is simple, robust over short time scales, and it is often possible to estimate plausible parameters. In this paper it is suggested that the results of this model provide useful yardsticks for the description of social traits and the evaluation of transmission models. The differences on a standard personality test between samples of Old Order Amish and Indiana rural young men from the same county and the decline of homicide in Medieval Europe are used as illustrative examples of the overall approach. It is shown that the decline of homicide is unremarkable under a threshold model while the differences between rural Amish and non-Amish young men are too large to be a plausible outcome of simple genetic selection in which assortative mating by affiliation is equivalent to truncation selection.
NASA Astrophysics Data System (ADS)
Pyt'ev, Yu. P.
2018-01-01
mathematical formalism for subjective modeling, based on modelling of uncertainty, reflecting unreliability of subjective information and fuzziness that is common for its content. The model of subjective judgments on values of an unknown parameter x ∈ X of the model M( x) of a research object is defined by the researcher-modeler as a space1 ( X, p( X), P{I^{\\bar x}}, Be{l^{\\bar x}}) with plausibility P{I^{\\bar x}} and believability Be{l^{\\bar x}} measures, where x is an uncertain element taking values in X that models researcher—modeler's uncertain propositions about an unknown x ∈ X, measures P{I^{\\bar x}}, Be{l^{\\bar x}} model modalities of a researcher-modeler's subjective judgments on the validity of each x ∈ X: the value of P{I^{\\bar x}}(\\tilde x = x) determines how relatively plausible, in his opinion, the equality (\\tilde x = x) is, while the value of Be{l^{\\bar x}}(\\tilde x = x) determines how the inequality (\\tilde x = x) should be relatively believed in. Versions of plausibility Pl and believability Bel measures and pl- and bel-integrals that inherit some traits of probabilities, psychophysics and take into account interests of researcher-modeler groups are considered. It is shown that the mathematical formalism of subjective modeling, unlike "standard" mathematical modeling, •enables a researcher-modeler to model both precise formalized knowledge and non-formalized unreliable knowledge, from complete ignorance to precise knowledge of the model of a research object, to calculate relative plausibilities and believabilities of any features of a research object that are specified by its subjective model M(\\tilde x), and if the data on observations of a research object is available, then it: •enables him to estimate the adequacy of subjective model to the research objective, to correct it by combining subjective ideas and the observation data after testing their consistency, and, finally, to empirically recover the model of a research object.
NASA Astrophysics Data System (ADS)
Hawkins, L. R.; Rupp, D. E.; Li, S.; Sarah, S.; McNeall, D. J.; Mote, P.; Betts, R. A.; Wallom, D.
2017-12-01
Changing regional patterns of surface temperature, precipitation, and humidity may cause ecosystem-scale changes in vegetation, altering the distribution of trees, shrubs, and grasses. A changing vegetation distribution, in turn, alters the albedo, latent heat flux, and carbon exchanged with the atmosphere with resulting feedbacks onto the regional climate. However, a wide range of earth-system processes that affect the carbon, energy, and hydrologic cycles occur at sub grid scales in climate models and must be parameterized. The appropriate parameter values in such parameterizations are often poorly constrained, leading to uncertainty in predictions of how the ecosystem will respond to changes in forcing. To better understand the sensitivity of regional climate to parameter selection and to improve regional climate and vegetation simulations, we used a large perturbed physics ensemble and a suite of statistical emulators. We dynamically downscaled a super-ensemble (multiple parameter sets and multiple initial conditions) of global climate simulations using a 25-km resolution regional climate model HadRM3p with the land-surface scheme MOSES2 and dynamic vegetation module TRIFFID. We simultaneously perturbed land surface parameters relating to the exchange of carbon, water, and energy between the land surface and atmosphere in a large super-ensemble of regional climate simulations over the western US. Statistical emulation was used as a computationally cost-effective tool to explore uncertainties in interactions. Regions of parameter space that did not satisfy observational constraints were eliminated and an ensemble of parameter sets that reduce regional biases and span a range of plausible interactions among earth system processes were selected. This study demonstrated that by combining super-ensemble simulations with statistical emulation, simulations of regional climate could be improved while simultaneously accounting for a range of plausible land-atmosphere feedback strengths.
Macroecological analyses support an overkill scenario for late Pleistocene extinctions.
Diniz-Filho, J A F
2004-08-01
The extinction of megafauna at the end of Pleistocene has been traditionally explained by environmental changes or overexploitation by human hunting (overkill). Despite difficulties in choosing between these alternative (and not mutually exclusive) scenarios, the plausibility of the overkill hypothesis can be established by ecological models of predator-prey interactions. In this paper, I have developed a macroecological model for the overkill hypothesis, in which prey population dynamic parameters, including abundance, geographic extent, and food supply for hunters, were derived from empirical allometric relationships with body mass. The last output correctly predicts the final destiny (survival or extinction) for 73% of the species considered, a value only slightly smaller than those obtained by more complex models based on detailed archaeological and ecological data for each species. This illustrates the high selectivity of Pleistocene extinction in relation to body mass and confers more plausibility on the overkill scenario.
Xu, Kesheng; Maidana, Jean P.; Caviedes, Mauricio; Quero, Daniel; Aguirre, Pablo; Orio, Patricio
2017-01-01
In this article, we describe and analyze the chaotic behavior of a conductance-based neuronal bursting model. This is a model with a reduced number of variables, yet it retains biophysical plausibility. Inspired by the activity of cold thermoreceptors, the model contains a persistent Sodium current, a Calcium-activated Potassium current and a hyperpolarization-activated current (Ih) that drive a slow subthreshold oscillation. Driven by this oscillation, a fast subsystem (fast Sodium and Potassium currents) fires action potentials in a periodic fashion. Depending on the parameters, this model can generate a variety of firing patterns that includes bursting, regular tonic and polymodal firing. Here we show that the transitions between different firing patterns are often accompanied by a range of chaotic firing, as suggested by an irregular, non-periodic firing pattern. To confirm this, we measure the maximum Lyapunov exponent of the voltage trajectories, and the Lyapunov exponent and Lempel-Ziv's complexity of the ISI time series. The four-variable slow system (without spiking) also generates chaotic behavior, and bifurcation analysis shows that this is often originated by period doubling cascades. Either with or without spikes, chaos is no longer generated when the Ih is removed from the system. As the model is biologically plausible with biophysically meaningful parameters, we propose it as a useful tool to understand chaotic dynamics in neurons. PMID:28344550
Phillips, Lawrence; Pearl, Lisa
2015-11-01
The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition. Copyright © 2015 Cognitive Science Society, Inc.
On the distinguishability of HRF models in fMRI.
Rosa, Paulo N; Figueiredo, Patricia; Silvestre, Carlos J
2015-01-01
Modeling the Hemodynamic Response Function (HRF) is a critical step in fMRI studies of brain activity, and it is often desirable to estimate HRF parameters with physiological interpretability. A biophysically informed model of the HRF can be described by a non-linear time-invariant dynamic system. However, the identification of this dynamic system may leave much uncertainty on the exact values of the parameters. Moreover, the high noise levels in the data may hinder the model estimation task. In this context, the estimation of the HRF may be seen as a problem of model falsification or invalidation, where we are interested in distinguishing among a set of eligible models of dynamic systems. Here, we propose a systematic tool to determine the distinguishability among a set of physiologically plausible HRF models. The concept of absolutely input-distinguishable systems is introduced and applied to a biophysically informed HRF model, by exploiting the structure of the underlying non-linear dynamic system. A strategy to model uncertainty in the input time-delay and magnitude is developed and its impact on the distinguishability of two physiologically plausible HRF models is assessed, in terms of the maximum noise amplitude above which it is not possible to guarantee the falsification of one model in relation to another. Finally, a methodology is proposed for the choice of the input sequence, or experimental paradigm, that maximizes the distinguishability of the HRF models under investigation. The proposed approach may be used to evaluate the performance of HRF model estimation techniques from fMRI data.
A Workflow for Global Sensitivity Analysis of PBPK Models
McNally, Kevin; Cotton, Richard; Loizou, George D.
2011-01-01
Physiologically based pharmacokinetic (PBPK) models have a potentially significant role in the development of a reliable predictive toxicity testing strategy. The structure of PBPK models are ideal frameworks into which disparate in vitro and in vivo data can be integrated and utilized to translate information generated, using alternative to animal measures of toxicity and human biological monitoring data, into plausible corresponding exposures. However, these models invariably include the description of well known non-linear biological processes such as, enzyme saturation and interactions between parameters such as, organ mass and body mass. Therefore, an appropriate sensitivity analysis (SA) technique is required which can quantify the influences associated with individual parameters, interactions between parameters and any non-linear processes. In this report we have defined the elements of a workflow for SA of PBPK models that is computationally feasible, accounts for interactions between parameters, and can be displayed in the form of a bar chart and cumulative sum line (Lowry plot), which we believe is intuitive and appropriate for toxicologists, risk assessors, and regulators. PMID:21772819
Knight, Christopher G.; Knight, Sylvia H. E.; Massey, Neil; Aina, Tolu; Christensen, Carl; Frame, Dave J.; Kettleborough, Jamie A.; Martin, Andrew; Pascoe, Stephen; Sanderson, Ben; Stainforth, David A.; Allen, Myles R.
2007-01-01
In complex spatial models, as used to predict the climate response to greenhouse gas emissions, parameter variation within plausible bounds has major effects on model behavior of interest. Here, we present an unprecedentedly large ensemble of >57,000 climate model runs in which 10 parameters, initial conditions, hardware, and software used to run the model all have been varied. We relate information about the model runs to large-scale model behavior (equilibrium sensitivity of global mean temperature to a doubling of carbon dioxide). We demonstrate that effects of parameter, hardware, and software variation are detectable, complex, and interacting. However, we find most of the effects of parameter variation are caused by a small subset of parameters. Notably, the entrainment coefficient in clouds is associated with 30% of the variation seen in climate sensitivity, although both low and high values can give high climate sensitivity. We demonstrate that the effect of hardware and software is small relative to the effect of parameter variation and, over the wide range of systems tested, may be treated as equivalent to that caused by changes in initial conditions. We discuss the significance of these results in relation to the design and interpretation of climate modeling experiments and large-scale modeling more generally. PMID:17640921
NASA Astrophysics Data System (ADS)
Farrell, Kathryn; Oden, J. Tinsley
2014-07-01
Coarse-grained models of atomic systems, created by aggregating groups of atoms into molecules to reduce the number of degrees of freedom, have been used for decades in important scientific and technological applications. In recent years, interest in developing a more rigorous theory for coarse graining and in assessing the predictivity of coarse-grained models has arisen. In this work, Bayesian methods for the calibration and validation of coarse-grained models of atomistic systems in thermodynamic equilibrium are developed. For specificity, only configurational models of systems in canonical ensembles are considered. Among major challenges in validating coarse-grained models are (1) the development of validation processes that lead to information essential in establishing confidence in the model's ability predict key quantities of interest and (2), above all, the determination of the coarse-grained model itself; that is, the characterization of the molecular architecture, the choice of interaction potentials and thus parameters, which best fit available data. The all-atom model is treated as the "ground truth," and it provides the basis with respect to which properties of the coarse-grained model are compared. This base all-atom model is characterized by an appropriate statistical mechanics framework in this work by canonical ensembles involving only configurational energies. The all-atom model thus supplies data for Bayesian calibration and validation methods for the molecular model. To address the first challenge, we develop priors based on the maximum entropy principle and likelihood functions based on Gaussian approximations of the uncertainties in the parameter-to-observation error. To address challenge (2), we introduce the notion of model plausibilities as a means for model selection. This methodology provides a powerful approach toward constructing coarse-grained models which are most plausible for given all-atom data. We demonstrate the theory and methods through applications to representative atomic structures and we discuss extensions to the validation process for molecular models of polymer structures encountered in certain semiconductor nanomanufacturing processes. The powerful method of model plausibility as a means for selecting interaction potentials for coarse-grained models is discussed in connection with a coarse-grained hexane molecule. Discussions of how all-atom information is used to construct priors are contained in an appendix.
Real-time physics-based 3D biped character animation using an inverted pendulum model.
Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee
2010-01-01
We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
Dynamic causal modelling: a critical review of the biophysical and statistical foundations.
Daunizeau, J; David, O; Stephan, K E
2011-09-15
The goal of dynamic causal modelling (DCM) of neuroimaging data is to study experimentally induced changes in functional integration among brain regions. This requires (i) biophysically plausible and physiologically interpretable models of neuronal network dynamics that can predict distributed brain responses to experimental stimuli and (ii) efficient statistical methods for parameter estimation and model comparison. These two key components of DCM have been the focus of more than thirty methodological articles since the seminal work of Friston and colleagues published in 2003. In this paper, we provide a critical review of the current state-of-the-art of DCM. We inspect the properties of DCM in relation to the most common neuroimaging modalities (fMRI and EEG/MEG) and the specificity of inference on neural systems that can be made from these data. We then discuss both the plausibility of the underlying biophysical models and the robustness of the statistical inversion techniques. Finally, we discuss potential extensions of the current DCM framework, such as stochastic DCMs, plastic DCMs and field DCMs. Copyright © 2009 Elsevier Inc. All rights reserved.
A Method of Q-Matrix Validation for the Linear Logistic Test Model
Baghaei, Purya; Hohensinn, Christine
2017-01-01
The linear logistic test model (LLTM) is a well-recognized psychometric model for examining the components of difficulty in cognitive tests and validating construct theories. The plausibility of the construct model, summarized in a matrix of weights, known as the Q-matrix or weight matrix, is tested by (1) comparing the fit of LLTM with the fit of the Rasch model (RM) using the likelihood ratio (LR) test and (2) by examining the correlation between the Rasch model item parameters and LLTM reconstructed item parameters. The problem with the LR test is that it is almost always significant and, consequently, LLTM is rejected. The drawback of examining the correlation coefficient is that there is no cut-off value or lower bound for the magnitude of the correlation coefficient. In this article we suggest a simulation method to set a minimum benchmark for the correlation between item parameters from the Rasch model and those reconstructed by the LLTM. If the cognitive model is valid then the correlation coefficient between the RM-based item parameters and the LLTM-reconstructed item parameters derived from the theoretical weight matrix should be greater than those derived from the simulated matrices. PMID:28611721
NASA Astrophysics Data System (ADS)
Chaloupka, Jiří; Khaliullin, Giniyat
2015-07-01
We have explored the hidden symmetries of a generic four-parameter nearest-neighbor spin model, allowed in honeycomb-lattice compounds under trigonal compression. Our method utilizes a systematic algorithm to identify all dual transformations of the model that map the Hamiltonian on itself, changing the parameters and providing exact links between different points in its parameter space. We have found the complete set of points of hidden SU(2) symmetry at which a seemingly highly anisotropic model can be mapped back on the Heisenberg model and inherits therefore its properties such as the presence of gapless Goldstone modes. The procedure used to search for the hidden symmetries is quite general and may be extended to other bond-anisotropic spin models and other lattices, such as the triangular, kagome, hyperhoneycomb, or harmonic-honeycomb lattices. We apply our findings to the honeycomb-lattice iridates Na2IrO3 and Li2IrO3 , and illustrate how they help to identify plausible values of the model parameters that are compatible with the available experimental data.
Raj, Tirath; Gaur, Ruchi; Dixit, Pooja; Gupta, Ravi P; Kagdiyal, V; Kumar, Ravindra; Tuli, Deepak K
2016-09-20
In this study, five ionic liquids (ILs) have been explored for biomass pretreatment for the production of fermentable sugar. We also investigated the driving factors responsible for improved enzymatic digestibility of various ILs treated biomass along with postulating the plausible mechanism thereof. Post pretreatment, mainly two factors impacted the enzymatic digestibility (i) structural deformation (cellulose I to II) along with xylan/lignin removal and (ii) properties of ILs; wherein, K-T parameters, viscosity and surface tension had a direct influence on pretreatment. A systematic investigation of these parameters and their impact on enzymatic digestibility is drawn. [C2mim][OAc] with β-value 1.32 resulted 97.7% of glucose yield using 10 FPU/g of biomass. A closer insight into the cellulose structural transformation has prompted a plausible mechanism explaining the better digestibility. The impact of these parameters on the digestibility can pave the way to customize the process to make biomass vulnerable to enzymatic attack. Copyright © 2016 Elsevier Ltd. All rights reserved.
Gaining insight into the T _2^*-T2 relationship in surface NMR free-induction decay measurements
NASA Astrophysics Data System (ADS)
Grombacher, Denys; Auken, Esben
2018-05-01
One of the primary shortcomings of the surface nuclear magnetic resonance (NMR) free-induction decay (FID) measurement is the uncertainty surrounding which mechanism controls the signal's time dependence. Ideally, the FID-estimated relaxation time T_2^* that describes the signal's decay carries an intimate link to the geometry of the pore space. In this limit the parameter T_2^* is closely linked to a related parameter T2, which is more closely linked to pore-geometry. If T_2^* ˜eq {T_2} the FID can provide valuable insight into relative pore-size and can be used to make quantitative permeability estimates. However, given only FID measurements it is difficult to determine whether T_2^* is linked to pore geometry or whether it has been strongly influenced by background magnetic field inhomogeneity. If the link between an observed T_2^* and the underlying T2 could be further constrained the utility of the standard surface NMR FID measurement would be greatly improved. We hypothesize that an approach employing an updated surface NMR forward model that solves the full Bloch equations with appropriately weighted relaxation terms can be used to help constrain the T_2^*-T2 relationship. Weighting the relaxation terms requires estimating the poorly constrained parameters T2 and T1; to deal with this uncertainty we propose to conduct a parameter search involving multiple inversions that employ a suite of forward models each describing a distinct but plausible T_2^*-T2 relationship. We hypothesize that forward models given poor T2 estimates will produce poor data fits when using the complex-inversion, while forward models given reliable T2 estimates will produce satisfactory data fits. By examining the data fits produced by the suite of plausible forward models, the likely T_2^*-T2 can be constrained by identifying the range of T2 estimates that produce reliable data fits. Synthetic and field results are presented to investigate the feasibility of the proposed technique.
Physical plausibility of cold star models satisfying Karmarkar conditions
NASA Astrophysics Data System (ADS)
Fuloria, Pratibha; Pant, Neeraj
2017-11-01
In the present article, we have obtained a new well behaved solution to Einstein's field equations in the background of Karmarkar spacetime. The solution has been used for stellar modelling within the demand of current observational evidences. All the physical parameters are well behaved inside the stellar interior and our model satisfies all the required conditions to be physically realizable. The obtained compactness parameter is within the Buchdahl limit, i.e. 2M/R ≤ 8/9 . The TOV equation is well maintained inside the fluid spheres. The stability of the models has been further confirmed by using Herrera's cracking method. The models proposed in the present work are compatible with observational data of compact objects 4U1608-52 and PSRJ1903+327. The necessary graphs have been shown to authenticate the physical viability of our models.
Gaussian Mixture Model of Heart Rate Variability
Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario
2012-01-01
Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386
Stirrup, Oliver T; Babiker, Abdel G; Carpenter, James R; Copas, Andrew J
2016-04-30
Longitudinal data are widely analysed using linear mixed models, with 'random slopes' models particularly common. However, when modelling, for example, longitudinal pre-treatment CD4 cell counts in HIV-positive patients, the incorporation of non-stationary stochastic processes such as Brownian motion has been shown to lead to a more biologically plausible model and a substantial improvement in model fit. In this article, we propose two further extensions. Firstly, we propose the addition of a fractional Brownian motion component, and secondly, we generalise the model to follow a multivariate-t distribution. These extensions are biologically plausible, and each demonstrated substantially improved fit on application to example data from the Concerted Action on SeroConversion to AIDS and Death in Europe study. We also propose novel procedures for residual diagnostic plots that allow such models to be assessed. Cohorts of patients were simulated from the previously reported and newly developed models in order to evaluate differences in predictions made for the timing of treatment initiation under different clinical management strategies. A further simulation study was performed to demonstrate the substantial biases in parameter estimates of the mean slope of CD4 decline with time that can occur when random slopes models are applied in the presence of censoring because of treatment initiation, with the degree of bias found to depend strongly on the treatment initiation rule applied. Our findings indicate that researchers should consider more complex and flexible models for the analysis of longitudinal biomarker data, particularly when there are substantial missing data, and that the parameter estimates from random slopes models must be interpreted with caution. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Parameter Balancing in Kinetic Models of Cell Metabolism†
2010-01-01
Kinetic modeling of metabolic pathways has become a major field of systems biology. It combines structural information about metabolic pathways with quantitative enzymatic rate laws. Some of the kinetic constants needed for a model could be collected from ever-growing literature and public web resources, but they are often incomplete, incompatible, or simply not available. We address this lack of information by parameter balancing, a method to complete given sets of kinetic constants. Based on Bayesian parameter estimation, it exploits the thermodynamic dependencies among different biochemical quantities to guess realistic model parameters from available kinetic data. Our algorithm accounts for varying measurement conditions in the input data (pH value and temperature). It can process kinetic constants and state-dependent quantities such as metabolite concentrations or chemical potentials, and uses prior distributions and data augmentation to keep the estimated quantities within plausible ranges. An online service and free software for parameter balancing with models provided in SBML format (Systems Biology Markup Language) is accessible at www.semanticsbml.org. We demonstrate its practical use with a small model of the phosphofructokinase reaction and discuss its possible applications and limitations. In the future, parameter balancing could become an important routine step in the kinetic modeling of large metabolic networks. PMID:21038890
Dettmer, Jan; Dosso, Stan E
2012-10-01
This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allows inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.
Lathrop, R H; Casale, M; Tobias, D J; Marsh, J L; Thompson, L M
1998-01-01
We describe a prototype system (Poly-X) for assisting an expert user in modeling protein repeats. Poly-X reduces the large number of degrees of freedom required to specify a protein motif in complete atomic detail. The result is a small number of parameters that are easily understood by, and under the direct control of, a domain expert. The system was applied to the polyglutamine (poly-Q) repeat in the first exon of huntingtin, the gene implicated in Huntington's disease. We present four poly-Q structural motifs: two poly-Q beta-sheet motifs (parallel and antiparallel) that constitute plausible alternatives to a similar previously published poly-Q beta-sheet motif, and two novel poly-Q helix motifs (alpha-helix and pi-helix). To our knowledge, helical forms of polyglutamine have not been proposed before. The motifs suggest that there may be several plausible aggregation structures for the intranuclear inclusion bodies which have been found in diseased neurons, and may help in the effort to understand the structural basis for Huntington's disease.
Bayesian analysis of caustic-crossing microlensing events
NASA Astrophysics Data System (ADS)
Cassan, A.; Horne, K.; Kains, N.; Tsapras, Y.; Browne, P.
2010-06-01
Aims: Caustic-crossing binary-lens microlensing events are important anomalous events because they are capable of detecting an extrasolar planet companion orbiting the lens star. Fast and robust modelling methods are thus of prime interest in helping to decide whether a planet is detected by an event. Cassan introduced a new set of parameters to model binary-lens events, which are closely related to properties of the light curve. In this work, we explain how Bayesian priors can be added to this framework, and investigate on interesting options. Methods: We develop a mathematical formulation that allows us to compute analytically the priors on the new parameters, given some previous knowledge about other physical quantities. We explicitly compute the priors for a number of interesting cases, and show how this can be implemented in a fully Bayesian, Markov chain Monte Carlo algorithm. Results: Using Bayesian priors can accelerate microlens fitting codes by reducing the time spent considering physically implausible models, and helps us to discriminate between alternative models based on the physical plausibility of their parameters.
Magnetic anisotropy in the Kitaev model systems Na2IrO3 and RuCl3
NASA Astrophysics Data System (ADS)
Chaloupka, Jiří; Khaliullin, Giniyat
2016-08-01
We study the ordered moment direction in the extended Kitaev-Heisenberg model relevant to honeycomb lattice magnets with strong spin-orbit coupling. We utilize numerical diagonalization and analyze the exact cluster ground states using a particular set of spin-coherent states, obtaining thereby quantum corrections to the magnetic anisotropy beyond conventional perturbative methods. It is found that the quantum fluctuations strongly modify the moment direction obtained at a classical level and are thus crucial for a precise quantification of the interactions. The results show that the moment direction is a sensitive probe of the model parameters in real materials. Focusing on the experimentally relevant zigzag phases of the model, we analyze the currently available neutron-diffraction and resonant x-ray-diffraction data on Na2IrO3 and RuCl3 and discuss the parameter regimes plausible in these Kitaev-Heisenberg model systems.
Exploring the dynamics of balance data — movement variability in terms of drift and diffusion
NASA Astrophysics Data System (ADS)
Gottschall, Julia; Peinke, Joachim; Lippens, Volker; Nagel, Volker
2009-02-01
We introduce a method to analyze postural control on a balance board by reconstructing the underlying dynamics in terms of a Langevin model. Drift and diffusion coefficients are directly estimated from the data and fitted by a suitable parametrization. The governing parameters are utilized to evaluate balance performance and the impact of supra-postural tasks on it. We show that the proposed method of analysis gives not only self-consistent results but also provides a plausible model for the reconstruction of balance dynamics.
Comparison of Damping Mechanisms for Transverse Waves in Solar Coronal Loops
NASA Astrophysics Data System (ADS)
Montes-Solís, María; Arregui, Iñigo
2017-09-01
We present a method to assess the plausibility of alternative mechanisms to explain the damping of magnetohydrodynamic transverse waves in solar coronal loops. The considered mechanisms are resonant absorption of kink waves in the Alfvén continuum, phase mixing of Alfvén waves, and wave leakage. Our methods make use of Bayesian inference and model comparison techniques. We first infer the values for the physical parameters that control the wave damping, under the assumption of a particular mechanism, for typically observed damping timescales. Then, the computation of marginal likelihoods and Bayes factors enable us to quantify the relative plausibility between the alternative mechanisms. We find that, in general, the evidence is not large enough to support a single particular damping mechanism as the most plausible one. Resonant absorption and wave leakage offer the most probable explanations in strong damping regimes, while phase mixing is the best candidate for weak/moderate damping. When applied to a selection of 89 observed transverse loop oscillations, with their corresponding measurements of damping timescales and taking into account data uncertainties, we find that positive evidence for a given damping mechanism is only available in a few cases.
Comparison of Damping Mechanisms for Transverse Waves in Solar Coronal Loops
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montes-Solís, María; Arregui, Iñigo, E-mail: mmsolis@iac.es
We present a method to assess the plausibility of alternative mechanisms to explain the damping of magnetohydrodynamic transverse waves in solar coronal loops. The considered mechanisms are resonant absorption of kink waves in the Alfvén continuum, phase mixing of Alfvén waves, and wave leakage. Our methods make use of Bayesian inference and model comparison techniques. We first infer the values for the physical parameters that control the wave damping, under the assumption of a particular mechanism, for typically observed damping timescales. Then, the computation of marginal likelihoods and Bayes factors enable us to quantify the relative plausibility between the alternativemore » mechanisms. We find that, in general, the evidence is not large enough to support a single particular damping mechanism as the most plausible one. Resonant absorption and wave leakage offer the most probable explanations in strong damping regimes, while phase mixing is the best candidate for weak/moderate damping. When applied to a selection of 89 observed transverse loop oscillations, with their corresponding measurements of damping timescales and taking into account data uncertainties, we find that positive evidence for a given damping mechanism is only available in a few cases.« less
TIDAL HEATING IN A MAGMA OCEAN WITHIN JUPITER’S MOON Io
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tyler, Robert H.; Henning, Wade G.; Hamilton, Christopher W., E-mail: robert.h.tyler@nasa.gov
Active volcanism observed on Io is thought to be driven by the temporally periodic, spatially differential projection of Jupiter's gravitational field over the moon. Previous theoretical estimates of the tidal heat have all treated Io as essentially a solid, with fluids addressed only through adjustment of rheological parameters rather than through appropriate extension of the dynamics. These previous estimates of the tidal response and associated heat generation on Io are therefore incomplete and possibly erroneous because dynamical aspects of the fluid behavior are not permitted in the modeling approach. Here we address this by modeling the partial-melt asthenosphere as amore » global layer of fluid governed by the Laplace Tidal Equations. Solutions for the tidal response are then compared with solutions obtained following the traditional solid-material approach. It is found that the tidal heat in the solid can match that of the average observed heat flux (nominally 2.25 W m{sup −2}), though only over a very restricted range of plausible parameters, and that the distribution of the solid tidal heat flux cannot readily explain a longitudinal shift in the observed (inferred) low-latitude heat fluxes. The tidal heat in the fluid reaches that observed over a wider range of plausible parameters, and can also readily provide the longitudinal offset. Finally, expected feedbacks and coupling between the solid/fluid tides are discussed. Most broadly, the results suggest that both solid and fluid tidal-response estimates must be considered in exoplanet studies, particularly where orbital migration under tidal dissipation is addressed.« less
Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons.
Nicola, Wilten; Campbell, Sue Ann
2013-01-01
We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons.
Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons
Nicola, Wilten; Campbell, Sue Ann
2013-01-01
We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons. PMID:24416013
The Universal Plausibility Metric (UPM) & Principle (UPP).
Abel, David L
2009-12-03
Mere possibility is not an adequate basis for asserting scientific plausibility. A precisely defined universal bound is needed beyond which the assertion of plausibility, particularly in life-origin models, can be considered operationally falsified. But can something so seemingly relative and subjective as plausibility ever be quantified? Amazingly, the answer is, "Yes." A method of objectively measuring the plausibility of any chance hypothesis (The Universal Plausibility Metric [UPM]) is presented. A numerical inequality is also provided whereby any chance hypothesis can be definitively falsified when its UPM metric of xi is < 1 (The Universal Plausibility Principle [UPP]). Both UPM and UPP pre-exist and are independent of any experimental design and data set. No low-probability hypothetical plausibility assertion should survive peer-review without subjection to the UPP inequality standard of formal falsification (xi < 1).
A stochastic differential equation model of diurnal cortisol patterns
NASA Technical Reports Server (NTRS)
Brown, E. N.; Meehan, P. M.; Dempster, A. P.
2001-01-01
Circadian modulation of episodic bursts is recognized as the normal physiological pattern of diurnal variation in plasma cortisol levels. The primary physiological factors underlying these diurnal patterns are the ultradian timing of secretory events, circadian modulation of the amplitude of secretory events, infusion of the hormone from the adrenal gland into the plasma, and clearance of the hormone from the plasma by the liver. Each measured plasma cortisol level has an error arising from the cortisol immunoassay. We demonstrate that all of these three physiological principles can be succinctly summarized in a single stochastic differential equation plus measurement error model and show that physiologically consistent ranges of the model parameters can be determined from published reports. We summarize the model parameters in terms of the multivariate Gaussian probability density and establish the plausibility of the model with a series of simulation studies. Our framework makes possible a sensitivity analysis in which all model parameters are allowed to vary simultaneously. The model offers an approach for simultaneously representing cortisol's ultradian, circadian, and kinetic properties. Our modeling paradigm provides a framework for simulation studies and data analysis that should be readily adaptable to the analysis of other endocrine hormone systems.
Hunnicutt, Jacob N; Ulbricht, Christine M; Chrysanthopoulou, Stavroula A; Lapane, Kate L
2016-12-01
We systematically reviewed pharmacoepidemiologic and comparative effectiveness studies that use probabilistic bias analysis to quantify the effects of systematic error including confounding, misclassification, and selection bias on study results. We found articles published between 2010 and October 2015 through a citation search using Web of Science and Google Scholar and a keyword search using PubMed and Scopus. Eligibility of studies was assessed by one reviewer. Three reviewers independently abstracted data from eligible studies. Fifteen studies used probabilistic bias analysis and were eligible for data abstraction-nine simulated an unmeasured confounder and six simulated misclassification. The majority of studies simulating an unmeasured confounder did not specify the range of plausible estimates for the bias parameters. Studies simulating misclassification were in general clearer when reporting the plausible distribution of bias parameters. Regardless of the bias simulated, the probability distributions assigned to bias parameters, number of simulated iterations, sensitivity analyses, and diagnostics were not discussed in the majority of studies. Despite the prevalence and concern of bias in pharmacoepidemiologic and comparative effectiveness studies, probabilistic bias analysis to quantitatively model the effect of bias was not widely used. The quality of reporting and use of this technique varied and was often unclear. Further discussion and dissemination of the technique are warranted. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Phillips, Lawrence; Pearl, Lisa
2015-01-01
The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's "cognitive plausibility." We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition…
Filter Strategies for Mars Science Laboratory Orbit Determination
NASA Technical Reports Server (NTRS)
Thompson, Paul F.; Gustafson, Eric D.; Kruizinga, Gerhard L.; Martin-Mur, Tomas J.
2013-01-01
The Mars Science Laboratory (MSL) spacecraft had ambitious navigation delivery and knowledge accuracy requirements for landing inside Gale Crater. Confidence in the orbit determination (OD) solutions was increased by investigating numerous filter strategies for solving the orbit determination problem. We will discuss the strategy for the different types of variations: for example, data types, data weights, solar pressure model covariance, and estimating versus considering model parameters. This process generated a set of plausible OD solutions that were compared to the baseline OD strategy. Even implausible or unrealistic results were helpful in isolating sensitivities in the OD solutions to certain model parameterizations or data types.
NASA Astrophysics Data System (ADS)
Garcia Galiano, S. G.; Olmos, P.; Giraldo Osorio, J. D.
2015-12-01
In the Mediterranean area, significant changes on temperature and precipitation are expected throughout the century. These trends could exacerbate the existing conditions in regions already vulnerable to climatic variability, reducing the water availability. Improving knowledge about plausible impacts of climate change on water cycle processes at basin scale, is an important step for building adaptive capacity to the impacts in this region, where severe water shortages are expected for the next decades. RCMs ensemble in combination with distributed hydrological models with few parameters, constitutes a valid and robust methodology to increase the reliability of climate and hydrological projections. For reaching this objective, a novel methodology for building Regional Climate Models (RCMs) ensembles of meteorological variables (rainfall an temperatures), was applied. RCMs ensembles are justified for increasing the reliability of climate and hydrological projections. The evaluation of RCMs goodness-of-fit to build the ensemble is based on empirical probability density functions (PDF) extracted from both RCMs dataset and a highly resolution gridded observational dataset, for the time period 1961-1990. The applied method is considering the seasonal and annual variability of the rainfall and temperatures. The RCMs ensembles constitute the input to a distributed hydrological model at basin scale, for assessing the runoff projections. The selected hydrological model is presenting few parameters in order to reduce the uncertainties involved. The study basin corresponds to a head basin of Segura River Basin, located in the South East of Spain. The impacts on runoff and its trend from observational dataset and climate projections, were assessed. Considering the control period 1961-1990, plausible significant decreases in runoff for the time period 2021-2050, were identified.
Radiation signatures from a locally energized flaring loop
NASA Technical Reports Server (NTRS)
Emslie, A. G.; Vlahos, L.
1980-01-01
The radiation signatures from a locally energized solar flare loop based on the physical properties of the energy release mechanisms were consistent with hard X-ray, microwave, and EUV observations for plausible source parameters. It was found that a suprathermal tail of high energy electrons is produced by the primary energy release, and that the number of energetic charged particles ejected into the interplanetary medium in the model is consistent with observations. The radiation signature model predicts that the intrinsic polarization of the hard X-ray burst should increase over the photon energy range of 20 to 100 keV.
Finding optimal vaccination strategies under parameter uncertainty using stochastic programming.
Tanner, Matthew W; Sattenspiel, Lisa; Ntaimo, Lewis
2008-10-01
We present a stochastic programming framework for finding the optimal vaccination policy for controlling infectious disease epidemics under parameter uncertainty. Stochastic programming is a popular framework for including the effects of parameter uncertainty in a mathematical optimization model. The problem is initially formulated to find the minimum cost vaccination policy under a chance-constraint. The chance-constraint requires that the probability that R(*)
The Universal Plausibility Metric (UPM) & Principle (UPP)
2009-01-01
Background Mere possibility is not an adequate basis for asserting scientific plausibility. A precisely defined universal bound is needed beyond which the assertion of plausibility, particularly in life-origin models, can be considered operationally falsified. But can something so seemingly relative and subjective as plausibility ever be quantified? Amazingly, the answer is, "Yes." A method of objectively measuring the plausibility of any chance hypothesis (The Universal Plausibility Metric [UPM]) is presented. A numerical inequality is also provided whereby any chance hypothesis can be definitively falsified when its UPM metric of ξ is < 1 (The Universal Plausibility Principle [UPP]). Both UPM and UPP pre-exist and are independent of any experimental design and data set. Conclusion No low-probability hypothetical plausibility assertion should survive peer-review without subjection to the UPP inequality standard of formal falsification (ξ < 1). PMID:19958539
Bayesian inference of nonlinear unsteady aerodynamics from aeroelastic limit cycle oscillations
NASA Astrophysics Data System (ADS)
Sandhu, Rimple; Poirel, Dominique; Pettit, Chris; Khalil, Mohammad; Sarkar, Abhijit
2016-07-01
A Bayesian model selection and parameter estimation algorithm is applied to investigate the influence of nonlinear and unsteady aerodynamic loads on the limit cycle oscillation (LCO) of a pitching airfoil in the transitional Reynolds number regime. At small angles of attack, laminar boundary layer trailing edge separation causes negative aerodynamic damping leading to the LCO. The fluid-structure interaction of the rigid, but elastically mounted, airfoil and nonlinear unsteady aerodynamics is represented by two coupled nonlinear stochastic ordinary differential equations containing uncertain parameters and model approximation errors. Several plausible aerodynamic models with increasing complexity are proposed to describe the aeroelastic system leading to LCO. The likelihood in the posterior parameter probability density function (pdf) is available semi-analytically using the extended Kalman filter for the state estimation of the coupled nonlinear structural and unsteady aerodynamic model. The posterior parameter pdf is sampled using a parallel and adaptive Markov Chain Monte Carlo (MCMC) algorithm. The posterior probability of each model is estimated using the Chib-Jeliazkov method that directly uses the posterior MCMC samples for evidence (marginal likelihood) computation. The Bayesian algorithm is validated through a numerical study and then applied to model the nonlinear unsteady aerodynamic loads using wind-tunnel test data at various Reynolds numbers.
Bayesian inference of nonlinear unsteady aerodynamics from aeroelastic limit cycle oscillations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandhu, Rimple; Poirel, Dominique; Pettit, Chris
2016-07-01
A Bayesian model selection and parameter estimation algorithm is applied to investigate the influence of nonlinear and unsteady aerodynamic loads on the limit cycle oscillation (LCO) of a pitching airfoil in the transitional Reynolds number regime. At small angles of attack, laminar boundary layer trailing edge separation causes negative aerodynamic damping leading to the LCO. The fluid–structure interaction of the rigid, but elastically mounted, airfoil and nonlinear unsteady aerodynamics is represented by two coupled nonlinear stochastic ordinary differential equations containing uncertain parameters and model approximation errors. Several plausible aerodynamic models with increasing complexity are proposed to describe the aeroelastic systemmore » leading to LCO. The likelihood in the posterior parameter probability density function (pdf) is available semi-analytically using the extended Kalman filter for the state estimation of the coupled nonlinear structural and unsteady aerodynamic model. The posterior parameter pdf is sampled using a parallel and adaptive Markov Chain Monte Carlo (MCMC) algorithm. The posterior probability of each model is estimated using the Chib–Jeliazkov method that directly uses the posterior MCMC samples for evidence (marginal likelihood) computation. The Bayesian algorithm is validated through a numerical study and then applied to model the nonlinear unsteady aerodynamic loads using wind-tunnel test data at various Reynolds numbers.« less
Understanding asteroid collisional history through experimental and numerical studies
NASA Technical Reports Server (NTRS)
Davis, Donald R.; Ryan, Eileen V.; Weidenschilling, S. J.
1991-01-01
Asteroids can lose angular momentum due to so called splash effect, the analog to the drain effect for cratering impacts. Numerical code with the splash effect incorporated was applied to study the simultaneous evolution of asteroid sized and spins. Results are presented on the spin changes of asteroids due to various physical effects that are incorporated in the described model. The goal was to understand the interplay between the evolution of sizes and spins over a wide and plausible range of model parameters. A single starting population was used both for size distribution and the spin distribution of asteroids and the changes in the spins were calculated over solar system history for different model parameters. It is shown that there is a strong coupling between the size and spin evolution, that the observed relative spindown of asteroids approximately 100 km diameter is likely to be the result of the angular momentum splash effect.
Understanding asteroid collisional history through experimental and numerical studies
NASA Astrophysics Data System (ADS)
Davis, Donald R.; Ryan, Eileen V.; Weidenschilling, S. J.
1991-06-01
Asteroids can lose angular momentum due to so called splash effect, the analog to the drain effect for cratering impacts. Numerical code with the splash effect incorporated was applied to study the simultaneous evolution of asteroid sized and spins. Results are presented on the spin changes of asteroids due to various physical effects that are incorporated in the described model. The goal was to understand the interplay between the evolution of sizes and spins over a wide and plausible range of model parameters. A single starting population was used both for size distribution and the spin distribution of asteroids and the changes in the spins were calculated over solar system history for different model parameters. It is shown that there is a strong coupling between the size and spin evolution, that the observed relative spindown of asteroids approximately 100 km diameter is likely to be the result of the angular momentum splash effect.
Li, Xiaolu; Liang, Yu; Xu, Lijun
2014-09-01
To provide a credible model for light detection and ranging (LiDAR) target classification, the focus of this study is on the relationship between intensity data of LiDAR and the bidirectional reflectance distribution function (BRDF). An integration method based on the built-in-lab coaxial laser detection system was advanced. A kind of intermediary BRDF model advanced by Schlick was introduced into the integration method, considering diffuse and specular backscattering characteristics of the surface. A group of measurement campaigns were carried out to investigate the influence of the incident angle and detection range on the measured intensity data. Two extracted parameters r and S(λ) are influenced by different surface features, which illustrate the surface features of the distribution and magnitude of reflected energy, respectively. The combination of two parameters can be used to describe the surface characteristics for target classification in a more plausible way.
Kentzoglanakis, Kyriakos; Poole, Matthew
2012-01-01
In this paper, we investigate the problem of reverse engineering the topology of gene regulatory networks from temporal gene expression data. We adopt a computational intelligence approach comprising swarm intelligence techniques, namely particle swarm optimization (PSO) and ant colony optimization (ACO). In addition, the recurrent neural network (RNN) formalism is employed for modeling the dynamical behavior of gene regulatory systems. More specifically, ACO is used for searching the discrete space of network architectures and PSO for searching the corresponding continuous space of RNN model parameters. We propose a novel solution construction process in the context of ACO for generating biologically plausible candidate architectures. The objective is to concentrate the search effort into areas of the structure space that contain architectures which are feasible in terms of their topological resemblance to real-world networks. The proposed framework is initially applied to the reconstruction of a small artificial network that has previously been studied in the context of gene network reverse engineering. Subsequently, we consider an artificial data set with added noise for reconstructing a subnetwork of the genetic interaction network of S. cerevisiae (yeast). Finally, the framework is applied to a real-world data set for reverse engineering the SOS response system of the bacterium Escherichia coli. Results demonstrate the relative advantage of utilizing problem-specific knowledge regarding biologically plausible structural properties of gene networks over conducting a problem-agnostic search in the vast space of network architectures.
An improved swarm optimization for parameter estimation and biological model selection.
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This study is hoped to provide a new insight in developing more accurate and reliable biological models based on limited and low quality experimental data.
Hong, Cheng William; Mamidipalli, Adrija; Hooker, Jonathan C.; Hamilton, Gavin; Wolfson, Tanya; Chen, Dennis H.; Dehkordy, Soudabeh Fazeli; Middleton, Michael S.; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.
2017-01-01
Background Proton density fat fraction (PDFF) estimation requires spectral modeling of the hepatic triglyceride (TG) signal. Deviations in the TG spectrum may occur, leading to bias in PDFF quantification. Purpose To investigate the effects of varying six-peak TG spectral models on PDFF estimation bias. Study Type Retrospective secondary analysis of prospectively acquired clinical research data. Population Forty-four adults with biopsy-confirmed nonalcoholic steatohepatitis. Field Strength/Sequence Confounder-corrected chemical-shift-encoded 3T MRI (using a 2D multiecho gradient-recalled echo technique with magnitude reconstruction) and MR spectroscopy. Assessment In each patient, 61 pairs of colocalized MRI-PDFF and MRS-PDFF values were estimated: one pair used the standard six-peak spectral model, the other 60 were six-peak variants calculated by adjusting spectral model parameters over their biologically plausible ranges. MRI-PDFF values calculated using each variant model and the standard model were compared, and the agreement between MRI-PDFF and MRS-PDFF was assessed. Statistical Tests MRS-PDFF and MRI-PDFF were summarized descriptively. Bland–Altman (BA) analyses were performed between PDFF values calculated using each variant model and the standard model. Linear regressions were performed between BA biases and mean PDFF values for each variant model, and between MRI-PDFF and MRS-PDFF. Results Using the standard model, mean MRS-PDFF of the study population was 17.9±8.0% (range: 4.1–34.3%). The difference between the highest and lowest mean variant MRI-PDFF values was 1.5%. Relative to the standard model, the model with the greatest absolute BA bias overestimated PDFF by 1.2%. Bias increased with increasing PDFF (P < 0.0001 for 59 of the 60 variant models). MRI-PDFF and MRS-PDFF agreed closely for all variant models (R2=0.980, P < 0.0001). Data Conclusion Over a wide range of hepatic fat content, PDFF estimation is robust across the biologically plausible range of TG spectra. Although absolute estimation bias increased with higher PDFF, its magnitude was small and unlikely to be clinically meaningful. Level of Evidence 3 Technical Efficacy Stage 2 PMID:28851124
NASA Astrophysics Data System (ADS)
Madi, Raneem; Huibert de Rooij, Gerrit; Mielenz, Henrike; Mai, Juliane
2018-02-01
Few parametric expressions for the soil water retention curve are suitable for dry conditions. Furthermore, expressions for the soil hydraulic conductivity curves associated with parametric retention functions can behave unrealistically near saturation. We developed a general criterion for water retention parameterizations that ensures physically plausible conductivity curves. Only 3 of the 18 tested parameterizations met this criterion without restrictions on the parameters of a popular conductivity curve parameterization. A fourth required one parameter to be fixed. We estimated parameters by shuffled complex evolution (SCE) with the objective function tailored to various observation methods used to obtain retention curve data. We fitted the four parameterizations with physically plausible conductivities as well as the most widely used parameterization. The performance of the resulting 12 combinations of retention and conductivity curves was assessed in a numerical study with 751 days of semiarid atmospheric forcing applied to unvegetated, uniform, 1 m freely draining columns for four textures. Choosing different parameterizations had a minor effect on evaporation, but cumulative bottom fluxes varied by up to an order of magnitude between them. This highlights the need for a careful selection of the soil hydraulic parameterization that ideally does not only rely on goodness of fit to static soil water retention data but also on hydraulic conductivity measurements. Parameter fits for 21 soils showed that extrapolations into the dry range of the retention curve often became physically more realistic when the parameterization had a logarithmic dry branch, particularly in fine-textured soils where high residual water contents would otherwise be fitted.
Thermodynamically consistent model calibration in chemical kinetics
2011-01-01
Background The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new models. Furthermore, TCMC can provide dimensionality reduction, better estimation performance, and lower computational complexity, and can help to alleviate the problem of data overfitting. PMID:21548948
Scale-dependent CMB power asymmetry from primordial speed of sound and a generalized δ N formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Dong-Gang; Cai, Yi-Fu; Zhao, Wen
2016-02-01
We explore a plausible mechanism that the hemispherical power asymmetry in the CMB is produced by the spatial variation of the primordial sound speed parameter. We suggest that in a generalized approach of the δ N formalism the local e-folding number may depend on some other primordial parameters besides the initial values of inflaton. Here the δ N formalism is extended by considering the effects of a spatially varying sound speed parameter caused by a super-Hubble perturbation of a light field. Using this generalized δ N formalism, we systematically calculate the asymmetric primordial spectrum in the model of multi-speed inflation by taking intomore » account the constraints of primordial non-Gaussianities. We further discuss specific model constraints, and the corresponding asymmetry amplitudes are found to be scale-dependent, which can accommodate current observations of the power asymmetry at different length scales.« less
Ojha, Deepak Kumar; Viju, Daniel; Vinu, R
2017-10-01
In this study, the apparent kinetics of fast pyrolysis of alkali lignin was evaluated by obtaining isothermal mass loss data in the timescale of 2-30s at 400-700°C in an analytical pyrolyzer. The data were analyzed using different reaction models to determine the rate constants and apparent rate parameters. First order and one dimensional diffusion models resulted in good fits with experimental data with apparent activation energy of 23kJmol -1 . Kinetic compensation effect was established using a large number of kinetic parameters reported in the literature for pyrolysis of different lignins. The time evolution of the major functional groups in the pyrolysate was analyzed using in situ Fourier transform infrared spectroscopy. Maximum production of the volatiles occurred around 10-12s. A clear transformation of guaiacols to phenol, catechol and their derivatives, and aromatic hydrocarbons was observed with increasing temperature. The plausible reaction steps involved in various transformations are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dilla, Tatiana; Alexiou, Dimitra; Chatzitheofilou, Ismini; Ayyub, Ruba; Lowin, Julia; Norrbacka, Kirsi
2017-05-01
Dulaglutide 1.5 mg once weekly is a novel glucagon-like peptide 1 (GLP-1) receptor agonist, for the treatment of type two diabetes mellitus (T2DM). The objective was to estimate the cost-effectiveness of dulaglutide once weekly vs liraglutide 1.8 mg once daily for the treatment of T2DM in Spain in patients with a BMI ≥30 kg/m 2 . The IMS CORE Diabetes Model (CDM) was used to estimate costs and outcomes from the perspective of Spanish National Health System, capturing relevant direct medical costs over a lifetime time horizon. Comparative safety and efficacy data were derived from direct comparison of dulaglutide 1.5 mg vs liraglutide 1.8 mg from the AWARD-6 trial in patients with a body mass index (BMI) ≥30 kg/m 2 . All patients were assumed to remain on treatment for 2 years before switching treatment to basal insulin at a daily dose of 40 IU. One-way sensitivity analyses (OWSA) and probabilistic sensitivity analyses (PSA) were conducted to explore the sensitivity of the model to plausible variations in key parameters and uncertainty of model inputs. Under base case assumptions, dulaglutide 1.5 mg was less costly and more effective vs liraglutide 1.8 mg (total lifetime costs €108,489 vs €109,653; total QALYS 10.281 vs 10.259). OWSA demonstrated that dulaglutide 1.5 mg remained dominant given plausible variations in key input parameters. Results of the PSA were consistent with base case results. Primary limitations of the analysis are common to other cost-effectiveness analyses of chronic diseases like T2DM and include the extrapolation of short-term clinical data to the lifetime time horizon and uncertainty around optimum treatment durations. The model found that dulaglutide 1.5 mg was more effective and less costly than liraglutide 1.8 mg for the treatment of T2DM in Spain. Findings were robust to plausible variations in inputs. Based on these results, dulaglutide may result in cost savings to the Spanish National Health System.
NASA Astrophysics Data System (ADS)
Estevez-Delgado, Gabino; Estevez-Delgado, Joaquin
2018-05-01
An analysis and construction is presented for a stellar model characterized by two parameters (w, n) associated with the compactness ratio and anisotropy, respectively. The reliability range for the parameter w ≤ 1.97981225149 corresponds with a compactness ratio u ≤ 0.2644959374, the density and pressures are positive, regular and monotonic decrescent functions, the radial and tangential speed of sound are lower than the light speed, moreover, than the plausible stability. The behavior of the speeds of sound are determinate for the anisotropy parameter n, admitting a subinterval where the speeds are monotonic crescent functions and other where we have monotonic decrescent functions for the same speeds, both cases describing a compact object that is also potentially stable. In the bigger value for the observational mass M = 2.05 M⊙ and radii R = 12.957 Km for the star PSR J0348+0432, the model indicates that the maximum central density ρc = 1.283820319 × 1018 Kg/m3 corresponds to the maximum value of the anisotropy parameter and the radial and tangential speed of the sound are monotonic decrescent functions.
Parameters, Journal of the U.S. Army War College. Volume 17, Number 1, Spring 1987
1987-01-01
way that makes sense to a reader and then explaining them. Enter the clement of judgment, which immediately puts the reporter on a slippery slope ...reformers support something they call maneuver warfare. The concept of maneuver is itself a slippery one that the reformers describe using terms such as...nuclear euthanasia . So just as the military power must have a plausible enemy, so also it must have a plausible design for countering the public threat
NASA Astrophysics Data System (ADS)
Fijani, E.; Chitsazan, N.; Nadiri, A.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
Artificial Neural Networks (ANNs) have been widely used to estimate concentration of chemicals in groundwater systems. However, estimation uncertainty is rarely discussed in the literature. Uncertainty in ANN output stems from three sources: ANN inputs, ANN parameters (weights and biases), and ANN structures. Uncertainty in ANN inputs may come from input data selection and/or input data error. ANN parameters are naturally uncertain because they are maximum-likelihood estimated. ANN structure is also uncertain because there is no unique ANN model given a specific case. Therefore, multiple plausible AI models are generally resulted for a study. One might ask why good models have to be ignored in favor of the best model in traditional estimation. What is the ANN estimation variance? How do the variances from different ANN models accumulate to the total estimation variance? To answer these questions we propose a Hierarchical Bayesian Model Averaging (HBMA) framework. Instead of choosing one ANN model (the best ANN model) for estimation, HBMA averages outputs of all plausible ANN models. The model weights are based on the evidence of data. Therefore, the HBMA avoids overconfidence on the single best ANN model. In addition, HBMA is able to analyze uncertainty propagation through aggregation of ANN models in a hierarchy framework. This method is applied for estimation of fluoride concentration in the Poldasht plain and the Bazargan plain in Iran. Unusually high fluoride concentration in the Poldasht and Bazargan plains has caused negative effects on the public health. Management of this anomaly requires estimation of fluoride concentration distribution in the area. The results show that the HBMA provides a knowledge-decision-based framework that facilitates analyzing and quantifying ANN estimation uncertainties from different sources. In addition HBMA allows comparative evaluation of the realizations for each source of uncertainty by segregating the uncertainty sources in a hierarchical framework. Fluoride concentration estimation using the HBMA method shows better agreement to the observation data in the test step because they are not based on a single model with a non-dominate weights.
An imprecise probability approach for squeal instability analysis based on evidence theory
NASA Astrophysics Data System (ADS)
Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie
2017-01-01
An imprecise probability approach based on evidence theory is proposed for squeal instability analysis of uncertain disc brakes in this paper. First, the squeal instability of the finite element (FE) model of a disc brake is investigated and its dominant unstable eigenvalue is detected by running two typical numerical simulations, i.e., complex eigenvalue analysis (CEA) and transient dynamical analysis. Next, the uncertainty mainly caused by contact and friction is taken into account and some key parameters of the brake are described as uncertain parameters. All these uncertain parameters are usually involved with imprecise data such as incomplete information and conflict information. Finally, a squeal instability analysis model considering imprecise uncertainty is established by integrating evidence theory, Taylor expansion, subinterval analysis and surrogate model. In the proposed analysis model, the uncertain parameters with imprecise data are treated as evidence variables, and the belief measure and plausibility measure are employed to evaluate system squeal instability. The effectiveness of the proposed approach is demonstrated by numerical examples and some interesting observations and conclusions are summarized from the analyses and discussions. The proposed approach is generally limited to the squeal problems without too many investigated parameters. It can be considered as a potential method for squeal instability analysis, which will act as the first step to reduce squeal noise of uncertain brakes with imprecise information.
Impaired associative learning in schizophrenia: behavioral and computational studies
Diwadkar, Vaibhav A.; Flaugher, Brad; Jones, Trevor; Zalányi, László; Ujfalussy, Balázs; Keshavan, Matcheri S.
2008-01-01
Associative learning is a central building block of human cognition and in large part depends on mechanisms of synaptic plasticity, memory capacity and fronto–hippocampal interactions. A disorder like schizophrenia is thought to be characterized by altered plasticity, and impaired frontal and hippocampal function. Understanding the expression of this dysfunction through appropriate experimental studies, and understanding the processes that may give rise to impaired behavior through biologically plausible computational models will help clarify the nature of these deficits. We present a preliminary computational model designed to capture learning dynamics in healthy control and schizophrenia subjects. Experimental data was collected on a spatial-object paired-associate learning task. The task evinces classic patterns of negatively accelerated learning in both healthy control subjects and patients, with patients demonstrating lower rates of learning than controls. Our rudimentary computational model of the task was based on biologically plausible assumptions, including the separation of dorsal/spatial and ventral/object visual streams, implementation of rules of learning, the explicit parameterization of learning rates (a plausible surrogate for synaptic plasticity), and learning capacity (a plausible surrogate for memory capacity). Reductions in learning dynamics in schizophrenia were well-modeled by reductions in learning rate and learning capacity. The synergy between experimental research and a detailed computational model of performance provides a framework within which to infer plausible biological bases of impaired learning dynamics in schizophrenia. PMID:19003486
Oakley, Jeremy E.; Brennan, Alan; Breeze, Penny
2015-01-01
Health economic decision-analytic models are used to estimate the expected net benefits of competing decision options. The true values of the input parameters of such models are rarely known with certainty, and it is often useful to quantify the value to the decision maker of reducing uncertainty through collecting new data. In the context of a particular decision problem, the value of a proposed research design can be quantified by its expected value of sample information (EVSI). EVSI is commonly estimated via a 2-level Monte Carlo procedure in which plausible data sets are generated in an outer loop, and then, conditional on these, the parameters of the decision model are updated via Bayes rule and sampled in an inner loop. At each iteration of the inner loop, the decision model is evaluated. This is computationally demanding and may be difficult if the posterior distribution of the model parameters conditional on sampled data is hard to sample from. We describe a fast nonparametric regression-based method for estimating per-patient EVSI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method avoids the need to sample from the posterior distributions of the parameters and avoids the need to rerun the model. The only requirement is that sample data sets can be generated. The method is applicable with a model of any complexity and with any specification of model parameter distribution. We demonstrate in a case study the superior efficiency of the regression method over the 2-level Monte Carlo method. PMID:25810269
Bringing metabolic networks to life: convenience rate law and thermodynamic constraints
Liebermeister, Wolfram; Klipp, Edda
2006-01-01
Background Translating a known metabolic network into a dynamic model requires rate laws for all chemical reactions. The mathematical expressions depend on the underlying enzymatic mechanism; they can become quite involved and may contain a large number of parameters. Rate laws and enzyme parameters are still unknown for most enzymes. Results We introduce a simple and general rate law called "convenience kinetics". It can be derived from a simple random-order enzyme mechanism. Thermodynamic laws can impose dependencies on the kinetic parameters. Hence, to facilitate model fitting and parameter optimisation for large networks, we introduce thermodynamically independent system parameters: their values can be varied independently, without violating thermodynamical constraints. We achieve this by expressing the equilibrium constants either by Gibbs free energies of formation or by a set of independent equilibrium constants. The remaining system parameters are mean turnover rates, generalised Michaelis-Menten constants, and constants for inhibition and activation. All parameters correspond to molecular energies, for instance, binding energies between reactants and enzyme. Conclusion Convenience kinetics can be used to translate a biochemical network – manually or automatically - into a dynamical model with plausible biological properties. It implements enzyme saturation and regulation by activators and inhibitors, covers all possible reaction stoichiometries, and can be specified by a small number of parameters. Its mathematical form makes it especially suitable for parameter estimation and optimisation. Parameter estimates can be easily computed from a least-squares fit to Michaelis-Menten values, turnover rates, equilibrium constants, and other quantities that are routinely measured in enzyme assays and stored in kinetic databases. PMID:17173669
Bayesian estimation inherent in a Mexican-hat-type neural network
NASA Astrophysics Data System (ADS)
Takiyama, Ken
2016-05-01
Brain functions, such as perception, motor control and learning, and decision making, have been explained based on a Bayesian framework, i.e., to decrease the effects of noise inherent in the human nervous system or external environment, our brain integrates sensory and a priori information in a Bayesian optimal manner. However, it remains unclear how Bayesian computations are implemented in the brain. Herein, I address this issue by analyzing a Mexican-hat-type neural network, which was used as a model of the visual cortex, motor cortex, and prefrontal cortex. I analytically demonstrate that the dynamics of an order parameter in the model corresponds exactly to a variational inference of a linear Gaussian state-space model, a Bayesian estimation, when the strength of recurrent synaptic connectivity is appropriately stronger than that of an external stimulus, a plausible condition in the brain. This exact correspondence can reveal the relationship between the parameters in the Bayesian estimation and those in the neural network, providing insight for understanding brain functions.
An action potential-driven model of soleus muscle activation dynamics for locomotor-like movements
NASA Astrophysics Data System (ADS)
Kim, Hojeong; Sandercock, Thomas G.; Heckman, C. J.
2015-08-01
Objective. The goal of this study was to develop a physiologically plausible, computationally robust model for muscle activation dynamics (A(t)) under physiologically relevant excitation and movement. Approach. The interaction of excitation and movement on A(t) was investigated comparing the force production between a cat soleus muscle and its Hill-type model. For capturing A(t) under excitation and movement variation, a modular modeling framework was proposed comprising of three compartments: (1) spikes-to-[Ca2+]; (2) [Ca2+]-to-A; and (3) A-to-force transformation. The individual signal transformations were modeled based on physiological factors so that the parameter values could be separately determined for individual modules directly based on experimental data. Main results. The strong dependency of A(t) on excitation frequency and muscle length was found during both isometric and dynamically-moving contractions. The identified dependencies of A(t) under the static and dynamic conditions could be incorporated in the modular modeling framework by modulating the model parameters as a function of movement input. The new modeling approach was also applicable to cat soleus muscles producing waveforms independent of those used to set the model parameters. Significance. This study provides a modeling framework for spike-driven muscle responses during movement, that is suitable not only for insights into molecular mechanisms underlying muscle behaviors but also for large scale simulations.
Reporting Confidence Intervals and Effect Sizes: Collecting the Evidence
ERIC Educational Resources Information Center
Zientek, Linda Reichwein; Ozel, Z. Ebrar Yetkiner; Ozel, Serkan; Allen, Jeff
2012-01-01
Confidence intervals (CIs) and effect sizes are essential to encourage meta-analytic thinking and to accumulate research findings. CIs provide a range of plausible values for population parameters with a degree of confidence that the parameter is in that particular interval. CIs also give information about how precise the estimates are. Comparison…
An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This study is hoped to provide a new insight in developing more accurate and reliable biological models based on limited and low quality experimental data. PMID:23593445
Holmes, Tyson H.; Lewis, David B.
2014-01-01
Bayesian estimation techniques offer a systematic and quantitative approach for synthesizing data drawn from the literature to model immunological systems. As detailed here, the practitioner begins with a theoretical model and then sequentially draws information from source data sets and/or published findings to inform estimation of model parameters. Options are available to weigh these various sources of information differentially per objective measures of their corresponding scientific strengths. This approach is illustrated in depth through a carefully worked example for a model of decline in T-cell receptor excision circle content of peripheral T cells during development and aging. Estimates from this model indicate that 21 years of age is plausible for the developmental timing of mean age of onset of decline in T-cell receptor excision circle content of peripheral T cells. PMID:25179832
NASA Astrophysics Data System (ADS)
Sipkens, Timothy A.; Hadwin, Paul J.; Grauer, Samuel J.; Daun, Kyle J.
2018-03-01
Competing theories have been proposed to account for how the latent heat of vaporization of liquid iron varies with temperature, but experimental confirmation remains elusive, particularly at high temperatures. We propose time-resolved laser-induced incandescence measurements on iron nanoparticles combined with Bayesian model plausibility, as a novel method for evaluating these relationships. Our approach scores the explanatory power of candidate models, accounting for parameter uncertainty, model complexity, measurement noise, and goodness-of-fit. The approach is first validated with simulated data and then applied to experimental data for iron nanoparticles in argon. Our results justify the use of Román's equation to account for the temperature dependence of the latent heat of vaporization of liquid iron.
Günther, Fritz; Marelli, Marco
2016-01-01
Noun compounds, consisting of two nouns (the head and the modifier) that are combined into a single concept, differ in terms of their plausibility: school bus is a more plausible compound than saddle olive. The present study investigates which factors influence the plausibility of attested and novel noun compounds. Distributional Semantic Models (DSMs) are used to obtain formal (vector) representations of word meanings, and compositional methods in DSMs are employed to obtain such representations for noun compounds. From these representations, different plausibility measures are computed. Three of those measures contribute in predicting the plausibility of noun compounds: The relatedness between the meaning of the head noun and the compound (Head Proximity), the relatedness between the meaning of modifier noun and the compound (Modifier Proximity), and the similarity between the head noun and the modifier noun (Constituent Similarity). We find non-linear interactions between Head Proximity and Modifier Proximity, as well as between Modifier Proximity and Constituent Similarity. Furthermore, Constituent Similarity interacts non-linearly with the familiarity with the compound. These results suggest that a compound is perceived as more plausible if it can be categorized as an instance of the category denoted by the head noun, if the contribution of the modifier to the compound meaning is clear but not redundant, and if the constituents are sufficiently similar in cases where this contribution is not clear. Furthermore, compounds are perceived to be more plausible if they are more familiar, but mostly for cases where the relation between the constituents is less clear. PMID:27732599
Equifinality and process-based modelling
NASA Astrophysics Data System (ADS)
Khatami, S.; Peel, M. C.; Peterson, T. J.; Western, A. W.
2017-12-01
Equifinality is understood as one of the fundamental difficulties in the study of open complex systems, including catchment hydrology. A review of the hydrologic literature reveals that the term equifinality has been widely used, but in many cases inconsistently and without coherent recognition of the various facets of equifinality, which can lead to ambiguity but also methodological fallacies. Therefore, in this study we first characterise the term equifinality within the context of hydrological modelling by reviewing the genesis of the concept of equifinality and then presenting a theoretical framework. During past decades, equifinality has mainly been studied as a subset of aleatory (arising due to randomness) uncertainty and for the assessment of model parameter uncertainty. Although the connection between parameter uncertainty and equifinality is undeniable, we argue there is more to equifinality than just aleatory parameter uncertainty. That is, the importance of equifinality and epistemic uncertainty (arising due to lack of knowledge) and their implications is overlooked in our current practice of model evaluation. Equifinality and epistemic uncertainty in studying, modelling, and evaluating hydrologic processes are treated as if they can be simply discussed in (or often reduced to) probabilistic terms (as for aleatory uncertainty). The deficiencies of this approach to conceptual rainfall-runoff modelling are demonstrated for selected Australian catchments by examination of parameter and internal flux distributions and interactions within SIMHYD. On this basis, we present a new approach that expands equifinality concept beyond model parameters to inform epistemic uncertainty. The new approach potentially facilitates the identification and development of more physically plausible models and model evaluation schemes particularly within the multiple working hypotheses framework, and is generalisable to other fields of environmental modelling as well.
NASA Technical Reports Server (NTRS)
Sekanina, Zdenek
1991-01-01
One of the more attractive among the plausible scenarios for the major emission event recently observed on Comet Halley at a heliocentric distance of 14.3 AU is activation of a source of ejecta driven by an icy substance much more volatile than water. As prerequisite for the forthcoming detailed analysis of the imaging observations of this event, a simple model is proposed that yields the sublimation rate versus time at any location on the surface of a rotating cometary nucleus for two candidate ices: carbon monoxide and carbon dioxide. The model's variable parameters are the comet's heliocentric distance r and the Sun's instantaneous zenith angle z.
Estimating Mass Properties of Dinosaurs Using Laser Imaging and 3D Computer Modelling
Bates, Karl T.; Manning, Phillip L.; Hodgetts, David; Sellers, William I.
2009-01-01
Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Strutiomimum sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments or inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future biomechanical assessments of extinct taxa should be preceded by a detailed investigation of the plausible range of mass properties, in which sensitivity analyses are used to identify a suite of possible values to be tested as inputs in analytical models. PMID:19225569
Estimating mass properties of dinosaurs using laser imaging and 3D computer modelling.
Bates, Karl T; Manning, Phillip L; Hodgetts, David; Sellers, William I
2009-01-01
Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Strutiomimum sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a full mounted skeleton to be imaged resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments or inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. These results emphasize that future biomechanical assessments of extinct taxa should be preceded by a detailed investigation of the plausible range of mass properties, in which sensitivity analyses are used to identify a suite of possible values to be tested as inputs in analytical models.
NASA Astrophysics Data System (ADS)
Lombardi, D.
2011-12-01
Plausibility judgments-although well represented in conceptual change theories (see, for example, Chi, 2005; diSessa, 1993; Dole & Sinatra, 1998; Posner et al., 1982)-have received little empirical attention until our recent work investigating teachers' and students' understanding of and perceptions about human-induced climate change (Lombardi & Sinatra, 2010, 2011). In our first study with undergraduate students, we found that greater plausibility perceptions of human-induced climate accounted for significantly greater understanding of weather and climate distinctions after instruction, even after accounting for students' prior knowledge (Lombardi & Sinatra, 2010). In a follow-up study with inservice science and preservice elementary teachers, we showed that anger about the topic of climate change and teaching about climate change was significantly related to implausible perceptions about human-induced climate change (Lombardi & Sinatra, 2011). Results from our recent studies helped to inform our development of a model of the role of plausibility judgments in conceptual change situations. The model applies to situations involving cognitive dissonance, where background knowledge conflicts with an incoming message. In such situations, we define plausibility as a judgment on the relative potential truthfulness of incoming information compared to one's existing mental representations (Rescher, 1976). Students may not consciously think when making plausibility judgments, expending only minimal mental effort in what is referred to as an automatic cognitive process (Stanovich, 2009). However, well-designed instruction could facilitate students' reappraisal of plausibility judgments in more effortful and conscious cognitive processing. Critical evaluation specifically may be one effective method to promote plausibility reappraisal in a classroom setting (Lombardi & Sinatra, in progress). In science education, critical evaluation involves the analysis of how evidentiary data support a hypothesis and its alternatives. The presentation will focus on how instruction promoting critical evaluation can encourage individuals to reappraise their plausibility judgments and initiate knowledge reconstruction. In a recent pilot study, teachers experienced an instructional scaffold promoting critical evaluation of two competing climate change theories (i.e., human-induced and increasing solar irradiance) and significantly changed both their plausibility judgments and perceptions of correctness toward the scientifically-accepted model of human-induced climate change. A comparison group of teachers who did not experience the critical evaluation activity showed no significant change. The implications of these studies for future research and instruction will be discussed in the presentation, including effective ways to increase students' and teachers' ability to be critically evaluative and reappraise their plausibility judgments. With controversial science issues, such as climate change, such abilities may be necessary to facilitate conceptual change.
Lee, Juhun; Fingeret, Michelle C; Bovik, Alan C; Reece, Gregory P; Skoracki, Roman J; Hanasono, Matthew M; Markey, Mia K
2015-03-27
Patients with facial cancers can experience disfigurement as they may undergo considerable appearance changes from their illness and its treatment. Individuals with difficulties adjusting to facial cancer are concerned about how others perceive and evaluate their appearance. Therefore, it is important to understand how humans perceive disfigured faces. We describe a new strategy that allows simulation of surgically plausible facial disfigurement on a novel face for elucidating the human perception on facial disfigurement. Longitudinal 3D facial images of patients (N = 17) with facial disfigurement due to cancer treatment were replicated using a facial mannequin model, by applying Thin-Plate Spline (TPS) warping and linear interpolation on the facial mannequin model in polar coordinates. Principal Component Analysis (PCA) was used to capture longitudinal structural and textural variations found within each patient with facial disfigurement arising from the treatment. We treated such variations as disfigurement. Each disfigurement was smoothly stitched on a healthy face by seeking a Poisson solution to guided interpolation using the gradient of the learned disfigurement as the guidance field vector. The modeling technique was quantitatively evaluated. In addition, panel ratings of experienced medical professionals on the plausibility of simulation were used to evaluate the proposed disfigurement model. The algorithm reproduced the given face effectively using a facial mannequin model with less than 4.4 mm maximum error for the validation fiducial points that were not used for the processing. Panel ratings of experienced medical professionals on the plausibility of simulation showed that the disfigurement model (especially for peripheral disfigurement) yielded predictions comparable to the real disfigurements. The modeling technique of this study is able to capture facial disfigurements and its simulation represents plausible outcomes of reconstructive surgery for facial cancers. Thus, our technique can be used to study human perception on facial disfigurement.
Automated Oligopeptide Formation Under Simple Programmable Conditions
NASA Astrophysics Data System (ADS)
Suárez-Marina, I.; Rodriguez-Garcia, M.; Surman, A. J.; Cooper, G. J. T.; Cronin, L.
2017-07-01
Traditionally, prebiotic chemistry has investigated the formation of life's precursors under very specific conditions thought to be "plausible". Herein, we explore peptide formation studying several parameters at once by using an automated platform.
ERIC Educational Resources Information Center
Lombardi, Doug; Bickel, Elliot S.; Bailey, Janelle M.; Burrell, Shondricka
2018-01-01
Evaluation is an important aspect of science and is receiving increasing attention in science education. The present study investigated (1) changes to plausibility judgments and knowledge as a result of a series of instructional scaffolds, called model-evidence link activities, that facilitated evaluation of scientific and alternative models in…
A one-dimensional model of solid-earth electrical resistivity beneath Florida
Blum, Cletus; Love, Jeffrey J.; Pedrie, Kolby; Bedrosian, Paul A.; Rigler, E. Joshua
2015-11-19
An estimated one-dimensional layered model of electrical resistivity beneath Florida was developed from published geological and geophysical information. The resistivity of each layer is represented by plausible upper and lower bounds as well as a geometric mean resistivity. Corresponding impedance transfer functions, Schmucker-Weidelt transfer functions, apparent resistivity, and phase responses are calculated for inducing geomagnetic frequencies ranging from 10−5 to 100 hertz. The resulting one-dimensional model and response functions can be used to make general estimates of time-varying electric fields associated with geomagnetic storms such as might represent induction hazards for electric-power grid operation. The plausible upper- and lower-bound resistivity structures show the uncertainty, giving a wide range of plausible time-varying electric fields.
Systematic Construction of Kinetic Models from Genome-Scale Metabolic Networks
Smallbone, Kieran; Klipp, Edda; Mendes, Pedro; Liebermeister, Wolfram
2013-01-01
The quantitative effects of environmental and genetic perturbations on metabolism can be studied in silico using kinetic models. We present a strategy for large-scale model construction based on a logical layering of data such as reaction fluxes, metabolite concentrations, and kinetic constants. The resulting models contain realistic standard rate laws and plausible parameters, adhere to the laws of thermodynamics, and reproduce a predefined steady state. These features have not been simultaneously achieved by previous workflows. We demonstrate the advantages and limitations of the workflow by translating the yeast consensus metabolic network into a kinetic model. Despite crudely selected data, the model shows realistic control behaviour, a stable dynamic, and realistic response to perturbations in extracellular glucose concentrations. The paper concludes by outlining how new data can continuously be fed into the workflow and how iterative model building can assist in directing experiments. PMID:24324546
Time-dependent inhomogeneous jet models for BL Lac objects
NASA Technical Reports Server (NTRS)
Marlowe, A. T.; Urry, C. M.; George, I. M.
1992-01-01
Relativistic beaming can explain many of the observed properties of BL Lac objects (e.g., rapid variability, high polarization, etc.). In particular, the broadband radio through X-ray spectra are well modeled by synchrotron-self Compton emission from an inhomogeneous relativistic jet. We have done a uniform analysis on several BL Lac objects using a simple but plausible inhomogeneous jet model. For all objects, we found that the assumed power-law distribution of the magnetic field and the electron density can be adjusted to match the observed BL Lac spectrum. While such models are typically unconstrained, consideration of spectral variability strongly restricts the allowed parameters, although to date the sampling has generally been too sparse to constrain the current models effectively. We investigate the time evolution of the inhomogeneous jet model for a simple perturbation propagating along the jet. The implications of this time evolution model and its relevance to observed data are discussed.
Time-dependent inhomogeneous jet models for BL Lac objects
NASA Astrophysics Data System (ADS)
Marlowe, A. T.; Urry, C. M.; George, I. M.
1992-05-01
Relativistic beaming can explain many of the observed properties of BL Lac objects (e.g., rapid variability, high polarization, etc.). In particular, the broadband radio through X-ray spectra are well modeled by synchrotron-self Compton emission from an inhomogeneous relativistic jet. We have done a uniform analysis on several BL Lac objects using a simple but plausible inhomogeneous jet model. For all objects, we found that the assumed power-law distribution of the magnetic field and the electron density can be adjusted to match the observed BL Lac spectrum. While such models are typically unconstrained, consideration of spectral variability strongly restricts the allowed parameters, although to date the sampling has generally been too sparse to constrain the current models effectively. We investigate the time evolution of the inhomogeneous jet model for a simple perturbation propagating along the jet. The implications of this time evolution model and its relevance to observed data are discussed.
Flood risk assessment and robust management under deep uncertainty: Application to Dhaka City
NASA Astrophysics Data System (ADS)
Mojtahed, Vahid; Gain, Animesh Kumar; Giupponi, Carlo
2014-05-01
The socio-economic changes as well as climatic changes have been the main drivers of uncertainty in environmental risk assessment and in particular flood. The level of future uncertainty that researchers face when dealing with problems in a future perspective with focus on climate change is known as Deep Uncertainty (also known as Knightian uncertainty), since nobody has already experienced and undergone those changes before and our knowledge is limited to the extent that we have no notion of probabilities, and therefore consolidated risk management approaches have limited potential.. Deep uncertainty is referred to circumstances that analysts and experts do not know or parties to decision making cannot agree on: i) the appropriate models describing the interaction among system variables, ii) probability distributions to represent uncertainty about key parameters in the model 3) how to value the desirability of alternative outcomes. The need thus emerges to assist policy-makers by providing them with not a single and optimal solution to the problem at hand, such as crisp estimates for the costs of damages of natural hazards considered, but instead ranges of possible future costs, based on the outcomes of ensembles of assessment models and sets of plausible scenarios. Accordingly, we need to substitute optimality as a decision criterion with robustness. Under conditions of deep uncertainty, the decision-makers do not have statistical and mathematical bases to identify optimal solutions, while instead they should prefer to implement "robust" decisions that perform relatively well over all conceivable outcomes out of all future unknown scenarios. Under deep uncertainty, analysts cannot employ probability theory or other statistics that usually can be derived from observed historical data and therefore, we turn to non-statistical measures such as scenario analysis. We construct several plausible scenarios with each scenario being a full description of what may happen in future and based on a meaningful synthesis of parameters' values with control of their correlations for maintaining internal consistencies. This paper aims at incorporating a set of data mining and sampling tools to assess uncertainty of model outputs under future climatic and socio-economic changes for Dhaka city and providing a decision support system for robust flood management and mitigation policies. After constructing an uncertainty matrix to identify the main sources of uncertainty for Dhaka City, we identify several hazard and vulnerability maps based on future climatic and socio-economic scenarios. The vulnerability of each flood management alternative under different set of scenarios is determined and finally the robustness of each plausible solution considered is defined based on the above assessment.
NASA Astrophysics Data System (ADS)
Koch, Jonas; Nowak, Wolfgang
2013-04-01
At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once a DNAPL is released into the subsurface, it serves as persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, it forms a complex pattern of immobile DNAPL saturation, it dissolves into the groundwater and forms a contaminant plume, and it slowly depletes and bio-degrades in the long-term. In industrial countries the number of such contaminated sites is tremendously high to the point that a ranking from most risky to least risky is advisable. Such a ranking helps to decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the designing of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities and the DNAPL phase saturations. Obviously, these parameters depend on each other during DNAPL infiltration, dissolution and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model set ups where we vary the physical and stochastic dependencies of the input parameters and simulated processes. Under these changes, the probability density functions demonstrate strong statistical shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters but neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physical meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model based decision support for DNAPL contaminated sites.
Evaluating the effectiveness of the MASW technique in a geologically complex terrain
NASA Astrophysics Data System (ADS)
Anukwu, G. C.; Khalil, A. E.; Abdullah, K. B.
2018-04-01
MASW surveys carried at a number of sites in Pulau Pinang, Malaysia, showed complicated dispersion curves which consequently made the inversion into soil shear velocity model ambiguous. This research work details effort to define the source of these complicated dispersion curves. As a starting point, the complexity of the phase velocity spectrum is assumed to be due to either the surveying parameters or the elastic properties of the soil structures. For the former, the surveying was carried out using different parameters. The complexities were persistent for the different surveying parameters, an indication that the elastic properties of the soil structure could be the reason. In order to exploit this assumption, a synthetic modelling approach was adopted using information from borehole, literature and geologically plausible models. Results suggest that the presence of irregular variation in the stiffness of the soil layers, high stiffness contrast and relatively shallow bedrock, results in a quite complex f-v spectrum, especially at frequencies lower than 20Hz, making it difficult to accurately extract the dispersion curve below this frequency. As such, for MASW technique, especially in complex geological situations as demonstrated, great care should be taken during the data processing and inversion to obtain a model that accurately depicts the subsurface.
Risse, Sarah; Hohenstein, Sven; Kliegl, Reinhold; Engbert, Ralf
2014-01-01
Eye-movement experiments suggest that the perceptual span during reading is larger than the fixated word, asymmetric around the fixation position, and shrinks in size contingent on the foveal processing load. We used the SWIFT model of eye-movement control during reading to test these hypotheses and their implications under the assumption of graded parallel processing of all words inside the perceptual span. Specifically, we simulated reading in the boundary paradigm and analysed the effects of denying the model to have valid preview of a parafoveal word n + 2 two words to the right of fixation. Optimizing the model parameters for the valid preview condition only, we obtained span parameters with remarkably realistic estimates conforming to the empirical findings on the size of the perceptual span. More importantly, the SWIFT model generated parafoveal processing up to word n + 2 without fitting the model to such preview effects. Our results suggest that asymmetry and dynamic modulation are plausible properties of the perceptual span in a parallel word-processing model such as SWIFT. Moreover, they seem to guide the flexible distribution of processing resources during reading between foveal and parafoveal words. PMID:24771996
Optimal design of stimulus experiments for robust discrimination of biochemical reaction networks.
Flassig, R J; Sundmacher, K
2012-12-01
Biochemical reaction networks in the form of coupled ordinary differential equations (ODEs) provide a powerful modeling tool for understanding the dynamics of biochemical processes. During the early phase of modeling, scientists have to deal with a large pool of competing nonlinear models. At this point, discrimination experiments can be designed and conducted to obtain optimal data for selecting the most plausible model. Since biological ODE models have widely distributed parameters due to, e.g. biologic variability or experimental variations, model responses become distributed. Therefore, a robust optimal experimental design (OED) for model discrimination can be used to discriminate models based on their response probability distribution functions (PDFs). In this work, we present an optimal control-based methodology for designing optimal stimulus experiments aimed at robust model discrimination. For estimating the time-varying model response PDF, which results from the nonlinear propagation of the parameter PDF under the ODE dynamics, we suggest using the sigma-point approach. Using the model overlap (expected likelihood) as a robust discrimination criterion to measure dissimilarities between expected model response PDFs, we benchmark the proposed nonlinear design approach against linearization with respect to prediction accuracy and design quality for two nonlinear biological reaction networks. As shown, the sigma-point outperforms the linearization approach in the case of widely distributed parameter sets and/or existing multiple steady states. Since the sigma-point approach scales linearly with the number of model parameter, it can be applied to large systems for robust experimental planning. An implementation of the method in MATLAB/AMPL is available at http://www.uni-magdeburg.de/ivt/svt/person/rf/roed.html. flassig@mpi-magdeburg.mpg.de Supplementary data are are available at Bioinformatics online.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevenson, Simon; Ohme, Frank; Fairhurst, Stephen, E-mail: simon.stevenson@ligo.org
2015-09-01
The coalescence of compact binaries containing neutron stars or black holes is one of the most promising signals for advanced ground-based laser interferometer gravitational-wave (GW) detectors, with the first direct detections expected over the next few years. The rate of binary coalescences and the distribution of component masses is highly uncertain, and population synthesis models predict a wide range of plausible values. Poorly constrained parameters in population synthesis models correspond to poorly understood astrophysics at various stages in the evolution of massive binary stars, the progenitors of binary neutron star and binary black hole systems. These include effects such asmore » supernova kick velocities, parameters governing the energetics of common envelope evolution and the strength of stellar winds. Observing multiple binary black hole systems through GWs will allow us to infer details of the astrophysical mechanisms that lead to their formation. Here we simulate GW observations from a series of population synthesis models including the effects of known selection biases, measurement errors and cosmology. We compare the predictions arising from different models and show that we will be able to distinguish between them with observations (or the lack of them) from the early runs of the advanced LIGO and Virgo detectors. This will allow us to narrow down the large parameter space for binary evolution models.« less
Kong, Deguo; MacLeod, Matthew; Cousins, Ian T
2014-09-01
The effect of projected future changes in temperature, wind speed, precipitation and particulate organic carbon on concentrations of persistent organic chemicals in the Baltic Sea regional environment is evaluated using the POPCYCLING-Baltic multimedia chemical fate model. Steady-state concentrations of hypothetical perfectly persistent chemicals with property combinations that encompass the entire plausible range for non-ionizing organic substances are modelled under two alternative climate change scenarios (IPCC A2 and B2) and compared to a baseline climate scenario. The contributions of individual climate parameters are deduced in model experiments in which only one of the four parameters is changed from the baseline scenario. Of the four selected climate parameters, temperature is the most influential, and wind speed is least. Chemical concentrations in the Baltic region are projected to change by factors of up to 3.0 compared to the baseline climate scenario. For chemicals with property combinations similar to legacy persistent organic pollutants listed by the Stockholm Convention, modelled concentration ratios between two climate change scenarios and the baseline scenario range from factors of 0.5 to 2.0. This study is a first step toward quantitatively assessing climate change-induced changes in the environmental concentrations of persistent organic chemicals in the Baltic Sea region. Copyright © 2014 Elsevier Ltd. All rights reserved.
A Simple Model of Global Aerosol Indirect Effects
NASA Technical Reports Server (NTRS)
Ghan, Steven J.; Smith, Steven J.; Wang, Minghuai; Zhang, Kai; Pringle, Kirsty; Carslaw, Kenneth; Pierce, Jeffrey; Bauer, Susanne; Adams, Peter
2013-01-01
Most estimates of the global mean indirect effect of anthropogenic aerosol on the Earth's energy balance are from simulations by global models of the aerosol lifecycle coupled with global models of clouds and the hydrologic cycle. Extremely simple models have been developed for integrated assessment models, but lack the flexibility to distinguish between primary and secondary sources of aerosol. Here a simple but more physically based model expresses the aerosol indirect effect (AIE) using analytic representations of cloud and aerosol distributions and processes. Although the simple model is able to produce estimates of AIEs that are comparable to those from some global aerosol models using the same global mean aerosol properties, the estimates by the simple model are sensitive to preindustrial cloud condensation nuclei concentration, preindustrial accumulation mode radius, width of the accumulation mode, size of primary particles, cloud thickness, primary and secondary anthropogenic emissions, the fraction of the secondary anthropogenic emissions that accumulates on the coarse mode, the fraction of the secondary mass that forms new particles, and the sensitivity of liquid water path to droplet number concentration. Estimates of present-day AIEs as low as 5 W/sq m and as high as 0.3 W/sq m are obtained for plausible sets of parameter values. Estimates are surprisingly linear in emissions. The estimates depend on parameter values in ways that are consistent with results from detailed global aerosol-climate simulation models, which adds to understanding of the dependence on AIE uncertainty on uncertainty in parameter values.
Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation
NASA Astrophysics Data System (ADS)
Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.
2016-12-01
With the growing impacts of climate change and human activities on the cycle of water resources, an increasing number of researches focus on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, and each model is attached with a model weight which is determined by model's prior weight and marginal likelihood. Thus, the estimation of model's marginal likelihood is crucial for reliable and accurate BMA prediction. Nested sampling estimator (NSE) is a new proposed method for marginal likelihood estimation. The process of NSE is accomplished by searching the parameters' space from low likelihood area to high likelihood area gradually, and this evolution is finished iteratively via local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of local sampling procedure. Currently, Metropolis-Hasting (M-H) algorithm is often used for local sampling. However, M-H is not an efficient sampling algorithm for high-dimensional or complicated parameter space. For improving the efficiency of NSE, it could be ideal to incorporate the robust and efficient sampling algorithm - DREAMzs into the local sampling of NSE. The comparison results demonstrated that the improved NSE could improve the efficiency of marginal likelihood estimation significantly. However, both improved and original NSEs suffer from heavy instability. In addition, the heavy computation cost of huge number of model executions is overcome by using an adaptive sparse grid surrogates.
Rocky Worlds Limited to ∼1.8 Earth Radii by Atmospheric Escape during a Star’s Extreme UV Saturation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehmer, Owen R.; Catling, David C., E-mail: info@lehmer.us
Recent observations and analysis of low-mass (<10 M {sub ⊕}) exoplanets have found that rocky planets only have radii up to 1.5–2 R {sub ⊕}. Two general hypotheses exist for the cause of the dichotomy between rocky and gas-enveloped planets (or possible water worlds): either low-mass planets do not necessarily form thick atmospheres of a few wt.%, or the thick atmospheres on these planets easily escape, driven by X-ray and extreme ultraviolet (XUV) emissions from young parent stars. Here, we show that a cutoff between rocky and gas-enveloped planets due to hydrodynamic escape is most likely to occur at amore » mean radius of 1.76 ± 0.38 (2 σ ) R {sub ⊕} around Sun-like stars. We examine the limit in rocky planet radii predicted by hydrodynamic escape across a wide range of possible model inputs, using 10,000 parameter combinations drawn randomly from plausible parameter ranges. We find a cutoff between rocky and gas-enveloped planets that agrees with the observed cutoff. The large cross-section available for XUV absorption in the extremely distended primitive atmospheres of low-mass planets results in complete loss of atmospheres during the ∼100 Myr phase of stellar XUV saturation. In contrast, more-massive planets have less-distended atmospheres and less escape, and so retain thick atmospheres through XUV saturation—and then indefinitely as the XUV and escape fluxes drop over time. The agreement between our model and exoplanet data leads us to conclude that hydrodynamic escape plausibly explains the observed upper limit on rocky planet size and few planets (a “valley”, or “radius gap”) in the 1.5–2 R {sub ⊕} range.« less
Biologically Plausible, Human-scale Knowledge Representation
ERIC Educational Resources Information Center
Crawford, Eric; Gingerich, Matthew; Eliasmith, Chris
2016-01-01
Several approaches to implementing symbol-like representations in neurally plausible models have been proposed. These approaches include binding through synchrony (Shastri & Ajjanagadde, 1993), "mesh" binding (van der Velde & de Kamps, 2006), and conjunctive binding (Smolensky, 1990). Recent theoretical work has suggested that…
NASA Astrophysics Data System (ADS)
Li, Tianjun; Nanopoulos, Dimitri V.; Walker, Joel W.
2010-10-01
We consider proton decay in the testable flipped SU(5)×U(1)X models with TeV-scale vector-like particles which can be realized in free fermionic string constructions and F-theory model building. We significantly improve upon the determination of light threshold effects from prior studies, and perform a fresh calculation of the second loop for the process p→eπ from the heavy gauge boson exchange. The cumulative result is comparatively fast proton decay, with a majority of the most plausible parameter space within reach of the future Hyper-Kamiokande and DUSEL experiments. Because the TeV-scale vector-like particles can be produced at the LHC, we predict a strong correlation between the most exciting particle physics experiments of the coming decade.
Evaluation of risk from acts of terrorism :the adversary/defender model using belief and fuzzy sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darby, John L.
Risk from an act of terrorism is a combination of the likelihood of an attack, the likelihood of success of the attack, and the consequences of the attack. The considerable epistemic uncertainty in each of these three factors can be addressed using the belief/plausibility measure of uncertainty from the Dempster/Shafer theory of evidence. The adversary determines the likelihood of the attack. The success of the attack and the consequences of the attack are determined by the security system and mitigation measures put in place by the defender. This report documents a process for evaluating risk of terrorist acts using anmore » adversary/defender model with belief/plausibility as the measure of uncertainty. Also, the adversary model is a linguistic model that applies belief/plausibility to fuzzy sets used in an approximate reasoning rule base.« less
Systems Analysis of the Hydrogen Transition with HyTrans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leiby, Paul Newsome; Greene, David L; Bowman, David Charles
2007-01-01
The U.S. Federal government is carefully considering the merits and long-term prospects of hydrogen-fueled vehicles. NAS (1) has called for the careful application of systems analysis tools to structure the complex assessment required. Others, raising cautionary notes, question whether a consistent and plausible transition to hydrogen light-duty vehicles can identified (2) and whether that transition would, on balance, be environmentally preferred. Modeling the market transition to hydrogen-powered vehicles is an inherently complex process, encompassing hydrogen production, delivery and retailing, vehicle manufacturing, and vehicle choice and use. We describe the integration of key technological and market factors in a dynamic transitionmore » model, HyTrans. The usefulness of HyTrans and its predictions depends on three key factors: (1) the validity of the economic theories that underpin the model, (2) the authenticity with which the key processes are represented, and (3) the accuracy of specific parameter values used in the process representations. This paper summarizes the theoretical basis of HyTrans, and highlights the implications of key parameter specifications with sensitivity analysis.« less
Aghdasinia, Hassan; Bagheri, Rasoul; Vahid, Behrouz; Khataee, Alireza
2016-11-01
Optimization of Acid Yellow 36 (AY36) degradation by heterogeneous Fenton process in a recirculated fluidized-bed reactor was studied using central composite design (CCD). Natural pyrite was applied as the catalyst characterized by X-ray diffraction and scanning electron microscopy. The CCD model was developed for the estimation of degradation efficiency as a function of independent operational parameters including hydrogen peroxide concentration (0.5-2.5 mmol/L), initial AY36 concentration (5-25 mg/L), pH (3-9) and catalyst dosage (0.4-1.2 mg/L). The obtained data from the model are in good agreement with the experimental data (R(2 )= 0.964). Moreover, this model is applicable not only to determine the optimized experimental conditions for maximum AY36 degradation, but also to find individual and interactive effects of the mentioned parameters. Finally, gas chromatography-mass spectroscopy (GC-MS) was utilized for the identification of some degradation intermediates and a plausible degradation pathway was proposed.
Black Hole Mergers as Probes of Structure Formation
NASA Technical Reports Server (NTRS)
Alicea-Munoz, E.; Miller, M. Coleman
2008-01-01
Intense structure formation and reionization occur at high redshift, yet there is currently little observational information about this very important epoch. Observations of gravitational waves from massive black hole (MBH) mergers can provide us with important clues about the formation of structures in the early universe. Past efforts have been limited to calculating merger rates using different models in which many assumptions are made about the specific values of physical parameters of the mergers, resulting in merger rate estimates that span a very wide range (0.1 - 104 mergers/year). Here we develop a semi-analytical, phenomenological model of MBH mergers that includes plausible combinations of several physical parameters, which we then turn around to determine how well observations with the Laser Interferometer Space Antenna (LISA) will be able to enhance our understanding of the universe during the critical z 5 - 30 structure formation era. We do this by generating synthetic LISA observable data (total BH mass, BH mass ratio, redshift, merger rates), which are then analyzed using a Markov Chain Monte Carlo method. This allows us to constrain the physical parameters of the mergers. We find that our methodology works well at estimating merger parameters, consistently giving results within 1- of the input parameter values. We also discover that the number of merger events is a key discriminant among models. This helps our method be robust against observational uncertainties. Our approach, which at this stage constitutes a proof of principle, can be readily extended to physical models and to more general problems in cosmology and gravitational wave astrophysics.
Electrochemical impedance spectroscopy of lithium-titanium disulfide rechargeable cells
NASA Technical Reports Server (NTRS)
Narayanan, S. R.; Shen, D. H.; Surampudi, S.; Attia, A. I.; Halpert, G.
1993-01-01
The two-terminal alternating current impedance of Li/TiS2 rechargeable cells was studied as a function of frequency, state-of-charge, and extended cycling. Analysis based on a plausible equivalent circuit model for the Li/TiS2 cell leads to evaluation of kinetic parameters for the various physicochemical processes occurring at the electrode/electrolyte interfaces. To investigate the causes of cell degradation during extended cycling, the parameters evaluated for cells cycled 5 times were compared with the parameters of cells cycled over 600 times. The findings are that the combined ohmic resistance of the electrolyte and electrodes suffers a tenfold increase after extended cycling, while the charge-transfer resistance and diffusional impedance at the TiS2/electrolyte interface are not significantIy affected. The results reflect the morphological change and increase in area of the anode due to cycling. The study also shows that overdischarge of a cathode-limited cell causes a decrease in the diffusion coefficient of the lithium ion in the cathode.
Simulating human behavior for national security human interactions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernard, Michael Lewis; Hart, Dereck H.; Verzi, Stephen J.
2007-01-01
This 3-year research and development effort focused on what we believe is a significant technical gap in existing modeling and simulation capabilities: the representation of plausible human cognition and behaviors within a dynamic, simulated environment. Specifically, the intent of the ''Simulating Human Behavior for National Security Human Interactions'' project was to demonstrate initial simulated human modeling capability that realistically represents intra- and inter-group interaction behaviors between simulated humans and human-controlled avatars as they respond to their environment. Significant process was made towards simulating human behaviors through the development of a framework that produces realistic characteristics and movement. The simulated humansmore » were created from models designed to be psychologically plausible by being based on robust psychological research and theory. Progress was also made towards enhancing Sandia National Laboratories existing cognitive models to support culturally plausible behaviors that are important in representing group interactions. These models were implemented in the modular, interoperable, and commercially supported Umbra{reg_sign} simulation framework.« less
Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward
2013-09-01
Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose-response model parameters are estimated using limited epidemiological data is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility. © Her Majesty the Queen in Right of Canada 2013. Reproduced with the permission of the Minister of the Public Health Agency of Canada.
Diagnosing the dangerous demography of manta rays using life history theory.
Dulvy, Nicholas K; Pardo, Sebastián A; Simpfendorfer, Colin A; Carlson, John K
2014-01-01
Background. The directed harvest and global trade in the gill plates of mantas, and devil rays, has led to increased fishing pressure and steep population declines in some locations. The slow life history, particularly of the manta rays, is cited as a key reason why such species have little capacity to withstand directed fisheries. Here, we place their life history and demography within the context of other sharks and rays. Methods. Despite the limited availability of data, we use life history theory and comparative analysis to estimate the intrinsic risk of extinction (as indexed by the maximum intrinsic rate of population increase r max) for a typical generic manta ray using a variant of the classic Euler-Lotka demographic model. This model requires only three traits to calculate the maximum intrinsic population growth rate r max: von Bertalanffy growth rate, annual pup production and age at maturity. To account for the uncertainty in life history parameters, we created plausible parameter ranges and propagate these uncertainties through the model to calculate a distribution of the plausible range of r max values. Results. The maximum population growth rate r max of manta ray is most sensitive to the length of the reproductive cycle, and the median r max of 0.116 year(-1) 95th percentile [0.089-0.139] is one of the lowest known of the 106 sharks and rays for which we have comparable demographic information. Discussion. In common with other unprotected, unmanaged, high-value large-bodied sharks and rays the combination of very low population growth rates of manta rays, combined with the high value of their gill rakers and the international nature of trade, is highly likely to lead to rapid depletion and potential local extinction unless a rapid conservation management response occurs worldwide. Furthermore, we show that it is possible to derive important insights into the demography extinction risk of data-poor species using well-established life history theory.
Diagnosing the dangerous demography of manta rays using life history theory
Pardo, Sebastián A.; Simpfendorfer, Colin A.; Carlson, John K.
2014-01-01
Background. The directed harvest and global trade in the gill plates of mantas, and devil rays, has led to increased fishing pressure and steep population declines in some locations. The slow life history, particularly of the manta rays, is cited as a key reason why such species have little capacity to withstand directed fisheries. Here, we place their life history and demography within the context of other sharks and rays. Methods. Despite the limited availability of data, we use life history theory and comparative analysis to estimate the intrinsic risk of extinction (as indexed by the maximum intrinsic rate of population increase rmax) for a typical generic manta ray using a variant of the classic Euler–Lotka demographic model. This model requires only three traits to calculate the maximum intrinsic population growth rate rmax: von Bertalanffy growth rate, annual pup production and age at maturity. To account for the uncertainty in life history parameters, we created plausible parameter ranges and propagate these uncertainties through the model to calculate a distribution of the plausible range of rmax values. Results. The maximum population growth rate rmax of manta ray is most sensitive to the length of the reproductive cycle, and the median rmax of 0.116 year−1 95th percentile [0.089–0.139] is one of the lowest known of the 106 sharks and rays for which we have comparable demographic information. Discussion. In common with other unprotected, unmanaged, high-value large-bodied sharks and rays the combination of very low population growth rates of manta rays, combined with the high value of their gill rakers and the international nature of trade, is highly likely to lead to rapid depletion and potential local extinction unless a rapid conservation management response occurs worldwide. Furthermore, we show that it is possible to derive important insights into the demography extinction risk of data-poor species using well-established life history theory. PMID:24918029
Image-Based Reverse Engineering and Visual Prototyping of Woven Cloth.
Schroder, Kai; Zinke, Arno; Klein, Reinhard
2015-02-01
Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how to best represent and capture cloth models, specifically when considering computer aided design of cloth. Previous methods produce highly realistic images, however, they are either difficult to edit or require the measurement of large databases to capture all variations of a cloth sample. We propose a pipeline to reverse engineer cloth and estimate a parametrized cloth model from a single image. We introduce a geometric yarn model, integrating state-of-the-art textile research. We present an automatic analysis approach to estimate yarn paths, yarn widths, their variation and a weave pattern. Several examples demonstrate that we are able to model the appearance of the original cloth sample. Properties derived from the input image give a physically plausible basis that is fully editable using a few intuitive parameters.
NASA Astrophysics Data System (ADS)
Mondal, Argha; Upadhyay, Ranjit Kumar
2017-11-01
In this paper, an attempt has been made to understand the activity of mean membrane voltage and subsidiary system variables with moment equations (i.e., mean, variance and covariance's) under noisy environment. We consider a biophysically plausible modified Hindmarsh-Rose (H-R) neural system injected by an applied current exhibiting spiking-bursting phenomenon. The effects of predominant parameters on the dynamical behavior of a modified H-R system are investigated. Numerically, it exhibits period-doubling, period halving bifurcation and chaos phenomena. Further, a nonlinear system has been analyzed for the first and second order moments with additive stochastic perturbations. It has been solved using fourth order Runge-Kutta method and noisy systems by Euler's scheme. It has been demonstrated that the firing properties of neurons to evoke an action potential in a certain parameter space of the large exact systems can be estimated using an approximated model. Strong stimulation can cause a change in increase or decrease of the firing patterns. Corresponding to a fixed set of parameter values, the firing behavior and dynamical differences of the collective variables of a large, exact and approximated systems are investigated.
Bromaghin, Jeffrey F.; McDonald, Trent L.; Amstrup, Steven C.
2013-01-01
Mark-recapture models are extensively used in quantitative population ecology, providing estimates of population vital rates, such as survival, that are difficult to obtain using other methods. Vital rates are commonly modeled as functions of explanatory covariates, adding considerable flexibility to mark-recapture models, but also increasing the subjectivity and complexity of the modeling process. Consequently, model selection and the evaluation of covariate structure remain critical aspects of mark-recapture modeling. The difficulties involved in model selection are compounded in Cormack-Jolly- Seber models because they are composed of separate sub-models for survival and recapture probabilities, which are conceptualized independently even though their parameters are not statistically independent. The construction of models as combinations of sub-models, together with multiple potential covariates, can lead to a large model set. Although desirable, estimation of the parameters of all models may not be feasible. Strategies to search a model space and base inference on a subset of all models exist and enjoy widespread use. However, even though the methods used to search a model space can be expected to influence parameter estimation, the assessment of covariate importance, and therefore the ecological interpretation of the modeling results, the performance of these strategies has received limited investigation. We present a new strategy for searching the space of a candidate set of Cormack-Jolly-Seber models and explore its performance relative to existing strategies using computer simulation. The new strategy provides an improved assessment of the importance of covariates and covariate combinations used to model survival and recapture probabilities, while requiring only a modest increase in the number of models on which inference is based in comparison to existing techniques.
Chasing Perfection: Should We Reduce Model Uncertainty in Carbon Cycle-Climate Feedbacks
NASA Astrophysics Data System (ADS)
Bonan, G. B.; Lombardozzi, D.; Wieder, W. R.; Lindsay, K. T.; Thomas, R. Q.
2015-12-01
Earth system model simulations of the terrestrial carbon (C) cycle show large multi-model spread in the carbon-concentration and carbon-climate feedback parameters. Large differences among models are also seen in their simulation of global vegetation and soil C stocks and other aspects of the C cycle, prompting concern about model uncertainty and our ability to faithfully represent fundamental aspects of the terrestrial C cycle in Earth system models. Benchmarking analyses that compare model simulations with common datasets have been proposed as a means to assess model fidelity with observations, and various model-data fusion techniques have been used to reduce model biases. While such efforts will reduce multi-model spread, they may not help reduce uncertainty (and increase confidence) in projections of the C cycle over the twenty-first century. Many ecological and biogeochemical processes represented in Earth system models are poorly understood at both the site scale and across large regions, where biotic and edaphic heterogeneity are important. Our experience with the Community Land Model (CLM) suggests that large uncertainty in the terrestrial C cycle and its feedback with climate change is an inherent property of biological systems. The challenge of representing life in Earth system models, with the rich diversity of lifeforms and complexity of biological systems, may necessitate a multitude of modeling approaches to capture the range of possible outcomes. Such models should encompass a range of plausible model structures. We distinguish between model parameter uncertainty and model structural uncertainty. Focusing on improved parameter estimates may, in fact, limit progress in assessing model structural uncertainty associated with realistically representing biological processes. Moreover, higher confidence may be achieved through better process representation, but this does not necessarily reduce uncertainty.
NASA Astrophysics Data System (ADS)
Stisen, S.; Højberg, A. L.; Troldborg, L.; Refsgaard, J. C.; Christensen, B. S. B.; Olsen, M.; Henriksen, H. J.
2012-11-01
Precipitation gauge catch correction is often given very little attention in hydrological modelling compared to model parameter calibration. This is critical because significant precipitation biases often make the calibration exercise pointless, especially when supposedly physically-based models are in play. This study addresses the general importance of appropriate precipitation catch correction through a detailed modelling exercise. An existing precipitation gauge catch correction method addressing solid and liquid precipitation is applied, both as national mean monthly correction factors based on a historic 30 yr record and as gridded daily correction factors based on local daily observations of wind speed and temperature. The two methods, named the historic mean monthly (HMM) and the time-space variable (TSV) correction, resulted in different winter precipitation rates for the period 1990-2010. The resulting precipitation datasets were evaluated through the comprehensive Danish National Water Resources model (DK-Model), revealing major differences in both model performance and optimised model parameter sets. Simulated stream discharge is improved significantly when introducing the TSV correction, whereas the simulated hydraulic heads and multi-annual water balances performed similarly due to recalibration adjusting model parameters to compensate for input biases. The resulting optimised model parameters are much more physically plausible for the model based on the TSV correction of precipitation. A proxy-basin test where calibrated DK-Model parameters were transferred to another region without site specific calibration showed better performance for parameter values based on the TSV correction. Similarly, the performances of the TSV correction method were superior when considering two single years with a much dryer and a much wetter winter, respectively, as compared to the winters in the calibration period (differential split-sample tests). We conclude that TSV precipitation correction should be carried out for studies requiring a sound dynamic description of hydrological processes, and it is of particular importance when using hydrological models to make predictions for future climates when the snow/rain composition will differ from the past climate. This conclusion is expected to be applicable for mid to high latitudes, especially in coastal climates where winter precipitation types (solid/liquid) fluctuate significantly, causing climatological mean correction factors to be inadequate.
Comparison of screening-level and Monte Carlo approaches for wildlife food web exposure modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pastorok, R.; Butcher, M.; LaTier, A.
1995-12-31
The implications of using quantitative uncertainty analysis (e.g., Monte Carlo) and site-specific tissue residue data for wildlife exposure modeling were examined with data on trace elements at the Clark Fork River Superfund Site. Exposure of white-tailed deer, red fox, and American kestrel was evaluated using three approaches. First, a screening-level exposure model was based on conservative estimates of exposure parameters, including estimates of dietary residues derived from bioconcentration factors (BCFs) and soil chemistry. A second model without Monte Carlo was based on site-specific data for tissue residues of trace elements (As, Cd, Cu, Pb, Zn) in key dietary species andmore » plausible assumptions for habitat spatial segmentation and other exposure parameters. Dietary species sampled included dominant grasses (tufted hairgrass and redtop), willows, alfalfa, barley, invertebrates (grasshoppers, spiders, and beetles), and deer mice. Third, the Monte Carlo analysis was based on the site-specific residue data and assumed or estimated distributions for exposure parameters. Substantial uncertainties are associated with several exposure parameters, especially BCFS, such that exposure and risk may be greatly overestimated in screening-level approaches. The results of the three approaches are compared with respect to realism, practicality, and data gaps. Collection of site-specific data on trace elements concentrations in plants and animals eaten by the target wildlife receptors is a cost-effective way to obtain realistic estimates of exposure. Implications of the results for exposure and risk estimates are discussed relative to use of wildlife exposure modeling and evaluation of remedial actions at Superfund sites.« less
Interpretation of magnetic anomalies using a genetic algorithm
NASA Astrophysics Data System (ADS)
Kaftan, İlknur
2017-08-01
A genetic algorithm (GA) is an artificial intelligence method used for optimization. We applied a GA to the inversion of magnetic anomalies over a thick dike. Inversion of nonlinear geophysical problems using a GA has advantages because it does not require model gradients or well-defined initial model parameters. The evolution process consists of selection, crossover, and mutation genetic operators that look for the best fit to the observed data and a solution consisting of plausible compact sources. The efficiency of a GA on both synthetic and real magnetic anomalies of dikes by estimating model parameters, such as depth to the top of the dike ( H), the half-width of the dike ( B), the distance from the origin to the reference point ( D), the dip of the thick dike ( δ), and the susceptibility contrast ( k), has been shown. For the synthetic anomaly case, it has been considered for both noise-free and noisy magnetic data. In the real case, the vertical magnetic anomaly from the Pima copper mine in Arizona, USA, and the vertical magnetic anomaly in the Bayburt-Sarıhan skarn zone in northeastern Turkey have been inverted and interpreted. We compared the estimated parameters with the results of conventional inversion methods used in previous studies. We can conclude that the GA method used in this study is a useful tool for evaluating magnetic anomalies for dike models.
Tracking slow modulations in synaptic gain using dynamic causal modelling: validation in epilepsy.
Papadopoulou, Margarita; Leite, Marco; van Mierlo, Pieter; Vonck, Kristl; Lemieux, Louis; Friston, Karl; Marinazzo, Daniele
2015-02-15
In this work we propose a proof of principle that dynamic causal modelling can identify plausible mechanisms at the synaptic level underlying brain state changes over a timescale of seconds. As a benchmark example for validation we used intracranial electroencephalographic signals in a human subject. These data were used to infer the (effective connectivity) architecture of synaptic connections among neural populations assumed to generate seizure activity. Dynamic causal modelling allowed us to quantify empirical changes in spectral activity in terms of a trajectory in parameter space - identifying key synaptic parameters or connections that cause observed signals. Using recordings from three seizures in one patient, we considered a network of two sources (within and just outside the putative ictal zone). Bayesian model selection was used to identify the intrinsic (within-source) and extrinsic (between-source) connectivity. Having established the underlying architecture, we were able to track the evolution of key connectivity parameters (e.g., inhibitory connections to superficial pyramidal cells) and test specific hypotheses about the synaptic mechanisms involved in ictogenesis. Our key finding was that intrinsic synaptic changes were sufficient to explain seizure onset, where these changes showed dissociable time courses over several seconds. Crucially, these changes spoke to an increase in the sensitivity of principal cells to intrinsic inhibitory afferents and a transient loss of excitatory-inhibitory balance. Copyright © 2014. Published by Elsevier Inc.
Matott, L Shawn; Jiang, Zhengzheng; Rabideau, Alan J; Allen-King, Richelle M
2015-01-01
Numerous isotherm expressions have been developed for describing sorption of hydrophobic organic compounds (HOCs), including "dual-mode" approaches that combine nonlinear behavior with a linear partitioning component. Choosing among these alternative expressions for describing a given dataset is an important task that can significantly influence subsequent transport modeling and/or mechanistic interpretation. In this study, a series of numerical experiments were undertaken to identify "best-in-class" isotherms by refitting 10 alternative models to a suite of 13 previously published literature datasets. The corrected Akaike Information Criterion (AICc) was used for ranking these alternative fits and distinguishing between plausible and implausible isotherms for each dataset. The occurrence of multiple plausible isotherms was inversely correlated with dataset "richness", such that datasets with fewer observations and/or a narrow range of aqueous concentrations resulted in a greater number of plausible isotherms. Overall, only the Polanyi-partition dual-mode isotherm was classified as "plausible" across all 13 of the considered datasets, indicating substantial statistical support consistent with current advances in sorption theory. However, these findings are predicated on the use of the AICc measure as an unbiased ranking metric and the adoption of a subjective, but defensible, threshold for separating plausible and implausible isotherms. Copyright © 2015 Elsevier B.V. All rights reserved.
Fire, ice, water, and dirt: A simple climate model
NASA Astrophysics Data System (ADS)
Kroll, John
2017-07-01
A simple paleoclimate model was developed as a modeling exercise. The model is a lumped parameter system consisting of an ocean (water), land (dirt), glacier, and sea ice (ice) and driven by the sun (fire). In comparison with other such models, its uniqueness lies in its relative simplicity yet yielding good results. For nominal values of parameters, the system is very sensitive to small changes in the parameters, yielding equilibrium, steady oscillations, and catastrophes such as freezing or boiling oceans. However, stable solutions can be found, especially naturally oscillating solutions. For nominally realistic conditions, natural periods of order 100kyrs are obtained, and chaos ensues if the Milankovitch orbital forcing is applied. An analysis of a truncated system shows that the naturally oscillating solution is a limit cycle with the characteristics of a relaxation oscillation in the two major dependent variables, the ocean temperature and the glacier ice extent. The key to getting oscillations is having the effective emissivity decreasing with temperature and, at the same time, the effective ocean albedo decreases with increasing glacier extent. Results of the original model compare favorably to the proxy data for ice mass variation, but not for temperature variation. However, modifications to the effective emissivity and albedo can be made to yield much more realistic results. The primary conclusion is that the opinion of Saltzman [Clim. Dyn. 5, 67-78 (1990)] is plausible that the external Milankovitch orbital forcing is not sufficient to explain the dominant 100kyr period in the data.
Fire, ice, water, and dirt: A simple climate model.
Kroll, John
2017-07-01
A simple paleoclimate model was developed as a modeling exercise. The model is a lumped parameter system consisting of an ocean (water), land (dirt), glacier, and sea ice (ice) and driven by the sun (fire). In comparison with other such models, its uniqueness lies in its relative simplicity yet yielding good results. For nominal values of parameters, the system is very sensitive to small changes in the parameters, yielding equilibrium, steady oscillations, and catastrophes such as freezing or boiling oceans. However, stable solutions can be found, especially naturally oscillating solutions. For nominally realistic conditions, natural periods of order 100kyrs are obtained, and chaos ensues if the Milankovitch orbital forcing is applied. An analysis of a truncated system shows that the naturally oscillating solution is a limit cycle with the characteristics of a relaxation oscillation in the two major dependent variables, the ocean temperature and the glacier ice extent. The key to getting oscillations is having the effective emissivity decreasing with temperature and, at the same time, the effective ocean albedo decreases with increasing glacier extent. Results of the original model compare favorably to the proxy data for ice mass variation, but not for temperature variation. However, modifications to the effective emissivity and albedo can be made to yield much more realistic results. The primary conclusion is that the opinion of Saltzman [Clim. Dyn. 5, 67-78 (1990)] is plausible that the external Milankovitch orbital forcing is not sufficient to explain the dominant 100kyr period in the data.
Emulation: A fast stochastic Bayesian method to eliminate model space
NASA Astrophysics Data System (ADS)
Roberts, Alan; Hobbs, Richard; Goldstein, Michael
2010-05-01
Joint inversion of large 3D datasets has been the goal of geophysicists ever since the datasets first started to be produced. There are two broad approaches to this kind of problem, traditional deterministic inversion schemes and more recently developed Bayesian search methods, such as MCMC (Markov Chain Monte Carlo). However, using both these kinds of schemes has proved prohibitively expensive, both in computing power and time cost, due to the normally very large model space which needs to be searched using forward model simulators which take considerable time to run. At the heart of strategies aimed at accomplishing this kind of inversion is the question of how to reliably and practicably reduce the size of the model space in which the inversion is to be carried out. Here we present a practical Bayesian method, known as emulation, which can address this issue. Emulation is a Bayesian technique used with considerable success in a number of technical fields, such as in astronomy, where the evolution of the universe has been modelled using this technique, and in the petroleum industry where history matching is carried out of hydrocarbon reservoirs. The method of emulation involves building a fast-to-compute uncertainty-calibrated approximation to a forward model simulator. We do this by modelling the output data from a number of forward simulator runs by a computationally cheap function, and then fitting the coefficients defining this function to the model parameters. By calibrating the error of the emulator output with respect to the full simulator output, we can use this to screen out large areas of model space which contain only implausible models. For example, starting with what may be considered a geologically reasonable prior model space of 10000 models, using the emulator we can quickly show that only models which lie within 10% of that model space actually produce output data which is plausibly similar in character to an observed dataset. We can thus much more tightly constrain the input model space for a deterministic inversion or MCMC method. By using this technique jointly on several datasets (specifically seismic, gravity, and magnetotelluric (MT) describing the same region), we can include in our modelling uncertainties in the data measurements, the relationships between the various physical parameters involved, as well as the model representation uncertainty, and at the same time further reduce the range of plausible models to several percent of the original model space. Being stochastic in nature, the output posterior parameter distributions also allow our understanding of/beliefs about a geological region can be objectively updated, with full assessment of uncertainties, and so the emulator is also an inversion-type tool in it's own right, with the advantage (as with any Bayesian method) that our uncertainties from all sources (both data and model) can be fully evaluated.
Biologically plausible learning in neural networks: a lesson from bacterial chemotaxis.
Shimansky, Yury P
2009-12-01
Learning processes in the brain are usually associated with plastic changes made to optimize the strength of connections between neurons. Although many details related to biophysical mechanisms of synaptic plasticity have been discovered, it is unclear how the concurrent performance of adaptive modifications in a huge number of spatial locations is organized to minimize a given objective function. Since direct experimental observation of even a relatively small subset of such changes is not feasible, computational modeling is an indispensable investigation tool for solving this problem. However, the conventional method of error back-propagation (EBP) employed for optimizing synaptic weights in artificial neural networks is not biologically plausible. This study based on computational experiments demonstrated that such optimization can be performed rather efficiently using the same general method that bacteria employ for moving closer to an attractant or away from a repellent. With regard to neural network optimization, this method consists of regulating the probability of an abrupt change in the direction of synaptic weight modification according to the temporal gradient of the objective function. Neural networks utilizing this method (regulation of modification probability, RMP) can be viewed as analogous to swimming in the multidimensional space of their parameters in the flow of biochemical agents carrying information about the optimality criterion. The efficiency of RMP is comparable to that of EBP, while RMP has several important advantages. Since the biological plausibility of RMP is beyond a reasonable doubt, the RMP concept provides a constructive framework for the experimental analysis of learning in natural neural networks.
Heck, Daniel W; Hilbig, Benjamin E; Moshagen, Morten
2017-08-01
Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy explains decision behavior best. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can directly be compared using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. Similar as in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
Estimating the boundaries of a limit cycle in a 2D dynamical system using renormalization group
NASA Astrophysics Data System (ADS)
Dutta, Ayan; Das, Debapriya; Banerjee, Dhruba; Bhattacharjee, Jayanta K.
2018-04-01
While the plausibility of formation of limit cycle has been a well studied topic in context of the Poincare-Bendixson theorem, studies on estimates in regard to the possible size and shape of the limit cycle seem to be scanty in the literature. In this paper we present a pedagogical study of some aspects of the size of this limit cycle using perturbative renormalization group by doing detailed and explicit calculations upto second order for the Selkov model for glycolytic oscillations. This famous model is well known to lead to a limit cycle for certain ranges of values of the parameters involved in the problem. Within the tenets of the approximations made, reasonable agreement with the numerical plots can be achieved.
NASA Astrophysics Data System (ADS)
Ingale, S. V.; Datta, D.
2010-10-01
Consequence of the accidental release of radioactivity from a nuclear power plant is assessed in terms of exposure or dose to the members of the public. Assessment of risk is routed through this dose computation. Dose computation basically depends on the basic dose assessment model and exposure pathways. One of the exposure pathways is the ingestion of contaminated food. The aim of the present paper is to compute the uncertainty associated with the risk to the members of the public due to the ingestion of contaminated food. The governing parameters of the ingestion dose assessment model being imprecise, we have approached evidence theory to compute the bound of the risk. The uncertainty is addressed by the belief and plausibility fuzzy measures.
Constraints on the synchronization of entorhinal cortex stellate cells
NASA Astrophysics Data System (ADS)
Crotty, Patrick; Lasker, Eric; Cheng, Sen
2012-07-01
Synchronized oscillations of large numbers of central neurons are believed to be important for a wide variety of cognitive functions, including long-term memory recall and spatial navigation. It is therefore plausible that evolution has optimized the biophysical properties of central neurons in some way for synchronized oscillations to occur. Here, we use computational models to investigate the relationships between the presumably genetically determined parameters of stellate cells in layer II of the entorhinal cortex and the ability of coupled populations of these cells to synchronize their intrinsic oscillations: in particular, we calculate the time it takes circuits of two or three cells with initially randomly distributed phases to synchronize their oscillations to within one action potential width, and the metabolic energy they consume in doing so. For recurrent circuit topologies, we find that parameters giving low intrinsic firing frequencies close to those actually observed are strongly advantageous for both synchronization time and metabolic energy consumption.
Modeling hot spring chemistries with applications to martian silica formation
NASA Astrophysics Data System (ADS)
Marion, G. M.; Catling, D. C.; Crowley, J. K.; Kargel, J. S.
2011-04-01
Many recent studies have implicated hydrothermal systems as the origin of martian minerals across a wide range of martian sites. Particular support for hydrothermal systems include silica (SiO 2) deposits, in some cases >90% silica, in the Gusev Crater region, especially in the Columbia Hills and at Home Plate. We have developed a model called CHEMCHAU that can be used up to 100 °C to simulate hot springs associated with hydrothermal systems. The model was partially derived from FREZCHEM, which is a colder temperature model parameterized for broad ranges of temperature (<-70 to 25 °C), pressure (1-1000 bars), and chemical composition. We demonstrate the validity of Pitzer parameters, volumetric parameters, and equilibrium constants in the CHEMCHAU model for the Na-K-Mg-Ca-H-Cl-ClO 4-SO 4-OH-HCO 3-CO 3-CO 2-O 2-CH 4-Si-H 2O system up to 100 °C and apply the model to hot springs and silica deposits. A theoretical simulation of silica and calcite equilibrium shows how calcite is least soluble with high pH and high temperatures, while silica behaves oppositely. Such influences imply that differences in temperature and pH on Mars could lead to very distinct mineral assemblages. Using measured solution chemistries of Yellowstone hot springs and Icelandic hot springs, we simulate salts formed during the evaporation of two low pH cases (high and low temperatures) and a high temperature, alkaline (high pH) sodic water. Simulation of an acid-sulfate case leads to precipitation of Fe and Al minerals along with silica. Consistency with martian mineral assemblages suggests that hot, acidic sulfate solutions are plausibility progenitors of minerals in the past on Mars. In the alkaline pH (8.45) simulation, formation of silica at high temperatures (355 K) led to precipitation of anhydrous minerals (CaSO 4, Na 2SO 4) that was also the case for the high temperature (353 K) low pH case where anhydrous minerals (NaCl, CaSO 4) also precipitated. Thus we predict that secondary minerals associated with massive silica deposits are plausible indicators on Mars of precipitation environments and aqueous chemistry. Theoretical model calculations are in reasonable agreement with independent experimental silica concentrations, which strengthens the validity of the new CHEMCHAU model.
Modeling hot spring chemistries with applications to martian silica formation
Marion, G.M.; Catling, D.C.; Crowley, J.K.; Kargel, J.S.
2011-01-01
Many recent studies have implicated hydrothermal systems as the origin of martian minerals across a wide range of martian sites. Particular support for hydrothermal systems include silica (SiO2) deposits, in some cases >90% silica, in the Gusev Crater region, especially in the Columbia Hills and at Home Plate. We have developed a model called CHEMCHAU that can be used up to 100??C to simulate hot springs associated with hydrothermal systems. The model was partially derived from FREZCHEM, which is a colder temperature model parameterized for broad ranges of temperature (<-70 to 25??C), pressure (1-1000 bars), and chemical composition. We demonstrate the validity of Pitzer parameters, volumetric parameters, and equilibrium constants in the CHEMCHAU model for the Na-K-Mg-Ca-H-Cl-ClO4-SO4-OH-HCO3-CO3-CO2-O2-CH4-Si-H2O system up to 100??C and apply the model to hot springs and silica deposits.A theoretical simulation of silica and calcite equilibrium shows how calcite is least soluble with high pH and high temperatures, while silica behaves oppositely. Such influences imply that differences in temperature and pH on Mars could lead to very distinct mineral assemblages. Using measured solution chemistries of Yellowstone hot springs and Icelandic hot springs, we simulate salts formed during the evaporation of two low pH cases (high and low temperatures) and a high temperature, alkaline (high pH) sodic water. Simulation of an acid-sulfate case leads to precipitation of Fe and Al minerals along with silica. Consistency with martian mineral assemblages suggests that hot, acidic sulfate solutions are plausibility progenitors of minerals in the past on Mars. In the alkaline pH (8.45) simulation, formation of silica at high temperatures (355K) led to precipitation of anhydrous minerals (CaSO4, Na2SO4) that was also the case for the high temperature (353K) low pH case where anhydrous minerals (NaCl, CaSO4) also precipitated. Thus we predict that secondary minerals associated with massive silica deposits are plausible indicators on Mars of precipitation environments and aqueous chemistry. Theoretical model calculations are in reasonable agreement with independent experimental silica concentrations, which strengthens the validity of the new CHEMCHAU model. ?? 2011 Elsevier Inc.
Mallinckrodt, C H; Lin, Q; Molenberghs, M
2013-01-01
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was - 2.79 (p = .013). In placebo multiple imputation, the result was - 2.17. Results from the other sensitivity analyses ranged from - 2.21 to - 3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework of using a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions supplemented a series of plausible alternative models under varying assumptions was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.
Growth rates of rainbow smelt in Lake Champlain: Effects of density and diet
Stritzel, Thomson J.L.; Parrish, D.L.; Parker-Stetter, S. L.; Rudstam, L. G.; Sullivan, P.J.
2011-01-01
Stritzel Thomson JL, Parrish DL, Parker-Stetter SL, Rudstam LG, Sullivan PJ. Growth rates of rainbow smelt in Lake Champlain: effects of density and diet. Ecology of Freshwater Fish 2010. ?? 2010 John Wiley & Sons A/S Abstract- We estimated the densities of rainbow smelt (Osmerus mordax) using hydroacoustics and obtained specimens for diet analysis and groundtruthed acoustics data from mid-water trawl sampling in four areas of Lake Champlain, USA-Canada. Densities of rainbow smelt cohorts alternated during the 2-year study; age-0 rainbow smelt were very abundant in 2001 (up to 6fish per m2) and age-1 and older were abundant (up to 1.2fish per m2) in 2002. Growth rates and densities varied among areas and years. We used model selection on eight area-year-specific variables to investigate biologically plausible predictors of rainbow smelt growth rates. The best supported model of growth rates of age-0 smelt indicated a negative relationship with age-0 density, likely associated with intraspecific competition for zooplankton. The next best-fit model had age-1 density as a predictor of age-0 growth. The best supported models (N=4) of growth rates of age-1 fish indicated a positive relationship with availability of age-0 smelt and resulting levels of cannibalism. Other plausible models were contained variants of these parameters. Cannibalistic rainbow smelt consumed younger conspecifics that were up to 53% of their length. Prediction of population dynamics for rainbow smelt requires an understanding of the relationship between density and growth as age-0 fish outgrow their main predators (adult smelt) by autumn in years with fast growth rates, but not in years with slow growth rates. ?? 2011 John Wiley & Sons A/S.
A Model for Generating Multi-hazard Scenarios
NASA Astrophysics Data System (ADS)
Lo Jacomo, A.; Han, D.; Champneys, A.
2017-12-01
Communities in mountain areas are often subject to risk from multiple hazards, such as earthquakes, landslides, and floods. Each hazard has its own different rate of onset, duration, and return period. Multiple hazards tend to complicate the combined risk due to their interactions. Prioritising interventions for minimising risk in this context is challenging. We developed a probabilistic multi-hazard model to help inform decision making in multi-hazard areas. The model is applied to a case study region in the Sichuan province in China, using information from satellite imagery and in-situ data. The model is not intended as a predictive model, but rather as a tool which takes stakeholder input and can be used to explore plausible hazard scenarios over time. By using a Monte Carlo framework and varrying uncertain parameters for each of the hazards, the model can be used to explore the effect of different mitigation interventions aimed at reducing the disaster risk within an uncertain hazard context.
Families of Plausible Solutions to the Puzzle of Boyajian’s Star
NASA Astrophysics Data System (ADS)
Wright, Jason T.; Sigurdsson, Steinn
2016-09-01
Good explanations for the unusual light curve of Boyajian's Star have been hard to find. Recent results by Montet & Simon lend strength and plausibility to the conclusion of Schaefer that in addition to short-term dimmings, the star also experiences large, secular decreases in brightness on decadal timescales. This, combined with a lack of long-wavelength excess in the star's spectral energy distribution, strongly constrains scenarios involving circumstellar material, including hypotheses invoking a spherical cloud of artifacts. We show that the timings of the deepest dimmings appear consistent with being randomly distributed, and that the star's reddening and narrow sodium absorption is consistent with the total, long-term dimming observed. Following Montet & Simon's encouragement to generate alternative hypotheses, we attempt to circumscribe the space of possible explanations with a range of plausibilities, including: a cloud in the outer solar system, structure in the interstellar medium (ISM), natural and artificial material orbiting Boyajian's Star, an intervening object with a large disk, and variations in Boyajian's Star itself. We find the ISM and intervening disk models more plausible than the other natural models.
Information fusion in regularized inversion of tomographic pumping tests
Bohling, Geoffrey C.; ,
2008-01-01
In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. ?? 2008 Springer-Verlag Berlin Heidelberg.
Cowell, Rosemary A; Bussey, Timothy J; Saksida, Lisa M
2012-11-01
We describe how computational models can be useful to cognitive and behavioral neuroscience, and discuss some guidelines for deciding whether a model is useful. We emphasize that because instantiating a cognitive theory as a computational model requires specification of an explicit mechanism for the function in question, it often produces clear and novel behavioral predictions to guide empirical research. However, computational modeling in cognitive and behavioral neuroscience remains somewhat rare, perhaps because of misconceptions concerning the use of computational models (in particular, connectionist models) in these fields. We highlight some common misconceptions, each of which relates to an aspect of computational models: the problem space of the model, the level of biological organization at which the model is formulated, and the importance (or not) of biological plausibility, parsimony, and model parameters. Careful consideration of these aspects of a model by empiricists, along with careful delineation of them by modelers, may facilitate communication between the two disciplines and promote the use of computational models for guiding cognitive and behavioral experiments. Copyright © 2012 Elsevier Ltd. All rights reserved.
He, Yujie; Yang, Jinyan; Zhuang, Qianlai; McGuire, A. David; Zhu, Qing; Liu, Yaling; Teskey, Robert O.
2014-01-01
Conventional Q10 soil organic matter decomposition models and more complex microbial models are available for making projections of future soil carbon dynamics. However, it is unclear (1) how well the conceptually different approaches can simulate observed decomposition and (2) to what extent the trajectories of long-term simulations differ when using the different approaches. In this study, we compared three structurally different soil carbon (C) decomposition models (one Q10 and two microbial models of different complexity), each with a one- and two-horizon version. The models were calibrated and validated using 4 years of measurements of heterotrophic soil CO2 efflux from trenched plots in a Dahurian larch (Larix gmelinii Rupr.) plantation. All models reproduced the observed heterotrophic component of soil CO2 efflux, but the trajectories of soil carbon dynamics differed substantially in 100 year simulations with and without warming and increased litterfall input, with microbial models that produced better agreement with observed changes in soil organic C in long-term warming experiments. Our results also suggest that both constant and varying carbon use efficiency are plausible when modeling future decomposition dynamics and that the use of a short-term (e.g., a few years) period of measurement is insufficient to adequately constrain model parameters that represent long-term responses of microbial thermal adaption. These results highlight the need to reframe the representation of decomposition models and to constrain parameters with long-term observations and multiple data streams. We urge caution in interpreting future soil carbon responses derived from existing decomposition models because both conceptual and parameter uncertainties are substantial.
Evidence accumulation as a model for lexical selection.
Anders, R; Riès, S; van Maanen, L; Alario, F X
2015-11-01
We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process related to selecting a lexical target from a number of alternatives, which each have varying activations (or signal supports), that are largely resultant of an initial stimulus recognition. We thoroughly present a case for how such a process may be theoretically explained by the evidence accumulation paradigm, and we demonstrate how this paradigm can be directly related or combined with conventional psycholinguistic theory and their simulatory instantiations (generally, neural network models). Then with a demonstrative application on a large new real data set, we establish how the empirical evidence accumulation approach is able to provide parameter results that are informative to leading psycholinguistic theory, and that motivate future theoretical development. Copyright © 2015 Elsevier Inc. All rights reserved.
Low-temperature transonic cooling flows in galaxy clusters
NASA Technical Reports Server (NTRS)
Sulkanen, Martin E.; Burns, Jack O.; Norman, Michael L.
1989-01-01
Calculations are presented which demonstrate that cooling flow models with large sonic radii may be consistent with observed cluster gas properties. It is found that plausible cluster parameters and cooling flow mass accretion rates can produce sonic radii of 10-20 kpc for sonic point temperatures of 1-3 x 10 to the 6th K. The numerical calculations match these cooling flows to hydrostatic atmosphere solutions for the cluster gas beyond the cooling flow region. The cooling flows produce no appreciable 'holes' in the surface brightness toward the cluster center, and the model can be made to match the observed X-ray surface brightness of three clusters in which cooling flows had been believed to be absent. It is suggested that clusters with low velocity dispersion may be the natural location for such 'cool' cooling flows, and fits of these models to the X-ray surface brightness profiles for three clusters are presented.
NASA Astrophysics Data System (ADS)
Herrera-Vega, Javier; Montero-Hernández, Samuel; Tachtsidis, Ilias; Treviño-Palacios, Carlos G.; Orihuela-Espina, Felipe
2017-11-01
Accurate estimation of brain haemodynamics parameters such as cerebral blood flow and volume as well as oxygen consumption i.e. metabolic rate of oxygen, with funcional near infrared spectroscopy (fNIRS) requires precise characterization of light propagation through head tissues. An anatomically realistic forward model of the human adult head with unprecedented detailed specification of the 5 scalp sublayers to account for blood irrigation in the connective tissue layer is introduced. The full model consists of 9 layers, accounts for optical properties ranging from 750nm to 950nm and has a voxel size of 0.5mm. The whole model is validated comparing the predicted remitted spectra, using Monte Carlo simulations of radiation propagation with 108 photons, against continuous wave (CW) broadband fNIRS experimental data. As the true oxy- and deoxy-hemoglobin concentrations during acquisition are unknown, a genetic algorithm searched for the vector of parameters that generates a modelled spectrum that optimally fits the experimental spectrum. Differences between experimental and model predicted spectra was quantified using the Root mean square error (RMSE). RMSE was 0.071 +/- 0.004, 0.108 +/- 0.018 and 0.235+/-0.015 at 1, 2 and 3cm interoptode distance respectively. The parameter vector of absolute concentrations of haemoglobin species in scalp and cortex retrieved with the genetic algorithm was within histologically plausible ranges. The new model capability to estimate the contribution of the scalp blood flow shall permit incorporating this information to the regularization of the inverse problem for a cleaner reconstruction of brain hemodynamics.
NASA Astrophysics Data System (ADS)
Bello, S. A.; Abdullah, M.; Hamid, N. S. A.; Yoshikawa, A.; Olawepo, A. O.
2017-07-01
The ionospheric thickness (B0) and shape (B1) are bottomside profile parameters introduced by the International Reference Ionosphere (IRI) model. We have validated these parameters with the latest version of the IRI-2012 model and compared them with the solar quiet of geomagnetic H-component (SqH). The B0, B1 and SqH are calculated from the measurements obtained from digisonde DPS-4 sounder and the Magnetic Data Acquisition System (MAGDAS) magnetometer, respectively at Ilorin (geo latitude 8.50°N, geo longitude 4.68°E, and Magnetic dip 4.1°S) an equatorial station in the African sector. The study was for the year 2010, a year of low solar activity (with 27-day averaged solar index, F10.7 = 80 sfu). The results show that B0 for the entire months was higher during the daytime than during the night time. On the other hand, the magnitude of B1during the daytime period is lower than nighttime values and exhibit oscillatory pattern. By comparing the experimental observations of the profile parameters with the IRI-2012 model prediction, we found that B0 was fairly represented by the IRI model options during the nighttime period while discrepancies exist between the model estimates and the experimental values during the morning till midday. A close agreement exists between the observed B1 values and IRI model options. We observed a positive and significant correlation coefficient between B0 and SqH indicating a plausible relationship between these parameters while a weak and negative correlation coefficient between B1 and SqH was observed. We concluded that the difference in the relationship of SqH and the profile parameters B0 and B1 observed can be attributed to their sensitivity to the electric field which is responsible for the E × B drift which in turn modulate the height of the F2.
NASA Astrophysics Data System (ADS)
Zhu, L. Y.; Kemple, M. D.; Landy, S. B.; Buckley, P.
The importance of dipolar cross correlation in 13C relaxation studies of molecular motion in AX 2 spin systems (A = 13C, X = 1H) was examined. Several different models for the internal motion, including two restricted-diffusion, and two-site jump models, the Kinosita model [K. Kinosita, Jr., S. Kawato, and A. Ikegami, Biophys. J.20, 289 (1977)], and an axially symmetric model, were applied through the Lipari and Szabo [ J. Am. Chem. Soc.104, 4546 (1982)] formalism to calculate errors in 13C T1, obtained from inversion-recovery measurements under proton saturation, and NOE when dipolar cross correlation is neglected. Motional parameters in the Lipari and Szabo formalism, τ m, S2, and τ e, were then determined from T1 and NOE (including the errors) and compared with parameters initially used to simulate the relaxation data. The resulting differences in the motional parameters, while model dependent, were generally small for plausible motions. At larger S2 values (≥ 0.6), the errors in both τ m and S2 were <5%. Errors in τ e increased with S2 but were usually less than 10%. Larger errors in the parameters were found for an axially symmetric model, but with τ m fixed even those were >5% only for the τ m = 1 ns, τ e = 10 ps case. Furthermore, it was observed that deviations in a given motional parameter were mostly of the same sign, which allows bounds to be set on experimentally derived parameters. Relaxation data for the peptide melittin synthesized with gly enriched with 13C at the backbone cu position and with lys enriched with 13C in the side chain were examined in light of the results of the simulations. All in all, it appears that neglect of dipolar cross correlation in 13C T1 (With proton saturation) and NOE measurements in AX 2 systems does not lead to major problems in interpretation of the results in terms of molecular motion.
Solutions for transients in arbitrarily branching cables: III. Voltage clamp problems.
Major, G
1993-07-01
Branched cable voltage recording and voltage clamp analytical solutions derived in two previous papers are used to explore practical issues concerning voltage clamp. Single exponentials can be fitted reasonably well to the decay phase of clamped synaptic currents, although they contain many underlying components. The effective time constant depends on the fit interval. The smoothing effects on synaptic clamp currents of dendritic cables and series resistance are explored with a single cylinder + soma model, for inputs with different time courses. "Soma" and "cable" charging currents cannot be separated easily when the soma is much smaller than the dendrites. Subtractive soma capacitance compensation and series resistance compensation are discussed. In a hippocampal CA1 pyramidal neurone model, voltage control at most dendritic sites is extremely poor. Parameter dependencies are illustrated. The effects of series resistance compound those of dendritic cables and depend on the "effective capacitance" of the cell. Plausible combinations of parameters can cause order-of-magnitude distortions to clamp current waveform measures of simulated Schaeffer collateral inputs. These voltage clamp problems are unlikely to be solved by the use of switch clamp methods.
Inverse Ising problem in continuous time: A latent variable approach
NASA Astrophysics Data System (ADS)
Donner, Christian; Opper, Manfred
2017-12-01
We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
Issues raised by the reference doses for perfluorooctane sulfonate and perfluorooctanoic acid.
Dong, Zhaomin; Bahar, Md Mezbaul; Jit, Joytishna; Kennedy, Bruce; Priestly, Brian; Ng, Jack; Lamb, Dane; Liu, Yanju; Duan, Luchun; Naidu, Ravi
2017-08-01
On 25th May 2016, the U.S. EPA released reference doses (RfDs) for Perfluorooctane Sulfonate (PFOS) and Perfluorooctanoic Acid (PFOA) of 20ng/kg/day, which were much more conservative than previous values. These RfDs rely on the choices of animal point of departure (PoD) and the toxicokinetics (TK) model. At this stage, considering that the human evidence is not strong enough for RfD determination, using animal data may be appropriate but with more uncertainties. In this article, the uncertainties concerning RfDs from the choices of PoD and TK models are addressed. Firstly, the candidate PoDs should include more critical endpoints (such as immunotoxicity), which may lead to lower RfDs. Secondly, the reliability of the adopted three-compartment TK model is compromised: the parameters are not non-biologically plausible; and this TK model was applied to simulate gestation and lactation exposures, while the two exposure scenarios were not actually included in the model structure. Copyright © 2017. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Halsig, Sebastian; Artz, Thomas; Iddink, Andreas; Nothnagel, Axel
2016-12-01
On its way through the atmosphere, radio signals are delayed and affected by bending and attenuation effects relative to a theoretical path in vacuum. In particular, the neutral part of the atmosphere contributes considerably to the error budget of space-geodetic observations. At the same time, space-geodetic techniques become more and more important in the understanding of the Earth's atmosphere, because atmospheric parameters can be linked to the water vapor content in the atmosphere. The tropospheric delay is usually taken into account by applying an adequate model for the hydrostatic component and by additionally estimating zenith wet delays for the highly variable wet component. Sometimes, the Ordinary Least Squares (OLS) approach leads to negative estimates, which would be equivalent to negative water vapor in the atmosphere and does, of course, not reflect meteorological and physical conditions in a plausible way. To cope with this phenomenon, we introduce an Inequality Constrained Least Squares (ICLS) method from the field of convex optimization and use inequality constraints to force the tropospheric parameters to be non-negative allowing for a more realistic tropospheric parameter estimation in a meteorological sense. Because deficiencies in the a priori hydrostatic modeling are almost fully compensated by the tropospheric estimates, the ICLS approach urgently requires suitable a priori hydrostatic delays. In this paper, we briefly describe the ICLS method and validate its impact with regard to station positions.
Generalized gas-solid adsorption modeling: Single-component equilibria
Ladshaw, Austin; Yiacoumi, Sotira; Tsouris, Costas; ...
2015-01-07
Over the last several decades, modeling of gas–solid adsorption at equilibrium has generally been accomplished through the use of isotherms such as the Freundlich, Langmuir, Tóth, and other similar models. While these models are relatively easy to adapt for describing experimental data, their simplicity limits their generality to be used with many different sets of data. This limitation forces engineers and scientists to test each different model in order to evaluate which one can best describe their data. Additionally, the parameters of these models all have a different physical interpretation, which may have an effect on how they can bemore » further extended into kinetic, thermodynamic, and/or mass transfer models for engineering applications. Therefore, it is paramount to adopt not only a more general isotherm model, but also a concise methodology to reliably optimize for and obtain the parameters of that model. A model of particular interest is the Generalized Statistical Thermodynamic Adsorption (GSTA) isotherm. The GSTA isotherm has enormous flexibility, which could potentially be used to describe a variety of different adsorption systems, but utilizing this model can be fairly difficult due to that flexibility. To circumvent this complication, a comprehensive methodology and computer code has been developed that can perform a full equilibrium analysis of adsorption data for any gas-solid system using the GSTA model. The code has been developed in C/C++ and utilizes a Levenberg–Marquardt’s algorithm to handle the non-linear optimization of the model parameters. Since the GSTA model has an adjustable number of parameters, the code iteratively goes through all number of plausible parameters for each data set and then returns the best solution based on a set of scrutiny criteria. Data sets at different temperatures are analyzed serially and then linear correlations with temperature are made for the parameters of the model. The end result is a full set of optimal GSTA parameters, both dimensional and non-dimensional, as well as the corresponding thermodynamic parameters necessary to predict the behavior of the system at temperatures for which data were not available. It will be shown that this code, utilizing the GSTA model, was able to describe a wide variety of gas-solid adsorption systems at equilibrium.In addition, a physical interpretation of these results will be provided, as well as an alternate derivation of the GSTA model, which intends to reaffirm the physical meaning.« less
How much expert knowledge is it worth to put in conceptual hydrological models?
NASA Astrophysics Data System (ADS)
Antonetti, Manuel; Zappa, Massimiliano
2017-04-01
Both modellers and experimentalists agree on using expert knowledge to improve our conceptual hydrological simulations on ungauged basins. However, they use expert knowledge differently for both hydrologically mapping the landscape and parameterising a given hydrological model. Modellers use generally very simplified (e.g. topography-based) mapping approaches and put most of the knowledge for constraining the model by defining parameter and process relational rules. In contrast, experimentalists tend to invest all their detailed and qualitative knowledge about processes to obtain a spatial distribution of areas with different dominant runoff generation processes (DRPs) as realistic as possible, and for defining plausible narrow value ranges for each model parameter. Since, most of the times, the modelling goal is exclusively to simulate runoff at a specific site, even strongly simplified hydrological classifications can lead to satisfying results due to equifinality of hydrological models, overfitting problems and the numerous uncertainty sources affecting runoff simulations. Therefore, to test to which extent expert knowledge can improve simulation results under uncertainty, we applied a typical modellers' modelling framework relying on parameter and process constraints defined based on expert knowledge to several catchments on the Swiss Plateau. To map the spatial distribution of the DRPs, mapping approaches with increasing involvement of expert knowledge were used. Simulation results highlighted the potential added value of using all the expert knowledge available on a catchment. Also, combinations of event types and landscapes, where even a simplified mapping approach can lead to satisfying results, were identified. Finally, the uncertainty originated by the different mapping approaches was compared with the one linked to meteorological input data and catchment initial conditions.
Xue, Ling; Holford, Nick; Ding, Xiao-Liang; Shen, Zhen-Ya; Huang, Chen-Rong; Zhang, Hua; Zhang, Jing-Jing; Guo, Zhe-Ning; Xie, Cheng; Zhou, Ling; Chen, Zhi-Yao; Liu, Lin-Sheng; Miao, Li-Yan
2017-04-01
The aims of this study are to apply a theory-based mechanistic model to describe the pharmacokinetics (PK) and pharmacodynamics (PD) of S- and R-warfarin. Clinical data were obtained from 264 patients. Total concentrations for S- and R-warfarin were measured by ultra-high performance liquid tandem mass spectrometry. Genotypes were measured using pyrosequencing. A sequential population PK parameter with data method was used to describe the international normalized ratio (INR) time course. Data were analyzed with NONMEM. Model evaluation was based on parameter plausibility and prediction-corrected visual predictive checks. Warfarin PK was described using a one-compartment model. CYP2C9 *1/*3 genotype had reduced clearance for S-warfarin, but increased clearance for R-warfarin. The in vitro parameters for the relationship between prothrombin complex activity (PCA) and INR were markedly different (A = 0.560, B = 0.386) from the theory-based values (A = 1, B = 0). There was a small difference between healthy subjects and patients. A sigmoid E max PD model inhibiting PCA synthesis as a function of S-warfarin concentration predicted INR. Small R-warfarin effects was described by competitive antagonism of S-warfarin inhibition. Patients with VKORC1 AA and CYP4F2 CC or CT genotypes had lower C50 for S-warfarin. A theory-based PKPD model describes warfarin concentrations and clinical response. Expected PK and PD genotype effects were confirmed. The role of predicted fat free mass with theory-based allometric scaling of PK parameters was identified. R-warfarin had a minor effect compared with S-warfarin on PCA synthesis. INR is predictable from 1/PCA in vivo. © 2016 The British Pharmacological Society.
Xue, Ling; Holford, Nick; Ding, Xiao‐liang; Shen, Zhen‐ya; Huang, Chen‐rong; Zhang, Hua; Zhang, Jing‐jing; Guo, Zhe‐ning; Xie, Cheng; Zhou, Ling; Chen, Zhi‐yao; Liu, Lin‐sheng
2016-01-01
Aims The aims of this study are to apply a theory‐based mechanistic model to describe the pharmacokinetics (PK) and pharmacodynamics (PD) of S‐ and R‐warfarin. Methods Clinical data were obtained from 264 patients. Total concentrations for S‐ and R‐warfarin were measured by ultra‐high performance liquid tandem mass spectrometry. Genotypes were measured using pyrosequencing. A sequential population PK parameter with data method was used to describe the international normalized ratio (INR) time course. Data were analyzed with NONMEM. Model evaluation was based on parameter plausibility and prediction‐corrected visual predictive checks. Results Warfarin PK was described using a one‐compartment model. CYP2C9 *1/*3 genotype had reduced clearance for S‐warfarin, but increased clearance for R‐warfarin. The in vitro parameters for the relationship between prothrombin complex activity (PCA) and INR were markedly different (A = 0.560, B = 0.386) from the theory‐based values (A = 1, B = 0). There was a small difference between healthy subjects and patients. A sigmoid Emax PD model inhibiting PCA synthesis as a function of S‐warfarin concentration predicted INR. Small R‐warfarin effects was described by competitive antagonism of S‐warfarin inhibition. Patients with VKORC1 AA and CYP4F2 CC or CT genotypes had lower C50 for S‐warfarin. Conclusion A theory‐based PKPD model describes warfarin concentrations and clinical response. Expected PK and PD genotype effects were confirmed. The role of predicted fat free mass with theory‐based allometric scaling of PK parameters was identified. R‐warfarin had a minor effect compared with S‐warfarin on PCA synthesis. INR is predictable from 1/PCA in vivo. PMID:27763679
NASA Astrophysics Data System (ADS)
Lombardi, D.; Sinatra, G. M.
2013-12-01
Critical evaluation and plausibility reappraisal of scientific explanations have been underemphasized in many science classrooms (NRC, 2012). Deep science learning demands that students increase their ability to critically evaluate the quality of scientific knowledge, weigh alternative explanations, and explicitly reappraise their plausibility judgments. Therefore, this lack of instruction about critical evaluation and plausibility reappraisal has, in part, contributed to diminished understanding about complex and controversial topics, such as global climate change. The Model-Evidence Link (MEL) diagram (originally developed by researchers at Rutgers University under an NSF-supported project; Chinn & Buckland, 2012) is an instructional scaffold that promotes students to critically evaluate alternative explanations. We recently developed a climate change MEL and found that the students who used the MEL experienced a significant shift in their plausibility judgments toward the scientifically accepted model of human-induced climate change. Using the MEL for instruction also resulted in conceptual change about the causes of global warming that reflected greater understanding of fundamental scientific principles. Furthermore, students sustained this conceptual change six months after MEL instruction (Lombardi, Sinatra, & Nussbaum, 2013). This presentation will discuss recent educational research that supports use of the MEL to promote critical evaluation, plausibility reappraisal, and conceptual change, and also, how the MEL may be particularly effective for learning about global climate change and other socio-scientific topics. Such instruction to develop these fundamental thinking skills (e.g., critical evaluation and plausibility reappraisal) is demanded by both the Next Generation Science Standards (Achieve, 2013) and the Common Core State Standards for English Language Arts and Mathematics (CCSS Initiative-ELA, 2010; CCSS Initiative-Math, 2010), as well as a society that is equipped to deal with challenges in a way that is beneficial to our national and global community.
Fertility, Human Capital, and Economic Growth over the Demographic Transition
Mason, Andrew
2009-01-01
Do low fertility and population aging lead to economic decline if couples have fewer children, but invest more in each child? By addressing this question, this article extends previous work in which the authors show that population aging leads to an increased demand for wealth that can, under some conditions, lead to increased capital per worker and higher per capita consumption. This article is based on an overlapping generations (OLG) model which highlights the quantity–quality tradeoff and the links between human capital investment and economic growth. It incorporates new national level estimates of human capital investment produced by the National Transfer Accounts project. Simulation analysis is employed to show that, even in the absence of the capital dilution effect, low fertility leads to higher per capita consumption through human capital accumulation, given plausible model parameters. PMID:20495605
A two component model for thermal emission from organic grains in Comet Halley
NASA Technical Reports Server (NTRS)
Chyba, Christopher; Sagan, Carl
1988-01-01
Observations of Comet Halley in the near infrared reveal a triple-peaked emission feature near 3.4 micrometer, characteristic of C-H stretching in hydrocarbons. A variety of plausible cometary materials exhibit these features, including the organic residue of irradiated candidate cometary ices (such as the residue of irradiated methane ice clathrate, and polycyclic aromatic hydrocarbons. Indeed, any molecule containing -CH3 and -CH2 alkanes will emit at 3.4 micrometer under suitable conditions. Therefore tentative identifications must rest on additional evidence, including a plausible account of the origins of the organic material, a plausible model for the infrared emission of this material, and a demonstration that this conjunction of material and model not only matches the 3 to 4 micrometer spectrum, but also does not yield additional emission features where none is observed. In the case of the residue of irradiated low occupancy methane ice clathrate, it is argued that the lab synthesis of the organic residue well simulates the radiation processing experienced by Comet Halley.
Non-Local Thermodynamic Equilibrium Spectrum Synthesis of Type IA Supernovae
NASA Astrophysics Data System (ADS)
Nugent, Peter Edward
1997-09-01
Type Ia supernovae (SNe Ia) are valuable distance indicators for cosmology and the elements they eject are are important for nucleosynthesis. They appear to be thermonuclear disruptions of carbon-oxygen white dwarfs that accrete from companion stars until they approach the Chandrasekbar mass, and there is a suspicion that the propagation of the nuclear burning front involves a transition from a deflagration to a detonation. Detailed modeling of the atmospheres and spectra of SNe Ia is needed to advance our understanding of SNe Ia. Comparison of synthetic and observed spectra provides information on the temperature, density, velocity, and composition of the ejected matter and thus constrain hydrodynamical models. In addition, the expanding photosphere method yields distances to individual events that are independent of distances based on the decay of 56Ni in SNe Ia and of Cepheid variable stars in the parent galaxies. This thesis is broken down into 4 major sections, each highlighting a different way with which to use spectrum synthesis to analyze SNe Ia. Chapters 2 and 3 look at normal SNe Ia and their potential use as distance indicators using SEAM. Chapter 4 examines spectral correlations with luminosity in SNe Ia and provides a plausible explanation for these correlations via spectrum synthesis. In Chapter 5 the spectra of various hydrodynamical models are calculated in an effort to answer the question of which current progenitor/explosion model is the most plausible for a SN Ia. Finally, we look at the importance of NLTE calculations and line identifications in Chapter 6. Also included are two appendices which contain more technical information concerning γ-ray deposition and the thermalization parameter.
Inferring Nonlinear Neuronal Computation Based on Physiologically Plausible Inputs
McFarland, James M.; Cui, Yuwei; Butts, Daniel A.
2013-01-01
The computation represented by a sensory neuron's response to stimuli is constructed from an array of physiological processes both belonging to that neuron and inherited from its inputs. Although many of these physiological processes are known to be nonlinear, linear approximations are commonly used to describe the stimulus selectivity of sensory neurons (i.e., linear receptive fields). Here we present an approach for modeling sensory processing, termed the Nonlinear Input Model (NIM), which is based on the hypothesis that the dominant nonlinearities imposed by physiological mechanisms arise from rectification of a neuron's inputs. Incorporating such ‘upstream nonlinearities’ within the standard linear-nonlinear (LN) cascade modeling structure implicitly allows for the identification of multiple stimulus features driving a neuron's response, which become directly interpretable as either excitatory or inhibitory. Because its form is analogous to an integrate-and-fire neuron receiving excitatory and inhibitory inputs, model fitting can be guided by prior knowledge about the inputs to a given neuron, and elements of the resulting model can often result in specific physiological predictions. Furthermore, by providing an explicit probabilistic model with a relatively simple nonlinear structure, its parameters can be efficiently optimized and appropriately regularized. Parameter estimation is robust and efficient even with large numbers of model components and in the context of high-dimensional stimuli with complex statistical structure (e.g. natural stimuli). We describe detailed methods for estimating the model parameters, and illustrate the advantages of the NIM using a range of example sensory neurons in the visual and auditory systems. We thus present a modeling framework that can capture a broad range of nonlinear response functions while providing physiologically interpretable descriptions of neural computation. PMID:23874185
NASA Astrophysics Data System (ADS)
Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.
2011-12-01
The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis of data over the Southern Great Plains (SGP), where abundant observational data from various sources was available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications to UQ and parameter tuning in global and regional models. A stochastic important-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA) was employed to efficiently sample the input parameters in the KF scheme based on a skill score so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when five optimal parameters identified by the MVFSA algorithm were used. The model performance was found to be sensitive to downdraft- and entrainment-related parameters and consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained at 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that benefits of optimal parameters determined through vigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
NASA Astrophysics Data System (ADS)
Qian, Y.; Yang, B.; Lin, G.; Leung, R.; Zhang, Y.
2012-04-01
The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. The latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis of data over the Southern Great Plains (SGP), where abundant observational data from various sources was available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications to UQ and parameter tuning in global and regional models. A stochastic important-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA) was employed to efficiently sample the input parameters in the KF scheme based on a skill score so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when five optimal parameters identified by the MVFSA algorithm were used. The model performance was found to be sensitive to downdraft- and entrainment-related parameters and consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained at 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that benefits of optimal parameters determined through vigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
NASA Astrophysics Data System (ADS)
Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.
2012-03-01
The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis of data over the Southern Great Plains (SGP), where abundant observational data from various sources was available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications to UQ and parameter tuning in global and regional models. A stochastic importance sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA) was employed to efficiently sample the input parameters in the KF scheme based on a skill score so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when five optimal parameters identified by the MVFSA algorithm were used. The model performance was found to be sensitive to downdraft- and entrainment-related parameters and consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained at 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e. the North America monsoon region). These results suggest that benefits of optimal parameters determined through vigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
Earthquake sequence simulations with measured properties for JFAST core samples
NASA Astrophysics Data System (ADS)
Noda, Hiroyuki; Sawai, Michiyo; Shibazaki, Bunichiro
2017-08-01
Since the 2011 Tohoku-Oki earthquake, multi-disciplinary observational studies have promoted our understanding of both the coseismic and long-term behaviour of the Japan Trench subduction zone. We also have suggestions for mechanical properties of the fault from the experimental side. In the present study, numerical models of earthquake sequences are presented, accounting for the experimental outcomes and being consistent with observations of both long-term and coseismic fault behaviour and thermal measurements. Among the constraints, a previous study of friction experiments for samples collected in the Japan Trench Fast Drilling Project (JFAST) showed complex rate dependences: a and a-b values change with the slip rate. In order to express such complexity, we generalize a rate- and state-dependent friction law to a quadratic form in terms of the logarithmic slip rate. The constraints from experiments reduced the degrees of freedom of the model significantly, and we managed to find a plausible model by changing only a few parameters. Although potential scale effects between lab experiments and natural faults are important problems, experimental data may be useful as a guide in exploring the huge model parameter space. This article is part of the themed issue 'Faulting, friction and weakening: from slow to fast motion'.
Rational decision-making in inhibitory control.
Shenoy, Pradeep; Yu, Angela J
2011-01-01
An important aspect of cognitive flexibility is inhibitory control, the ability to dynamically modify or cancel planned actions in response to changes in the sensory environment or task demands. We formulate a probabilistic, rational decision-making framework for inhibitory control in the stop signal paradigm. Our model posits that subjects maintain a Bayes-optimal, continually updated representation of sensory inputs, and repeatedly assess the relative value of stopping and going on a fine temporal scale, in order to make an optimal decision on when and whether to go on each trial. We further posit that they implement this continual evaluation with respect to a global objective function capturing the various reward and penalties associated with different behavioral outcomes, such as speed and accuracy, or the relative costs of stop errors and go errors. We demonstrate that our rational decision-making model naturally gives rise to basic behavioral characteristics consistently observed for this paradigm, as well as more subtle effects due to contextual factors such as reward contingencies or motivational factors. Furthermore, we show that the classical race model can be seen as a computationally simpler, perhaps neurally plausible, approximation to optimal decision-making. This conceptual link allows us to predict how the parameters of the race model, such as the stopping latency, should change with task parameters and individual experiences/ability.
Rational Decision-Making in Inhibitory Control
Shenoy, Pradeep; Yu, Angela J.
2011-01-01
An important aspect of cognitive flexibility is inhibitory control, the ability to dynamically modify or cancel planned actions in response to changes in the sensory environment or task demands. We formulate a probabilistic, rational decision-making framework for inhibitory control in the stop signal paradigm. Our model posits that subjects maintain a Bayes-optimal, continually updated representation of sensory inputs, and repeatedly assess the relative value of stopping and going on a fine temporal scale, in order to make an optimal decision on when and whether to go on each trial. We further posit that they implement this continual evaluation with respect to a global objective function capturing the various reward and penalties associated with different behavioral outcomes, such as speed and accuracy, or the relative costs of stop errors and go errors. We demonstrate that our rational decision-making model naturally gives rise to basic behavioral characteristics consistently observed for this paradigm, as well as more subtle effects due to contextual factors such as reward contingencies or motivational factors. Furthermore, we show that the classical race model can be seen as a computationally simpler, perhaps neurally plausible, approximation to optimal decision-making. This conceptual link allows us to predict how the parameters of the race model, such as the stopping latency, should change with task parameters and individual experiences/ability. PMID:21647306
Earthquake sequence simulations with measured properties for JFAST core samples.
Noda, Hiroyuki; Sawai, Michiyo; Shibazaki, Bunichiro
2017-09-28
Since the 2011 Tohoku-Oki earthquake, multi-disciplinary observational studies have promoted our understanding of both the coseismic and long-term behaviour of the Japan Trench subduction zone. We also have suggestions for mechanical properties of the fault from the experimental side. In the present study, numerical models of earthquake sequences are presented, accounting for the experimental outcomes and being consistent with observations of both long-term and coseismic fault behaviour and thermal measurements. Among the constraints, a previous study of friction experiments for samples collected in the Japan Trench Fast Drilling Project (JFAST) showed complex rate dependences: a and a - b values change with the slip rate. In order to express such complexity, we generalize a rate- and state-dependent friction law to a quadratic form in terms of the logarithmic slip rate. The constraints from experiments reduced the degrees of freedom of the model significantly, and we managed to find a plausible model by changing only a few parameters. Although potential scale effects between lab experiments and natural faults are important problems, experimental data may be useful as a guide in exploring the huge model parameter space.This article is part of the themed issue 'Faulting, friction and weakening: from slow to fast motion'. © 2017 The Author(s).
NASA Astrophysics Data System (ADS)
Rödiger, T.; Geyer, S.; Mallast, U.; Merz, R.; Krause, P.; Fischer, C.; Siebert, C.
2014-02-01
A key factor for sustainable management of groundwater systems is the accurate estimation of groundwater recharge. Hydrological models are common tools for such estimations and widely used. As such models need to be calibrated against measured values, the absence of adequate data can be problematic. We present a nested multi-response calibration approach for a semi-distributed hydrological model in the semi-arid catchment of Wadi al Arab in Jordan, with sparsely available runoff data. The basic idea of the calibration approach is to use diverse observations in a nested strategy, in which sub-parts of the model are calibrated to various observation data types in a consecutive manner. First, the available different data sources have to be screened for information content of processes, e.g. if data sources contain information on mean values, spatial or temporal variability etc. for the entire catchment or only sub-catchments. In a second step, the information content has to be mapped to relevant model components, which represent these processes. Then the data source is used to calibrate the respective subset of model parameters, while the remaining model parameters remain unchanged. This mapping is repeated for other available data sources. In that study the gauged spring discharge (GSD) method, flash flood observations and data from the chloride mass balance (CMB) are used to derive plausible parameter ranges for the conceptual hydrological model J2000g. The water table fluctuation (WTF) method is used to validate the model. Results from modelling using a priori parameter values from literature as a benchmark are compared. The estimated recharge rates of the calibrated model deviate less than ±10% from the estimates derived from WTF method. Larger differences are visible in the years with high uncertainties in rainfall input data. The performance of the calibrated model during validation produces better results than applying the model with only a priori parameter values. The model with a priori parameter values from literature tends to overestimate recharge rates with up to 30%, particular in the wet winter of 1991/1992. An overestimation of groundwater recharge and hence available water resources clearly endangers reliable water resource managing in water scarce region. The proposed nested multi-response approach may help to better predict water resources despite data scarcity.
NASA Astrophysics Data System (ADS)
Miki, K.; Panesi, M.; Prudencio, E. E.; Prudhomme, S.
2012-05-01
The objective in this paper is to analyze some stochastic models for estimating the ionization reaction rate constant of atomic Nitrogen (N + e- → N+ + 2e-). Parameters of the models are identified by means of Bayesian inference using spatially resolved absolute radiance data obtained from the Electric Arc Shock Tube (EAST) wind-tunnel. The proposed methodology accounts for uncertainties in the model parameters as well as physical model inadequacies, providing estimates of the rate constant that reflect both types of uncertainties. We present four different probabilistic models by varying the error structure (either additive or multiplicative) and by choosing different descriptions of the statistical correlation among data points. In order to assess the validity of our methodology, we first present some calibration results obtained with manufactured data and then proceed by using experimental data collected at EAST experimental facility. In order to simulate the radiative signature emitted in the shock-heated air plasma, we use a one-dimensional flow solver with Park's two-temperature model that simulates non-equilibrium effects. We also discuss the implications of the choice of the stochastic model on the estimation of the reaction rate and its uncertainties. Our analysis shows that the stochastic models based on correlated multiplicative errors are the most plausible models among the four models proposed in this study. The rate of the atomic Nitrogen ionization is found to be (6.2 ± 3.3) × 1011 cm3 mol-1 s-1 at 10,000 K.
Biologically Plausible, Human-Scale Knowledge Representation.
Crawford, Eric; Gingerich, Matthew; Eliasmith, Chris
2016-05-01
Several approaches to implementing symbol-like representations in neurally plausible models have been proposed. These approaches include binding through synchrony (Shastri & Ajjanagadde, ), "mesh" binding (van der Velde & de Kamps, ), and conjunctive binding (Smolensky, ). Recent theoretical work has suggested that most of these methods will not scale well, that is, that they cannot encode structured representations using any of the tens of thousands of terms in the adult lexicon without making implausible resource assumptions. Here, we empirically demonstrate that the biologically plausible structured representations employed in the Semantic Pointer Architecture (SPA) approach to modeling cognition (Eliasmith, ) do scale appropriately. Specifically, we construct a spiking neural network of about 2.5 million neurons that employs semantic pointers to successfully encode and decode the main lexical relations in WordNet, which has over 100,000 terms. In addition, we show that the same representations can be employed to construct recursively structured sentences consisting of arbitrary WordNet concepts, while preserving the original lexical structure. We argue that these results suggest that semantic pointers are uniquely well-suited to providing a biologically plausible account of the structured representations that underwrite human cognition. Copyright © 2015 Cognitive Science Society, Inc.
A Biomass-based Model to Estimate the Plausibility of Exoplanet Biosignature Gases
NASA Astrophysics Data System (ADS)
Seager, S.; Bains, W.; Hu, R.
2013-10-01
Biosignature gas detection is one of the ultimate future goals for exoplanet atmosphere studies. We have created a framework for linking biosignature gas detectability to biomass estimates, including atmospheric photochemistry and biological thermodynamics. The new framework is intended to liberate predictive atmosphere models from requiring fixed, Earth-like biosignature gas source fluxes. New biosignature gases can be considered with a check that the biomass estimate is physically plausible. We have validated the models on terrestrial production of NO, H2S, CH4, CH3Cl, and DMS. We have applied the models to propose NH3 as a biosignature gas on a "cold Haber World," a planet with a N2-H2 atmosphere, and to demonstrate why gases such as CH3Cl must have too large of a biomass to be a plausible biosignature gas on planets with Earth or early-Earth-like atmospheres orbiting a Sun-like star. To construct the biomass models, we developed a functional classification of biosignature gases, and found that gases (such as CH4, H2S, and N2O) produced from life that extracts energy from chemical potential energy gradients will always have false positives because geochemistry has the same gases to work with as life does, and gases (such as DMS and CH3Cl) produced for secondary metabolic reasons are far less likely to have false positives but because of their highly specialized origin are more likely to be produced in small quantities. The biomass model estimates are valid to one or two orders of magnitude; the goal is an independent approach to testing whether a biosignature gas is plausible rather than a precise quantification of atmospheric biosignature gases and their corresponding biomasses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKone, T.E.; Enoch, K.G.
2002-08-01
CalTOX has been developed as a set of spreadsheet models and spreadsheet data sets to assist in assessing human exposures from continuous releases to multiple environmental media, i.e. air, soil, and water. It has also been used for waste classification and for setting soil clean-up levels at uncontrolled hazardous wastes sites. The modeling components of CalTOX include a multimedia transport and transformation model, multi-pathway exposure scenario models, and add-ins to quantify and evaluate uncertainty and variability. All parameter values used as inputs to CalTOX are distributions, described in terms of mean values and a coefficient of variation, rather than asmore » point estimates or plausible upper values such as most other models employ. This probabilistic approach allows both sensitivity and uncertainty analyses to be directly incorporated into the model operation. This manual provides CalTOX users with a brief overview of the CalTOX spreadsheet model and provides instructions for using the spreadsheet to make deterministic and probabilistic calculations of source-dose-risk relationships.« less
A model-averaging method for assessing groundwater conceptual model uncertainty.
Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M
2010-01-01
This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.
Marcano, Mariano; Layton, Anita T; Layton, Harold E
2010-02-01
In a mathematical model of the urine concentrating mechanism of the inner medulla of the rat kidney, a nonlinear optimization technique was used to estimate parameter sets that maximize the urine-to-plasma osmolality ratio (U/P) while maintaining the urine flow rate within a plausible physiologic range. The model, which used a central core formulation, represented loops of Henle turning at all levels of the inner medulla and a composite collecting duct (CD). The parameters varied were: water flow and urea concentration in tubular fluid entering the descending thin limbs and the composite CD at the outer-inner medullary boundary; scaling factors for the number of loops of Henle and CDs as a function of medullary depth; location and increase rate of the urea permeability profile along the CD; and a scaling factor for the maximum rate of NaCl transport from the CD. The optimization algorithm sought to maximize a quantity E that equaled U/P minus a penalty function for insufficient urine flow. Maxima of E were sought by changing parameter values in the direction in parameter space in which E increased. The algorithm attained a maximum E that increased urine osmolality and inner medullary concentrating capability by 37.5% and 80.2%, respectively, above base-case values; the corresponding urine flow rate and the concentrations of NaCl and urea were all within or near reported experimental ranges. Our results predict that urine osmolality is particularly sensitive to three parameters: the urea concentration in tubular fluid entering the CD at the outer-inner medullary boundary, the location and increase rate of the urea permeability profile along the CD, and the rate of decrease of the CD population (and thus of CD surface area) along the cortico-medullary axis.
Assessing compatibility of direct detection data: halo-independent global likelihood analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.
2016-10-18
We present two different halo-independent methods to assess the compatibility of several direct dark matter detection data sets for a given dark matter model using a global likelihood consisting of at least one extended likelihood and an arbitrary number of Gaussian or Poisson likelihoods. In the first method we find the global best fit halo function (we prove that it is a unique piecewise constant function with a number of down steps smaller than or equal to a maximum number that we compute) and construct a two-sided pointwise confidence band at any desired confidence level, which can then be comparedmore » with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a “constrained parameter goodness-of-fit” test statistic, whose p-value we then use to define a “plausibility region” (e.g. where p≥10%). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (e.g. p<10%). We illustrate these methods by applying them to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic spin-independent isospin-conserving interactions or exothermic spin-independent isospin-violating interactions.« less
Compositional Evolution of Saturn's Rings Due to Meteoroid Bombardment
NASA Technical Reports Server (NTRS)
Cuzzi, J.; Estrada, P.; Young, Richard E. (Technical Monitor)
1997-01-01
In this paper we address the question of compositional evolution in planetary ring systems subsequent to meteoroid bombardment. The huge surface area to mass ratio of planetary rings ensures that this is an important process, even with current uncertainties on the meteoroid flux. We develop a new model which includes both direct deposition of extrinsic meteoritic "pollutants", and ballistic transport of the increasingly polluted ring material as impact ejecta. Our study includes detailed radiative transfer modeling of ring particle spectral reflectivities based on refractive indices of realistic constituents. Voyager data have shown that the lower optical depth regions in Saturn's rings (the C ring and Cassini Division) have darker and less red particles than the optically thicken A and B rings. These coupled structural-compositional groupings have never been explained; we present and explore the hypothesis that global scale color and compositional differences in the main rings of Saturn arise naturally from extrinsic meteoroid bombardment of a ring system which was initially composed primarily, but not entirely, of water ice. We find that the regional color and albedo differences can be understood if all ring material was initially identical (primarily water ice, based on other data, but colored by tiny amounts of intrinsic reddish, plausibly organic, absorber) and then evolved entirely by addition and mixing of extrinsic, nearly neutrally colored. plausibly carbonaceous material. We further demonstrate that the detailed radial profile of color across the abrupt B ring - C ring boundary can.constrain key unknown parameters in the model. Using new alternates of parameter values, we estimate the duration of the exposure to extrinsic meteoroid flux of this part of the rings, at least, to be on the order of 10(exp 8) years. This conclusion is easily extended by inference to the Cassini Division and its surroundings as well. This geologically young "age" is compatible with timescales estimated elsewhere based on the evolution of ring structure due to ballistic transport, and also with other "short timescales" estimated on the grounds of gravitational torques. However, uncertainty in the flux of interplanetary debris and in the ejects yield may preclude ruling out a ring age as old as the solar system at this time.
NASA Astrophysics Data System (ADS)
Pasari, S.; Kundu, D.; Dikshit, O.
2012-12-01
Earthquake recurrence interval is one of the important ingredients towards probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull and lognormal distributions are quite established probability models in this recurrence interval estimation. However, they have certain shortcomings too. Thus, it is imperative to search for some alternative sophisticated distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative of the afore-mentioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull distribution. Despite of its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with gamma and Weibull family. Unlike gamma distribution, the hazard function of generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To contemplate the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning for the period 1846 to 1995 from North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using maximum likelihood estimator (MLE) and method of moment estimator (MOME). No geological or geophysical evidences have been considered in this calculation. The estimated conditional probability reaches quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above data events more closely compared to the conventional models and hence it is tentatively concluded that generalized exponential distribution can be effectively considered in earthquake recurrence studies.
Kim, Steven B; Kodell, Ralph L; Moon, Hojin
2014-03-01
In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1-10%. Although multiple dose-response models may be able to fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can be substantially different from model to model. In this respect, model averaging (MA) provides more robustness than a single dose-response model in the point and interval estimation of an ED. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in a model space. A plausible set of models for MA can be characterized by goodness of fit and diversity surrounding the truth. We propose a diversity index (DI) to balance between these two characteristics in model space selection. It addresses a collective property of a model space rather than individual performance of each model. Tuning parameters in the DI control the size of the model space for MA. © 2013 Society for Risk Analysis.
Primeval galaxies in the sub-mm and mm
NASA Technical Reports Server (NTRS)
Bond, J. Richard; Myers, Steven T.
1993-01-01
Although the results of COBE's FIRAS experiment 1 constrain the deviation in energy from the CMB blackbody in the 500-5000 micron range to be delta E/E, sub cmb less than 0.005, primeval galaxies can still lead to a brilliant sub-mm sky of non-Gaussian sources that are detectable at 10 inch resolution from planned arrays such as SCUBA on the James Clerk Maxwell Telescope and, quite plausibly, at sub-arcsecond resolution in planned mm and sub-mm interferometers. Here, we apply our hierarchical peaks method to a CDM model to construct sub-mm and mm maps of bursting PG's appropriate for these instruments with minimum contours chosen to correspond to realistic observational parameters for them and which pass the FIRAS limits.
AIDA - from Airborne Data Inversion to In-Depth Analysis
NASA Astrophysics Data System (ADS)
Meyer, U.; Goetze, H.; Schroeder, M.; Boerner, R.; Tezkan, B.; Winsemann, J.; Siemon, B.; Alvers, M.; Stoll, J. B.
2011-12-01
The rising competition in land use especially between water economy, agriculture, forestry, building material economy and other industries often leads to irreversible deterioration in the water and soil system (as salinization and degradation) which results in a long term damage of natural resources. A sustainable exploitation of the near subsurface by industry, economy and private households is a fundamental demand of a modern society. To fulfill this demand, a sound and comprehensive knowledge on structures and processes of the near subsurface is an important prerequisite. A spatial survey of the usable underground by aerogeophysical means and a subsequent ground geophysics survey targeted at special locations will deliver essential contributions within short time that make it possible to gain the needed additional knowledge. The complementary use of airborne and ground geophysics as well as the validation, assimilation and improvement of current findings by geological and hydrogeological investigations and plausibility tests leads to the following key questions: a) Which new and/or improved automatic algorithms (joint inversion, data assimilation and such) are useful to describe the structural setting of the usable subsurface by user specific characteristics as i.e. water volume, layer thicknesses, porosities etc.? b) What are the physical relations of the measured parameters (as electrical conductivities, magnetic susceptibilities, densities, etc.)? c) How can we deduce characteristics or parameters from the observations which describe near subsurface structures as ground water systems, their charge, discharge and recharge, vulnerabilities and other quantities? d) How plausible and realistic are the numerically obtained results in relation to user specific questions and parameters? e) Is it possible to compile material flux balances that describe spatial and time dependent impacts of environmental changes on aquifers and soils by repeated airborne surveys? In order to follow up these questions raised the project aims to achieve the following goals: a) Development of new and expansion of existent inversion strategies to improve structural parameter information on different space and time scales. b) Development, modification, and tests for a multi-parameter inversion (joint inversion). c) Development of new quantitative approaches in data assimilation and plausibility studies. d) Compilation of optimized work flows for fast employment by end users. e) Primary goal is to solve comparable society related problems (as salinization, erosion, contamination, degradation etc.) in regions within Germany and abroad by generalization of project results.
Quasispecies dynamics on a network of interacting genotypes and idiotypes: formulation of the model
NASA Astrophysics Data System (ADS)
Barbosa, Valmir C.; Donangelo, Raul; Souza, Sergio R.
2015-01-01
A quasispecies is the stationary state of a set of interrelated genotypes that evolve according to the usual principles of selection and mutation. Quasispecies studies have for the most part concentrated on the possibility of errors during genotype replication and their role in promoting either the survival or the demise of the quasispecies. In a previous work, we introduced a network model of quasispecies dynamics, based on a single probability parameter (p) and capable of addressing several plausibility issues of previous models. Here we extend that model by pairing its network with another one aimed at modeling the dynamics of the immune system when confronted with the quasispecies. The new network is based on the idiotypic-network model of immunity and, together with the previous one, constitutes a network model of interacting genotypes and idiotypes. The resulting model requires further parameters and as a consequence leads to a vast phase space. We have focused on a particular niche in which it is possible to observe the trade-offs involved in the quasispecies' survival or destruction. Within this niche, we give simulation results that highlight some key preconditions for quasispecies survival. These include a minimum initial abundance of genotypes relative to that of the idiotypes and a minimum value of p. The latter, in particular, is to be contrasted with the stand-alone quasispecies network of our previous work, in which arbitrarily low values of p constitute a guarantee of quasispecies survival.
NASA Astrophysics Data System (ADS)
Hemmings, J. C. P.; Challenor, P. G.
2012-04-01
A wide variety of different plankton system models have been coupled with ocean circulation models, with the aim of understanding and predicting aspects of environmental change. However, an ability to make reliable inferences about real-world processes from the model behaviour demands a quantitative understanding of model error that remains elusive. Assessment of coupled model output is inhibited by relatively limited observing system coverage of biogeochemical components. Any direct assessment of the plankton model is further inhibited by uncertainty in the physical state. Furthermore, comparative evaluation of plankton models on the basis of their design is inhibited by the sensitivity of their dynamics to many adjustable parameters. Parameter uncertainty has been widely addressed by calibrating models at data-rich ocean sites. However, relatively little attention has been given to quantifying uncertainty in the physical fields required by the plankton models at these sites, and tendencies in the biogeochemical properties due to the effects of horizontal processes are often neglected. Here we use model twin experiments, in which synthetic data are assimilated to estimate a system's known "true" parameters, to investigate the impact of error in a plankton model's environmental input data. The experiments are supported by a new software tool, the Marine Model Optimization Testbed, designed for rigorous analysis of plankton models in a multi-site 1-D framework. Simulated errors are derived from statistical characterizations of the mixed layer depth, the horizontal flux divergence tendencies of the biogeochemical tracers and the initial state. Plausible patterns of uncertainty in these data are shown to produce strong temporal and spatial variability in the expected simulation error variance over an annual cycle, indicating variation in the significance attributable to individual model-data differences. An inverse scheme using ensemble-based estimates of the simulation error variance to allow for this environment error performs well compared with weighting schemes used in previous calibration studies, giving improved estimates of the known parameters. The efficacy of the new scheme in real-world applications will depend on the quality of statistical characterizations of the input data. Practical approaches towards developing reliable characterizations are discussed.
Laszlo, Sarah; Plaut, David C
2012-03-01
The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between explicit, computational models and physiological data collected during the performance of cognitive tasks, we developed a PDP model of visual word recognition which simulates key results from the ERP reading literature, while simultaneously being able to successfully perform lexical decision-a benchmark task for reading models. Simulations reveal that the model's success depends on the implementation of several neurally plausible features in its architecture which are sufficiently domain-general to be relevant to cognitive modeling more generally. Copyright © 2011 Elsevier Inc. All rights reserved.
Miner, Daniel C; Triesch, Jochen
2014-01-01
The neuroanatomical connectivity of cortical circuits is believed to follow certain rules, the exact origins of which are still poorly understood. In particular, numerous nonrandom features, such as common neighbor clustering, overrepresentation of reciprocal connectivity, and overrepresentation of certain triadic graph motifs have been experimentally observed in cortical slice data. Some of these data, particularly regarding bidirectional connectivity are seemingly contradictory, and the reasons for this are unclear. Here we present a simple static geometric network model with distance-dependent connectivity on a realistic scale that naturally gives rise to certain elements of these observed behaviors, and may provide plausible explanations for some of the conflicting findings. Specifically, investigation of the model shows that experimentally measured nonrandom effects, especially bidirectional connectivity, may depend sensitively on experimental parameters such as slice thickness and sampling area, suggesting potential explanations for the seemingly conflicting experimental results.
Miner, Daniel C.; Triesch, Jochen
2014-01-01
The neuroanatomical connectivity of cortical circuits is believed to follow certain rules, the exact origins of which are still poorly understood. In particular, numerous nonrandom features, such as common neighbor clustering, overrepresentation of reciprocal connectivity, and overrepresentation of certain triadic graph motifs have been experimentally observed in cortical slice data. Some of these data, particularly regarding bidirectional connectivity are seemingly contradictory, and the reasons for this are unclear. Here we present a simple static geometric network model with distance-dependent connectivity on a realistic scale that naturally gives rise to certain elements of these observed behaviors, and may provide plausible explanations for some of the conflicting findings. Specifically, investigation of the model shows that experimentally measured nonrandom effects, especially bidirectional connectivity, may depend sensitively on experimental parameters such as slice thickness and sampling area, suggesting potential explanations for the seemingly conflicting experimental results. PMID:25414647
Dual fuel gradients in uranium silicide plates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pace, B.W.
1997-08-01
Babcock & Wilcox has been able to achieve dual gradient plates with good repeatability in small lots of U{sub 3}Si{sub 2} plates. Improvements in homogeneity and other processing parameters and techniques have allowed the development of contoured fuel within the cladding. The most difficult obstacles to overcome have been the ability to evaluate the bidirectional fuel loadings in comparison to the perfect loading model and the different methods of instilling the gradients in the early compact stage. The overriding conclusion is that to control the contour of the fuel, a known relationship between the compact, the frames and final coremore » gradient must exist. Therefore, further development in the creation and control of dual gradients in fuel plates will involve arriving at a plausible gradient requirement and building the correct model between the compact configuration and the final contoured loading requirements.« less
Advances in Modal Analysis Using a Robust and Multiscale Method
NASA Astrophysics Data System (ADS)
Picard, Cécile; Frisson, Christian; Faure, François; Drettakis, George; Kry, Paul G.
2010-12-01
This paper presents a new approach to modal synthesis for rendering sounds of virtual objects. We propose a generic method that preserves sound variety across the surface of an object at different scales of resolution and for a variety of complex geometries. The technique performs automatic voxelization of a surface model and automatic tuning of the parameters of hexahedral finite elements, based on the distribution of material in each cell. The voxelization is performed using a sparse regular grid embedding of the object, which permits the construction of plausible lower resolution approximations of the modal model. We can compute the audible impulse response of a variety of objects. Our solution is robust and can handle nonmanifold geometries that include both volumetric and surface parts. We present a system which allows us to manipulate and tune sounding objects in an appropriate way for games, training simulations, and other interactive virtual environments.
New methods in hydrologic modeling and decision support for culvert flood risk under climate change
NASA Astrophysics Data System (ADS)
Rosner, A.; Letcher, B. H.; Vogel, R. M.; Rees, P. S.
2015-12-01
Assessing culvert flood vulnerability under climate change poses an unusual combination of challenges. We seek a robust method of planning for an uncertain future, and therefore must consider a wide range of plausible future conditions. Culverts in our case study area, northwestern Massachusetts, USA, are predominantly found in small, ungaged basins. The need to predict flows both at numerous sites and under numerous plausible climate conditions requires a statistical model with low data and computational requirements. We present a statistical streamflow model that is driven by precipitation and temperature, allowing us to predict flows without reliance on reference gages of observed flows. The hydrological analysis is used to determine each culvert's risk of failure under current conditions. We also explore the hydrological response to a range of plausible future climate conditions. These results are used to determine the tolerance of each culvert to future increases in precipitation. In a decision support context, current flood risk as well as tolerance to potential climate changes are used to provide a robust assessment and prioritization for culvert replacements.
Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang
2014-01-01
Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible. PMID:25745272
Absolute Parameters and Physical Nature of the Low-amplitude Contact Binary HI Draconis
NASA Astrophysics Data System (ADS)
Papageorgiou, A.; Christopoulou, P.-E.
2015-05-01
We present a detailed investigation of the low-amplitude contact binary HI Dra based on the new VRcIc CCD photometric light curves (LCs) combined with published radial velocity (RV) curves. Our completely covered LCs were analyzed using PHOEBE and revealed that HI Dra is an overcontact binary with low fill-out factor f = 24 ± 4(%) and temperature difference between the components of 330 K. Two spotted models are proposed to explain the LC asymmetry, between which the A subtype of W UMa type eclipsing systems, with a cool spot on the less massive and cooler component, proves to be more plausible on evolutionary grounds. The results and stability of the solutions were explored by heuristic scan and parameter perturbation to provide a consistent and reliable set of parameters and their errors. Our photometric modeling and RV curve solution give the following absolute parameters of the hot and cool components, respectively: Mh = 1.72 ± 0.08 {{M}⊙ } and Mc = 0.43 ± 0.02 {{M}⊙ }, Rh = 1.98 ± 0.03 {{R}⊙ } and Rc = 1.08 ± 0.02 {{R}⊙ }, and Lh = 9.6 ± 0.1 {{L}⊙ } and Lc = 2.4 ± 0.1 {{L}⊙ }. Based on these results the initial masses of the progenitors (1.11 ± 0.03 {{M}⊙ } and 2.25 ± 0.07 {{M}⊙ }, respectively) and a rough estimate of the age of the system of 2.4 Gyr are discussed.
NASA Astrophysics Data System (ADS)
Brauer, Claudia; Torfs, Paul; Teuling, Ryan; Uijlenhoet, Remko
2015-04-01
Recently, we developed the Wageningen Lowland Runoff Simulator (WALRUS) to fill the gap between complex, spatially distributed models often used in lowland catchments and simple, parametric models which have mostly been developed for mountainous catchments (Brauer et al., 2014ab). This parametric rainfall-runoff model can be used all over the world in both freely draining lowland catchments and polders with controlled water levels. The open source model code is implemented in R and can be downloaded from www.github.com/ClaudiaBrauer/WALRUS. The structure and code of WALRUS are simple, which facilitates detailed investigation of the effect of parameters on all model variables. WALRUS contains only four parameters requiring calibration; they are intended to have a strong, qualitative relation with catchment characteristics. Parameter estimation remains a challenge, however. The model structure contains three main feedbacks: (1) between groundwater and surface water; (2) between saturated and unsaturated zone; (3) between catchment wetness and (quick/slow) flowroute division. These feedbacks represent essential rainfall-runoff processes in lowland catchments, but increase the risk of parameter dependence and equifinality. Therefore, model performance should not only be judged based on a comparison between modelled and observed discharges, but also based on the plausibility of the internal modelled variables. Here, we present a method to analyse the effect of parameter values on internal model states and fluxes in a qualitative and intuitive way using interactive parallel plotting. We applied WALRUS to ten Dutch catchments with different sizes, slopes and soil types and both freely draining and polder areas. The model was run with a large number of parameter sets, which were created using Latin Hypercube Sampling. The model output was characterised in terms of several signatures, both measures of goodness of fit and statistics of internal model variables (such as the percentage of rain water travelling through the quickflow reservoir). End users can then eliminate parameter combinations with unrealistic outcomes based on expert knowledge using interactive parallel plots. In these plots, for instance, ranges can be selected for each signature and only model runs which yield signature values in these ranges are highlighted. The resulting selection of realistic parameter sets can be used for ensemble simulations. C.C. Brauer, A.J. Teuling, P.J.J.F. Torfs, R. Uijlenhoet (2014a): The Wageningen Lowland Runoff Simulator (WALRUS): a lumped rainfall-runoff model for catchments with shallow groundwater, Geoscientific Model Development, 7, 2313-2332, www.geosci-model-dev.net/7/2313/2014/gmd-7-2313-2014.pdf C.C. Brauer, P.J.J.F. Torfs, A.J. Teuling, R. Uijlenhoet (2014b): The Wageningen Lowland Runoff Simulator (WALRUS): application to the Hupsel Brook catchment and Cabauw polder, Hydrology and Earth System Sciences, 18, 4007-4028, www.hydrol-earth-syst-sci.net/18/4007/2014/hess-18-4007-2014.pdf
Resonant Tidal Excitation of Internal Waves in the Earth's Fluid Core
NASA Technical Reports Server (NTRS)
Tyler, Robert H.; Kuang, Weijia
2014-01-01
It has long been speculated that there is a stably stratified layer below the core-mantle boundary, and two recent studies have improved the constraints on the parameters describing this stratification. Here we consider the dynamical implications of this layer using a simplified model. We first show that the stratification in this surface layer has sensitive control over the rate at which tidal energy is transferred to the core. We then show that when the stratification parameters from the recent studies are used in this model, a resonant configuration arrives whereby tidal forces perform elevated rates of work in exciting core flow. Specifically, the internal wave speed derived from the two independent studies (150 and 155 m/s) are in remarkable agreement with the speed (152 m/s) required for excitation of the primary normal mode of oscillation as calculated from full solutions of the Laplace Tidal Equations applied to a reduced-gravity idealized model representing the stratified layer. In evaluating this agreement it is noteworthy that the idealized model assumed may be regarded as the most reduced representation of the stratified dynamics of the layer, in that there are no non-essential dynamical terms in the governing equations assumed. While it is certainly possible that a more realistic treatment may require additional dynamical terms or coupling, it is also clear that this reduced representation includes no freedom for coercing the correlation described. This suggests that one must accept either (1) that tidal forces resonantly excite core flow and this is predicted by a simple model or (2) that either the independent estimates or the dynamical model does not accurately portray the core surface layer and there has simply been an unlikely coincidence between three estimates of a stratification parameter which would otherwise have a broad plausible range.
Bavassi, M Luz; Tagliazucchi, Enzo; Laje, Rodrigo
2013-02-01
Time processing in the few hundred milliseconds range is involved in the human skill of sensorimotor synchronization, like playing music in an ensemble or finger tapping to an external beat. In finger tapping, a mechanistic explanation in biologically plausible terms of how the brain achieves synchronization is still missing despite considerable research. In this work we show that nonlinear effects are important for the recovery of synchronization following a perturbation (a step change in stimulus period), even for perturbation magnitudes smaller than 10% of the period, which is well below the amount of perturbation needed to evoke other nonlinear effects like saturation. We build a nonlinear mathematical model for the error correction mechanism and test its predictions, and further propose a framework that allows us to unify the description of the three common types of perturbations. While previous authors have used two different model mechanisms for fitting different perturbation types, or have fitted different parameter value sets for different perturbation magnitudes, we propose the first unified description of the behavior following all perturbation types and magnitudes as the dynamical response of a compound model with fixed terms and a single set of parameter values. Copyright © 2012 Elsevier B.V. All rights reserved.
Mathematical Modeling of RNA-Based Architectures for Closed Loop Control of Gene Expression.
Agrawal, Deepak K; Tang, Xun; Westbrook, Alexandra; Marshall, Ryan; Maxwell, Colin S; Lucks, Julius; Noireaux, Vincent; Beisel, Chase L; Dunlop, Mary J; Franco, Elisa
2018-05-08
Feedback allows biological systems to control gene expression precisely and reliably, even in the presence of uncertainty, by sensing and processing environmental changes. Taking inspiration from natural architectures, synthetic biologists have engineered feedback loops to tune the dynamics and improve the robustness and predictability of gene expression. However, experimental implementations of biomolecular control systems are still far from satisfying performance specifications typically achieved by electrical or mechanical control systems. To address this gap, we present mathematical models of biomolecular controllers that enable reference tracking, disturbance rejection, and tuning of the temporal response of gene expression. These controllers employ RNA transcriptional regulators to achieve closed loop control where feedback is introduced via molecular sequestration. Sensitivity analysis of the models allows us to identify which parameters influence the transient and steady state response of a target gene expression process, as well as which biologically plausible parameter values enable perfect reference tracking. We quantify performance using typical control theory metrics to characterize response properties and provide clear selection guidelines for practical applications. Our results indicate that RNA regulators are well-suited for building robust and precise feedback controllers for gene expression. Additionally, our approach illustrates several quantitative methods useful for assessing the performance of biomolecular feedback control systems.
Labrada-Martagón, Vanessa; Méndez-Rodríguez, Lia C; Mangel, Marc; Zenteno-Savín, Tania
2013-09-01
Generalized linear models were fitted to evaluate the relationship between 17β-estradiol (E2), testosterone (T) and thyroxine (T4) levels in immature East Pacific green sea turtles (Chelonia mydas) and their body condition, size, mass, blood biochemistry parameters, handling time, year, season and site of capture. According to external (tail size) and morphological (<77.3 straight carapace length) characteristics, 95% of the individuals were juveniles. Hormone levels, assessed on sea turtles subjected to a capture stress protocol, were <34.7nmolTL(-1), <532.3pmolE2 L(-1) and <43.8nmolT4L(-1). The statistical model explained biologically plausible metabolic relationships between hormone concentrations and blood biochemistry parameters (e.g. glucose, cholesterol) and the potential effect of environmental variables (season and study site). The variables handling time and year did not contribute significantly to explain hormone levels. Differences in sex steroids between season and study sites found by the models coincided with specific nutritional, physiological and body condition differences related to the specific habitat conditions. The models correctly predicted the median levels of the measured hormones in green sea turtles, which confirms the fitted model's utility. It is suggested that quantitative predictions could be possible when the model is tested with additional data. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Akhtar, Taimoor; Shoemaker, Christine
2016-04-01
Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criterions indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi objective optimization in the parameter estimation process which include: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3 where an interactive visual and metric based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit measure / metric based interactive framework for identification of a small subset (typically less than 10) of meaningful and diverse set of calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual analytics framework for decision support in selection of one parameter combination from the alternatives identified in Stage 2. HAMS is applied for calibration of flow parameters of a SWAT model, (Soil and Water Assessment Tool) designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric based analytics can bridge the gap between the effective use of both automatic and manual strategies for parameter estimation of computationally expensive watershed models.
NASA Astrophysics Data System (ADS)
Pohlman, Matthew Michael
The study of heat transfer and fluid flow in a vertical Bridgman device is motivated by current industrial difficulties in growing crystals with as few defects as possible. For example, Gallium Arsenide (GaAs) is of great interest to the semiconductor industry but remains an uneconomical alternative to silicon because of the manufacturing problems. This dissertation is a two dimensional study of the fluid in an idealized Bridgman device. The model nonlinear PDEs are discretized using second order finite differencing. Newton's method solves the resulting nonlinear discrete equations. The large sparse linear systems involving the Jacobian are solved iteratively using the Generalized Minimum Residual method (GMRES). By adapting fast direct solvers for elliptic equations with simple boundary conditions, a good preconditioner is developed which is essential for GMRES to converge quickly. Trends of the fluid flow and heat transfer for typical ranges of the physical parameters are determined. Also, the size of the terms in the mathematical model are found by numerical investigation, in order to find what terms are in balance as the physical parameters vary. The results suggest the plausibility of simpler asymptotic solutions.
Psychosocial influences on HIV-1 disease progression: neural, endocrine, and virologic mechanisms.
Cole, Steve W
2008-06-01
This review surveys empirical research pertinent to the hypothesis that activity of the hypothalamus-pituitary-adrenal (HPA) axis and/or the sympathetic nervous system (SNS) might mediate biobehavioral influences on HIV-1 pathogenesis and disease progression. Data are considered based on causal effects of neuroeffector molecules on HIV-1 replication, prospective relationships between neural/endocrine parameters and HIV-relevant biological or clinical markers, and correlational data consistent with in vivo neural/endocrine mediation in human or animal studies. Results show that HPA and SNS effector molecules can enhance HIV-1 replication in cellular models via effects on viral infectivity, viral gene expression, and the innate immune response to infection. Animal models and human clinical studies both provide evidence consistent with SNS regulation of viral replication, but data on HPA mediation are less clear. Regulation of leukocyte biology by neuroeffector molecules provides a plausible biological mechanism by which psychosocial factors might influence HIV-1 pathogenesis, even in the era of effective antiretroviral therapy. As such, neural and endocrine parameters might provide useful biomarkers for gauging the promise of behavioral interventions and suggest novel adjunctive strategies for controlling HIV-1 disease progression.
A comparison of viscoelastic damping models
NASA Technical Reports Server (NTRS)
Slater, Joseph C.; Belvin, W. Keith; Inman, Daniel J.
1993-01-01
Modern finite element methods (FEM's) enable the precise modeling of mass and stiffness properties in what were in the past overwhelmingly large and complex structures. These models allow the accurate determination of natural frequencies and mode shapes. However, adequate methods for modeling highly damped and high frequency dependent structures did not exist until recently. The most commonly used method, Modal Strain Energy, does not correctly predict complex mode shapes since it is based on the assumption that the mode shapes of a structure are real. Recently, many techniques have been developed which allow the modeling of frequency dependent damping properties of materials in a finite element compatible form. Two of these methods, the Golla-Hughes-McTavish method and the Lesieutre-Mingori method, model the frequency dependent effects by adding coordinates to the existing system thus maintaining the linearity of the model. The third model, proposed by Bagley and Torvik, is based on the Fractional Calculus method and requires fewer empirical parameters to model the frequency dependence at the expense of linearity of the governing equations. This work examines the Modal Strain Energy, Golla-Hughes-McTavish and Bagley and Torvik models and compares them to determine the plausibility of using them for modeling viscoelastic damping in large structures.
Solar Effects on Global Climate Due to Cosmic Rays and Solar Energetic Particles
NASA Technical Reports Server (NTRS)
Turco, R. P.; Raeder, J.; DAuria, R.
2005-01-01
Although the work reported here does not directly connect solar variability with global climate change, this research establishes a plausible quantitative causative link between observed solar activity and apparently correlated variations in terrestrial climate parameters. Specifically, we have demonstrated that ion-mediated nucleation of atmospheric particles is a likely, and likely widespread, phenomenon that relates solar variability to changes in the microphysical properties of clouds. To investigate this relationship, we have constructed and applied a new model describing the formation and evolution of ionic clusters under a range of atmospheric conditions throughout the lower atmosphere. The activation of large ionic clusters into cloud nuclei is predicted to be favorable in the upper troposphere and mesosphere, and possibly in the lower stratosphere. The model developed under this grant needs to be extended to include additional cluster families, and should be incorporated into microphysical models to further test the cause-and-effect linkages that may ultimately explain key aspects of the connections between solar variability and climate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
El-Atwani, O.; Norris, S. A.; Ludwig, K.
In this study, several proposed mechanisms and theoretical models exist concerning nanostructure evolution on III-V semiconductors (particularly GaSb) via ion beam irradiation. However, making quantitative contact between experiment on the one hand and model-parameter dependent predictions from different theories on the other is usually difficult. In this study, we take a different approach and provide an experimental investigation with a range of targets (GaSb, GaAs, GaP) and ion species (Ne, Ar, Kr, Xe) to determine new parametric trends regarding nanostructure evolution. Concurrently, atomistic simulations using binary collision approximation over the same ion/target combinations were performed to determine parametric trends onmore » several quantities related to existing model. A comparison of experimental and numerical trends reveals that the two are broadly consistent under the assumption that instabilities are driven by chemical instability based on phase separation. Furthermore, the atomistic simulations and a survey of material thermodynamic properties suggest that a plausible microscopic mechanism for this process is an ion-enhanced mobility associated with energy deposition by collision cascades.« less
Fan, Yurui; Huang, Guohe; Veawab, Amornvadee
2012-01-01
In this study, a generalized fuzzy linear programming (GFLP) method was developed to deal with uncertainties expressed as fuzzy sets that exist in the constraints and objective function. A stepwise interactive algorithm (SIA) was advanced to solve GFLP model and generate solutions expressed as fuzzy sets. To demonstrate its application, the developed GFLP method was applied to a regional sulfur dioxide (SO2) control planning model to identify effective SO2 mitigation polices with a minimized system performance cost under uncertainty. The results were obtained to represent the amount of SO2 allocated to different control measures from different sources. Compared with the conventional interval-parameter linear programming (ILP) approach, the solutions obtained through GFLP were expressed as fuzzy sets, which can provide intervals for the decision variables and objective function, as well as related possibilities. Therefore, the decision makers can make a tradeoff between model stability and the plausibility based on solutions obtained through GFLP and then identify desired policies for SO2-emission control under uncertainty.
Xian, Jiahui; Liu, Min; Chen, Wei; Zhang, Chunyong; Fu, Degang
2018-05-01
The electrochemical incineration of diethylenetriaminepentaacetic acid (DTPA) with boron-doped diamond (BDD) anode had been initially performed under galvanostatic conditions. The main and interaction effects of four operating parameters (flow rate, applied current density, sulfate concentration and initial DTPA concentration) on mineralization performance were investigated. Under similar experimental conditions, Doehlert matrix (DM) and central composite rotatable design (CCRD) were used as statistical multivariate methods in the optimization of the anodic oxidation processes. A comparison between DM model and CCRD model revealed that the former was more accurate, possibly due to its higher operating level numbers employed (7 levels for two variables). Despite this, these two models resulted in quite similar optimum operating conditions. The maximum TOC removal percentages at 180 min were 76.2% and 73.8% for case of DM and CCRD, respectively. In addition, with the aid of quantum chemistry calculation and LC/MS analysis, a plausible degradation sequence of DTPA on BDD anode was also proposed. Copyright © 2018 Elsevier Ltd. All rights reserved.
Emergent neutrality drives phytoplankton species coexistence
Segura, Angel M.; Calliari, Danilo; Kruk, Carla; Conde, Daniel; Bonilla, Sylvia; Fort, Hugo
2011-01-01
The mechanisms that drive species coexistence and community dynamics have long puzzled ecologists. Here, we explain species coexistence, size structure and diversity patterns in a phytoplankton community using a combination of four fundamental factors: organism traits, size-based constraints, hydrology and species competition. Using a ‘microscopic’ Lotka–Volterra competition (MLVC) model (i.e. with explicit recipes to compute its parameters), we provide a mechanistic explanation of species coexistence along a niche axis (i.e. organismic volume). We based our model on empirically measured quantities, minimal ecological assumptions and stochastic processes. In nature, we found aggregated patterns of species biovolume (i.e. clumps) along the volume axis and a peak in species richness. Both patterns were reproduced by the MLVC model. Observed clumps corresponded to niche zones (volumes) where species fitness was highest, or where fitness was equal among competing species. The latter implies the action of equalizing processes, which would suggest emergent neutrality as a plausible mechanism to explain community patterns. PMID:21177680
The emergence of DNA in the RNA world: an in silico simulation study of genetic takeover.
Ma, Wentao; Yu, Chunwu; Zhang, Wentao; Wu, Sanmao; Feng, Yu
2015-12-07
It is now popularly accepted that there was an "RNA world" in early evolution of life. This idea has a direct consequence that later on there should have been a takeover of genetic material - RNA by DNA. However, since genetic material carries genetic information, the "source code" of all living activities, it is actually reasonable to question the plausibility of such a "revolutionary" transition. Due to our inability to model relevant "primitive living systems" in reality, it is as yet impossible to explore the plausibility and mechanisms of the "genetic takeover" by experiments. Here we investigated this issue by computer simulation using a Monte-Carlo method. It shows that an RNA-by-DNA genetic takeover may be triggered by the emergence of a nucleotide reductase ribozyme with a moderate activity in a pure RNA system. The transition is unstable and limited in scale (i.e., cannot spread in the population), but can get strengthened and globalized if certain parameters are changed against RNA (i.e., in favor of DNA). In relation to the subsequent evolution, an advanced system with a larger genome, which uses DNA as genetic material and RNA as functional material, is modeled - the system cannot sustain if the nucleotide reductase ribozyme is "turned off" (thus, DNA cannot be synthesized). Moreover, the advanced system cannot sustain if only DNA's stability, template suitability or replication fidelity (any of the three) is turned down to the level of RNA's. Genetic takeover should be plausible. In the RNA world, such a takeover may have been triggered by the emergence of some ribozyme favoring the formation of deoxynucleotides. The transition may initially have been "weak", but could have been reinforced by environmental changes unfavorable to RNA (such as temperature or pH rise), and would have ultimately become irreversible accompanying the genome's enlargement. Several virtues of DNA (versus RNA) - higher stability against hydrolysis, greater suitability as template and higher fidelity in replication, should have, each in its own way, all been significant for the genetic takeover in evolution. This study enhances our understandings of the relationship between information and material in the living world.
Flood hydrology for Dry Creek, Lake County, Northwestern Montana
Parrett, C.; Jarrett, R.D.
2004-01-01
Dry Creek drains about 22.6 square kilometers of rugged mountainous terrain upstream from Tabor Dam in the Mission Range near St. Ignatius, Montana. Because of uncertainty about plausible peak discharges and concerns regarding the ability of the Tabor Dam spillway to safely convey these discharges, the flood hydrology for Dry Creek was evaluated on the basis of three hydrologic and geologic methods. The first method involved determining an envelope line relating flood discharge to drainage area on the basis of regional historical data and calculating a 500-year flood for Dry Creek using a regression equation. The second method involved paleoflood methods to estimate the maximum plausible discharge for 35 sites in the study area. The third method involved rainfall-runoff modeling for the Dry Creek basin in conjunction with regional precipitation information to determine plausible peak discharges. All of these methods resulted in estimates of plausible peak discharges that are substantially less than those predicted by the more generally applied probable maximum flood technique. Copyright ASCE 2004.
NASA Astrophysics Data System (ADS)
Sampath, D. M. R.; Boski, T.
2018-05-01
Large-scale geomorphological evolution of an estuarine system was simulated by means of a hybrid estuarine sedimentation model (HESM) applied to the Guadiana Estuary, in Southwest Iberia. The model simulates the decadal-scale morphodynamics of the system under environmental forcing, using a set of analytical solutions to simplified equations of tidal wave propagation in shallow waters, constrained by empirical knowledge of estuarine sedimentary dynamics and topography. The key controlling parameters of the model are bed friction (f), current velocity power of the erosion rate function (N), and sea-level rise rate. An assessment of sensitivity of the simulated sediment surface elevation (SSE) change to these controlling parameters was performed. The model predicted the spatial differentiation of accretion and erosion, the latter especially marked in the mudflats within mean sea level and low tide level and accretion was mainly in a subtidal channel. The average SSE change mutually depended on both the friction coefficient and power of the current velocity. Analysis of the average annual SSE change suggests that the state of intertidal and subtidal compartments of the estuarine system vary differently according to the dominant processes (erosion and accretion). As the Guadiana estuarine system shows dominant erosional behaviour in the context of sea-level rise and sediment supply reduction after the closure of the Alqueva Dam, the most plausible sets of parameter values for the Guadiana Estuary are N = 1.8 and f = 0.8f0, or N = 2 and f = f0, where f0 is the empirically estimated value. For these sets of parameter values, the relative errors in SSE change did not exceed ±20% in 73% of simulation cells in the studied area. Such a limit of accuracy can be acceptable for an idealized modelling of coastal evolution in response to uncertain sea-level rise scenarios in the context of reduced sediment supply due to flow regulation. Therefore, the idealized but cost-effective HESM model will be suitable for estimating the morphological impacts of sea-level rise on estuarine systems on a decadal timescale.
FORWARD MODELING OF STANDING KINK MODES IN CORONAL LOOPS. II. APPLICATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Ding; Doorsselaere, Tom Van, E-mail: DYuan2@uclan.ac.uk
2016-04-15
Magnetohydrodynamic waves are believed to play a significant role in coronal heating, and could be used for remote diagnostics of solar plasma. Both the heating and diagnostic applications rely on a correct inversion (or backward modeling) of the observables into the thermal and magnetic structures of the plasma. However, due to the limited availability of observables, this is an ill-posed issue. Forward modeling is designed to establish a plausible mapping of plasma structuring into observables. In this study, we set up forward models of standing kink modes in coronal loops and simulate optically thin emissions in the extreme ultraviolet bandpasses,more » and then adjust plasma parameters and viewing angles to match three events of transverse loop oscillations observed by the Solar Dynamics Observatory/Atmospheric Imaging Assembly. We demonstrate that forward models could be effectively used to identify the oscillation overtone and polarization, to reproduce the general profile of oscillation amplitude and phase, and to predict multiple harmonic periodicities in the associated emission intensity and loop width variation.« less
Walters, D M; Stringer, S M
2010-07-01
A key question in understanding the neural basis of path integration is how individual, spatially responsive, neurons may self-organize into networks that can, through learning, integrate velocity signals to update a continuous representation of location within an environment. It is of vital importance that this internal representation of position is updated at the correct speed, and in real time, to accurately reflect the motion of the animal. In this article, we present a biologically plausible model of velocity path integration of head direction that can solve this problem using neuronal time constants to effect natural time delays, over which associations can be learned through associative Hebbian learning rules. The model comprises a linked continuous attractor network and competitive network. In simulation, we show that the same model is able to learn two different speeds of rotation when implemented with two different values for the time constant, and without the need to alter any other model parameters. The proposed model could be extended to path integration of place in the environment, and path integration of spatial view.
NASA Technical Reports Server (NTRS)
Treuhaft, Robert N.; Law, Beverly E.; Siqueira, Paul R.
2000-01-01
Parameters describing the vertical structure of forests, for example tree height, height-to-base-of-live-crown, underlying topography, and leaf area density, bear on land-surface, biogeochemical, and climate modeling efforts. Single, fixed-baseline interferometric synthetic aperture radar (INSAR) normalized cross-correlations constitute two observations from which to estimate forest vertical structure parameters: Cross-correlation amplitude and phase. Multialtitude INSAR observations increase the effective number of baselines potentially enabling the estimation of a larger set of vertical-structure parameters. Polarimetry and polarimetric interferometry can further extend the observation set. This paper describes the first acquisition of multialtitude INSAR for the purpose of estimating the parameters describing a vegetated land surface. These data were collected over ponderosa pine in central Oregon near longitude and latitude -121 37 25 and 44 29 56. The JPL interferometric TOPSAR system was flown at the standard 8-km altitude, and also at 4-km and 2-km altitudes, in a race track. A reference line including the above coordinates was maintained at 35 deg for both the north-east heading and the return southwest heading, at all altitudes. In addition to the three altitudes for interferometry, one line was flown with full zero-baseline polarimetry at the 8-km altitude. A preliminary analysis of part of the data collected suggests that they are consistent with one of two physical models describing the vegetation: 1) a single-layer, randomly oriented forest volume with a very strong ground return or 2) a multilayered randomly oriented volume; a homogeneous, single-layer model with no ground return cannot account for the multialtitude correlation amplitudes. Below the inconsistency of the data with a single-layer model is followed by analysis scenarios which include either the ground or a layered structure. The ground returns suggested by this preliminary analysis seem too strong to be plausible, but parameters describing a two-layer compare reasonably well to a field-measured probability distribution of tree heights in the area.
Walder, J.S.; O'Connor, J. E.; Costa, J.E.; ,
1997-01-01
We analyse a simple, physically-based model of breach formation in natural and constructed earthen dams to elucidate the principal factors controlling the flood hydrograph at the breach. Formation of the breach, which is assumed trapezoidal in cross-section, is parameterized by the mean rate of downcutting, k, the value of which is constrained by observations. A dimensionless formulation of the model leads to the prediction that the breach hydrograph depends upon lake shape, the ratio r of breach width to depth, the side slope ?? of the breach, and the parameter ?? = (V.D3)(k/???gD), where V = lake volume, D = lake depth, and g is the acceleration due to gravity. Calculations show that peak discharge Qp depends weakly on lake shape r and ??, but strongly on ??, which is the product of a dimensionless lake volume and a dimensionless erosion rate. Qp(??) takes asymptotically distinct forms depending on whether < ??? 1 or < ??? 1. Theoretical predictions agree well with data from dam failures for which k could be reasonably estimated. The analysis provides a rapid and in many cases graphical way to estimate plausible values of Qp at the breach.We analyze a simple, physically-based model of breach formation in natural and constructed earthen dams to elucidate the principal factors controlling the flood hydrograph at the breach. Formation of the breach, which is assumed trapezoidal in cross-section, is parameterized by the mean rate of downcutting, k, the value of which is constrained by observations. A dimensionless formulation of the model leads to the prediction that the breach hydrograph depends upon lake shape, the ratio r of breach width to depth, the side slope ?? of the breach, and the parameter ?? = (V/D3)(k/???gD), where V = lake volume, D = lake depth, and g is the acceleration due to gravity. Calculations show that peak discharge Qp depends weakly on lake shape r and ??, but strongly on ??, which is the product of a dimensionless lake volume and a dimensionless erosion rate. Qp(??) takes asymptotically distinct forms depending on whether ?????1 or ?????1. Theoretical predictions agree well with data from dam failures for which k could be reasonably estimated. The analysis provides a rapid and in many cases graphical way to estimate plausible values of Qp at the breach.
NASA Astrophysics Data System (ADS)
Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas
2016-06-01
Empirical models are frequently applied to produce landslide susceptibility maps for large areas. Subsequent quantitative validation results are routinely used as the primary criteria to infer the validity and applicability of the final maps or to select one of several models. This study hypothesizes that such direct deductions can be misleading. The main objective was to explore discrepancies between the predictive performance of a landslide susceptibility model and the geomorphic plausibility of subsequent landslide susceptibility maps while a particular emphasis was placed on the influence of incomplete landslide inventories on modelling and validation results. The study was conducted within the Flysch Zone of Lower Austria (1,354 km2) which is known to be highly susceptible to landslides of the slide-type movement. Sixteen susceptibility models were generated by applying two statistical classifiers (logistic regression and generalized additive model) and two machine learning techniques (random forest and support vector machine) separately for two landslide inventories of differing completeness and two predictor sets. The results were validated quantitatively by estimating the area under the receiver operating characteristic curve (AUROC) with single holdout and spatial cross-validation technique. The heuristic evaluation of the geomorphic plausibility of the final results was supported by findings of an exploratory data analysis, an estimation of odds ratios and an evaluation of the spatial structure of the final maps. The results showed that maps generated by different inventories, classifiers and predictors appeared differently while holdout validation revealed similar high predictive performances. Spatial cross-validation proved useful to expose spatially varying inconsistencies of the modelling results while additionally providing evidence for slightly overfitted machine learning-based models. However, the highest predictive performances were obtained for maps that explicitly expressed geomorphically implausible relationships indicating that the predictive performance of a model might be misleading in the case a predictor systematically relates to a spatially consistent bias of the inventory. Furthermore, we observed that random forest-based maps displayed spatial artifacts. The most plausible susceptibility map of the study area showed smooth prediction surfaces while the underlying model revealed a high predictive capability and was generated with an accurate landslide inventory and predictors that did not directly describe a bias. However, none of the presented models was found to be completely unbiased. This study showed that high predictive performances cannot be equated with a high plausibility and applicability of subsequent landslide susceptibility maps. We suggest that greater emphasis should be placed on identifying confounding factors and biases in landslide inventories. A joint discussion between modelers and decision makers of the spatial pattern of the final susceptibility maps in the field might increase their acceptance and applicability.
The impact of variation in scaling factors on the estimation of ...
Many physiologically based pharmacokinetic (PBPK) models include values for metabolic rate parameters extrapolated from in vitro metabolism studies using scaling factors such as mg of microsomal protein per gram of liver (MPPGL) and liver mass (FVL). Variation in scaling factor values impacts metabolic rate parameter estimates (Vmax) and hence estimates of internal dose used in dose response analysis. The impacts of adult human variation in MPPGL and FVL on estimates of internal dose were assessed using a human PBPK model for BDCM for several internal dose metrics for two exposure scenarios (single 0.25 liter drink of water or 10 minute shower) under plausible (5 micrograms/L) and high level (20 micrograms/L) water concentrations. For both concentrations, all internal dose metrics were changed less than 5% for the showering scenario (combined inhalation and dermal exposure). In contrast, a 27-fold variation in area under the curve for BDCM in venous blood was observed at both oral exposure concentrations, whereas total amount of BDCM metabolized in liver was relatively unchanged. This analysis demonstrates that variability in the scaling factors used for in vitro to in vivo extrapolation (IVIVE) for metabolic rate parameters can have a significant route-dependent impact on estimates of internal dose under environmentally relevant exposure scenarios. This indicates the need to evaluate both uncertainty and variability for scaling factors used for IVIVE. Sca
Possibilities and limits of Internet-based registers.
Wild, Michael; Candrian, Aron; Wenda, Klaus
2009-03-01
The Internet is an inexpensive platform for the investigation of medical questions in case of low prevalence. By accessing www.ao-nailregister.org, every interested participant may participate in the English-language survey of the complications specific to the femoral nail. The address data of the participant, the anonymised key data of the patients and the medical parameters are entered. In real time, these data are checked for plausibility, evaluated and published on the Internet where they are freely accessible immediately. Because of national differences, data acquisition caused considerable difficulties at the beginning. In addition, wrong data were entered because of linguistic or contextual misunderstandings. After having reworked the questionnaire completely, facilitating data input and implementing an automated plausibility check, these difficulties could be cleared. In a next step, the automatic evaluation of the data was implemented. Only very few data still have to be checked for plausibility manually to exclude wrong entries, which cannot be verified by the computer. The effort required for data acquisition and evaluation of the Internet-based femoral nail register was reduced distinctly. The possibility of free international participation as well as the freely accessible representation of the results offers transparency.
Miconi, Thomas
2017-01-01
Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible, and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior. DOI: http://dx.doi.org/10.7554/eLife.20899.001 PMID:28230528
Miconi, Thomas
2017-02-23
Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible, and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior.
EVOLUTIONARY MODELS OF SUPER-EARTHS AND MINI-NEPTUNES INCORPORATING COOLING AND MASS LOSS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howe, Alex R.; Burrows, Adam, E-mail: arhowe@astro.princeton.edu, E-mail: burrows@astro.princeton.edu
We construct models of the structural evolution of super-Earth- and mini-Neptune-type exoplanets with H{sub 2}–He envelopes, incorporating radiative cooling and XUV-driven mass loss. We conduct a parameter study of these models, focusing on initial mass, radius, and envelope mass fractions, as well as orbital distance, metallicity, and the specific prescription for mass loss. From these calculations, we investigate how the observed masses and radii of exoplanets today relate to the distribution of their initial conditions. Orbital distance and the initial envelope mass fraction are the most important factors determining planetary evolution, particularly radius evolution. Initial mass also becomes important belowmore » a “turnoff mass,” which varies with orbital distance, with mass–radius curves being approximately flat for higher masses. Initial radius is the least important parameter we study, with very little difference between the hot start and cold start limits after an age of 100 Myr. Model sets with no mass loss fail to produce results consistent with observations, but a plausible range of mass-loss scenarios is allowed. In addition, we present scenarios for the formation of the Kepler-11 planets. Our best fit to observations of Kepler-11b and Kepler-11c involves formation beyond the snow line, after which they moved inward, circularized, and underwent a reduced degree of mass loss.« less
n-dimensional isotropic Finch-Skea stars
NASA Astrophysics Data System (ADS)
Chilambwe, Brian; Hansraj, Sudan
2015-02-01
We study the impact of dimension on the physical properties of the Finch-Skea astrophysical model. It is shown that a positive definite, monotonically decreasing pressure and density are evident. A decrease in stellar radius emerges as the order of the dimension increases. This is accompanied by a corresponding increase in energy density. The model continues to display the necessary qualitative features inherent in the 4-dimensional Finch-Skea star and the conformity to the Walecka theory is preserved under dimensional increase. The causality condition is always satisfied for all dimensions considered resulting in the proposed models demonstrating a subluminal sound speed throughout the interior of the distribution. Moreover, the pressure and density decrease monotonically outwards from the centre and a pressure-free hypersurface exists demarcating the boundary of the perfect-fluid sphere. Since the study of the physical conditions is performed graphically, it is necessary to specify certain constants in the model. Reasonable values for such constants are arrived at on examining the behaviour of the model at the centre and demanding the satisfaction of all elementary conditions for physical plausibility. Finally two constants of integration are settled on matching of our solutions with the appropriate Schwarzschild-Tangherlini exterior metrics. Furthermore, the solution admits a barotropic equation of state despite the higher dimension. The compactification parameter as well as the density variation parameter are also computed. The models satisfy the weak, strong and dominant energy conditions in the interior of the stellar configuration.
NASA Astrophysics Data System (ADS)
Wetzel, Peter J.; Boone, Aaron
1995-07-01
This paper presents a general description of, and demonstrates the capabilities of, the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE). The PLACE model is a detailed process model of the partly cloudy atmospheric boundary layer and underlying heterogeneous land surfaces. In its development, particular attention has been given to three of the model's subprocesses: the prediction of boundary layer cloud amount, the treatment of surface and soil subgrid heterogeneity, and the liquid water budget. The model includes a three-parameter nonprecipitating cumulus model that feeds back to the surface and boundary layer through radiative effects. Surface heterogeneity in the PLACE model is treated both statistically and by resolving explicit subgrid patches. The model maintains a vertical column of liquid water that is divided into seven reservoirs, from the surface interception store down to bedrock.Five single-day demonstration cases are presented, in which the PLACE model was initialized, run, and compared to field observations from four diverse sites. The model is shown to predict cloud amount well in these while predicting the surface fluxes with similar accuracy. A slight tendency to underpredict boundary layer depth is noted in all cases.Sensitivity tests were also run using anemometer-level forcing provided by the Project for Inter-comparison of Land-surface Parameterization Schemes (PILPS). The purpose is to demonstrate the relative impact of heterogeneity of surface parameters on the predicted annual mean surface fluxes. Significant sensitivity to subgrid variability of certain parameters is demonstrated, particularly to parameters related to soil moisture. A major result is that the PLACE-computed impact of total (homogeneous) deforestation of a rain forest is comparable in magnitude to the effect of imposing heterogeneity of certain surface variables, and is similarly comparable to the overall variance among the other PILPS participant models. Were this result to be bourne out by further analysis, it would suggest that today's average land surface parameterization has little credibility when applied to discriminating the local impacts of any plausible future climate change.
NASA Astrophysics Data System (ADS)
Anselmino, Matteo; Scarsoglio, Stefania; Saglietto, Andrea; Gaita, Fiorenzo; Ridolfi, Luca
2016-06-01
Atrial fibrillation (AF) is associated with an increased risk of dementia and cognitive decline, independent of strokes. Several mechanisms have been proposed to explain this association, but altered cerebral blood flow dynamics during AF has been poorly investigated: in particular, it is unknown how AF influences hemodynamic parameters of the distal cerebral circulation, at the arteriolar and capillary level. Two coupled lumped-parameter models (systemic and cerebrovascular circulations, respectively) were here used to simulate sinus rhythm (SR) and AF. For each simulation 5000 cardiac cycles were analyzed and cerebral hemodynamic parameters were calculated. With respect to SR, AF triggered a higher variability of the cerebral hemodynamic variables which increases proceeding towards the distal circulation, reaching the maximum extent at the arteriolar and capillary levels. This variability led to critical cerebral hemodynamic events of excessive pressure or reduced blood flow: 303 hypoperfusions occurred at the arteriolar level, while 387 hypertensive events occurred at the capillary level during AF. By contrast, neither hypoperfusions nor hypertensive events occurred during SR. Thus, the impact of AF per se on cerebral hemodynamics candidates as a relevant mechanism into the genesis of AF-related cognitive impairment/dementia.
NASA Astrophysics Data System (ADS)
Kikuchi, C.; Ferre, P. A.; Vrugt, J. A.
2011-12-01
Hydrologic models are developed, tested, and refined based on the ability of those models to explain available hydrologic data. The optimization of model performance based upon mismatch between model outputs and real world observations has been extensively studied. However, identification of plausible models is sensitive not only to the models themselves - including model structure and model parameters - but also to the location, timing, type, and number of observations used in model calibration. Therefore, careful selection of hydrologic observations has the potential to significantly improve the performance of hydrologic models. In this research, we seek to reduce prediction uncertainty through optimization of the data collection process. A new tool - multiple model analysis with discriminatory data collection (MMA-DDC) - was developed to address this challenge. In this approach, multiple hydrologic models are developed and treated as competing hypotheses. Potential new data are then evaluated on their ability to discriminate between competing hypotheses. MMA-DDC is well-suited for use in recursive mode, in which new observations are continuously used in the optimization of subsequent observations. This new approach was applied to a synthetic solute transport experiment, in which ranges of parameter values constitute the multiple hydrologic models, and model predictions are calculated using likelihood-weighted model averaging. MMA-DDC was used to determine the optimal location, timing, number, and type of new observations. From comparison with an exhaustive search of all possible observation sequences, we find that MMA-DDC consistently selects observations which lead to the highest reduction in model prediction uncertainty. We conclude that using MMA-DDC to evaluate potential observations may significantly improve the performance of hydrologic models while reducing the cost associated with collecting new data.
A combined radio and GeV γ-ray view of the 2012 and 2013 flares of Mrk 421
Hovatta, Talvikki; Petropoulou, M.; Richards, J. L.; ...
2015-03-09
In 2012 Markarian 421 underwent the largest flare ever observed in this blazar at radio frequencies. In the present study, we start exploring this unique event and compare it to a less extreme event in 2013. We use 15 GHz radio data obtained with the Owens Valley Radio Observatory 40-m telescope, 95 GHz millimetre data from the Combined Array for Research in Millimeter-Wave Astronomy, and GeV γ-ray data from the Fermi Gamma-ray Space Telescope. Here, the radio light curves during the flaring periods in 2012 and 2013 have very different appearances, in both shape and peak flux density. Assuming thatmore » the radio and γ-ray flares are physically connected, we attempt to model the most prominent sub-flares of the 2012 and 2013 activity periods by using the simplest possible theoretical framework. We first fit a one-zone synchrotron self-Compton (SSC) model to the less extreme 2013 flare and estimate parameters describing the emission region. We then model the major γ-ray and radio flares of 2012 using the same framework. The 2012 γ-ray flare shows two distinct spikes of similar amplitude, so we examine scenarios associating the radio flare with each spike in turn. In the first scenario, we cannot explain the sharp radio flare with a simple SSC model, but we can accommodate this by adding plausible time variations to the Doppler beaming factor. In the second scenario, a varying Doppler factor is not needed, but the SSC model parameters require fine-tuning. Both alternatives indicate that the sharp radio flare, if physically connected to the preceding γ-ray flares, can be reproduced only for a very specific choice of parameters.« less
Christensen, Nikolaj K; Minsley, Burke J.; Christensen, Steen
2017-01-01
We present a new methodology to combine spatially dense high-resolution airborne electromagnetic (AEM) data and sparse borehole information to construct multiple plausible geological structures using a stochastic approach. The method developed allows for quantification of the performance of groundwater models built from different geological realizations of structure. Multiple structural realizations are generated using geostatistical Monte Carlo simulations that treat sparse borehole lithological observations as hard data and dense geophysically derived structural probabilities as soft data. Each structural model is used to define 3-D hydrostratigraphical zones of a groundwater model, and the hydraulic parameter values of the zones are estimated by using nonlinear regression to fit hydrological data (hydraulic head and river discharge measurements). Use of the methodology is demonstrated for a synthetic domain having structures of categorical deposits consisting of sand, silt, or clay. It is shown that using dense AEM data with the methodology can significantly improve the estimated accuracy of the sediment distribution as compared to when borehole data are used alone. It is also shown that this use of AEM data can improve the predictive capability of a calibrated groundwater model that uses the geological structures as zones. However, such structural models will always contain errors because even with dense AEM data it is not possible to perfectly resolve the structures of a groundwater system. It is shown that when using such erroneous structures in a groundwater model, they can lead to biased parameter estimates and biased model predictions, therefore impairing the model's predictive capability.
Uncertainty analysis of a groundwater flow model in East-central Florida.
Sepúlveda, Nicasio; Doherty, John
2015-01-01
A groundwater flow model for east-central Florida has been developed to help water-resource managers assess the impact of increased groundwater withdrawals from the Floridan aquifer system on heads and spring flows originating from the Upper Floridan Aquifer. The model provides a probabilistic description of predictions of interest to water-resource managers, given the uncertainty associated with system heterogeneity, the large number of input parameters, and a nonunique groundwater flow solution. The uncertainty associated with these predictions can then be considered in decisions with which the model has been designed to assist. The "Null Space Monte Carlo" method is a stochastic probabilistic approach used to generate a suite of several hundred parameter field realizations, each maintaining the model in a calibrated state, and each considered to be hydrogeologically plausible. The results presented herein indicate that the model's capacity to predict changes in heads or spring flows that originate from increased groundwater withdrawals is considerably greater than its capacity to predict the absolute magnitudes of heads or spring flows. Furthermore, the capacity of the model to make predictions that are similar in location and in type to those in the calibration dataset exceeds its capacity to make predictions of different types at different locations. The quantification of these outcomes allows defensible use of the modeling process in support of future water-resources decisions. The model allows the decision-making process to recognize the uncertainties, and the spatial or temporal variability of uncertainties that are associated with predictions of future system behavior in a complex hydrogeological context. © 2014, National Ground Water Association.
NASA Astrophysics Data System (ADS)
Christensen, N. K.; Minsley, B. J.; Christensen, S.
2017-02-01
We present a new methodology to combine spatially dense high-resolution airborne electromagnetic (AEM) data and sparse borehole information to construct multiple plausible geological structures using a stochastic approach. The method developed allows for quantification of the performance of groundwater models built from different geological realizations of structure. Multiple structural realizations are generated using geostatistical Monte Carlo simulations that treat sparse borehole lithological observations as hard data and dense geophysically derived structural probabilities as soft data. Each structural model is used to define 3-D hydrostratigraphical zones of a groundwater model, and the hydraulic parameter values of the zones are estimated by using nonlinear regression to fit hydrological data (hydraulic head and river discharge measurements). Use of the methodology is demonstrated for a synthetic domain having structures of categorical deposits consisting of sand, silt, or clay. It is shown that using dense AEM data with the methodology can significantly improve the estimated accuracy of the sediment distribution as compared to when borehole data are used alone. It is also shown that this use of AEM data can improve the predictive capability of a calibrated groundwater model that uses the geological structures as zones. However, such structural models will always contain errors because even with dense AEM data it is not possible to perfectly resolve the structures of a groundwater system. It is shown that when using such erroneous structures in a groundwater model, they can lead to biased parameter estimates and biased model predictions, therefore impairing the model's predictive capability.
NASA Astrophysics Data System (ADS)
Doten, Colleen O.; Bowling, Laura C.; Lanini, Jordan S.; Maurer, Edwin P.; Lettenmaier, Dennis P.
2006-04-01
Erosion and sediment transport in a temperate forested watershed are predicted with a new sediment model that represents the main sources of sediment generation in forested environments (mass wasting, hillslope erosion, and road surface erosion) within the distributed hydrology-soil-vegetation model (DHSVM) environment. The model produces slope failures on the basis of a factor-of-safety analysis with the infinite slope model through use of stochastically generated soil and vegetation parameters. Failed material is routed downslope with a rule-based scheme that determines sediment delivery to streams. Sediment from hillslopes and road surfaces is also transported to the channel network. A simple channel routing scheme is implemented to predict basin sediment yield. We demonstrate through an initial application of this model to the Rainy Creek catchment, a tributary of the Wenatchee River, which drains the east slopes of the Cascade Mountains, that the model produces plausible sediment yield and ratios of landsliding and surface erosion when compared to published rates for similar catchments in the Pacific Northwest. A road removal scenario and a basin-wide fire scenario are both evaluated with the model.
Low-speed impact phenomena and orbital resonances in the moon- and planet-building process
NASA Technical Reports Server (NTRS)
Chapman, C. R.
1977-01-01
A simulation of collisional and gravitational interaction in the early solar system generates planets approximately 1000 km in diameter from an initial swarm of kilometer sized planetesimals. The model treats collisions according to experimental and theoretical impact results (such as rebound, cratering, and catastrophic fragmentation) for a variety of materials whose parameters span plausible values for early solid objects. The small planets form in approximately 1000 yr, during which time most of the mass of the system continues to reside in particles near the original size. The simulation is terminated when the largest objects' random motion is of smaller dimension than their collision cross-sections. The few 1000 km planets may act as seeds for the subsequent, gradual, accretional growth into full-sized planets.
Prada, A F; Chu, M L; Guzman, J A; Moriasi, D N
2017-05-15
Evaluating the effectiveness of agricultural land management practices in minimizing environmental impacts using models is challenged by the presence of inherent uncertainties during the model development stage. One issue faced during the model development stage is the uncertainty involved in model parameterization. Using a single optimized set of parameters (one snapshot) to represent baseline conditions of the system limits the applicability and robustness of the model to properly represent future or alternative scenarios. The objective of this study was to develop a framework that facilitates model parameter selection while evaluating uncertainty to assess the impacts of land management practices at the watershed scale. The model framework was applied to the Lake Creek watershed located in southwestern Oklahoma, USA. A two-step probabilistic approach was implemented to parameterize the Agricultural Policy/Environmental eXtender (APEX) model using global uncertainty and sensitivity analysis to estimate the full spectrum of total monthly water yield (WYLD) and total monthly Nitrogen loads (N) in the watershed under different land management practices. Twenty-seven models were found to represent the baseline scenario in which uncertainty of up to 29% and 400% in WYLD and N, respectively, is plausible. Changing the land cover to pasture manifested the highest decrease in N to up to 30% for a full pasture coverage while changing to full winter wheat cover can increase the N up to 11%. The methodology developed in this study was able to quantify the full spectrum of system responses, the uncertainty associated with them, and the most important parameters that drive their variability. Results from this study can be used to develop strategic decisions on the risks and tradeoffs associated with different management alternatives that aim to increase productivity while also minimizing their environmental impacts. Copyright © 2017 Elsevier Ltd. All rights reserved.
Scott, Finlay; Jardim, Ernesto; Millar, Colin P; Cerviño, Santiago
2016-01-01
Estimating fish stock status is very challenging given the many sources and high levels of uncertainty surrounding the biological processes (e.g. natural variability in the demographic rates), model selection (e.g. choosing growth or stock assessment models) and parameter estimation. Incorporating multiple sources of uncertainty in a stock assessment allows advice to better account for the risks associated with proposed management options, promoting decisions that are more robust to such uncertainty. However, a typical assessment only reports the model fit and variance of estimated parameters, thereby underreporting the overall uncertainty. Additionally, although multiple candidate models may be considered, only one is selected as the 'best' result, effectively rejecting the plausible assumptions behind the other models. We present an applied framework to integrate multiple sources of uncertainty in the stock assessment process. The first step is the generation and conditioning of a suite of stock assessment models that contain different assumptions about the stock and the fishery. The second step is the estimation of parameters, including fitting of the stock assessment models. The final step integrates across all of the results to reconcile the multi-model outcome. The framework is flexible enough to be tailored to particular stocks and fisheries and can draw on information from multiple sources to implement a broad variety of assumptions, making it applicable to stocks with varying levels of data availability The Iberian hake stock in International Council for the Exploration of the Sea (ICES) Divisions VIIIc and IXa is used to demonstrate the framework, starting from length-based stock and indices data. Process and model uncertainty are considered through the growth, natural mortality, fishing mortality, survey catchability and stock-recruitment relationship. Estimation uncertainty is included as part of the fitting process. Simple model averaging is used to integrate across the results and produce a single assessment that considers the multiple sources of uncertainty.
Using field observations to inform thermal hydrology models of permafrost dynamics with ATS (v0.83)
Atchley, Adam L.; Painter, Scott L.; Harp, Dylan R.; ...
2015-09-01
Climate change is profoundly transforming the carbon-rich Arctic tundra landscape, potentially moving it from a carbon sink to a carbon source by increasing the thickness of soil that thaws on a seasonal basis. Thus, the modeling capability and precise parameterizations of the physical characteristics needed to estimate projected active layer thickness (ALT) are limited in Earth system models (ESMs). In particular, discrepancies in spatial scale between field measurements and Earth system models challenge validation and parameterization of hydrothermal models. A recently developed surface–subsurface model for permafrost thermal hydrology, the Advanced Terrestrial Simulator (ATS), is used in combination with field measurementsmore » to achieve the goals of constructing a process-rich model based on plausible parameters and to identify fine-scale controls of ALT in ice-wedge polygon tundra in Barrow, Alaska. An iterative model refinement procedure that cycles between borehole temperature and snow cover measurements and simulations functions to evaluate and parameterize different model processes necessary to simulate freeze–thaw processes and ALT formation. After model refinement and calibration, reasonable matches between simulated and measured soil temperatures are obtained, with the largest errors occurring during early summer above ice wedges (e.g., troughs). The results suggest that properly constructed and calibrated one-dimensional thermal hydrology models have the potential to provide reasonable representation of the subsurface thermal response and can be used to infer model input parameters and process representations. The models for soil thermal conductivity and snow distribution were found to be the most sensitive process representations. However, information on lateral flow and snowpack evolution might be needed to constrain model representations of surface hydrology and snow depth.« less
Uncertainty analysis of a groundwater flow model in east-central Florida
Sepúlveda, Nicasio; Doherty, John E.
2014-01-01
A groundwater flow model for east-central Florida has been developed to help water-resource managers assess the impact of increased groundwater withdrawals from the Floridan aquifer system on heads and spring flows originating from the Upper Floridan aquifer. The model provides a probabilistic description of predictions of interest to water-resource managers, given the uncertainty associated with system heterogeneity, the large number of input parameters, and a nonunique groundwater flow solution. The uncertainty associated with these predictions can then be considered in decisions with which the model has been designed to assist. The “Null Space Monte Carlo” method is a stochastic probabilistic approach used to generate a suite of several hundred parameter field realizations, each maintaining the model in a calibrated state, and each considered to be hydrogeologically plausible. The results presented herein indicate that the model’s capacity to predict changes in heads or spring flows that originate from increased groundwater withdrawals is considerably greater than its capacity to predict the absolute magnitudes of heads or spring flows. Furthermore, the capacity of the model to make predictions that are similar in location and in type to those in the calibration dataset exceeds its capacity to make predictions of different types at different locations. The quantification of these outcomes allows defensible use of the modeling process in support of future water-resources decisions. The model allows the decision-making process to recognize the uncertainties, and the spatial/temporal variability of uncertainties that are associated with predictions of future system behavior in a complex hydrogeological context.
NASA Astrophysics Data System (ADS)
Wiebe, K.; Lotze-Campen, H.; Bodirsky, B.; Kavallari, A.; Mason-d'Croz, D.; van der Mensbrugghe, D.; Robinson, S.; Sands, R.; Tabeau, A.; Willenbockel, D.; Islam, S.; van Meijl, H.; Mueller, C.; Robertson, R.
2014-12-01
Previous studies have combined climate, crop and economic models to examine the impact of climate change on agricultural production and food security, but results have varied widely due to differences in models, scenarios and data. Recent work has examined (and narrowed) these differences through systematic model intercomparison using a high-emissions pathway to highlight the differences. New work extends that analysis to cover a range of plausible socioeconomic scenarios and emission pathways. Results from three general circulation models are combined with one crop model and five global economic models to examine the global and regional impacts of climate change on yields, area, production, prices and trade for coarse grains, rice, wheat, oilseeds and sugar to 2050. Results show that yield impacts vary with changes in population, income and technology as well as emissions, but are reduced in all cases by endogenous changes in prices and other variables.
Comparing supply and demand models for future photovoltaic power generation in the USA
Basore, Paul A.; Cole, Wesley J.
2018-02-22
We explore the plausible range of future deployment of photovoltaic generation capacity in the USA using a supply-focused model based on supply-chain growth constraints and a demand-focused model based on minimizing the overall cost of the electricity system. Both approaches require assumptions based on previous experience and anticipated trends. For each of the models, we assign plausible ranges for the key assumptions and then compare the resulting PV deployment over time. Each model was applied to 2 different future scenarios: one in which PV market penetration is ultimately constrained by the uncontrolled variability of solar power and one in whichmore » low-cost energy storage or some equivalent measure largely alleviates this constraint. The supply-focused and demand-focused models are in substantial agreement, not just in the long term, where deployment is largely determined by the assumed market penetration constraints, but also in the interim years. For the future scenario without low-cost energy storage or equivalent measures, the 2 models give an average plausible range of PV generation capacity in the USA of 150 to 530 GWdc in 2030 and 260 to 810 GWdc in 2040. With low-cost energy storage or equivalent measures, the corresponding ranges are 160 to 630 GWdc in 2030 and 280 to 1200 GWdc in 2040. The latter range is enough to supply 10% to 40% of US electricity demand in 2040, based on current demand growth.« less
Comparing supply and demand models for future photovoltaic power generation in the USA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basore, Paul A.; Cole, Wesley J.
We explore the plausible range of future deployment of photovoltaic generation capacity in the USA using a supply-focused model based on supply-chain growth constraints and a demand-focused model based on minimizing the overall cost of the electricity system. Both approaches require assumptions based on previous experience and anticipated trends. For each of the models, we assign plausible ranges for the key assumptions and then compare the resulting PV deployment over time. Each model was applied to 2 different future scenarios: one in which PV market penetration is ultimately constrained by the uncontrolled variability of solar power and one in whichmore » low-cost energy storage or some equivalent measure largely alleviates this constraint. The supply-focused and demand-focused models are in substantial agreement, not just in the long term, where deployment is largely determined by the assumed market penetration constraints, but also in the interim years. For the future scenario without low-cost energy storage or equivalent measures, the 2 models give an average plausible range of PV generation capacity in the USA of 150 to 530 GWdc in 2030 and 260 to 810 GWdc in 2040. With low-cost energy storage or equivalent measures, the corresponding ranges are 160 to 630 GWdc in 2030 and 280 to 1200 GWdc in 2040. The latter range is enough to supply 10% to 40% of US electricity demand in 2040, based on current demand growth.« less
Enlightening Students about Dark Matter
NASA Astrophysics Data System (ADS)
Hamilton, Kathleen; Barr, Alex; Eidelman, Dave
2018-01-01
Dark matter pervades the universe. While it is invisible to us, we can detect its influence on matter we can see. To illuminate this concept, we have created an interactive javascript program illustrating predictions made by six different models for dark matter distributions in galaxies. Students are able to match the predicted data with actual experimental results, drawn from several astronomy papers discussing dark matter’s impact on galactic rotation curves. Programming each new model requires integration of density equations with parameters determined by nonlinear curve-fitting using MATLAB scripts we developed. Using our javascript simulation, students can determine the most plausible dark matter models as well as the average percentage of dark matter lurking in galaxies, areas where the scientific community is still continuing to research. In that light, we strive to use the most up-to-date and accepted concepts: two of our dark matter models are the pseudo-isothermal halo and Navarro-Frenk-White, and we integrate out to each galaxy’s virial radius. Currently, our simulation includes NGC3198, NGC2403, and our own Milky Way.
NASA Astrophysics Data System (ADS)
Jacobs-Crisioni, C.; Koopmans, C. C.
2016-07-01
This paper introduces a GIS-based model that simulates the geographic expansion of transport networks by several decision-makers with varying objectives. The model progressively adds extensions to a growing network by choosing the most attractive investments from a limited choice set. Attractiveness is defined as a function of variables in which revenue and broader societal benefits may play a role and can be based on empirically underpinned parameters that may differ according to private or public interests. The choice set is selected from an exhaustive set of links and presumably contains those investment options that best meet private operator's objectives by balancing the revenues of additional fare against construction costs. The investment options consist of geographically plausible routes with potential detours. These routes are generated using a fine-meshed regularly latticed network and shortest path finding methods. Additionally, two indicators of the geographic accuracy of the simulated networks are introduced. A historical case study is presented to demonstrate the model's first results. These results show that the modelled networks reproduce relevant results of the historically built network with reasonable accuracy.
Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.
Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał
2016-08-01
Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Seyrich, Maximilian; Sornette, Didier
2016-04-01
We present a plausible micro-founded model for the previously postulated power law finite time singular form of the crash hazard rate in the Johansen-Ledoit-Sornette (JLS) model of rational expectation bubbles. The model is based on a percolation picture of the network of traders and the concept that clusters of connected traders share the same opinion. The key ingredient is the notion that a shift of position from buyer to seller of a sufficiently large group of traders can trigger a crash. This provides a formula to estimate the crash hazard rate by summation over percolation clusters above a minimum size of a power sa (with a>1) of the cluster sizes s, similarly to a generalized percolation susceptibility. The power sa of cluster sizes emerges from the super-linear dependence of group activity as a function of group size, previously documented in the literature. The crash hazard rate exhibits explosive finite time singular behaviors when the control parameter (fraction of occupied sites, or density of traders in the network) approaches the percolation threshold pc. Realistic dynamics are generated by modeling the density of traders on the percolation network by an Ornstein-Uhlenbeck process, whose memory controls the spontaneous excursion of the control parameter close to the critical region of bubble formation. Our numerical simulations recover the main stylized properties of the JLS model with intermittent explosive super-exponential bubbles interrupted by crashes.
Robustness of Reconstructed Ancestral Protein Functions to Statistical Uncertainty.
Eick, Geeta N; Bridgham, Jamie T; Anderson, Douglas P; Harms, Michael J; Thornton, Joseph W
2017-02-01
Hypotheses about the functions of ancient proteins and the effects of historical mutations on them are often tested using ancestral protein reconstruction (APR)-phylogenetic inference of ancestral sequences followed by synthesis and experimental characterization. Usually, some sequence sites are ambiguously reconstructed, with two or more statistically plausible states. The extent to which the inferred functions and mutational effects are robust to uncertainty about the ancestral sequence has not been studied systematically. To address this issue, we reconstructed ancestral proteins in three domain families that have different functions, architectures, and degrees of uncertainty; we then experimentally characterized the functional robustness of these proteins when uncertainty was incorporated using several approaches, including sampling amino acid states from the posterior distribution at each site and incorporating the alternative amino acid state at every ambiguous site in the sequence into a single "worst plausible case" protein. In every case, qualitative conclusions about the ancestral proteins' functions and the effects of key historical mutations were robust to sequence uncertainty, with similar functions observed even when scores of alternate amino acids were incorporated. There was some variation in quantitative descriptors of function among plausible sequences, suggesting that experimentally characterizing robustness is particularly important when quantitative estimates of ancient biochemical parameters are desired. The worst plausible case method appears to provide an efficient strategy for characterizing the functional robustness of ancestral proteins to large amounts of sequence uncertainty. Sampling from the posterior distribution sometimes produced artifactually nonfunctional proteins for sequences reconstructed with substantial ambiguity. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Barry, Dwight; McDonald, Shea
2013-01-01
Climate change could significantly influence seasonal streamflow and water availability in the snowpack-fed watersheds of Washington, USA. Descriptions of snowpack decline often use linear ordinary least squares (OLS) models to quantify this change. However, the region's precipitation is known to be related to climate cycles. If snowpack decline is more closely related to these cycles, an OLS model cannot account for this effect, and thus both descriptions of trends and estimates of decline could be inaccurate. We used intervention analysis to determine whether snow water equivalent (SWE) in 25 long-term snow courses within the Olympic and Cascade Mountains are more accurately described by OLS (to represent gradual change), stationary (to represent no change), or step-stationary (to represent climate cycling) models. We used Bayesian information-theoretic methods to determine these models' relative likelihood, and we found 90 models that could plausibly describe the statistical structure of the 25 snow courses' time series. Posterior model probabilities of the 29 "most plausible" models ranged from 0.33 to 0.91 (mean = 0.58, s = 0.15). The majority of these time series (55%) were best represented as step-stationary models with a single breakpoint at 1976/77, coinciding with a major shift in the Pacific Decadal Oscillation. However, estimates of SWE decline differed by as much as 35% between statistically plausible models of a single time series. This ambiguity is a critical problem for water management policy. Approaches such as intervention analysis should become part of the basic analytical toolkit for snowpack or other climatic time series data.
Lu, Yanling; Longman, Emma; Davis, Kenneth G.; Ortega, Álvaro; Grossmann, J. Günter; Michaelsen, Terje E.; de la Torre, José García; Harding, Stephen E.
2006-01-01
Crystallohydrodynamics describes the domain orientation in solution of antibodies and other multidomain protein assemblies where the crystal structures may be known for the domains but not the intact structure. The approach removes the necessity for an ad hoc assumed value for protein hydration. Previous studies have involved only the sedimentation coefficient leading to considerable degeneracy or multiplicity of possible models for the conformation of a given protein assembly, all agreeing with the experimental data. This degeneracy can be considerably reduced by using additional solution parameters. Conformation charts are generated for the three universal (i.e., size-independent) shape parameters P (obtained from the sedimentation coefficient or translational diffusion coefficient), ν (from the intrinsic viscosity), and G (from the radius of gyration), and calculated for a wide range of plausible orientations of the domains (represented as bead-shell ellipsoidal models derived from their crystal structures) and after allowance for any linker or hinge regions. Matches are then sought with the set of functions P, ν, and G calculated from experimental data (allowing for experimental error). The number of solutions can be further reduced by the employment of the Dmax parameter (maximum particle dimension) from x-ray scattering data. Using this approach we are able to reduce the degeneracy of possible solution models for IgG3 to a possible representative structure in which the Fab domains are directed away from the plane of the Fc domain, a structure in accord with the recognition that IgG3 is the most efficient complement activator among human IgG subclasses. PMID:16766619
Continuous model for the rock-scissors-paper game between bacteriocin producing bacteria.
Neumann, Gunter; Schuster, Stefan
2007-06-01
In this work, important aspects of bacteriocin producing bacteria and their interplay are elucidated. Various attempts to model the resistant, producer and sensitive Escherichia coli strains in the so-called rock-scissors-paper (RSP) game had been made in the literature. The question arose whether there is a continuous model with a cyclic structure and admitting an oscillatory dynamics as observed in various experiments. The May-Leonard system admits a Hopf bifurcation, which is, however, degenerate and hence inadequate. The traditional differential equation model of the RSP-game cannot be applied either to the bacteriocin system because it involves positive interaction terms. In this paper, a plausible competitive Lotka-Volterra system model of the RSP game is presented and the dynamics generated by that model is analyzed. For the first time, a continuous, spatially homogeneous model that describes the competitive interaction between bacteriocin-producing, resistant and sensitive bacteria is established. The interaction terms have negative coefficients. In some experiments, for example, in mice cultures, migration seemed to be essential for the reinfection in the RSP cycle. Often statistical and spatial effects such as migration and mutation are regarded to be essential for periodicity. Our model gives rise to oscillatory dynamics in the RSP game without such effects. Here, a normal form description of the limit cycle and conditions for its stability are derived. The toxicity of the bacteriocin is used as a bifurcation parameter. Exact parameter ranges are obtained for which a stable (robust) limit cycle and a stable heteroclinic cycle exist in the three-species game. These parameters are in good accordance with the observed relations for the E. coli strains. The roles of growth rate and growth yield of the three strains are discussed. Numerical calculations show that the sensitive, which might be regarded as the weakest, can have the longest sojourn times.
Giant impactors - Plausible sizes and populations
NASA Technical Reports Server (NTRS)
Hartmann, William K.; Vail, S. M.
1986-01-01
The largest sizes of planetesimals required to explain spin properties of planets are investigated in the context of the impact-trigger hypothesis of lunar origin. Solar system models with different large impactor sources are constructed and stochastic variations in obliquities and rotation periods resulting from each source are studied. The present study finds it highly plausible that earth was struck by a body of about 0.03-0.12 earth masses with enough energy and angular momentum to dislodge mantle material and form the present earth-moon system.
Isolating the anthropogenic component of Arctic warming
Chylek, Petr; Hengartner, Nicholas; Lesins, Glen; ...
2014-05-28
Structural equation modeling is used in statistical applications as both confirmatory and exploratory modeling to test models and to suggest the most plausible explanation for a relationship between the independent and the dependent variables. Although structural analysis cannot prove causation, it can suggest the most plausible set of factors that influence the observed variable. Here, we apply structural model analysis to the annual mean Arctic surface air temperature from 1900 to 2012 to find the most effective set of predictors and to isolate the anthropogenic component of the recent Arctic warming by subtracting the effects of natural forcing and variabilitymore » from the observed temperature. We also find that anthropogenic greenhouse gases and aerosols radiative forcing and the Atlantic Multidecadal Oscillation internal mode dominate Arctic temperature variability. Finally, our structural model analysis of observational data suggests that about half of the recent Arctic warming of 0.64 K/decade may have anthropogenic causes.« less
van den Berg, Ronald; Roerdink, Jos B T M; Cornelissen, Frans W
2010-01-22
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.
The millimeter wave spectrum of silver monoxide, AgO
NASA Astrophysics Data System (ADS)
Steimle, T.; Tanimoto, M.; Namiki, K.; Saito, S.
1998-05-01
The pure rotational spectra of 107AgO and 109AgO were recorded in the 117-380 GHz spectral region using a dc-sputtering absorption cell. The 107Ag(I=1/2) and 109Ag(I=1/2) magnetic hyperfine parameters are interpreted in terms of plausible electronic configuration contributions to the X 2Πi state. It is shown that the determined unusual sign of the Λ-doubling and Fermi contact parameters implies that the X 2Πi state is dominated by a three open shell configuration. A comparison with isovalent CuO is made.
Morality Principles for Risk Modelling: Needs and Links with the Origins of Plausible Inference
NASA Astrophysics Data System (ADS)
Solana-Ortega, Alberto; Solana, Vicente
2009-12-01
In comparison with the foundations of probability calculus, the inescapable and controversial issue of how to assign probabilities has only recently become a matter of formal study. The introduction of information as a technical concept was a milestone, but the most promising entropic assignment methods still face unsolved difficulties, manifesting the incompleteness of plausible inference theory. In this paper we examine the situation faced by risk analysts in the critical field of extreme events modelling, where the former difficulties are especially visible, due to scarcity of observational data, the large impact of these phenomena and the obligation to assume professional responsibilities. To respond to the claim for a sound framework to deal with extremes, we propose a metafoundational approach to inference, based on a canon of extramathematical requirements. We highlight their strong moral content, and show how this emphasis in morality, far from being new, is connected with the historic origins of plausible inference. Special attention is paid to the contributions of Caramuel, a contemporary of Pascal, unfortunately ignored in the usual mathematical accounts of probability.
Optimal Cytoplasmic Transport in Viral Infections
D'Orsogna, Maria R.; Chou, Tom
2009-01-01
For many viruses, the ability to infect eukaryotic cells depends on their transport through the cytoplasm and across the nuclear membrane of the host cell. During this journey, viral contents are biochemically processed into complexes capable of both nuclear penetration and genomic integration. We develop a stochastic model of viral entry that incorporates all relevant aspects of transport, including convection along microtubules, biochemical conversion, degradation, and nuclear entry. Analysis of the nuclear infection probabilities in terms of the transport velocity, degradation, and biochemical conversion rates shows how certain values of key parameters can maximize the nuclear entry probability of the viral material. The existence of such “optimal” infection scenarios depends on the details of the biochemical conversion process and implies potentially counterintuitive effects in viral infection, suggesting new avenues for antiviral treatment. Such optimal parameter values provide a plausible transport-based explanation of the action of restriction factors and of experimentally observed optimal capsid stability. Finally, we propose a new interpretation of how genetic mutations unrelated to the mechanism of drug action may nonetheless confer novel types of overall drug resistance. PMID:20046829
Assessing the causal effect of policies: an example using stochastic interventions.
Díaz, Iván; van der Laan, Mark J
2013-11-19
Assessing the causal effect of an exposure often involves the definition of counterfactual outcomes in a hypothetical world in which the stochastic nature of the exposure is modified. Although stochastic interventions are a powerful tool to measure the causal effect of a realistic intervention that intends to alter the population distribution of an exposure, their importance to answer questions about plausible policy interventions has been obscured by the generalized use of deterministic interventions. In this article, we follow the approach described in Díaz and van der Laan (2012) to define and estimate the effect of an intervention that is expected to cause a truncation in the population distribution of the exposure. The observed data parameter that identifies the causal parameter of interest is established, as well as its efficient influence function under the non-parametric model. Inverse probability of treatment weighted (IPTW), augmented IPTW and targeted minimum loss-based estimators (TMLE) are proposed, their consistency and efficiency properties are determined. An extension to longitudinal data structures is presented and its use is demonstrated with a real data example.
Henrich, Andrea; Joerger, Markus; Kraff, Stefanie; Jaehde, Ulrich; Huisinga, Wilhelm; Kloft, Charlotte; Parra-Guillen, Zinnia Patricia
2017-08-01
Paclitaxel is a commonly used cytotoxic anticancer drug with potentially life-threatening toxicity at therapeutic doses and high interindividual pharmacokinetic variability. Thus, drug and effect monitoring is indicated to control dose-limiting neutropenia. Joerger et al. (2016) developed a dose individualization algorithm based on a pharmacokinetic (PK)/pharmacodynamic (PD) model describing paclitaxel and neutrophil concentrations. Furthermore, the algorithm was prospectively compared in a clinical trial against standard dosing (Central European Society for Anticancer Drug Research Study of Paclitaxel Therapeutic Drug Monitoring; 365 patients, 720 cycles) but did not substantially improve neutropenia. This might be caused by misspecifications in the PK/PD model underlying the algorithm, especially without consideration of the observed cumulative pattern of neutropenia or the platinum-based combination therapy, both impacting neutropenia. This work aimed to externally evaluate the original PK/PD model for potential misspecifications and to refine the PK/PD model while considering the cumulative neutropenia pattern and the combination therapy. An underprediction was observed for the PK (658 samples), the PK parameters, and these parameters were re-estimated using the original estimates as prior information. Neutrophil concentrations (3274 samples) were overpredicted by the PK/PD model, especially for later treatment cycles when the cumulative pattern aggravated neutropenia. Three different modeling approaches (two from the literature and one newly developed) were investigated. The newly developed model, which implemented the bone marrow hypothesis semiphysiologically, was superior. This model further included an additive effect for toxicity of carboplatin combination therapy. Overall, a physiologically plausible PK/PD model was developed that can be used for dose adaptation simulations and prospective studies to further improve paclitaxel/carboplatin combination therapy. Copyright © 2017 by The American Society for Pharmacology and Experimental Therapeutics.
An enhanced temperature index model for debris-covered glaciers accounting for thickness effect
NASA Astrophysics Data System (ADS)
Carenzo, M.; Pellicciotti, F.; Mabillard, J.; Reid, T.; Brock, B. W.
2016-08-01
Debris-covered glaciers are increasingly studied because it is assumed that debris cover extent and thickness could increase in a warming climate, with more regular rockfalls from the surrounding slopes and more englacial melt-out material. Debris energy-balance models have been developed to account for the melt rate enhancement/reduction due to a thin/thick debris layer, respectively. However, such models require a large amount of input data that are not often available, especially in remote mountain areas such as the Himalaya, and can be difficult to extrapolate. Due to their lower data requirements, empirical models have been used extensively in clean glacier melt modelling. For debris-covered glaciers, however, they generally simplify the debris effect by using a single melt-reduction factor which does not account for the influence of varying debris thickness on melt and prescribe a constant reduction for the entire melt across a glacier. In this paper, we present a new temperature-index model that accounts for debris thickness in the computation of melt rates at the debris-ice interface. The model empirical parameters are optimized at the point scale for varying debris thicknesses against melt rates simulated by a physically-based debris energy balance model. The latter is validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. We develop the model on Miage Glacier, Italy, and then test its transferability on Haut Glacier d'Arolla, Switzerland. The performance of the new debris temperature-index (DETI) model in simulating the glacier melt rate at the point scale is comparable to the one of the physically based approach, and the definition of model parameters as a function of debris thickness allows the simulation of the nonlinear relationship of melt rate to debris thickness, summarised by the Østrem curve. Its large number of parameters might be a limitation, but we show that the model is transferable in time and space to a second glacier with little loss of performance. We thus suggest that the new DETI model can be included in continuous mass balance models of debris-covered glaciers, because of its limited data requirements. As such, we expect its application to lead to an improvement in simulations of the debris-covered glacier response to climate in comparison with models that simply recalibrate empirical parameters to prescribe a constant across glacier reduction in melt.
An enhanced temperature index model for debris-covered glaciers accounting for thickness effect.
Carenzo, M; Pellicciotti, F; Mabillard, J; Reid, T; Brock, B W
2016-08-01
Debris-covered glaciers are increasingly studied because it is assumed that debris cover extent and thickness could increase in a warming climate, with more regular rockfalls from the surrounding slopes and more englacial melt-out material. Debris energy-balance models have been developed to account for the melt rate enhancement/reduction due to a thin/thick debris layer, respectively. However, such models require a large amount of input data that are not often available, especially in remote mountain areas such as the Himalaya, and can be difficult to extrapolate. Due to their lower data requirements, empirical models have been used extensively in clean glacier melt modelling. For debris-covered glaciers, however, they generally simplify the debris effect by using a single melt-reduction factor which does not account for the influence of varying debris thickness on melt and prescribe a constant reduction for the entire melt across a glacier. In this paper, we present a new temperature-index model that accounts for debris thickness in the computation of melt rates at the debris-ice interface. The model empirical parameters are optimized at the point scale for varying debris thicknesses against melt rates simulated by a physically-based debris energy balance model. The latter is validated against ablation stake readings and surface temperature measurements. Each parameter is then related to a plausible set of debris thickness values to provide a general and transferable parameterization. We develop the model on Miage Glacier, Italy, and then test its transferability on Haut Glacier d'Arolla, Switzerland. The performance of the new debris temperature-index (DETI) model in simulating the glacier melt rate at the point scale is comparable to the one of the physically based approach, and the definition of model parameters as a function of debris thickness allows the simulation of the nonlinear relationship of melt rate to debris thickness, summarised by the Østrem curve. Its large number of parameters might be a limitation, but we show that the model is transferable in time and space to a second glacier with little loss of performance. We thus suggest that the new DETI model can be included in continuous mass balance models of debris-covered glaciers, because of its limited data requirements. As such, we expect its application to lead to an improvement in simulations of the debris-covered glacier response to climate in comparison with models that simply recalibrate empirical parameters to prescribe a constant across glacier reduction in melt.
NASA Astrophysics Data System (ADS)
Anderson, Christian Carl
This Dissertation explores the physics underlying the propagation of ultrasonic waves in bone and in heart tissue through the use of Bayesian probability theory. Quantitative ultrasound is a noninvasive modality used for clinical detection, characterization, and evaluation of bone quality and cardiovascular disease. Approaches that extend the state of knowledge of the physics underpinning the interaction of ultrasound with inherently inhomogeneous and isotropic tissue have the potential to enhance its clinical utility. Simulations of fast and slow compressional wave propagation in cancellous bone were carried out to demonstrate the plausibility of a proposed explanation for the widely reported anomalous negative dispersion in cancellous bone. The results showed that negative dispersion could arise from analysis that proceeded under the assumption that the data consist of only a single ultrasonic wave, when in fact two overlapping and interfering waves are present. The confounding effect of overlapping fast and slow waves was addressed by applying Bayesian parameter estimation to simulated data, to experimental data acquired on bone-mimicking phantoms, and to data acquired in vitro on cancellous bone. The Bayesian approach successfully estimated the properties of the individual fast and slow waves even when they strongly overlapped in the acquired data. The Bayesian parameter estimation technique was further applied to an investigation of the anisotropy of ultrasonic properties in cancellous bone. The degree to which fast and slow waves overlap is partially determined by the angle of insonation of ultrasound relative to the predominant direction of trabecular orientation. In the past, studies of anisotropy have been limited by interference between fast and slow waves over a portion of the range of insonation angles. Bayesian analysis estimated attenuation, velocity, and amplitude parameters over the entire range of insonation angles, allowing a more complete characterization of anisotropy. A novel piecewise linear model for the cyclic variation of ultrasonic backscatter from myocardium was proposed. Models of cyclic variation for 100 type 2 diabetes patients and 43 normal control subjects were constructed using Bayesian parameter estimation. Parameters determined from the model, specifically rise time and slew rate, were found to be more reliable in differentiating between subject groups than the previously employed magnitude parameter.
Mountain, James E.; Santer, Peter; O’Neill, David P.; Smith, Nicholas M. J.; Ciaffoni, Luca; Couper, John H.; Ritchie, Grant A. D.; Hancock, Gus; Whiteley, Jonathan P.
2018-01-01
Inhomogeneity in the lung impairs gas exchange and can be an early marker of lung disease. We hypothesized that highly precise measurements of gas exchange contain sufficient information to quantify many aspects of the inhomogeneity noninvasively. Our aim was to explore whether one parameterization of lung inhomogeneity could both fit such data and provide reliable parameter estimates. A mathematical model of gas exchange in an inhomogeneous lung was developed, containing inhomogeneity parameters for compliance, vascular conductance, and dead space, all relative to lung volume. Inputs were respiratory flow, cardiac output, and the inspiratory and pulmonary arterial gas compositions. Outputs were expiratory and pulmonary venous gas compositions. All values were specified every 10 ms. Some parameters were set to physiologically plausible values. To estimate the remaining unknown parameters and inputs, the model was embedded within a nonlinear estimation routine to minimize the deviations between model and data for CO2, O2, and N2 flows during expiration. Three groups, each of six individuals, were studied: young (20–30 yr); old (70–80 yr); and patients with mild to moderate chronic obstructive pulmonary disease (COPD). Each participant undertook a 15-min measurement protocol six times. For all parameters reflecting inhomogeneity, highly significant differences were found between the three participant groups (P < 0.001, ANOVA). Intraclass correlation coefficients were 0.96, 0.99, and 0.94 for the parameters reflecting inhomogeneity in deadspace, compliance, and vascular conductance, respectively. We conclude that, for the particular participants selected, highly repeatable estimates for parameters reflecting inhomogeneity could be obtained from noninvasive measurements of respiratory gas exchange. NEW & NOTEWORTHY This study describes a new method, based on highly precise measures of gas exchange, that quantifies three distributions that are intrinsic to the lung. These distributions represent three fundamentally different types of inhomogeneity that together give rise to ventilation-perfusion mismatch and result in impaired gas exchange. The measurement technique has potentially broad clinical applicability because it is simple for both patient and operator, it does not involve ionizing radiation, and it is completely noninvasive. PMID:29074714
Steps Toward Unveiling the True Population of AGN: Photometric Selection of Broad-Line AGN
NASA Astrophysics Data System (ADS)
Schneider, Evan; Impey, C.
2012-01-01
We present an AGN selection technique that enables identification of broad-line AGN using only photometric data. An extension of infrared selection techniques, our method involves fitting a given spectral energy distribution with a model consisting of three physically motivated components: infrared power law emission, optical accretion disk emission, and host galaxy emission. Each component can be varied in intensity, and a reduced chi-square minimization routine is used to determine the optimum parameters for each object. Using this model, both broad- and narrow-line AGN are seen to fall within discrete ranges of parameter space that have plausible bounds, allowing physical trends with luminosity and redshift to be determined. Based on a fiducial sample of AGN from the catalog of Trump et al. (2009), we find the region occupied by broad-line AGN to be distinct from that of quiescent or star-bursting galaxies. Because this technique relies only on photometry, it will allow us to find AGN at fainter magnitudes than are accessible in spectroscopic surveys, and thus probe a population of less luminous and/or higher redshift objects. With the vast availability of photometric data in large surveys, this technique should have broad applicability and result in large samples that will complement X-ray AGN catalogs.
Tepekule, Burcu; Uecker, Hildegard; Derungs, Isabel; Frenoy, Antoine; Bonhoeffer, Sebastian
2017-09-01
Multiple treatment strategies are available for empiric antibiotic therapy in hospitals, but neither clinical studies nor theoretical investigations have yielded a clear picture when which strategy is optimal and why. Extending earlier work of others and us, we present a mathematical model capturing treatment strategies using two drugs, i.e the multi-drug therapies referred to as cycling, mixing, and combination therapy, as well as monotherapy with either drug. We randomly sample a large parameter space to determine the conditions determining success or failure of these strategies. We find that combination therapy tends to outperform the other treatment strategies. By using linear discriminant analysis and particle swarm optimization, we find that the most important parameters determining success or failure of combination therapy relative to the other treatment strategies are the de novo rate of emergence of double resistance in patients infected with sensitive bacteria and the fitness costs associated with double resistance. The rate at which double resistance is imported into the hospital via patients admitted from the outside community has little influence, as all treatment strategies are affected equally. The parameter sets for which combination therapy fails tend to fall into areas with low biological plausibility as they are characterised by very high rates of de novo emergence of resistance to both drugs compared to a single drug, and the cost of double resistance is considerably smaller than the sum of the costs of single resistance.
NASA Astrophysics Data System (ADS)
Mergili, Martin; Fischer, Jan-Thomas; Krenn, Julia; Pudasaini, Shiva P.
2017-02-01
r.avaflow represents an innovative open-source computational tool for routing rapid mass flows, avalanches, or process chains from a defined release area down an arbitrary topography to a deposition area. In contrast to most existing computational tools, r.avaflow (i) employs a two-phase, interacting solid and fluid mixture model (Pudasaini, 2012); (ii) is suitable for modelling more or less complex process chains and interactions; (iii) explicitly considers both entrainment and stopping with deposition, i.e. the change of the basal topography; (iv) allows for the definition of multiple release masses, and/or hydrographs; and (v) serves with built-in functionalities for validation, parameter optimization, and sensitivity analysis. r.avaflow is freely available as a raster module of the GRASS GIS software, employing the programming languages Python and C along with the statistical software R. We exemplify the functionalities of r.avaflow by means of two sets of computational experiments: (1) generic process chains consisting in bulk mass and hydrograph release into a reservoir with entrainment of the dam and impact downstream; (2) the prehistoric Acheron rock avalanche, New Zealand. The simulation results are generally plausible for (1) and, after the optimization of two key parameters, reasonably in line with the corresponding observations for (2). However, we identify some potential to enhance the analytic and numerical concepts. Further, thorough parameter studies will be necessary in order to make r.avaflow fit for reliable forward simulations of possible future mass flow events.
Neural theory for the perception of causal actions.
Fleischer, Falk; Christensen, Andrea; Caggiano, Vittorio; Thier, Peter; Giese, Martin A
2012-07-01
The efficient prediction of the behavior of others requires the recognition of their actions and an understanding of their action goals. In humans, this process is fast and extremely robust, as demonstrated by classical experiments showing that human observers reliably judge causal relationships and attribute interactive social behavior to strongly simplified stimuli consisting of simple moving geometrical shapes. While psychophysical experiments have identified critical visual features that determine the perception of causality and agency from such stimuli, the underlying detailed neural mechanisms remain largely unclear, and it is an open question why humans developed this advanced visual capability at all. We created pairs of naturalistic and abstract stimuli of hand actions that were exactly matched in terms of their motion parameters. We show that varying critical stimulus parameters for both stimulus types leads to very similar modulations of the perception of causality. However, the additional form information about the hand shape and its relationship with the object supports more fine-grained distinctions for the naturalistic stimuli. Moreover, we show that a physiologically plausible model for the recognition of goal-directed hand actions reproduces the observed dependencies of causality perception on critical stimulus parameters. These results support the hypothesis that selectivity for abstract action stimuli might emerge from the same neural mechanisms that underlie the visual processing of natural goal-directed action stimuli. Furthermore, the model proposes specific detailed neural circuits underlying this visual function, which can be evaluated in future experiments.
Piantadosi, Steven T.; Hayden, Benjamin Y.
2015-01-01
Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically ones in which dimensions are can be decomposed into additive functions) into a heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice. PMID:25914613
An argument for mechanism-based statistical inference in cancer
Ochs, Michael; Price, Nathan D.; Tomasetti, Cristian; Younes, Laurent
2015-01-01
Cancer is perhaps the prototypical systems disease, and as such has been the focus of extensive study in quantitative systems biology. However, translating these programs into personalized clinical care remains elusive and incomplete. In this perspective, we argue that realizing this agenda—in particular, predicting disease phenotypes, progression and treatment response for individuals—requires going well beyond standard computational and bioinformatics tools and algorithms. It entails designing global mathematical models over network-scale configurations of genomic states and molecular concentrations, and learning the model parameters from limited available samples of high-dimensional and integrative omics data. As such, any plausible design should accommodate: biological mechanism, necessary for both feasible learning and interpretable decision making; stochasticity, to deal with uncertainty and observed variation at many scales; and a capacity for statistical inference at the patient level. This program, which requires a close, sustained collaboration between mathematicians and biologists, is illustrated in several contexts, including learning bio-markers, metabolism, cell signaling, network inference and tumorigenesis. PMID:25381197
A mathematical model of diurnal variations in human plasma melatonin levels
NASA Technical Reports Server (NTRS)
Brown, E. N.; Choe, Y.; Shanahan, T. L.; Czeisler, C. A.
1997-01-01
Studies in animals and humans suggest that the diurnal pattern in plasma melatonin levels is due to the hormone's rates of synthesis, circulatory infusion and clearance, circadian control of synthesis onset and offset, environmental lighting conditions, and error in the melatonin immunoassay. A two-dimensional linear differential equation model of the hormone is formulated and is used to analyze plasma melatonin levels in 18 normal healthy male subjects during a constant routine. Recently developed Bayesian statistical procedures are used to incorporate correctly the magnitude of the immunoassay error into the analysis. The estimated parameters [median (range)] were clearance half-life of 23.67 (14.79-59.93) min, synthesis onset time of 2206 (1940-0029), synthesis offset time of 0621 (0246-0817), and maximum N-acetyltransferase activity of 7.17(2.34-17.93) pmol x l(-1) x min(-1). All were in good agreement with values from previous reports. The difference between synthesis offset time and the phase of the core temperature minimum was 1 h 15 min (-4 h 38 min-2 h 43 min). The correlation between synthesis onset and the dim light melatonin onset was 0.93. Our model provides a more physiologically plausible estimate of the melatonin synthesis onset time than that given by the dim light melatonin onset and the first reliable means of estimating the phase of synthesis offset. Our analysis shows that the circadian and pharmacokinetics parameters of melatonin can be reliably estimated from a single model.
NASA Astrophysics Data System (ADS)
Schindewolf, Marcus; Schultze, Nico; Amorim, Ricardo S. S.; Schmidt, Jürgen
2015-04-01
The corridor along the Brazilian Highway 163 in the Southern Amazon is affected by radical changes in land use patterns. In order to enable a model based assessment of erosion risks on different land use and soil types a transportable disc type rainfall simulator is applied to identify the most important infiltration and erosion parameters of the EROSION 3D model. Since particle detachment highly depends on experimental plot length, a combined runoff supply is used for the virtually extension of the plot length to more than 20 m. Simulations were conducted on the most common regional land use, soil management and soil types for dry and wet runs. The experiments are characterized by high final infiltration rates (0.3 - 2.5 mm*min^-1), low sediment concentrations (0.2-6.5 g*L^-1) and accordingly low soil loss rates (0.002-50 Kg*m^-2), strongly related to land use, applied management and soil type. Ploughed pastures and clear cuts reveal highest soil losses whereas croplands are less affected. Due to higher aggregate stabilities Ferrasols are less endangered than Acrisols. Derived model parameters are plausible, comparable to existing data bases and reproduce the effects of land use and soil management on soil loss. Thus it is possible to apply the EROSION 3D soil loss model in Southern Amazonia for erosion risk assessment and scenario simulation under changing climate and land use conditions.
Reilly, Thomas E.; Plummer, Niel; Phillips, Patrick J.; Busenberg, Eurybiades
1994-01-01
Measurements of the concentrations of chlorofluorocarbons (CFCs), tritium, and other environmental tracers can be used to calculate recharge ages of shallow groundwater and estimate rates of groundwater movement. Numerical simulation also provides quantitative estimates of flow rates, flow paths, and mixing properties of the groundwater system. The environmental tracer techniques and the hydraulic analyses each contribute to the understanding and quantification of the flow of shallow groundwater. However, when combined, the two methods provide feedback that improves the quantification of the flow system and provides insight into the processes that are the most uncertain. A case study near Locust Grove, Maryland, is used to investigate the utility of combining groundwater age dating, based on CFCs and tritium, and hydraulic analyses using numerical simulation techniques. The results of the feedback between an advective transport model and the estimates of groundwater ages determined by the CFCs improve a quantitative description of the system by refining the system conceptualization and estimating system parameters. The plausible system developed with this feedback between the advective flow model and the CFC ages is further tested using a solute transport simulation to reproduce the observed tritium distribution in the groundwater. The solute transport simulation corroborates the plausible system developed and also indicates that, for the system under investigation with the data obtained from 0.9-m-long (3-foot-long) well screens, the hydrodynamic dispersion is negligible. Together the two methods enable a coherent explanation of the flow paths and rates of movement while indicating weaknesses in the understanding of the system that will require future data collection and conceptual refinement of the groundwater system.
Mode of action in relevance of rodent liver tumors to human cancer risk.
Holsapple, Michael P; Pitot, Henri C; Cohen, Samuel M; Cohen, Samuel H; Boobis, Alan R; Klaunig, James E; Pastoor, Timothy; Dellarco, Vicki L; Dragan, Yvonne P
2006-01-01
Hazard identification and risk assessment paradigms depend on the presumption of the similarity of rodents to humans, yet species specific responses, and the extrapolation of high-dose effects to low-dose exposures can affect the estimation of human risk from rodent data. As a consequence, a human relevance framework concept was developed by the International Programme on Chemical Safety (IPCS) and International Life Sciences Institute (ILSI) Risk Science Institute (RSI) with the central tenet being the identification of a mode of action (MOA). To perform a MOA analysis, the key biochemical, cellular, and molecular events need to first be established, and the temporal and dose-dependent concordance of each of the key events in the MOA can then be determined. The key events can be used to bridge species and dose for a given MOA. The next step in the MOA analysis is the assessment of biological plausibility for determining the relevance of the specified MOA in an animal model for human cancer risk based on kinetic and dynamic parameters. Using the framework approach, a MOA in animals could not be defined for metal overload. The MOA for phenobarbital (PB)-like P450 inducers was determined to be unlikely in humans after kinetic and dynamic factors were considered. In contrast, after these factors were considered with reference to estrogen, the conclusion was drawn that estrogen-induced tumors were plausible in humans. Finally, it was concluded that the induction of rodent liver tumors by porphyrogenic compounds followed a cytotoxic MOA, and that liver tumors formed as a result of sustained cytotoxicity and regenerative proliferation are considered relevant for evaluating human cancer risk if appropriate metabolism occurs in the animal models and in humans.
DOT National Transportation Integrated Search
2002-01-01
Business models and cost recovery are the critical factors for determining the sustainability of the traveler information service, and 511. In March 2001 the Policy Committee directed the 511 Working Group to investigate plausible business models and...
Capture of Planetesimals into a Circumterrestrial Swarm
NASA Technical Reports Server (NTRS)
Weidenschilling, S. J.
1985-01-01
The lunar origin model considered in this report involves processing of protolunar material through a circumterrestrial swarm of particles. Once such a swarm has formed, it can gain mass by capturing infalling planetesimals and ejecta from giant impacts on the Earth, although the angular momentum supply from these sources remains a problem. The first stage of formation of a geocentric swarm by capture of planetesimals from initially heliocentric orbits is examined. The only plausible capture mechanism that is not dependent on very low approach velocities is the mutual collision of planetesimals passing within Earth's sphere of influence. The dissipation of energy in inelastic collisions or accretion events changes the value of the Jacobi parameter, allowing capture into bound geocentric orbits. This capture scenario was tested directly by many body numerical integration of planetesimal orbits in near Earth space.
Armstrong, Mitchell R; Senthilnathan, Sethuraman; Balzer, Christopher J; Shan, Bohan; Chen, Liang; Mu, Bin
2017-01-01
Systematic studies of key operating parameters for the sonochemical synthesis of the metal organic framework (MOF) HKUST-1(also called CuBTC) were performed including reaction time, reactor volume, sonication amplitude, sonication tip size, solvent composition, and reactant concentrations analyzed through SEM particle size analysis. Trends in the particle size and size distributions show reproducible control of average particle sizes between 1 and 4μm. These results along with complementary studies in sonofragmentation and temperature control were conducted to compare these results to kinetic crystal growth models found in literature to develop a plausible hypothetical mechanism for ultrasound-assisted growth of metal-organic-frameworks composed of a competitive mechanism including constructive solid-on-solid (SOS) crystal growth and a deconstructive sonofragmentation. Copyright © 2016 Elsevier B.V. All rights reserved.
Random topologies and the emergence of cooperation: the role of short-cuts
NASA Astrophysics Data System (ADS)
Vilone, D.; Sánchez, A.; Gómez-Gardeñes, J.
2011-04-01
We study in detail the role of short-cuts in promoting the emergence of cooperation in a network of agents playing the Prisoner's Dilemma game (PDG). We introduce a model whose topology interpolates between the one-dimensional Euclidean lattice (a ring) and the complete graph by changing the value of one parameter (the probability p of adding a link between two nodes not already connected in the Euclidean configuration). We show that there is a region of values of p in which cooperation is greatly enhanced, whilst for smaller values of p only a few cooperators are present in the final state, and for p\\rightarrow 1^- cooperation is totally suppressed. We present analytical arguments that provide a very plausible interpretation of the simulation results, thus unveiling the mechanism by which short-cuts contribute to promoting (or suppressing) cooperation.
Ricklefs, Robert E; Bermingham, Eldredge
2004-08-01
Understanding patterns of diversity can be furthered by analysis of the dynamics of colonization, speciation, and extinction on islands using historical information provided by molecular phylogeography. The land birds of the Lesser Antilles are one of the most thoroughly described regional faunas in this context. In an analysis of colonization times, Ricklefs and Bermingham (2001) found that the cumulative distribution of lineages with respect to increasing time since colonization exhibits a striking change in slope at a genetic distance of about 2% mitochondrial DNA sequence divergence (about one million years). They further showed how this heterogeneity could be explained by either an abrupt increase in colonization rates or a mass extinction event. Cherry et al. (2002), referring to a model developed by Johnson et al. (2000), argued instead that the pattern resulted from a speciation threshold for reproductive isolation of island populations from their continental source populations. Prior to this threshold, genetic divergence is slowed by migration from the source, and species of varying age accumulate at a low genetic distance. After the threshold is reached, source and island populations diverge more rapidly, creating heterogeneity in the distribution of apparent ages of island taxa. We simulated of Johnson et al.'s speciation-threshold model, incorporating genetic divergence at rate k and fixation at rate M of genes that have migrated between the source and the island population. Fixation resets the divergence clock to zero. The speciation-threshold model fits the distribution of divergence times of Lesser Antillean birds well with biologically plausible parameter estimates. Application of the model to the Hawaiian avifauna, which does not exhibit marked heterogeneity of genetic divergence, and the West Indian herpetofauna, which does, required unreasonably high migration-fixation rates, several orders of magnitude greater than the colonization rate. However, the plausibility of the speciation-divergence model for Lesser Antillean birds emphasizes the importance of further investigation of historical biogeography on a regional scale for whole biotas, as well as the migration of genes between populations on long time scales and the achievement of reproductive isolation.
Borland, Ron; Villanti, Andrea C.; Niaura, Raymond; Yuan, Zhe; Zhang, Yian; Meza, Rafael; Holford, Theodore R.; Fong, Geoffrey T.; Cummings, K. Michael; Abrams, David B.
2017-01-01
Introduction: The public health impact of vaporized nicotine products (VNPs) such as e-cigarettes is unknown at this time. VNP uptake may encourage or deflect progression to cigarette smoking in those who would not have otherwise smoked, thereby undermining or accelerating reductions in smoking prevalence seen in recent years. Methods: The public health impact of VNP use are modeled in terms of how it alters smoking patterns among those who would have otherwise smoked cigarettes and among those who would not have otherwise smoked cigarettes in the absence of VNPs. The model incorporates transitions from trial to established VNP use, transitions to exclusive VNP and dual use, and the effects of cessation at later ages. Public health impact on deaths and life years lost is estimated for a recent birth cohort incorporating evidence-informed parameter estimates. Results: Based on current use patterns and conservative assumptions, we project a reduction of 21% in smoking-attributable deaths and of 20% in life years lost as a result of VNP use by the 1997 US birth cohort compared to a scenario without VNPs. In sensitivity analysis, health gains from VNP use are especially sensitive to VNP risks and VNP use rates among those likely to smoke cigarettes. Conclusions: Under most plausible scenarios, VNP use generally has a positive public health impact. However, very high VNP use rates could result in net harms. More accurate projections of VNP impacts will require better longitudinal measures of transitions into and out of VNP, cigarette and dual use. Implications: Previous models of VNP use do not incorporate whether youth and young adults initiating VNP would have been likely to have been a smoker in the absence of VNPs. This study provides a decision-theoretic model of VNP use in a young cohort that incorporates tendencies toward smoking and shows that, under most plausible scenarios, VNP use yields public health gains. The model makes explicit the type of surveillance information needed to better estimate the effect of new products and thereby inform public policy. PMID:27613952
Testing adaptive toolbox models: a Bayesian hierarchical approach.
Scheibehenne, Benjamin; Rieskamp, Jörg; Wagenmakers, Eric-Jan
2013-01-01
Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox framework. How can a toolbox model be quantitatively specified? How can the number of toolbox strategies be limited to prevent uncontrolled strategy sprawl? How can a toolbox model be formally tested against alternative theories? The authors show how these challenges can be met by using Bayesian inference techniques. By means of parameter recovery simulations and the analysis of empirical data across a variety of domains (i.e., judgment and decision making, children's cognitive development, function learning, and perceptual categorization), the authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolbox models to be rigorously tested against competing theories. The authors demonstrate that their approach applies at the individual level but can also be generalized to the group level with hierarchical Bayesian procedures. The suggested Bayesian inference techniques represent a theoretical and methodological advancement for toolbox theories of cognition and behavior.
Nodal liquids in extended t-J models and dynamical supersymmetry
NASA Astrophysics Data System (ADS)
Mavromatos, Nick E.; Sarkar, Sarben
2000-08-01
In the context of extended t-J models, with intersite Coulomb interactions of the form -V∑ninj, with ni denoting the electron number operator at site i, nodal liquids are discussed. We use the spin-charge separation ansatz as applied to the nodes of a d-wave superconducting gap. Such a situation may be of relevance to the physics of high-temperature superconductivity. We point out the possibility of existence of certain points in the parameter space of the model characterized by dynamical supersymmetries between the spinon and holon degrees of freedom, which are quite different from the symmetries in conventional supersymmetric t-J models. Such symmetries pertain to the continuum effective-field theory of the nodal liquid, and one's hope is that the ancestor lattice model may differ from the continuum theory only by renormalization-group irrelevant operators in the infrared. We give plausible arguments that nodal liquids at such supersymmetric points are characterized by superconductivity of Kosterlitz-Thouless type. The fact that quantum fluctuations around such points can be studied in a controlled way, probably makes such systems of special importance for an eventual nonperturbative understanding of the complex phase diagram of the associated high-temperature superconducting materials.
Comparison of two integration methods for dynamic causal modeling of electrophysiological data.
Lemaréchal, Jean-Didier; George, Nathalie; David, Olivier
2018-06-01
Dynamic causal modeling (DCM) is a methodological approach to study effective connectivity among brain regions. Based on a set of observations and a biophysical model of brain interactions, DCM uses a Bayesian framework to estimate the posterior distribution of the free parameters of the model (e.g. modulation of connectivity) and infer architectural properties of the most plausible model (i.e. model selection). When modeling electrophysiological event-related responses, the estimation of the model relies on the integration of the system of delay differential equations (DDEs) that describe the dynamics of the system. In this technical note, we compared two numerical schemes for the integration of DDEs. The first, and standard, scheme approximates the DDEs (more precisely, the state of the system, with respect to conduction delays among brain regions) using ordinary differential equations (ODEs) and solves it with a fixed step size. The second scheme uses a dedicated DDEs solver with adaptive step sizes to control error, making it theoretically more accurate. To highlight the effects of the approximation used by the first integration scheme in regard to parameter estimation and Bayesian model selection, we performed simulations of local field potentials using first, a simple model comprising 2 regions and second, a more complex model comprising 6 regions. In these simulations, the second integration scheme served as the standard to which the first one was compared. Then, the performances of the two integration schemes were directly compared by fitting a public mismatch negativity EEG dataset with different models. The simulations revealed that the use of the standard DCM integration scheme was acceptable for Bayesian model selection but underestimated the connectivity parameters and did not allow an accurate estimation of conduction delays. Fitting to empirical data showed that the models systematically obtained an increased accuracy when using the second integration scheme. We conclude that inference on connectivity strength and delay based on DCM for EEG/MEG requires an accurate integration scheme. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Infrared surface photometry of 3C 65: Stellar evolution and the Tolman signal
NASA Astrophysics Data System (ADS)
Rigler, M. A.; Lilly, S. J.
1994-06-01
We present an analysis of the infrared surface brightness profile of the high-redshift radio galaxy 3C 65 (z = 1.176), which is well fitted by a de Vaucouleurs r1/4 law. A model surface fitting routine yields characteristic photometric parameters comparable to those of low-redshift radio galaxies and brightest cluster members (BCMs) in standard cosmologies. The small displacement of this galaxy from the locus of low-redshift systems on the mur - log(re) plane suggests that little or no luminosity evolution is required in a cosmological model with (Omega0, lambda0 = (1,0), while a modest degree of luminosity evolution, accountable by passive evolution of the stellar population, is implied in models with (0, 0) or (0.1, 0.9). A nonexpanding cosmology is unlikely because it would require 3C 65 to lie at the extreme end of the distribution of properties of local gE galaxies, and the effects of plausible stellar and/or dynamic evolution would make 3C 65 even more extreme by the present epoch.
Modeling CO2 mass transfer in amine mixtures: PZ-AMP and PZ-MDEA.
Puxty, Graeme; Rowland, Robert
2011-03-15
The most common method of carbon dioxide (CO(2)) capture is the absorption of CO(2) into a falling thin film of an aqueous amine solution. Modeling of mass transfer during CO(2) absorption is an important way to gain insight and understanding about the underlying processes that are occurring. In this work a new software tool has been used to model CO(2) absorption into aqueous piperazine (PZ) and binary mixtures of PZ with 2-amino-2-methyl-1-propanol (AMP) or methyldiethanolamine (MDEA). The tool solves partial differential and simultaneous equations describing diffusion and chemical reaction automatically derived from reactions written using chemical notation. It has been demonstrated that by using reactions that are chemically plausible the mass transfer in binary mixtures can be fully described by combining the chemical reactions and their associated parameters determined for single amines. The observed enhanced mass transfer in binary mixtures can be explained through chemical interactions occurring in the mixture without need to resort to using additional reactions or unusual transport phenomena such as the "shuttle mechanism".
On the Importance of the Dynamics of Discretizations
NASA Technical Reports Server (NTRS)
Sweby, Peter K.; Yee, H. C.; Rai, ManMohan (Technical Monitor)
1995-01-01
It has been realized recently that the discrete maps resulting from numerical discretizations of differential equations can possess asymptotic dynamical behavior quite different from that of the original systems. This is the case not only for systems of Ordinary Differential Equations (ODEs) but in a more complicated manner for Partial Differential Equations (PDEs) used to model complex physics. The impact of the modified dynamics may be mild and even not observed for some numerical methods. For other classes of discretizations the impact may be pronounced, but not always obvious depending on the nonlinear model equations, the time steps, the grid spacings and the initial conditions. Non-convergence or convergence to periodic solutions might be easily recognizable but convergence to incorrect but plausible solutions may not be so obvious - even for discretized parameters within the linearized stability constraint. Based on our past four years of research, we will illustrate some of the pathology of the dynamics of discretizations, its possible impact and the usage of these schemes for model nonlinear ODEs, convection-diffusion equations and grid adaptations.
El-Atwani, O.; Norris, S. A.; Ludwig, K.; ...
2015-12-16
In this study, several proposed mechanisms and theoretical models exist concerning nanostructure evolution on III-V semiconductors (particularly GaSb) via ion beam irradiation. However, making quantitative contact between experiment on the one hand and model-parameter dependent predictions from different theories on the other is usually difficult. In this study, we take a different approach and provide an experimental investigation with a range of targets (GaSb, GaAs, GaP) and ion species (Ne, Ar, Kr, Xe) to determine new parametric trends regarding nanostructure evolution. Concurrently, atomistic simulations using binary collision approximation over the same ion/target combinations were performed to determine parametric trends onmore » several quantities related to existing model. A comparison of experimental and numerical trends reveals that the two are broadly consistent under the assumption that instabilities are driven by chemical instability based on phase separation. Furthermore, the atomistic simulations and a survey of material thermodynamic properties suggest that a plausible microscopic mechanism for this process is an ion-enhanced mobility associated with energy deposition by collision cascades.« less
Effect of diseases on symbiotic systems.
Tiwari, Pankaj Kumar; Sasmal, Sourav Kumar; Sha, Amar; Venturino, Ezio; Chattopadhyay, Joydev
2017-09-01
There are many species living in symbiotic communities. In this study, we analyzed models in which populations are in the mutualism symbiotic relations subject to a disease spreading among one of the species. The main goal is the characterization of symbiotic relations of coexisting species through their mutual influences on their respective carrying capacities, taking into account that this influence can be quite strong. The functional dependence of the carrying capacities reflects the fact that the correlations between populations cannot be realized merely through direct interactions, as in the usual predator-prey Lotka-Volterra model, but also through the influence of each species on the carrying capacities of the other one. Equilibria are analyzed for feasibility and stability, substantiated via numerical simulations, and global sensitivity analysis identifies the important parameters having a significant impact on the model dynamics. The infective growth rate and the disease-related mortality rate may alter the stability behavior of the system. Our results show that introducing a symbiotic species is a plausible way to control the disease in the population. Copyright © 2017 Elsevier B.V. All rights reserved.
The role of selective predation in harmful algal blooms
NASA Astrophysics Data System (ADS)
Solé, Jordi; Garcia-Ladona, Emilio; Estrada, Marta
2006-08-01
A feature of marine plankton communities is the occurrence of rapid population explosions. When the blooming species are directly or indirectly noxious for humans, these proliferations are denoted as harmful algal blooms (HAB). The importance of biological interactions for the appearance of HABs, in particular when the proliferating microalgae produce toxins that affect other organisms in the food web, remains still poorly understood. Here we analyse the role of toxins produced by a microalgal species and affecting its predators, in determining the success of that species as a bloom former. A three-species predator-prey model is used to define a criterion that determines whether a toxic microalga will be able to initiate a bloom in competition against a non-toxic one with higher growth rate. Dominance of the toxic species depends on a critical parameter that defines the degree of feeding selectivity by grazers. The criterion is applied to a particular simplified model and to numerical simulations of a full marine ecosystem model. The results suggest that the release of toxic compounds affecting predators may be a plausible biological factor in allowing the development of HABs.
Generalised filtering and stochastic DCM for fMRI.
Li, Baojuan; Daunizeau, Jean; Stephan, Klaas E; Penny, Will; Hu, Dewen; Friston, Karl
2011-09-15
This paper is about the fitting or inversion of dynamic causal models (DCMs) of fMRI time series. It tries to establish the validity of stochastic DCMs that accommodate random fluctuations in hidden neuronal and physiological states. We compare and contrast deterministic and stochastic DCMs, which do and do not ignore random fluctuations or noise on hidden states. We then compare stochastic DCMs, which do and do not ignore conditional dependence between hidden states and model parameters (generalised filtering and dynamic expectation maximisation, respectively). We first characterise state-noise by comparing the log evidence of models with different a priori assumptions about its amplitude, form and smoothness. Face validity of the inversion scheme is then established using data simulated with and without state-noise to ensure that DCM can identify the parameters and model that generated the data. Finally, we address construct validity using real data from an fMRI study of internet addiction. Our analyses suggest the following. (i) The inversion of stochastic causal models is feasible, given typical fMRI data. (ii) State-noise has nontrivial amplitude and smoothness. (iii) Stochastic DCM has face validity, in the sense that Bayesian model comparison can distinguish between data that have been generated with high and low levels of physiological noise and model inversion provides veridical estimates of effective connectivity. (iv) Relaxing conditional independence assumptions can have greater construct validity, in terms of revealing group differences not disclosed by variational schemes. Finally, we note that the ability to model endogenous or random fluctuations on hidden neuronal (and physiological) states provides a new and possibly more plausible perspective on how regionally specific signals in fMRI are generated. Copyright © 2011. Published by Elsevier Inc.
Moore, C.T.; Conroy, M.J.
2006-01-01
Stochastic and structural uncertainties about forest dynamics present challenges in the management of ephemeral habitat conditions for endangered forest species. Maintaining critical foraging and breeding habitat for the endangered red-cockaded woodpecker (Picoides borealis) requires an uninterrupted supply of old-growth forest. We constructed and optimized a dynamic forest growth model for the Piedmont National Wildlife Refuge (Georgia, USA) with the objective of perpetuating a maximum stream of old-growth forest habitat. Our model accommodates stochastic disturbances and hardwood succession rates, and uncertainty about model structure. We produced a regeneration policy that was indexed by current forest state and by current weight of evidence among alternative model forms. We used adaptive stochastic dynamic programming, which anticipates that model probabilities, as well as forest states, may change through time, with consequent evolution of the optimal decision for any given forest state. In light of considerable uncertainty about forest dynamics, we analyzed a set of competing models incorporating extreme, but plausible, parameter values. Under any of these models, forest silviculture practices currently recommended for the creation of woodpecker habitat are suboptimal. We endorse fully adaptive approaches to the management of endangered species habitats in which predictive modeling, monitoring, and assessment are tightly linked.
NASA Astrophysics Data System (ADS)
Abadie, J.; Abbott, B. P.; Abbott, R.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adhikari, R.; Ajith, P.; Allen, B.; Allen, G.; Amador Ceron, E.; Amin, R. S.; Anderson, S. B.; Anderson, W. G.; Antonucci, F.; Aoudia, S.; Arain, M. A.; Araya, M.; Aronsson, M.; Arun, K. G.; Aso, Y.; Aston, S.; Astone, P.; Atkinson, D. E.; Aufmuth, P.; Aulbert, C.; Babak, S.; Baker, P.; Ballardin, G.; Ballmer, S.; Barker, D.; Barnum, S.; Barone, F.; Barr, B.; Barriga, P.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Bauchrowitz, J.; Bauer, Th S.; Behnke, B.; Beker, M. G.; Belczynski, K.; Benacquista, M.; Bertolini, A.; Betzwieser, J.; Beveridge, N.; Beyersdorf, P. T.; Bigotta, S.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birindelli, S.; Biswas, R.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Blom, M.; Blomberg, A.; Boccara, C.; Bock, O.; Bodiya, T. P.; Bondarescu, R.; Bondu, F.; Bonelli, L.; Bork, R.; Born, M.; Bose, S.; Bosi, L.; Boyle, M.; Braccini, S.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Brau, J. E.; Breyer, J.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Budzyński, R.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burguet-Castell, J.; Burmeister, O.; Buskulic, D.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calloni, E.; Camp, J. B.; Campagna, E.; Campsie, P.; Cannizzo, J.; Cannon, K. C.; Canuel, B.; Cao, J.; Capano, C.; Carbognani, F.; Caride, S.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chalermsongsak, T.; Chalkley, E.; Charlton, P.; Chassande Mottin, E.; Chelkowski, S.; Chen, Y.; Chincarini, A.; Christensen, N.; Chua, S. S. Y.; Chung, C. T. Y.; Clark, D.; Clark, J.; Clayton, J. H.; Cleva, F.; Coccia, E.; Colacino, C. N.; Colas, J.; Colla, A.; Colombini, M.; Conte, R.; Cook, D.; Corbitt, T. R.; Corda, C.; Cornish, N.; Corsi, A.; Costa, C. A.; Coulon, J. P.; Coward, D.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Culter, R. M.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dahl, K.; Danilishin, S. L.; Dannenberg, R.; D'Antonio, S.; Danzmann, K.; Dari, A.; Das, K.; Dattilo, V.; Daudert, B.; Davier, M.; Davies, G.; Davis, A.; Daw, E. J.; Day, R.; Dayanga, T.; De Rosa, R.; DeBra, D.; Degallaix, J.; del Prete, M.; Dergachev, V.; DeRosa, R.; DeSalvo, R.; Devanka, P.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Emilio, M. Di Paolo; Di Virgilio, A.; Díaz, M.; Dietz, A.; Donovan, F.; Dooley, K. L.; Doomes, E. E.; Dorsher, S.; Douglas, E. S. D.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Dueck, J.; Dumas, J. C.; Eberle, T.; Edgar, M.; Edwards, M.; Effler, A.; Ehrens, P.; Engel, R.; Etzel, T.; Evans, M.; Evans, T.; Fafone, V.; Fairhurst, S.; Fan, Y.; Farr, B. F.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Ferrante, I.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Flaminio, R.; Flanigan, M.; Flasch, K.; Foley, S.; Forrest, C.; Forsi, E.; Fotopoulos, N.; Fournier, J. D.; Franc, J.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Friedrich, D.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gammaitoni, L.; Garofoli, J. A.; Garufi, F.; Gemme, G.; Genin, E.; Gennai, A.; Gholami, I.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gill, C.; Goetz, E.; Goggin, L. M.; González, G.; Gorodetsky, M. L.; Goßler, S.; Gouaty, R.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Greverie, C.; Grosso, R.; Grote, H.; Grunewald, S.; Guidi, G. M.; Gustafson, E. K.; Gustafson, R.; Hage, B.; Hall, P.; Hallam, J. M.; Hammer, D.; Hammond, G.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Haughian, K.; Hayama, K.; Heefner, J.; Heitmann, H.; Hello, P.; Heng, I. S.; Heptonstall, A.; Hewitson, M.; Hild, S.; Hirose, E.; Hoak, D.; Hodge, K. A.; Holt, K.; Hosken, D. J.; Hough, J.; Howell, E.; Hoyland, D.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Jaranowski, P.; Johnson, W. W.; Jones, D. I.; Jones, G.; Jones, R.; Ju, L.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kanner, J.; Katsavounidis, E.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Kells, W.; Keppel, D. G.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, C.; Kim, H.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kondrashov, V.; Kopparapu, R.; Koranda, S.; Kowalska, I.; Kozak, D.; Krause, T.; Kringel, V.; Krishnamurthy, S.; Krishnan, B.; Królak, A.; Kuehn, G.; Kullman, J.; Kumar, R.; Kwee, P.; Landry, M.; Lang, M.; Lantz, B.; Lastzka, N.; Lazzarini, A.; Leaci, P.; Leong, J.; Leonor, I.; Leroy, N.; Letendre, N.; Li, J.; Li, T. G. F.; Lin, H.; Lindquist, P. E.; Lockerbie, N. A.; Lodhia, D.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lu, P.; Luan, J.; Lubiński, M.; Lucianetti, A.; Lück, H.; Lundgren, A.; Machenschalk, B.; MacInnis, M.; Mackowski, J. M.; Mageswaran, M.; Mailand, K.; Majorana, E.; Mak, C.; Man, N.; Mandel, I.; Mandic, V.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Masserot, A.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIvor, G.; McKechan, D. J. A.; Meadors, G.; Mehmet, M.; Meier, T.; Melatos, A.; Melissinos, A. C.; Mendell, G.; Menéndez, D. F.; Mercer, R. A.; Merill, L.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Milano, L.; Miller, J.; Minenkov, Y.; Mino, Y.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Mohan, M.; Mohanty, S. D.; Mohapatra, S. R. P.; Moraru, D.; Moreau, J.; Moreno, G.; Morgado, N.; Morgia, A.; Morioka, T.; Mors, K.; Mosca, S.; Moscatelli, V.; Mossavi, K.; Mours, B.; MowLowry, C.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Müller-Ebhardt, H.; Munch, J.; Murray, P. G.; Nash, T.; Nawrodt, R.; Nelson, J.; Neri, I.; Newton, G.; Nishizawa, A.; Nocera, F.; Nolting, D.; Ochsner, E.; O'Dell, J.; Ogin, G. H.; Oldenburg, R. G.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Page, A.; Pagliaroli, G.; Palladino, L.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Papa, M. A.; Pardi, S.; Pareja, M.; Parisi, M.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patel, P.; Pedraza, M.; Pekowsky, L.; Penn, S.; Peralta, C.; Perreca, A.; Persichetti, G.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pietka, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Poggiani, R.; Postiglione, F.; Prato, M.; Predoi, V.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Raab, F. J.; Rabaste, O.; Rabeling, D. S.; Radke, T.; Radkins, H.; Raffai, P.; Rakhmanov, M.; Rankins, B.; Rapagnani, P.; Raymond, V.; Re, V.; Reed, C. M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Roberts, P.; Robertson, N. A.; Robinet, F.; Robinson, C.; Robinson, E. L.; Rocchi, A.; Roddy, S.; Röver, C.; Rogstad, S.; Rolland, L.; Rollins, J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sakata, S.; Sakosky, M.; Salemi, F.; Sammut, L.; Sancho de la Jordana, L.; Sandberg, V.; Sannibale, V.; Santamaría, L.; Santostasi, G.; Saraf, S.; Sassolas, B.; Sathyaprakash, B. S.; Sato, S.; Satterthwaite, M.; Saulson, P. R.; Savage, R.; Schilling, R.; Schnabel, R.; Schofield, R.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Searle, A. C.; Seifert, F.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sergeev, A.; Shaddock, D. A.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sibley, A.; Siemens, X.; Sigg, D.; Singer, A.; Sintes, A. M.; Skelton, G.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, N. D.; Somiya, K.; Sorazu, B.; Speirits, F. C.; Stein, A. J.; Stein, L. C.; Steinlechner, S.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S.; Stroeer, A.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sung, M.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, J. R.; Taylor, R.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Thüring, A.; Titsler, C.; Tokmakov, K. V.; Toncelli, A.; Tonelli, M.; Torres, C.; Torrie, C. I.; Tournefier, E.; Travasso, F.; Traylor, G.; Trias, M.; Trummer, J.; Tseng, K.; Ugolini, D.; Urbanek, K.; Vahlbruch, H.; Vaishnav, B.; Vajente, G.; Vallisneri, M.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van der Sluys, M. V.; van Veggel, A. A.; Vass, S.; Vaulin, R.; Vavoulidis, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Veltkamp, C.; Verkindt, D.; Vetrano, F.; Viceré, A.; Villar, A.; Vinet, J.-Y.; Vocca, H.; Vorvick, C.; Vyachanin, S. P.; Waldman, S. J.; Wallace, L.; Wanner, A.; Ward, R. L.; Was, M.; Wei, P.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wen, S.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wilkinson, C.; Willems, P. A.; Williams, L.; Willke, B.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Woan, G.; Wooley, R.; Worden, J.; Yakushin, I.; Yamamoto, H.; Yamamoto, K.; Yeaton-Massey, D.; Yoshida, S.; Yu, P. P.; Yvert, M.; Zanolin, M.; Zhang, L.; Zhang, Z.; Zhao, C.; Zotov, N.; Zucker, M. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration
2010-09-01
We present an up-to-date, comprehensive summary of the rates for all types of compact binary coalescence sources detectable by the initial and advanced versions of the ground-based gravitational-wave detectors LIGO and Virgo. Astrophysical estimates for compact-binary coalescence rates depend on a number of assumptions and unknown model parameters and are still uncertain. The most confident among these estimates are the rate predictions for coalescing binary neutron stars which are based on extrapolations from observed binary pulsars in our galaxy. These yield a likely coalescence rate of 100 Myr-1 per Milky Way Equivalent Galaxy (MWEG), although the rate could plausibly range from 1 Myr-1 MWEG-1 to 1000 Myr-1 MWEG-1 (Kalogera et al 2004 Astrophys. J. 601 L179; Kalogera et al 2004 Astrophys. J. 614 L137 (erratum)). We convert coalescence rates into detection rates based on data from the LIGO S5 and Virgo VSR2 science runs and projected sensitivities for our advanced detectors. Using the detector sensitivities derived from these data, we find a likely detection rate of 0.02 per year for Initial LIGO-Virgo interferometers, with a plausible range between 2 × 10-4 and 0.2 per year. The likely binary neutron-star detection rate for the Advanced LIGO-Virgo network increases to 40 events per year, with a range between 0.4 and 400 per year.
Puttarajappa, Chethan; Wijkstrom, Martin; Ganoza, Armando; Lopez, Roberto; Tevar, Amit
2018-01-01
Background Recent studies have reported a significant decrease in wound problems and hospital stay in obese patients undergoing renal transplantation by robotic-assisted minimally invasive techniques with no difference in graft function. Objective Due to the lack of cost-benefit studies on the use of robotic-assisted renal transplantation versus open surgical procedure, the primary aim of our study is to develop a Markov model to analyze the cost-benefit of robotic surgery versus open traditional surgery in obese patients in need of a renal transplant. Methods Electronic searches will be conducted to identify studies comparing open renal transplantation versus robotic-assisted renal transplantation. Costs associated with the two surgical techniques will incorporate the expenses of the resources used for the operations. A decision analysis model will be developed to simulate a randomized controlled trial comparing three interventional arms: (1) continuation of renal replacement therapy for patients who are considered non-suitable candidates for renal transplantation due to obesity, (2) transplant recipients undergoing open transplant surgery, and (3) transplant patients undergoing robotic-assisted renal transplantation. TreeAge Pro 2017 R1 TreeAge Software, Williamstown, MA, USA) will be used to create a Markov model and microsimulation will be used to compare costs and benefits for the two competing surgical interventions. Results The model will simulate a randomized controlled trial of adult obese patients affected by end-stage renal disease undergoing renal transplantation. The absorbing state of the model will be patients' death from any cause. By choosing death as the absorbing state, we will be able simulate the population of renal transplant recipients from the day of their randomization to transplant surgery or continuation on renal replacement therapy to their death and perform sensitivity analysis around patients' age at the time of randomization to determine if age is a critical variable for cost-benefit analysis or cost-effectiveness analysis comparing renal replacement therapy, robotic-assisted surgery or open renal transplant surgery. After running the model, one of the three competing strategies will result as the most cost-beneficial or cost-effective under common circumstances. To assess the robustness of the results of the model, a multivariable probabilistic sensitivity analysis will be performed by modifying the mean values and confidence intervals of key parameters with the main intent of assessing if the winning strategy is sensitive to rigorous and plausible variations of those values. Conclusions After running the model, one of the three competing strategies will result as the most cost-beneficial or cost-effective under common circumstances. To assess the robustness of the results of the model, a multivariable probabilistic sensitivity analysis will be performed by modifying the mean values and confidence intervals of key parameters with the main intent of assessing if the winning strategy is sensitive to rigorous and plausible variations of those values. PMID:29519780
Of paradox and plausibility: the dynamic of change in medical law.
Harrington, John
2014-01-01
This article develops a model of change in medical law. Drawing on systems theory, it argues that medical law participates in a dynamic of 'deparadoxification' and 'reparadoxification' whereby the underlying contingency of the law is variously concealed through plausible argumentation, or revealed by critical challenge. Medical law is, thus, thoroughly rhetorical. An examination of the development of the law on abortion and on the sterilization of incompetent adults shows that plausibility is achieved through the deployment of substantive common sense and formal stylistic devices. It is undermined where these elements are shown to be arbitrary and constructed. In conclusion, it is argued that the politics of medical law are constituted by this antagonistic process of establishing and challenging provisionally stable normative regimes. © The Author [2014]. Published by Oxford University Press; all rights reserved. For Permissions, please email: journals.permissions@oup.com.
Utilization of Prosodic Information in Syntactic Ambiguity Resolution
2010-01-01
Two self paced listening experiments examined the role of prosodic phrasing in syntactic ambiguity resolution. In Experiment 1, the stimuli consisted of early closure sentences (e.g., “While the parents watched, the child sang a song.”) containing transitive-biased subordinate verbs paired with plausible direct objects or intransitive-biased subordinate verbs paired with implausible direct objects. Experiment 2 also contained early closure sentences with transitively and intransitive-biased subordinate verbs, but the subordinate verbs were always followed by plausible direct objects. In both experiments, there were two prosodic conditions. In the subject-biased prosodic condition, an intonational phrase boundary marked the clausal boundary following the subordinate verb. In the object-biased prosodic condition, the clause boundary was unmarked. The results indicate that lexical and prosodic cues interact at the subordinate verb and plausibility further affects processing at the ambiguous noun. Results are discussed with respect to models of the role of prosody in sentence comprehension. PMID:20033849
Assessing Land Management Change Effects on Forest Carbon and Emissions Under Changing Climate
NASA Astrophysics Data System (ADS)
Law, B. E.
2014-12-01
There has been limited focus on fine-scale land management change effects on forest carbon under future environmental conditions (climate, nitrogen deposition, increased atmospheric CO2). Forest management decisions are often made at the landscape to regional levels before analyses have been conducted to determine the potential outcomes and effectiveness of such actions. Scientists need to evaluate plausible land management actions in a timely manner to help shape policy and strategic land management. Issues of interest include species-level adaptation to climate, resilience and vulnerability to mortality within forested landscapes and regions. Efforts are underway to improve land system model simulation of future mortality related to climate, and to develop and evaluate plausible land management options that could help mitigate or avoid future die-offs. Vulnerability to drought-related mortality varies among species and with tree size or age. Predictors of species ability to survive in specific environments are still not resolved. A challenge is limited observations for fine-scale (e.g. 4 km2) modeling, particularly physiological parameters. Uncertainties are primarily associated with future land management and policy decisions. They include the interface with economic factors and with other ecosystem services (biodiversity, water availability, wildlife habitat). The outcomes of future management scenarios should be compared with business-as-usual management under the same environmental conditions to determine the effects of management changes on forest carbon and net emissions to the atmosphere. For example, in the western U.S., land system modeling and life cycle assessment of several management options to reduce impacts of fire reduced long-term forest carbon gain and increased carbon emissions compared with business-as-usual management under future environmental conditions. The enhanced net carbon uptake with climate and reduced fire emissions after thinning did not compensate for the increased wood removals over 90 years, leading to reduced net biome production. Analysis of land management change scenarios at fine scales is needed, and should consider other ecological values in addition to carbon.
Theories and models on the biological of cells in space
NASA Technical Reports Server (NTRS)
Todd, P.; Klaus, D. M.
1996-01-01
A wide variety of observations on cells in space, admittedly made under constraining and unnatural conditions in may cases, have led to experimental results that were surprising or unexpected. Reproducibility, freedom from artifacts, and plausibility must be considered in all cases, even when results are not surprising. The papers in symposium on 'Theories and Models on the Biology of Cells in Space' are dedicated to the subject of the plausibility of cellular responses to gravity -- inertial accelerations between 0 and 9.8 m/sq s and higher. The mechanical phenomena inside the cell, the gravitactic locomotion of single eukaryotic and prokaryotic cells, and the effects of inertial unloading on cellular physiology are addressed in theoretical and experimental studies.
NASA Astrophysics Data System (ADS)
Kirshen, P. H.; Knott, J. F.; Ray, P.; Elshaer, M.; Daniel, J.; Jacobs, J. M.
2016-12-01
Transportation climate change vulnerability and adaptation studies have primarily focused on surface-water flooding from sea-level rise (SLR); little attention has been given to the effects of climate change and SLR on groundwater and subsequent impacts on the unbound foundation layers of coastal-road infrastructure. The magnitude of service-life reduction depends on the height of the groundwater in the unbound pavement materials, the pavement structure itself, and the loading. Using a steady-state groundwater model, and a multi-layer elastic pavement evaluation model, the strain changes in the layers can be determined as a function of parameter values and the strain changes translated into failure as measured by number of loading cycles to failure. For a section of a major coastal road in New Hampshire, future changes in sea-level, precipitation, temperature, land use, and groundwater pumping are characterized by deep uncertainty. Parameters that describe the groundwater system such as hydraulic conductivity can be probabilistically described while road characteristics are assumed to be deterministic. To understand the vulnerability of this road section, a bottom-up planning approach was employed over time where the combinations of parameter values that cause failure were determined and their plausibility of their occurring was analyzed. To design a robust adaptation strategy that will function reasonably well in the present and the future given the large number of uncertain parameter values, performance of adaptation options were investigated. Adaptation strategies that were considered include raising the road, load restrictions, increasing pavement layer thicknesses, replacing moisture-sensitive materials with materials that are not moisture sensitive, improving drainage systems, and treatment of the underlying materials.
The pure rotational spectrum of CaNC
NASA Astrophysics Data System (ADS)
Scurlock, C. T.; Steimle, T. C.; Suenram, R. D.; Lovas, F. J.
1994-03-01
The pure rotational spectrum of calcium isocyanide, CaNC, in its (0,0,0) X 2Σ+ vibronic state was measured using a combination of Fourier transform microwave (FTMW) and pump/probe microwave-optical double resonance (PPMODR) spectroscopy. Gaseous CaNC was generated using a laser ablation/supersonic expansion source. The determined spectroscopic parameters are (in MHz), B=4048.754 332 (29); γ=18.055 06 (23); bF=12.481 49 (93); c=2.0735 (14); and eQq0=-2.6974 (11). The hyperfine parameters are qualitatively interpreted in terms of a plausible molecular orbital descriptions and a comparison with the alkaline earth monohalides and the alkali monocyanides is given.
NASA Astrophysics Data System (ADS)
Moresi, L.; May, D.; Peachey, T.; Enticott, C.; Abramson, D.; Robinson, T.
2004-12-01
Can you teach intuition ? Obviously we think that this is possible (though it's still just a hunch). People undoubtedly develop intuition for non-linear systems through painstaking repetition of complex tasks until they have sufficient feedback to begin to "see" the emergent behaviour. The better the exploration of the system can be exposed, the quicker the potential for developing an intuitive understanding. We have spent some time considering how to incorporate the intuitive knowledge of field geologists into mechanical modeling of geological processes. Our solution has been to allow expert geologist to steer (via a GUI) a genetic algorithm inversion of a mechanical forward model towards "structures" or patterns which are plausible in nature. The expert knowledge is then captured by analysis of the individual model parameters which are constrained by the steering (and by analysis of those which are unconstrained). The same system can also be used in reverse to expose the influence of individual parameters to the non-expert who is trying to learn just what does make a good match between model and observation. The ``distance'' between models preferred by experts, and those by an individual can be shown graphically to provide feedback. The examples we choose are from numerical models of extensional basins. We will first try to give each person some background information on the scientific problem from the poster and then we will let them loose on the numerical modeling tools with specific tasks to achieve. This will be an experiment in progress - we will later analyse how people use the GUI and whether there is really any significant difference between so-called experts and self-styled novices.
Development of an in silico stochastic 4D model of tumor growth with angiogenesis.
Forster, Jake C; Douglass, Michael J J; Harriss-Phillips, Wendy M; Bezak, Eva
2017-04-01
A stochastic computer model of tumour growth with spatial and temporal components that includes tumour angiogenesis was developed. In the current work it was used to simulate head and neck tumour growth. The model also provides the foundation for a 4D cellular radiotherapy simulation tool. The model, developed in Matlab, contains cell positions randomised in 3D space without overlap. Blood vessels are represented by strings of blood vessel units which branch outwards to achieve the desired tumour relative vascular volume. Hypoxic cells have an increased cell cycle time and become quiescent at oxygen tensions less than 1 mmHg. Necrotic cells are resorbed. A hierarchy of stem cells, transit cells and differentiated cells is considered along with differentiated cell loss. Model parameters include the relative vascular volume (2-10%), blood oxygenation (20-100 mmHg), distance from vessels to the onset of necrosis (80-300 μm) and probability for stem cells to undergo symmetric division (2%). Simulations were performed to observe the effects of hypoxia on tumour growth rate for head and neck cancers. Simulations were run on a supercomputer with eligible parts running in parallel on 12 cores. Using biologically plausible model parameters for head and neck cancers, the tumour volume doubling time varied from 45 ± 5 days (n = 3) for well oxygenated tumours to 87 ± 5 days (n = 3) for severely hypoxic tumours. The main achievements of the current model were randomised cell positions and the connected vasculature structure between the cells. These developments will also be beneficial when irradiating the simulated tumours using Monte Carlo track structure methods. © 2017 American Association of Physicists in Medicine.
Uncertainty analysis of least-cost modeling for designing wildlife linkages.
Beier, Paul; Majka, Daniel R; Newell, Shawn L
2009-12-01
Least-cost models for focal species are widely used to design wildlife corridors. To evaluate the least-cost modeling approach used to develop 15 linkage designs in southern California, USA, we assessed robustness of the largest and least constrained linkage. Species experts parameterized models for eight species with weights for four habitat factors (land cover, topographic position, elevation, road density) and resistance values for each class within a factor (e.g., each class of land cover). Each model produced a proposed corridor for that species. We examined the extent to which uncertainty in factor weights and class resistance values affected two key conservation-relevant outputs, namely, the location and modeled resistance to movement of each proposed corridor. To do so, we compared the proposed corridor to 13 alternative corridors created with parameter sets that spanned the plausible ranges of biological uncertainty in these parameters. Models for five species were highly robust (mean overlap 88%, little or no increase in resistance). Although the proposed corridors for the other three focal species overlapped as little as 0% (mean 58%) of the alternative corridors, resistance in the proposed corridors for these three species was rarely higher than resistance in the alternative corridors (mean difference was 0.025 on a scale of 1 10; worst difference was 0.39). As long as the model had the correct rank order of resistance values and factor weights, our results suggest that the predicted corridor is robust to uncertainty. The three carnivore focal species, alone or in combination, were not effective umbrellas for the other focal species. The carnivore corridors failed to overlap the predicted corridors of most other focal species and provided relatively high resistance for the other focal species (mean increase of 2.7 resistance units). Least-cost modelers should conduct uncertainty analysis so that decision-makers can appreciate the potential impact of model uncertainty on conservation decisions. Our approach to uncertainty analysis (which can be called a worst-case scenario approach) is appropriate for complex models in which distribution of the input parameters cannot be specified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, Jason W.; Linscott, Ethan; Shporer, Avi, E-mail: jwbarnes@uidaho.edu
We model the asymmetry of the KOI-13.01 transit lightcurve assuming a gravity-darkened rapidly rotating host star in order to constrain the system's spin-orbit alignment and transit parameters. We find that our model can reproduce the Kepler lightcurve for KOI-13.01 with a sky-projected alignment of {lambda} = 23 Degree-Sign {+-} 4 Degree-Sign and with the star's north pole tilted away from the observer by 48 Degree-Sign {+-} 4 Degree-Sign (assuming M{sub *} = 2.05 M{sub Sun }). With both these determinations, we calculate that the net misalignment between this planet's orbit normal and its star's rotational pole is 56 Degree-Sign {+-}more » 4 Degree-Sign . Degeneracies in our geometric interpretation also allow a retrograde spin-orbit angle of 124 Degree-Sign {+-} 4 Degree-Sign . This is the first spin-orbit measurement to come from gravity darkening and is one of only a few measurements of the full (not just the sky-projected) spin-orbit misalignment of an extrasolar planet. We also measure accurate transit parameters incorporating stellar oblateness and gravity darkening: R{sub *} 1.756 {+-} 0.014 R{sub Sun }, R{sub p} = 1.445 {+-} 0.016 R{sub Jup}, and i = 85.{sup 0}9 {+-} 0.{sup 0}4. The new lower planetary radius falls within the planetary mass regime for plausible interior models for the transiting body. A simple initial calculation shows that KOI-13.01's circular orbit is apparently inconsistent with the Kozai mechanism having driven its spin-orbit misalignment; planet-planet scattering and stellar spin migration remain viable mechanisms. Future Kepler data will improve the precision of the KOI-13.01 transit lightcurve, allowing more precise determination of transit parameters and the opportunity to use the Photometric Rossiter-McLaughlin effect to resolve the prograde/retrograde orbit determination degeneracy.« less
ERIC Educational Resources Information Center
Bacharach, Samuel; Bamberger, Peter
1992-01-01
Survey data from 215 nurses (10 male) and 430 civil engineers (10 female) supported the plausibility of occupation-specific models (positing direct paths between role stressors, antecedents, and consequences) compared to generic models. A weakness of generic models is the tendency to ignore differences in occupational structure and culture. (SK)
Further Studies into Synthetic Image Generation using CameoSim
2011-08-01
preparation of the validation effort a study of BRDF models has been completed, which includes the physical plausibility of models , how measured data...the visible to shortwave infrared. In preparation of the validation effort a study of BRDF models has been completed, which includes the physical...Example..................................................................................................................... 17 4. MODELLING BRDFS
Rohrmeier, Martin A; Cross, Ian
2014-07-01
Humans rapidly learn complex structures in various domains. Findings of above-chance performance of some untrained control groups in artificial grammar learning studies raise questions about the extent to which learning can occur in an untrained, unsupervised testing situation with both correct and incorrect structures. The plausibility of unsupervised online-learning effects was modelled with n-gram, chunking and simple recurrent network models. A novel evaluation framework was applied, which alternates forced binary grammaticality judgments and subsequent learning of the same stimulus. Our results indicate a strong online learning effect for n-gram and chunking models and a weaker effect for simple recurrent network models. Such findings suggest that online learning is a plausible effect of statistical chunk learning that is possible when ungrammatical sequences contain a large proportion of grammatical chunks. Such common effects of continuous statistical learning may underlie statistical and implicit learning paradigms and raise implications for study design and testing methodologies. Copyright © 2014 Elsevier Inc. All rights reserved.
van den Berg, Ronald; Roerdink, Jos B. T. M.; Cornelissen, Frans W.
2010-01-01
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called “crowding”. Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, “compulsory averaging”, and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality. PMID:20098499
Spatio-temporal Model of Xenobiotic Distribution and Metabolism in an in Silico Mouse Liver Lobule
NASA Astrophysics Data System (ADS)
Fu, Xiao; Sluka, James; Clendenon, Sherry; Glazier, James; Ryan, Jennifer; Dunn, Kenneth; Wang, Zemin; Klaunig, James
Our study aims to construct a structurally plausible in silico model of a mouse liver lobule to simulate the transport of xenobiotics and the production of their metabolites. We use a physiologically-based model to calculate blood-flow rates in a network of mouse liver sinusoids and simulate transport, uptake and biotransformation of xenobiotics within the in silico lobule. Using our base model, we then explore the effects of variations of compound-specific (diffusion, transport and metabolism) and compound-independent (temporal alteration of blood flow pattern) parameters, and examine their influence on the distribution of xenobiotics and metabolites. Our simulations show that the transport mechanism (diffusive and transporter-mediated) of xenobiotics and blood flow both impact the regional distribution of xenobiotics in a mouse hepatic lobule. Furthermore, differential expression of metabolic enzymes along each sinusoid's portal to central axis, together with differential cellular availability of xenobiotics, induce non-uniform production of metabolites. Thus, the heterogeneity of the biochemical and biophysical properties of xenobiotics, along with the complexity of blood flow, result in different exposures to xenobiotics for hepatocytes at different lobular locations. We acknowledge support from National Institute of Health GM 077138 and GM 111243.
Chemotherapy-induced pulmonary hypertension: role of alkylating agents.
Ranchoux, Benoît; Günther, Sven; Quarck, Rozenn; Chaumais, Marie-Camille; Dorfmüller, Peter; Antigny, Fabrice; Dumas, Sébastien J; Raymond, Nicolas; Lau, Edmund; Savale, Laurent; Jaïs, Xavier; Sitbon, Olivier; Simonneau, Gérald; Stenmark, Kurt; Cohen-Kaminsky, Sylvia; Humbert, Marc; Montani, David; Perros, Frédéric
2015-02-01
Pulmonary veno-occlusive disease (PVOD) is an uncommon form of pulmonary hypertension (PH) characterized by progressive obstruction of small pulmonary veins and a dismal prognosis. Limited case series have reported a possible association between different chemotherapeutic agents and PVOD. We evaluated the relationship between chemotherapeutic agents and PVOD. Cases of chemotherapy-induced PVOD from the French PH network and literature were reviewed. Consequences of chemotherapy exposure on the pulmonary vasculature and hemodynamics were investigated in three different animal models (mouse, rat, and rabbit). Thirty-seven cases of chemotherapy-associated PVOD were identified in the French PH network and systematic literature analysis. Exposure to alkylating agents was observed in 83.8% of cases, mostly represented by cyclophosphamide (43.2%). In three different animal models, cyclophosphamide was able to induce PH on the basis of hemodynamic, morphological, and biological parameters. In these models, histopathological assessment confirmed significant pulmonary venous involvement highly suggestive of PVOD. Together, clinical data and animal models demonstrated a plausible cause-effect relationship between alkylating agents and PVOD. Clinicians should be aware of this uncommon, but severe, pulmonary vascular complication of alkylating agents. Copyright © 2015 American Society for Investigative Pathology. Published by Elsevier Inc. All rights reserved.
Constraining MHD Disk-Winds with X-ray Absorbers
NASA Astrophysics Data System (ADS)
Fukumura, Keigo; Tombesi, F.; Shrader, C. R.; Kazanas, D.; Contopoulos, J.; Behar, E.
2014-01-01
From the state-of-the-art spectroscopic observations of active galactic nuclei (AGNs) the robust features of absorption lines (e.g. most notably by H/He-like ions), called warm absorbers (WAs), have been often detected in soft X-rays (< 2 keV). While the identified WAs are often mildly blueshifted to yield line-of-sight velocities up to ~100-3,000 km/sec in typical X-ray-bright Seyfert 1 AGNs, a fraction of Seyfert galaxies such as PG 1211+143 exhibits even faster absorbers (v/ 0.1-0.2) called ultra-fast outflows (UFOs) whose physical condition is much more extreme compared with the WAs. Motivated by these recent X-ray data we show that the magnetically- driven accretion-disk wind model is a plausible scenario to explain the characteristic property of these X-ray absorbers. As a preliminary case study we demonstrate that the wind model parameters (e.g. viewing angle and wind density) can be constrained by data from PG 1211+143 at a statistically significant level with chi-squared spectral analysis. Our wind models can thus be implemented into the standard analysis package, XSPEC, as a table spectrum model for general analysis of X-ray absorbers.
Sulfidic Anion Concentrations on Early Earth for Surficial Origins-of-Life Chemistry.
Ranjan, Sukrit; Todd, Zoe R; Sutherland, John D; Sasselov, Dimitar D
2018-04-08
A key challenge in origin-of-life studies is understanding the environmental conditions on early Earth under which abiogenesis occurred. While some constraints do exist (e.g., zircon evidence for surface liquid water), relatively few constraints exist on the abundances of trace chemical species, which are relevant to assessing the plausibility and guiding the development of postulated prebiotic chemical pathways which depend on these species. In this work, we combine literature photochemistry models with simple equilibrium chemistry calculations to place constraints on the plausible range of concentrations of sulfidic anions (HS - , HSO 3 - , SO 3 2- ) available in surficial aquatic reservoirs on early Earth due to outgassing of SO 2 and H 2 S and their dissolution into small shallow surface water reservoirs like lakes. We find that this mechanism could have supplied prebiotically relevant levels of SO 2 -derived anions, but not H 2 S-derived anions. Radiative transfer modeling suggests UV light would have remained abundant on the planet surface for all but the largest volcanic explosions. We apply our results to the case study of the proposed prebiotic reaction network of Patel et al. ( 2015 ) and discuss the implications for improving its prebiotic plausibility. In general, epochs of moderately high volcanism could have been especially conducive to cyanosulfidic prebiotic chemistry. Our work can be similarly applied to assess and improve the prebiotic plausibility of other postulated surficial prebiotic chemistries that are sensitive to sulfidic anions, and our methods adapted to study other atmospherically derived trace species. Key Words: Early Earth-Origin of life-Prebiotic chemistry-Volcanism-UV radiation-Planetary environments. Astrobiology 18, xxx-xxx.
Analysis of trend changes in Northern African palaeo-climate by using Bayesian inference
NASA Astrophysics Data System (ADS)
Schütz, Nadine; Trauth, Martin H.; Holschneider, Matthias
2010-05-01
Climate variability of Northern Africa is of high interest due to climate-evolutionary linkages under study. The reconstruction of the palaeo-climate over long time scales, including the expected linkages (> 3 Ma), is mainly accessible by proxy data from deep sea drilling cores. By concentrating on published data sets, we try to decipher rhythms and trends to detect correlations between different proxy time series by advanced mathematical methods. Our preliminary data is dust concentration, as an indicator for climatic changes such as humidity, from the ODP sites 659, 721 and 967 situated around Northern Africa. Our interest is in challenging the available time series with advanced statistical methods to detect significant trend changes and to compare different model assumptions. For that purpose, we want to avoid the rescaling of the time axis to obtain equidistant time steps for filtering methods. Additionally we demand an plausible description of the errors for the estimated parameters, in terms of confidence intervals. Finally, depending on what model we restrict on, we also want an insight in the parameter structure of the assumed models. To gain this information, we focus on Bayesian inference by formulating the problem as a linear mixed model, so that the expectation and deviation are of linear structure. By using the Bayesian method we can formulate the posteriori density as a function of the model parameters and calculate this probability density in the parameter space. Depending which parameters are of interest, we analytically and numerically marginalize the posteriori with respect to the remaining parameters of less interest. We apply a simple linear mixed model to calculate the posteriori densities of the ODP sites 659 and 721 concerning the last 5 Ma at maximum. From preliminary calculations on these data sets, we can confirm results gained by the method of breakfit regression combined with block bootstrapping ([1]). We obtain a significant change point around (1.63 - 1.82) Ma, which correlates with a global climate transition due to the establishment of the Walker circulation ([2]). Furthermore we detect another significant change point around (2.7 - 3.2) Ma, which correlates with the end of the Pliocene warm period (permanent El Niño-like conditions) and the onset of a colder global climate ([3], [4]). The discussion on the algorithm, the results of calculated confidence intervals, the available information about the applied model in the parameter space and the comparison of multiple change point models will be presented. [1] Trauth, M.H., et al., Quaternary Science Reviews, 28, 2009 [2] Wara, M.W., et al., Science, Vol. 309, 2005 [3] Chiang, J.C.H., Annual Review of Earth and Planetary Sciences, Vol. 37, 2009 [4] deMenocal, P., Earth and Planetary Science Letters, 220, 2004
The Variance Reaction Time Model
ERIC Educational Resources Information Center
Sikstrom, Sverker
2004-01-01
The variance reaction time model (VRTM) is proposed to account for various recognition data on reaction time, the mirror effect, receiver-operating-characteristic (ROC) curves, etc. The model is based on simple and plausible assumptions within a neural network: VRTM is a two layer neural network where one layer represents items and one layer…
ERIC Educational Resources Information Center
Cangelosi, Angelo; Riga, Thomas
2006-01-01
The grounding of symbols in computational models of linguistic abilities is one of the fundamental properties of psychologically plausible cognitive models. In this article, we present an embodied model for the grounding of language in action based on epigenetic robots. Epigenetic robotics is one of the new cognitive modeling approaches to…
Nakagawa, Fumiyo; van Sighem, Ard; Thiebaut, Rodolphe; Smith, Colette; Ratmann, Oliver; Cambiano, Valentina; Albert, Jan; Amato-Gauci, Andrew; Bezemer, Daniela; Campbell, Colin; Commenges, Daniel; Donoghoe, Martin; Ford, Deborah; Kouyos, Roger; Lodwick, Rebecca; Lundgren, Jens; Pantazis, Nikos; Pharris, Anastasia; Quinten, Chantal; Thorne, Claire; Touloumi, Giota; Delpech, Valerie; Phillips, Andrew
2016-03-01
It is important not only to collect epidemiologic data on HIV but to also fully utilize such information to understand the epidemic over time and to help inform and monitor the impact of policies and interventions. We describe and apply a novel method to estimate the size and characteristics of HIV-positive populations. The method was applied to data on men who have sex with men living in the UK and to a pseudo dataset to assess performance for different data availability. The individual-based simulation model was calibrated using an approximate Bayesian computation-based approach. In 2013, 48,310 (90% plausibility range: 39,900-45,560) men who have sex with men were estimated to be living with HIV in the UK, of whom 10,400 (6,160-17,350) were undiagnosed. There were an estimated 3,210 (1,730-5,350) infections per year on average between 2010 and 2013. Sixty-two percent of the total HIV-positive population are thought to have viral load <500 copies/ml. In the pseudo-epidemic example, HIV estimates have narrower plausibility ranges and are closer to the true number, the greater the data availability to calibrate the model. We demonstrate that our method can be applied to settings with less data, however plausibility ranges for estimates will be wider to reflect greater uncertainty of the data used to fit the model.
Confronting the Uncertainty in Aerosol Forcing Using Comprehensive Observational Data
NASA Astrophysics Data System (ADS)
Johnson, J. S.; Regayre, L. A.; Yoshioka, M.; Pringle, K.; Sexton, D.; Lee, L.; Carslaw, K. S.
2017-12-01
The effect of aerosols on cloud droplet concentrations and radiative properties is the largest uncertainty in the overall radiative forcing of climate over the industrial period. In this study, we take advantage of a large perturbed parameter ensemble of simulations from the UK Met Office HadGEM-UKCA model (the aerosol component of the UK Earth System Model) to comprehensively sample uncertainty in aerosol forcing. Uncertain aerosol and atmospheric parameters cause substantial aerosol forcing uncertainty in climatically important regions. As the aerosol radiative forcing itself is unobservable, we investigate the potential for observations of aerosol and radiative properties to act as constraints on the large forcing uncertainty. We test how eight different theoretically perfect aerosol and radiation observations can constrain the forcing uncertainty over Europe. We find that the achievable constraint is weak unless many diverse observations are used simultaneously. This is due to the complex relationships between model output responses and the multiple interacting parameter uncertainties: compensating model errors mean there are many ways to produce the same model output (known as model equifinality) which impacts on the achievable constraint. However, using all eight observable quantities together we show that the aerosol forcing uncertainty can potentially be reduced by around 50%. This reduction occurs as we reduce a large sample of model variants (over 1 million) that cover the full parametric uncertainty to around 1% that are observationally plausible.Constraining the forcing uncertainty using real observations is a more complex undertaking, in which we must account for multiple further uncertainties including measurement uncertainties, structural model uncertainties and the model discrepancy from reality. Here, we make a first attempt to determine the true potential constraint on the forcing uncertainty from our model that is achievable using a comprehensive set of real aerosol and radiation observations taken from ground stations, flight campaigns and satellite. This research has been supported by the UK-China Research & Innovation Partnership Fund through the Met Office Climate Science for Service Partnership (CSSP) China as part of the Newton Fund, and by the NERC funded GASSP project.
Reducing uncertainty in Climate Response Time Scale by Bayesian Analysis of the 8.2 ka event
NASA Astrophysics Data System (ADS)
Lorenz, A.; Held, H.; Bauer, E.; Schneider von Deimling, T.
2009-04-01
We analyze the possibility of uncertainty reduction in Climate Response Time Scale by utilizing Greenland ice-core data that contain the 8.2 ka event within a Bayesian model-data intercomparison with the Earth system model of intermediate complexity, CLIMBER-2.3. Within a stochastic version of the model it has been possible to mimic the 8.2 ka event within a plausible experimental setting and with relatively good accuracy considering the timing of the event in comparison to other modeling exercises [1]. The simulation of the centennial cold event is effectively determined by the oceanic cooling rate which depends largely on the ocean diffusivity described by diffusion coefficients of relatively wide uncertainty ranges. The idea now is to discriminate between the different values of diffusivities according to their likelihood to rightly represent the duration of the 8.2 ka event and thus to exploit the paleo data to constrain uncertainty in model parameters in analogue to [2]. Implementing this inverse Bayesian Analysis with this model the technical difficulty arises to establish the related likelihood numerically in addition to the uncertain model parameters: While mainstream uncertainty analyses can assume a quasi-Gaussian shape of likelihood, with weather fluctuating around a long term mean, the 8.2 ka event as a highly nonlinear effect precludes such an a priori assumption. As a result of this study [3] the Bayesian Analysis showed a reduction of uncertainty in vertical ocean diffusivity parameters of factor 2 compared to prior knowledge. This learning effect on the model parameters is propagated to other model outputs of interest; e.g. the inverse ocean heat capacity, which is important for the dominant time scale of climate response to anthropogenic forcing which, in combination with climate sensitivity, strongly influences the climate systems reaction for the near- and medium-term future. 1 References [1] E. Bauer, A. Ganopolski, M. Montoya: Simulation of the cold climate event 8200 years ago by meltwater outburst from lake Agassiz. Paleoceanography 19:PA3014, (2004) [2] T. Schneider von Deimling, H. Held, A. Ganopolski, S. Rahmstorf, Climate sensitivity estimated from ensemble simulations of glacial climates, Climate Dynamics 27, 149-163, DOI 10.1007/s00382-006-0126-8 (2006). [3] A. Lorenz, Diploma Thesis, U Potsdam (2007).
On the formation mechanisms of compact elliptical galaxies
NASA Astrophysics Data System (ADS)
Ferré-Mateu, Anna; Forbes, Duncan A.; Romanowsky, Aaron J.; Janz, Joachim; Dixon, Christopher
2018-01-01
In order to investigate the formation mechanisms of the rare compact elliptical (cE) galaxies, we have compiled a sample of 25 cEs with good SDSS spectra, covering a range of stellar masses, sizes and environments. They have been visually classified according to the interaction with their host, representing different evolutionary stages. We have included clearly disrupted galaxies, galaxies that despite not showing signs of interaction are located close to a massive neighbour (thus are good candidates for a stripping process), and cEs with no host nearby. For the latter, tidal stripping is less likely to have happened and instead they could simply represent the very low-mass, faint end of the ellipticals. We study a set of properties (structural parameters, stellar populations, star formation histories and mass ratios) that can be used to discriminate between an intrinsic or stripped origin. We find that one diagnostic tool alone is inconclusive for the majority of objects. However, if we combine all the tools a clear picture emerges. The most plausible origin, as well as the evolutionary stage and progenitor type, can be then determined. Our results favour the stripping mechanism for those galaxies in groups and clusters that have a plausible host nearby, but favours an intrinsic origin for those rare cEs without a plausible host and that are located in looser environments.
Nutrient control of phytoplankton photosynthesis in the western North Atlantic
NASA Technical Reports Server (NTRS)
Platt, Trevor; Sathyendranath, Shubha; Ulloa, Osvaldo; Harrison, William G.; Hoepffner, Nicolas; Goes, Joaquim
1992-01-01
Results from several years of oceanographic cruises are reported which show that the parameters of the photosynthesis-light curve of the flora of the North Sargasso Sea are remarkably constant in magnitude, except during the spring phytoplankton bloom when their magnitudes are noticeably higher. These results are interpreted as providing direct evidence for nutrient control of photosynthesis in the open ocean. The findings also reinforce the plausibility of using biogeochemical provinces to partition the ocean into manageable units for basin- or global-scale analysis. They show that seasonal changes in critical parameter should not be overlooked if robust carbon budgets are to be constructed, and illustrate the value of attacking the parameters that control the key fluxes, rather than the fluxes themselves, when investigating the ocean carbon cycle.
Projected Impact of Dengue Vaccination in Yucatán, Mexico
Pearson, Carl A. B.; Chao, Dennis L.; Rojas, Diana Patricia; Recchia, Gabriel L.; Gómez-Dantés, Héctor; Halloran, M. Elizabeth; Pulliam, Juliet R. C.; Longini, Ira M.
2016-01-01
Dengue vaccines will soon provide a new tool for reducing dengue disease, but the effectiveness of widespread vaccination campaigns has not yet been determined. We developed an agent-based dengue model representing movement of and transmission dynamics among people and mosquitoes in Yucatán, Mexico, and simulated various vaccine scenarios to evaluate effectiveness under those conditions. This model includes detailed spatial representation of the Yucatán population, including the location and movement of 1.8 million people between 375,000 households and 100,000 workplaces and schools. Where possible, we designed the model to use data sources with international coverage, to simplify re-parameterization for other regions. The simulation and analysis integrate 35 years of mild and severe case data (including dengue serotype when available), results of a seroprevalence survey, satellite imagery, and climatological, census, and economic data. To fit model parameters that are not directly informed by available data, such as disease reporting rates and dengue transmission parameters, we developed a parameter estimation toolkit called AbcSmc, which we have made publicly available. After fitting the simulation model to dengue case data, we forecasted transmission and assessed the relative effectiveness of several vaccination strategies over a 20 year period. Vaccine efficacy is based on phase III trial results for the Sanofi-Pasteur vaccine, Dengvaxia. We consider routine vaccination of 2, 9, or 16 year-olds, with and without a one-time catch-up campaign to age 30. Because the durability of Dengvaxia is not yet established, we consider hypothetical vaccines that confer either durable or waning immunity, and we evaluate the use of booster doses to counter waning. We find that plausible vaccination scenarios with a durable vaccine reduce annual dengue incidence by as much as 80% within five years. However, if vaccine efficacy wanes after administration, we find that there can be years with larger epidemics than would occur without any vaccination, and that vaccine booster doses are necessary to prevent this outcome. PMID:27227883
Projected Impact of Dengue Vaccination in Yucatán, Mexico.
Hladish, Thomas J; Pearson, Carl A B; Chao, Dennis L; Rojas, Diana Patricia; Recchia, Gabriel L; Gómez-Dantés, Héctor; Halloran, M Elizabeth; Pulliam, Juliet R C; Longini, Ira M
2016-05-01
Dengue vaccines will soon provide a new tool for reducing dengue disease, but the effectiveness of widespread vaccination campaigns has not yet been determined. We developed an agent-based dengue model representing movement of and transmission dynamics among people and mosquitoes in Yucatán, Mexico, and simulated various vaccine scenarios to evaluate effectiveness under those conditions. This model includes detailed spatial representation of the Yucatán population, including the location and movement of 1.8 million people between 375,000 households and 100,000 workplaces and schools. Where possible, we designed the model to use data sources with international coverage, to simplify re-parameterization for other regions. The simulation and analysis integrate 35 years of mild and severe case data (including dengue serotype when available), results of a seroprevalence survey, satellite imagery, and climatological, census, and economic data. To fit model parameters that are not directly informed by available data, such as disease reporting rates and dengue transmission parameters, we developed a parameter estimation toolkit called AbcSmc, which we have made publicly available. After fitting the simulation model to dengue case data, we forecasted transmission and assessed the relative effectiveness of several vaccination strategies over a 20 year period. Vaccine efficacy is based on phase III trial results for the Sanofi-Pasteur vaccine, Dengvaxia. We consider routine vaccination of 2, 9, or 16 year-olds, with and without a one-time catch-up campaign to age 30. Because the durability of Dengvaxia is not yet established, we consider hypothetical vaccines that confer either durable or waning immunity, and we evaluate the use of booster doses to counter waning. We find that plausible vaccination scenarios with a durable vaccine reduce annual dengue incidence by as much as 80% within five years. However, if vaccine efficacy wanes after administration, we find that there can be years with larger epidemics than would occur without any vaccination, and that vaccine booster doses are necessary to prevent this outcome.
Jiang, Ping; Chiba, Ryosuke; Takakusaki, Kaoru; Ota, Jun
2016-01-01
The development of a physiologically plausible computational model of a neural controller that can realize a human-like biped stance is important for a large number of potential applications, such as assisting device development and designing robotic control systems. In this paper, we develop a computational model of a neural controller that can maintain a musculoskeletal model in a standing position, while incorporating a 120-ms neurological time delay. Unlike previous studies that have used an inverted pendulum model, a musculoskeletal model with seven joints and 70 muscular-tendon actuators is adopted to represent the human anatomy. Our proposed neural controller is composed of both feed-forward and feedback controls. The feed-forward control corresponds to the constant activation input necessary for the musculoskeletal model to maintain a standing posture. This compensates for gravity and regulates stiffness. The developed neural controller model can replicate two salient features of the human biped stance: (1) physiologically plausible muscle activations for quiet standing; and (2) selection of a low active stiffness for low energy consumption. PMID:27655271
Seismic imaging in hardrock environments: The role of heterogeneity?
NASA Astrophysics Data System (ADS)
Bongajum, Emmanuel; Milkereit, Bernd; Adam, Erick; Meng, Yijian
2012-10-01
We investigate the effect of petrophysical scale parameters and structural dips on wave propagation and imaging in heterogeneous media. Seismic wave propagation effects within the heterogeneous media are studied for different velocity models with scale lengths determined via stochastic analysis of petrophysical logs from the Matagami mine, Quebec, Canada. The elastic modeling study reveals that provided certain conditions of the velocity fluctuations are met, strong local distortions of amplitude and arrival times of propagating waves are observed as the degree of scale length anisotropy in the P-wave velocity increases. The location of these local amplitude anomalies is related to the dips characterizing the fabric of the host rocks. This result is different from the elliptical shape of direct waves often defined by effective anisotropic parameters used for layered media. Although estimates of anisotropic parameters suggest weak anisotropy in the investigated models, these effective anisotropic parameters often used in VTI/TTI do not sufficiently describe the effects of scale length anisotropy in heterogeneous media that show such local amplitude, travel time, and phase distortions in the wavefields. Numerical investigations on the implications for reverse time migration (RTM) routines corroborate that mean P-wave velocity of the host rocks produces reliable imaging results. Based on the RTM results, we postulate the following: weak anisotropy in hardrock environments is a sufficient assumption for processing seismic data; and seismic scattering effects due to velocity heterogeneity with a dip component is not sufficient to cause mislocation errors of target structures as observed in the discrepancy between the location of the strong seismic reflections associated to the Matagami sulfide orebody and its true location. Future work will investigate other factors that may provide plausible explanations for these mislocation problems, with the objective of providing a mitigation strategy for incorporation into the seismic data processing sequence when imaging in hardrock settings.
Wortmann, Franz J; Wortmann, Gabriele; Haake, Hans-Martin; Eisfeld, Wolf
Torsional analysis of single human hairs is especially suited to determine the properties of the cuticle and its changes through cosmetic processing. The two primary parameters, which are obtained by free torsional oscillation using the torsional pendulum method, are storage ( G ') and loss modulus ( G ″). Based on previous work on G ', the current investigation focuses on G ″. The results show an increase of G ″ with a drop of G ' and vice versa , as is expected for a viscoelastic material well below its glass transition. The overall power of G ″ to discriminate between samples is quite low. This is attributed to the systematic decrease of the parameter values with increasing fiber diameter, with a pronounced correlation between G ″ and G '. Analyzing this effect on the basis of a core/shell model for the cortex/cuticle structure of hair by nonlinear regression leads to estimates for the loss moduli of cortex ( G ″ co ) and cuticle ( G ″ cu ). Although the values for G ″ co turn out to be physically not plausible, due to limitations of the applied model, those for G ″ cu are considered as generally realistic against relevant literature values. Significant differences between the loss moduli of the cuticle for the different samples provide insight into changes of the torsional energy loss due to the cosmetic processes and products, contributing toward a consistent view of torsional energy storage and loss, namely, in the cuticle of hair.
An automated approach to magnetic divertor configuration design
NASA Astrophysics Data System (ADS)
Blommaert, M.; Dekeyser, W.; Baelmans, M.; Gauger, N. R.; Reiter, D.
2015-01-01
Automated methods based on optimization can greatly assist computational engineering design in many areas. In this paper an optimization approach to the magnetic design of a nuclear fusion reactor divertor is proposed and applied to a tokamak edge magnetic configuration in a first feasibility study. The approach is based on reduced models for magnetic field and plasma edge, which are integrated with a grid generator into one sensitivity code. The design objective chosen here for demonstrative purposes is to spread the divertor target heat load as much as possible over the entire target area. Constraints on the separatrix position are introduced to eliminate physically irrelevant magnetic field configurations during the optimization cycle. A gradient projection method is used to ensure stable cost function evaluations during optimization. The concept is applied to a configuration with typical Joint European Torus (JET) parameters and it automatically provides plausible configurations with reduced heat load.
NASA Astrophysics Data System (ADS)
Shibata, Masaru; Kiuchi, Kenta
2017-06-01
Employing a simplified version of the Israel-Stewart formalism of general-relativistic shear-viscous hydrodynamics, we explore the evolution of a remnant massive neutron star of binary neutron star merger and pay special attention to the resulting gravitational waveforms. We find that for the plausible values of the so-called viscous alpha parameter of the order 10-2 the degree of the differential rotation in the remnant massive neutron star is significantly reduced in the viscous time scale, ≲5 ms . Associated with this, the degree of nonaxisymmetric deformation is also reduced quickly, and as a consequence, the amplitude of quasiperiodic gravitational waves emitted also decays in the viscous time scale. Our results indicate that for modeling the evolution of the merger remnants of binary neutron stars we would have to take into account magnetohydrodynamics effects, which in nature could provide the viscous effects.
Thermal activation in Co/Sb nanoparticle-multilayer thin films
NASA Astrophysics Data System (ADS)
Madden, Michael R.
Multilayer "Co" /"Sb" thin films created via electron-beam physical vapor deposition are known to exhibit thermally activated dynamics. Scanning tunneling microscopy has indicated that the "Co" forms nanoparticles within an "Sb" matrix during deposition and subsequently forms nanowires by way of NP migration within the interstices of the confining layers. The electrical resistance of these systems decays during this irreversible aging process in a manner well-modeled by an Arrhenius law. Presently, this phenomenon is shown to possess some degree of tunability with respect to "Co" layer thickness tCo as well as deposition temperature Tdep , whereby characteristic timescales increase with either parameter. Furthermore, fluctuation timescales and activation energies seem to decrease and increase respectively with increasing t Co. An easily calibrated, one-time-use, time-temperature switch based on such systems lies within the realm of plausibility. The results presented here can be considered to be part of an ongoing development of the concept.
NASA Technical Reports Server (NTRS)
Katsuda, Satoru; Tsunemi, Hiroshi; Mori, Koji; Uchida, Hiroyuki; Petre, Robert; Yamada, Shinya; Akamatsu, Hiroki; Konami, Saori; Tamagawa, Toru
2012-01-01
We present high-resolution X-ray spectra of cloud-shock interaction regions in the eastern and northern rims of the Galactic supernova remnant Puppis A, using the Reflection Grating Spectrometer onboard the XMM-Newton satellite. A number of emission lines including K(alpha) triplets of He-like N, O , and Ne are clearly resolved for the first time. Intensity ratios of forbidden to resonance lines in the triplets are found to be higher than predictions by thermal emission models having plausible plasma parameters. The anomalous line ratios cannot be reproduced by effects of resonance scattering, recombination, or inner-shell ionization processes, but could be explained by charge-exchange emission that should arise at interfaces between the cold/warm clouds and the hot plasma. Our observations thus provide observational support for charge-exchange X-ray emission in supernova remnants.
The Higgs seesaw induced neutrino masses and dark matter
Cai, Yi; Chao, Wei
2015-08-12
In this study we propose a possible explanation of the active neutrino Majorana masses with the TeV scale new physics which also provide a dark matter candidate. We extend the Standard Model (SM) with a local U(1)' symmetry and introduce a seesaw relation for the vacuum expectation values (VEVs) of the exotic scalar singlets, which break the U(1)' spontaneously. The larger VEV is responsible for generating the Dirac mass term of the heavy neutrinos, while the smaller for the Majorana mass term. As a result active neutrino masses are generated via the modified inverse seesaw mechanism. The lightest of themore » new fermion singlets, which are introduced to cancel the U(1)' anomalies, can be a stable particle with ultra flavor symmetry and thus a plausible dark matter candidate. We explore the parameter space with constraints from the dark matter relic abundance and dark matter direct detection.« less
NASA Technical Reports Server (NTRS)
Zirin, R. M.; Witmer, E. A.
1972-01-01
An approximate collision analysis, termed the collision-force method, was developed for studying impact-interaction of an engine rotor blade fragment with an initially circular containment ring. This collision analysis utilizes basic mass, material property, geometry, and pre-impact velocity information for the fragment, together with any one of three postulated patterns of blade deformation behavior: (1) the elastic straight blade model, (2) the elastic-plastic straight shortening blade model, and (3) the elastic-plastic curling blade model. The collision-induced forces are used to predict the resulting motions of both the blade fragment and the containment ring. Containment ring transient responses are predicted by a finite element computer code which accommodates the large deformation, elastic-plastic planar deformation behavior of simple structures such as beams and/or rings. The effects of varying the values of certain parameters in each blade-behavior model were studied. Comparisons of predictions with experimental data indicate that of the three postulated blade-behavior models, the elastic-plastic curling blade model appears to be the most plausible and satisfactory for predicting the impact-induced motions of a ductile engine rotor blade and a containment ring against which the blade impacts.
Multilevel models for estimating incremental net benefits in multinational studies.
Grieve, Richard; Nixon, Richard; Thompson, Simon G; Cairns, John
2007-08-01
Multilevel models (MLMs) have been recommended for estimating incremental net benefits (INBs) in multicentre cost-effectiveness analysis (CEA). However, these models have assumed that the INBs are exchangeable and that there is a common variance across all centres. This paper examines the plausibility of these assumptions by comparing various MLMs for estimating the mean INB in a multinational CEA. The results showed that the MLMs that assumed the INBs were exchangeable and had a common variance led to incorrect inferences. The MLMs that included covariates to allow for systematic differences across the centres, and estimated different variances in each centre, made more plausible assumptions, fitted the data better and led to more appropriate inferences. We conclude that the validity of assumptions underlying MLMs used in CEA need to be critically evaluated before reliable conclusions can be drawn. Copyright 2006 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Jansen, Peter A.; Watter, Scott
2012-03-01
Connectionist language modelling typically has difficulty with syntactic systematicity, or the ability to generalise language learning to untrained sentences. This work develops an unsupervised connectionist model of infant grammar learning. Following the semantic boostrapping hypothesis, the network distils word category using a developmentally plausible infant-scale database of grounded sensorimotor conceptual representations, as well as a biologically plausible semantic co-occurrence activation function. The network then uses this knowledge to acquire an early benchmark clausal grammar using correlational learning, and further acquires separate conceptual and grammatical category representations. The network displays strongly systematic behaviour indicative of the general acquisition of the combinatorial systematicity present in the grounded infant-scale language stream, outperforms previous contemporary models that contain primarily noun and verb word categories, and successfully generalises broadly to novel untrained sensorimotor grounded sentences composed of unfamiliar nouns and verbs. Limitations as well as implications to later grammar learning are discussed.
An Improved Nested Sampling Algorithm for Model Selection and Assessment
NASA Astrophysics Data System (ADS)
Zeng, X.; Ye, M.; Wu, J.; WANG, D.
2017-12-01
Multimodel strategy is a general approach for treating model structure uncertainty in recent researches. The unknown groundwater system is represented by several plausible conceptual models. Each alternative conceptual model is attached with a weight which represents the possibility of this model. In Bayesian framework, the posterior model weight is computed as the product of model prior weight and marginal likelihood (or termed as model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. Nested sampling estimator (NSE) is a new proposed algorithm for marginal likelihood estimation. The implementation of NSE comprises searching the parameters' space from low likelihood area to high likelihood area gradually, and this evolution is finished iteratively via local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of local sampling procedure. Currently, Metropolis-Hasting (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood function. For improving the performance of NSE, it could be feasible to integrate more efficient and elaborated sampling algorithm - DREAMzs into the local sampling. In addition, in order to overcome the computation burden problem of large quantity of repeating model executions in marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build the surrogates for original groundwater model.
Stüeken, E E; Kipp, M A; Koehler, M C; Schwieterman, E W; Johnson, B; Buick, R
2016-12-01
Nitrogen is a major nutrient for all life on Earth and could plausibly play a similar role in extraterrestrial biospheres. The major reservoir of nitrogen at Earth's surface is atmospheric N 2 , but recent studies have proposed that the size of this reservoir may have fluctuated significantly over the course of Earth's history with particularly low levels in the Neoarchean-presumably as a result of biological activity. We used a biogeochemical box model to test which conditions are necessary to cause large swings in atmospheric N 2 pressure. Parameters for our model are constrained by observations of modern Earth and reconstructions of biomass burial and oxidative weathering in deep time. A 1-D climate model was used to model potential effects on atmospheric climate. In a second set of tests, we perturbed our box model to investigate which parameters have the greatest impact on the evolution of atmospheric pN 2 and consider possible implications for nitrogen cycling on other planets. Our results suggest that (a) a high rate of biomass burial would have been needed in the Archean to draw down atmospheric pN 2 to less than half modern levels, (b) the resulting effect on temperature could probably have been compensated by increasing solar luminosity and a mild increase in pCO 2 , and (c) atmospheric oxygenation could have initiated a stepwise pN 2 rebound through oxidative weathering. In general, life appears to be necessary for significant atmospheric pN 2 swings on Earth-like planets. Our results further support the idea that an exoplanetary atmosphere rich in both N 2 and O 2 is a signature of an oxygen-producing biosphere. Key Words: Biosignatures-Early Earth-Planetary atmospheres. Astrobiology 16, 949-963.
Guatteri, Mariagiovanna; Spudich, P.; Beroza, G.C.
2001-01-01
We consider the applicability of laboratory-derived rate- and state-variable friction laws to the dynamic rupture of the 1995 Kobe earthquake. We analyze the shear stress and slip evolution of Ide and Takeo's [1997] dislocation model, fitting the inferred stress change time histories by calculating the dynamic load and the instantaneous friction at a series of points within the rupture area. For points exhibiting a fast-weakening behavior, the Dieterich-Ruina friction law, with values of dc = 0.01-0.05 m for critical slip, fits the stress change time series well. This range of dc is 10-20 times smaller than the slip distance over which the stress is released, Dc, which previous studies have equated with the slip-weakening distance. The limited resolution and low-pass character of the strong motion inversion degrades the resolution of the frictional parameters and suggests that the actual dc is less than this value. Stress time series at points characterized by a slow-weakening behavior are well fitted by the Dieterich-Ruina friction law with values of dc ??? 0.01-0.05 m. The apparent fracture energy Gc can be estimated from waveform inversions more stably than the other friction parameters. We obtain a Gc = 1.5??106 J m-2 for the 1995 Kobe earthquake, in agreement with estimates for previous earthquakes. From this estimate and a plausible upper bound for the local rock strength we infer a lower bound for Dc of about 0.008 m. Copyright 2001 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Jalalzadeh Fard, B.; Hassanzadeh, H.; Bhatia, U.; Ganguly, A. R.
2016-12-01
Studies on urban areas show a significant increase in frequency and intensity of heatwaves over the past decades, and predict the same trend for future. Since heatwaves have been responsible for a large number of life losses, urgent adaptation and mitigation strategies are required in the policy and decision making level for a sustainable urban planning. The Sustainability and Data Sciences Laboratory at Northeastern University, under the aegis of Thriving Earth Exchange of AGU, is working with the town of Brookline to understand the potential public health impacts of anticipated heatwaves. We consider the most important social and physical factors to obtain vulnerability and exposure parameters for each census block group of the town. Utilizing remote sensing data, we locate Urban Heat Islands (UHIs) during a recent heatwave event, as the hazard parameter. We then create priority risk map using the risk framework. Our analyses show spatial correlations between the UHIs and social factors such as poverty, and physical factors such as land cover variations. Furthermore, we investigate the future heatwave frequency and intensity increases by analyzing the climate models predictions. For future changes of UHIs, land cover changes are investigated using available predictive data. Also, socioeconomic predictions are carried out to complete the futuristic models of heatwave risks. Considering plausible scenarios for Brookline, we develop different risk maps based on the vulnerability, exposure and hazard parameters. Eventually, we suggest guidelines for Heatwave Action Plans for prioritizing effective mitigation and adaptation strategies in urban planning for the town of Brookline.
Mean-field analysis of an inductive reasoning game: Application to influenza vaccination
NASA Astrophysics Data System (ADS)
Breban, Romulus; Vardavas, Raffaele; Blower, Sally
2007-09-01
Recently we have introduced an inductive reasoning game of voluntary yearly vaccination to establish whether or not a population of individuals acting in their own self-interest would be able to prevent influenza epidemics. Here, we analyze our model to describe the dynamics of the collective yearly vaccination uptake. We discuss the mean-field equations of our model and first order effects of fluctuations. We explain why our model predicts that severe epidemics are periodically expected even without the introduction of pandemic strains. We find that fluctuations in the collective yearly vaccination uptake induce severe epidemics with an expected periodicity that depends on the number of independent decision makers in the population. The mean-field dynamics also reveal that there are conditions for which the dynamics become robust to the fluctuations. However, the transition between fluctuation-sensitive and fluctuation-robust dynamics occurs for biologically implausible parameters. We also analyze our model when incentive-based vaccination programs are offered. When a family-based incentive is offered, the expected periodicity of severe epidemics is increased. This results from the fact that the number of independent decision makers is reduced, increasing the effect of the fluctuations. However, incentives based on the number of years of prepayment of vaccination may yield fluctuation-robust dynamics where severe epidemics are prevented. In this case, depending on prepayment, the transition between fluctuation-sensitive and fluctuation-robust dynamics may occur for biologically plausible parameters. Our analysis provides a practical method for identifying how many years of free vaccination should be provided in order to successfully ameliorate influenza epidemics.
Mean-field analysis of an inductive reasoning game: application to influenza vaccination.
Breban, Romulus; Vardavas, Raffaele; Blower, Sally
2007-09-01
Recently we have introduced an inductive reasoning game of voluntary yearly vaccination to establish whether or not a population of individuals acting in their own self-interest would be able to prevent influenza epidemics. Here, we analyze our model to describe the dynamics of the collective yearly vaccination uptake. We discuss the mean-field equations of our model and first order effects of fluctuations. We explain why our model predicts that severe epidemics are periodically expected even without the introduction of pandemic strains. We find that fluctuations in the collective yearly vaccination uptake induce severe epidemics with an expected periodicity that depends on the number of independent decision makers in the population. The mean-field dynamics also reveal that there are conditions for which the dynamics become robust to the fluctuations. However, the transition between fluctuation-sensitive and fluctuation-robust dynamics occurs for biologically implausible parameters. We also analyze our model when incentive-based vaccination programs are offered. When a family-based incentive is offered, the expected periodicity of severe epidemics is increased. This results from the fact that the number of independent decision makers is reduced, increasing the effect of the fluctuations. However, incentives based on the number of years of prepayment of vaccination may yield fluctuation-robust dynamics where severe epidemics are prevented. In this case, depending on prepayment, the transition between fluctuation-sensitive and fluctuation-robust dynamics may occur for biologically plausible parameters. Our analysis provides a practical method for identifying how many years of free vaccination should be provided in order to successfully ameliorate influenza epidemics.
NASA Astrophysics Data System (ADS)
Harper, E. B.; Stella, J. C.; Fremier, A. K.
2009-12-01
Fremont cottonwood (Populus fremontii) is an important component of semi-arid riparian ecosystems throughout western North America, but its populations are in decline due to flow regulation. Achieving a balance between human resource needs and riparian ecosystem function requires a mechanistic understanding of the multiple geomorphic and biological factors affecting tree recruitment and survival, including the timing and magnitude of river flows, and the concomitant influence on suitable habitat creation and mortality from scour and sedimentation burial. Despite a great deal of empirical research on some components of the system, such as factors affecting cottonwood recruitment, other key components are less studied. Yet understanding the relative influence of the full suite of physical and life-history drivers is critical to modeling whole-population dynamics under changing environmental conditions. We addressed these issues for the Fremont cottonwood population along the Sacramento River, CA using a sensitivity analysis approach to quantify uncertainty in parameters on the outcomes of a patch-based, dynamic population model. Using a broad range of plausible values for 15 model parameters that represent key physical, biological and climatic components of the ecosystem, we ran 1,000 population simulations that consisted of a subset of 14.3 million possible combinations of parameter estimates to predict the frequency of patch colonization and total forest habitat predicted to occur under current hydrologic conditions after 175 years. Results indicate that Fremont cottonwood populations are highly sensitive to the interactions among flow regime, sedimentation rate and the depth of the capillary fringe (Fig. 1). Estimates of long-term floodplain sedimentation rate would substantially improve model accuracy. Spatial variation in sediment texture was also important to the extent that it determines the depth of the capillary fringe, which regulates the availability of water for germination and adult tree growth. Our sensitivity analyses suggest that models of future scenarios should incorporate regional climate change projections because changes in temperature and the timing and volume of precipitation affects sensitive aspects of the system, including the timing of seed release and spring snowmelt runoff. Figure 1. The relative effects on model predictions of uncertainty around each parameter included in the patch-based population model for Fremont cottonwood.
Emulation for probabilistic weather forecasting
NASA Astrophysics Data System (ADS)
Cornford, Dan; Barillec, Remi
2010-05-01
Numerical weather prediction models are typically very expensive to run due to their complexity and resolution. Characterising the sensitivity of the model to its initial condition and/or to its parameters requires numerous runs of the model, which is impractical for all but the simplest models. To produce probabilistic forecasts requires knowledge of the distribution of the model outputs, given the distribution over the inputs, where the inputs include the initial conditions, boundary conditions and model parameters. Such uncertainty analysis for complex weather prediction models seems a long way off, given current computing power, with ensembles providing only a partial answer. One possible way forward that we develop in this work is the use of statistical emulators. Emulators provide an efficient statistical approximation to the model (or simulator) while quantifying the uncertainty introduced. In the emulator framework, a Gaussian process is fitted to the simulator response as a function of the simulator inputs using some training data. The emulator is essentially an interpolator of the simulator output and the response in unobserved areas is dictated by the choice of covariance structure and parameters in the Gaussian process. Suitable parameters are inferred from the data in a maximum likelihood, or Bayesian framework. Once trained, the emulator allows operations such as sensitivity analysis or uncertainty analysis to be performed at a much lower computational cost. The efficiency of emulators can be further improved by exploiting the redundancy in the simulator output through appropriate dimension reduction techniques. We demonstrate this using both Principal Component Analysis on the model output and a new reduced-rank emulator in which an optimal linear projection operator is estimated jointly with other parameters, in the context of simple low order models, such as the Lorenz 40D system. We present the application of emulators to probabilistic weather forecasting, where the construction of the emulator training set replaces the traditional ensemble model runs. Thus the actual forecast distributions are computed using the emulator conditioned on the ‘ensemble runs' which are chosen to explore the plausible input space using relatively crude experimental design methods. One benefit here is that the ensemble does not need to be a sample from the true distribution of the input space, rather it should cover that input space in some sense. The probabilistic forecasts are computed using Monte Carlo methods sampling from the input distribution and using the emulator to produce the output distribution. Finally we discuss the limitations of this approach and briefly mention how we might use similar methods to learn the model error within a framework that incorporates a data assimilation like aspect, using emulators and learning complex model error representations. We suggest future directions for research in the area that will be necessary to apply the method to more realistic numerical weather prediction models.
Remediation of anionic dye from aqueous system using bio-adsorbent prepared by microwave activation.
Sharma, Arush; Sharma, Gaurav; Naushad, Mu; Ghfar, Ayman A; Pathania, Deepak
2018-04-01
The present study was attempted to ascertain the possible application of activated carbon as a cost-effective and eco-friendly adsorbent prepared via microwave-assisted chemical activation. The activated carbon was characterized using different techniques. The various adsorption parameters have been optimized to examine the viability of activated carbon as a plausible sorbent for the remediation of Congo red (CR) dye from the aquatic system. The equilibrium data adequately fitted to the Langmuir isotherm with better R 2 (0.994). The maximum adsorption capacity (q m ) of activated carbon was recorded to be 68.96 mg/g. Additionally, sorptional kinetic data were examined by reaction-based and diffusion-based models such as pseudo-first-order and pseudo-second-order equations, and Elovich, intra-particle diffusion, and Dumwald-Wagner models, respectively. The computed values of thermodynamic parameters such as free energy change (ΔG 0 ), enthalpy change (ΔH 0 ) and entropy change (ΔS 0 ) were recorded as -3.63, 42.47 and 152.07 J/mol K, respectively, at 30°C, which accounted for a favorable, spontaneous and endothermic process. The regeneration study emphasized that the percentage uptake declined from 90.35% to 83.45% after six cycles of testing. So, our findings implied that activated carbon produced from biomass must be cost-effectively used as an adsorbent for detoxifying the CR dye from industrial effluents.
Implications from the Upper Limit of Radio Afterglow Emission of FRB 131104/Swift J0644.5-5111
NASA Astrophysics Data System (ADS)
Gao, He; Zhang, Bing
2017-02-01
A γ-ray transient, Swift J0644.5-5111, has been claimed to be associated with FRB 131104. However, a long-term radio imaging follow-up observation only placed an upper limit on the radio afterglow flux of Swift J0644.5-5111. Applying the external shock model, we perform a detailed constraint on the afterglow parameters for the FRB 131104/Swift J0644.5-5111 system. We find that for the commonly used microphysics shock parameters (e.g., {ɛ }e=0.1, {ɛ }B=0.01, and p = 2.3), if the fast radio burst (FRB) is indeed cosmological as inferred from its measured dispersion measure (DM), the ambient medium number density should be ≤slant {10}-3 {{cm}}-3, which is the typical value for a compact binary merger environment but disfavors a massive star origin. Assuming a typical ISM density, one would require that the redshift of the FRB be much smaller than the value inferred from DM (z\\ll 0.1), implying a non-cosmological origin of DM. The constraints are much looser if one adopts smaller {ɛ }B and {ɛ }e values, as observed in some gamma-ray burst afterglows. The FRB 131104/Swift J0644.5-5111 association remains plausible. We critically discuss possible progenitor models for the system.
Growing Actin Networks Form Lamellipodium and Lamellum by Self-Assembly
Huber, Florian; Käs, Josef; Stuhrmann, Björn
2008-01-01
Many different cell types are able to migrate by formation of a thin actin-based cytoskeletal extension. Recently, it became evident that this extension consists of two distinct substructures, designated lamellipodium and lamellum, which differ significantly in their kinetic and kinematic properties as well as their biochemical composition. We developed a stochastic two-dimensional computer simulation that includes chemical reaction kinetics, G-actin diffusion, and filament transport to investigate the formation of growing actin networks in migrating cells. Model parameters were chosen based on experimental data or theoretical considerations. In this work, we demonstrate the system's ability to form two distinct networks by self-organization. We found a characteristic transition in mean filament length as well as a distinct maximum in depolymerization flux, both within the first 1–2 μm. The separation into two distinct substructures was found to be extremely robust with respect to initial conditions and variation of model parameters. We quantitatively investigated the complex interplay between ADF/cofilin and tropomyosin and propose a plausible mechanism that leads to spatial separation of, respectively, ADF/cofilin- or tropomyosin-dominated compartments. Tropomyosin was found to play an important role in stabilizing the lamellar actin network. Furthermore, the influence of filament severing and annealing on the network properties is explored, and simulation data are compared to existing experimental data. PMID:18708450
Expanding the Role of Connectionism in SLA Theory
ERIC Educational Resources Information Center
Language Learning, 2013
2013-01-01
In this article, I explore how connectionism might expand its role in second language acquisition (SLA) theory by showing how some symbolic models of bilingual and second language lexical memory can be reduced to a biologically realistic (i.e., neurally plausible) connectionist model. This integration or hybridization of the two models follows the…
ERIC Educational Resources Information Center
Laszlo, Sarah; Plaut, David C.
2012-01-01
The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between…
Sensitivity of projected long-term CO2 emissions across the Shared Socioeconomic Pathways
NASA Astrophysics Data System (ADS)
Marangoni, G.; Tavoni, M.; Bosetti, V.; Borgonovo, E.; Capros, P.; Fricko, O.; Gernaat, D. E. H. J.; Guivarch, C.; Havlik, P.; Huppmann, D.; Johnson, N.; Karkatsoulis, P.; Keppo, I.; Krey, V.; Ó Broin, E.; Price, J.; van Vuuren, D. P.
2017-01-01
Scenarios showing future greenhouse gas emissions are needed to estimate climate impacts and the mitigation efforts required for climate stabilization. Recently, the Shared Socioeconomic Pathways (SSPs) have been introduced to describe alternative social, economic and technical narratives, spanning a wide range of plausible futures in terms of challenges to mitigation and adaptation. Thus far the key drivers of the uncertainty in emissions projections have not been robustly disentangled. Here we assess the sensitivities of future CO2 emissions to key drivers characterizing the SSPs. We use six state-of-the-art integrated assessment models with different structural characteristics, and study the impact of five families of parameters, related to population, income, energy efficiency, fossil fuel availability, and low-carbon energy technology development. A recently developed sensitivity analysis algorithm allows us to parsimoniously compute both the direct and interaction effects of each of these drivers on cumulative emissions. The study reveals that the SSP assumptions about energy intensity and economic growth are the most important determinants of future CO2 emissions from energy combustion, both with and without a climate policy. Interaction terms between parameters are shown to be important determinants of the total sensitivities.
Critical zone evolution and the origins of organised complexity in watersheds
NASA Astrophysics Data System (ADS)
Harman, C.; Troch, P. A.; Pelletier, J.; Rasmussen, C.; Chorover, J.
2012-04-01
The capacity of the landscape to store and transmit water is the result of a historical trajectory of landscape, soil and vegetation development, much of which is driven by hydrology itself. Progress in geomorphology and pedology has produced models of surface and sub-surface evolution in soil-mantled uplands. These dissected, denuding modeled landscapes are emblematic of the kinds of dissipative self-organized flow structures whose hydrologic organization may also be understood by low-dimensional hydrologic models. They offer an exciting starting-point for examining the mapping between the long-term controls on landscape evolution and the high-frequency hydrologic dynamics. Here we build on recent theoretical developments in geomorphology and pedology to try to understand how the relative rates of erosion, sediment transport and soil development in a landscape determine catchment storage capacity and the relative dominance of runoff process, flow pathways and storage-discharge relationships. We do so by using a combination of landscape evolution models, hydrologic process models and data from a variety of sources, including the University of Arizona Critical Zone Observatory. A challenge to linking the landscape evolution and hydrologic model representations is the vast differences in the timescales implicit in the process representations. Furthermore the vast array of processes involved makes parameterization of such models an enormous challenge. The best data-constrained geomorphic transport and soil development laws only represent hydrologic processes implicitly, through the transport and weathering rate parameters. In this work we propose to avoid this problem by identifying the relationship between the landscape and soil evolution parameters and macroscopic climate and geological controls. These macroscopic controls (such as the aridity index) have two roles: 1) they express the water and energy constraints on the long-term evolution of the landscape system, and 2) they bound the range of plausible short-term hydroclimatic regimes that may drive a particular landscape's hydrologic dynamics. To ensure that the hydrologic dynamics implicit in the evolutionary parameters are compatible with the dynamics observed in the hydrologic modeling, a set of consistency checks based on flow process dominance are developed.
Kim, Cheol-Hee; Chang, Lim-Seok; Meng, Fan; Kajino, Mizuo; Ueda, Hiromasa; Zhang, Yuanhang; Son, Hye-Young; Lee, Jong-Jae; He, Youjiang; Xu, Jun; Sato, Keiichi; Sakurai, Tatsuya; Han, Zhiwei; Duan, Lei; Kim, Jeong-Soo; Lee, Suk-Jo; Song, Chang-Keun; Ban, Soo-Jin; Shim, Shang-Gyoo; Sunwoo, Young; Lee, Tae-Young
2012-11-01
In response to increasing trends in sulfur deposition in Northeast Asia, three countries in the region (China, Japan, and Korea) agreed to devise abatement strategies. The concepts of critical loads and source-receptor (S-R) relationships provide guidance for formulating such strategies. Based on the Long-range Transboundary Air Pollutants in Northeast Asia (LTP) project, this study analyzes sulfur deposition data in order to optimize acidic loads over the three countries. The three groups involved in this study carried out a full year (2002) of sulfur deposition modeling over the geographic region spanning the three countries, using three air quality models: MM5-CMAQ, MM5-RAQM, and RAMS-CADM, employed by Chinese, Japanese, and Korean modeling groups, respectively. Each model employed its own meteorological numerical model and model parameters. Only the emission rates for SO(2) and NO(x) obtained from the LTP project were the common parameter used in the three models. Three models revealed some bias from dry to wet deposition, particularly the latter because of the bias in annual precipitation. This finding points to the need for further sensitivity tests of the wet removal rates in association with underlying cloud-precipitation physics and parameterizations. Despite this bias, the annual total (dry plus wet) sulfur deposition predicted by the models were surprisingly very similar. The ensemble average annual total deposition was 7,203.6 ± 370 kt S with a minimal mean fractional error (MFE) of 8.95 ± 5.24 % and a pattern correlation (PC) of 0.89-0.93 between the models. This exercise revealed that despite rather poor error scores in comparison with observations, these consistent total deposition values across the three models, based on LTP group's input data assumptions, suggest a plausible S-R relationship that can be applied to the next task of designing cost-effective emission abatement strategies.
Random regression analyses using B-splines to model growth of Australian Angus cattle
Meyer, Karin
2005-01-01
Regression on the basis function of B-splines has been advocated as an alternative to orthogonal polynomials in random regression analyses. Basic theory of splines in mixed model analyses is reviewed, and estimates from analyses of weights of Australian Angus cattle from birth to 820 days of age are presented. Data comprised 84 533 records on 20 731 animals in 43 herds, with a high proportion of animals with 4 or more weights recorded. Changes in weights with age were modelled through B-splines of age at recording. A total of thirteen analyses, considering different combinations of linear, quadratic and cubic B-splines and up to six knots, were carried out. Results showed good agreement for all ages with many records, but fluctuated where data were sparse. On the whole, analyses using B-splines appeared more robust against "end-of-range" problems and yielded more consistent and accurate estimates of the first eigenfunctions than previous, polynomial analyses. A model fitting quadratic B-splines, with knots at 0, 200, 400, 600 and 821 days and a total of 91 covariance components, appeared to be a good compromise between detailedness of the model, number of parameters to be estimated, plausibility of results, and fit, measured as residual mean square error. PMID:16093011
Hu, Zhenghui; Ni, Pengyu; Wan, Qun; Zhang, Yan; Shi, Pengcheng; Lin, Qiang
2016-07-08
Changes in BOLD signals are sensitive to the regional blood content associated with the vasculature, which is known as V0 in hemodynamic models. In previous studies involving dynamic causal modeling (DCM) which embodies the hemodynamic model to invert the functional magnetic resonance imaging signals into neuronal activity, V0 was arbitrarily set to a physiolog-ically plausible value to overcome the ill-posedness of the inverse problem. It is interesting to investigate how the V0 value influences DCM. In this study we addressed this issue by using both synthetic and real experiments. The results show that the ability of DCM analysis to reveal information about brain causality depends critically on the assumed V0 value used in the analysis procedure. The choice of V0 value not only directly affects the strength of system connections, but more importantly also affects the inferences about the network architecture. Our analyses speak to a possible refinement of how the hemody-namic process is parameterized (i.e., by making V0 a free parameter); however, the conditional dependencies induced by a more complex model may create more problems than they solve. Obtaining more realistic V0 information in DCM can improve the identifiability of the system and would provide more reliable inferences about the properties of brain connectivity.
Tumorigenesis and Greenhouse-Effect System Dynamics: Phenomenally Diverse, but Noumenally Similar?
NASA Astrophysics Data System (ADS)
Prakash, Sai
We present a physicochemical model of tumorigenesis leading to cancer invasion and metastasis. The continuum-theoretic model, congruent with recent experiments, analyzes the plausibility of oncogenic neoplasia-induced cavitation or tensile yielding (plasticity) of the tumoral basement membrane (BM) to activate stromal invasion. The model abstracts a spheroid of normal and cancer cells that grows radially via water and nutrient influx while constrained by a stiffer BM and cell adhesion molecules. It is based on coupled fluid-solid mechanics and ATP-fueled mechano-damped cell kinetics, and uses empirical data alone as parameters. The model predicts the dynamic force and exergy (ATP) fields, and tumor size among other variables, and generates the sigmoidal dynamics of far-from-equilibrium biota. Simulations show that the tumor-membrane system, on neoplastic perturbation, evolves from one homeostatic steady state to another over time. Integrated with system dynamics theory, the model renders a key, emergent tissue-level feedback control perspective of malignancy: neoplastic tumors coupled with pathologically-softened BMs appear to participate in altered autoregulatory behavior, and likely undergo BM cavitation and stress-localized ruptures to their adhesome, with or without invadopoiesis, thereby, initiating invasion. Serendipitously, the results also reveal a noumenal similarity of the tumor-membrane to the earth-atmosphere open reactive system as concerns self-regulation.
The role of building models in the evaluation of heat-related risks
NASA Astrophysics Data System (ADS)
Buchin, Oliver; Jänicke, Britta; Meier, Fred; Scherer, Dieter; Ziegler, Felix
2016-04-01
Hazard-risk relationships in epidemiological studies are generally based on the outdoor climate, despite the fact that most of humans' lifetime is spent indoors. By coupling indoor and outdoor climates with a building model, the risk concept developed can still be based on the outdoor conditions but also includes exposure to the indoor climate. The influence of non-linear building physics and the impact of air conditioning on heat-related risks can be assessed in a plausible manner using this risk concept. For proof of concept, the proposed risk concept is compared to a traditional risk analysis. As an example, daily and city-wide mortality data of the age group 65 and older in Berlin, Germany, for the years 2001-2010 are used. Four building models with differing complexity are applied in a time-series regression analysis. This study shows that indoor hazard better explains the variability in the risk data compared to outdoor hazard, depending on the kind of building model. Simplified parameter models include the main non-linear effects and are proposed for the time-series analysis. The concept shows that the definitions of heat events, lag days, and acclimatization in a traditional hazard-risk relationship are influenced by the characteristics of the prevailing building stock.
Deng; Zhang; Zhang; ...
2016-04-11
The jet composition and energy dissipation mechanism of gamma-ray bursts (GRBs) and blazars are fundamental questions that remain not fully understood. One plausible model is to interpret the γ-ray emission of GRBs and optical emission of blazars as synchrotron radiation of electrons accelerated from the collision-induced magnetic dissipation regions in Poynting-flux-dominated jets. The polarization observation is an important and independent information to test this model. Based on our recent 3D relativistic MHD simulations of collision-induced magnetic dissipation of magnetically dominated blobs, here we perform calculations of the polarization properties of the emission in the dissipation region and apply the resultsmore » to model the polarization observational data of GRB prompt emission and blazar optical emission. In this article, we show that the same numerical model with different input parameters can reproduce well the observational data of both GRBs and blazars, especially the 90° polarization angle (PA) change in GRB 100826A and the 180° PA swing in blazar 3C279. This supports a unified model for GRB and blazar jets, suggesting that collision-induced magnetic reconnection is a common physical mechanism to power the relativistic jet emission from events with very different black hole masses.« less
Modeling the lowest-cost splitting of a herd of cows by optimizing a cost function
NASA Astrophysics Data System (ADS)
Gajamannage, Kelum; Bollt, Erik M.; Porter, Mason A.; Dawkins, Marian S.
2017-06-01
Animals live in groups to defend against predation and to obtain food. However, for some animals—especially ones that spend long periods of time feeding—there are costs if a group chooses to move on before their nutritional needs are satisfied. If the conflict between feeding and keeping up with a group becomes too large, it may be advantageous for some groups of animals to split into subgroups with similar nutritional needs. We model the costs and benefits of splitting in a herd of cows using a cost function that quantifies individual variation in hunger, desire to lie down, and predation risk. We model the costs associated with hunger and lying desire as the standard deviations of individuals within a group, and we model predation risk as an inverse exponential function of the group size. We minimize the cost function over all plausible groups that can arise from a given herd and study the dynamics of group splitting. We examine how the cow dynamics and cost function depend on the parameters in the model and consider two biologically-motivated examples: (1) group switching and group fission in a herd of relatively homogeneous cows, and (2) a herd with an equal number of adult males (larger animals) and adult females (smaller animals).
Highly adaptive tests for group differences in brain functional connectivity.
Kim, Junghi; Pan, Wei
2015-01-01
Resting-state functional magnetic resonance imaging (rs-fMRI) and other technologies have been offering evidence and insights showing that altered brain functional networks are associated with neurological illnesses such as Alzheimer's disease. Exploring brain networks of clinical populations compared to those of controls would be a key inquiry to reveal underlying neurological processes related to such illnesses. For such a purpose, group-level inference is a necessary first step in order to establish whether there are any genuinely disrupted brain subnetworks. Such an analysis is also challenging due to the high dimensionality of the parameters in a network model and high noise levels in neuroimaging data. We are still in the early stage of method development as highlighted by Varoquaux and Craddock (2013) that "there is currently no unique solution, but a spectrum of related methods and analytical strategies" to learn and compare brain connectivity. In practice the important issue of how to choose several critical parameters in estimating a network, such as what association measure to use and what is the sparsity of the estimated network, has not been carefully addressed, largely because the answers are unknown yet. For example, even though the choice of tuning parameters in model estimation has been extensively discussed in the literature, as to be shown here, an optimal choice of a parameter for network estimation may not be optimal in the current context of hypothesis testing. Arbitrarily choosing or mis-specifying such parameters may lead to extremely low-powered tests. Here we develop highly adaptive tests to detect group differences in brain connectivity while accounting for unknown optimal choices of some tuning parameters. The proposed tests combine statistical evidence against a null hypothesis from multiple sources across a range of plausible tuning parameter values reflecting uncertainty with the unknown truth. These highly adaptive tests are not only easy to use, but also high-powered robustly across various scenarios. The usage and advantages of these novel tests are demonstrated on an Alzheimer's disease dataset and simulated data.
Properties of young pulsar wind nebulae: TeV detectability and pulsar properties
NASA Astrophysics Data System (ADS)
Tanaka, Shuta J.; Takahara, Fumio
2013-03-01
Among dozens of young pulsar wind nebulae (PWNe), some have been detected in TeV γ-rays (TeV PWNe), while others have not (non-TeV PWNe). The TeV emission detectability is not correlated with either the spin-down power or the characteristic age of the central pulsars and it is an open question as to what determines the detectability. To study this problem, we investigate the spectral evolution of five young non-TeV PWNe: 3C 58, G310.6-1.6, G292.0+1.8, G11.2-0.3 and SNR B0540-69.3. We use a spectral evolution model that was developed in our previous works to be applied to young TeV PWNe. The TeV γ-ray flux upper limits of non-TeV PWNe give upper or lower limits on parameters such as the age of the PWN and the fraction of spin-down power going into magnetic energy injection (the fraction parameter). Combined with other independent observational and theoretical studies, we can guess a plausible value of the parameters for each object. For 3C 58, we prefer parameters with an age of 2.5 kyr and fraction parameter of 3.0 × 10-3, although the spectral modelling alone does not rule out a lower age and a higher fraction parameter. The fraction parameter of 3.0 × 10-3 is also consistent for other non-TeV PWNe and thus the value is regarded as common to young PWNe, including TeV PWNe. Moreover, we find that the intrinsic properties of the central pulsars are similar: 1048-50 erg for the initial rotational energy and 1042-44 erg for the magnetic energy (2 × 1012-3 × 1013 G for the dipole magnetic field strength at the surface). The TeV detectability is correlated with the total injected energy and the energy density of the interstellar radiation field around PWNe. Except for the case of G292.0+1.8, broken power-law injection of the particles reproduces the broad-band emission from non-TeV PWNe well.
Scenario planning: a tool for academic health sciences libraries.
Ludwig, Logan; Giesecke, Joan; Walton, Linda
2010-03-01
Review the International Campaign to Revitalise Academic Medicine (ICRAM) Future Scenarios as a potential starting point for developing scenarios to envisage plausible futures for health sciences libraries. At an educational workshop, 15 groups, each composed of four to seven Association of Academic Health Sciences Libraries (AAHSL) directors and AAHSL/NLM Fellows, created plausible stories using the five ICRAM scenarios. Participants created 15 plausible stories regarding roles played by health sciences librarians, how libraries are used and their physical properties in response to technology, scholarly communication, learning environments and health care economic changes. Libraries are affected by many forces, including economic pressures, curriculum and changes in technology, health care delivery and scholarly communications business models. The future is likely to contain ICRAM scenario elements, although not all, and each, if they come to pass, will impact health sciences libraries. The AAHSL groups identified common features in their scenarios to learn lessons for now. The hope is that other groups find the scenarios useful in thinking about academic health science library futures.
Self-gravity, self-consistency, and self-organization in geodynamics and geochemistry
NASA Astrophysics Data System (ADS)
Anderson, Don L.
The results of seismology and geochemistry for mantle structure are widely believed to be discordant, the former favoring whole-mantle convection and the latter favoring layered convection with a boundary near 650 km. However, a different view arises from recognizing effects usually ignored in the construction of these models, including physical plausibility and dimensionality. Self-compression and expansion affect material properties that are important in all aspects of mantle geochemistry and dynamics, including the interpretation of tomographic images. Pressure compresses a solid and changes physical properties that depend on volume and does so in a highly nonlinear way. Intrinsic, anelastic, compositional, and crystal structure effects control seismic velocities; temperature is not the only parameter, even though tomographic images are often treated as temperature maps. Shear velocity is not a good proxy for density, temperature, and composition or for other elastic constants. Scaling concepts are important in mantle dynamics, equations of state, and wherever it is necessary to extend laboratory experiments to the parameter range of the Earth's mantle. Simple volume-scaling relations that permit extrapolation of laboratory experiments, in a thermodynamically self-consistent way, to deep mantle conditions include the quasiharmonic approximation but not the Boussinesq formalisms. Whereas slabs, plates, and the upper thermal boundary layer of the mantle have characteristic thicknesses of hundreds of kilometers and lifetimes on the order of 100 million years, volume-scaling predicts values an order of magnitude higher for deep-mantle thermal boundary layers. This implies that deep-mantle features are sluggish and ancient. Irreversible chemical stratification is consistent with these results; plausible temperature variations in the deep mantle cause density variations that are smaller than the probable density contrasts across chemical interfaces created by accretional differentiation and magmatic processes. Deep-mantle features may be convectively isolated from upper-mantle processes. Plate tectonics and surface geochemical cycles appear to be entirely restricted to the upper ˜1,000 km. The 650-km discontinuity is mainly an isochemical phase change but major-element chemical boundaries may occur at other depths. Recycling laminates the upper mantle and also makes it statistically heterogeneous, in agreement with high-frequency scattering studies. In contrast to standard geochemical models and recent modifications, the deeper layers need not be accessible to surface volcanoes. There is no conflict between geophysical and geochemical data, but a physical basis for standard geochemical and geodynamic mantle models, including the two-layer and whole-mantle versions, and qualitative tomographic interpretations has been lacking.
The Central Role of Recognition in Auditory Perception: A Neurobiological Model
ERIC Educational Resources Information Center
McLachlan, Neil; Wilson, Sarah
2010-01-01
The model presents neurobiologically plausible accounts of sound recognition (including absolute pitch), neural plasticity involved in pitch, loudness and location information integration, and streaming and auditory recall. It is proposed that a cortical mechanism for sound identification modulates the spectrotemporal response fields of inferior…
Banding of NMR-derived Methyl Order Parameters: Implications for Protein Dynamics
Sharp, Kim A.; Kasinath, Vignesh; Wand, A. Joshua
2014-01-01
Our understanding of protein folding, stability and function has begun to more explicitly incorporate dynamical aspects. Nuclear magnetic resonance has emerged as a powerful experimental method for obtaining comprehensive site-resolved insight into protein motion. It has been observed that methyl-group motion tends to cluster into three “classes” when expressed in terms of the popular Lipari-Szabo model-free squared generalized order parameter. Here the origins of the three classes or bands in the distribution of order parameters are examined. As a first step, a Bayesian based approach, which makes no a priori assumption about the existence or number of bands, is developed to detect the banding of O2axis values derived either from NMR experiments or molecular dynamics simulations. The analysis is applied to seven proteins with extensive molecular dynamics simulations of these proteins in explicit water to examine the relationship between O2 and fine details of the motion of methyl bearing side chains. All of the proteins studied display banding, with some subtle differences. We propose a very simple yet plausible physical mechanism for banding. Finally, our Bayesian method is used to analyze the measured distributions of methyl group motions in the catabolite activating protein and several of its mutants in various liganded states and discuss the functional implications of the observed banding to protein dynamics and function. PMID:24677353
Pedestrian evacuation modeling to reduce vehicle use for distant tsunami evacuations in Hawaiʻi
Wood, Nathan J.; Jones, Jamie; Peters, Jeff; Richards, Kevin
2018-01-01
Tsunami waves that arrive hours after generation elsewhere pose logistical challenges to emergency managers due to the perceived abundance of time and inclination of evacuees to use vehicles. We use coastal communities on the island of Oʻahu (Hawaiʻi, USA) to demonstrate regional evacuation modeling that can identify where successful pedestrian-based evacuations are plausible and where vehicle use could be discouraged. The island of Oʻahu has two tsunami-evacuation zones (standard and extreme), which provides the opportunity to examine if recommended travel modes vary based on zone. Geospatial path distance models are applied to estimate population exposure as a function of pedestrian travel time and speed out of evacuation zones. The use of the extreme zone triples the number of residents, employees, and facilities serving at-risk populations that would be encouraged to evacuate and slightly reduces the percentage of residents (98–76%) that could evacuate in less than 15 min at a plausible speed (with similar percentages for employees). Areas with lengthy evacuations are concentrated in the North Shore region for the standard zone but found all around the Oʻahu coastline for the extreme zone. The use of the extreme zone results in a 26% increase in the number of hotel visitors that would be encouraged to evacuate, and a 76% increase in the number of them that may require more than 15 min. Modeling can identify where pedestrian evacuations are plausible; however, there are logistical and behavioral issues that warrant attention before localized evacuation procedures may be realistic.
Conceptual uncertainty in crystalline bedrock: Is simple evaluation the only practical approach?
Geier, J.; Voss, C.I.; Dverstorp, B.
2002-01-01
A simple evaluation can be used to characterize the capacity of crystalline bedrock to act as a barrier to release radionuclides from a nuclear waste repository. Physically plausible bounds on groundwater flow and an effective transport-resistance parameter are estimated based on fundamental principles and idealized models of pore geometry. Application to an intensively characterized site in Sweden shows that, due to high spatial variability and uncertainty regarding properties of transport paths, the uncertainty associated with the geological barrier is too high to allow meaningful discrimination between good and poor performance. Application of more complex (stochastic-continuum and discrete-fracture-network) models does not yield a significant improvement in the resolution of geological barrier performance. Comparison with seven other less intensively characterized crystalline study sites in Sweden leads to similar results, raising a question as to what extent the geological barrier function can be characterized by state-of-the art site investigation methods prior to repository construction. A simple evaluation provides a simple and robust practical approach for inclusion in performance assessment.
Conceptual uncertainty in crystalline bedrock: Is simple evaluation the only practical approach?
Geier, J.; Voss, C.I.; Dverstorp, B.
2002-01-01
A simple evaluation can be used to characterise the capacity of crystalline bedrock to act as a barrier to releases of radionuclides from a nuclear waste repository. Physically plausible bounds on groundwater flow and an effective transport-resistance parameter are estimated based on fundamental principles and idealised models of pore geometry. Application to an intensively characterised site in Sweden shows that, due to high spatial variability and uncertainty regarding properties of transport paths, the uncertainty associated with the geological barrier is too high to allow meaningful discrimination between good and poor performance. Application of more complex (stochastic-continuum and discrete-fracture-network) models does not yield a significant improvement in the resolution of geologic-barrier performance. Comparison with seven other less intensively characterised crystalline study sites in Sweden leads to similar results, raising a question as to what extent the geological barrier function can be characterised by state-of-the art site investigation methods prior to repository construction. A simple evaluation provides a simple and robust practical approach for inclusion in performance assessment.
Anelastic tidal dissipation in multi-layer planets
NASA Astrophysics Data System (ADS)
Remus, F.; Mathis, S.; Zahn, J.-P.; Lainey, V.
2012-09-01
Earth-like planets have anelastic mantles, whereas giant planets may have anelastic cores. As for the fluid parts of a body, the tidal dissipation of such solid regions, gravitationally perturbed by a companion body, highly depends on its internal friction, and thus on its internal structure. Therefore, modelling this kind of interaction presents a high interest to provide constraints on planets interiors, whose properties are still quite uncertain. Here, we examine the equilibrium tide in the solid part of a planet, taking into account the presence of a fluid envelope. We derive the different Love numbers that describe its deformation and discuss the dependence of the quality factor Q on the chosen anelastic model and the size of the core. Taking plausible values for the anelastic parameters, and discussing the frequency-dependence of the solid dissipation, we show how this mechanism may compete with the dissipation in fluid layers, when applied to Jupiter- and Saturn-like planets. We also discuss the case of the icy giants Uranus and Neptune. Finally, we present the way to implement the results in the equations that describe the dynamical evolution of planetary systems.
Duarte Queirós, Sílvio M; Crokidakis, Nuno; Soares-Pinto, Diogo O
2009-07-01
The influence of the tail features of the local magnetic field probability density function (PDF) on the ferromagnetic Ising model is studied in the limit of infinite range interactions. Specifically, we assign a quenched random field whose value is in accordance with a generic distribution that bears platykurtic and leptokurtic distributions depending on a single parameter tau<3 to each site. For tau<5/3, such distributions, which are basically Student-t and r distribution extended for all plausible real degrees of freedom, present a finite standard deviation, if not the distribution has got the same asymptotic power-law behavior as a alpha-stable Lévy distribution with alpha=(3-tau)/(tau-1). For every value of tau, at specific temperature and width of the distribution, the system undergoes a continuous phase transition. Strikingly, we impart the emergence of an inflexion point in the temperature-PDF width phase diagrams for distributions broader than the Cauchy-Lorentz (tau=2) which is accompanied with a divergent free energy per spin (at zero temperature).
Core formation in the shergottite parent body and comparison with the earth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Treiman, A.H.; Jones, J.H.; Drake, M.J.
1987-03-30
The mantle of the shergottite parent body (SPB) is depleted relative to the bulk SPB in siderophile and chalcophile elements; these elements are inferred to reside in the SPB's core. Our chemical model of these depletions rests on a physically plausible process of segregation of partially molten metal form partially molten silicates as the SPB grows and is heated above silicate and metallic solidi during accretion. Metallic and silicate phases equilibrate at low pressures as new material is accreted to the SPB surface. Later movement of the metallic phases to the planet's center is so rapid that high-pressure equilibration ismore » insignificant. Partitioning of siderophile and chalcophile elements among solid and liquid metal and silicate determines their abundances in the SPB mantle. Using partition coefficients and the SPB mantle composition determined in earlier studies, we model the abundances of Ag, Au, Co, Ga, Mo, Ni, P, Re, S, and W with free parameters being oxygen fugacity, proportion of solid metal formed, proportion of metallic liquid formed, and proportion of silicate that is molten.« less
The Impact of Deviation from Michaelis-Menten Saturation on Mathematical Model Stability Properties
NASA Technical Reports Server (NTRS)
Blackwell, Charles; Kliss, Mark (Technical Monitor)
1998-01-01
Based on purely abstract ecological theory, it has been argued that a system composed of two or more consumers competing for the same resource cannot persist. By analysis on a Monod format mathematical model, Hubble and others demonstrated that this assertion is true for all but very special cases of such competing organisms which are determined by an index formed by a grouping of. the parameters which characterize the biological processes of the competing organisms. In the laboratory, using a bioreactor, Hansen and Hubble obtained confirmatory results for several cases of two competing species, and they characterized it as "qualitative confirmation" of the assertion. This result is amazing, since the analysis required the exact equality of the hey index, and it seems certain that no pair of organism species could have exactly equal values. It is quite plausible, however, that pairs of organism species could have approximately equal indices, and the question of how different they could be and still have coexistence of the two (or more) presents itself. In this paper, the pursuit of this question and a compatible resolution is presented.
Dark matter detectors as dark photon helioscopes.
An, Haipeng; Pospelov, Maxim; Pradler, Josef
2013-07-26
Light new particles with masses below 10 keV, often considered as a plausible extension of the standard model, will be emitted from the solar interior and can be detected on Earth with a variety of experimental tools. Here, we analyze the new "dark" vector state V, a massive vector boson mixed with the photon via an angle κ, that in the limit of the small mass mV has its emission spectrum strongly peaked at low energies. Thus, we utilize the constraints on the atomic ionization rate imposed by the results of the XENON10 experiment to set the limit on the parameters of this model: κ×mV<3×10(-12) eV. This makes low-threshold dark matter experiments the most sensitive dark vector helioscopes, as our result not only improves current experimental bounds from other searches by several orders of magnitude but also surpasses even the most stringent astrophysical and cosmological limits in a seven-decade-wide interval of mV. We generalize this approach to other light exotic particles and set the most stringent direct constraints on "minicharged" particles.
Lindeman, Meghan I H; Zengel, Bettina; Skowronski, John J
2017-07-01
The affect associated with negative (or unpleasant) memories typically tends to fade faster than the affect associated with positive (or pleasant) memories, a phenomenon called the fading affect bias (FAB). We conducted a study to explore the mechanisms related to the FAB. A retrospective recall procedure was used to obtain three self-report measures (memory vividness, rehearsal frequency, affective fading) for both positive events and negative events. Affect for positive events faded less than affect for negative events, and positive events were recalled more vividly than negative events. The perceived vividness of an event (memory vividness) and the extent to which an event has been rehearsed (rehearsal frequency) were explored as possible mediators of the relation between event valence and affect fading. Additional models conceived of affect fading and rehearsal frequency as contributors to a memory's vividness. Results suggested that memory vividness was a plausible mediator of the relation between an event's valence and affect fading. Rehearsal frequency was also a plausible mediator of this relation, but only via its effects on memory vividness. Additional modelling results suggested that affect fading and rehearsal frequency were both plausible mediators of the relation between an event's valence and the event's rated memory vividness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seager, S.; Bains, W.; Hu, R.
Biosignature gas detection is one of the ultimate future goals for exoplanet atmosphere studies. We have created a framework for linking biosignature gas detectability to biomass estimates, including atmospheric photochemistry and biological thermodynamics. The new framework is intended to liberate predictive atmosphere models from requiring fixed, Earth-like biosignature gas source fluxes. New biosignature gases can be considered with a check that the biomass estimate is physically plausible. We have validated the models on terrestrial production of NO, H{sub 2}S, CH{sub 4}, CH{sub 3}Cl, and DMS. We have applied the models to propose NH{sub 3} as a biosignature gas on amore » 'cold Haber World', a planet with a N{sub 2}-H{sub 2} atmosphere, and to demonstrate why gases such as CH{sub 3}Cl must have too large of a biomass to be a plausible biosignature gas on planets with Earth or early-Earth-like atmospheres orbiting a Sun-like star. To construct the biomass models, we developed a functional classification of biosignature gases, and found that gases (such as CH{sub 4}, H{sub 2}S, and N{sub 2}O) produced from life that extracts energy from chemical potential energy gradients will always have false positives because geochemistry has the same gases to work with as life does, and gases (such as DMS and CH{sub 3}Cl) produced for secondary metabolic reasons are far less likely to have false positives but because of their highly specialized origin are more likely to be produced in small quantities. The biomass model estimates are valid to one or two orders of magnitude; the goal is an independent approach to testing whether a biosignature gas is plausible rather than a precise quantification of atmospheric biosignature gases and their corresponding biomasses.« less
Vectorial Representations of Meaning for a Computational Model of Language Comprehension
ERIC Educational Resources Information Center
Wu, Stephen Tze-Inn
2010-01-01
This thesis aims to define and extend a line of computational models for text comprehension that are humanly plausible. Since natural language is human by nature, computational models of human language will always be just that--models. To the degree that they miss out on information that humans would tap into, they may be improved by considering…
Resolving Conflicts Between Syntax and Plausibility in Sentence Comprehension
Andrews, Glenda; Ogden, Jessica E.; Halford, Graeme S.
2017-01-01
Comprehension of plausible and implausible object- and subject-relative clause sentences with and without prepositional phrases was examined. Undergraduates read each sentence then evaluated a statement as consistent or inconsistent with the sentence. Higher acceptance of consistent than inconsistent statements indicated reliance on syntactic analysis. Higher acceptance of plausible than implausible statements reflected reliance on semantic plausibility. There was greater reliance on semantic plausibility and lesser reliance on syntactic analysis for more complex object-relatives and sentences with prepositional phrases than for less complex subject-relatives and sentences without prepositional phrases. Comprehension accuracy and confidence were lower when syntactic analysis and semantic plausibility yielded conflicting interpretations. The conflict effect on comprehension was significant for complex sentences but not for less complex sentences. Working memory capacity predicted resolution of the syntax-plausibility conflict in more and less complex items only when sentences and statements were presented sequentially. Fluid intelligence predicted resolution of the conflict in more and less complex items under sequential and simultaneous presentation. Domain-general processes appear to be involved in resolving syntax-plausibility conflicts in sentence comprehension. PMID:28458748
Perić, M; Jerosimić, S; Mitić, M; Milovanović, M; Ranković, R
2015-05-07
In the present study, we prove the plausibility of a simple model for the Renner-Teller effect in tetra-atomic molecules with linear equilibrium geometry by ab initio calculations of the electronic energy surfaces and non-adiabatic matrix elements for the X(2)Πu state of C2H2 (+). This phenomenon is considered as a combination of the usual Renner-Teller effect, appearing in triatomic species, and a kind of the Jahn-Teller effect, similar to the original one arising in highly symmetric molecules. Only four parameters (plus the spin-orbit constant, if the spin effects are taken into account), which can be extracted from ab initio calculations carried out at five appropriate (planar) molecular geometries, are sufficient for building up the Hamiltonian matrix whose diagonalization results in the complete low-energy (bending) vibronic spectrum. The main result of the present study is the proof that the diabatization scheme, hidden beneath the apparent simplicity of the model, can safely be carried out, at small-amplitude bending vibrations, without cumbersome computation of non-adiabatic matrix elements at large number of molecular geometries.
Modeling Europa's Ice-Ocean Interface
NASA Astrophysics Data System (ADS)
Elsenousy, A.; Vance, S.; Bills, B. G.
2014-12-01
This work focuses on modeling the ice-ocean interface on Jupiter's Moon (Europa); mainly from the standpoint of heat and salt transfer relationship with emphasis on the basal ice growth rate and its implications to Europa's tidal response. Modeling the heat and salt flux at Europa's ice/ocean interface is necessary to understand the dynamics of Europa's ocean and its interaction with the upper ice shell as well as the history of active turbulence at this area. To achieve this goal, we used McPhee et al., 2008 parameterizations on Earth's ice/ocean interface that was developed to meet Europa's ocean dynamics. We varied one parameter at a time to test its influence on both; "h" the basal ice growth rate and on "R" the double diffusion tendency strength. The double diffusion tendency "R" was calculated as the ratio between the interface heat exchange coefficient αh to the interface salt exchange coefficient αs. Our preliminary results showed a strong double diffusion tendency R ~200 at Europa's ice-ocean interface for plausible changes in the heat flux due to onset or elimination of a hydrothermal activity, suggesting supercooling and a strong tendency for forming frazil ice.
Toward understanding the mechanics of hovering in insects, hummingbirds and bats
NASA Astrophysics Data System (ADS)
Vejdani, Hamid; Boerma, David; Swartz, Sharon; Breuer, Kenneth
2016-11-01
We present results on the dynamical characteristics of two different mechanisms of hovering, corresponding to the behavior of hummingbirds and bats. Using a Lagrangian formulation, we have developed a dynamical model of a body (trunk) and two rectangular wings. The trunk has 3 degrees of freedom (x, z and pitch angle) and each wing has 3 modes of actuation: flapping, pronation/supination, and wingspan extension/flexion (only present for bats). Wings can be effectively massless (hummingbird and insect wings) or relatively massive (important in the case of bats). The aerodynamic drag and lift forces are calculated using a quasi-steady blade-element model. The regions of state space in which hovering is possible are computed by over an exhaustive range of parameters. The effect of wing mass is to shrink the phase space available for viable hovering and, in general, to require higher wingbeat frequency. Moreover, by exploring hovering energy requirements, we find that the pronation angle of the wings also plays a critical role. For bats, who have relatively heavy wings, we show wing extension and flexion is critical in order to maintain a plausible hovering posture with reasonable power requirements. Comparisons with biological data show good agreement with our model predictions.
On a Possible Unified Scaling Law for Volcanic Eruption Durations
Cannavò, Flavio; Nunnari, Giuseppe
2016-01-01
Volcanoes constitute dissipative systems with many degrees of freedom. Their eruptions are the result of complex processes that involve interacting chemical-physical systems. At present, due to the complexity of involved phenomena and to the lack of precise measurements, both analytical and numerical models are unable to simultaneously include the main processes involved in eruptions thus making forecasts of volcanic dynamics rather unreliable. On the other hand, accurate forecasts of some eruption parameters, such as the duration, could be a key factor in natural hazard estimation and mitigation. Analyzing a large database with most of all the known volcanic eruptions, we have determined that the duration of eruptions seems to be described by a universal distribution which characterizes eruption duration dynamics. In particular, this paper presents a plausible global power-law distribution of durations of volcanic eruptions that holds worldwide for different volcanic environments. We also introduce a new, simple and realistic pipe model that can follow the same found empirical distribution. Since the proposed model belongs to the family of the self-organized systems it may support the hypothesis that simple mechanisms can lead naturally to the emergent complexity in volcanic behaviour. PMID:26926425
On a Possible Unified Scaling Law for Volcanic Eruption Durations.
Cannavò, Flavio; Nunnari, Giuseppe
2016-03-01
Volcanoes constitute dissipative systems with many degrees of freedom. Their eruptions are the result of complex processes that involve interacting chemical-physical systems. At present, due to the complexity of involved phenomena and to the lack of precise measurements, both analytical and numerical models are unable to simultaneously include the main processes involved in eruptions thus making forecasts of volcanic dynamics rather unreliable. On the other hand, accurate forecasts of some eruption parameters, such as the duration, could be a key factor in natural hazard estimation and mitigation. Analyzing a large database with most of all the known volcanic eruptions, we have determined that the duration of eruptions seems to be described by a universal distribution which characterizes eruption duration dynamics. In particular, this paper presents a plausible global power-law distribution of durations of volcanic eruptions that holds worldwide for different volcanic environments. We also introduce a new, simple and realistic pipe model that can follow the same found empirical distribution. Since the proposed model belongs to the family of the self-organized systems it may support the hypothesis that simple mechanisms can lead naturally to the emergent complexity in volcanic behaviour.
Causal mediation analysis with a latent mediator.
Albert, Jeffrey M; Geng, Cuiyu; Nelson, Suchitra
2016-05-01
Health researchers are often interested in assessing the direct effect of a treatment or exposure on an outcome variable, as well as its indirect (or mediation) effect through an intermediate variable (or mediator). For an outcome following a nonlinear model, the mediation formula may be used to estimate causally interpretable mediation effects. This method, like others, assumes that the mediator is observed. However, as is common in structural equations modeling, we may wish to consider a latent (unobserved) mediator. We follow a potential outcomes framework and assume a generalized structural equations model (GSEM). We provide maximum-likelihood estimation of GSEM parameters using an approximate Monte Carlo EM algorithm, coupled with a mediation formula approach to estimate natural direct and indirect effects. The method relies on an untestable sequential ignorability assumption; we assess robustness to this assumption by adapting a recently proposed method for sensitivity analysis. Simulation studies show good properties of the proposed estimators in plausible scenarios. Our method is applied to a study of the effect of mother education on occurrence of adolescent dental caries, in which we examine possible mediation through latent oral health behavior. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Saa, Pedro; Nielsen, Lars K.
2015-01-01
Kinetic models provide the means to understand and predict the dynamic behaviour of enzymes upon different perturbations. Despite their obvious advantages, classical parameterizations require large amounts of data to fit their parameters. Particularly, enzymes displaying complex reaction and regulatory (allosteric) mechanisms require a great number of parameters and are therefore often represented by approximate formulae, thereby facilitating the fitting but ignoring many real kinetic behaviours. Here, we show that full exploration of the plausible kinetic space for any enzyme can be achieved using sampling strategies provided a thermodynamically feasible parameterization is used. To this end, we developed a General Reaction Assembly and Sampling Platform (GRASP) capable of consistently parameterizing and sampling accurate kinetic models using minimal reference data. The former integrates the generalized MWC model and the elementary reaction formalism. By formulating the appropriate thermodynamic constraints, our framework enables parameterization of any oligomeric enzyme kinetics without sacrificing complexity or using simplifying assumptions. This thermodynamically safe parameterization relies on the definition of a reference state upon which feasible parameter sets can be efficiently sampled. Uniform sampling of the kinetics space enabled dissecting enzyme catalysis and revealing the impact of thermodynamics on reaction kinetics. Our analysis distinguished three reaction elasticity regions for common biochemical reactions: a steep linear region (0> ΔGr >-2 kJ/mol), a transition region (-2> ΔGr >-20 kJ/mol) and a constant elasticity region (ΔGr <-20 kJ/mol). We also applied this framework to model more complex kinetic behaviours such as the monomeric cooperativity of the mammalian glucokinase and the ultrasensitive response of the phosphoenolpyruvate carboxylase of Escherichia coli. In both cases, our approach described appropriately not only the kinetic behaviour of these enzymes, but it also provided insights about the particular features underpinning the observed kinetics. Overall, this framework will enable systematic parameterization and sampling of enzymatic reactions. PMID:25874556
Counterfactual Plausibility and Comparative Similarity.
Stanley, Matthew L; Stewart, Gregory W; Brigard, Felipe De
2017-05-01
Counterfactual thinking involves imagining hypothetical alternatives to reality. Philosopher David Lewis (1973, 1979) argued that people estimate the subjective plausibility that a counterfactual event might have occurred by comparing an imagined possible world in which the counterfactual statement is true against the current, actual world in which the counterfactual statement is false. Accordingly, counterfactuals considered to be true in possible worlds comparatively more similar to ours are judged as more plausible than counterfactuals deemed true in possible worlds comparatively less similar. Although Lewis did not originally develop his notion of comparative similarity to be investigated as a psychological construct, this study builds upon his idea to empirically investigate comparative similarity as a possible psychological strategy for evaluating the perceived plausibility of counterfactual events. More specifically, we evaluate judgments of comparative similarity between episodic memories and episodic counterfactual events as a factor influencing people's judgments of plausibility in counterfactual simulations, and we also compare it against other factors thought to influence judgments of counterfactual plausibility, such as ease of simulation and prior simulation. Our results suggest that the greater the perceived similarity between the original memory and the episodic counterfactual event, the greater the perceived plausibility that the counterfactual event might have occurred. While similarity between actual and counterfactual events, ease of imagining, and prior simulation of the counterfactual event were all significantly related to counterfactual plausibility, comparative similarity best captured the variance in ratings of counterfactual plausibility. Implications for existing theories on the determinants of counterfactual plausibility are discussed. Copyright © 2016 Cognitive Science Society, Inc.
A Synchronization Account of False Recognition
ERIC Educational Resources Information Center
Johns, Brendan T.; Jones, Michael N.; Mewhort, Douglas J. K.
2012-01-01
We describe a computational model to explain a variety of results in both standard and false recognition. A key attribute of the model is that it uses plausible semantic representations for words, built through exposure to a linguistic corpus. A study list is encoded in the model as a gist trace, similar to the proposal of fuzzy trace theory…
NASA Astrophysics Data System (ADS)
Keane, J. T.; Johnson, B. C.; Matsuyama, I.; Siegler, M. A.
2018-04-01
New geophysical data and numerical models reveal that basin-scale impacts routinely caused the Moon to tumble (non principal axis rotation) early in its history — plausibly driving magnetic fields, erasing primordial volatiles, and more.
Levy, David T; Borland, Ron; Villanti, Andrea C; Niaura, Raymond; Yuan, Zhe; Zhang, Yian; Meza, Rafael; Holford, Theodore R; Fong, Geoffrey T; Cummings, K Michael; Abrams, David B
2017-02-01
The public health impact of vaporized nicotine products (VNPs) such as e-cigarettes is unknown at this time. VNP uptake may encourage or deflect progression to cigarette smoking in those who would not have otherwise smoked, thereby undermining or accelerating reductions in smoking prevalence seen in recent years. The public health impact of VNP use are modeled in terms of how it alters smoking patterns among those who would have otherwise smoked cigarettes and among those who would not have otherwise smoked cigarettes in the absence of VNPs. The model incorporates transitions from trial to established VNP use, transitions to exclusive VNP and dual use, and the effects of cessation at later ages. Public health impact on deaths and life years lost is estimated for a recent birth cohort incorporating evidence-informed parameter estimates. Based on current use patterns and conservative assumptions, we project a reduction of 21% in smoking-attributable deaths and of 20% in life years lost as a result of VNP use by the 1997 US birth cohort compared to a scenario without VNPs. In sensitivity analysis, health gains from VNP use are especially sensitive to VNP risks and VNP use rates among those likely to smoke cigarettes. Under most plausible scenarios, VNP use generally has a positive public health impact. However, very high VNP use rates could result in net harms. More accurate projections of VNP impacts will require better longitudinal measures of transitions into and out of VNP, cigarette and dual use. Previous models of VNP use do not incorporate whether youth and young adults initiating VNP would have been likely to have been a smoker in the absence of VNPs. This study provides a decision-theoretic model of VNP use in a young cohort that incorporates tendencies toward smoking and shows that, under most plausible scenarios, VNP use yields public health gains. The model makes explicit the type of surveillance information needed to better estimate the effect of new products and thereby inform public policy. © The Author 2016. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
A plausible and consistent model is developed to obtain a quantitative description of the gradual disappearance of hexavalent chromium (Cr(VI)) from groundwater in a small-scale field tracer test and in batch kinetic experiments using aquifer sediments under similar chemical cond...
MODELING WILDLIFE RESPONSE TO LANDSCAPE CHANGE IN OREGON'S WILLAMETTE RIVER BASIN
The PATCH simulation model was used to predict the response of 17 wildlife species to
three plausible scenarios of habitat change in Oregon's Willamette River Basin. This 30
thousand square-kilometer basin comprises about 12% of the state of Oregon, encompasses extensive f...
A novel approach for connecting temporal-ontologies with blood flow simulations.
Weichert, F; Mertens, C; Walczak, L; Kern-Isberner, G; Wagner, M
2013-06-01
In this paper an approach for developing a temporal domain ontology for biomedical simulations is introduced. The ideas are presented in the context of simulations of blood flow in aneurysms using the Lattice Boltzmann Method. The advantages in using ontologies are manyfold: On the one hand, ontologies having been proven to be able to provide medical special knowledge e.g., key parameters for simulations. On the other hand, based on a set of rules and the usage of a reasoner, a system for checking the plausibility as well as tracking the outcome of medical simulations can be constructed. Likewise, results of simulations including data derived from them can be stored and communicated in a way that can be understood by computers. Later on, this set of results can be analyzed. At the same time, the ontologies provide a way to exchange knowledge between researchers. Lastly, this approach can be seen as a black-box abstraction of the internals of the simulation for the biomedical researcher as well. This approach is able to provide the complete parameter sets for simulations, part of the corresponding results and part of their analysis as well as e.g., geometry and boundary conditions. These inputs can be transferred to different simulation methods for comparison. Variations on the provided parameters can be automatically used to drive these simulations. Using a rule base, unphysical inputs or outputs of the simulation can be detected and communicated to the physician in a suitable and familiar way. An example for an instantiation of the blood flow simulation ontology and exemplary rules for plausibility checking are given. Copyright © 2013 Elsevier Inc. All rights reserved.
Hunting a wandering supermassive black hole in the M31 halo hermitage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miki, Yohei; Mori, Masao; Kawaguchi, Toshihiro
2014-03-10
In the hierarchical structure formation scenario, galaxies enlarge through multiple merging events with less massive galaxies. In addition, the Magorrian relation indicates that almost all galaxies are occupied by a central supermassive black hole (SMBH) of mass 10{sup –3} times the mass of its spheroidal component. Consequently, SMBHs are expected to wander in the halos of their host galaxies following a galaxy collision, although evidence of this activity is currently lacking. We investigate a current plausible location of an SMBH wandering in the halo of the Andromeda galaxy (M31). According to theoretical studies of N-body simulations, some of the manymore » substructures in the M31 halo are remnants of a minor merger occurring about 1 Gyr ago. First, to evaluate the possible parameter space of the infalling orbit of the progenitor, we perform numerous parameter studies using a graphics processing unit cluster. To reduce uncertainties in the predicted position of the expected SMBH, we then calculate the time evolution of the SMBH in the progenitor dwarf galaxy from N-body simulations using the plausible parameter sets. Our results show that the SMBH lies within the halo (∼20-50 kpc from the M31 center), closer to the Milky Way than the M31 disk. Furthermore, the predicted current positions of the SMBH were restricted to an observational field of 0.°6 × 0.°7 in the northeast region of the M31 halo. We also discuss the origin of the infalling orbit of the satellite galaxy and its relationships with the recently discovered vast thin disk plane of satellite galaxies around M31.« less
Newberry Volcano EGS Demonstration Stimulation Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trenton T. Cladouhos, Matthew Clyne, Maisie Nichols,; Susan Petty, William L. Osborn, Laura Nofziger
2011-10-23
As a part of Phase I of the Newberry Volcano EGS Demonstration project, several data sets were collected to characterize the rock volume around the well. Fracture, fault, stress, and seismicity data has been collected by borehole televiewer, LiDAR elevation maps, and microseismic monitoring. Well logs and cuttings from the target well (NWG 55-29) and core from a nearby core hole (USGS N-2) have been analyzed to develop geothermal, geochemical, mineralogical and strength models of the rock matrix, altered zones, and fracture fillings (see Osborn et al., this volume). These characterization data sets provide inputs to models used to planmore » and predict EGS reservoir creation and productivity. One model used is AltaStim, a stochastic fracture and flow software model developed by AltaRock. The software's purpose is to model and visualize EGS stimulation scenarios and provide guidance for final planning. The process of creating an AltaStim model requires synthesis of geologic observations at the well, the modeled stress conditions, and the stimulation plan. Any geomechanical model of an EGS stimulation will require many assumptions and unknowns; thus, the model developed here should not be considered a definitive prediction, but a plausible outcome given reasonable assumptions. AltaStim is a tool for understanding the effect of known constraints, assumptions, and conceptual models on plausible outcomes.« less
Tsai, V.C.
2011-01-01
It is known that GPS time series contain a seasonal variation that is not due to tectonic motions, and it has recently been shown that crustal seismic velocities may also vary seasonally. In order to explain these changes, a number of hypotheses have been given, among which thermoelastic and hydrology-induced stresses and strains are leading candidates. Unfortunately, though, since a general framework does not exist for understanding such seasonal variations, it is currently not possible to quickly evaluate the plausibility of these hypotheses. To fill this gap in the literature, I generalize a two-dimensional thermoelastic strain model to provide an analytic solution for the displacements and wave speed changes due to either thermoelastic stresses or hydrologic loading, which consists of poroelastic stresses and purely elastic stresses. The thermoelastic model assumes a periodic surface temperature, and the hydrologic models similarly assume a periodic near-surface water load. Since all three models are two-dimensional and periodic, they are expected to only approximate any realistic scenario; but the models nonetheless provide a quantitative framework for estimating the effects of thermoelastic and hydrologic variations. Quantitative comparison between the models and observations is further complicated by the large uncertainty in some of the relevant parameters. Despite this uncertainty, though, I find that maximum realistic thermoelastic effects are unlikely to explain a large fraction of the observed annual variation in a typical GPS displacement time series or of the observed annual variations in seismic wave speeds in southern California. Hydrologic loading, on the other hand, may be able to explain a larger fraction of both the annual variations in displacements and seismic wave speeds. Neither model is likely to explain all of the seismic wave speed variations inferred from observations. However, more definitive conclusions cannot be made until the model parameters are better constrained. Copyright ?? 2011 by the American Geophysical Union.
COLLABORATION ON NHEERL EPIDEMIOLOGY STUDIES
This task will continue ORD's efforts to develop a biologically plausible, quantitative health risk model for particulate matter (PM) based on epidemiological, toxicological, and mechanistic studies using matched exposure assessments. The NERL, in collaboration with the NHEERL, ...
Bays, Rebecca B; Zabrucky, Karen M; Gagne, Phill
2012-01-01
In the current study we examined whether prevalence information and imagery encoding influence participants' general plausibility, personal plausibility, belief, and memory ratings for suggested childhood events. Results showed decreases in general and personal plausibility ratings for low prevalence events when encoding instructions were not elaborate; however, instructions to repeatedly imagine suggested events elicited personal plausibility increases for low-prevalence events, evidence that elaborate imagery negated the effect of our prevalence manipulation. We found no evidence of imagination inflation or false memory construction. We discuss critical differences in researchers' manipulations of plausibility and imagery that may influence results of false memory studies in the literature. In future research investigators should focus on the specific nature of encoding instructions when examining the development of false memories.
Modular rate laws for enzymatic reactions: thermodynamics, elasticities and implementation.
Liebermeister, Wolfram; Uhlendorf, Jannis; Klipp, Edda
2010-06-15
Standard rate laws are a key requisite for systematically turning metabolic networks into kinetic models. They should provide simple, general and biochemically plausible formulae for reaction velocities and reaction elasticities. At the same time, they need to respect thermodynamic relations between the kinetic constants and the metabolic fluxes and concentrations. We present a family of reversible rate laws for reactions with arbitrary stoichiometries and various types of regulation, including mass-action, Michaelis-Menten and uni-uni reversible Hill kinetics as special cases. With a thermodynamically safe parameterization of these rate laws, parameter sets obtained by model fitting, sampling or optimization are guaranteed to lead to consistent chemical equilibrium states. A reformulation using saturation values yields simple formulae for rates and elasticities, which can be easily adjusted to the given stationary flux distributions. Furthermore, this formulation highlights the role of chemical potential differences as thermodynamic driving forces. We compare the modular rate laws to the thermodynamic-kinetic modelling formalism and discuss a simplified rate law in which the reaction rate directly depends on the reaction affinity. For automatic handling of modular rate laws, we propose a standard syntax and semantic annotations for the Systems Biology Markup Language. An online tool for inserting the rate laws into SBML models is freely available at www.semanticsbml.org. Supplementary data are available at Bioinformatics online.
Inference on the Strength of Balancing Selection for Epistatically Interacting Loci
Buzbas, Erkan Ozge; Joyce, Paul; Rosenberg, Noah A.
2011-01-01
Existing inference methods for estimating the strength of balancing selection in multi-locus genotypes rely on the assumption that there are no epistatic interactions between loci. Complex systems in which balancing selection is prevalent, such as sets of human immune system genes, are known to contain components that interact epistatically. Therefore, current methods may not produce reliable inference on the strength of selection at these loci. In this paper, we address this problem by presenting statistical methods that can account for epistatic interactions in making inference about balancing selection. A theoretical result due to Fearnhead (2006) is used to build a multi-locus Wright-Fisher model of balancing selection, allowing for epistatic interactions among loci. Antagonistic and synergistic types of interactions are examined. The joint posterior distribution of the selection and mutation parameters is sampled by Markov chain Monte Carlo methods, and the plausibility of models is assessed via Bayes factors. As a component of the inference process, an algorithm to generate multi-locus allele frequencies under balancing selection models with epistasis is also presented. Recent evidence on interactions among a set of human immune system genes is introduced as a motivating biological system for the epistatic model, and data on these genes are used to demonstrate the methods. PMID:21277883
A first-principles model for estimating the prevalence of annoyance with aircraft noise exposure.
Fidell, Sanford; Mestre, Vincent; Schomer, Paul; Berry, Bernard; Gjestland, Truls; Vallet, Michel; Reid, Timothy
2011-08-01
Numerous relationships between noise exposure and transportation noise-induced annoyance have been inferred by curve-fitting methods. The present paper develops a different approach. It derives a systematic relationship by applying an a priori, first-principles model to the findings of forty three studies of the annoyance of aviation noise. The rate of change of annoyance with day-night average sound level (DNL) due to aircraft noise exposure was found to closely resemble the rate of change of loudness with sound level. The agreement of model predictions with the findings of recent curve-fitting exercises (cf. Miedma and Vos, 1998) is noteworthy, considering that other analyses have relied on different analytic methods and disparate data sets. Even though annoyance prevalence rates within individual communities consistently grow in proportion to duration-adjusted loudness, variability in annoyance prevalence rates across communities remains great. The present analyses demonstrate that 1) community-specific differences in annoyance prevalence rates can be plausibly attributed to the joint effect of acoustic and non-DNL related factors and (2) a simple model can account for the aggregate influences of non-DNL related factors on annoyance prevalence rates in different communities in terms of a single parameter expressed in DNL units-a "community tolerance level."
Fournié, G; Guitian, F J; Mangtani, P; Ghani, A C
2011-08-07
Live bird markets (LBMs) act as a network 'hub' and potential reservoir of infection for domestic poultry. They may therefore be responsible for sustaining H5N1 highly pathogenic avian influenza (HPAI) virus circulation within the poultry sector, and thus a suitable target for implementing control strategies. We developed a stochastic transmission model to understand how market functioning impacts on the transmission dynamics. We then investigated the potential for rest days-periods during which markets are emptied and disinfected-to modulate the dynamics of H5N1 HPAI within the poultry sector using a stochastic meta-population model. Our results suggest that under plausible parameter scenarios, HPAI H5N1 could be sustained silently within LBMs with the time spent by poultry in markets and the frequency of introduction of new susceptible birds' dominant factors determining sustained silent spread. Compared with interventions applied in farms (i.e. stamping out, vaccination), our model shows that frequent rest days are an effective means to reduce HPAI transmission. Furthermore, our model predicts that full market closure would be only slightly more effective than rest days to reduce transmission. Strategies applied within markets could thus help to control transmission of the disease.
Fournié, G.; Guitian, F. J.; Mangtani, P.; Ghani, A. C.
2011-01-01
Live bird markets (LBMs) act as a network ‘hub’ and potential reservoir of infection for domestic poultry. They may therefore be responsible for sustaining H5N1 highly pathogenic avian influenza (HPAI) virus circulation within the poultry sector, and thus a suitable target for implementing control strategies. We developed a stochastic transmission model to understand how market functioning impacts on the transmission dynamics. We then investigated the potential for rest days—periods during which markets are emptied and disinfected—to modulate the dynamics of H5N1 HPAI within the poultry sector using a stochastic meta-population model. Our results suggest that under plausible parameter scenarios, HPAI H5N1 could be sustained silently within LBMs with the time spent by poultry in markets and the frequency of introduction of new susceptible birds' dominant factors determining sustained silent spread. Compared with interventions applied in farms (i.e. stamping out, vaccination), our model shows that frequent rest days are an effective means to reduce HPAI transmission. Furthermore, our model predicts that full market closure would be only slightly more effective than rest days to reduce transmission. Strategies applied within markets could thus help to control transmission of the disease. PMID:21131332
Support for viral persistence in bats from age-specific serology and models of maternal immunity.
Peel, Alison J; Baker, Kate S; Hayman, David T S; Broder, Christopher C; Cunningham, Andrew A; Fooks, Anthony R; Garnier, Romain; Wood, James L N; Restif, Olivier
2018-03-01
Spatiotemporally-localised prediction of virus emergence from wildlife requires focused studies on the ecology and immunology of reservoir hosts in their native habitat. Reliable predictions from mathematical models remain difficult in most systems due to a dearth of appropriate empirical data. Our goal was to study the circulation and immune dynamics of zoonotic viruses in bat populations and investigate the effects of maternally-derived and acquired immunity on viral persistence. Using rare age-specific serological data from wild-caught Eidolon helvum fruit bats as a case study, we estimated viral transmission parameters for a stochastic infection model. We estimated mean durations of around 6 months for maternally-derived immunity to Lagos bat virus and African henipavirus, whereas acquired immunity was long-lasting (Lagos bat virus: mean 12 years, henipavirus: mean 4 years). In the presence of a seasonal birth pulse, the effect of maternally-derived immunity on virus persistence within modelled bat populations was highly dependent on transmission characteristics. To explain previous reports of viral persistence within small natural and captive E. helvum populations, we hypothesise that some bats must experience prolonged infectious periods or within-host latency. By further elucidating plausible mechanisms of virus persistence in bat populations, we contribute to guidance of future field studies.
Modeling Avoidance in Mood and Anxiety Disorders Using Reinforcement Learning.
Mkrtchian, Anahit; Aylward, Jessica; Dayan, Peter; Roiser, Jonathan P; Robinson, Oliver J
2017-10-01
Serious and debilitating symptoms of anxiety are the most common mental health problem worldwide, accounting for around 5% of all adult years lived with disability in the developed world. Avoidance behavior-avoiding social situations for fear of embarrassment, for instance-is a core feature of such anxiety. However, as for many other psychiatric symptoms the biological mechanisms underlying avoidance remain unclear. Reinforcement learning models provide formal and testable characterizations of the mechanisms of decision making; here, we examine avoidance in these terms. A total of 101 healthy participants and individuals with mood and anxiety disorders completed an approach-avoidance go/no-go task under stress induced by threat of unpredictable shock. We show an increased reliance in the mood and anxiety group on a parameter of our reinforcement learning model that characterizes a prepotent (pavlovian) bias to withhold responding in the face of negative outcomes. This was particularly the case when the mood and anxiety group was under stress. This formal description of avoidance within the reinforcement learning framework provides a new means of linking clinical symptoms with biophysically plausible models of neural circuitry and, as such, takes us closer to a mechanistic understanding of mood and anxiety disorders. Copyright © 2017 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
A Practical Probabilistic Graphical Modeling Tool for Weighing ...
Past weight-of-evidence frameworks for adverse ecological effects have provided soft-scoring procedures for judgments based on the quality and measured attributes of evidence. Here, we provide a flexible probabilistic structure for weighing and integrating lines of evidence for ecological risk determinations. Probabilistic approaches can provide both a quantitative weighing of lines of evidence and methods for evaluating risk and uncertainty. The current modeling structure wasdeveloped for propagating uncertainties in measured endpoints and their influence on the plausibility of adverse effects. To illustrate the approach, we apply the model framework to the sediment quality triad using example lines of evidence for sediment chemistry measurements, bioassay results, and in situ infauna diversity of benthic communities using a simplified hypothetical case study. We then combine the three lines evidence and evaluate sensitivity to the input parameters, and show how uncertainties are propagated and how additional information can be incorporated to rapidly update the probability of impacts. The developed network model can be expanded to accommodate additional lines of evidence, variables and states of importance, and different types of uncertainties in the lines of evidence including spatial and temporal as well as measurement errors. We provide a flexible Bayesian network structure for weighing and integrating lines of evidence for ecological risk determinations
The Prospects of Whole Brain Emulation within the next Half- Century
NASA Astrophysics Data System (ADS)
Eth, Daniel; Foust, Juan-Carlos; Whale, Brandon
2013-12-01
Whole Brain Emulation (WBE), the theoretical technology of modeling a human brain in its entirety on a computer-thoughts, feelings, memories, and skills intact-is a staple of science fiction. Recently, proponents of WBE have suggested that it will be realized in the next few decades. In this paper, we investigate the plausibility of WBE being developed in the next 50 years (by 2063). We identify four essential requisite technologies: scanning the brain, translating the scan into a model, running the model on a computer, and simulating an environment and body. Additionally, we consider the cultural and social effects of WBE. We find the two most uncertain factors for WBE's future to be the development of advanced miniscule probes that can amass neural data in vivo and the degree to which the culture surrounding WBE becomes cooperative or competitive. We identify four plausible scenarios from these uncertainties and suggest the most likely scenario to be one in which WBE is realized, and the technology is used for moderately cooperative ends
Patel, Mainak; Rangan, Aaditya
2017-08-07
Infant rats randomly cycle between the sleeping and waking states, which are tightly correlated with the activity of mutually inhibitory brainstem sleep and wake populations. Bouts of sleep and wakefulness are random; from P2-P10, sleep and wake bout lengths are exponentially distributed with increasing means, while during P10-P21, the sleep bout distribution remains exponential while the distribution of wake bouts gradually transforms to power law. The locus coeruleus (LC), via an undeciphered interaction with sleep and wake populations, has been shown experimentally to be responsible for the exponential to power law transition. Concurrently during P10-P21, the LC undergoes striking physiological changes - the LC exhibits strong global 0.3 Hz oscillations up to P10, but the oscillation frequency gradually rises and synchrony diminishes from P10-P21, with oscillations and synchrony vanishing at P21 and beyond. In this work, we construct a biologically plausible Wilson Cowan-style model consisting of the LC along with sleep and wake populations. We show that external noise and strong reciprocal inhibition can lead to switching between sleep and wake populations and exponentially distributed sleep and wake bout durations as during P2-P10, with the parameters of inhibition between the sleep and wake populations controlling mean bout lengths. Furthermore, we show that the changing physiology of the LC from P10-P21, coupled with reciprocal excitation between the LC and wake population, can explain the shift from exponential to power law of the wake bout distribution. To our knowledge, this is the first study that proposes a plausible biological mechanism, which incorporates the known changing physiology of the LC, for tying the developing sleep-wake circuit and its interaction with the LC to the transformation of sleep and wake bout dynamics from P2-P21. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Verdel, Nina; Marin, Ana; Vidovič, Luka; Milanič, Matija; Majaron, Boris
2017-02-01
We have combined two optical techniques to enable simultaneous assessment of structure and composition of human skin in vivo: Pulsed photothermal radiometry (PPTR), which involves measurements of transient dynamics in midinfrared emission from sample surface after exposure to a light pulse, and diffuse reflectance spectroscopy (DRS) in visible part of the spectrum. Namely, while PPTR is highly sensitive to depth distribution of selected absorbers, DRS provides spectral information and thus enables differentiation between various chromophores. The accuracy and robustness of the inverse analysis is thus considerably improved compared to use of either technique on its own. Our analysis approach is simultaneous multi-dimensional fitting of the measured PPTR signals and DRS with predictions from a numerical model of light-tissue interaction (a.k.a. inverse Monte Carlo). By using a three-layer skin model (epidermis, dermis, and subcutis), we obtain a good match between the experimental and modeling data. However, dividing the dermis into two separate layers (i.e., papillary and reticular dermis) helps to bring all assessed parameter values within anatomically and physiologically plausible intervals. Both the quality of the fit and the assessed parameter values depend somewhat on the assumed scattering properties for skin, which vary in literature and likely depend on subject's age and gender, anatomical site, etc. In our preliminary experience, simultaneous fitting of the scattering properties is possible and leads to considerable improvement of the fit. The described approach may thus have a potential for simultaneous determination of absorption and scattering properties of human skin in vivo.
Kim, Chang-Sei; Ansermino, J. Mark; Hahn, Jin-Oh
2016-01-01
The goal of this study is to derive a minimally complex but credible model of respiratory CO2 gas exchange that may be used in systematic design and pilot testing of closed-loop end-tidal CO2 controllers in mechanical ventilation. We first derived a candidate model that captures the essential mechanisms involved in the respiratory CO2 gas exchange process. Then, we simplified the candidate model to derive two lower-order candidate models. We compared these candidate models for predictive capability and reliability using experimental data collected from 25 pediatric subjects undergoing dynamically varying mechanical ventilation during surgical procedures. A two-compartment model equipped with transport delay to account for CO2 delivery between the lungs and the tissues showed modest but statistically significant improvement in predictive capability over the same model without transport delay. Aggregating the lungs and the tissues into a single compartment further degraded the predictive fidelity of the model. In addition, the model equipped with transport delay demonstrated superior reliability to the one without transport delay. Further, the respiratory parameters derived from the model equipped with transport delay, but not the one without transport delay, were physiologically plausible. The results suggest that gas transport between the lungs and the tissues must be taken into account to accurately reproduce the respiratory CO2 gas exchange process under conditions of wide-ranging and dynamically varying mechanical ventilation conditions. PMID:26870728
ERIC Educational Resources Information Center
Conley, Sharon; You, Sukkyung
2014-01-01
A previous study examined role stress in relation to work outcomes; in this study, we added job structuring antecedents to a model of role stress and examined the moderating effects of locus of control. Structural equation modeling was used to assess the plausibility of our conceptual model, which specified hypothesized linkages among teachers'…
ERIC Educational Resources Information Center
Dombrowski, Stefan C.; Golay, Philippe; McGill, Ryan J.; Canivez, Gary L.
2018-01-01
Bayesian structural equation modeling (BSEM) was used to investigate the latent structure of the Differential Ability Scales-Second Edition core battery using the standardization sample normative data for ages 7-17. Results revealed plausibility of a three-factor model, consistent with publisher theory, expressed as either a higher-order (HO) or a…
Distributed snow modeling suitable for use with operational data for the American River watershed.
NASA Astrophysics Data System (ADS)
Shamir, E.; Georgakakos, K. P.
2004-12-01
The mountainous terrain of the American River watershed (~4300 km2) at the Western slope of the Northern Sierra Nevada is subject to significant variability in the atmospheric forcing that controls the snow accumulation and ablations processes (i.e., precipitation, surface temperature, and radiation). For a hydrologic model that attempts to predict both short- and long-term streamflow discharges, a plausible description of the seasonal and intermittent winter snow pack accumulation and ablation is crucial. At present the NWS-CNRFC operational snow model is implemented in a semi distributed manner (modeling unit of about 100-1000 km2) and therefore lump distinct spatial variability of snow processes. In this study we attempt to account for the precipitation, temperature, and radiation spatial variability by constructing a distributed snow accumulation and melting model suitable for use with commonly available sparse data. An adaptation of the NWS-Snow17 energy and mass balance that is used operationally at the NWS River Forecast Centers is implemented at 1 km2 grid cells with distributed input and model parameters. The input to the model (i.e., precipitation and surface temperature) is interpolated from observed point data. The surface temperature was interpolated over the basin based on adiabatic lapse rates using topographic information whereas the precipitation was interpolated based on maps of climatic mean annual rainfall distribution acquired from PRISM. The model parameters that control the melting rate due to radiation were interpolated based on aspect. The study was conducted for the entire American basin for the snow seasons of 1999-2000. Validation of the Snow Water Equivalent (SWE) prediction is done by comparing to observation from 12 snow Sensors. The Snow Cover Area (SCA) prediction was evaluated by comparing to remotely sensed 500m daily snow cover derived from MODIS. The results that the distribution of snow over the area is well captured and the quantity compared to the snow gauges are well estimated in the high elevation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mirocha, Jordan; Burns, Jack O.; Harker, Geraint J. A., E-mail: mirocha@astro.ucla.edu
2015-11-01
Following our previous work, which related generic features in the sky-averaged (global) 21-cm signal to properties of the intergalactic medium, we now investigate the prospects for constraining a simple galaxy formation model with current and near-future experiments. Markov-Chain Monte Carlo fits to our synthetic data set, which includes a realistic galactic foreground, a plausible model for the signal, and noise consistent with 100 hr of integration by an ideal instrument, suggest that a simple four-parameter model that links the production rate of Lyα, Lyman-continuum, and X-ray photons to the growth rate of dark matter halos can be well-constrained (to ∼0.1more » dex in each dimension) so long as all three spectral features expected to occur between 40 ≲ ν/MHz ≲ 120 are detected. Several important conclusions follow naturally from this basic numerical result, namely that measurements of the global 21-cm signal can in principle (i) identify the characteristic halo mass threshold for star formation at all redshifts z ≳ 15, (ii) extend z ≲ 4 upper limits on the normalization of the X-ray luminosity star formation rate (L{sub X}–SFR) relation out to z ∼ 20, and (iii) provide joint constraints on stellar spectra and the escape fraction of ionizing radiation at z ∼ 12. Though our approach is general, the importance of a broadband measurement renders our findings most relevant to the proposed Dark Ages Radio Explorer, which will have a clean view of the global 21-cm signal from ∼40 to 120 MHz from its vantage point above the radio-quiet, ionosphere-free lunar far-side.« less
Evaluating the Sensitivity of Glacial Isostatic Adjustment to a Hydrous Melt at 410 km Depth
NASA Astrophysics Data System (ADS)
Hill, A. M.; Milne, G. A.; Ranalli, G.
2017-12-01
We present a sensitivity analysis aimed at testing whether observables related to GIA can support or refute the existence of a low viscosity partial melt layer located above the mantle transition zone, as required by the so-called "Transition Zone Water Filter" model (Bercovici and Karato 2003). In total, 400 model runs were performed sampling a range of melt layer thicknesses (1, 10 & 20 km) and viscosities (1015 - 1019 Pas) as well as plausible viscosity values in the upper and lower mantle. Comparing model output of postglacial decay times and j2, 18 of the considered viscosity models were found to be compatible with all of the observational constraints. Amongst these, only three `background' upper and lower mantle viscosities are permitted regardless of the properties of the melt layer: an upper mantle value of 3×1020 Pas and lower mantle values of 1022, 3×1022 and 5×1022 Pas. Concerning the properties of the melt layer itself, a thin (1 km) layer may have any of the investigated viscosities (1015 to 1019 Pas). For thicker melt layers, the viscosity must be ≥1018 Pas (20 km) or ≥1017 Pas (10 km). Our results indicate clear parameter trade-offs between the properties of the melt layer and the background viscosity structure. Given that the observations permit several values of lower mantle viscosity, we conclude that tightening constraints on this parameter would be valuable for future investigation of the type presented here. Furthermore, while decay times from both locations considered in this investigation (Ångerman River, Sweden; Richmond Gulf, Canada) offer meaningful constraints on viscosity structure, the value for Richmond Gulf is significantly more uncertain and so increasing its precision would likely result in improved viscosity constraints.
Prediction of the area affected by earthquake-induced landsliding based on seismological parameters
NASA Astrophysics Data System (ADS)
Marc, Odin; Meunier, Patrick; Hovius, Niels
2017-07-01
We present an analytical, seismologically consistent expression for the surface area of the region within which most landslides triggered by an earthquake are located (landslide distribution area). This expression is based on scaling laws relating seismic moment, source depth, and focal mechanism with ground shaking and fault rupture length and assumes a globally constant threshold of acceleration for onset of systematic mass wasting. The seismological assumptions are identical to those recently used to propose a seismologically consistent expression for the total volume and area of landslides triggered by an earthquake. To test the accuracy of the model we gathered geophysical information and estimates of the landslide distribution area for 83 earthquakes. To reduce uncertainties and inconsistencies in the estimation of the landslide distribution area, we propose an objective definition based on the shortest distance from the seismic wave emission line containing 95 % of the total landslide area. Without any empirical calibration the model explains 56 % of the variance in our dataset, and predicts 35 to 49 out of 83 cases within a factor of 2, depending on how we account for uncertainties on the seismic source depth. For most cases with comprehensive landslide inventories we show that our prediction compares well with the smallest region around the fault containing 95 % of the total landslide area. Aspects ignored by the model that could explain the residuals include local variations of the threshold of acceleration and processes modulating the surface ground shaking, such as the distribution of seismic energy release on the fault plane, the dynamic stress drop, and rupture directivity. Nevertheless, its simplicity and first-order accuracy suggest that the model can yield plausible and useful estimates of the landslide distribution area in near-real time, with earthquake parameters issued by standard detection routines.
NASA Astrophysics Data System (ADS)
Kim, Bongjae; Khmelevskyi, Sergii; Mazin, Igor I.; Agterberg, Daniel F.; Franchini, Cesare
2017-07-01
Sr2RuO4 is the best candidate for spin-triplet superconductivity, an unusual and elusive superconducting state of fundamental importance. In the last three decades, Sr2RuO4 has been very carefully studied and despite its apparent simplicity when compared with strongly correlated high-Tc cuprates, for which the pairing symmetry is understood, there is no scenario that can explain all the major experimental observations, a conundrum that has generated tremendous interest. Here, we present a density-functional-based analysis of magnetic interactions in Sr2RuO4 and discuss the role of magnetic anisotropy in its unconventional superconductivity. Our goal is twofold. First, we access the possibility of the superconducting order parameter rotation in an external magnetic field of 200 Oe, and conclude that the spin-orbit interaction in this material is several orders of magnitude too strong to be consistent with this hypothesis. Thus, the observed invariance of the Knight shift across Tc has no plausible explanation, and casts doubt on using the Knight shift as an ultimate litmus paper for the pairing symmetry. Second, we propose a quantitative double-exchange-like model for combining itinerant fermions with an anisotropic Heisenberg magnetic Hamiltonian. This model is complementary to the Hubbard-model-based calculations published so far, and forms an alternative framework for exploring superconducting symmetry in Sr2RuO4. As an example, we use this model to analyze the degeneracy between various p-triplet states in the simplest mean-field approximation, and show that it splits into a single and two doublets with the ground state defined by the competition between the "Ising" and "compass" anisotropic terms.
Fluorescent Fe K Emission from High Density Accretion Disks
NASA Astrophysics Data System (ADS)
Bautista, Manuel; Mendoza, Claudio; Garcia, Javier; Kallman, Timothy R.; Palmeri, Patrick; Deprince, Jerome; Quinet, Pascal
2018-06-01
Iron K-shell lines emitted by gas closely orbiting black holes are observed to be grossly broadened and skewed by Doppler effects and gravitational redshift. Accordingly, models for line profiles are widely used to measure the spin (i.e., the angular momentum) of astrophysical black holes. The accuracy of these spin estimates is called into question because fitting the data requires very high iron abundances, several times the solar value. Meanwhile, no plausible physical explanation has been proffered for why these black hole systems should be so iron rich. The most likely explanation for the super-solar iron abundances is a deficiency in the models, and the leading candidate cause is that current models are inapplicable at densities above 1018 cm-3. We study the effects of high densities on the atomic parameters and on the spectral models for iron ions. At high densities, Debye plasma can affect the effective atomic potential of the ions, leading to observable changes in energy levels and atomic rates with respect to the low density case. High densities also have the effec of lowering energy the atomic continuum and reducing the recombination rate coefficients. On the spectral modeling side, high densities drive level populations toward a Boltzman distribution and very large numbers of excited atomic levels, typically accounted for in theoretical spectral models, may contribute to the K-shell spectrum.
Menz, Hylton B; Lord, Stephen R; Fitzpatrick, Richard C
2007-02-01
Many falls in older people occur while walking, however the mechanisms responsible for gait instability are poorly understood. Therefore, the aim of this study was to develop a plausible model describing the relationships between impaired sensorimotor function, fear of falling and gait patterns in older people. Temporo-spatial gait parameters and acceleration patterns of the head and pelvis were obtained from 100 community-dwelling older people aged between 75 and 93 years while walking on an irregular walkway. A theoretical model was developed to explain the relationships between these variables, assuming that head stability is a primary output of the postural control system when walking. This model was then tested using structural equation modeling, a statistical technique which enables the testing of a set of regression equations simultaneously. The structural equation model indicated that: (i) reduced step length has a significant direct and indirect association with reduced head stability; (ii) impaired sensorimotor function is significantly associated with reduced head stability, but this effect is largely indirect, mediated by reduced step length, and; (iii) fear of falling is significantly associated with reduced step length, but has little direct influence on head stability. These findings provide useful insights into the possible mechanisms underlying gait characteristics and risk of falling in older people. Particularly important is the indication that fear-related step length shortening may be maladaptive.
Transient rheology of the uppermost mantle beneath the Mojave Desert, California
Pollitz, F.F.
2003-01-01
Geodetic data indicate that the M7.1 Hector Mine, California, earthquake was followed by a brief period (a few weeks) of rapid deformation preceding a prolonged phase of slower deformation. We find that the signal contained in continuous and campaign global positioning system data for 2.5 years after the earthquake may be explained with a transient rheology. Quantitative modeling of these data with allowance for transient (linear biviscous) rheology in the lower crust and upper mantle demonstrates that transient rheology in the upper mantle is dominant, its material properties being typified by two characteristic relaxation times ???0.07 and ???2 years. The inferred mantle rheology is a Jeffreys solid in which the transient and steady-state shear moduli are equal. Consideration of a simpler viscoelastic model with a linear univiscous rheology (2 fewer parameters than a biviscous model) shows that it consistently underpredicts the amplitude of the first ???3 months signal, and allowance for a biviscous rheology is significant at the 99.0% confidence level. Another alternative model - deep postseismic afterslip beneath the coseismic rupture - predicts a vertical velocity pattern opposite to the observed pattern at all time periods considered. Despite its plausibility, the advocated biviscous rheology model is non-unique and should be regarded as a viable alternative to the non-linear mantle rheology model for governing postseismic flow beneath the Mojave Desert. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Botto, Anna; Camporese, Matteo
2017-04-01
Hydrological models allow scientists to predict the response of water systems under varying forcing conditions. In particular, many physically-based integrated models were recently developed in order to understand the fundamental hydrological processes occurring at the catchment scale. However, the use of this class of hydrological models is still relatively limited, as their prediction skills heavily depend on reliable parameter estimation, an operation that is never trivial, being normally affected by large uncertainty and requiring huge computational effort. The objective of this work is to test the potential of data assimilation to be used as an inverse modeling procedure for the broad class of integrated hydrological models. To pursue this goal, a Bayesian data assimilation (DA) algorithm based on a Monte Carlo approach, namely the ensemble Kalman filter (EnKF), is combined with the CATchment HYdrology (CATHY) model. In this approach, input variables (atmospheric forcing, soil parameters, initial conditions) are statistically perturbed providing an ensemble of realizations aimed at taking into account the uncertainty involved in the process. Each realization is propagated forward by the CATHY hydrological model within a parallel R framework, developed to reduce the computational effort. When measurements are available, the EnKF is used to update both the system state and soil parameters. In particular, four different assimilation scenarios are applied to test the capability of the modeling framework: first only pressure head or water content are assimilated, then, the combination of both, and finally both pressure head and water content together with the subsurface outflow. To demonstrate the effectiveness of the approach in a real-world scenario, an artificial hillslope was designed and built to provide real measurements for the DA analyses. The experimental facility, located in the Department of Civil, Environmental and Architectural Engineering of the University of Padova (Italy), consists of a reinforced concrete box containing a soil prism with maximum height of 3.5 m, length of 6 m and width of 2 m. The hillslope is equipped with six pairs of tensiometers and water content reflectometers, to monitor the pressure head and soil moisture content, respectively. Moreover, two tipping bucket flow gages were used to measure the surface and subsurface discharges at the outlet. A 12-day long experiment was carried out, during which a series of four rainfall events with constant rainfall rate were generated, interspersed with phases of drainage. During the experiment, measurements were collected at a relatively high resolution of 0.5 Hz. We report here on the capability of the data assimilation framework to estimate sets of plausible parameters that are consistent with the experimental setup.
ERIC Educational Resources Information Center
Smangs, Mattias
2010-01-01
This article explores the plausibility of the conflicting theoretical assumptions underlying the main criminological perspectives on juvenile delinquents, their peer relations and social skills: the social ability model, represented by Sutherland's theory of differential associations, and the social disability model, represented by Hirschi's…
Exemplar-Based Clustering via Simulated Annealing
ERIC Educational Resources Information Center
Brusco, Michael J.; Kohn, Hans-Friedrich
2009-01-01
Several authors have touted the p-median model as a plausible alternative to within-cluster sums of squares (i.e., K-means) partitioning. Purported advantages of the p-median model include the provision of "exemplars" as cluster centers, robustness with respect to outliers, and the accommodation of a diverse range of similarity data. We developed…
ERIC Educational Resources Information Center
Paetkau, Mark
2007-01-01
One of my goals as an instructor is to teach students critical thinking skills. This paper presents an example of a student-led discussion of heat conduction at the first-year level. Heat loss from a human head is calculated using conduction and radiation models. The results of these plausible (but wrong) models of heat transfer contradict what…
A model of proto-object based saliency
Russell, Alexander F.; Mihalaş, Stefan; von der Heydt, Rudiger; Niebur, Ernst; Etienne-Cummings, Ralph
2013-01-01
Organisms use the process of selective attention to optimally allocate their computational resources to the instantaneously most relevant subsets of a visual scene, ensuring that they can parse the scene in real time. Many models of bottom-up attentional selection assume that elementary image features, like intensity, color and orientation, attract attention. Gestalt psychologists, how-ever, argue that humans perceive whole objects before they analyze individual features. This is supported by recent psychophysical studies that show that objects predict eye-fixations better than features. In this report we present a neurally inspired algorithm of object based, bottom-up attention. The model rivals the performance of state of the art non-biologically plausible feature based algorithms (and outperforms biologically plausible feature based algorithms) in its ability to predict perceptual saliency (eye fixations and subjective interest points) in natural scenes. The model achieves this by computing saliency as a function of proto-objects that establish the perceptual organization of the scene. All computational mechanisms of the algorithm have direct neural correlates, and our results provide evidence for the interface theory of attention. PMID:24184601
VISCOELASTIC MODELS OF TIDALLY HEATED EXOMOONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobos, Vera; Turner, Edwin L., E-mail: dobos@konkoly.hu
2015-05-01
Tidal heating of exomoons may play a key role in their habitability, since the elevated temperature can melt the ice on the body even without significant solar radiation. The possibility of life has been intensely studied on solar system moons such as Europa or Enceladus where the surface ice layer covers a tidally heated water ocean. Tidal forces may be even stronger in extrasolar systems, depending on the properties of the moon and its orbit. To study the tidally heated surface temperature of exomoons, we used a viscoelastic model for the first time. This model is more realistic than themore » widely used, so-called fixed Q models because it takes into account the temperature dependence of the tidal heat flux and the melting of the inner material. Using this model, we introduced the circumplanetary Tidal Temperate Zone (TTZ), which strongly depends on the orbital period of the moon and less on its radius. We compared the results with the fixed Q model and investigated the statistical volume of the TTZ using both models. We have found that the viscoelastic model predicts 2.8 times more exomoons in the TTZ with orbital periods between 0.1 and 3.5 days than the fixed Q model for plausible distributions of physical and orbital parameters. The viscoelastic model provides more promising results in terms of habitability because the inner melting of the body moderates the surface temperature, acting like a thermostat.« less
Recchia, Gabriel; Sahlgren, Magnus; Kanerva, Pentti; Jones, Michael N.
2015-01-01
Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, “noisy” permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics. PMID:25954306
Doble, Brett; John, Thomas; Thomas, David; Fellowes, Andrew; Fox, Stephen; Lorgelly, Paula
2017-05-01
To identify parameters that drive the cost-effectiveness of precision medicine by comparing the use of multiplex targeted sequencing (MTS) to select targeted therapy based on tumour genomic profiles to either no further testing with chemotherapy or no further testing with best supportive care in the fourth-line treatment of metastatic lung adenocarcinoma. A combined decision tree and Markov model to compare costs, life-years, and quality-adjusted life-years over a ten-year time horizon from an Australian healthcare payer perspective. Data sources included the published literature and a population-based molecular cohort study (Cancer 2015). Uncertainty was assessed using deterministic sensitivity analyses and quantified by estimating expected value of perfect/partial perfect information. Uncertainty due to technological/scientific advancement was assessed through a number of plausible future scenario analyses. Point estimate incremental cost-effective ratios indicate that MTS is not cost-effective for selecting fourth-line treatment of metastatic lung adenocarcinoma. Lower mortality rates during testing and for true positive patients, lower health state utility values for progressive disease, and targeted therapy resulting in reductions in inpatient visits, however, all resulted in more favourable cost-effectiveness estimates for MTS. The expected value to decision makers of removing all current decision uncertainty was estimated to be between AUD 5,962,843 and AUD 13,196,451, indicating that additional research to reduce uncertainty may be a worthwhile investment. Plausible future scenarios analyses revealed limited improvements in cost-effectiveness under scenarios of improved test performance, decreased costs of testing/interpretation, and no biopsy costs/adverse events. Reductions in off-label targeted therapy costs, when considered together with the other scenarios did, however, indicate more favourable cost-effectiveness of MTS. As more clinical evidence is generated for MTS, the model developed should be revisited and cost-effectiveness re-estimated under different testing scenarios to further understand the value of precision medicine and its potential impact on the overall health budget. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Learning Multisensory Integration and Coordinate Transformation via Density Estimation
Sabes, Philip N.
2013-01-01
Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations. PMID:23637588
The Detectability of Radio Auroral Emission from Proxima b
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burkhart, Blakesley; Loeb, Abraham
Magnetically active stars possess stellar winds whose interactions with planetary magnetic fields produce radio auroral emission. We examine the detectability of radio auroral emission from Proxima b, the closest known exosolar planet orbiting our nearest neighboring star, Proxima Centauri. Using the radiometric Bode’s law, we estimate the radio flux produced by the interaction of Proxima Centauri’s stellar wind and Proxima b’s magnetosphere for different planetary magnetic field strengths. For plausible planetary masses, Proxima b could produce radio fluxes of 100 mJy or more in a frequency range of 0.02–3 MHz for planetary magnetic field strengths of 0.007–1 G. According tomore » recent MHD models that vary the orbital parameters of the system, this emission is expected to be highly variable. This variability is due to large fluctuations in the size of Proxima b’s magnetosphere as it crosses the equatorial streamer regions of dense stellar wind and high dynamic pressure. Using the MHD model of Garraffo et al. for the variation of the magnetosphere radius during the orbit, we estimate that the observed radio flux can vary nearly by an order of magnitude over the 11.2-day period of Proxima b. The detailed amplitude variation depends on the stellar wind, orbital, and planetary magnetic field parameters. We discuss observing strategies for proposed future space-based observatories to reach frequencies below the ionospheric cutoff (∼10 MHz), which would be required to detect the signal we investigate.« less
Gupta, Kanika; Khatri, Om P
2017-09-01
Efficient removal of malachite green (MG) dye from simulated wastewater is demonstrated using high surface area reduced graphene oxide (rGO). The plausible interaction pathways between MG dye and rGO are deduced from nanostructural features (HRTEM) of rGO and spectroscopic analyses (FTIR and Raman). The high surface area (931m 2 ⋅gm -1 ) of rGO, π-π interaction between the aromatic rings of MG dye and graphitic skeleton, and electrostatic interaction of cationic centre of MG dye with π-electron clouds and negatively charged residual oxygen functionalities of rGO collectively facilitate the adsorption of MG dye on the rGO. The rGO displays adsorption capacity as high as 476.2mg⋅g -1 for MG dye. The thermodynamic parameters calculated from the temperature dependent isotherms suggested that the adsorption was a spontaneous and endothermic process. These results promise the potential of high surface area rGO for efficient removal of cationic dyes for wastewater treatment. Copyright © 2017 Elsevier Inc. All rights reserved.
Geng, Xiaolong; Boufadel, Michel C; Wrenn, Brian
2013-04-01
The biodegradation of heptadecane in five sand columns was modeled using a multiplicative Monod approach. Each column contained 1.0 kg of sand and 2 g of heptadecane, and was supplied with an artificial seawater solution containing nutrients at a flow rate that resulted in unsaturated flow through the column. All nutrients were provided in excess with the exception of nitrate whose influent concentration was 0.1, 0.5, 1.0, 2.5, or 5.0 mg N/L. The experiment was run around 912 h until no measurable oxygen consumption or CO2 production was observed. The residual mass of heptadecane was measured at the end of the experiments and the biodegradation was monitored based on oxygen consumption and CO2 production. Biodegradation kinetic parameters were estimated by fitting the model to experimental data of oxygen, CO2, and residual mass of heptadecane obtained from the two columns having influent nitrate-N concentration of 0.5 and 2.5 mg/L. Noting that the oxygen and CO2 measurements leveled off at around 450 h, we fitted the model to these data for that range. The estimated parameters fell in within the range reported in the literature. In particular, the half-saturation constant for nitrate utilization, [Formula: see text], was estimated to be 0.45 mg N/L, and the yield coefficient was found to be 0.15 mg biomass/mg heptadecane. Using these values, the rest of experimental data from the five columns was predicted, and the model agreed with the observations. There were some consistent discrepancies at large times between the model simulation and observed data in the cases with higher nitrate concentration. One plausible explanation for these differences could be limitation of biodegradation by reduction of the heptadecane-water interfacial area in these columns while the model uses a constant interfacial area.
TH-E-BRF-01: Exploiting Tumor Shrinkage in Split-Course Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unkelbach, J; Craft, D; Hong, T
2014-06-15
Purpose: In split-course radiotherapy, a patient is treated in several stages separated by weeks or months. This regimen has been motivated by radiobiological considerations. However, using modern image-guidance, it also provides an approach to reduce normal tissue dose by exploiting tumor shrinkage. In this work, we consider the optimal design of split-course treatments, motivated by the clinical management of large liver tumors for which normal liver dose constraints prohibit the administration of an ablative radiation dose in a single treatment. Methods: We introduce a dynamic tumor model that incorporates three factors: radiation induced cell kill, tumor shrinkage, and tumor cellmore » repopulation. The design of splitcourse radiotherapy is formulated as a mathematical optimization problem in which the total dose to the liver is minimized, subject to delivering the prescribed dose to the tumor. Based on the model, we gain insight into the optimal administration of radiation over time, i.e. the optimal treatment gaps and dose levels. Results: We analyze treatments consisting of two stages in detail. The analysis confirms the intuition that the second stage should be delivered just before the tumor size reaches a minimum and repopulation overcompensates shrinking. Furthermore, it was found that, for a large range of model parameters, approximately one third of the dose should be delivered in the first stage. The projected benefit of split-course treatments in terms of liver sparing depends on model assumptions. However, the model predicts large liver dose reductions by more than a factor of two for plausible model parameters. Conclusion: The analysis of the tumor model suggests that substantial reduction in normal tissue dose can be achieved by exploiting tumor shrinkage via an optimal design of multi-stage treatments. This suggests taking a fresh look at split-course radiotherapy for selected disease sites where substantial tumor regression translates into reduced target volumes.« less
Solomon, Daniel H; Patrick, Amanda R; Schousboe, John; Losina, Elena
2014-01-01
Fractures related to osteoporosis are associated with $20 billion in cost in the United States, with the majority of cost born by federal health-care programs, such as Medicare and Medicaid. Despite the proven fracture reduction benefits of several osteoporosis treatments, less than one-quarter of patients older than 65 years of age who fracture receive such care. A postfracture liaison service (FLS) has been developed in many health systems but has not been widely implemented in the United States. We developed a Markov state-transition computer simulation model to assess the cost-effectiveness of an FLS using a health-care system perspective. Using the model, we projected the lifetime costs and benefits of FLS, with or without a bone mineral density test, in men and women who had experienced a hip fracture. We estimated the costs and benefits of an FLS, the probabilities of refracture while on osteoporosis treatment, as well as the utilities associated with various health states from published literature. We used multi-way sensitivity analyses to examine impact of uncertainty in input parameters on cost-effectiveness of FLS. The model estimates that an FLS would result in 153 fewer fractures (109 hip, 5 wrist, 21 spine, 17 other), 37.43 more quality-adjusted life years (QALYs), and save $66,879 compared with typical postfracture care per every 10,000 postfracture patients. Doubling the cost of the FLS resulted in an incremental cost-effectiveness ratio (ICER) of $22,993 per QALY. The sensitivity analyses showed that results were robust to plausible ranges of input parameters; assuming the least favorable values of each of the major input parameters results in an ICER of $112,877 per QALY. An FLS targeting patients post-hip fracture should result in cost savings and reduced fractures under most scenarios. PMID:24443384
Low-energy Spectra of Gamma-Ray Bursts from Cooling Electrons
NASA Astrophysics Data System (ADS)
Geng, Jin-Jun; Huang, Yong-Feng; Wu, Xue-Feng; Zhang, Bing; Zong, Hong-Shi
2018-01-01
The low-energy spectra of gamma-ray bursts’ (GRBs) prompt emission are closely related to the energy distribution of electrons, which is further regulated by their cooling processes. We develop a numerical code to calculate the evolution of the electron distribution with given initial parameters, in which three cooling processes (i.e., adiabatic, synchrotron, and inverse Compton cooling) and the effect of a decaying magnetic field are coherently considered. A sequence of results is presented by exploring the plausible parameter space for both the fireball and the Poynting flux–dominated regime. Different cooling patterns for the electrons can be identified, and they are featured by a specific dominant cooling mechanism. Our results show that the hardening of the low-energy spectra can be attributed to the dominance of synchrotron self-Compton cooling within the internal shock model or to decaying synchrotron cooling within the Poynting flux–dominated jet scenario. These two mechanisms can be distinguished by observing the hard low-energy spectra of isolated short pulses in some GRBs. The dominance of adiabatic cooling can also lead to hard low-energy spectra when the ejecta is moving at an extreme relativistic speed. The information from the time-resolved low-energy spectra can help to probe the physical characteristics of the GRB ejecta via our numerical results.
NASA Astrophysics Data System (ADS)
Badhan, Mahmuda A.; Mandell, Avi M.; Hesman, Brigette; Nixon, Conor; Deming, Drake; Irwin, Patrick; Barstow, Joanna; Garland, Ryan
2015-11-01
Understanding the formation environments and evolution scenarios of planets in nearby planetary systems requires robust measures for constraining their atmospheric physical properties. Here we have utilized a combination of two different parameter retrieval approaches, Optimal Estimation and Markov Chain Monte Carlo, as part of the well-validated NEMESIS atmospheric retrieval code, to infer a range of temperature profiles and molecular abundances of H2O, CO2, CH4 and CO from available dayside thermal emission observations of several hot-Jupiter candidates. In order to keep the number of parameters low and henceforth retrieve more plausible profile shapes, we have used a parametrized form of the temperature profile based upon an analytic radiative equilibrium derivation in Guillot et al. 2010 (Line et al. 2012, 2014). We show retrieval results on published spectroscopic and photometric data from both the Hubble Space Telescope and Spitzer missions, and compare them with simulations from the upcoming JWST mission. In addition, since NEMESIS utilizes correlated distribution of absorption coefficients (k-distribution) amongst atmospheric layers to compute these models, updates to spectroscopic databases can impact retrievals quite significantly for such high-temperature atmospheres. As high-temperature line databases are continually being improved, we also compare retrievals between old and newer databases.
Hu, Zhenghui; Ni, Pengyu; Wan, Qun; Zhang, Yan; Shi, Pengcheng; Lin, Qiang
2016-01-01
Changes in BOLD signals are sensitive to the regional blood content associated with the vasculature, which is known as V0 in hemodynamic models. In previous studies involving dynamic causal modeling (DCM) which embodies the hemodynamic model to invert the functional magnetic resonance imaging signals into neuronal activity, V0 was arbitrarily set to a physiolog-ically plausible value to overcome the ill-posedness of the inverse problem. It is interesting to investigate how the V0 value influences DCM. In this study we addressed this issue by using both synthetic and real experiments. The results show that the ability of DCM analysis to reveal information about brain causality depends critically on the assumed V0 value used in the analysis procedure. The choice of V0 value not only directly affects the strength of system connections, but more importantly also affects the inferences about the network architecture. Our analyses speak to a possible refinement of how the hemody-namic process is parameterized (i.e., by making V0 a free parameter); however, the conditional dependencies induced by a more complex model may create more problems than they solve. Obtaining more realistic V0 information in DCM can improve the identifiability of the system and would provide more reliable inferences about the properties of brain connectivity. PMID:27389074
NASA Astrophysics Data System (ADS)
Perez, J. C.; Chandran, B. D. G.
2017-12-01
In this work we present recent results from high-resolution direct numerical simulations and a phenomenological model that describes the radial evolution of reflection-driven Alfven Wave turbulence in the solar atmosphere and the inner solar wind. The simulations are performed inside a narrow magnetic flux tube that models a coronal hole extending from the solar surface through the chromosphere and into the solar corona to approximately 21 solar radii. The simulations include prescribed empirical profiles that account for the inhomogeneities in density, background flow, and the background magnetic field present in coronal holes. Alfven waves are injected into the solar corona by imposing random, time-dependent velocity and magnetic field fluctuations at the photosphere. The phenomenological model incorporates three important features observed in the simulations: dynamic alignment, weak/strong nonlinear AW-AW interactions, and that the outward-propagating AWs launched by the Sun split into two populations with different characteristic frequencies. Model and simulations are in good agreement and show that when the key physical parameters are chosen within observational constraints, reflection-driven Alfven turbulence is a plausible mechanism for the heating and acceleration of the fast solar wind. By flying a virtual Parker Solar Probe (PSP) through the simulations, we will also establish comparisons between the model and simulations with the kind of single-point measurements that PSP will provide.
Dao, Tien Tuan; Pouletaut, Philippe; Charleux, Fabrice; Tho, Marie-Christine Ho Ba; Bensamoun, Sabine
2014-01-01
The purpose of this study was to develop a subject specific finite element model derived from MRI images to numerically analyze the MRE (magnetic resonance elastography) shear wave propagation within skeletal thigh muscles. A sagittal T2 CUBE MRI sequence was performed on the 20-cm thigh segment of a healthy male subject. Skin, adipose tissue, femoral bone and 11 muscles were manually segmented in order to have 3D smoothed solid and meshed models. These tissues were modeled with different constitutive laws. A transient modal dynamics analysis was applied to simulate the shear wave propagation within the thigh tissues. The effects of MRE experimental parameters (frequency, force) and the muscle material properties (shear modulus: C10) were analyzed through the simulated shear wave displacement within the vastus medialis muscle. The results showed a plausible range of frequencies (from 90Hz to 120 Hz), which could be used for MRE muscle protocol. The wave amplitude increased with the level of the force, revealing the importance of the boundary condition. Moreover, different shear displacement patterns were obtained as a function of the muscle mechanical properties. The present study is the first to analyze the shear wave propagation in skeletal muscles using a 3D subject specific finite element model. This study could be of great value to assist the experimenters in the set-up of MRE protocols.
A minimalist feedback-regulated model for galaxy formation during the epoch of reionization
NASA Astrophysics Data System (ADS)
Furlanetto, Steven R.; Mirocha, Jordan; Mebane, Richard H.; Sun, Guochao
2017-12-01
Near-infrared surveys have now determined the luminosity functions of galaxies at 6 ≲ z ≲ 8 to impressive precision and identified a number of candidates at even earlier times. Here, we develop a simple analytic model to describe these populations that allows physically motivated extrapolation to earlier times and fainter luminosities. We assume that galaxies grow through accretion on to dark matter haloes, which we model by matching haloes at fixed number density across redshift, and that stellar feedback limits the star formation rate. We allow for a variety of feedback mechanisms, including regulation through supernova energy and momentum from radiation pressure. We show that reasonable choices for the feedback parameters can fit the available galaxy data, which in turn substantially limits the range of plausible extrapolations of the luminosity function to earlier times and fainter luminosities: for example, the global star formation rate declines rapidly (by a factor of ∼20 from z = 6 to 15 in our fiducial model), but the bright galaxies accessible to observations decline even faster (by a factor ≳ 400 over the same range). Our framework helps us develop intuition for the range of expectations permitted by simple models of high-z galaxies that build on our understanding of 'normal' galaxy evolution. We also provide predictions for galaxy measurements by future facilities, including James Webb Space Telescope and Wide-Field Infrared Survey Telescope.
Modeled occupational exposures to gas-phase medical laser-generated air contaminants.
Lippert, Julia F; Lacey, Steven E; Jones, Rachael M
2014-01-01
Exposure monitoring data indicate the potential for substantive exposure to laser-generated air contaminants (LGAC); however the diversity of medical lasers and their applications limit generalization from direct workplace monitoring. Emission rates of seven previously reported gas-phase constituents of medical laser-generated air contaminants (LGAC) were determined experimentally and used in a semi-empirical two-zone model to estimate a range of plausible occupational exposures to health care staff. Single-source emission rates were generated in an emission chamber as a one-compartment mass balance model at steady-state. Clinical facility parameters such as room size and ventilation rate were based on standard ventilation and environmental conditions required for a laser surgical facility in compliance with regulatory agencies. All input variables in the model including point source emission rates were varied over an appropriate distribution in a Monte Carlo simulation to generate a range of time-weighted average (TWA) concentrations in the near and far field zones of the room in a conservative approach inclusive of all contributing factors to inform future predictive models. The concentrations were assessed for risk and the highest values were shown to be at least three orders of magnitude lower than the relevant occupational exposure limits (OELs). Estimated values do not appear to present a significant exposure hazard within the conditions of our emission rate estimates.
NASA Astrophysics Data System (ADS)
Dupac, X.; Arviset, C.; Fernandez Barreiro, M.; Lopez-Caniego, M.; Tauber, J.
2015-12-01
The Planck Collaboration has released in 2015 their second major dataset through the Planck Legacy Archive (PLA). It includes cosmological, Extragalactic and Galactic science data in temperature (intensity) and polarization. Full-sky maps are provided with unprecedented angular resolution and sensitivity, together with a large number of ancillary maps, catalogues (generic, SZ clusters and Galactic cold clumps), time-ordered data and other information. The extensive cosmological likelihood package allows cosmologists to fully explore the plausible parameters of the Universe. A new web-based PLA user interface is made public since Dec. 2014, allowing easier and faster access to all Planck data, and replacing the previous Java-based software. Numerous additional improvements to the PLA are also being developed through the so-called PLA Added-Value Interface, making use of an external contract with the Planetek Hellas and Expert Analytics software companies. This will allow users to process time-ordered data into sky maps, separate astrophysical components in existing maps, simulate the microwave and infrared sky through the Planck Sky Model, and use a number of other functionalities.
Fletcher, Jason M
2015-07-01
This paper provides some of the first evidence of peer effects in college enrollment decisions. There are several empirical challenges in assessing the influences of peers in this context, including the endogeneity of high school, shared group-level unobservables, and identifying policy-relevant parameters of social interactions models. This paper addresses these issues by using an instrumental variables/fixed effects approach that compares students in the same school but different grade-levels who are thus exposed to different sets of classmates. In particular, plausibly exogenous variation in peers' parents' college expectations are used as an instrument for peers' college choices. Preferred specifications indicate that increasing a student's exposure to college-going peers by ten percentage points is predicted to raise the student's probability of enrolling in college by 4 percentage points. This effect is roughly half the magnitude of growing up in a household with married parents (vs. an unmarried household). Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
BalčiÅ«nas, Sergejus; Ivanov, Maksim; Grigalaitis, Robertas; Banys, Juras; Amorín, Harvey; Castro, Alicia; Algueró, Miguel
2018-05-01
The broadband dielectric properties of high sensitivity piezoelectric 0.36BiScO3-0.64PbTiO3 ceramics with average grain sizes from 1.6 μm down to 26 nm were investigated in the 100-500 K temperature range. The grain size dependence of the dielectric permittivity was analysed within the effective medium approximation. It was found that the generalised core-shell (or brick wall) model correctly explains the size dependence down to the nanoscale. For the first time, the grain bulk and boundary properties were obtained without making any assumptions of values of the parameters or simplifications. Two contributions to dielectric permittivity of the grain bulk are described. The first is the size-independent one, which follows the Curie-Weiss law. The second one is shown to plausibly follow the Kittel's law. This seems to suggest the unexpected persistence of mobile ferroelectric domains at the nanoscale (26 nm grains). Alternative explanations are discussed.
The effect of suspended particles on Jean's criterion for gravitational instability
NASA Technical Reports Server (NTRS)
Wollkind, David J.; Yates, Kemble R.
1990-01-01
The effect that the proper inclusion of suspended particles has on Jeans' criterion for the self-gravitational instability of an unbounded nonrotating adiabatic gas cloud is examined by formulating the appropriate model system, introducing particular physically plausible equations of state and constitutive relations, performing a linear stability analysis of a uniformly expanding exact solution to these governing equations, and exploiting the fact that there exists a natural small material parameter for this problem given by N sub 1/n sub 1, the ratio of the initial number density for the particles to that for the gas. The main result of this investigation is the derivation of an altered criterion which can substantially reduce Jeans' original critical wavelength for instability. It is then shown that the existing discrepancy between Jeans' theoretical prediction using and actual observational data relevant to the Andromeda nebula M31 can be accounted for by this new criterion of assuming suspended particles of a reasonable grain size and distribution to be present.
Degradation kinetics and metabolites in continuous biodegradation of isoprene.
Srivastva, Navnita; Singh, Ram S; Upadhyay, Siddh N; Dubey, Suresh K
2016-04-01
The kinetic parameters of isoprene biodegradation were studied in a bioreactor, comprising of bioscrubber and polyurethane foam packed biofilter in series and inoculated with Pseudomonas sp., using a Michaelis-Menten type model. The maximum elimination capacity, ECmax; substrate constant, Ks and ECmax/Ks values for bioscrubber were found to be 666.7 g m(-3) h(-1), 9.86 g m(-3) and 67.56 h(-1), respectively while those for biofilter were 3333 g m(-3) h(-1), 13.96 g m(-3) and 238.7 h(-1), respectively. The biofilter section exhibited better degradation efficiency compared to the bioscrubber unit. Around 62-75% of the feed isoprene got converted to carbon dioxide, indicating the efficient capability of bacteria to mineralize isoprene. The FTIR and GC-MS analyses of degradation products indicated oxidative cleavage of unsaturated bond of isoprene. These results were used for proposing a plausible degradation pathway for isoprene. Copyright © 2016 Elsevier Ltd. All rights reserved.
Fletcher, Jason M.
2015-01-01
This paper provides some of the first evidence of peer effects in college enrollment decisions. There are several empirical challenges in assessing the influences of peers in this context, including the endogeneity of high school, shared group-level unobservables, and identifying policy-relevant parameters of social interactions models. This paper addresses these issues by using an instrumental variables/fixed effects approach that compares students in the same school but different grade-levels who are thus exposed to different sets of classmates. In particular, plausibly exogenous variation in peers’ parents’ college expectations are used as an instrument for peers’ college choices. Preferred specifications indicate that increasing a student’s exposure to college-going peers by ten percentage points is predicted to raise the student’s probability of enrolling in college by 4 percentage points. This effect is roughly half the magnitude of growing up in a household with married parents (vs. an unmarried household). PMID:26004476
NASA Astrophysics Data System (ADS)
Robinson, Alexandra R.
An updated global survey of radioisotope production and distribution was completed and subjected to a revised "down-selection methodology" to determine those radioisotopes that should be classified as potential national security risks based on availability and key physical characteristics that could be exploited in a hypothetical radiological dispersion device. The potential at-risk radioisotopes then were used in a modeling software suite known as Turbo FRMAC, developed by Sandia National Laboratories, to characterize plausible contamination maps known as Protective Action Guideline Zone Maps. This software also was used to calculate the whole body dose equivalent for exposed individuals based on various dispersion parameters and scenarios. Derived Response Levels then were determined for each radioisotope using: 1) target doses to members of the public provided by the U.S. EPA, and 2) occupational dose limits provided by the U.S. Nuclear Regulatory Commission. The limiting Derived Response Level for each radioisotope also was determined.
NASA Astrophysics Data System (ADS)
Diem, Samuel; Vogt, Tobias; Hoehn, Eduard
2010-12-01
For groundwater transport modeling on a scale of 10-100 m, detailed information about the spatial distribution of hydraulic conductivity is of great importance. At a test site (10×20 m) in the alluvial gravel-and-sand aquifer of the perialpine Thur valley (Switzerland), four different methods were applied on different scales to assess this parameter. A comparison of the results showed that multilevel slug tests give reliable results at the required scale. For its analysis, a plausible value of the anisotropy ratio of hydraulic conductivity ( K v / K h ) is needed, which was calculated using a pumping test. The integral results of pumping tests provide an upper boundary of the natural spectrum of hydraulic conductivity at the scale of the test site. Flowmeter logs are recommended if the relative distribution of hydraulic conductivity is of primary importance, while sieve analyses can be used if only a rough estimate of hydraulic conductivity is acceptable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Binder, Tobias; Covi, Laura; Kamada, Ayuki
Dark Matter (DM) models providing possible alternative solutions to the small-scale crisis of the standard cosmology are nowadays of growing interest. We consider DM interacting with light hidden fermions via well-motivated fundamental operators showing the resultant matter power spectrum is suppressed on subgalactic scales within a plausible parameter region. Our basic description of the evolution of cosmological perturbations relies on a fully consistent first principles derivation of a perturbed Fokker-Planck type equation, generalizing existing literature. The cosmological perturbation of the Fokker-Planck equation is presented for the first time in two different gauges, where the results transform into each other accordingmore » to the rules of gauge transformation. Furthermore, our focus lies on a derivation of a broadly applicable and easily computable collision term showing important phenomenological differences to other existing approximations. As one of the main results and concerning the small-scale crisis, we show the equal importance of vector and scalar boson mediated interactions between the DM and the light fermions.« less
Comparison of methods for the detection of gravitational waves from unknown neutron stars
NASA Astrophysics Data System (ADS)
Walsh, S.; Pitkin, M.; Oliver, M.; D'Antonio, S.; Dergachev, V.; Królak, A.; Astone, P.; Bejger, M.; Di Giovanni, M.; Dorosh, O.; Frasca, S.; Leaci, P.; Mastrogiovanni, S.; Miller, A.; Palomba, C.; Papa, M. A.; Piccinni, O. J.; Riles, K.; Sauter, O.; Sintes, A. M.
2016-12-01
Rapidly rotating neutron stars are promising sources of continuous gravitational wave radiation for the LIGO and Virgo interferometers. The majority of neutron stars in our galaxy have not been identified with electromagnetic observations. All-sky searches for isolated neutron stars offer the potential to detect gravitational waves from these unidentified sources. The parameter space of these blind all-sky searches, which also cover a large range of frequencies and frequency derivatives, presents a significant computational challenge. Different methods have been designed to perform these searches within acceptable computational limits. Here we describe the first benchmark in a project to compare the search methods currently available for the detection of unknown isolated neutron stars. The five methods compared here are individually referred to as the PowerFlux, sky Hough, frequency Hough, Einstein@Home, and time domain F -statistic methods. We employ a mock data challenge to compare the ability of each search method to recover signals simulated assuming a standard signal model. We find similar performance among the four quick-look search methods, while the more computationally intensive search method, Einstein@Home, achieves up to a factor of two higher sensitivity. We find that the absence of a second derivative frequency in the search parameter space does not degrade search sensitivity for signals with physically plausible second derivative frequencies. We also report on the parameter estimation accuracy of each search method, and the stability of the sensitivity in frequency and frequency derivative and in the presence of detector noise.
New Insights into Auroral Particle Acceleration via Coordinated Optical-Radar Networks
NASA Astrophysics Data System (ADS)
Hirsch, M.
2016-12-01
The efficacy of instruments synthesized from heterogeneous sensor networks is increasingly being realized in fielded science observation systems. New insights into the finest spatio-temporal scales of ground-observable ionospheric physics are realized by coupling low-level data from fixed legacy instruments with mobile and portable sensors. In particular, turbulent ionospheric events give enhanced radar returns more than three orders of magnitude larger than typical incoherent plasma observations. Radar integration times for the Poker Flat Incoherent Scatter Radar (PFISR) can thereby be shrunk from order 100 second integration time down to order 100 millisecond integration time for the ion line. Auroral optical observations with 20 millisecond cadence synchronized in absolute time with the radar help uncover plausible particle acceleration processes for the highly dynamic aurora often associated with Langmuir turbulence. Quantitative analysis of coherent radar returns combined with a physics-based model yielding optical volume emission rate profiles vs. differential number flux input of precipitating particles into the ionosphere yield plausibility estimates for a particular auroral acceleration process type. Tabulated results from a survey of auroral events where the Boston University High Speed Auroral Tomography system operated simultaneously with PFISR are presented. Context is given to the narrow-field HiST observations by the Poker Flat Digital All-Sky Camera and THEMIS GBO ASI network. Recent advances in high-rate (order 100 millisecond) plasma line ISR observations (100x improvement in temporal resolution) will contribute to future coordinated observations. ISR beam pattern and pulse parameter configurations favorable for future coordinated optical-ISR experiments are proposed in light of recent research uncovering the criticality of aspect angle to ISR-observable physics. High-rate scientist-developed GPS TEC receivers are expected to contribute additional high resolution observations to such experiments.
ERIC Educational Resources Information Center
Gunnoe, Marjorie Lindner; Mariner, Carrie Lea
Researchers who employ contextual models of parenting contend that it is not spanking per se, but rather the context in which spanking occurs and the meanings children ascribe to spanking, that predict child outcomes. This study proposed two plausible meanings that children may ascribe to spanking--a legitimate expression of parental authority or…
ERIC Educational Resources Information Center
Dougherty, Michael R.; Franco-Watkins, Ana M.; Thomas, Rick
2008-01-01
The theory of probabilistic mental models (PMM; G. Gigerenzer, U. Hoffrage, & H. Kleinbolting, 1991) has had a major influence on the field of judgment and decision making, with the most recent important modifications to PMM theory being the identification of several fast and frugal heuristics (G. Gigerenzer & D. G. Goldstein, 1996). These…
ERIC Educational Resources Information Center
Mavritsaki, Eirini; Heinke, Dietmar; Allen, Harriet; Deco, Gustavo; Humphreys, Glyn W.
2011-01-01
We present the case for a role of biologically plausible neural network modeling in bridging the gap between physiology and behavior. We argue that spiking-level networks can allow "vertical" translation between physiological properties of neural systems and emergent "whole-system" performance--enabling psychological results to be simulated from…
Interval Estimation of Revision Effect on Scale Reliability via Covariance Structure Modeling
ERIC Educational Resources Information Center
Raykov, Tenko
2009-01-01
A didactic discussion of a procedure for interval estimation of change in scale reliability due to revision is provided, which is developed within the framework of covariance structure modeling. The method yields ranges of plausible values for the population gain or loss in reliability of unidimensional composites, which results from deletion or…
Plausibility and the Theoreticians' Regress: Constructing the evolutionary fate of stars
NASA Astrophysics Data System (ADS)
Ipe, Alex Ike
2002-10-01
This project presents a case-study of a scientific controversy that occurred in theoretical astrophysics nearly seventy years ago following the conceptual discovery of a novel phenomenon relating to the evolution and structure of stellar matter, known as the limiting mass. The ensuing debate between the author of the finding, Subrahmanyan Chandrasekhar and his primary critic, Arthur Stanley Eddington, witnessed both scientists trying to convince one another, as well as the astrophysical community, that their respective positions on the issue was the correct one. Since there was no independent criterion—that is, no observational evidence—at the time of the dispute that could have been drawn upon to test the validity of the limiting mass concept, a logical, objective resolution to the controversy was not possible. In this respect, I argue that the dynamics of the Chandrasekhar-Eddington debate succinctly resonates with Kennefick's notion of the Theoreticians' Regress. However, whereas this model predicts that such a regress can be broken if both parties in a dispute come to agree on who was in error and collaborate on a calculation whose technical foundation can be agreed to, I argue that a more pragmatic path by which the Theoreticians' Regress is broken is when one side in a dispute is able to construct its argument as being more plausible than that of its opponent, and is so successful in doing so, that its opposition is subsequently forced to withdraw from the debate. In order to adequately deal with the construction of plausibility in the context of scientific controversies, I draw upon Harvey's Plausibility Model as well as Pickering's work on the role socio-cultural factors play in the resolution of intellectual disputes. It is believed that the ideas embedded in these social- relativist-constructivist perspectives provide the most parsimonious explanation as to the reasons for the genesis and ultimate closure of this particular scientific controversy.
Pilgrims sailing the Titanic: plausibility effects on memory for misinformation.
Hinze, Scott R; Slaten, Daniel G; Horton, William S; Jenkins, Ryan; Rapp, David N
2014-02-01
People rely on information they read even when it is inaccurate (Marsh, Meade, & Roediger, Journal of Memory and Language 49:519-536, 2003), but how ubiquitous is this phenomenon? In two experiments, we investigated whether this tendency to encode and rely on inaccuracies from text might be influenced by the plausibility of misinformation. In Experiment 1, we presented stories containing inaccurate plausible statements (e.g., "The Pilgrims' ship was the Godspeed"), inaccurate implausible statements (e.g., . . . the Titanic), or accurate statements (e.g., . . . the Mayflower). On a subsequent test of general knowledge, participants relied significantly less on implausible than on plausible inaccuracies from the texts but continued to rely on accurate information. In Experiment 2, we replicated these results with the addition of a think-aloud procedure to elicit information about readers' noticing and evaluative processes for plausible and implausible misinformation. Participants indicated more skepticism and less acceptance of implausible than of plausible inaccuracies. In contrast, they often failed to notice, completely ignored, and at times even explicitly accepted the misinformation provided by plausible lures. These results offer insight into the conditions under which reliance on inaccurate information occurs and suggest potential mechanisms that may underlie reported misinformation effects.
Defining time crystals via representation theory
NASA Astrophysics Data System (ADS)
Khemani, Vedika; von Keyserlingk, C. W.; Sondhi, S. L.
2017-09-01
Time crystals are proposed states of matter which spontaneously break time translation symmetry. There is no settled definition of such states. We offer a new definition which follows the traditional recipe for Wigner symmetries and order parameters. Supplementing our definition with a few plausible assumptions we find that a) systems with time-independent Hamiltonians should not exhibit time translation symmetry breaking while b) the recently studied π spin glass/Floquet time crystal can be viewed as breaking a global internal symmetry and as breaking time translation symmetry, as befits its two names.
Object recognition with hierarchical discriminant saliency networks.
Han, Sunhyoung; Vasconcelos, Nuno
2014-01-01
The benefits of integrating attention and object recognition are investigated. While attention is frequently modeled as a pre-processor for recognition, we investigate the hypothesis that attention is an intrinsic component of recognition and vice-versa. This hypothesis is tested with a recognition model, the hierarchical discriminant saliency network (HDSN), whose layers are top-down saliency detectors, tuned for a visual class according to the principles of discriminant saliency. As a model of neural computation, the HDSN has two possible implementations. In a biologically plausible implementation, all layers comply with the standard neurophysiological model of visual cortex, with sub-layers of simple and complex units that implement a combination of filtering, divisive normalization, pooling, and non-linearities. In a convolutional neural network implementation, all layers are convolutional and implement a combination of filtering, rectification, and pooling. The rectification is performed with a parametric extension of the now popular rectified linear units (ReLUs), whose parameters can be tuned for the detection of target object classes. This enables a number of functional enhancements over neural network models that lack a connection to saliency, including optimal feature denoising mechanisms for recognition, modulation of saliency responses by the discriminant power of the underlying features, and the ability to detect both feature presence and absence. In either implementation, each layer has a precise statistical interpretation, and all parameters are tuned by statistical learning. Each saliency detection layer learns more discriminant saliency templates than its predecessors and higher layers have larger pooling fields. This enables the HDSN to simultaneously achieve high selectivity to target object classes and invariance. The performance of the network in saliency and object recognition tasks is compared to those of models from the biological and computer vision literatures. This demonstrates benefits for all the functional enhancements of the HDSN, the class tuning inherent to discriminant saliency, and saliency layers based on templates of increasing target selectivity and invariance. Altogether, these experiments suggest that there are non-trivial benefits in integrating attention and recognition.
Spontaneous chiral symmetry breaking in two-dimensional aggregation
NASA Astrophysics Data System (ADS)
Sandler, Ilya Moiseevich
Recently, unusual and strikingly beautiful seahorse-like growth patterns have been discovered. These patterns possess a spontaneously broken chiral (left/right) symmetry. To explain this spontaneous chiral symmetry breaking, we develop a model for the growth of the aggregate, assuming that the latter is charged, and that the incoming particles are polarizable, and hence drawn preferentially to regions of strong electric field. This model is used both for numerical simulation and theoretical analysis of the aggregation process. We find that the broken symmetry (typically, an 'S' shape) appears in our simulations for some parameter values. Its origin is the long-range interaction (competition and repulsion) among growing branches of the aggregate, such that a right or left side consistently dominates the growth process. We show that the electrostatic interaction may account for the other geometrical properties of the aggregates, such as the existence of only 2 main arms, and the "finned" external edge of the main arms. The results of our simulations of growth in the presence of the external electric field are also in a good agreement with the results of new experiments, motivated by our ideas. Thus, we believe that our growth model provides a plausible explanation of the origin of the broken symmetry in the experimental patterns.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palladino, Andrea; Vissani, Francesco; Spurio, Maurizio, E-mail: andrea.palladino@gssi.infn.it, E-mail: maurizio.spurio@bo.infn.it, E-mail: francesco.vissani@lngs.infn.it
Recently it was noted that different IceCube datasets are not consistent with the same power law spectrum of the cosmic neutrinos: this is the IceCube spectral anomaly , that suggests that they observe a multicomponent spectrum. In this work, the main possibilities to enhance the description in terms of a single extragalactic neutrino component are examined. The hypothesis of a sizable contribution of Galactic high-energy neutrino events distributed as E {sup −2.7} [ Astrophys. J. 826 (2016) 185] is critically analyzed and its natural generalization is considered. The stability of the expectations is studied by introducing free parameters, motivated bymore » theoretical considerations and observational facts. The upgraded model here examined has 1) a Galactic component with different normalization and shape E {sup −2.4}; 2) an extragalactic neutrino spectrum based on new data; 3) a non-zero prompt component of atmospheric neutrinos. The two key predictions of the model concern the 'high-energy starting events' collected from the Southern sky. The Galactic component produces a softer spectrum and a testable angular anisotropy. A second, radically different class of models, where the second component is instead isotropic, plausibly extragalactic and with a relatively soft spectrum, is disfavored instead by existing observations of muon neutrinos from the Northern sky and below few 100 TeV.« less
Angular momentum transfer in primordial discs and the rotation of the first stars
NASA Astrophysics Data System (ADS)
Hirano, Shingo; Bromm, Volker
2018-05-01
We investigate the rotation velocity of the first stars by modelling the angular momentum transfer in the primordial accretion disc. Assessing the impact of magnetic braking, we consider the transition in angular momentum transport mode at the Alfvén radius, from the dynamically dominated free-fall accretion to the magnetically dominated solid-body one. The accreting protostar at the centre of the primordial star-forming cloud rotates with close to breakup speed in the case without magnetic fields. Considering a physically motivated model for small-scale turbulent dynamo amplification, we find that stellar rotation speed quickly declines if a large fraction of the initial turbulent energy is converted to magnetic energy (≳ 0.14). Alternatively, if the dynamo process were inefficient, for amplification due to flux freezing, stars would become slow rotators if the pre-galactic magnetic field strength is above a critical value, ≃10-8.2 G, evaluated at a scale of nH = 1 cm-3, which is significantly higher than plausible cosmological seed values (˜10-15 G). Because of the rapid decline of the stellar rotational speed over a narrow range in model parameters, the first stars encounter a bimodal fate: rapid rotation at almost the breakup level, or the near absence of any rotation.
On the IceCube spectral anomaly
NASA Astrophysics Data System (ADS)
Palladino, Andrea; Spurio, Maurizio; Vissani, Francesco
2016-12-01
Recently it was noted that different IceCube datasets are not consistent with the same power law spectrum of the cosmic neutrinos: this is the IceCube spectral anomaly, that suggests that they observe a multicomponent spectrum. In this work, the main possibilities to enhance the description in terms of a single extragalactic neutrino component are examined. The hypothesis of a sizable contribution of Galactic high-energy neutrino events distributed as E-2.7 [Astrophys. J. 826 (2016) 185] is critically analyzed and its natural generalization is considered. The stability of the expectations is studied by introducing free parameters, motivated by theoretical considerations and observational facts. The upgraded model here examined has 1) a Galactic component with different normalization and shape E-2.4 2) an extragalactic neutrino spectrum based on new data; 3) a non-zero prompt component of atmospheric neutrinos. The two key predictions of the model concern the `high-energy starting events' collected from the Southern sky. The Galactic component produces a softer spectrum and a testable angular anisotropy. A second, radically different class of models, where the second component is instead isotropic, plausibly extragalactic and with a relatively soft spectrum, is disfavored instead by existing observations of muon neutrinos from the Northern sky and below few 100 TeV.
NASA Astrophysics Data System (ADS)
Könnyű, Balázs; Czárán, Tamás
2015-06-01
The RNA World scenario of prebiotic chemical evolution is among the most plausible conceptual framework available today for modelling the origin of life. RNA offers genetic and catalytic (metabolic) functionality embodied in a single chemical entity, and a metabolically cooperating community of RNA molecules would constitute a viable infrabiological subsystem with a potential to evolve into proto-cellular life. Our Metabolically Coupled Replicator System (MCRS) model is a spatially explicit computer simulation implementation of the RNA-World scenario, in which replicable ribozymes cooperate in supplying each other with monomers for their own replication. MCRS has been repeatedly demonstrated to be viable and evolvable, with different versions of the model improved in depth (chemical detail of metabolism) or in extension (additional functions of RNA molecules). One of the dynamically relevant extensions of the MCRS approach to prebiotic RNA evolution is the explicit inclusion of template replication into its assumptions, which we have studied in the present version. We found that this modification has not changed the behaviour of the system in the qualitative sense, just the range of the parameter space which is optimal for the coexistence of metabolically cooperating replicators has shifted in terms of metabolite mobility. The system also remains resistant and tolerant to parasitic replicators.
A novel patient-specific model to compute coronary fractional flow reserve.
Kwon, Soon-Sung; Chung, Eui-Chul; Park, Jin-Seo; Kim, Gook-Tae; Kim, Jun-Woo; Kim, Keun-Hong; Shin, Eun-Seok; Shim, Eun Bo
2014-09-01
The fractional flow reserve (FFR) is a widely used clinical index to evaluate the functional severity of coronary stenosis. A computer simulation method based on patients' computed tomography (CT) data is a plausible non-invasive approach for computing the FFR. This method can provide a detailed solution for the stenosed coronary hemodynamics by coupling computational fluid dynamics (CFD) with the lumped parameter model (LPM) of the cardiovascular system. In this work, we have implemented a simple computational method to compute the FFR. As this method uses only coronary arteries for the CFD model and includes only the LPM of the coronary vascular system, it provides simpler boundary conditions for the coronary geometry and is computationally more efficient than existing approaches. To test the efficacy of this method, we simulated a three-dimensional straight vessel using CFD coupled with the LPM. The computed results were compared with those of the LPM. To validate this method in terms of clinically realistic geometry, a patient-specific model of stenosed coronary arteries was constructed from CT images, and the computed FFR was compared with clinically measured results. We evaluated the effect of a model aorta on the computed FFR and compared this with a model without the aorta. Computationally, the model without the aorta was more efficient than that with the aorta, reducing the CPU time required for computing a cardiac cycle to 43.4%. Copyright © 2014. Published by Elsevier Ltd.
2017-01-01
In principle, formal dynamical models of decision making hold the potential to represent fundamental computations underpinning value-based (i.e., preferential) decisions in addition to perceptual decisions. Sequential-sampling models such as the race model and the drift-diffusion model that are grounded in simplicity, analytical tractability, and optimality remain popular, but some of their more recent counterparts have instead been designed with an aim for more feasibility as architectures to be implemented by actual neural systems. Connectionist models are proposed herein at an intermediate level of analysis that bridges mental phenomena and underlying neurophysiological mechanisms. Several such models drawing elements from the established race, drift-diffusion, feedforward-inhibition, divisive-normalization, and competing-accumulator models were tested with respect to fitting empirical data from human participants making choices between foods on the basis of hedonic value rather than a traditional perceptual attribute. Even when considering performance at emulating behavior alone, more neurally plausible models were set apart from more normative race or drift-diffusion models both quantitatively and qualitatively despite remaining parsimonious. To best capture the paradigm, a novel six-parameter computational model was formulated with features including hierarchical levels of competition via mutual inhibition as well as a static approximation of attentional modulation, which promotes “winner-take-all” processing. Moreover, a meta-analysis encompassing several related experiments validated the robustness of model-predicted trends in humans’ value-based choices and concomitant reaction times. These findings have yet further implications for analysis of neurophysiological data in accordance with computational modeling, which is also discussed in this new light. PMID:29077746
Colas, Jaron T
2017-01-01
In principle, formal dynamical models of decision making hold the potential to represent fundamental computations underpinning value-based (i.e., preferential) decisions in addition to perceptual decisions. Sequential-sampling models such as the race model and the drift-diffusion model that are grounded in simplicity, analytical tractability, and optimality remain popular, but some of their more recent counterparts have instead been designed with an aim for more feasibility as architectures to be implemented by actual neural systems. Connectionist models are proposed herein at an intermediate level of analysis that bridges mental phenomena and underlying neurophysiological mechanisms. Several such models drawing elements from the established race, drift-diffusion, feedforward-inhibition, divisive-normalization, and competing-accumulator models were tested with respect to fitting empirical data from human participants making choices between foods on the basis of hedonic value rather than a traditional perceptual attribute. Even when considering performance at emulating behavior alone, more neurally plausible models were set apart from more normative race or drift-diffusion models both quantitatively and qualitatively despite remaining parsimonious. To best capture the paradigm, a novel six-parameter computational model was formulated with features including hierarchical levels of competition via mutual inhibition as well as a static approximation of attentional modulation, which promotes "winner-take-all" processing. Moreover, a meta-analysis encompassing several related experiments validated the robustness of model-predicted trends in humans' value-based choices and concomitant reaction times. These findings have yet further implications for analysis of neurophysiological data in accordance with computational modeling, which is also discussed in this new light.
NASA Technical Reports Server (NTRS)
Nomoto, K.
1981-01-01
As a plausible explosion model for a Type I supernova, the evolution of carbon-oxygen white dwarfs accreting helium in binary systems was investigated from the onset of accretion up to the point at which a thermonuclear explosion occurs. The relationship between the conditions in the binary system and the triggering mechanism for the supernova explosion is discussed, especially for the cases with relatively slow accretion rate. It is found that the growth of a helium zone on the carbon-oxygen core leads to a supernova explosion which is triggered either by the off-center helium detonation for slow and intermediate accretion rates or by the carbon deflagration for slow and rapid accretion rates. Both helium detonation and carbon deflagration are possible for the case of slow accretion, since in this case the initial mass of the white dwarf is an important parameter for determining the mode of ignition. Finally, various modes of building up the helium zone on the white dwarf, namely, direct transfer of helium from the companion star and the various types and strength of the hydrogen shell flashes are discussed in some detail.
Ultraviolet line diagnostics of accretion disk winds in cataclysmic variables
NASA Technical Reports Server (NTRS)
Vitello, Peter; Shlosman, Isaac
1993-01-01
The IUE data base is used to analyze the UV line shapes of the cataclysmic variables RW Sex, RW Tri, and V Sge. Observed lines are compared to synthetic line profiles computed using a model of rotating biconical winds from accretion disks. The wind model calculates the wind ionization structure self-consistently including photoionization from the disk and boundary layer and treats 3D line radiation transfer in the Sobolev approximation. It is found that winds from accretion disks provide a good fit for reasonable parameters to the observed UV lines which include the P Cygni profiles for low-inclination systems and pure emission at large inclination. Disk winds are preferable to spherical winds which originate on the white dwarf because they: (1) require a much lower ratio of mass-loss rate to accretion rate and are therefore more plausible energetically; (2) provide a natural source for a biconical distribution of mass outflow which produces strong scattering far above the disk leading to P Cygni profiles for low-inclination systems and pure line emission profiles at high inclination with the absence of eclipses in UV lines; and (3) produce rotation-broadened pure emission lines at high inclination.
Balancing Selection in Species with Separate Sexes: Insights from Fisher’s Geometric Model
Connallon, Tim; Clark, Andrew G.
2014-01-01
How common is balancing selection, and what fraction of phenotypic variance is attributable to balanced polymorphisms? Despite decades of research, answers to these questions remain elusive. Moreover, there is no clear theoretical prediction about the frequency with which balancing selection is expected to arise within a population. Here, we use an extension of Fisher’s geometric model of adaptation to predict the probability of balancing selection in a population with separate sexes, wherein polymorphism is potentially maintained by two forms of balancing selection: (1) heterozygote advantage, where heterozygous individuals at a locus have higher fitness than homozygous individuals, and (2) sexually antagonistic selection (a.k.a. intralocus sexual conflict), where the fitness of each sex is maximized by different genotypes at a locus. We show that balancing selection is common under biologically plausible conditions and that sex differences in selection or sex-by-genotype effects of mutations can each increase opportunities for balancing selection. Although heterozygote advantage and sexual antagonism represent alternative mechanisms for maintaining polymorphism, they mutually exist along a balancing selection continuum that depends on population and sex-specific parameters of selection and mutation. Sexual antagonism is the dominant mode of balancing selection across most of this continuum. PMID:24812306
Gamma-ray Burst Prompt Correlations: Selection and Instrumental Effects
NASA Astrophysics Data System (ADS)
Dainotti, M. G.; Amati, L.
2018-05-01
The prompt emission mechanism of gamma-ray bursts (GRB) even after several decades remains a mystery. However, it is believed that correlations between observable GRB properties, given their huge luminosity/radiated energy and redshift distribution extending up to at least z ≈ 9, are promising possible cosmological tools. They also may help to discriminate among the most plausible theoretical models. Nowadays, the objective is to make GRBs standard candles, similar to supernovae (SNe) Ia, through well-established and robust correlations. However, differently from SNe Ia, GRBs span over several order of magnitude in their energetics, hence they cannot yet be considered standard candles. Additionally, being observed at very large distances, their physical properties are affected by selection biases, the so-called Malmquist bias or Eddington effect. We describe the state of the art on how GRB prompt correlations are corrected for these selection biases to employ them as redshift estimators and cosmological tools. We stress that only after an appropriate evaluation and correction for these effects, GRB correlations can be used to discriminate among the theoretical models of prompt emission, to estimate the cosmological parameters and to serve as distance indicators via redshift estimation.
Linkage studies on chromosome 22 in familial schizophrenia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vallada, H.P.; Gill, M.; Sham, P.
1995-04-24
As part of a systematic search for a major genetic locus for schizophrenia we have examined chromosome 22 using 14 highly polymorphic markers in 23 disease pedigrees. The markers were distributed at an average distance of 6.6 cM, covering 70-80% of the chromosome. We analyzed the data by the lod score method using five plausible genetic models ranging from dominant to recessive, after testing the power of our sample under the same genetic parameters. The most positive lod score found was 1.51 under a recessive model for the marker D22S278, which is insufficient to conclude linkage. However, an excess ofmore » shared alleles in affected siblings (P < .01) was found for both D22S278 and D22S283. For D22S278, the A statistic was equal to the lod score (1.51) and therefore did not provide additional evidence for linkage allowing for heterogeneity, but the Liang statistic was more significant (P = .002). Our results suggest the possibility that the region around D22S278 and D22S283 contains a gene which contributes to the etiology of schizophrenia. 60 refs., 1 fig., 5 tabs.« less
UV line diagnostics of accretion disk winds in cataclysmic variables
NASA Technical Reports Server (NTRS)
Vitello, Peter; Shlosman, Isaac
1992-01-01
The IUE data base is used to analyze the UV line shapes of cataclysmic variables RW Sex, RW Tri, and V Sge. Observed lines are compared to synthetic line profiles computed using a model of rotating bi-conical winds from accretion disks. The wind model calculates the wind ionization structure self-consistently including photoionization from the disk and boundary layer and treats 3-D line radiation transfer in the Sobolev approximation. It is found that winds from accretion disks provide a good fit for reasonable parameters to the observed UV lines which include the P Cygni profiles for low inclination systems and pure emission at large inclination. Disk winds are preferable to spherical winds which originate on the white dwarf because they (1) require a much lower ratio of mass loss rate to accretion rate and are therefore more plausible energetically, (2) provide a natural source for a bi-conical distribution of mass outflow which produces strong scattering far above the disk leading to P Cygni profiles for low inclination systems, and pure line emission profiles at high inclination with the absence of eclipses in UV lines, and (3) produce rotation broadened pure emission lines at high inclination.
Periodontal disease and perinatal outcomes.
Matevosyan, Naira Roland
2011-04-01
To elucidate plausible associations between periodontal disease (PD) and pregnancy events through meta-analysis of original research published between 1998 and 2010. One hundred and twenty-five randomized, case-control, matched-cohort studies on pregnancy and postpartum specifics in women with PD are identified through PubMed, LILACS, and Cochrane Register. Meta-study is performed on a sample of 992 births allocated from studies of level I-II-1 evidence. An oral inflammation score (OIS) is composed from parameteric and observational components of maternal PD. Pearson arrival process is modeled for exchangeable correlations. Women with preeclampsia and preterm birth have poor periodontal parameters in both, treatment and placebo groups (OR 1.94-2.9). In puerperae with severe periodontitis birth weight is negatively correlated with maternal probing depth (r = -0.368), and C-reactive protein (r = -0.416). Higher rates of tobacco use (RR 3.02), bacterial vaginosis (RR 2.7), clinical attachment level (OR 2.76), and fetal tyrosine kinase (OR 1.6) contribute in increased rates of preeclamsia (RR 1.68), and prematurity (RR 2.75). After adding confounders into the model OIS remains significantly associated with preterm birth (OR 2.3). Maternal PD has strong associations with preeclampsia and prematurity.
TAP score: torsion angle propensity normalization applied to local protein structure evaluation
Tosatto, Silvio CE; Battistutta, Roberto
2007-01-01
Background Experimentally determined protein structures may contain errors and require validation. Conformational criteria based on the Ramachandran plot are mainly used to distinguish between distorted and adequately refined models. While the readily available criteria are sufficient to detect totally wrong structures, establishing the more subtle differences between plausible structures remains more challenging. Results A new criterion, called TAP score, measuring local sequence to structure fitness based on torsion angle propensities normalized against the global minimum and maximum is introduced. It is shown to be more accurate than previous methods at estimating the validity of a protein model in terms of commonly used experimental quality parameters on two test sets representing the full PDB database and a subset of obsolete PDB structures. Highly selective TAP thresholds are derived to recognize over 90% of the top experimental structures in the absence of experimental information. Both a web server and an executable version of the TAP score are available at . Conclusion A novel procedure for energy normalization (TAP) has significantly improved the possibility to recognize the best experimental structures. It will allow the user to more reliably isolate problematic structures in the context of automated experimental structure determination. PMID:17504537
Walder, J.S.
1997-01-01
We analyse a simple, physically-based model of breach formation in natural and constructed earthen dams to elucidate the principal factors controlling the flood hydrograph at the breach. Formation of the breach, which is assumed trapezoidal in cross-section, is parameterized by the mean rate of downcutting, k, the value of which is constrained by observations. A dimensionless formulation of the model leads to the prediction that the breach hydrograph depends upon lake shape, the ratio r of breach width to depth, the side slope ?? of the breach, and the parameter ?? = (V/ D3)(k/???gD), where V = lake volume, D = lake depth, and g is the acceleration due to gravity. Calculations show that peak discharge Qp depends weakly on lake shape r and ??, but strongly on ??, which is the product of a dimensionless lake volume and a dimensionless erosion rate. Qp(??) takes asymptotically distinct forms depending on whether ?? > 1. Theoretical predictions agree well with data from dam failures for which k could be reasonably estimated. The analysis provides a rapid and in many cases graphical way to estimate plausible values of Qp at the breach.
NASA Astrophysics Data System (ADS)
Kumar, V.; Nayagum, D.; Thornton, S.; Banwart, S.; Schuhmacher2, M.; Lerner, D.
2006-12-01
Characterization of uncertainty associated with groundwater quality models is often of critical importance, as for example in cases where environmental models are employed in risk assessment. Insufficient data, inherent variability and estimation errors of environmental model parameters introduce uncertainty into model predictions. However, uncertainty analysis using conventional methods such as standard Monte Carlo sampling (MCS) may not be efficient, or even suitable, for complex, computationally demanding models and involving different nature of parametric variability and uncertainty. General MCS or variant of MCS such as Latin Hypercube Sampling (LHS) assumes variability and uncertainty as a single random entity and the generated samples are treated as crisp assuming vagueness as randomness. Also when the models are used as purely predictive tools, uncertainty and variability lead to the need for assessment of the plausible range of model outputs. An improved systematic variability and uncertainty analysis can provide insight into the level of confidence in model estimates, and can aid in assessing how various possible model estimates should be weighed. The present study aims to introduce, Fuzzy Latin Hypercube Sampling (FLHS), a hybrid approach of incorporating cognitive and noncognitive uncertainties. The noncognitive uncertainty such as physical randomness, statistical uncertainty due to limited information, etc can be described by its own probability density function (PDF); whereas the cognitive uncertainty such estimation error etc can be described by the membership function for its fuzziness and confidence interval by ?-cuts. An important property of this theory is its ability to merge inexact generated data of LHS approach to increase the quality of information. The FLHS technique ensures that the entire range of each variable is sampled with proper incorporation of uncertainty and variability. A fuzzified statistical summary of the model results will produce indices of sensitivity and uncertainty that relate the effects of heterogeneity and uncertainty of input variables to model predictions. The feasibility of the method is validated to assess uncertainty propagation of parameter values for estimation of the contamination level of a drinking water supply well due to transport of dissolved phenolics from a contaminated site in the UK.
NASA Astrophysics Data System (ADS)
Siqueira, M. B.; Katul, G. G.
2009-12-01
A one-dimensional analytical model that predicts foliage CO2 uptake rates, turbulent fluxes, and mean concentration throughout the roughness sub-layer (RSL), a layer that extends from the ground surface up to 5 times the canopy height (h), is proposed. The model combines the mean continuity equation for CO2 with first-order closure principles for turbulent fluxes and simplified physiological and radiative transfer schemes for foliage uptake. This combination results in a second-order ordinary differential equation in which it is imposed soil respiration (RE) as lower and CO2 concentration well above the RSL as upper boundary conditions. An inverse version of the model was tested against data sets from two contrasting ecosystems: a tropical forest (TF, h=40 m) and a managed irrigated rice canopy (RC, h=0.7 m) - with good agreement noted between modeled and measured mean CO2 concentration profiles within the entire RSL (see figure). Sensitivity analysis on the model parameters revealed a plausible scaling regime between them and a dimensionless parameter defined by the ratio between external (RE) and internal (stomatal conductance) characteristics controlling the CO2 exchange process. The model can be used to infer the thickness of the RSL for CO2 exchange, the inequality in zero-plane displacement between CO2 and momentum, and its consequences on modeled CO2 fluxes. A simplified version of the solution is well suited for being incorporated into large-scale climate models. Furthermore, the model framework here can be used to a priori estimate relative contributions from the soil surface and the atmosphere to canopy-air CO2 concentration thereby making it synergetic to stable isotopes studies. Panels a) and c): Profiles of normalized measured leaf area density distribution (a) for TF and RC, respectively. Continuous lines are the constant a used in the model and dashed lines represent data-derived profiles. Panels b) and d) are modeled and ensemble-averaged measured CO2 profiles reference to the uppermost measured point for TF and RC, respectively.
Eocene Paleoclimate: Incredible or Uncredible? Model data syntheses raise questions.
NASA Astrophysics Data System (ADS)
Huber, M.
2012-04-01
Reconstructions of Eocene paleoclimate have pushed on the boundaries of climate dynamics theory for generations. While significant improvements in theory and models have brought them closer to the proxy data, the data themselves have shifted considerably. Tropical temperatures and greenhouse gas concentrations are now reconstructed to be higher than once thought--in agreement with models--but, many polar temperature reconstructions are even warmer than the eye popping numbers from only a decade ago. These interpretations of subtropical-to-tropical polar conditions once again challenge models and theory. But, the devil, is as always in the details and it is worthwhile to consider the range of potential uncertainties and biases in the paleoclimate record interpretations to evaluate the proposition that models and data may not materially disagree. It is necessary to ask whether current Eocene paleoclimate reconstructions are accurate enough to compellingly argue for a complete failure of climate models and theory. Careful consideration of Eocene model output and proxy data reveals that over most of the Earth the model agrees with the upper range of plausible tropical proxy data and the lower range of plausible high latitude proxy reconstructions. Implications for the sensitivity of global climate to greenhouse gas forcing are drawn for a range of potential Eocene climate scenarios ranging from a literal interpretation of one particular model to a literal interpretation of proxy data. Hope for a middle ground is found.
Bayesian learning and the psychology of rule induction
Endress, Ansgar D.
2014-01-01
In recent years, Bayesian learning models have been applied to an increasing variety of domains. While such models have been criticized on theoretical grounds, the underlying assumptions and predictions are rarely made concrete and tested experimentally. Here, I use Frank and Tenenbaum's (2011) Bayesian model of rule-learning as a case study to spell out the underlying assumptions, and to confront them with the empirical results Frank and Tenenbaum (2011) propose to simulate, as well as with novel experiments. While rule-learning is arguably well suited to rational Bayesian approaches, I show that their models are neither psychologically plausible nor ideal observer models. Further, I show that their central assumption is unfounded: humans do not always preferentially learn more specific rules, but, at least in some situations, those rules that happen to be more salient. Even when granting the unsupported assumptions, I show that all of the experiments modeled by Frank and Tenenbaum (2011) either contradict their models, or have a large number of more plausible interpretations. I provide an alternative account of the experimental data based on simple psychological mechanisms, and show that this account both describes the data better, and is easier to falsify. I conclude that, despite the recent surge in Bayesian models of cognitive phenomena, psychological phenomena are best understood by developing and testing psychological theories rather than models that can be fit to virtually any data. PMID:23454791
A Stochastic Model of Plausibility in Live Virtual Constructive Environments
2017-09-14
objective in virtual environment research and design is the maintenance of adequate consistency levels in the face of limited system resources such as...provides some commentary with regard to system design considerations and future research directions. II. SYSTEM MODEL DVEs are often designed as a...exceed the system’s requirements. Research into predictive models of virtual environment consistency is needed to provide designers the tools to
A One-System Theory Which is Not Propositional.
Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R
2009-04-01
We argue that the propositional and link-based approaches to human contingency learning represent different levels of analysis because propositional reasoning requires a basis, which is plausibly provided by a link-based architecture. Moreover, in their attempt to compare two general classes of models (link-based and propositional), Mitchell et al. have referred to only two generic models and ignore the large variety of different models within each class.
Cure models for the analysis of time-to-event data in cancer studies.
Jia, Xiaoyu; Sima, Camelia S; Brennan, Murray F; Panageas, Katherine S
2013-11-01
In settings when it is biologically plausible that some patients are cured after definitive treatment, cure models present an alternative to conventional survival analysis. Cure models can inform on the group of patients cured, by estimating the probability of cure, and identifying factors that influence it; while simultaneously focusing on time to recurrence and associated factors for the remaining patients. © 2013 Wiley Periodicals, Inc.
Entrainment to the CIECAM02 and CIELAB colour appearance models in the human cortex.
Thwaites, Andrew; Wingfield, Cai; Wieser, Eric; Soltan, Andrew; Marslen-Wilson, William D; Nimmo-Smith, Ian
2018-04-01
In human visual processing, information from the visual field passes through numerous transformations before perceptual attributes such as colour are derived. The sequence of transforms involved in constructing perceptions of colour can be approximated by colour appearance models such as the CIE (2002) colour appearance model, abbreviated as CIECAM02. In this study, we test the plausibility of CIECAM02 as a model of colour processing by looking for evidence of its cortical entrainment. The CIECAM02 model predicts that colour is split in to two opposing chromatic components, red-green and cyan-yellow (termed CIECAM02-a and CIECAM02-b respectively), and an achromatic component (termed CIECAM02-A). Entrainment of cortical activity to the outputs of these components was estimated using measurements of electro- and magnetoencephalographic (EMEG) activity, recorded while healthy subjects watched videos of dots changing colour. We find entrainment to chromatic component CIECAM02-a at approximately 35 ms latency bilaterally in occipital lobe regions, and entrainment to achromatic component CIECAM02-A at approximately 75 ms latency, also bilaterally in occipital regions. For comparison, transforms from a less physiologically plausible model (CIELAB) were also tested, with no significant entrainment found. Copyright © 2018 Elsevier Ltd. All rights reserved.
Radiative heating of interstellar grains falling toward the solar nebula: 1-D diffusion calculations
NASA Technical Reports Server (NTRS)
Simonelli, D. P.; Pollack, J. B.; McKay, C. P.
1997-01-01
As the dense molecular cloud that was the precursor of our Solar System was collapsing to form a protosun and the surrounding solar-nebula accretion disk, infalling interstellar grains were heated much more effectively by radiation from the forming protosun than by radiation from the disk's accretion shock. Accordingly, we have estimated the temperatures experienced by these infalling grains using radiative diffusion calculations whose sole energy source is radiation from the protosun. Although the calculations are 1-dimensional, they make use of 2-D, cylindrically symmetric models of the density structure of a collapsing, rotating cloud. The temperature calculations also utilize recent models for the composition and radiative properties of interstellar grains (Pollack et al. 1994. Astrophys. J. 421, 615-639), thereby allowing us to estimate which grain species might have survived, intact, to the disk accretion shock and what accretion rates and molecular-cloud rotation rates aid that survival. Not surprisingly, we find that the large uncertainties in the free parameter values allow a wide range of grain-survival results: (1) For physically plausible high accretion rates or low rotation rates (which produce small accretion disks), all of the infalling grain species, even the refractory silicates and iron, will vaporize in the protosun's radiation field before reaching the disk accretion shock. (2) For equally plausible low accretion rates or high rotation rates (which produce large accretion disks), all non-ice species, even volatile organics, will survive intact to the disk accretion shock. These grain-survival conclusions are subject to several limitations which need to be addressed by future, more sophisticated radiative-transfer models. Nevertheless, our results can serve as useful inputs to models of the processing that interstellar grains undergo at the solar nebula's accretion shock, and thus help address the broader question of interstellar inheritance in the solar nebula and present Solar System. These results may also help constrain the size of the accretion disk: for example, if we require that the calculations produce partial survival of organic grains into the solar nebula, we infer that some material entered the disk intact at distances comparable to or greater than a few AU. Intriguingly, this is comparable to the heliocentric distance that separates the C-rich outer parts of the current Solar System from the C-poor inner regions.
Simonelli, D P; Pollack, J B; McKay, C P
1997-02-01
As the dense molecular cloud that was the precursor of our Solar System was collapsing to form a protosun and the surrounding solar-nebula accretion disk, infalling interstellar grains were heated much more effectively by radiation from the forming protosun than by radiation from the disk's accretion shock. Accordingly, we have estimated the temperatures experienced by these infalling grains using radiative diffusion calculations whose sole energy source is radiation from the protosun. Although the calculations are 1-dimensional, they make use of 2-D, cylindrically symmetric models of the density structure of a collapsing, rotating cloud. The temperature calculations also utilize recent models for the composition and radiative properties of interstellar grains (Pollack et al. 1994. Astrophys. J. 421, 615-639), thereby allowing us to estimate which grain species might have survived, intact, to the disk accretion shock and what accretion rates and molecular-cloud rotation rates aid that survival. Not surprisingly, we find that the large uncertainties in the free parameter values allow a wide range of grain-survival results: (1) For physically plausible high accretion rates or low rotation rates (which produce small accretion disks), all of the infalling grain species, even the refractory silicates and iron, will vaporize in the protosun's radiation field before reaching the disk accretion shock. (2) For equally plausible low accretion rates or high rotation rates (which produce large accretion disks), all non-ice species, even volatile organics, will survive intact to the disk accretion shock. These grain-survival conclusions are subject to several limitations which need to be addressed by future, more sophisticated radiative-transfer models. Nevertheless, our results can serve as useful inputs to models of the processing that interstellar grains undergo at the solar nebula's accretion shock, and thus help address the broader question of interstellar inheritance in the solar nebula and present Solar System. These results may also help constrain the size of the accretion disk: for example, if we require that the calculations produce partial survival of organic grains into the solar nebula, we infer that some material entered the disk intact at distances comparable to or greater than a few AU. Intriguingly, this is comparable to the heliocentric distance that separates the C-rich outer parts of the current Solar System from the C-poor inner regions.
Shivkumar, Sabyasachi; Muralidharan, Vignesh; Chakravarthy, V Srinivasa
2017-01-01
Basal ganglia circuit is an important subcortical system of the brain thought to be responsible for reward-based learning. Striatum, the largest nucleus of the basal ganglia, serves as an input port that maps cortical information. Microanatomical studies show that the striatum is a mosaic of specialized input-output structures called striosomes and regions of the surrounding matrix called the matrisomes. We have developed a computational model of the striatum using layered self-organizing maps to capture the center-surround structure seen experimentally and explain its functional significance. We believe that these structural components could build representations of state and action spaces in different environments. The striatum model is then integrated with other components of basal ganglia, making it capable of solving reinforcement learning tasks. We have proposed a biologically plausible mechanism of action-based learning where the striosome biases the matrisome activity toward a preferred action. Several studies indicate that the striatum is critical in solving context dependent problems. We build on this hypothesis and the proposed model exploits the modularity of the striatum to efficiently solve such tasks.
Shivkumar, Sabyasachi; Muralidharan, Vignesh; Chakravarthy, V. Srinivasa
2017-01-01
Basal ganglia circuit is an important subcortical system of the brain thought to be responsible for reward-based learning. Striatum, the largest nucleus of the basal ganglia, serves as an input port that maps cortical information. Microanatomical studies show that the striatum is a mosaic of specialized input-output structures called striosomes and regions of the surrounding matrix called the matrisomes. We have developed a computational model of the striatum using layered self-organizing maps to capture the center-surround structure seen experimentally and explain its functional significance. We believe that these structural components could build representations of state and action spaces in different environments. The striatum model is then integrated with other components of basal ganglia, making it capable of solving reinforcement learning tasks. We have proposed a biologically plausible mechanism of action-based learning where the striosome biases the matrisome activity toward a preferred action. Several studies indicate that the striatum is critical in solving context dependent problems. We build on this hypothesis and the proposed model exploits the modularity of the striatum to efficiently solve such tasks. PMID:28680395
Delta: a new web-based 3D genome visualization and analysis platform.
Tang, Bixia; Li, Feifei; Li, Jing; Zhao, Wenming; Zhang, Zhihua
2018-04-15
Delta is an integrative visualization and analysis platform to facilitate visually annotating and exploring the 3D physical architecture of genomes. Delta takes Hi-C or ChIA-PET contact matrix as input and predicts the topologically associating domains and chromatin loops in the genome. It then generates a physical 3D model which represents the plausible consensus 3D structure of the genome. Delta features a highly interactive visualization tool which enhances the integration of genome topology/physical structure with extensive genome annotation by juxtaposing the 3D model with diverse genomic assay outputs. Finally, by visually comparing the 3D model of the β-globin gene locus and its annotation, we speculated a plausible transitory interaction pattern in the locus. Experimental evidence was found to support this speculation by literature survey. This served as an example of intuitive hypothesis testing with the help of Delta. Delta is freely accessible from http://delta.big.ac.cn, and the source code is available at https://github.com/zhangzhwlab/delta. zhangzhihua@big.ac.cn. Supplementary data are available at Bioinformatics online.
Computational analyses in cognitive neuroscience: in defense of biological implausibility.
Dror, I E; Gallogly, D P
1999-06-01
Because cognitive neuroscience researchers attempt to understand the human mind by bridging behavior and brain, they expect computational analyses to be biologically plausible. In this paper, biologically implausible computational analyses are shown to have critical and essential roles in the various stages and domains of cognitive neuroscience research. Specifically, biologically implausible computational analyses can contribute to (1) understanding and characterizing the problem that is being studied, (2) examining the availability of information and its representation, and (3) evaluating and understanding the neuronal solution. In the context of the distinct types of contributions made by certain computational analyses, the biological plausibility of those analyses is altogether irrelevant. These biologically implausible models are nevertheless relevant and important for biologically driven research.
Plausibility Judgments in Conceptual Change and Epistemic Cognition
ERIC Educational Resources Information Center
Lombardi, Doug; Nussbaum, E. Michael; Sinatra, Gale M.
2016-01-01
Plausibility judgments rarely have been addressed empirically in conceptual change research. Recent research, however, suggests that these judgments may be pivotal to conceptual change about certain topics where a gap exists between what scientists and laypersons find plausible. Based on a philosophical and empirical foundation, this article…
Identifying Asteroidal Parent Bodies of the Meteorites: The Last Lap
NASA Technical Reports Server (NTRS)
Gaffey, M. J.
2000-01-01
Spectral studies of asteroids and dynamical models have converged to yield, at last, a clear view of asteroid-meteorite linkages. Plausible parent bodies for most meteorite types have either been identified or it has become evident where to search for them.
Precision modelling of M dwarf stars: the magnetic components of CM Draconis
NASA Astrophysics Data System (ADS)
MacDonald, J.; Mullan, D. J.
2012-04-01
The eclipsing binary CM Draconis (CM Dra) contains two nearly identical red dwarfs of spectral class dM4.5. The masses and radii of the two components have been reported with unprecedentedly small statistical errors: for M, these errors are 1 part in 260, while for R, the errors reported by Morales et al. are 1 part in 130. When compared with standard stellar models with appropriate mass and age (≈4 Gyr), the empirical results indicate that both components are discrepant from the models in the following sense: the observed stars are larger in R ('bloated'), by several standard deviations, than the models predict. The observed luminosities are also lower than the models predict. Here, we attempt at first to model the two components of CM Dra in the context of standard (non-magnetic) stellar models using a systematic array of different assumptions about helium abundances (Y), heavy element abundances (Z), opacities and mixing length parameter (α). We find no 4-Gyr-old models with plausible values of these four parameters that fit the observed L and R within the reported statistical error bars. However, CM Dra is known to contain magnetic fields, as evidenced by the occurrence of star-spots and flares. Here we ask: can inclusion of magnetic effects into stellar evolution models lead to fits of L and R within the error bars? Morales et al. have reported that the presence of polar spots results in a systematic overestimate of R by a few per cent when eclipses are interpreted with a standard code. In a star where spots cover a fraction f of the surface area, we find that the revised R and L for CM Dra A can be fitted within the error bars by varying the parameter α. The latter is often assumed to be reduced by the presence of magnetic fields, although the reduction in α as a function of B is difficult to quantify. An alternative magnetic effect, namely inhibition of the onset of convection, can be readily quantified in terms of a magnetic parameter δ≈B2/4πγpgas (where B is the strength of the local vertical magnetic field). In the context of δ models in which B is not allowed to exceed a 'ceiling' of 106 G, we find that the revised R and L can also be fitted, within the error bars, in a finite region of the f-δ plane. The permitted values of δ near the surface leads us to estimate that the vertical field strength on the surface of CM Dra A is about 500 G, in good agreement with independent observational evidence for similar low-mass stars. Recent results for another binary with parameters close to those of CM Dra suggest that metallicity differences cannot be the dominant explanation for the bloating of the two components of CM Dra.
Mori, Amani T; Ngalesoni, Frida; Norheim, Ole F; Robberstad, Bjarne
2014-09-15
Dihydroartemisinin-piperaquine (DhP) is highly recommended for the treatment of uncomplicated malaria. This study aims to compare the costs, health benefits and cost-effectiveness of DhP and artemether-lumefantrine (AL) alongside "do-nothing" as a baseline comparator in order to consider the appropriateness of DhP as a first-line anti-malarial drug for children in Tanzania. A cost-effectiveness analysis was performed using a Markov decision model, from a provider's perspective. The study used cost data from Tanzania and secondary effectiveness data from a review of articles from sub-Saharan Africa. Probabilistic sensitivity analysis was used to incorporate uncertainties in the model parameters. In addition, sensitivity analyses were used to test plausible variations of key parameters and the key assumptions were tested in scenario analyses. The model predicts that DhP is more cost-effective than AL, with an incremental cost-effectiveness ratio (ICER) of US$ 12.40 per DALY averted. This result relies on the assumption that compliance to treatment with DhP is higher than that with AL due to its relatively simple once-a-day dosage regimen. When compliance was assumed to be identical for the two drugs, AL was more cost-effective than DhP with an ICER of US$ 12.54 per DALY averted. DhP is, however, slightly more likely to be cost-effective compared to a willingness-to-pay threshold of US$ 150 per DALY averted. Dihydroartemisinin-piperaquine is a very cost-effective anti-malarial drug. The findings support its use as an alternative first-line drug for treatment of uncomplicated malaria in children in Tanzania and other sub-Saharan African countries with similar healthcare infrastructures and epidemiology of malaria.
Carlos, Fernando; Espejel, Luis; Novick, Diego; López, Rubén; Flores, Daniel
2015-09-25
Painful diabetic peripheral neuropathy affects 40-50% of patients with diabetic neuropathy, leading to impaired quality of life and substantial costs. Duloxetine and pregabalin have evidence-based support, and are formally approved for controlling painful diabetic peripheral neuropathy. We used a 12-week decision model for examining painful diabetic peripheral neuropathy first-line therapy with daily doses of duloxetine 60mg or pregabalin 300mg, under the perspective of the Instituto Venezolano de los Seguros Sociales. We gathered model parameters from published literature and expert´s opinion, focusing on the magnitude of pain relief, the presence of adverse events, the possibility of withdrawal owing to intolerable adverse events or due to lack of efficacy, and the quality-adjusted life years expected in each strategy. We analyzed direct medical costs (which are expressed in Bolívares Fuertes, BsF) comprising drug acquisition besides additional care devoted to treatment of adverse events and poor pain relief. We conducted both deterministic and probabilistic sensitivity analyses. Total expected costs per 1000 patients were BsF 1 046 146 (26%) lower with duloxetine than with pregabalin. Most of these savings (91%) corresponds to the difference in the acquisitions cost of each medication. duloxetine also provided 23 more patients achieving good pain relief and a gain of about two quality-adjusted life years per 1000 treated. Model was robust to plausible changes in main parameters. Duloxetine remained the preferred option in 93.9% of the second-order Monte Carlo simulations. This study suggests duloxetine dominates (i.e., is more effective and lead to gains in quality-adjusted life years), remaining less costly than pregabalin for treatment of painful diabetic peripheral neuropathy.
Least-squares reverse time migration in elastic media
NASA Astrophysics Data System (ADS)
Ren, Zhiming; Liu, Yang; Sen, Mrinal K.
2017-02-01
Elastic reverse time migration (RTM) can yield accurate subsurface information (e.g. PP and PS reflectivity) by imaging the multicomponent seismic data. However, the existing RTM methods are still insufficient to provide satisfactory results because of the finite recording aperture, limited bandwidth and imperfect illumination. Besides, the P- and S-wave separation and the polarity reversal correction are indispensable in conventional elastic RTM. Here, we propose an iterative elastic least-squares RTM (LSRTM) method, in which the imaging accuracy is improved gradually with iteration. We first use the Born approximation to formulate the elastic de-migration operator, and employ the Lagrange multiplier method to derive the adjoint equations and gradients with respect to reflectivity. Then, an efficient inversion workflow (only four forward computations needed in each iteration) is introduced to update the reflectivity. Synthetic and field data examples reveal that the proposed LSRTM method can obtain higher-quality images than the conventional elastic RTM. We also analyse the influence of model parametrizations and misfit functions in elastic LSRTM. We observe that Lamé parameters, velocity and impedance parametrizations have similar and plausible migration results when the structures of different models are correlated. For an uncorrelated subsurface model, velocity and impedance parametrizations produce fewer artefacts caused by parameter crosstalk than the Lamé coefficient parametrization. Correlation- and convolution-type misfit functions are effective when amplitude errors are involved and the source wavelet is unknown, respectively. Finally, we discuss the dependence of elastic LSRTM on migration velocities and its antinoise ability. Imaging results determine that the new elastic LSRTM method performs well as long as the low-frequency components of migration velocities are correct. The quality of images of elastic LSRTM degrades with increasing noise.
Estimates of live-tree carbon stores in the Pacific Northwest are sensitive to model selection
Susanna L. Melson; Mark E. Harmon; Jeremy S. Fried; James B. Domingo
2011-01-01
Estimates of live-tree carbon stores are influenced by numerous uncertainties. One of them is model-selection uncertainty: one has to choose among multiple empirical equations and conversion factors that can be plausibly justified as locally applicable to calculate the carbon store from inventory measurements such as tree height and diameter at breast height (DBH)....
Alternative supply specifications and estimates of regional supply and demand for stumpage.
Kent P. Connaughton; David H. Jackson; Gerard A. Majerus
1988-01-01
Four plausible sets of stumpage supply and demand equations were developed and estimated; the demand equation was the same for each set, although the supply equation differed. The supply specifications varied from the model of regional excess demand in which National Forest harvest levels were assumed fixed to a more realistic model in which the harvest on the National...
NASA Astrophysics Data System (ADS)
Costa, Veber; Fernandes, Wilson
2017-11-01
Extreme flood estimation has been a key research topic in hydrological sciences. Reliable estimates of such events are necessary as structures for flood conveyance are continuously evolving in size and complexity and, as a result, their failure-associated hazards become more and more pronounced. Due to this fact, several estimation techniques intended to improve flood frequency analysis and reducing uncertainty in extreme quantile estimation have been addressed in the literature in the last decades. In this paper, we develop a Bayesian framework for the indirect estimation of extreme flood quantiles from rainfall-runoff models. In the proposed approach, an ensemble of long daily rainfall series is simulated with a stochastic generator, which models extreme rainfall amounts with an upper-bounded distribution function, namely, the 4-parameter lognormal model. The rationale behind the generation model is that physical limits for rainfall amounts, and consequently for floods, exist and, by imposing an appropriate upper bound for the probabilistic model, more plausible estimates can be obtained for those rainfall quantiles with very low exceedance probabilities. Daily rainfall time series are converted into streamflows by routing each realization of the synthetic ensemble through a conceptual hydrologic model, the Rio Grande rainfall-runoff model. Calibration of parameters is performed through a nonlinear regression model, by means of the specification of a statistical model for the residuals that is able to accommodate autocorrelation, heteroscedasticity and nonnormality. By combining the outlined steps in a Bayesian structure of analysis, one is able to properly summarize the resulting uncertainty and estimating more accurate credible intervals for a set of flood quantiles of interest. The method for extreme flood indirect estimation was applied to the American river catchment, at the Folsom dam, in the state of California, USA. Results show that most floods, including exceptionally large non-systematic events, were reasonably estimated with the proposed approach. In addition, by accounting for uncertainties in each modeling step, one is able to obtain a better understanding of the influential factors in large flood formation dynamics.
Bowers, Jeffrey S
2009-01-01
A fundamental claim associated with parallel distributed processing (PDP) theories of cognition is that knowledge is coded in a distributed manner in mind and brain. This approach rejects the claim that knowledge is coded in a localist fashion, with words, objects, and simple concepts (e.g. "dog"), that is, coded with their own dedicated representations. One of the putative advantages of this approach is that the theories are biologically plausible. Indeed, advocates of the PDP approach often highlight the close parallels between distributed representations learned in connectionist models and neural coding in brain and often dismiss localist (grandmother cell) theories as biologically implausible. The author reviews a range a data that strongly challenge this claim and shows that localist models provide a better account of single-cell recording studies. The author also contrast local and alternative distributed coding schemes (sparse and coarse coding) and argues that common rejection of grandmother cell theories in neuroscience is due to a misunderstanding about how localist models behave. The author concludes that the localist representations embedded in theories of perception and cognition are consistent with neuroscience; biology only calls into question the distributed representations often learned in PDP models.
Dynamical simulation priors for human motion tracking.
Vondrak, Marek; Sigal, Leonid; Jenkins, Odest Chadwicke
2013-01-01
We propose a simulation-based dynamical motion prior for tracking human motion from video in presence of physical ground-person interactions. Most tracking approaches to date have focused on efficient inference algorithms and/or learning of prior kinematic motion models; however, few can explicitly account for the physical plausibility of recovered motion. Here, we aim to recover physically plausible motion of a single articulated human subject. Toward this end, we propose a full-body 3D physical simulation-based prior that explicitly incorporates a model of human dynamics into the Bayesian filtering framework. We consider the motion of the subject to be generated by a feedback “control loop” in which Newtonian physics approximates the rigid-body motion dynamics of the human and the environment through the application and integration of interaction forces, motor forces, and gravity. Interaction forces prevent physically impossible hypotheses, enable more appropriate reactions to the environment (e.g., ground contacts), and are produced from detected human-environment collisions. Motor forces actuate the body, ensure that proposed pose transitions are physically feasible, and are generated using a motion controller. For efficient inference in the resulting high-dimensional state space, we utilize an exemplar-based control strategy that reduces the effective search space of motor forces. As a result, we are able to recover physically plausible motion of human subjects from monocular and multiview video. We show, both quantitatively and qualitatively, that our approach performs favorably with respect to Bayesian filtering methods with standard motion priors.
Plausible carrier transport model in organic-inorganic hybrid perovskite resistive memory devices
NASA Astrophysics Data System (ADS)
Park, Nayoung; Kwon, Yongwoo; Choi, Jaeho; Jang, Ho Won; Cha, Pil-Ryung
2018-04-01
We demonstrate thermally assisted hopping (TAH) as an appropriate carrier transport model for CH3NH3PbI3 resistive memories. Organic semiconductors, including organic-inorganic hybrid perovskites, have been previously speculated to follow the space-charge-limited conduction (SCLC) model. However, the SCLC model cannot reproduce the temperature dependence of experimental current-voltage curves. Instead, the TAH model with temperature-dependent trap densities and a constant trap level are demonstrated to well reproduce the experimental results.
Computational modeling of peripheral pain: a commentary.
Argüello, Erick J; Silva, Ricardo J; Huerta, Mónica K; Avila, René S
2015-06-11
This commentary is intended to find possible explanations for the low impact of computational modeling on pain research. We discuss the main strategies that have been used in building computational models for the study of pain. The analysis suggests that traditional models lack biological plausibility at some levels, they do not provide clinically relevant results, and they cannot capture the stochastic character of neural dynamics. On this basis, we provide some suggestions that may be useful in building computational models of pain with a wider range of applications.
Source Effects and Plausibility Judgments When Reading about Climate Change
ERIC Educational Resources Information Center
Lombardi, Doug; Seyranian, Viviane; Sinatra, Gale M.
2014-01-01
Gaps between what scientists and laypeople find plausible may act as a barrier to learning complex and/or controversial socioscientific concepts. For example, individuals may consider scientific explanations that human activities are causing current climate change as implausible. This plausibility judgment may be due-in part-to individuals'…
Plausibility and Perspective Influence the Processing of Counterfactual Narratives
ERIC Educational Resources Information Center
Ferguson, Heather J.; Jayes, Lewis T.
2018-01-01
Previous research has established that readers' eye movements are sensitive to the difficulty with which a word is processed. One important factor that influences processing is the fit of a word within the wider context, including its plausibility. Here we explore the influence of plausibility in counterfactual language processing. Counterfactuals…
NASA Astrophysics Data System (ADS)
Kurosawa, Kosuke; Okamoto, Takaya; Genda, Hidenori
2018-02-01
Hypervelocity ejection of material by impact spallation is considered a plausible mechanism for material exchange between two planetary bodies. We have modeled the spallation process during vertical impacts over a range of impact velocities from 6 to 21 km/s using both grid- and particle-based hydrocode models. The Tillotson equations of state, which are able to treat the nonlinear dependence of density on pressure and thermal pressure in strongly shocked matter, were used to study the hydrodynamic-thermodynamic response after impacts. The effects of material strength and gravitational acceleration were not considered. A two-dimensional time-dependent pressure field within a 1.5-fold projectile radius from the impact point was investigated in cylindrical coordinates to address the generation of spalled material. A resolution test was also performed to reject ejected materials with peak pressures that were too low due to artificial viscosity. The relationship between ejection velocity veject and peak pressure Ppeak was also derived. Our approach shows that "late-stage acceleration" in an ejecta curtain occurs due to the compressible nature of the ejecta, resulting in an ejection velocity that can be higher than the ideal maximum of the resultant particle velocity after passage of a shock wave. We also calculate the ejecta mass that can escape from a planet like Mars (i.e., veject > 5 km/s) that matches the petrographic constraints from Martian meteorites, and which occurs when Ppeak = 30-50 GPa. Although the mass of such ejecta is limited to 0.1-1 wt% of the projectile mass in vertical impacts, this is sufficient for spallation to have been a plausible mechanism for the ejection of Martian meteorites. Finally, we propose that impact spallation is a plausible mechanism for the generation of tektites.
Empirical agreement in model validation.
Jebeile, Julie; Barberousse, Anouk
2016-04-01
Empirical agreement is often used as an important criterion when assessing the validity of scientific models. However, it is by no means a sufficient criterion as a model can be so adjusted as to fit available data even though it is based on hypotheses whose plausibility is known to be questionable. Our aim in this paper is to investigate into the uses of empirical agreement within the process of model validation. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Akao, Akihiko; Ogawa, Yutaro; Jimbo, Yasuhiko; Ermentrout, G. Bard; Kotani, Kiyoshi
2018-01-01
Gamma oscillations are thought to play an important role in brain function. Interneuron gamma (ING) and pyramidal interneuron gamma (PING) mechanisms have been proposed as generation mechanisms for these oscillations. However, the relation between the generation mechanisms and the dynamical properties of the gamma oscillation are still unclear. Among the dynamical properties of the gamma oscillation, the phase response function (PRF) is important because it encodes the response of the oscillation to inputs. Recently, the PRF for an inhibitory population of modified theta neurons that generate an ING rhythm was computed by the adjoint method applied to the associated Fokker-Planck equation (FPE) for the model. The modified theta model incorporates conductance-based synapses as well as the voltage and current dynamics. Here, we extended this previous work by creating an excitatory-inhibitory (E-I) network using the modified theta model and described the population dynamics with the corresponding FPE. We conducted a bifurcation analysis of the FPE to find parameter regions which generate gamma oscillations. In order to label the oscillatory parameter regions by their generation mechanisms, we defined ING- and PING-type gamma oscillation in a mathematically plausible way based on the driver of the inhibitory population. We labeled the oscillatory parameter regions by these generation mechanisms and derived PRFs via the adjoint method on the FPE in order to investigate the differences in the responses of each type of oscillation to inputs. PRFs for PING and ING mechanisms are derived and compared. We found the amplitude of the PRF for the excitatory population is larger in the PING case than in the ING case. Finally, the E-I population of the modified theta neuron enabled us to analyze the PRFs of PING-type gamma oscillation and the entrainment ability of E and I populations. We found a parameter region in which PRFs of E and I are both purely positive in the case of PING oscillations. The different entrainment abilities of E and I stimulation as governed by the respective PRFs was compared to direct simulations of finite populations of model neurons. We find that it is easier to entrain the gamma rhythm by stimulating the inhibitory population than by stimulating the excitatory population as has been found experimentally.
NASA Technical Reports Server (NTRS)
Colgan, William; Rajaram, Harihar; Anderson, Robert; Steffen. Konrad; Phillips, Thomas; Zwally, H. Jay; Abdalati, Waleed
2012-01-01
We apply a novel one-dimensional glacier hydrology model that calculates hydraulic head to the tidewater-terminating Sermeq Avannarleq flowline of the Greenland ice sheet. Within a plausible parameter space, the model achieves a quasi-steady-state annual cycle in which hydraulic head oscillates close to flotation throughout the ablation zone. Flotation is briefly achieved during the summer melt season along a approx.17 km stretch of the approx.50 km of flowline within the ablation zone. Beneath the majority of the flowline, subglacial conduit storage closes (i.e. obtains minimum radius) during the winter and opens (i.e. obtains maximum radius) during the summer. Along certain stretches of the flowline, the model predicts that subglacial conduit storage remains open throughout the year. A calculated mean glacier water residence time of approx.2.2 years implies that significant amounts of water are stored in the glacier throughout the year. We interpret this residence time as being indicative of the timescale over which the glacier hydrologic system is capable of adjusting to external surface meltwater forcings. Based on in situ ice velocity observations, we suggest that the summer speed-up event generally corresponds to conditions of increasing hydraulic head during inefficient subglacial drainage. Conversely, the slowdown during fall generally corresponds to conditions of decreasing hydraulic head during efficient subglacial drainage.