Extensions of Rasch's Multiplicative Poisson Model.
ERIC Educational Resources Information Center
Jansen, Margo G. H.; van Duijn, Marijtje A. J.
1992-01-01
A model developed by G. Rasch that assumes scores on some attainment tests can be realizations of a Poisson process is explained and expanded by assuming a prior distribution, with fixed but unknown parameters, for the subject parameters. How additional between-subject and within-subject factors can be incorporated is discussed. (SLD)
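To make the structure concrete, here is a minimal sketch, assuming illustrative gamma-prior and test parameters (names like `ability` and `difficulty` are mine, not from the paper): counts follow a Poisson law whose subject rate is itself gamma-distributed, so the marginal count distribution is negative binomial.

```python
# Hedged sketch of a multiplicative Poisson model with a gamma prior on the
# subject parameters; all parameter values are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_subjects = 500
shape, rate = 3.0, 1.5                        # assumed gamma prior on subject rates
difficulty = np.array([0.8, 1.0, 1.2, 1.5])   # test-specific multiplicative factors

ability = rng.gamma(shape, 1.0 / rate, size=n_subjects)       # theta_i ~ Gamma
counts = rng.poisson(ability[:, None] * difficulty[None, :])  # X_ij ~ Poisson(theta_i * delta_j)

# Marginally (integrating theta out), X_ij is negative binomial:
# X_ij ~ NB(r = shape, p = rate / (rate + delta_j)).
for j, d in enumerate(difficulty):
    p = rate / (rate + d)
    print(f"test {j}: sample mean {counts[:, j].mean():.2f}, "
          f"NB mean {stats.nbinom.mean(shape, p):.2f}")
```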
Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.
Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J
2018-05-24
Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.
Analytical performance evaluation of SAR ATR with inaccurate or estimated models
NASA Astrophysics Data System (ADS)
DeVore, Michael D.
2004-09-01
Hypothesis testing algorithms for automatic target recognition (ATR) are often formulated in terms of some assumed distribution family. The parameter values corresponding to a particular target class together with the distribution family constitute a model for the target's signature. In practice such models exhibit inaccuracy because of incorrect assumptions about the distribution family and/or because of errors in the assumed parameter values, which are often determined experimentally. Model inaccuracy can have a significant impact on performance predictions for target recognition systems. Such inaccuracy often causes model-based predictions that ignore the difference between assumed and actual distributions to be overly optimistic. This paper reports on research to quantify the effect of inaccurate models on performance prediction and to estimate the effect using only trained parameters. We demonstrate that for large observation vectors the class-conditional probabilities of error can be expressed as a simple function of the difference between two relative entropies. These relative entropies quantify the discrepancies between the actual and assumed distributions and can be used to express the difference between actual and predicted error rates. Focusing on the problem of ATR from synthetic aperture radar (SAR) imagery, we present estimators of the probabilities of error in both ideal and plug-in tests expressed in terms of the trained model parameters. These estimators are defined in terms of unbiased estimates for the first two moments of the sample statistic. We present an analytical treatment of these results and include demonstrations from simulated radar data.
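As a rough illustration of the central quantity, the sketch below computes relative entropies (KL divergences) between actual and assumed (trained) class-conditional densities and their difference, which the paper relates to the class-conditional probabilities of error for large observation vectors. The Gaussian forms and all numbers are illustrative assumptions, not the paper's SAR models.

```python
# Hedged sketch: KL divergences between actual and trained Gaussian models.
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ) in nats, closed form."""
    d = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

d = 16                                    # observation-vector dimension
mu_actual, cov_actual = np.zeros(d), np.eye(d)
mu_trained0 = mu_actual + 0.05            # trained parameters slightly off, class 0
mu_trained1 = (mu_actual + 1.0) - 0.10    # a different training error for class 1
cov_trained = 1.1 * np.eye(d)

kl0 = gaussian_kl(mu_actual, cov_actual, mu_trained0, cov_trained)
kl1 = gaussian_kl(mu_actual + 1.0, cov_actual, mu_trained1, cov_trained)
print(kl0, kl1, kl0 - kl1)   # the difference drives the gap between actual and predicted error
```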
Model-Based In Situ Parameter Estimation of Ultrasonic Guided Waves in an Isotropic Plate
NASA Astrophysics Data System (ADS)
Hall, James S.; Michaels, Jennifer E.
2010-02-01
Most ultrasonic systems employing guided waves for flaw detection require information such as dispersion curves, transducer locations, and expected propagation loss. Degraded system performance may result if assumed parameter values do not accurately reflect the actual environment. By characterizing the propagating environment in situ at the time of test, potentially erroneous a priori estimates are avoided and the performance of ultrasonic guided wave systems can be improved. A four-part model-based algorithm is described, in the context of previous work, that estimates the parameters of an assumed propagation model used to describe the received signals. This approach builds upon that work by demonstrating the ability to estimate parameters for the case of single-mode propagation. Performance is demonstrated on signals obtained from theoretical dispersion curves, finite element modeling, and experimental data.
The Impact of Individualizing Student Models on Necessary Practice Opportunities
ERIC Educational Resources Information Center
Lee, Jung In; Brunskill, Emma
2012-01-01
When modeling student learning, tutors that use the Knowledge Tracing framework often assume that all students have the same set of model parameters. We find that when fitting parameters to individual students, there is significant variation among the individual's parameters. We examine if this variation is important in terms of instructional…
A general model for attitude determination error analysis
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Seidewitz, Ed; Nicholson, Mark
1988-01-01
An overview is given of a comprehensive approach to filter and dynamics modeling for attitude determination error analysis. The models presented include both batch least-squares and sequential attitude estimation processes for both spin-stabilized and three-axis stabilized spacecraft. The discussion includes a brief description of a dynamics model of strapdown gyros, but it does not cover other sensor models. Model parameters can be chosen to be solve-for parameters, which are assumed to be estimated as part of the determination process, or consider parameters, which are assumed to have errors but not to be estimated. The only restriction on this choice is that the time evolution of the consider parameters must not depend on any of the solve-for parameters. The result of an error analysis is an indication of the contributions of the various error sources to the uncertainties in the determination of the spacecraft solve-for parameters. The model presented gives the uncertainty due to errors in the a priori estimates of the solve-for parameters, the uncertainty due to measurement noise, the uncertainty due to dynamic noise (also known as process noise or plant noise), the uncertainty due to the consider parameters, and the overall uncertainty due to all these sources of error.
Well behaved anisotropic compact star models in general relativity
NASA Astrophysics Data System (ADS)
Jasim, M. K.; Maurya, S. K.; Gupta, Y. K.; Dayanandan, B.
2016-11-01
Anisotropic compact star models have been constructed by assuming a particular form of the metric function e^{λ}. We solved the Einstein field equations to determine the metric function e^{ν}. For this purpose we assumed a physically valid expression for the radial pressure (p_r). The obtained anisotropic compact star model represents realistic compact objects such as PSR 1937+21. We have carried out an extensive study of the physical parameters of the anisotropic models and found that these parameters are well behaved throughout the interior of the star. We have also determined the equation of state for the compact star, which shows that the radial pressure is purely a function of the density, i.e., p_r = f(ρ).
Parameter Recovery for the 1-P HGLLM with Non-Normally Distributed Level-3 Residuals
ERIC Educational Resources Information Center
Kara, Yusuf; Kamata, Akihito
2017-01-01
A multilevel Rasch model using a hierarchical generalized linear model is one approach to multilevel item response theory (IRT) modeling and is referred to as a one-parameter hierarchical generalized linear logistic model (1-P HGLLM). Although it has the flexibility to model nested structure of data with covariates, the model assumes the normality…
ERIC Educational Resources Information Center
DeMars, Christine E.
2012-01-01
In structural equation modeling software, either limited-information (bivariate proportions) or full-information item parameter estimation routines could be used for the 2-parameter item response theory (IRT) model. Limited-information methods assume the continuous variable underlying an item response is normally distributed. For skewed and…
M-estimator for the 3D symmetric Helmert coordinate transformation
NASA Astrophysics Data System (ADS)
Chang, Guobin; Xu, Tianhe; Wang, Qianxin
2018-01-01
The M-estimator for the 3D symmetric Helmert coordinate transformation problem is developed. The small-angle rotation assumption is abandoned. The direction cosine matrix or the quaternion is used to represent the rotation. The 3 × 1 multiplicative error vector is defined to represent the rotation estimation error. An analytical solution can be employed to provide the initial approximation for the iteration, if the outliers are not large. The iteration is carried out using the iteratively reweighted least-squares scheme. In each iteration after the first one, the measurement equation is linearized using the available parameter estimates, the reweighting matrix is constructed using the residuals obtained in the previous iteration, and then the parameter estimates with their variance-covariance matrix are calculated. The influence functions of a single pseudo-measurement on the least-squares estimator and on the M-estimator are derived to theoretically show the robustness. In the solution process, the parameter is rescaled in order to improve the numerical stability. Monte Carlo experiments are conducted to check the developed method. Different cases are considered to investigate whether the assumed stochastic model is correct. The results with simulated data slightly deviating from the true model are used to show the developed method's statistical efficacy at the assumed stochastic model, its robustness against deviations from that model, and the validity of the estimated variance-covariance matrix whether or not the assumed stochastic model is correct.
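The reweighting loop at the core of such an M-estimator can be sketched generically. The following is a minimal IRLS example with Huber weights on a plain linear model; the Helmert-transformation specifics, quaternion parameterization, and rescaling are omitted, and all names are illustrative.

```python
# Minimal sketch of iteratively reweighted least squares (IRLS) with Huber
# weights, assuming a generic linear model y = A x + e.
import numpy as np

def huber_weights(residuals, k=1.345):
    scale = 1.4826 * np.median(np.abs(residuals)) + 1e-12  # robust scale (MAD)
    u = np.abs(residuals) / scale
    return np.where(u <= k, 1.0, k / u)

def irls(A, y, n_iter=20):
    x = np.linalg.lstsq(A, y, rcond=None)[0]   # LS solution as initial value
    for _ in range(n_iter):
        r = y - A @ x
        W = huber_weights(r)
        AtW = A.T * W                          # A^T diag(W)
        x = np.linalg.solve(AtW @ A, AtW @ y)  # weighted normal equations
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true + 0.01 * rng.normal(size=100)
y[::17] += 5.0                                 # inject outliers
print(irls(A, y))                              # close to x_true despite outliers
```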
An IRT Model with a Parameter-Driven Process for Change
ERIC Educational Resources Information Center
Rijmen, Frank; De Boeck, Paul; van der Maas, Han L. J.
2005-01-01
An IRT model with a parameter-driven process for change is proposed. Quantitative differences between persons are taken into account by a continuous latent variable, as in common IRT models. In addition, qualitative inter-individual differences and auto-dependencies are accounted for by assuming within-subject variability with respect to the…
BAYESIAN PARAMETER ESTIMATION IN A MIXED-ORDER MODEL OF BOD DECAY. (U915590)
We describe a generalized version of the BOD decay model in which the reaction is allowed to assume an order other than one. This is accomplished by making the exponent on BOD concentration a free parameter to be determined by the data. This "mixed-order" model may be ...
Monte Carlo exploration of Mikheyev-Smirnov-Wolfenstein solutions to the solar neutrino problem
NASA Technical Reports Server (NTRS)
Shi, X.; Schramm, D. N.; Bahcall, J. N.
1992-01-01
The paper explores the impact of astrophysical uncertainties on the Mikheyev-Smirnov-Wolfenstein (MSW) solution by calculating the allowed MSW solutions for 1000 different solar models with a Monte Carlo selection of solar model input parameters, assuming a full three-family MSW mixing. Applications are made to the chlorine, gallium, Kamiokande, and Borexino experiments. The initial GALLEX result limits the mixing parameters to the upper diagonal and the vertical regions of the MSW triangle. The expected event rates in the Borexino experiment are also calculated, assuming the MSW solutions implied by GALLEX.
Bernard R. Parresol
1993-01-01
In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...
McGee, Monnie; Chen, Zhongxue
2006-01-01
There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
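The conditional expectation at the heart of this background correction has a closed form; the following is a hedged sketch under the stated exponential-normal convolution, with placeholder parameter values standing in for the estimates discussed in the paper.

```python
# Hedged sketch: observed PM intensity O = S + B, with signal S ~ Exp(alpha)
# and background B ~ N(mu, sigma^2). E[S | O = o] then has a closed form of
# the kind used in RMA-style background correction.
import numpy as np
from scipy.stats import norm

def expected_signal(o, alpha, mu, sigma):
    """E[S | O = o] under the exponential-normal convolution model."""
    a = o - mu - sigma**2 * alpha
    num = norm.pdf(a / sigma) - norm.pdf((o - a) / sigma)
    den = norm.cdf(a / sigma) + norm.cdf((o - a) / sigma) - 1.0
    return a + sigma * num / den

pm = np.array([120.0, 300.0, 1500.0])   # observed PM intensities (illustrative)
print(expected_signal(pm, alpha=0.01, mu=100.0, sigma=20.0))
```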
The drift diffusion model as the choice rule in reinforcement learning.
Pedersen, Mads Lund; Frank, Michael J; Biele, Guido
2017-08-01
Current reinforcement-learning models often assume simplified decision processes that do not fully reflect the dynamic complexities of choice processes. Conversely, sequential-sampling models of decision making account for both choice accuracy and response time, but assume that decisions are based on static decision values. To combine these two computational models of decision making and learning, we implemented reinforcement-learning models in which the drift diffusion model describes the choice process, thereby capturing both within- and across-trial dynamics. To exemplify the utility of this approach, we quantitatively fit data from a common reinforcement-learning paradigm using hierarchical Bayesian parameter estimation, and compared model variants to determine whether they could capture the effects of stimulant medication in adult patients with attention-deficit hyperactivity disorder (ADHD). The model with the best relative fit provided a good description of the learning process, choices, and response times. A parameter recovery experiment showed that the hierarchical Bayesian modeling approach enabled accurate estimation of the model parameters. The model approach described here, using simultaneous estimation of reinforcement-learning and drift diffusion model parameters, shows promise for revealing new insights into the cognitive and neural mechanisms of learning and decision making, as well as the alteration of such processes in clinical groups.
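As a rough illustration of the combined model (not the authors' hierarchical Bayesian implementation), the sketch below lets a delta-rule update supply trial-wise drift rates to a simulated diffusion process; all parameter names and values are assumptions.

```python
# Hedged sketch: reinforcement learning with a drift diffusion choice rule.
import numpy as np

rng = np.random.default_rng(2)
alpha, v_scale, bound, dt = 0.1, 2.0, 1.0, 0.001
p_reward = {0: 0.8, 1: 0.2}            # two-armed bandit, assumed payoffs
Q = np.zeros(2)

for trial in range(200):
    v = v_scale * (Q[0] - Q[1])        # drift rate from the value difference
    x, t = 0.0, 0.0
    while abs(x) < bound:              # Euler-Maruyama diffusion to a bound
        x += v * dt + rng.normal(0.0, np.sqrt(dt))
        t += dt
    choice = 0 if x >= bound else 1
    reward = float(rng.random() < p_reward[choice])
    Q[choice] += alpha * (reward - Q[choice])   # delta-rule update

print("learned values:", Q, "| last RT:", round(t, 3), "s")
```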
Understanding which parameters control shallow ascent of silicic effusive magma
NASA Astrophysics Data System (ADS)
Thomas, Mark E.; Neuberg, Jurgen W.
2014-11-01
The estimation of the magma ascent rate is key to predicting volcanic activity and relies on the understanding of how strongly the ascent rate is controlled by different magmatic parameters. Linking potential changes of such parameters to monitoring data is an essential step to be able to use these data as a predictive tool. We present the results of a suite of conduit flow models, based on the Soufrière Hills Volcano, that assess the influence of individual model parameters such as the magmatic water content, temperature or bulk magma composition on the magma flow in the conduit during an extrusive dome eruption. By systematically varying these parameters we assess their relative importance to changes in ascent rate. We show that variability in the rate of low frequency seismicity, assumed to correlate directly with the rate of magma movement, can be used as an indicator for changes in ascent rate and, therefore, eruptive activity. The results indicate that conduit diameter and excess pressure in the magma chamber are amongst the dominant controlling variables, but the single most important parameter is the volatile content (assumed here to be water only). Varying this parameter over the range of reported values changes the calculated ascent velocities by up to 800%.
Inference of directional selection and mutation parameters assuming equilibrium.
Vogl, Claus; Bergman, Juraj
2015-12-01
In a classical study, Wright (1931) proposed a model for the evolution of a biallelic locus under the influence of mutation, directional selection and drift. He derived the equilibrium distribution of the allelic proportion conditional on the scaled mutation rate, the mutation bias and the scaled strength of directional selection. The equilibrium distribution can be used for inference of these parameters with genome-wide datasets of "site frequency spectra" (SFS). Assuming that the scaled mutation rate is low, Wright's model can be approximated by a boundary-mutation model, where mutations are introduced into the population exclusively from sites fixed for the preferred or unpreferred allelic states. With the boundary-mutation model, inference can be partitioned: (i) the shape of the SFS distribution within the polymorphic region is determined by random drift and directional selection, but not by the mutation parameters, such that inference of the selection parameter relies exclusively on the polymorphic sites in the SFS; (ii) the mutation parameters can be inferred from the amount of polymorphic and monomorphic preferred and unpreferred alleles, conditional on the selection parameter. Herein, we derive maximum likelihood estimators for the mutation and selection parameters in equilibrium and apply the method to simulated SFS data as well as empirical data from a Madagascar population of Drosophila simulans. Copyright © 2015 Elsevier Inc. All rights reserved.
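A numerical maximum-likelihood sketch is given below, assuming Wright's equilibrium density in the form f(x) ∝ x^(θβ-1) (1-x)^(θ(1-β)-1) e^(γx) with grid-based normalization, where θ is the scaled mutation rate, β the mutation bias, and γ the scaled selection strength. The paper's boundary-mutation partitioning and exact estimators differ; this only illustrates the likelihood machinery on stand-in data.

```python
# Hedged sketch: grid-normalized ML for an assumed equilibrium density.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x):
    theta, beta, gamma = params
    grid = np.linspace(1e-6, 1 - 1e-6, 4000)
    def log_kernel(z):
        return ((theta * beta - 1) * np.log(z)
                + (theta * (1 - beta) - 1) * np.log(1 - z) + gamma * z)
    log_norm = np.log(np.trapz(np.exp(log_kernel(grid)), grid))
    return -(log_kernel(x) - log_norm).sum()

rng = np.random.default_rng(3)
x = rng.beta(0.2, 0.3, size=500)       # stand-in data for polymorphic sites
fit = minimize(neg_loglik, x0=[0.5, 0.5, 0.0], args=(x,),
               bounds=[(0.01, 5), (0.05, 0.95), (-10, 10)])
print(fit.x)                           # estimated (theta, beta, gamma)
```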
NASA Technical Reports Server (NTRS)
Bahrami, Parviz A.
1996-01-01
Theoretical analysis and numerical computations are performed to set forth a new model of film condensation on a horizontal cylinder. The model is more general than the well-known Nusselt model of film condensation and is designed to encompass all essential features of the Nusselt model. It is shown that a single parameter, constructed explicitly and without specification of the cylinder wall temperature, determines the degree of departure from the Nusselt model, which assumes a known and uniform wall temperature. It is also shown that the Nusselt model is recovered for very small, as well as very large, values of this parameter. In both limiting cases the cylinder wall temperature assumes a uniform distribution and the Nusselt model is approached. The maximum deviations between the two models are rather small for cases which are representative of the cylinder dimensions, materials and conditions encountered in practice.
Physically based model for extracting dual permeability parameters using non-Newtonian fluids
NASA Astrophysics Data System (ADS)
Abou Najm, M. R.; Basset, C.; Stewart, R. D.; Hauswirth, S.
2017-12-01
Dual permeability models are effective for the assessment of flow and transport in structured soils with two dominant structures. The major challenge to those models remains in the ability to determine appropriate and unique parameters through affordable, simple, and non-destructive methods. This study investigates the use of water and a non-Newtonian fluid in saturated flow experiments to derive physically-based parameters required for improved flow predictions using dual permeability models. We assess the ability of these two fluids to accurately estimate the representative pore sizes in dual-domain soils, by determining the effective pore sizes of macropores and micropores. We developed two sub-models that solve for the effective macropore size assuming either cylindrical (e.g., biological pores) or planar (e.g., shrinkage cracks and fissures) pore geometries, with the micropores assumed to be represented by a single effective radius. Furthermore, the model solves for the percent contribution to flow (wi) corresponding to the representative macro and micro pores. A user-friendly solver was developed to numerically solve the system of equations, given that relevant non-Newtonian viscosity models lack forms conducive to analytical integration. The proposed dual-permeability model is a unique attempt to derive physically based parameters capable of measuring dual hydraulic conductivities, and therefore may be useful in reducing parameter uncertainty and improving hydrologic model predictions.
Piezoceramic devices and artificial intelligence time varying concepts in smart structures
NASA Technical Reports Server (NTRS)
Hanagud, S.; Calise, A. J.; Glass, B. J.
1990-01-01
The development of smart structures and their vibration control through the use of piezoceramic sensors and actuators is discussed. In particular, these structures are assumed to have a time-varying model form and parameters. The model form may change significantly and suddenly. Combined identification of the model form and parameters of these structures, and model-adaptive control of these structures, are discussed in this paper.
Basic research on design analysis methods for rotorcraft vibrations
NASA Technical Reports Server (NTRS)
Hanagud, S.
1991-01-01
The objective of the present work was to develop a method for identifying physically plausible finite element system models of airframe structures from test data. The assumed models were based on linear elastic behavior with general (nonproportional) damping. Physical plausibility of the identified system matrices was insured by restricting the identification process to designated physical parameters only and not simply to the elements of the system matrices themselves. For example, in a large finite element model the identified parameters might be restricted to the moduli for each of the different materials used in the structure. In the case of damping, a restricted set of damping values might be assigned to finite elements based on the material type and on the fabrication processes used. In this case, different damping values might be associated with riveted, bolted and bonded elements. The method itself is developed first, and several approaches are outlined for computing the identified parameter values. The method is applied first to a simple structure for which the 'measured' response is actually synthesized from an assumed model. Both stiffness and damping parameter values are accurately identified. The true test, however, is the application to a full-scale airframe structure. In this case, a NASTRAN model and actual measured modal parameters formed the basis for the identification of a restricted set of physically plausible stiffness and damping parameters.
Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models
ERIC Educational Resources Information Center
Doebler, Anna; Doebler, Philipp; Holling, Heinz
2013-01-01
The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…
White, L J; Mandl, J N; Gomes, M G M; Bodley-Tickell, A T; Cane, P A; Perez-Brena, P; Aguilar, J C; Siqueira, M M; Portes, S A; Straliotto, S M; Waris, M; Nokes, D J; Medley, G F
2007-09-01
The nature and role of re-infection and partial immunity are likely to be important determinants of the transmission dynamics of human respiratory syncytial virus (hRSV). We propose a single model structure that captures four possible host responses to infection and subsequent reinfection: partial susceptibility, altered infection duration, reduced infectiousness and temporary immunity (which might be partial). The magnitude of these responses is determined by four homotopy parameters, and by setting some of these parameters to extreme values we generate a set of eight nested, deterministic transmission models. In order to investigate hRSV transmission dynamics, we applied these models to incidence data from eight international locations. Seasonality is included as cyclic variation in transmission. Parameters associated with the natural history of the infection were assumed to be independent of geographic location, while others, such as those associated with seasonality, were assumed location specific. Models incorporating either of the two extreme assumptions for immunity (none or solid and lifelong) were unable to reproduce the observed dynamics. Model fits with either waning or partial immunity to disease or both were visually comparable. The best fitting structure was a lifelong partial immunity to both disease and infection. Observed patterns were reproduced by stochastic simulations using the parameter values estimated from the deterministic models.
Effect of misspecification of gene frequency on the two-point LOD score.
Pal, D K; Durner, M; Greenberg, D A
2001-11-01
In this study, we used computer simulation of simple and complex models to ask: (1) What is the penalty in evidence for linkage when the assumed gene frequency is far from the true gene frequency? (2) If the assumed model for gene frequency and inheritance are misspecified in the analysis, can this lead to a higher maximum LOD score than that obtained under the true parameters? Linkage data simulated under simple dominant, recessive, dominant and recessive with reduced penetrance, and additive models, were analysed assuming a single locus with both the correct and incorrect dominance model and assuming a range of different gene frequencies. We found that misspecifying the analysis gene frequency led to little penalty in maximum LOD score in all models examined, especially if the assumed gene frequency was lower than the generating one. Analysing linkage data assuming a gene frequency of the order of 0.01 for a dominant gene, and 0.1 for a recessive gene, appears to be a reasonable tactic in the majority of realistic situations because underestimating the gene frequency, even when the true gene frequency is high, leads to little penalty in the LOD score.
ERIC Educational Resources Information Center
Xu, Xueli; Jia, Yue
2011-01-01
Estimation of item response model parameters and ability distribution parameters has been, and will remain, an important topic in the educational testing field. Much research has been dedicated to addressing this task. Some studies have focused on item parameter estimation when the latent ability was assumed to follow a normal distribution,…
A probabilistic approach for the estimation of earthquake source parameters from spectral inversion
NASA Astrophysics Data System (ADS)
Supino, M.; Festa, G.; Zollo, A.
2017-12-01
The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, and the moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune's (1970) source model, and direct P- and S-waves propagating in a layered velocity model, characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length) and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and then explore the joint a posteriori probability density function associated with the cost function around this minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining a deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. The numerical integration of the pdf finally provides the mean, variance and correlation matrix associated with the set of best-fit parameters describing the model. Synthetic tests are performed to investigate the robustness of the method and the propagation of uncertainty from the data space to the parameter space. Finally, the method is applied to characterize the source parameters of the earthquakes occurring during the 2016-2017 Central Italy sequence, with the goal of investigating source parameter scaling with magnitude.
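A minimal sketch of the global exploration step is given below, assuming a Brune-type spectral model S(f) = Ω0 / (1 + (f/fc)^γ) · exp(-π f t*) and an L2 misfit on log amplitudes; the synthetic data and all parameter values are illustrative.

```python
# Hedged sketch: basin-hopping fit of a Brune-type displacement spectrum.
import numpy as np
from scipy.optimize import basinhopping

f = np.logspace(-1, 1.5, 200)                      # frequency band (Hz)

def log_model(p, f):
    log_omega0, fc, gamma, tstar = p
    return log_omega0 - np.log(1 + (f / fc)**gamma) - np.pi * f * tstar

true = [2.0, 1.5, 2.0, 0.02]                       # log Ω0, fc, γ, t*
obs = log_model(true, f) + 0.05 * np.random.default_rng(4).normal(size=f.size)

def misfit(p):
    if p[1] <= 0 or p[2] <= 0 or p[3] < 0:
        return 1e9                                  # keep parameters physical
    return np.sum((obs - log_model(p, f))**2)       # L2 misfit on log amplitudes

fit = basinhopping(misfit, x0=[1.0, 1.0, 2.0, 0.0], niter=50, seed=4,
                   minimizer_kwargs={"method": "Nelder-Mead"})
print(fit.x)                                        # approximately the true parameters
```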
An EOQ Model with Two-Parameter Weibull Distribution Deterioration and Price-Dependent Demand
ERIC Educational Resources Information Center
Mukhopadhyay, Sushanta; Mukherjee, R. N.; Chaudhuri, K. S.
2005-01-01
An inventory replenishment policy is developed for a deteriorating item and price-dependent demand. The rate of deterioration is taken to be time-proportional and the time to deterioration is assumed to follow a two-parameter Weibull distribution. A power law form of the price dependence of demand is considered. The model is solved analytically…
ERIC Educational Resources Information Center
Chen, Tina; Starns, Jeffrey J.; Rotello, Caren M.
2015-01-01
The 2-high-threshold (2HT) model of recognition memory assumes that test items result in distinct internal states: they are either detected or not, and the probability of responding at a particular confidence level that an item is "old" or "new" depends on the state-response mapping parameters. The mapping parameters are…
Energy conditions in f (T, TG) gravity
NASA Astrophysics Data System (ADS)
Jawad, Abdul
2015-05-01
This paper is devoted to the study of the energy conditions in f(T, T_G) gravity for the FRW universe with a perfect fluid, where T is the torsion scalar and T_G is the quartic torsion scalar. We construct the energy conditions in this theory and discuss them for two specific f(T, T_G) models that have been shown to be viable in several cosmological scenarios. We use cosmographic parameters to simplify the energy-condition expressions. The present-day values of these parameters are assumed in order to check the constraints on the model parameters through the energy-condition inequalities.
Qin, Qin; Huang, Alan J; Hua, Jun; Desmond, John E; Stevens, Robert D; van Zijl, Peter C M
2014-02-01
Measurement of the cerebral blood flow (CBF) with whole-brain coverage is challenging in terms of both acquisition and quantitative analysis. In order to fit arterial spin labeling-based perfusion kinetic curves, an empirical three-parameter model which characterizes the effective impulse response function (IRF) is introduced, which allows the determination of CBF, the arterial transit time (ATT) and T(1,eff). The accuracy and precision of the proposed model were compared with those of more complicated models with four or five parameters through Monte Carlo simulations. Pseudo-continuous arterial spin labeling images were acquired on a clinical 3-T scanner in 10 normal volunteers using a three-dimensional multi-shot gradient and spin echo scheme at multiple post-labeling delays to sample the kinetic curves. Voxel-wise fitting was performed using the three-parameter model and other models that contain two, four or five unknown parameters. For the two-parameter model, T(1,eff) values close to tissue and blood were assumed separately. Standard statistical analysis was conducted to compare these fitting models in various brain regions. The fitted results indicated that: (i) the estimated CBF values using the two-parameter model show appreciable dependence on the assumed T(1,eff) values; (ii) the proposed three-parameter model achieves the optimal balance between the goodness of fit and model complexity when compared among the models with explicit IRF fitting; (iii) both the two-parameter model using fixed blood T1 values for T(1,eff) and the three-parameter model provide reasonable fitting results. Using the proposed three-parameter model, the estimated CBF (46 ± 14 mL/100 g/min) and ATT (1.4 ± 0.3 s) values averaged from different brain regions are close to the literature reports; the estimated T(1,eff) values (1.9 ± 0.4 s) are higher than the tissue T1 values, possibly reflecting a contribution from the microvascular arterial blood compartment. Copyright © 2013 John Wiley & Sons, Ltd.
On the robust optimization to the uncertain vaccination strategy problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaerani, D., E-mail: d.chaerani@unpad.ac.id; Anggriani, N., E-mail: d.chaerani@unpad.ac.id; Firdaniza, E-mail: d.chaerani@unpad.ac.id
2014-02-21
To prevent an epidemic of an infectious disease, the vaccination coverage needs to be minimized while the basic reproduction number is kept below 1; that is, coverage should be as low as possible while still confining the epidemic to the small number of people already infected. In this paper, we discuss vaccination strategy in terms of minimizing vaccination coverage when the basic reproduction number is treated as an uncertain parameter that lies between 0 and 1. We refer to the linear optimization model for vaccination strategy proposed by Becker and Starczak (see [2]). When parameter uncertainty is involved, Tanner et al. (see [9]) propose an optimal solution using stochastic programming. In this paper we discuss an alternative way of optimizing the uncertain vaccination strategy using Robust Optimization (see [3]). In this approach we assume that the parameter uncertainty lies within an ellipsoidal uncertainty set, so that the robust counterpart can be solved by a polynomial-time algorithm (as guaranteed by the RO methodology). The robust counterpart model is presented.
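The tractable reformulation that the RO methodology yields can be sketched as follows, assuming a generic linear coverage constraint with an ellipsoidal uncertainty set; the numbers and names are illustrative, not the paper's vaccination model.

```python
# Hedged sketch: a constraint a^T x >= b with a in the ellipsoid
# {a_bar + P u : ||u|| <= 1} has the robust counterpart
# a_bar^T x - ||P^T x||_2 >= b, a tractable second-order cone constraint.
import cvxpy as cp
import numpy as np

n = 3                                   # e.g., three population groups (assumed)
cost = np.array([1.0, 1.2, 0.8])        # per-group vaccination effort
a_bar = np.array([0.6, 0.5, 0.7])       # nominal effect on the reproduction number
P = 0.05 * np.eye(n)                    # ellipsoidal uncertainty around a_bar
b = 0.4                                 # required reduction to keep R0 below 1

x = cp.Variable(n, nonneg=True)
constraints = [a_bar @ x - cp.norm(P.T @ x, 2) >= b, x <= 1]
prob = cp.Problem(cp.Minimize(cost @ x), constraints)
prob.solve()
print(x.value, prob.value)
```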
On the estimation of the reproduction number based on misreported epidemic data.
Azmon, Amin; Faes, Christel; Hens, Niel
2014-03-30
Epidemic data often suffer from underreporting and delay in reporting. In this paper, we investigated the impact of delays and underreporting on estimates of reproduction number. We used a thinned version of the epidemic renewal equation to describe the epidemic process while accounting for the underlying reporting system. Assuming a constant reporting parameter, we used different delay patterns to represent the delay structure in our model. Instead of assuming a fixed delay distribution, we estimated the delay parameters while assuming a smooth function for the reproduction number over time. In order to estimate the parameters, we used a Bayesian semiparametric approach with penalized splines, allowing both flexibility and exact inference provided by MCMC. To show the performance of our method, we performed different simulation studies. We conducted sensitivity analyses to investigate the impact of misspecification of the delay pattern and the impact of assuming nonconstant reporting parameters on the estimates of the reproduction numbers. We showed that, whenever available, additional information about time-dependent underreporting can be taken into account. As an application of our method, we analyzed confirmed daily A(H1N1) v2009 cases made publicly available by the World Health Organization for Mexico and the USA. Copyright © 2013 John Wiley & Sons, Ltd.
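A minimal renewal-equation sketch shows why constant underreporting is the easy case: a constant reporting probability multiplies numerator and denominator alike and cancels in the ratio, which is why the delay structure and time-varying reporting are the harder problems the paper addresses. The serial-interval weights and case counts below are assumed.

```python
# Hedged sketch: renewal-equation estimate of R_t from reported counts
# C_t = rho * I_t; with constant rho the reporting rate cancels.
import numpy as np

def estimate_R(cases, w):
    """R_t ~= C_t / sum_s w_s C_{t-s}; w is the serial-interval distribution."""
    R = np.full(len(cases), np.nan)
    for t in range(len(w), len(cases)):
        denom = np.dot(w, cases[t - len(w):t][::-1])
        if denom > 0:
            R[t] = cases[t] / denom
    return R

w = np.array([0.3, 0.4, 0.2, 0.1])          # assumed serial interval (days 1-4)
cases = np.array([5, 8, 12, 20, 28, 40, 55, 70, 85, 95])
print(np.round(estimate_R(cases, w), 2))
```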
Constraints from triple gauge couplings on vectorlike leptons
Bertuzzo, Enrico; Machado, Pedro A. N.; Perez-Gonzalez, Yuber F.; ...
2017-08-30
Here, we study the contributions of colorless vectorlike fermions to the triple gauge couplings W^+W^-γ and W^+W^-Z^0. We consider models in which their coupling to the Standard Model Higgs boson is allowed or forbidden by quantum numbers. We assess the sensitivity of the future accelerators FCC-ee, ILC, and CLIC to the parameters of these models, assuming they will be able to constrain the anomalous triple gauge couplings with precision δκ_V ~ O(10^-4), V = γ, Z^0. We show that the combination of measurements at different center-of-mass energies helps to improve the sensitivity to the contribution of vectorlike fermions, in particular when they couple to the Higgs. In fact, the measurements at the FCC-ee and, especially, the ILC and the CLIC, may turn the triple gauge couplings into a new set of precision parameters able to constrain the models better than the oblique parameters or the H → γγ decay, even assuming the considerable improvement of the latter measurements achievable at the new machines.
Defining modeling parameters for juniper trees assuming Pleistocene-like conditions at the NTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarbox, S.R.; Cochran, J.R.
1994-12-31
This paper addresses part of Sandia National Laboratories' (SNL) efforts to assess the long-term performance of the Greater Confinement Disposal (GCD) facility located on the Nevada Test Site (NTS). At issue is whether the GCD site complies with 40 CFR 191 standards set for transuranic (TRU) waste burial. SNL has developed a radionuclide transport model which can be used to assess TRU radionuclide movement away from the GCD facility. An earlier iteration of the model found that radionuclide uptake and release by plants is an important aspect of the system to consider. Currently, the shallow-rooted plants at the NTS do not pose a threat to the integrity of the GCD facility. However, the threat increases substantially if deeper-rooted woodland species migrate to the GCD facility, given a shift to a wetter climate. The model parameters discussed here will be included in the next model iteration, which assumes a climate shift will provide for the growth of juniper trees at the GCD facility. Model parameters were developed using published data; wherever possible, data were taken from juniper and pinyon-juniper studies that mirrored as many aspects of the GCD facility as possible.
NASA Astrophysics Data System (ADS)
Pianosi, Francesca; Lal Shrestha, Durga; Solomatine, Dimitri
2010-05-01
This research presents an extension of the UNEEC (Uncertainty Estimation based on Local Errors and Clustering; Shrestha and Solomatine, 2006, 2008; Solomatine and Shrestha, 2009) method in the direction of explicit inclusion of parameter uncertainty. The UNEEC method assumes that there is an optimal model and that the residuals of the model can be used to assess the uncertainty of the model prediction: all sources of uncertainty, including input, parameter and model structure uncertainty, are taken to be manifested in the model residuals. In this research, these assumptions are relaxed and the UNEEC method is extended to consider parameter uncertainty explicitly (abbreviated as UNEEC-P). In UNEEC-P, we first use Monte Carlo (MC) sampling in parameter space to generate N model realizations (each a time series), estimate the prediction quantiles from the empirical distribution functions of the model residuals over all residual realizations, and only then apply the standard UNEEC method, which encapsulates the uncertainty of a hydrologic model (expressed by quantiles of the error distribution) in a machine learning model (e.g., an ANN). UNEEC-P is applied first to a linear regression model of synthetic data, and then to a real case study of forecasting inflow to Lake Lugano in northern Italy. The inflow forecasting model is a stochastic heteroscedastic model (Pianosi and Soncini-Sessa, 2009). Preliminary results show that the UNEEC-P method produces wider uncertainty bounds, which is consistent with the fact that the method also considers the parameter uncertainty of the optimal model. In the future, the UNEEC method will be further extended to consider input and structural uncertainty, which will provide more realistic estimates of model predictions.
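A toy sketch of the UNEEC-P sampling step is shown below, assuming a stand-in linear model in place of the hydrologic model; the final machine-learning encapsulation of the quantiles is omitted.

```python
# Hedged sketch: MC parameter sampling plus empirical residual quantiles.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 200)
observed = 2.0 * t + rng.normal(0, 1.0, size=t.size)   # synthetic observations

N = 1000
slopes = rng.normal(2.0, 0.1, size=N)                  # MC sample of the parameter
residuals = np.concatenate([observed - s * t for s in slopes])

lower, upper = np.quantile(residuals, [0.05, 0.95])    # 90% prediction bounds
print(f"90% residual bounds: [{lower:.2f}, {upper:.2f}]")
```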
da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G
2016-07-08
Process modeling can lead to advantages such as improved process control, reduced process costs and improved product quality. This work proposes a solid-state fermentation distributed-parameter model composed of seven differential equations with seventeen parameters to represent the process. Parameter estimation with a parameter identifiability analysis (PIA) is also performed to build an accurate model with optimal parameters. Statistical tests were made to verify the model accuracy with the estimated parameters under different assumptions. The results show that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were nonidentifiable, and better results were obtained when these parameters were removed from the estimation procedure. Therefore, PIA can be useful in an estimation procedure, since it may reduce the number of parameters that must be estimated. Further, PIA improved the model results, showing it to be an important procedure. © 2016 American Institute of Chemical Engineers. Biotechnol. Prog., 32:905-917, 2016.
RICE bounds on cosmogenic neutrino fluxes and interactions
NASA Astrophysics Data System (ADS)
Hussain, Shahid
2005-04-01
Assuming standard model interactions we calculate shower rates induced by cosmogenic neutrinos in ice, and we bound the cosmogenic neutrino fluxes using RICE 2000-2004 results. Next we assume new interactions due to extra-dimensional, low-scale gravity (i.e. black hole production and decay; graviton-mediated deep inelastic scattering) and calculate enhanced shower rates induced by cosmogenic neutrinos in ice. With the help of RICE 2000-2004 results, we survey bounds on low-scale gravity parameters for a range of cosmogenic neutrino flux models.
Feasibility of High Energy Lasers for Interdiction Activities
2017-12-01
2.3.2 Power in the Bucket. Another parameter we will use in this study is the power-in-the-bucket. The "bucket" is defined as the area on the target we... the heat diffusion equation for a one-dimensional case (where the x-direction is into the target) and assuming a semi-infinite slab of material. The... studied and modeled. One of the approaches to describe these interactions is by making a one-dimensional mathematical model assuming [8]: 1. A semi
Balakrishnan, Narayanaswamy; Pal, Suvra
2016-08-01
Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored, and the expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and, assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with real data on cancer recurrence. © The Author(s) 2013.
NASA Astrophysics Data System (ADS)
Barforoush, M. S. M.; Saedodin, S.
2018-01-01
This article investigates the thermal performance of convective-radiative annular fins with a step reduction in local cross section (SRC). The thermal conductivity of the fin's material is assumed to be a linear function of temperature, and the heat transfer coefficient is assumed to be a power-law function of surface temperature. Moreover, nonzero convection and radiation sink temperatures are included in the mathematical model of the energy equation. The well-known differential transformation method (DTM) is used to derive the analytical solution. An exact analytical solution for a special case is derived to prove the validity of the results obtained from the DTM. The model provided here is a more realistic representation of SRC annular fins in actual engineering practice. The effects of many parameters on the temperature distribution of both the thin and thick sections of the fin are investigated, including the conduction-convection parameters, the conduction-radiation parameter, the sink temperatures, and parameters specific to step fins such as the thickness parameter and the dimensionless parameter describing the position of the junction in the fin.
Evaluation of earthquake potential in China
NASA Astrophysics Data System (ADS)
Rong, Yufang
I present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (that is, the probability per unit area, magnitude and time). The three methods employ smoothed seismicity-, geologic slip rate-, and geodetic strain rate data. I test all three estimates, and another published estimate, against earthquake data. I constructed a special earthquake catalog which combines previous catalogs covering different times. I estimated moment magnitudes for some events using regression relationships that are derived in this study. I used the special catalog to construct the smoothed seismicity model and to test all models retrospectively. In all the models, I adopted a kind of Gutenberg-Richter magnitude distribution with modifications at higher magnitude. The assumed magnitude distribution depends on three parameters: a multiplicative " a-value," the slope or "b-value," and a "corner magnitude" marking a rapid decrease of earthquake rate with magnitude. I assumed the "b-value" to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and declines as a negative power of the epicentral distance out to a few hundred kilometers. I derived the upper magnitude limit from the special catalog, and estimated local "a-values" from smoothed seismicity. I have begun a "prospective" test, and earthquakes since the beginning of 2000 are quite compatible with the model. For the geologic estimations, I adopted the seismic source zones that are used in the published Global Seismic Hazard Assessment Project (GSHAP) model. The zones are divided according to geological, geodetic and seismicity data. Corner magnitudes are estimated from fault length, while fault slip rates and an assumed locking depth determine earthquake rates. The geological model fits the earthquake data better than the GSHAP model. By smoothing geodetic strain rate, another potential model was constructed and tested. I derived the upper magnitude limit from the Special catalog, and assume local "a-values" proportional to geodetic strain rates. "Prospective" tests show that the geodetic strain rate model is quite compatible with earthquakes. By assuming the smoothed seismicity model as a null hypothesis, I tested every other model against it. Test results indicate that the smoothed seismicity model performs best.
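The smoothed-seismicity ingredient can be sketched as follows, assuming an illustrative kernel exponent, cutoff distance, and magnitude weighting rather than the values calibrated in the study.

```python
# Hedged sketch: each past epicenter contributes a kernel that decays as a
# negative power of distance out to a few hundred kilometers, weighted by
# past magnitude; exponent, cutoff, and weighting are assumptions.
import numpy as np

def rate_density(grid_xy, quakes_xy, mags, power=1.5, r_max=300.0, r_min=5.0):
    """Relative rate density at grid points from past earthquakes (km coords)."""
    rates = np.zeros(len(grid_xy))
    for (qx, qy), m in zip(quakes_xy, mags):
        r = np.hypot(grid_xy[:, 0] - qx, grid_xy[:, 1] - qy)
        r = np.clip(r, r_min, None)                 # avoid the singularity at 0
        kernel = np.where(r <= r_max, r**(-power), 0.0)
        rates += 10.0**m * kernel                   # weight by past magnitude
    return rates / rates.sum()                      # normalize to a density

grid = np.array([[0, 0], [50, 0], [200, 0]], dtype=float)
quakes = np.array([[10, 0], [60, 10]], dtype=float)
mags = np.array([5.5, 6.2])
print(rate_density(grid, quakes, mags))
```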
A first approach to the distortion analysis of nonlinear analog circuits utilizing X-parameters
NASA Astrophysics Data System (ADS)
Weber, H.; Widemann, C.; Mathis, W.
2013-07-01
In this contribution a first approach to the distortion analysis of nonlinear 2-port networks with X-parameters1 is presented. The X-parameters introduced by Verspecht and Root (2006) offer the possibility of describing nonlinear microwave 2-port networks under large-signal conditions. On the basis of X-parameter measurements with a nonlinear vector network analyzer (NVNA), behavioral models can be extracted for the networks. These models can be used to account for the nonlinear behavior during the design process of microwave circuits. The idea of the present work is to extract behavioral models in order to describe the influence of interfering signals on the output behavior of nonlinear circuits. Here, a simulator is used instead of an NVNA to extract the X-parameters. Assuming that the interfering signals are relatively small compared to the nominal input signal, the output signal can be described as a superposition of the effects of each input signal. In order to determine the functional correlation between the scattering variables, a polynomial dependency is assumed. The required datasets for the approximation of the describing functions are simulated with a directional coupler model in Cadence Design Framework. The polynomial coefficients are obtained by a least-squares method. The resulting describing functions can be used to predict the system's behavior under certain conditions as well as the effects of the interfering signal on the output signal. 1 X-parameter is a registered trademark of Agilent Technologies, Inc.
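A minimal sketch of the polynomial least-squares step follows, assuming a cubic dependency of a scattered output wave on a small interfering input around a fixed operating point; the names and data are illustrative, not from the Cadence simulations.

```python
# Hedged sketch: fit polynomial describing-function coefficients by least squares.
import numpy as np

rng = np.random.default_rng(6)
a2 = rng.uniform(-0.1, 0.1, size=50)               # small interfering input wave
b2 = 0.02 + 1.5 * a2 - 4.0 * a2**2 + rng.normal(0, 1e-3, size=50)  # stand-in "simulated" output

order = 3
A = np.vander(a2, order + 1, increasing=True)      # columns [1, a2, a2^2, a2^3]
coeffs, *_ = np.linalg.lstsq(A, b2, rcond=None)
print(np.round(coeffs, 3))                         # polynomial coefficients
```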
Gottfredson, Nisha C.; Bauer, Daniel J.; Baldwin, Scott A.; Okiishi, John C.
2014-01-01
Objective This study demonstrates how to use a shared parameter mixture model (SPMM) in longitudinal psychotherapy studies to accommodate missing data that are due to a correlation between rate of improvement and termination of therapy. Traditional growth models assume that such a relationship does not exist (i.e., they assume that data are missing at random) and will produce biased results if this assumption is incorrect. Method We use longitudinal data from 4,676 patients enrolled in a naturalistic study of psychotherapy to compare results from a latent growth model and a shared parameter mixture model (SPMM). Results In this dataset, estimates of the rate of improvement during therapy differ by 6.50-6.66% across the two models, indicating that participants with steeper trajectories left psychotherapy earliest, thereby potentially biasing inference for the slope in the latent growth model. Conclusion We conclude that reported estimates of change during therapy may be underestimated in naturalistic studies of therapy in which participants and their therapists determine the end of treatment. Because non-randomly missing data can also occur in randomized controlled trials or in observational studies of development, the utility of the SPMM extends beyond naturalistic psychotherapy data. PMID:24274626
A Bayesian Nonparametric Meta-Analysis Model
ERIC Educational Resources Information Center
Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G.
2015-01-01
In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall…
A dual-loop model of the human controller in single-axis tracking tasks
NASA Technical Reports Server (NTRS)
Hess, R. A.
1977-01-01
A dual loop model of the human controller in single axis compensatory tracking tasks is introduced. This model possesses an inner-loop closure which involves feeding back that portion of the controlled element output rate which is due to control activity. The sensory inputs to the human controller are assumed to be system error and control force. The former is assumed to be sensed via visual, aural, or tactile displays while the latter is assumed to be sensed in kinesthetic fashion. A nonlinear form of the model is briefly discussed. This model is then linearized and parameterized. A set of general adaptive characteristics for the parameterized model is hypothesized. These characteristics describe the manner in which the parameters in the linearized model will vary with such things as display quality. It is demonstrated that the parameterized model can produce controller describing functions which closely approximate those measured in laboratory tracking tasks for a wide variety of controlled elements.
Bouligand, C.; Glen, J.M.G.; Blakely, R.J.
2009-01-01
We have revisited the problem of mapping depth to the Curie temperature isotherm from magnetic anomalies in an attempt to provide a measure of crustal temperatures in the western United States. Such methods are based on the estimation of the depth to the bottom of magnetic sources, which is assumed to correspond to the temperature at which rocks lose their spontaneous magnetization. In this study, we test and apply a method based on the spectral analysis of magnetic anomalies. Early spectral analysis methods assumed that crustal magnetization is a completely uncorrelated function of position. Our method incorporates a more realistic representation where magnetization has a fractal distribution defined by three independent parameters: the depths to the top and bottom of magnetic sources and a fractal parameter related to the geology. The predictions of this model are compatible with radial power spectra obtained from aeromagnetic data in the western United States. Model parameters are mapped by estimating their value within a sliding window swept over the study area. The method works well on synthetic data sets when one of the three parameters is specified in advance. The application of this method to western United States magnetic compilations, assuming a constant fractal parameter, allowed us to detect robust long-wavelength variations in the depth to the bottom of magnetic sources. Depending on the geologic and geophysical context, these features may result from variations in depth to the Curie temperature isotherm, depth to the mantle, depth to the base of volcanic rocks, or geologic settings that affect the value of the fractal parameter. Depth to the bottom of magnetic sources shows several features correlated with prominent heat flow anomalies. It also shows some features absent in the map of heat flow. Independent geophysical and geologic data sets are examined to determine their origin, thereby providing new insights on the thermal and geologic crustal structure of the western United States.
Hot Spot Eclipses in Dwarf Novae
NASA Astrophysics Data System (ADS)
Smak, J.
1996-10-01
Eclipses of the hot spot in four dwarf novae (U Gem, IP Peg, Z Cha, and OY Car) are re-analyzed, assuming two models for the shape of the spot. In Model 1 an elliptical spot is assumed, with semi-axis s_a in the orbital plane and s_b perpendicular to the orbital plane, and with its center located on the stream trajectory. The results show that such an ellipse is, within errors, tangent to the disk's circumference. In all four cases the resulting dimensions of the spot s_a are larger than the theoretical cross-section of the stream. Accordingly, in Model 2 the spot is assumed to consist of a head, centered on the stream trajectory, and a tail extending downstream, i.e., along the disk's circumference. In some cases the resulting parameters, e.g., mass ratios or disk radii, differ significantly from those obtained with Model 1.
A simple computational algorithm of model-based choice preference.
Toyama, Asako; Katahira, Kentaro; Ohira, Hideki
2017-08-01
A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences, namely the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, in which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, explains choices better under both model-free and model-based control and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
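To make the proposed sequential architecture concrete, the following is a minimal Python sketch of a SARSA(λ)-style update for a two-stage task in which the eligibility weight is treated as a quantity that model-based information could set, in the spirit of the eligibility adjustment model described above. The function name, the modulation parameter lam_mb, and all numerical values are illustrative assumptions, not the authors' exact equations.

```python
import numpy as np

def eligibility_adjusted_update(q, s1, a1, s2, a2, reward,
                                alpha=0.3, gamma=1.0, lam_mb=0.6):
    """One trial of a SARSA(lambda)-style update in a two-stage task.

    lam_mb plays the role of the eligibility weight that, in the model
    sketched here, model-based information is allowed to control
    (a hypothetical parameterization for illustration only).
    """
    # Stage-1 prediction error, bootstrapped from the stage-2 value.
    delta1 = gamma * q[s2, a2] - q[s1, a1]
    q[s1, a1] += alpha * delta1

    # Stage-2 prediction error from the terminal reward.
    delta2 = reward - q[s2, a2]
    q[s2, a2] += alpha * delta2

    # The eligibility trace lets the reward update the stage-1 action
    # as well, to a degree lam_mb set by the environmental model.
    q[s1, a1] += alpha * lam_mb * delta2
    return q

q_values = np.zeros((3, 2))  # states x actions
q_values = eligibility_adjusted_update(q_values, 0, 1, 2, 0, reward=1.0)
```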
Modelling Accuracy of a Car Steering Mechanism with Rack and Pinion and McPherson Suspension
NASA Astrophysics Data System (ADS)
Knapczyk, J.; Kucybała, P.
2016-08-01
The modelling accuracy of a car steering mechanism with a rack and pinion and McPherson suspension is analyzed. The geometrical parameters of the model are described using the coordinates of the centers of spherical joints, directional unit vectors, and axis points of revolute, cylindrical, and prismatic joints. Modelling accuracy is defined as the difference between the wheel knuckle position and orientation coordinates obtained from the simulation model and the corresponding measured values. The sensitivity of the model accuracy to the parameters is illustrated by two numerical examples.
Measurement of D⁰-D̄⁰ mixing parameters in D⁰ → K_S⁰ π⁺ π⁻ decays.
Zhang, L M; Zhang, Z P; Adachi, I; Aihara, H; Aulchenko, V; Aushev, T; Bakich, A M; Balagura, V; Barberio, E; Bay, A; Belous, K; Bitenc, U; Bondar, A; Bozek, A; Bracko, M; Brodzicka, J; Browder, T E; Chang, P; Chao, Y; Chen, A; Chen, K-F; Chen, W T; Cheon, B G; Chiang, C-C; Cho, I-S; Choi, Y; Choi, Y K; Dalseno, J; Danilov, M; Dash, M; Drutskoy, A; Eidelman, S; Epifanov, D; Fratina, S; Gabyshev, N; Gokhroo, G; Golob, B; Ha, H; Haba, J; Hara, T; Hastings, N C; Hayasaka, K; Hayashii, H; Hazumi, M; Heffernan, D; Hokuue, T; Hoshi, Y; Hou, W-S; Hsiung, Y B; Hyun, H J; Iijima, T; Ikado, K; Inami, K; Ishikawa, A; Ishino, H; Itoh, R; Iwasaki, M; Iwasaki, Y; Joshi, N J; Kah, D H; Kaji, H; Kajiwara, S; Kang, J H; Kawai, H; Kawasaki, T; Kichimi, H; Kim, H J; Kim, H O; Kim, S K; Kim, Y J; Kinoshita, K; Korpar, S; Krizan, P; Krokovny, P; Kumar, R; Kuo, C C; Kuzmin, A; Kwon, Y-J; Lee, J S; Lee, M J; Lee, S E; Lesiak, T; Li, J; Limosani, A; Lin, S-W; Liu, Y; Liventsev, D; Matsumoto, T; Matyja, A; McOnie, S; Medvedeva, T; Mitaroff, W; Miyake, H; Miyata, H; Miyazaki, Y; Mizuk, R; Nagasaka, Y; Nakamura, I; Nakano, E; Nakao, M; Natkaniec, Z; Nishida, S; Nitoh, O; Ogawa, S; Ohshima, T; Okuno, S; Olsen, S L; Onuki, Y; Ostrowicz, W; Ozaki, H; Pakhlov, P; Pakhlova, G; Park, C W; Park, H; Peak, L S; Pestotnik, R; Piilonen, L E; Poluektov, A; Sahoo, H; Sakai, Y; Schneider, O; Schümann, J; Schwanda, C; Schwartz, A J; Seidl, R; Senyo, K; Sevior, M E; Shapkin, M; Shibuya, H; Shinomiya, S; Shiu, J-G; Shwartz, B; Singh, J B; Sokolov, A; Somov, A; Soni, N; Stanic, S; Staric, M; Stoeck, H; Sumisawa, K; Sumiyoshi, T; Suzuki, S; Tajima, O; Takasaki, F; Tamai, K; Tamura, N; Tanaka, M; Taylor, G N; Teramoto, Y; Tian, X C; Tikhomirov, I; Tsuboyama, T; Uehara, S; Ueno, K; Uglov, T; Unno, Y; Uno, S; Urquijo, P; Usov, Y; Varner, G; Vervink, K; Villa, S; Vinokurova, A; Wang, C H; Wang, M-Z; Wang, P; Watanabe, Y; Won, E; Yabsley, B D; Yamaguchi, A; Yamashita, Y; Yamauchi, M; Yuan, C Z; Zhang, C C; Zhilich, V; Zupanc, A
2007-09-28
We report a measurement of D⁰-D̄⁰ mixing parameters in D⁰ → K_S⁰ π⁺ π⁻ decays using a time-dependent Dalitz-plot analysis. We first assume CP conservation and subsequently allow for CP violation. The results are based on 540 fb⁻¹ of data accumulated with the Belle detector at the KEKB e⁺e⁻ collider. Assuming negligible CP violation, we measure the mixing parameters x = (0.80 ± 0.29 (stat) +0.09/−0.07 (syst) +0.10/−0.14 (model))% and y = (0.33 ± 0.24 (stat) +0.08/−0.12 (syst) +0.06/−0.08 (model))%, where the third error is the systematic uncertainty due to the Dalitz decay model. Allowing for CP violation, we obtain the CP-violating parameters |q/p| = 0.86 +0.30/−0.29 (stat) +0.06/−0.03 (syst) ± 0.08 (model) and arg(q/p) = (−14 +16/−18 (stat) +5/−3 (syst) +2/−4 (model)) degrees.
Simulated discharge trends indicate robustness of hydrological models in a changing climate
NASA Astrophysics Data System (ADS)
Addor, Nans; Nikolova, Silviya; Seibert, Jan
2016-04-01
Assessing the robustness of hydrological models under contrasting climatic conditions should be part of any hydrological model evaluation. Robust models are particularly important for climate impact studies, as models performing well under current conditions are not necessarily capable of correctly simulating hydrological perturbations caused by climate change. A pressing issue is the usually assumed stationarity of parameter values over time. Modeling experiments using conceptual hydrological models revealed that assuming transposability of parameter values to changed climatic conditions can lead to significant biases in discharge simulations. This raises the question whether parameter values should be modified over time to reflect changes in hydrological processes induced by climate change. Such a question denotes a focus on the contribution of internal processes (i.e., catchment processes) to discharge generation. Here we adopt a different perspective and explore the contribution of external forcing (i.e., changes in precipitation and temperature) to changes in discharge. We argue that in a robust hydrological model, discharge variability should be induced by changes in the boundary conditions, and not by changes in parameter values. In this study, we explore how well the conceptual hydrological model HBV captures transient changes in hydrological signatures over the period 1970-2009. Our analysis focuses on research catchments in Switzerland undisturbed by human activities. The precipitation and temperature forcing are extracted from recently released 2 km gridded data sets. We use a genetic algorithm to calibrate HBV for the whole 40-year period and for the eight successive 5-year periods to assess possible trends in parameter values. Model calibration is run multiple times to account for parameter uncertainty. We find that in alpine catchments showing a significant increase of winter discharge, this trend can be captured reasonably well with constant parameter values over the whole reference period. Further, preliminary results suggest that some trends in parameter values do not reflect changes in hydrological processes, as reported previously by others, but instead might stem from a modeling artifact related to the parameterization of evapotranspiration, which is overly sensitive to temperature increase. We adopt a trading-space-for-time approach to better understand whether robust relationships between parameter values and forcing can be established, and to critically explore the rationale behind time-dependent parameter values in conceptual hydrological models.
Modification of a rainfall-runoff model for distributed modeling in a GIS and its validation
NASA Astrophysics Data System (ADS)
Nyabeze, W. R.
A rainfall-runoff model that can be interfaced with a Geographical Information System (GIS), integrating the definition and measurement of spatial features with the calculation of their parameter values, presents considerable advantages. The modification of the GWBasic Wits Rainfall-Runoff Erosion Model (GWBRafler) to enable parameter value estimation in a GIS (GISRafler) is presented in this paper. Algorithms are applied to estimate parameter values, reducing the number of input parameters and the effort to populate them. The use of a GIS makes the relationship between parameter estimates and cover characteristics more evident. This paper has been produced as part of research to generalize the GWBRafler on a spatially distributed basis. Modular data structures are assumed, and parameter values are weighted relative to the module area and centroid properties. Modifications to the GWBRafler enable better estimation of low flows, which are typical in drought conditions.
Parameter Invariance and Skill Attribute Continuity in the DINA Model
ERIC Educational Resources Information Center
Bolt, Daniel M.; Kim, Jee-Seon
2018-01-01
Cognitive diagnosis models (CDMs) typically assume skill attributes with discrete (often binary) levels of skill mastery, making the existence of skill continuity an anticipated form of model misspecification. In this article, misspecification due to skill continuity is argued to be of particular concern for several CDM applications due to the…
Testing Dissipative Magnetosphere Model Light Curves and Spectra with Fermi Pulsars
NASA Technical Reports Server (NTRS)
Brambilla, Gabriele; Kalapotharakos, Constantinos; Harding, Alice K.; Kazanas, Demosthenes
2015-01-01
We explore the emission properties of a dissipative pulsar magnetosphere model introduced by Kalapotharakos et al., comparing its high-energy light curves and spectra, due to curvature radiation, with data collected by the Fermi LAT. The magnetosphere structure is assumed to be near the force-free solution. The accelerating electric field inside the light cylinder (LC) is assumed to be negligible, while outside the LC it rescales with a finite conductivity (σ). In our approach we calculate the corresponding high-energy emission by integrating the trajectories of test particles that originate from the stellar surface, taking into account both the accelerating electric field components and the radiation reaction forces. First, we explore the parameter space assuming different value sets for the stellar magnetic field, stellar period, and conductivity. We show that the general properties of the model are in good agreement with the observed emission characteristics of young gamma-ray pulsars, including features of the phase-resolved spectra. Second, we find model parameters that fit each pulsar belonging to a group of eight bright pulsars that have a published phase-resolved spectrum. The σ values that best describe each of the pulsars in this group increase with the spin-down rate (Ė) and decrease with the pulsar age, as expected if pair cascades are providing the magnetospheric conductivity. Finally, we explore the limits of our analysis and suggest future directions for improving such models.
Quantitative body DW-MRI biomarkers uncertainty estimation using unscented wild-bootstrap.
Freiman, M; Voss, S D; Mulkern, R V; Perez-Rossello, J M; Warfield, S K
2011-01-01
We present a new method for the uncertainty estimation of diffusion parameters for quantitative body DW-MRI assessment. Diffusion parameter uncertainty estimation from DW-MRI is necessary for clinical applications that use these parameters to assess pathology. However, uncertainty estimation using traditional techniques requires repeated acquisitions, which is undesirable in routine clinical use. Model-based bootstrap techniques, for example, assume an underlying linear model for residual rescaling and cannot be utilized directly for body diffusion parameter uncertainty estimation due to the non-linearity of the body diffusion model. To offset this limitation, our method uses the unscented transform to compute the residual rescaling parameters from the non-linear body diffusion model, and then applies the wild-bootstrap method to infer the uncertainty of the body diffusion parameters. Validation through phantom and human subject experiments shows that our method correctly identifies the regions with higher uncertainty in body DW-MRI model parameters, with a relative error of -36% in the uncertainty values.
Helgesson, P; Sjöstrand, H
2017-11-01
Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defective, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.
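The combination of Levenberg-Marquardt fitting with a prior distribution for the parameters can be sketched by appending the whitened prior as pseudo-observations to the residual vector. The Python fragment below, built on scipy's least_squares, is a minimal illustration under that assumption; it is not the authors' implementation and omits the Gaussian-process treatment of model defects.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_with_prior(model, x, y, y_unc, theta_prior, prior_cov):
    """Least squares with a parameter prior added as pseudo-observations."""
    # Whitening matrix L such that L @ L.T = inv(prior_cov).
    l_prior = np.linalg.cholesky(np.linalg.inv(prior_cov))

    def residuals(theta):
        r_data = (y - model(x, theta)) / y_unc       # whitened data residuals
        r_prior = l_prior.T @ (theta - theta_prior)  # whitened prior residuals
        return np.concatenate([r_data, r_prior])

    # method="lm" selects a Levenberg-Marquardt-type algorithm.
    return least_squares(residuals, theta_prior, method="lm")

# Illustration: one Gaussian peak with a vague prior on its parameters.
def peak(x, th):
    return th[0] * np.exp(-0.5 * ((x - th[1]) / th[2]) ** 2)

x = np.linspace(-3.0, 3.0, 50)
rng = np.random.default_rng(0)
y = peak(x, [10.0, 0.0, 1.0]) + rng.normal(0.0, 0.5, x.size)
fit = fit_with_prior(peak, x, y, 0.5, np.array([8.0, 0.2, 1.2]),
                     np.diag([25.0, 1.0, 1.0]))
```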
Computerized Adaptive Testing with Item Clones. Research Report.
ERIC Educational Resources Information Center
Glas, Cees A. W.; van der Linden, Wim J.
To reduce the cost of item writing and to enhance the flexibility of item presentation, items can be generated by item-cloning techniques. An important consequence of cloning is that it may cause variability on the item parameters. Therefore, a multilevel item response model is presented in which it is assumed that the item parameters of a…
Prediction of Unsteady Aerodynamic Coefficients at High Angles of Attack
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Murphy, Patrick C.; Klein, Vladislav; Brandon, Jay M.
2001-01-01
The nonlinear indicial response method is used to model the unsteady aerodynamic coefficients in the low-speed longitudinal oscillatory wind tunnel test data of the 0.1-scale model of the F-16XL aircraft. Exponential functions are used to approximate the deficiency function in the indicial response. Using one set of oscillatory wind tunnel data and a parameter identification method, the unknown parameters in the exponential functions are estimated. A genetic algorithm is used as the least-squares minimization algorithm. The assumed model structures and parameter estimates are validated by comparing the predictions with other sets of available oscillatory wind tunnel test data.
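A sketch of the estimation step is shown below: a step response approaching its steady-state value through a sum of decaying exponentials (the deficiency function) is fitted by evolutionary least squares. scipy's differential_evolution stands in for the genetic algorithm named in the abstract, and the model form, bounds, and data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Indicial-response-style model: steady-state value minus a deficiency
# function approximated by two decaying exponentials (illustrative).
def step_response(t, c_ss, a1, b1, a2, b2):
    deficiency = a1 * np.exp(-b1 * t) + a2 * np.exp(-b2 * t)
    return c_ss - deficiency

t = np.linspace(0.0, 5.0, 200)
rng = np.random.default_rng(1)
data = step_response(t, 1.0, 0.4, 2.0, 0.2, 8.0) + rng.normal(0.0, 0.01, t.size)

def sse(p):
    return np.sum((data - step_response(t, *p)) ** 2)

# Evolutionary global search over bounded parameters, in the spirit of
# the genetic-algorithm least-squares fit described above.
result = differential_evolution(
    sse, bounds=[(0, 2), (0, 1), (0.1, 10), (0, 1), (0.1, 20)], seed=1)
```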
ERIC Educational Resources Information Center
Xu, Xueli; von Davier, Matthias
2008-01-01
The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…
Herzog, Sereina A; Low, Nicola; Berghold, Andrea
2015-06-19
The success of an intervention to prevent the complications of an infection is influenced by the natural history of the infection. Assumptions about the temporal relationship between infection and the development of sequelae can affect the predicted effect size of an intervention and the sample size calculation. This study investigates how a mathematical model can be used to inform sample size calculations for a randomised controlled trial (RCT) using the example of Chlamydia trachomatis infection and pelvic inflammatory disease (PID). We used a compartmental model to imitate the structure of a published RCT. We considered three different processes for the timing of PID development, in relation to the initial C. trachomatis infection: immediate, constant throughout, or at the end of the infectious period. For each process we assumed that, of all women infected, the same fraction would develop PID in the absence of an intervention. We examined two sets of assumptions used to calculate the sample size in a published RCT that investigated the effect of chlamydia screening on PID incidence. We also investigated the influence of the natural history parameters of chlamydia on the required sample size. The assumed event rates and effect sizes used for the sample size calculation implicitly determined the temporal relationship between chlamydia infection and PID in the model. Even small changes in the assumed PID incidence and relative risk (RR) led to considerable differences in the hypothesised mechanism of PID development. The RR and the sample size needed per group also depend on the natural history parameters of chlamydia. Mathematical modelling helps to understand the temporal relationship between an infection and its sequelae and can show how uncertainties about natural history parameters affect sample size calculations when planning a RCT.
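As a toy sketch of the compartmental reasoning described, the fragment below encodes the "constant risk throughout the infectious period" assumption by accruing PID cases at a constant hazard while women are infected; all rates are hypothetical and the structure is far simpler than the study's model. Under the "end of infectious period" assumption, the PID term would instead be attached to the clearance flow.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Susceptible <-> infected, with PID counted cumulatively while
# infected. All rates (per year) are hypothetical placeholders.
def rhs(t, y, incidence=0.05, clearance=1.0, pid_hazard=0.02):
    s, i, cum_pid = y
    ds = -incidence * s + clearance * i
    di = incidence * s - clearance * i
    dpid = pid_hazard * i   # constant PID risk throughout infection
    return [ds, di, dpid]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0], max_step=0.1)
print("cumulative PID fraction after 10 years:", sol.y[2, -1])
```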
NASA Astrophysics Data System (ADS)
Budak, Vladimir P.; Korkin, Sergey V.
2009-03-01
Singularity subtraction in the vectorial modification of the spherical harmonics method (VMSH) for solving the boundary problem of the vectorial radiative transfer equation is applied to the problem of how atmospheric parameters influence the signal of a polarimetric system. In this model we assume different phase matrices (Mie, Rayleigh, and Henyey-Greenstein), a reflecting bottom, and particle size distributions. The authors describe the main features of the model and some results of its implementation.
NASA Astrophysics Data System (ADS)
Devarakonda, Lalitha; Hu, Tingshu
2014-12-01
This paper presents an algebraic method for parameter identification of Thevenin equivalent-circuit models for batteries under non-zero initial conditions. Traditional methods assume that all capacitor voltages have zero initial conditions at the beginning of each charging/discharging test. This requires a long rest time between two tests, leading to very lengthy tests for a charging/discharging cycle. In this paper, we propose an algebraic method that extracts the circuit parameters together with the initial conditions. This theoretically reduces the required rest time to zero and substantially accelerates the testing cycles.
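The idea of identifying initial conditions together with the circuit parameters can be illustrated with a first-order Thevenin model under a constant-current pulse: the terminal voltage depends on the non-zero initial RC-branch voltage v10, which is fitted as an extra parameter. The sketch below uses generic nonlinear least squares (curve_fit) as a stand-in for the paper's algebraic procedure; all values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

I = 2.0  # A, constant discharge current applied during the test

def terminal_voltage(t, ocv, r0, r1, tau, v10):
    # RC-branch voltage with NON-ZERO initial condition v10.
    v_rc = v10 * np.exp(-t / tau) + I * r1 * (1.0 - np.exp(-t / tau))
    return ocv - I * r0 - v_rc

t = np.linspace(0.0, 600.0, 300)                  # seconds
true = (3.7, 0.05, 0.03, 120.0, 0.02)             # illustrative values
rng = np.random.default_rng(2)
v = terminal_voltage(t, *true) + rng.normal(0.0, 1e-3, t.size)

popt, _ = curve_fit(terminal_voltage, t, v, p0=(3.6, 0.01, 0.01, 60.0, 0.0))
```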
Heat transfer in porous medium embedded with vertical plate: Non-equilibrium approach - Part A
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badruddin, Irfan Anjum; Quadir, G. A.
2016-06-08
Heat transfer in a porous medium embedded with a vertical flat plate is investigated using a thermal non-equilibrium model. The Darcy model is employed to simulate the flow inside the porous medium. It is assumed that heat transfer takes place by natural convection and radiation. The vertical plate is maintained at an isothermal temperature. The governing partial differential equations are converted into non-dimensional form and solved numerically using the finite element method. Results are presented in terms of isotherms and streamlines for various parameters, such as the heat transfer coefficient parameter, the thermal conductivity ratio, and the radiation parameter.
Parameter Variability and Distributional Assumptions in the Diffusion Model
ERIC Educational Resources Information Center
Ratcliff, Roger
2013-01-01
If the diffusion model (Ratcliff & McKoon, 2008) is to account for the relative speeds of correct responses and errors, it is necessary that the components of processing identified by the model vary across the trials of a task. In standard applications, the rate at which information is accumulated by the diffusion process is assumed to be normally…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audren, Benjamin; Lesgourgues, Julien; Bird, Simeon
2013-01-01
We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Markov Chain Monte Carlo (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum, and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales; it is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity, hence increasing with wavenumber and decreasing with redshift. With these two assumptions for the errors, and assuming further, conservatively, that the uncorrelated error rises above 2% at k = 0.4 h/Mpc and z = 0.5, we find that a future Euclid-like cosmic shear/galaxy survey achieves a 1-σ error on M_ν close to 32 meV/25 meV, sufficient for detecting the total neutrino mass with good significance. If the residual uncorrelated error indeed rises rapidly towards smaller scales in the non-linear regime, as we have assumed here, then the data on non-linear scales do not increase the sensitivity to the total neutrino mass. Assuming instead a ten times smaller theoretical error with the same scale dependence, the error on the total neutrino mass decreases moderately from σ(M_ν) = 18 meV to 14 meV when mildly non-linear scales with 0.1 h/Mpc < k < 0.6 h/Mpc are included in the analysis of the galaxy survey data.
Achcar, Fiona; Barrett, Michael P; Breitling, Rainer
2013-09-01
Previous models of glycolysis in the sleeping sickness parasite Trypanosoma brucei assumed that the core part of glycolysis in this unicellular parasite is tightly compartimentalized within an organelle, the glycosome, which had previously been shown to contain most of the glycolytic enzymes. The glycosomes were assumed to be largely impermeable, and exchange of metabolites between the cytosol and the glycosome was assumed to be regulated by specific transporters in the glycosomal membrane. This tight compartmentalization was considered to be essential for parasite viability. Recently, size-specific metabolite pores were discovered in the membrane of glycosomes. These channels are proposed to allow smaller metabolites to diffuse across the membrane but not larger ones. In light of this new finding, we re-analyzed the model taking into account uncertainty about the topology of the metabolic system in T. brucei, as well as uncertainty about the values of all parameters of individual enzymatic reactions. Our analysis shows that these newly-discovered nonspecific pores are not necessarily incompatible with our current knowledge of the glycosomal metabolic system, provided that the known cytosolic activities of the glycosomal enzymes play an important role in the regulation of glycolytic fluxes and the concentration of metabolic intermediates of the pathway. © 2013 FEBS.
PV cells electrical parameters measurement
NASA Astrophysics Data System (ADS)
Cibira, Gabriel
2017-12-01
When measuring the optical parameters of a photovoltaic silicon cell, precise results enable good estimation of the electrical parameters through well-known physical-mathematical models. Nevertheless, considerable recombination phenomena might occur in both surface and intrinsic thin layers within novel materials. Moreover, rear contact surface parameters may influence close-area recombination phenomena, too. Therefore, the only way to verify the assumed electrical parameters of a cell is precise electrical measurement. Based on a theoretical approach supported by experiments, this paper analyses problems within the measurement procedures and equipment used to acquire the electrical parameters of a photovoltaic silicon cell, as a case study. A statistical appraisal of the measurement quality is also contributed.
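For context, the "well-known physical-mathematical models" referred to are typically variants of the single-diode model. The sketch below solves its implicit I-V relation for assumed parameter values (all illustrative), which is the kind of computation against which measured electrical parameters would be verified.

```python
import numpy as np
from scipy.optimize import brentq

# Single-diode model parameters (illustrative assumptions).
I_PH, I_0, N, R_S, R_SH = 5.0, 1e-9, 1.3, 0.02, 100.0
V_T = 0.02585  # thermal voltage at ~300 K, volts

def current(v):
    """Solve I = I_ph - I_0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh."""
    def f(i):
        vd = v + i * R_S
        return I_PH - I_0 * (np.exp(vd / (N * V_T)) - 1.0) - vd / R_SH - i
    return brentq(f, -1.0, I_PH + 1.0)

voltages = np.linspace(0.0, 0.7, 8)
iv_curve = [(v, current(v)) for v in voltages]
```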
New photoionization models of intergalactic clouds
NASA Technical Reports Server (NTRS)
Donahue, Megan; Shull, J. M.
1991-01-01
New photoionization models of optically thin, low-density intergalactic gas at constant pressure, photoionized by QSOs, are presented. All ion stages of H, He, C, N, O, Si, and Fe, plus H2, are modeled, and the column density ratios of clouds at specified values of the ionization parameter n_γ/n_H and cloud metallicity are predicted. If Ly-alpha clouds are much cooler than the previously assumed value, 30,000 K, the ionization parameter must be very low, even with the cooling contribution of a trace component of molecules. If the clouds cool below 6000 K, their final equilibrium must be below 3000 K, owing to the lack of a stable phase between 6000 and 3000 K. If it is assumed that the clouds are being irradiated by an EUV power-law continuum typical of QSOs, with J_0 = 10^-21 erg s^-1 cm^-2 Hz^-1, the derived cloud thicknesses along the line of sight are much smaller than would be expected from shocks, thermal instabilities, or gravitational collapse.
Nunez, Michael D; Vandekerckhove, Joachim; Srinivasan, Ramesh
2017-02-01
Perceptual decision making can be accounted for by drift-diffusion models, a class of decision-making models that assume a stochastic accumulation of evidence on each trial. Fitting response time and accuracy to a drift-diffusion model produces evidence accumulation rate and non-decision time parameter estimates that reflect cognitive processes. Our goal is to elucidate the effect of attention on visual decision making. In this study, we show that measures of attention obtained from simultaneous EEG recordings can explain per-trial evidence accumulation rates and perceptual preprocessing times during a visual decision making task. Models assuming linear relationships between diffusion model parameters and EEG measures as external inputs were fit in a single step in a hierarchical Bayesian framework. The EEG measures were features of the evoked potential (EP) to the onset of a masking noise and the onset of a task-relevant signal stimulus. Single-trial evoked EEG responses, P200s to the onsets of visual noise and N200s to the onsets of visual signal, explain single-trial evidence accumulation and preprocessing times. Within-trial evidence accumulation variance was not found to be influenced by attention to the signal or noise. Single-trial measures of attention lead to better out-of-sample predictions of accuracy and correct reaction time distributions for individual subjects.
NASA Technical Reports Server (NTRS)
Nishimura, T.
1975-01-01
This paper proposes a worst-error analysis for dealing with problems of estimation of spacecraft trajectories in deep space missions. Navigation filters in use assume either constant or stochastic (Markov) models for their estimated parameters. When the actual behavior of these parameters does not follow the pattern of the assumed model, the filters can perform very poorly. To prepare for such pathological cases, the worst errors of both batch and sequential filters are investigated based on incremental sensitivity studies of these filters. By finding critical switching instances of non-gravitational accelerations, intensive tracking can be carried out around those instances. The worst errors in the target plane also provide a measure for assigning the propellant budget for trajectory corrections. Thus the worst-error study provides useful information as well as practical criteria for establishing the maneuver and tracking strategy of spacecraft missions.
An adaptive control scheme for a flexible manipulator
NASA Technical Reports Server (NTRS)
Yang, T. C.; Yang, J. C. S.; Kudva, P.
1987-01-01
The problem of controlling a single-link flexible manipulator is considered. A self-tuning adaptive control scheme is proposed which consists of least-squares on-line parameter identification of an equivalent linear model, followed by tuning of the gains of a pole-placement controller using the parameter estimates. Since the initial parameter values for this model are assumed unknown, the use of arbitrarily chosen initial parameter estimates in the adaptive controller would result in undesirable transient effects. Hence, the initial-stage control is carried out with a PID controller. Once the identified parameters have converged, control is transferred to the adaptive controller. Naturally, the relevant issues in this scheme are tests for parameter convergence and minimization of overshoots during control switch-over. To demonstrate the effectiveness of the proposed scheme, simulation results are presented with an analytical nonlinear dynamic model of a single-link flexible manipulator.
Cooley, Richard L.
1982-01-01
Prior information on the parameters of a groundwater flow model can be used to improve parameter estimates obtained from nonlinear regression solution of a modeling problem. Two scales of prior information can be available: (1) prior information having known reliability (that is, bias and random error structure) and (2) prior information consisting of best available estimates of unknown reliability. A regression method that incorporates the second scale of prior information assumes the prior information to be fixed for any particular analysis to produce improved, although biased, parameter estimates. Approximate optimization of two auxiliary parameters of the formulation is used to help minimize the bias, which is almost always much smaller than that resulting from standard ridge regression. It is shown that if both scales of prior information are available, then a combined regression analysis may be made.
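A generic way to use best-available prior estimates in a linear(ized) regression is to stack prior equations under the observation equations, each block weighted by its error. The sketch below shows that construction only; it is not Cooley's formulation and, in particular, omits the two auxiliary bias-minimizing parameters mentioned above.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 2))                  # sensitivity (design) matrix
beta_true = np.array([2.0, -1.0])
y = X @ beta_true + rng.normal(0.0, 0.5, 30)  # observations (e.g., heads)

beta_prior = np.array([1.5, -0.5])            # best available estimates
w_obs, w_prior = 1.0 / 0.5, 1.0 / 1.0         # inverse standard deviations

# Stack weighted observation equations and weighted prior equations.
A = np.vstack([w_obs * X, w_prior * np.eye(2)])
b = np.concatenate([w_obs * y, w_prior * beta_prior])
beta_hat = np.linalg.lstsq(A, b, rcond=None)[0]
```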
Modeling of Waves Propagating in Water with a Crushed Ice Layer on the Free Surface
NASA Astrophysics Data System (ADS)
Szmidt, Kazimierz
2017-12-01
A transformation of gravitational waves in a fluid of constant depth with a crushed ice layer floating on the free surface is considered. The propagating waves undergo slight damping along their path of propagation. The main goal of the study is to construct an approximate descriptive model of this phenomenon. With regard to small displacements of the free surface, a viscous-type model of damping is considered, which corresponds to a continuous distribution of dash-pots at the free surface of the fluid. A constant parameter of the dampers is assumed in advance as the known damping parameter; this parameter may be obtained by means of experiments in a laboratory flume.
A frequency quantum interpretation of the surface renewal model of mass transfer
Mondal, Chanchal
2017-01-01
The surface of a turbulent liquid is visualized as consisting of a large number of chaotic eddies or liquid elements. Assuming that surface elements of a particular age have renewal frequencies that are integral multiples of a fundamental frequency quantum, and further assuming that the renewal frequency distribution is of the Boltzmann type, performing a population balance for these elements leads to the Danckwerts surface age distribution. The basic quantum is what has been traditionally called the rate of surface renewal. The Higbie surface age distribution follows if the renewal frequency distribution of such elements is assumed to be continuous. Four age distributions, which reflect different start-up conditions of the absorption process, are then used to analyse transient physical gas absorption into a large volume of liquid, assuming negligible gas-side mass-transfer resistance. The first two are different versions of the Danckwerts model, the third one is based on the uniform and Higbie distributions, while the fourth one is a mixed distribution. For the four cases, theoretical expressions are derived for the rates of gas absorption and dissolved-gas transfer to the bulk liquid. Under transient conditions, these two rates are not equal and have an inverse relationship. However, with the progress of absorption towards steady state, they approach one another. Assuming steady-state conditions, the conventional one-parameter Danckwerts age distribution is generalized to a two-parameter age distribution. Like the two-parameter logarithmic normal distribution, this distribution can also capture the bell-shaped nature of the distribution of the ages of surface elements observed experimentally in air–sea gas and heat exchange. Estimates of the liquid-side mass-transfer coefficient made using these two distributions for the absorption of hydrogen and oxygen in water are very close to one another and are comparable to experimental values reported in the literature.
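For reference, the distributions named above have standard closed forms. Assuming the conventional notation (renewal rate s, diffusivity D, Higbie exposure time t_e):

```latex
% Danckwerts surface-age distribution, the resulting steady-state
% liquid-side mass-transfer coefficient, and Higbie's penetration
% result for comparison.
\phi(t) = s\,e^{-st}, \qquad
k_L^{\text{Danckwerts}} = \sqrt{D\,s}, \qquad
k_L^{\text{Higbie}} = 2\sqrt{\frac{D}{\pi\,t_e}}
```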
Population Synthesis of Radio and γ-ray Normal, Isolated Pulsars Using Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Billman, Caleb; Gonthier, P. L.; Harding, A. K.
2013-04-01
We present preliminary results of a population statistics study of normal pulsars (NPs) from the Galactic disk using Markov Chain Monte Carlo techniques optimized according to two different methods. The first method compares the detected and simulated cumulative distributions of a series of pulsar characteristics, varying the model parameters to maximize the overall agreement. The advantage of this method is that the distributions do not have to be binned. The other method varies the model parameters to maximize the log of the maximum likelihood obtained from comparisons of four two-dimensional distributions of radio and γ-ray pulsar characteristics. The advantage of this method is that it provides a confidence region of the model parameter space. The computer code simulates neutron stars at birth using Monte Carlo procedures and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and γ-ray emission characteristics, implementing an empirical γ-ray luminosity model. A comparison group of radio NPs detected in ten radio surveys is used to normalize the simulation, adjusting the model radio luminosity to match a birth rate. We include the Fermi pulsars in the forthcoming second pulsar catalog. We present preliminary results comparing the simulated and detected distributions of radio and γ-ray NPs, along with a confidence region in the parameter space of the assumed models. We express our gratitude for the generous support of the National Science Foundation (REU and RUI), the Fermi Guest Investigator Program, and the NASA Astrophysics Theory and Fundamental Program.
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The error variance of the process, prior multivariate normal distributions of the parameters of the models, and prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed. Each experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.
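The core update in such sequential procedures is Bayes' rule over the candidate models. The fragment below is a generic sketch of that bookkeeping, with an assumed stopping threshold; it is not the paper's exact procedure.

```python
import numpy as np

def update_model_posteriors(prior_probs, log_likelihoods):
    """Posterior model probabilities after one new observation.

    log_likelihoods[k] is the log (marginal) likelihood of the new
    datum under model k.
    """
    log_post = np.log(prior_probs) + np.asarray(log_likelihoods)
    log_post -= log_post.max()          # numerical stabilization
    post = np.exp(log_post)
    return post / post.sum()

probs = np.array([0.5, 0.5])            # two candidate models
probs = update_model_posteriors(probs, [-1.2, -2.3])
stop = probs.max() > 0.95               # one possible termination rule
```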
Application of Bayesian model averaging to measurements of the primordial power spectrum
NASA Astrophysics Data System (ADS)
Parkinson, David; Liddle, Andrew R.
2010-11-01
Cosmological parameter uncertainties are often stated assuming a particular model, neglecting the model uncertainty, even when Bayesian model selection is unable to identify a conclusive best model. Bayesian model averaging is a method for assessing parameter uncertainties in situations where there is also uncertainty in the underlying model. We apply model averaging to the estimation of the parameters associated with the primordial power spectra of curvature and tensor perturbations. We use CosmoNest and MultiNest to compute the model evidences and posteriors, using cosmic microwave data from WMAP, ACBAR, BOOMERanG, and CBI, plus large-scale structure data from the SDSS DR7. We find that the model-averaged 95% credible interval for the spectral index using all of the data is 0.940
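Once per-model evidences and posterior chains are available, the averaging itself is simple: weight each model by evidence times model prior, then pool posterior draws in proportion to the weights. The sketch below uses made-up numbers; in practice the evidences and chains would come from tools such as CosmoNest or MultiNest.

```python
import numpy as np

log_evidences = np.array([-10.0, -11.5, -13.2])  # assumed model evidences
prior_model = np.ones(3) / 3.0                   # equal model priors

w = np.exp(log_evidences - log_evidences.max()) * prior_model
w /= w.sum()                                     # posterior model probabilities

# Pool per-model posterior draws of a parameter (here: fake n_s chains)
# in proportion to the model weights, then read off credible bounds.
rng = np.random.default_rng(4)
chains = [rng.normal(0.96, 0.010, 5000),
          rng.normal(0.95, 0.015, 5000),
          rng.normal(0.97, 0.008, 5000)]
counts = rng.multinomial(20000, w)
pooled = np.concatenate([rng.choice(c, k) for c, k in zip(chains, counts)])
interval = np.percentile(pooled, [2.5, 97.5])    # model-averaged 95% CI
```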
System identification of analytical models of damped structures
NASA Technical Reports Server (NTRS)
Fuh, J.-S.; Chen, S.-Y.; Berman, A.
1984-01-01
A procedure is presented for identifying linear, nonproportionally damped systems. The system damping is assumed to be representable by a real symmetric matrix. Analytical mass, stiffness, and damping matrices which constitute an approximate representation of the system are assumed to be available. Also given are an incomplete set of measured natural frequencies, damping ratios, and complex mode shapes of the structure, normally obtained from test data. A method is developed to find the smallest changes in the analytical model such that the improved model exactly predicts the measured modal parameters. The present method uses the orthogonality relationship to improve the mass and damping matrices and the dynamic equation to find the improved stiffness matrix.
Earthquake Potential Models for China
NASA Astrophysics Data System (ADS)
Rong, Y.; Jackson, D. D.
2002-12-01
We present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. We tested all three estimates, and the published Global Seismic Hazard Assessment Project (GSHAP) model, against earthquake data. We constructed a special earthquake catalog which combines previous catalogs covering different times. We used the special catalog to construct our smoothed seismicity model and to evaluate all models retrospectively. All our models employ a modified Gutenberg-Richter magnitude distribution with three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a strong decrease of earthquake rate with magnitude. We assumed the b-value to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and decreases approximately as the reciprocal of the epicentral distance out to a few hundred kilometers. We derived the upper magnitude limit from the special catalog and estimated local a-values from smoothed seismicity. Earthquakes since January 1, 2000 are quite compatible with the model. For the geologic forecast we adopted the seismic source zones (based on geological, geodetic and seismicity data) of the GSHAP model. For each zone, we estimated a corner magnitude by applying the Wells and Coppersmith [1994] relationship to the longest fault in the zone, and we determined the a-value from fault slip rates and an assumed locking depth. The geological model fits the earthquake data better than the GSHAP model. We also applied the Wells and Coppersmith relationship to individual faults, but the results conflicted with the earthquake record. For our geodetic model we derived the uniform upper magnitude limit from the special catalog and assumed local a-values proportional to the maximum horizontal strain rate. In prospective tests the geodetic model agrees well with earthquake occurrence. The smoothed seismicity model performs best of the four models.
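For concreteness, one widely used corner-magnitude form is the tapered Gutenberg-Richter distribution, written here for scalar seismic moment M with index β = (2/3)b and corner moment M_c; this is a plausible concrete choice consistent with the description above, not necessarily the authors' exact parameterization:

```latex
% Tapered Gutenberg-Richter survival function for seismic moment,
% with threshold moment M_t and corner moment M_c controlling the
% strong decrease of earthquake rate at large magnitudes.
\Pr(M' > M) \;=\; \left(\frac{M_t}{M}\right)^{\beta}
\exp\!\left(\frac{M_t - M}{M_c}\right), \qquad M \ge M_t
```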
Electrostatic potential jump across fast-mode collisionless shocks
NASA Technical Reports Server (NTRS)
Mandt, M. E.; Kan, J. R.
1991-01-01
The electrostatic potential jump across fast-mode collisionless shocks is examined by comparing published observations, hybrid simulations, and a simple model, in order to better characterize its dependence on the various shock parameters. In all three, it is assumed that the electrons can be described by an isotropic power-law equation of state. The observations show that the cross-shock potential jump correlates well with the shock strength but shows very little correlation with other shock parameters. Under the power-law assumption, the correlation of the potential jump with the shock strength follows naturally from the increased shock compression and from an apparent dependence of the power-law exponent on the Mach number which the observations indicate. It is found that including a Mach number dependence for the power-law exponent in the electron equation of state in the simple model produces a potential jump which better fits the observations. On the basis of the simulation results and theoretical estimates of the cross-shock potential, the expected dependence of the cross-shock potential on the other shock parameters is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Overton, J.H.; Jarabek, A.M.
1989-01-01
The U.S. EPA advocates the assessment of health-effects data and the calculation of inhaled reference doses as benchmark values for gauging the systemic toxicity of inhaled gases. The assessment often requires an inter- or intra-species dose extrapolation from no-observed-adverse-effect-level (NOAEL) exposure concentrations in animals to human-equivalent NOAEL exposure concentrations. To achieve this, a dosimetric extrapolation procedure was developed based on the form or type of equations that describe the uptake and disposition of inhaled volatile organic compounds (VOCs) in physiologically based pharmacokinetic (PB-PK) models. The procedure assumes allometric scaling of most physiological parameters and that the value of the time-integrated human arterial-blood concentration must be limited to no more than that of experimental animals. The scaling assumption replaces the need for most parameter values and allows the derivation of a simple formula for dose extrapolation of VOCs that gives equivalent or more conservative exposure concentration values than those that would be obtained using a PB-PK model in which scaling was assumed.
Fitting the Mixed Rasch Model to a Reading Comprehension Test: Identifying Reader Types
ERIC Educational Resources Information Center
Baghaei, Purya; Carstensen, Claus H.
2013-01-01
Standard unidimensional Rasch models assume that persons with the same ability parameters are comparable. That is, the same interpretation applies to persons with identical ability estimates as regards the underlying mental processes triggered by the test. However, research in cognitive psychology shows that persons at the same trait level may…
SU(5) with nonuniversal gaugino masses
NASA Astrophysics Data System (ADS)
Ajaib, M. Adeel
2018-02-01
We explore the sparticle spectroscopy of the supersymmetric SU(5) model with nonuniversal gaugino masses in light of the latest experimental searches. We assume that the gaugino mass parameters are independent at the GUT scale. We find that the observed deviation in the anomalous magnetic moment of the muon can be explained in this model. The parameter space that explains this deviation predicts a heavy colored sparticle spectrum, whereas the sleptons can be light. We also find a notable region of the parameter space that yields the desired relic abundance for dark matter. In addition, we analyze the model in light of the latest limits from direct detection experiments and find that the parameter space corresponding to the observed deviation in the muon anomalous magnetic moment can be probed at some of the future direct detection experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mudford, B.S.
1996-12-31
The determination of an appropriate thermal history in an exploration area is of fundamental importance when attempting to understand the evolution of the petroleum system. In this talk we present the results of a single-well modelling study in which bottom hole temperature data, vitrinite reflectance data and three different biomarker ratio datasets were available to constrain the modelling. Previous modelling studies using biomarker ratios have been hampered by the wide variety of published kinetic parameters for biomarker evolution. Generally, these parameters have been determined either from measurements in the laboratory and extrapolation to the geological setting, or from downhole measurements where the heat flow history is assumed to be known. In the first case serious errors can arise because the heating rate is being extrapolated over many orders of magnitude, while in the second case errors can arise if the assumed heat flow history is incorrect. To circumvent these problems we carried out a parameter optimization in which the heat flow history was treated as an unknown in addition to the biomarker ratio kinetic parameters. This method enabled the heat flow history for the area to be determined together with appropriate kinetic parameters for the three measured biomarker ratios. Within the resolution of the data, the heat flow since the early Miocene has been relatively constant at levels required to yield good agreement between predicted and measured subsurface temperatures.
Constraints on a scale-dependent bias from galaxy clustering
NASA Astrophysics Data System (ADS)
Amendola, L.; Menegoni, E.; Di Porto, C.; Corsi, M.; Branchini, E.
2017-01-01
We forecast the future constraints on scale-dependent parametrizations of galaxy bias and their impact on the estimate of cosmological parameters from the power spectrum of galaxies measured in a spectroscopic redshift survey. For the latter we assume a wide survey at relatively large redshifts, similar to the planned Euclid survey, as the baseline for future experiments. To assess the impact of the bias we perform a Fisher matrix analysis, and we adopt two different parametrizations of scale-dependent bias. The fiducial models for galaxy bias are calibrated using mock catalogs of Hα emitting galaxies mimicking the expected properties of the objects that will be targeted by the Euclid survey. In our analysis we have obtained two main results. First of all, allowing for a scale-dependent bias does not significantly increase the errors on the other cosmological parameters, apart from the rms amplitude of density fluctuations, σ8, and the growth index γ, whose uncertainties increase by a factor of up to 2, depending on the bias model adopted. Second, we find that the linear bias parameter b0 can be estimated to within 1%-2% at various redshifts regardless of the fiducial model. The nonlinear bias parameters have significantly larger errors that depend on the model adopted. Despite this, in the more realistic scenarios departures from the simple linear bias prescription can be detected with ~2σ significance at each redshift explored. Finally, we use the Fisher matrix formalism to assess the impact of assuming an incorrect bias model and find that the systematic errors induced on the cosmological parameters are similar to or even larger than the statistical ones.
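The Fisher analysis itself reduces to derivatives of the observable with respect to the parameters, combined with the data covariance. The toy sketch below (two parameters, a made-up power spectrum, assumed 5% Gaussian errors) shows only the construction, not the survey setup of the paper.

```python
import numpy as np

k = np.linspace(0.02, 0.2, 50)           # wavenumbers, illustrative

def pk(theta):
    amp, slope = theta
    return amp * (k / 0.1) ** slope      # toy power spectrum model

theta0 = np.array([1.0e4, -1.5])         # fiducial parameter values
var = (0.05 * pk(theta0)) ** 2           # assumed 5% Gaussian errors

def deriv(i, eps=1e-4):
    d = np.zeros(2)
    d[i] = eps * abs(theta0[i])
    return (pk(theta0 + d) - pk(theta0 - d)) / (2.0 * d[i])

# F_ij = sum_k dP/dtheta_i * dP/dtheta_j / sigma_k^2
F = np.array([[np.sum(deriv(i) * deriv(j) / var) for j in range(2)]
              for i in range(2)])
forecast_errors = np.sqrt(np.diag(np.linalg.inv(F)))  # marginalized 1-sigma
```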
Gottfredson, Nisha C; Bauer, Daniel J; Baldwin, Scott A; Okiishi, John C
2014-10-01
This study demonstrates how to use a shared parameter mixture model (SPMM) in longitudinal psychotherapy studies to accommodate missingness that is due to a correlation between rate of improvement and termination of therapy. Traditional growth models assume that such a relationship does not exist (i.e., assume that data are missing at random) and produce biased results if this assumption is incorrect. We used longitudinal data from 4,676 patients enrolled in a naturalistic study of psychotherapy to compare results from a latent growth model and an SPMM. In this data set, estimates of the rate of improvement during therapy differed by 6.50%-6.66% across the two models, indicating that participants with steeper trajectories left psychotherapy earliest, thereby potentially biasing inference for the slope in the latent growth model. We conclude that reported estimates of change during therapy may be underestimated in naturalistic studies of therapy in which participants and their therapists determine the end of treatment. Because non-randomly missing data can also occur in randomized controlled trials or in observational studies of development, the utility of the SPMM extends beyond naturalistic psychotherapy data.
The effect of magnetohydrodynamic nano fluid flow through porous cylinder
NASA Astrophysics Data System (ADS)
Widodo, Basuki; Arif, Didik Khusnul; Aryany, Deviana; Asiyah, Nur; Widjajati, Farida Agustini; Kamiran
2017-08-01
This paper analyzes the effect of magnetohydrodynamic nanofluid flow through a horizontal porous cylinder under steady, incompressible conditions. The fluid flow is assumed to be directed against gravity and induced by a magnetic field. The porous cylinder is assumed to have uniform porosity and to be non-absorptive. We first build the model of the fluid flow to obtain the dimensional governing equations, which consist of the continuity equation, the momentum equation, and the energy equation. These dimensional governing equations are converted to non-dimensional form using non-dimensional parameters and variables. The non-dimensional governing equations are then transformed into similarity equations using a stream function and solved using the Keller-Box method. Numerical solutions are obtained for varying values of the magnetic parameter, Prandtl number, porosity parameter, and volume fraction. The numerical results show that the velocity profiles increase and the temperature profiles decrease as the porosity parameter increases, whereas the velocity profiles decrease and the temperature profiles increase as the magnetic parameter increases.
An analytical prediction of the oscillation and extinction thresholds of a clarinet
NASA Astrophysics Data System (ADS)
Dalmont, Jean-Pierre; Gilbert, Joël; Kergomard, Jean; Ollivier, Sébastien
2005-11-01
This paper investigates the dynamic range of the clarinet, from the oscillation threshold to the extinction at high pressure level. The use of an elementary model for the reed-mouthpiece valve effect, combined with a simplified model of the pipe assuming frequency-independent losses (Raman's model), allows an analytical calculation of the oscillations and their stability analysis. The different thresholds are shown to depend on parameters related to the embouchure and on the absorption coefficient in the pipe. Their values determine the dynamic range of the fundamental oscillations and the bifurcation scheme at the extinction.
New Agegraphic Pilgrim Dark Energy in f(T, TG) Gravity
NASA Astrophysics Data System (ADS)
Jawad, Abdul; Debnath, Ujjal
2015-08-01
In this work, we briefly discuss a novel class of modified gravity, f(T, TG) gravity. In this background, we assume the new agegraphic version of pilgrim dark energy and reconstruct f(T, TG) models for two specific values of s. We also discuss the equation of state parameter, the squared speed of sound, and the w_DE-w'_DE plane for these reconstructed f(T, TG) models. The equation of state parameter indicates phantom-like behavior of the universe. The w_DE-w'_DE plane also exhibits the ΛCDM limit and the thawing and freezing regions for both models.
Temporal variation and scaling of parameters for a monthly hydrologic model
NASA Astrophysics Data System (ADS)
Deng, Chao; Liu, Pan; Wang, Dingbao; Wang, Weiguang
2018-03-01
The temporal variation of model parameters is affected by catchment conditions and has a significant impact on hydrological simulation. This study aims to evaluate the seasonality and downscaling of model parameters across time scales, based on monthly and mean annual water balance models within a common model framework. Two parameters of the monthly model, k and m, are assumed to be time-variant across months. Based on the hydrological data set from 121 MOPEX catchments in the United States, we first analyzed the correlation between the parameters (k and m) and catchment properties (NDVI and the frequency of rainfall events, α). The results show that parameter k is positively correlated with NDVI or α, while the correlation is the opposite for parameter m, indicating that precipitation and vegetation affect the monthly water balance by controlling the temporal variation of parameters k and m. Multiple linear regression is then used to fit the relationship between the mean annual model parameter ε and the means and coefficients of variation of parameters k and m. Based on the empirical equation and the correlations between the time-variant parameters and NDVI, the mean annual parameter ε is downscaled to monthly k and m. The resulting model has lower NSEs than the model with time-variant k and m calibrated through SCE-UA, but for several study catchments it has higher NSEs than the model with constant parameters. The proposed method is feasible and provides a useful tool for the temporal scaling of model parameters.
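The downscaling step rests on an ordinary multiple linear regression between the mean annual parameter ε and the monthly-parameter statistics. A minimal sketch with synthetic stand-in data (not the MOPEX records, and hypothetical coefficients) is:

```python
import numpy as np

# Synthetic stand-ins for 121 catchments: means and CVs of monthly k and m,
# and the mean annual parameter eps (all values illustrative).
rng = np.random.default_rng(0)
n = 121
X = rng.uniform(0.2, 1.0, size=(n, 4))       # [mean_k, cv_k, mean_m, cv_m]
true_beta = np.array([1.5, -0.4, 0.8, -0.2]) # hypothetical coefficients
eps = 0.5 + X @ true_beta + rng.normal(0, 0.05, n)

# Multiple linear regression via least squares (with intercept)
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, eps, rcond=None)
print("intercept and coefficients:", coef)
```

In the paper, the fitted relation is combined with the parameter-NDVI correlations to recover monthly k and m from ε; the sketch stops at the regression itself.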
Vicini, P; Bonadonna, R C; Lehtovirta, M; Groop, L C; Cobelli, C
1998-01-01
Distributed models of blood-tissue exchange are widely used to measure kinetic events of various solutes from multiple tracer dilution experiments. Their use requires, however, a careful description of blood flow heterogeneity along the capillary bed. Since they have mostly been applied in animal studies, direct measurement of the heterogeneity distribution was possible, e.g., with the invasive microsphere method. Here we apply distributed modeling to a dual tracer experiment in humans, performed using an intravascular (indocyanine green dye, subject to distribution along the vascular tree and confined to the capillary bed) and an extracellular ([3H]-D-mannitol, tracing passive transcapillary transfer across the capillary membrane into the interstitial fluid) tracer. The goal is to measure relevant parameters of transcapillary exchange in human skeletal muscle. We show that assuming an accurate description of blood flow heterogeneity is crucial for modeling, and in particular that adopting the well-studied cardiac muscle blood flow heterogeneity for skeletal muscle is inappropriate. For the same reason, the common method of estimating the input function of the distributed model via deconvolution cannot be used, since it assumes a known blood flow heterogeneity, either defined from the literature or measured when possible. We present a novel approach for estimating blood flow heterogeneity in each individual from the intravascular tracer data. When this newly estimated blood flow heterogeneity is used, a more satisfactory model fit is obtained and it is possible to reliably measure the capillary membrane permeability-surface area product and the interstitial fluid volume describing transcapillary transfer in vivo.
A new Bayesian Earthquake Analysis Tool (BEAT)
NASA Astrophysics Data System (ADS)
Vasyura-Bathke, Hannes; Dutta, Rishabh; Jónsson, Sigurjón; Mai, Martin
2017-04-01
Modern earthquake source estimation studies increasingly use non-linear optimization strategies to estimate kinematic rupture parameters, often considering geodetic and seismic data jointly. However, the optimization process is complex and consists of several steps that need to be followed in the earthquake parameter estimation procedure. These include pre-describing or modeling the fault geometry, calculating the Green's Functions (often assuming a layered elastic half-space), and estimating the distributed final slip and possibly other kinematic source parameters. Recently, Bayesian inference has become popular for estimating posterior distributions of earthquake source model parameters given measured/estimated/assumed data and model uncertainties. For instance, some research groups consider uncertainties of the layered medium and propagate these to the source parameter uncertainties. Other groups make use of informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed that efficiently explore the often high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational demands of these methods are high and estimation codes are rarely distributed along with the published results. Even if codes are made available, it is often difficult to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in earthquake source estimation, we undertook the effort of producing BEAT, a Python package that comprises all the above-mentioned features in one single programming environment. The package is built on top of the pyrocko seismological toolbox (www.pyrocko.org) and makes use of the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat) and we encourage and solicit contributions to the project. In this contribution, we present our strategy for developing BEAT, show application examples, and discuss future developments.
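BEAT itself wires pymc3 into a full seismological workflow; as a hedged illustration of the underlying idea only (not BEAT's API), the sketch below infers slip on three fault patches from a linear forward model d = G·s + noise with pymc3. The Green's function matrix, priors, and noise level are all invented for the example, and the calls assume a pymc3 3.x installation.

```python
import numpy as np
import pymc3 as pm

# Toy linear forward model: surface displacements d = G @ slip + noise.
# G, the data, and all priors are illustrative, not BEAT's actual setup.
rng = np.random.default_rng(1)
G = rng.normal(size=(20, 3))          # Green's functions for 3 fault patches
true_slip = np.array([1.0, 0.3, 0.6])
d_obs = G @ true_slip + rng.normal(0, 0.1, 20)

with pm.Model():
    slip = pm.HalfNormal("slip", sigma=2.0, shape=3)   # non-negative slip
    sigma = pm.HalfNormal("sigma", sigma=0.5)          # noise level
    pm.Normal("d", mu=pm.math.dot(G, slip), sigma=sigma, observed=d_obs)
    trace = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(pm.summary(trace)["mean"])      # posterior means of slip and sigma
```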
Phenomenological model of nuclear primary air showers
NASA Technical Reports Server (NTRS)
Tompkins, D. R., Jr.; Saterlie, S. F.
1976-01-01
The development of proton primary air showers is described in terms of a model based on a hadron core plus an electromagnetic cascade. The muon component is neglected. The model uses three parameters: a rate at which hadron core energy is converted into electromagnetic cascade energy, and a two-parameter sea-level shower-age function. By assuming an interaction length for the primary nucleus, the model is extended to nuclear primaries. Both models are applied over the energy range from 10^13 to 10^21 eV. Both models describe the size and age structure (neglecting muons) from a depth of 342 to 2052 g/sq cm.
Constant-parameter capture-recapture models
Brownie, C.; Hines, J.E.; Nichols, J.D.
1986-01-01
Jolly (1982, Biometrics 38, 301-321) presented modifications of the Jolly-Seber model for capture-recapture data, which assume constant survival and/or capture rates. Where appropriate, because of the reduced number of parameters, these models lead to more efficient estimators than the Jolly-Seber model. The tests to compare models given by Jolly do not make complete use of the data, and we present here the appropriate modifications, and also indicate how to carry out goodness-of-fit tests which utilize individual capture history information. We also describe analogous models for the case where young and adult animals are tagged. The availability of computer programs to perform the analysis is noted, and examples are given using output from these programs.
Accounting for inherent variability of growth in microbial risk assessment.
Marks, H M; Coleman, M E
2005-04-15
Risk assessments of pathogens need to account for the growth of small numbers of cells under varying conditions. In order to determine the possible risks that occur when there are small numbers of cells, stochastic models of growth are needed that capture the distribution of the number of cells over replicate trials of the same scenario or environmental conditions. This paper provides a simple stochastic growth model, accounting only for inherent cell-growth variability and assuming constant growth kinetic parameters, for an initial small number of cells assumed to be transforming from a stationary to an exponential phase. Two basic sets of microbial assumptions are considered: serial, where it is assumed that cells pass through a lag phase before entering the exponential phase of growth; and parallel, where it is assumed that lag and exponential phases develop in parallel. The model is based on first determining the distribution of the time when growth commences, and then modelling the conditional distribution of the number of cells. For the latter distribution, it is found that a Weibull distribution provides a simple approximation to the conditional distribution of the relative growth, so that the model developed in this paper can be easily implemented in risk assessments using commercial software packages.
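A minimal sketch of the "serial" variant can make the construction concrete: each of the n0 initial cells waits a random lag, then grows as a pure-birth (Yule) process, for which the population seeded by one cell at time t is geometrically distributed. All rates below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def grow_once(n0=5, lag_mean=2.0, mu=0.8, t_end=8.0):
    """Serial model: per-cell gamma-distributed lag, then Yule growth."""
    lags = rng.gamma(shape=4.0, scale=lag_mean / 4.0, size=n0)
    total = 0
    for lag in lags:
        t_grow = max(t_end - lag, 0.0)
        # A Yule process started from 1 cell with rate mu: the population
        # at time t is geometric with success probability exp(-mu*t).
        total += rng.geometric(np.exp(-mu * t_grow))
    return total

counts = np.array([grow_once() for _ in range(5000)])
print("mean cells:", counts.mean(), " CV:", counts.std() / counts.mean())
```

The replicate-to-replicate distribution of counts produced this way is what the paper approximates with a Weibull form for the relative growth.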
NASA Technical Reports Server (NTRS)
Patre, Parag; Joshi, Suresh M.
2011-01-01
Decentralized adaptive control is considered for systems consisting of multiple interconnected subsystems. It is assumed that each subsystem's parameters are uncertain and the interconnection parameters are not known. In addition, mismatch can exist between each subsystem and its reference model. A strictly decentralized adaptive control scheme is developed, wherein each subsystem has access only to its own state but has knowledge of all reference model states. The mismatch is estimated online for each subsystem, and the mismatch estimates are used to adaptively modify the corresponding reference models. The adaptive control scheme is extended to the case with actuator failures in addition to mismatch.
Model independent constraints on transition redshift
NASA Astrophysics Data System (ADS)
Jesus, J. F.; Holanda, R. F. L.; Pereira, S. H.
2018-05-01
This paper aims to put constraints on the transition redshift zt, which determines the onset of cosmic acceleration, in cosmological-model-independent frameworks. In order to perform our analyses, we consider a flat universe and assume a parametrization for the comoving distance DC(z) up to third degree in z, a second-degree parametrization for the Hubble parameter H(z), and a linear parametrization for the deceleration parameter q(z). For each case, we show that type Ia supernovae and H(z) data complement each other in the parameter space and tighter constraints for the transition redshift are obtained. By combining the type Ia supernovae observations and Hubble parameter measurements it is possible to constrain the values of zt, for each approach, as 0.806 ± 0.094, 0.870 ± 0.063 and 0.973 ± 0.058 at 1σ c.l., respectively. These approaches thus provide cosmological-model-independent estimates for this parameter.
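For the linear parametrization q(z) = q0 + q1·z, the transition redshift is simply the root of q(zt) = 0, i.e. zt = -q0/q1. The sketch below computes zt with first-order error propagation; the best-fit numbers and covariance are hypothetical placeholders, not the paper's fitted values.

```python
import numpy as np

# Hypothetical best-fit values and covariance for q(z) = q0 + q1*z;
# the paper's actual fits come from SNe Ia + H(z) data.
q0, q1 = -0.60, 0.65
cov = np.array([[0.04**2, -0.0006],
                [-0.0006, 0.05**2]])

zt = -q0 / q1
# First-order error propagation: dzt/dq0 = -1/q1, dzt/dq1 = q0/q1**2
J = np.array([-1.0 / q1, q0 / q1**2])
sigma_zt = np.sqrt(J @ cov @ J)
print(f"zt = {zt:.3f} +/- {sigma_zt:.3f}")
```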
Technique for predicting high-frequency stability characteristics of gaseous-propellant combustors
NASA Technical Reports Server (NTRS)
Priem, R. J.; Jefferson, Y. S. Y.
1973-01-01
A technique for predicting the stability characteristics of a gaseous-propellant rocket combustion system is developed based on a model that assumes coupling between the flow through the injector and the oscillating chamber pressure. The theoretical model uses a lumped parameter approach for the flow elements in the injection system plus wave dynamics in the combustion chamber. The injector flow oscillations are coupled to the chamber pressure oscillations with a delay time. Frequency and decay (or growth) rates are calculated for various combustor design and operating parameters to demonstrate the influence of various parameters on stability. Changes in oxidizer design parameters had a much larger influence on stability than a similar change in fuel parameters. A complete description of the computer program used to make these calculations is given in an appendix.
NASA Technical Reports Server (NTRS)
Holland, Frederic A., Jr.
2004-01-01
Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref. 1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to those from the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
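The conventional route described above can be made concrete with a short sketch. Assuming the common PERT conventions, mean = (min + 4·mode + max)/6 and standard deviation = range/6, the beta shape parameters follow algebraically from the moment equations (this illustrates the conventional method, not the Glenn in-house variant):

```python
def beta_from_three_points(lo, mode, hi):
    """Conventional PERT-style fit: mean=(lo+4*mode+hi)/6, sd=(hi-lo)/6."""
    mean = (lo + 4.0 * mode + hi) / 6.0
    var = ((hi - lo) / 6.0) ** 2
    # Standardize to [0, 1] and invert the beta moment equations
    m = (mean - lo) / (hi - lo)
    v = var / (hi - lo) ** 2
    alpha = m * (m * (1.0 - m) / v - 1.0)
    beta = (1.0 - m) * (m * (1.0 - m) / v - 1.0)
    return alpha, beta

# Symmetric check: min=0, mode=0.5, max=1 gives Beta(4, 4)
print(beta_from_three_points(0.0, 0.5, 1.0))
print(beta_from_three_points(10.0, 12.0, 20.0))   # skewed example
```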
Models of epidemics: when contact repetition and clustering should be included
Smieszek, Timo; Fiebig, Lena; Scholz, Roland W
2009-01-01
Background The spread of infectious disease is determined by biological factors, e.g. the duration of the infectious period, and social factors, e.g. the arrangement of potentially contagious contacts. Repetitiveness and clustering of contacts are known to be relevant factors influencing the transmission of droplet or contact transmitted diseases. However, we do not yet completely know under what conditions repetitiveness and clustering should be included to model disease spread realistically. Methods We compare two different types of individual-based models: one assumes random mixing without repetition of contacts, whereas the other assumes that the same contacts repeat day by day. The latter exists in two variants, with and without clustering. We systematically test and compare how the total size of an outbreak differs between these model types depending on the key parameters: transmission probability, number of contacts per day, duration of the infectious period, level of clustering, and proportion of repetitive contacts. Results The simulation runs under different parameter constellations provide the following results: the difference between the model types is highest for low numbers of contacts per day and low transmission probabilities. The number of contacts and the transmission probability have a greater influence on this difference than the duration of the infectious period. Even when only a minor part of the daily contacts is repetitive and clustered, there can be relevant differences compared to a purely random mixing model. Conclusion We show that random mixing models provide acceptable estimates of the total outbreak size if the number of contacts per day is high or if the per-contact transmission probability is high, as seen in typical childhood diseases such as measles. In the case of very short infectious periods, as in norovirus for instance, models assuming repeated contacts will also behave similarly to random mixing models. If the number of daily contacts or the transmission probability is low, as assumed for MRSA or Ebola, particular consideration should be given to the actual structure of potentially contagious contacts when designing the model. PMID:19563624
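A hedged minimal version of the comparison is sketched below: an SIR-type individual-based model where contacts are either redrawn every day (random mixing) or held fixed (repetition), with clustering omitted for brevity; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def outbreak_size(N=1000, k=4, p=0.05, d=6, repeat=False, days=300):
    """SIR-type individual-based model; returns total outbreak size."""
    days_left = np.zeros(N, int)            # days of infectiousness left
    recovered = np.zeros(N, bool)
    days_left[0] = d                        # one index case
    # Fixed daily contact lists for the repetition model
    fixed = [rng.choice(N, k, replace=False) for _ in range(N)] if repeat else None
    for _ in range(days):
        infectious = np.flatnonzero(days_left > 0)
        if infectious.size == 0:
            break
        new = []
        for i in infectious:
            contacts = fixed[i] if repeat else rng.choice(N, k, replace=False)
            for j in contacts:
                if days_left[j] == 0 and not recovered[j] and rng.random() < p:
                    new.append(j)
        recovered[days_left == 1] = True    # last infectious day ends
        days_left[days_left > 0] -= 1
        days_left[new] = d                  # newly infected start tomorrow
    return int(recovered.sum() + (days_left > 0).sum())

print("random mixing:", np.mean([outbreak_size(repeat=False) for _ in range(20)]))
print("repeated contacts:", np.mean([outbreak_size(repeat=True) for _ in range(20)]))
```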
Aerodynamic parameter estimation via Fourier modulating function techniques
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1995-01-01
Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier-based modulating functions. Assuming white measurement noise for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data, owing to a special property of Shinbrot-type modulating functions. The method is applied to perturbation equation modeling of the longitudinal and lateral dynamics of a high performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well-established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear time-varying differential system models.
Generalized ghost pilgrim dark energy in F(T,TG) cosmology
NASA Astrophysics Data System (ADS)
Sharif, M.; Nazir, Kanwal
2016-07-01
This paper is devoted to the study of the generalized ghost pilgrim dark energy (PDE) model in F(T,TG) gravity with a flat Friedmann-Robertson-Walker (FRW) universe. In this scenario, we reconstruct F(T,TG) models and evaluate the corresponding equation of state (EoS) parameter for different choices of the scale factor. We assume a power-law scale factor, a scale factor for the unification of two phases, an intermediate scale factor, and a bouncing scale factor. We study the behavior of the reconstructed models and EoS parameters graphically. It is found that all the reconstructed models show decreasing behavior for the PDE parameter u = -2. On the other hand, the EoS parameter indicates a transition from dust-like matter to the phantom era for all choices of the scale factor except the intermediate one, for which it remains less than -1. We conclude that all the results are in agreement with the PDE phenomenon.
Tomblin Murphy, Gail; Birch, Stephen; MacKenzie, Adrian; Rigby, Janet
2016-12-12
As part of efforts to inform the development of a global human resources for health (HRH) strategy, a comprehensive methodology for estimating HRH supply and requirements was described in a companion paper. The purpose of this paper is to demonstrate the application of that methodology, using data publicly available online, to simulate the supply of and requirements for midwives, nurses, and physicians in the 32 high-income member countries of the Organisation for Economic Co-operation and Development (OECD) up to 2030. A model was used that combines a stock-and-flow approach to simulate the future supply of each profession in each country, adjusted according to levels of HRH participation and activity, with a needs-based approach to simulate future HRH requirements. Most of the data to populate the model were obtained from the OECD's online indicator database. Other data were obtained from targeted internet searches and documents gathered as part of the companion paper. Relevant recent measures for each model parameter were found for at least one of the included countries. In total, 35% of the desired current data elements were found; assumed values were used for the other current data elements. Multiple scenarios were used to demonstrate the sensitivity of the simulations to different assumed future values of model parameters. Depending on the assumed future values of each model parameter, the simulated HRH gaps across the included countries could range from shortfalls of 74 000 midwives, 3.2 million nurses, and 1.2 million physicians to surpluses of 67 000 midwives, 2.9 million nurses, and 1.0 million physicians by 2030. Despite important gaps in the data publicly available online and the short time available to implement it, this paper demonstrates the basic feasibility of a more comprehensive, population needs-based approach to estimating HRH supply and requirements than most of those currently in use. HRH planners in individual countries, working with their respective stakeholder groups, would have more direct access to data on the relevant planning parameters and would thus be in an even better position to implement such an approach.
Denitrogenation model for vacuum tank degasser
NASA Astrophysics Data System (ADS)
Gobinath, R.; Vetrivel Murugan, R.
2018-02-01
Nitrogen in steel is both beneficial and detrimental, depending on the grade of steel and its application. To obtain the desired low nitrogen content during the vacuum degassing process, the VD parameters, namely vacuum level, argon flow rate and holding time, have to be optimized depending upon the initial nitrogen level. In this work a mathematical model to simulate nitrogen removal in a tank degasser is developed, and how various VD parameters affect nitrogen removal is studied. Ladle water model studies with bottom purging have shown two distinct flow regions, namely the plume region and the region outside the plume. The two regions are treated as two separate reactors exchanging mass between them, and complete mixing is assumed in both reactors. In the plume region, the transfer of nitrogen to a single bubble is simulated. At the gas-liquid metal interface (bubble interface) thermodynamic equilibrium is assumed, and the transfer of nitrogen from the bulk liquid metal in the plume region to the gas-metal interface is obtained using mass transport principles. The model predicts the variation of nitrogen content in both reactors with time. The model is validated against an industrial process, and the predicted results were found to be in fair agreement with the measured results.
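A hedged sketch of the two-reactor idea: the plume and bulk regions exchange mass at a circulation rate, and nitrogen is removed only in the plume through an equilibrium-driven flux at the bubble interface. All volumes, rates, and the first-order driving force below are illustrative simplifications of the paper's mass-transport treatment.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-reactor model (plume + bulk); all parameter values illustrative.
Vp, Vb = 5.0, 55.0        # reactor masses of steel [t]
Q = 20.0                  # circulation (mass exchange) rate [t/min]
kA = 1.5                  # effective N removal coefficient in plume [t/min]
N_eq = 10.0               # equilibrium N at the bubble interface [ppm]

def rhs(t, y):
    Np, Nb = y            # nitrogen content in plume and bulk [ppm]
    dNp = (Q * (Nb - Np) - kA * (Np - N_eq)) / Vp
    dNb = Q * (Np - Nb) / Vb
    return [dNp, dNb]

sol = solve_ivp(rhs, (0.0, 30.0), [60.0, 60.0])   # 30 min of degassing
print("bulk N after 30 min: %.1f ppm" % sol.y[1, -1])
```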
Improving RNA nearest neighbor parameters for helices by going beyond the two-state model.
Spasic, Aleksandar; Berger, Kyle D; Chen, Jonathan L; Seetin, Matthew G; Turner, Douglas H; Mathews, David H
2018-06-01
RNA folding free energy change nearest neighbor parameters are widely used to predict folding stabilities of secondary structures. They were determined by linear regression to datasets of optical melting experiments on small model systems. Traditionally, the optical melting experiments are analyzed assuming a two-state model, i.e. a structure is either complete or denatured. Experimental evidence, however, shows that structures exist in an ensemble of conformations. Partition functions calculated with existing nearest neighbor parameters predict that secondary structures can be partially denatured, which also directly conflicts with the two-state model. Here, a new approach for determining RNA nearest neighbor parameters is presented. Available optical melting data for 34 Watson-Crick helices were fit directly to a partition function model that allows an ensemble of conformations. Fitting parameters were the enthalpy and entropy changes for helix initiation, terminal AU pairs, stacks of Watson-Crick pairs and disordered internal loops. The resulting set of nearest neighbor parameters shows a 38.5% improvement in the sum of residuals in fitting the experimental melting curves compared to the current literature set.
Ip, Ryan H L; Li, W K; Leung, Kenneth M Y
2013-09-15
Large-scale environmental remediation projects applied to sea water always involve large amounts of capital investment. Rigorous effectiveness evaluations of such projects are, therefore, necessary and essential for policy review and future planning. This study investigates the effectiveness of environmental remediation using three different Seemingly Unrelated Regression (SUR) time series models with intervention effects: Model (1), assuming no correlation within or across variables; Model (2), assuming no correlation across variables but allowing correlations within a variable across different sites; and Model (3), allowing all possible correlations among variables (i.e., an unrestricted model). The results suggest that the unrestricted SUR model is the most reliable one, consistently having the smallest variation in the estimated model parameters. We discuss our results with reference to marine water quality management in Hong Kong, bringing managerial issues into consideration. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhu, Shun-Peng; Huang, Hong-Zhong; Li, Haiqing; Sun, Rui; Zuo, Ming J.
2011-06-01
Based on ductility exhaustion theory and the generalized energy-based damage parameter, a new viscosity-based life prediction model is introduced to account for the mean strain/stress effects in the low cycle fatigue regime. The loading waveform parameters and cyclic hardening effects are also incorporated within this model. It is assumed that damage accrues by means of viscous flow and ductility consumption is only related to plastic strain and creep strain under high temperature low cycle fatigue conditions. In the developed model, dynamic viscosity is used to describe the flow behavior. This model provides a better prediction of Superalloy GH4133's fatigue behavior when compared to Goswami's ductility model and the generalized damage parameter. Under non-zero mean strain conditions, moreover, the proposed model provides more accurate predictions of Superalloy GH4133's fatigue behavior than that with zero mean strains.
Methods of comparing associative models and an application to retrospective revaluation.
Witnauer, James E; Hutchings, Ryan; Miller, Ralph R
2017-11-01
Contemporary theories of associative learning are increasingly complex, which necessitates the use of computational methods to reveal predictions of these models. We argue that comparisons across multiple models in terms of goodness of fit to empirical data from experiments often reveal more about the actual mechanisms of learning and behavior than do simulations of only a single model. Such comparisons are best made when the values of free parameters are discovered through some optimization procedure based on the specific data being fit (e.g., hill climbing), so that the comparisons hinge on the psychological mechanisms assumed by each model rather than being biased by using parameters that differ in quality across models with respect to the data being fit. Statistics like the Bayesian information criterion facilitate comparisons among models that have different numbers of free parameters. These issues are examined using retrospective revaluation data. Copyright © 2017 Elsevier B.V. All rights reserved.
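The BIC-based comparison the authors describe can be sketched in a few lines: fit candidate models by maximum likelihood (here, least squares under Gaussian errors with the variance profiled out) and compare BIC = k·ln(n) − 2·ln(L̂), where k counts free parameters. The data and models below are synthetic placeholders, not associative learning models.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 60)
y = 1.0 + 0.5 * x + rng.normal(0, 0.1, x.size)     # synthetic data

def bic(y, yhat, k):
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    # Gaussian maximum log-likelihood with sigma^2 = rss/n profiled out
    loglik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    return k * np.log(n) - 2 * loglik

for deg in (1, 3):                                 # two candidate models
    coef = np.polyfit(x, y, deg)
    yhat = np.polyval(coef, x)
    # free parameters: deg+1 coefficients plus the noise variance
    print(f"degree {deg}: BIC = {bic(y, yhat, deg + 2):.1f}")
```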
Dynamics of thin-shell wormholes with different cosmological models
NASA Astrophysics Data System (ADS)
Sharif, Muhammad; Mumtaz, Saadia
This work is devoted to investigating the stability of thin-shell wormholes in Einstein-Hoffmann-Born-Infeld electrodynamics. We also study the attractive and repulsive characteristics of these configurations. A general equation of state is considered in the form of a linear perturbation, which explores the stability of the respective wormhole solutions. We assume Chaplygin, linear and logarithmic gas models to study exotic matter at the thin shell and evaluate stability regions for different values of the involved parameters. It is concluded that the Hoffmann-Born-Infeld parameter and the electric charge enhance the stability regions.
Probability distribution functions for intermittent scrape-off layer plasma fluctuations
NASA Astrophysics Data System (ADS)
Theodorsen, A.; Garcia, O. E.
2018-03-01
A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed to be exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal, which cannot capture fluctuations in, for example, the electric potential and the radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus, estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
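As a hedged illustration of characteristic-function-based estimation: for the exponential-amplitude case the stationary distribution of such a filtered Poisson process is a Gamma distribution with shape given by the intermittency parameter γ and scale by the mean amplitude, so its characteristic function is (1 − iu⟨A⟩)^(−γ). The sketch fits that form to the empirical characteristic function of synthetic Gamma samples (a stand-in for realizations of the full process):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)
gamma_true, A_mean = 2.5, 1.0                  # intermittency and mean amplitude
x = rng.gamma(gamma_true, A_mean, size=20000)  # synthetic stationary samples

u = np.linspace(0.1, 5.0, 50)                  # evaluation grid for the CF
ecf = np.array([np.mean(np.exp(1j * ui * x)) for ui in u])

def resid(p):
    g, a = p
    model = (1 - 1j * u * a) ** (-g)           # Gamma characteristic function
    return np.concatenate([(model - ecf).real, (model - ecf).imag])

fit = least_squares(resid, x0=[1.0, 0.5], bounds=([0.01, 0.01], [20, 20]))
print("estimated (gamma, <A>):", fit.x)
```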
Birefringence and hidden photons
NASA Astrophysics Data System (ADS)
Arza, Ariel; Gamboa, J.
2018-05-01
We study a model where photons interact with hidden photons and millicharged particles through a kinetic mixing term. Particularly, we focus on vacuum birefringence effects and we find a bound for the millicharged parameter assuming that hidden photons are a piece of the local dark matter density.
Attractor learning in synchronized chaotic systems in the presence of unresolved scales
NASA Astrophysics Data System (ADS)
Wiegerinck, W.; Selten, F. M.
2017-12-01
Recently, supermodels consisting of an ensemble of interacting models, synchronizing on a common solution, have been proposed as an alternative to the common non-interactive multi-model ensembles in order to improve climate predictions. The connection terms in the interacting ensemble are to be optimized based on the data. The supermodel approach has been successfully demonstrated in a number of simulation experiments with an assumed ground truth and a set of good, but imperfect models. The supermodels were optimized with respect to their short-term prediction error. Nevertheless, they produced long-term climatological behavior that was close to the long-term behavior of the assumed ground truth, even in cases where the long-term behavior of the imperfect models was very different. In these supermodel experiments, however, a perfect model class scenario was assumed, in which the ground truth and imperfect models belong to the same model class and only differ in parameter setting. In this paper, we consider the imperfect model class scenario, in which the ground truth model class is more complex than the model class of imperfect models due to unresolved scales. We perform two supermodel experiments in two toy problems. The first one consists of a chaotically driven Lorenz 63 oscillator ground truth and two Lorenz 63 oscillators with constant forcings as imperfect models. The second one is more realistic and consists of a global atmosphere model as ground truth and imperfect models that have perturbed parameters and reduced spatial resolution. In both problems, we find that supermodel optimization with respect to short-term prediction error can lead to a long-term climatological behavior that is worse than that of the imperfect models. However, we also show that attractor learning can remedy this problem, leading to supermodels with long-term behavior superior to the imperfect models.
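A minimal supermodel sketch in the perfect-model-class spirit: two Lorenz 63 systems with perturbed parameters, nudged toward each other through linear connection terms. The connection coefficient here is hand-picked, whereas in the work above such coefficients are learned from data (and, for the imperfect-class case, via attractor learning).

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(v, sigma, rho, beta=8.0 / 3.0):
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# Two imperfect models (perturbed parameters) coupled by connection terms.
PARAMS = [(13.0, 26.0), (7.0, 30.0)]   # (sigma, rho) pairs, assumed imperfect
C = 1.0                                # connection coefficient, hand-picked

def supermodel(t, state):
    v1, v2 = state[:3], state[3:]
    d1 = lorenz(v1, *PARAMS[0]) + C * (v2 - v1)   # nudge model 1 toward 2
    d2 = lorenz(v2, *PARAMS[1]) + C * (v1 - v2)   # and vice versa
    return np.concatenate([d1, d2])

sol = solve_ivp(supermodel, (0, 50), [1, 1, 20, -1, 2, 18], max_step=0.01)
ensemble_mean = 0.5 * (sol.y[:3] + sol.y[3:])     # the supermodel solution
print("mean z of supermodel trajectory:", ensemble_mean[2].mean())
```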
Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing
NASA Astrophysics Data System (ADS)
Ou, Meiying; Li, Shihua; Wang, Chaoli
2013-12-01
This paper investigates the finite-time tracking control problem for multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling and that the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interactions. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and by using a finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. Rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.
Molina, J; Sued, M; Valdora, M
2018-06-05
Generalized linear models are often assumed to fit propensity scores, which are used to compute inverse probability weighted (IPW) estimators. To derive the asymptotic properties of IPW estimators, the propensity score is supposed to be bounded away from zero. This condition is known in the literature as strict positivity (or the positivity assumption), and, in practice, when it does not hold, IPW estimators are very unstable and have large variability. Although strict positivity is often assumed, it is not upheld when some of the covariates are unbounded. In real data sets, a data-generating process that violates the positivity assumption may lead to wrong inference because of inaccurate estimation. In this work, we attempt to reconcile the strict positivity condition with the theory of generalized linear models by incorporating an extra parameter, which results in an explicit lower bound for the propensity score. The additional parameter fulfils the overlap assumption in the causal framework. Copyright © 2018 John Wiley & Sons, Ltd.
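A hedged sketch of the construction: write the propensity score as e(x) = δ + (1 − δ)·expit(xβ), so that δ provides the explicit lower bound, and use the bounded scores in the usual IPW estimator. The logistic form, δ value, and data-generating numbers are illustrative, not the paper's specification.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(6)
n = 5000
x = rng.normal(size=n)                            # unbounded covariate
delta = 0.05                                      # explicit lower bound
e = delta + (1 - delta) * expit(-1.0 + 0.8 * x)   # bounded propensity score
t = rng.binomial(1, e)                            # treatment assignment
y = 2.0 + 1.0 * t + 0.5 * x + rng.normal(0, 1, n) # outcome, true effect = 1

# IPW estimate of the average treatment effect with the bounded scores
ate = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))
print("IPW ATE estimate:", ate)
```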
Automatic Parametrization of Somatosensory Evoked Potentials With Chirp Modeling.
Vayrynen, Eero; Noponen, Kai; Vipin, Ashwati; Thow, X Y; Al-Nashash, Hasan; Kortelainen, Jukka; All, Angelo
2016-09-01
In this paper, an approach using polynomial phase chirp signals to model somatosensory evoked potentials (SEPs) is proposed. SEP waveforms are assumed to be impulses undergoing group velocity dispersion while propagating along a multipath neural connection. A mathematical analysis of pulse dispersion resulting in chirp signals is performed. An automatic parameterization of SEPs using chirp models is proposed, with a Particle Swarm Optimization algorithm used to optimize the model parameters. Features describing the latencies and amplitudes of SEPs are automatically derived. A rat model is then used to evaluate the automatic parameterization of SEPs in two experimental cases, i.e., anesthesia level and spinal cord injury (SCI). Experimental results show that chirp-based model parameters and the derived SEP features are significant in describing both anesthesia level and SCI changes. The proposed automatic optimization-based approach for extracting chirp parameters offers potential for detailed SEP analysis in future studies. The method implementation in the Matlab technical computing language is provided online.
Annealed Importance Sampling for Neural Mass Models
Penny, Will; Sengupta, Biswa
2016-01-01
Neural Mass Models provide a compact description of the dynamical activity of cell populations in neocortical regions. Moreover, models of regional activity can be connected together into networks, and inferences made about the strength of connections, using M/EEG data and Bayesian inference. To date, however, Bayesian methods have been largely restricted to the Variational Laplace (VL) algorithm which assumes that the posterior distribution is Gaussian and finds model parameters that are only locally optimal. This paper explores the use of Annealed Importance Sampling (AIS) to address these restrictions. We implement AIS using proposals derived from Langevin Monte Carlo (LMC) which uses local gradient and curvature information for efficient exploration of parameter space. In terms of the estimation of Bayes factors, VL and AIS agree about which model is best but report different degrees of belief. Additionally, AIS finds better model parameters and we find evidence of non-Gaussianity in their posterior distribution. PMID:26942606
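A toy one-dimensional AIS sketch conveys the mechanics: temper from the prior to prior × likelihood along a β-ladder, apply one MCMC step per temperature (a random-walk Metropolis step here, rather than the LMC proposals used in the paper), and accumulate log-importance-weights whose average estimates the model evidence.

```python
import numpy as np

rng = np.random.default_rng(7)

def log_prior(x):      # standard normal prior
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def log_like(x):       # toy likelihood: data explained by x ~ N(2, 0.5^2)
    return -0.5 * ((x - 2.0) / 0.5) ** 2 - np.log(0.5 * np.sqrt(2 * np.pi))

betas = np.linspace(0, 1, 50)        # annealing ladder
n_runs = 500
logw = np.zeros(n_runs)
for r in range(n_runs):
    x = rng.normal()                 # draw from the prior (beta = 0)
    for b0, b1 in zip(betas[:-1], betas[1:]):
        logw[r] += (b1 - b0) * log_like(x)     # weight update
        # One random-walk Metropolis step targeting prior * like^b1
        prop = x + 0.5 * rng.normal()
        loga = (log_prior(prop) + b1 * log_like(prop)
                - log_prior(x) - b1 * log_like(x))
        if np.log(rng.random()) < loga:
            x = prop

# Log evidence estimate via log-mean-exp of the weights
log_z = np.logaddexp.reduce(logw) - np.log(n_runs)
print("AIS log-evidence estimate:", log_z)
```

For this toy problem the exact log-evidence is log N(2; 0, 1 + 0.25), about -2.63, which the estimate should approach as the ladder and number of runs grow.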
Decision Models for Determining the Optimal Life Test Sampling Plans
NASA Astrophysics Data System (ADS)
Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.
2010-11-01
A life test sampling plan is a technique consisting of sampling, inspection, and decision making to determine the acceptance or rejection of a batch of products by experiments examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution or a two-parameter Weibull distribution with the assumption that the shape parameter is known. Such oversimplified assumptions can facilitate the follow-up analyses, but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential, and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem because a good plan not only helps producers save testing time and reduce testing cost, but can also positively affect the image of the product and thus attract more consumers to buy it. This paper develops frequentist (non-Bayesian) decision models for determining the optimal life test sampling plans, with the aim of cost minimization, by identifying the appropriate number of product failures in a sample that should be used as a threshold in judging the rejection of a batch. The two-parameter exponential and Weibull distributions with two unknown parameters are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application is employed to demonstrate the proposed approach.
On Local Ionization Equilibrium and Disk Winds in QSOs
NASA Astrophysics Data System (ADS)
Pereyra, Nicolas A.
2014-11-01
We present theoretical C IV λλ1548,1550 absorption line profiles for QSOs calculated assuming the accretion disk wind (ADW) scenario. The results suggest that the multiple absorption troughs seen in many QSOs may be due to the discontinuities in the ion balance of the wind (caused by X-rays), rather than discontinuities in the density/velocity structure. The profiles are calculated from a 2.5-dimensional time-dependent hydrodynamic simulation of a line-driven disk wind for a typical QSO black hole mass, a typical QSO luminosity, and for a standard Shakura-Sunyaev disk. We include the effects of ionizing X-rays originating from within the inner disk radius by assuming that the wind is shielded from the X-rays from a certain viewing angle up to 90° ("edge on"). In the shielded region, we assume constant ionization equilibrium, and thus constant line-force parameters. In the non-shielded region, we assume that both the line-force and the C IV populations are nonexistent. The model can account for P-Cygni absorption troughs (produced at edge on viewing angles), multiple absorption troughs (produced at viewing angles close to the angle that separates the shielded region and the non-shielded region), and for detached absorption troughs (produced at an angle in between the first two absorption line types); that is, the model can account for the general types of broad absorption lines seen in QSOs as a viewing angle effect. The steady nature of ADWs, in turn, may account for the steady nature of the absorption structure observed in multiple-trough broad absorption line QSOs. The model parameters are M_bh = 10^9 M_⊙ and L_disk = 10^47 erg s^-1.
Mino, H
2007-01-01
The aim is to estimate the parameters, namely the impulse response (IR) functions, of linear time-invariant systems generating the intensity processes of shot-noise-driven doubly stochastic Poisson processes (SND-DSPPs), under the assumption that multivariate presynaptic spike trains and postsynaptic spike trains can be modeled by SND-DSPPs. An explicit formula for estimating the IR functions from observations of the multivariate input processes of the linear systems and the corresponding counting process (output process) is derived utilizing the expectation maximization (EM) algorithm. The validity of the estimation formula was verified through Monte Carlo simulations in which two presynaptic spike trains and one postsynaptic spike train were assumed to be observable. The IR functions estimated on the basis of the proposed identification method were close to the true IR functions. The proposed method will play an important role in identifying the input-output relationship of pre- and postsynaptic neural spike trains in practical situations.
Non-LTE analysis of the Ofpe/WN9 star HDE 269227 (R84)
NASA Technical Reports Server (NTRS)
Schmutz, Werner; Leitherer, Claus; Hubeny, Ivan; Vogel, Manfred; Hamann, Wolf-Rainer
1991-01-01
The paper presents the results of a spectral analysis of the Ofpe/WN9 star HD 269227 (R84), which assumes a spherically expanding atmosphere to find solutions for equations of radiative transfer. The spectra of hydrogen and helium were predicted with a non-LTE model. Six stellar parameters were determined for R84. The shape of the velocity law is empirically found, since it can be probed from the terminal velocity of the wind. The six stellar parameters are further employed in a hydrodynamic model where stellar wind is assumed to be directed by radiation pressure, duplicating the mass-loss rate and the terminal wind velocity. The velocity laws found by computation and analysis are found to agree, supporting the theory of radiation-driven stellar wind. R84 is surmised to be a post-red supergiant which lost half of its initial mass, possibly during the red-supergiant phase. This mass loss is also suggested by its spectroscopic similarity to S Doradus.
Managing distribution changes in time series prediction
NASA Astrophysics Data System (ADS)
Matias, J. M.; Gonzalez-Manteiga, W.; Taboada, J.; Ordonez, C.
2006-07-01
When a problem is modeled statistically, a single distribution model is usually postulated and assumed to be valid for the entire space. Nonetheless, this practice may be somewhat unrealistic in certain application areas, in which the conditions of the process that generates the data may change; as far as we are aware, however, no techniques have been developed to tackle this problem. This article proposes a technique for modeling and predicting this change in time series with a view to improving estimates and predictions. The technique is applied, among other models, to the recently proposed hypernormal distribution. When tested on real data from a range of stock market indices, the technique produces better results than when a single distribution model is assumed to be valid for the entire period of time studied. Moreover, when a global model is postulated, it is highly recommended to select the hypernormal distribution parameter in the same likelihood maximization process.
On predicting monitoring system effectiveness
NASA Astrophysics Data System (ADS)
Cappello, Carlo; Sigurdardottir, Dorotea; Glisic, Branko; Zonta, Daniele; Pozzi, Matteo
2015-03-01
While the objective of structural design is to achieve stability with an appropriate level of reliability, the design of systems for structural health monitoring is performed to identify a configuration that enables acquisition of data with an appropriate level of accuracy in order to understand the performance of a structure or its condition state. However, a rational standardized approach for monitoring system design is not fully available. Hence, when engineers design a monitoring system, their approach is often heuristic with performance evaluation based on experience, rather than on quantitative analysis. In this contribution, we propose a probabilistic model for the estimation of monitoring system effectiveness based on information available in prior condition, i.e. before acquiring empirical data. The presented model is developed considering the analogy between structural design and monitoring system design. We assume that the effectiveness can be evaluated based on the prediction of the posterior variance or covariance matrix of the state parameters, which we assume to be defined in a continuous space. Since the empirical measurements are not available in prior condition, the estimation of the posterior variance or covariance matrix is performed considering the measurements as a stochastic variable. Moreover, the model takes into account the effects of nuisance parameters, which are stochastic parameters that affect the observations but cannot be estimated using monitoring data. Finally, we present an application of the proposed model to a real structure. The results show how the model enables engineers to predict whether a sensor configuration satisfies the required performance.
The magnetotelluric response over 2D media with resistivity frequency dispersion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mauriello, P.; Patella, D.; Siniscalchi, A.
1996-09-01
The authors investigate the magnetotelluric response of two-dimensional bodies, characterized by the presence of low-frequency dispersion phenomena of the electrical parameters. The Cole-Cole dispersion model is assumed to represent the frequency dependence of the impedivity complex function, defined as the inverse of Stoyer's admittivity complex parameter. To simulate real geological situations, they consider three structural models, representing a sedimentary basin, a geothermal system and a magma chamber, assumed to be partially or totally dispersive. From a detailed study of the frequency and space behaviors of the magnetotelluric parameters, taking known non-dispersive results as reference, they outline the main peculiarities of the local distortion effects caused by the presence of dispersion in the target media. Finally, they discuss the interpretive errors which can be made by neglecting the dispersion phenomena. The apparent dispersion function, which was defined in a previous paper to describe similar effects in the one-dimensional case, is again used as a reliable indicator of the location, shape and spatial extent of the dispersive bodies. The general result of this study is a marked improvement in the resolution power of the magnetotelluric method.
Observational constraint on spherical inhomogeneity with CMB and local Hubble parameter
NASA Astrophysics Data System (ADS)
Tokutake, Masato; Ichiki, Kiyotomo; Yoo, Chul-Moon
2018-03-01
We derive an observational constraint on a spherical inhomogeneity of the void centered at our position from the angular power spectrum of the cosmic microwave background (CMB) and local measurements of the Hubble parameter. The late-time behaviour of the void is assumed to be well described by the so-called Λ-Lemaître-Tolman-Bondi (ΛLTB) solution. Then, we restrict the models to asymptotically homogeneous models, each of which is approximated by a flat Friedmann-Lemaître-Robertson-Walker model. The late-time ΛLTB models are parametrized by four parameters, including the value of the cosmological constant and the local Hubble parameter. The other two parameters are used to parametrize the observed distance-redshift relation. Then, the ΛLTB models are constructed so that they are compatible with the given distance-redshift relation. Including conventional parameters for the CMB analysis, we characterize our models by seven parameters in total. The local Hubble measurements are reflected in the prior distribution of the local Hubble parameter. As a result of a Markov chain Monte Carlo analysis of the CMB temperature and polarization anisotropies, we find that inhomogeneous universe models with a vanishing cosmological constant are ruled out, as expected. However, a significant under-density around us is still compatible with the angular power spectrum of the CMB and the local Hubble parameter.
Evolution of non-interacting entropic dark energy and its phantom nature
NASA Astrophysics Data System (ADS)
Mathew, Titus K.; Murali, Chinthak; Shejeelammal, J.
2016-04-01
Assuming the form of the entropic dark energy (EDE) as it arises from the surface term in the Einstein-Hilbert action, its evolution is analyzed in an expanding flat universe. The model parameters were evaluated by constraining the model using the Union data on Type Ia supernovae. We found that in the non-interacting case the model predicts an early decelerated phase and a later accelerated phase at the background level. The evolutions of the Hubble parameter, dark energy (DE) density, equation of state parameter and deceleration parameter were obtained. The model, however, hardly seems to support the linear perturbation growth required for structure formation. We also found that the EDE shows phantom nature for redshifts z < 0.257. During the phantom epoch, the model predicts a big rip, at which both the scale factor of expansion and the DE density become infinitely large; the big rip time is found to be around 36 gigayears from now.
A hierarchical Bayesian GEV model for improving local and regional flood quantile estimates
NASA Astrophysics Data System (ADS)
Lima, Carlos H. R.; Lall, Upmanu; Troy, Tara; Devineni, Naresh
2016-10-01
We estimate local and regional Generalized Extreme Value (GEV) distribution parameters for flood frequency analysis in a multilevel, hierarchical Bayesian framework, to explicitly model and reduce uncertainties. As prior information for the model, we assume that the GEV location and scale parameters for each site come from independent log-normal distributions, whose mean parameter scales with the drainage area. From empirical and theoretical arguments, the shape parameter for each site is shrunk towards a common mean. Non-informative prior distributions are assumed for the hyperparameters, and the MCMC method is used to sample from the joint posterior distribution. The model is tested using annual maximum series from 20 streamflow gauges located in an 83,000 km² flood-prone basin in Southeast Brazil. The results show a significant reduction in the uncertainty of flood quantile estimates relative to the traditional GEV model, particularly for sites with shorter records. For return periods within the range of the data (around 50 years), the Bayesian credible intervals for the flood quantiles tend to be narrower than the classical confidence limits based on the delta method. As the return period increases beyond the range of the data, the confidence limits from the delta method become unreliable and the Bayesian credible intervals provide a way to estimate satisfactory confidence bands for the flood quantiles considering parameter uncertainties and regional information. In order to evaluate the applicability of the proposed hierarchical Bayesian model for regional flood frequency analysis, we estimate flood quantiles for three randomly chosen out-of-sample sites and compare them with classical estimates using the index flood method. The posterior distributions of the scaling law coefficients are used to define the predictive distributions of the GEV location and scale parameters for the out-of-sample sites given only their drainage areas, and the posterior distribution of the average shape parameter is taken as the regional predictive distribution for this parameter. While the index flood method does not provide a straightforward way to consider the uncertainties in the index flood and in the regional parameters, the results obtained here show that the proposed Bayesian method is able to produce adequate credible intervals for flood quantiles that are in accordance with empirical estimates.
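The at-site building block of such an analysis is easy to sketch with SciPy (synthetic data, maximum likelihood rather than the hierarchical Bayesian fit used above); the hierarchical layer then shrinks and pools these parameters across sites:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(8)
# Synthetic annual maximum series for one gauge [m^3/s], not real data.
# Note scipy's shape convention: c = -xi, so c < 0 is the heavy-tailed case.
ams = genextreme.rvs(c=-0.1, loc=500, scale=150, size=40, random_state=rng)

c, loc, scale = genextreme.fit(ams)          # at-site ML estimates
T = 50                                       # return period [years]
qT = genextreme.ppf(1 - 1 / T, c, loc, scale)
print(f"at-site {T}-year flood quantile: {qT:.0f} m^3/s")
```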
Combined proportional and additive residual error models in population pharmacokinetic modelling.
Proost, Johannes H
2017-11-15
In pharmacokinetic modelling, a combined proportional and additive residual error model is often preferred over a purely proportional or additive residual error model. Different approaches have been proposed, but a comparison between approaches is still lacking. The theoretical background of the methods is described. Method VAR assumes that the variance of the residual error is the sum of the statistically independent proportional and additive components; this method can be coded in three ways. Method SD assumes that the standard deviation of the residual error is the sum of the proportional and additive components. Using datasets from the literature and simulations based on these datasets, the methods are compared using NONMEM. The different codings of method VAR yield identical results. Using method SD, the values of the parameters describing the residual error are lower than for method VAR, but the values of the structural parameters and their inter-individual variability are hardly affected by the choice of method. Both methods are valid approaches in combined proportional and additive residual error modelling, and selection may be based on the objective function value (OFV). When the result of an analysis is used for simulation purposes, it is essential that the simulation tool uses the same method as used during the analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
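A short numerical illustration of the two combinations (values are arbitrary): method VAR adds the proportional and additive components in variance, method SD adds them in standard deviation, so for identical components method SD implies a larger total error, consistent with its fitted error parameters coming out lower.

```python
import numpy as np

f = np.array([0.1, 1.0, 10.0, 100.0])   # model predictions (e.g. concentration)
add, prop = 0.05, 0.15                   # additive and proportional components

sd_var = np.sqrt(add**2 + (prop * f) ** 2)   # method VAR: variances add
sd_sd = add + prop * f                       # method SD: SDs add

for fi, a, b in zip(f, sd_var, sd_sd):
    print(f"f={fi:7.2f}  SD(VAR)={a:7.3f}  SD(SD)={b:7.3f}")
```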
Spatio-Temporal EEG Models for Brain Interfaces
Gonzalez-Navarro, P.; Moghadamfalahi, M.; Akcakaya, M.; Erdogmus, D.
2016-01-01
Multichannel electroencephalography (EEG) is widely used in non-invasive brain computer interfaces (BCIs) for user intent inference. EEG can be assumed to be a Gaussian process with unknown mean and autocovariance, and the estimation of these parameters is required for BCI inference. However, the relatively high dimensionality of the EEG feature vectors with respect to the number of labeled observations leads to rank-deficient covariance matrix estimates. In this manuscript, to overcome ill-conditioned covariance estimation, we propose a structure for the covariance matrices of the multichannel EEG signals. Specifically, we assume that these covariances can be modeled as a Kronecker product of temporal and spatial covariances. Our results on experimental data collected from users of a letter-by-letter typing BCI show that, with fewer parameters to estimate, the system can achieve higher classification accuracies compared to a method that uses full unstructured covariance estimation. Moreover, to illustrate that the proposed Kronecker product structure could enable shortening the BCI calibration data collection sessions, we use Cramér-Rao bound analysis on simulated data to demonstrate that a model with structured covariance matrices will achieve the same estimation error as a model with no covariance structure using fewer labeled EEG observations. PMID:27713590
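A hedged sketch of the structure: model the channels × time covariance as kron(S, T) and estimate the two factors with the matrix-normal "flip-flop" iterations (the factors are identifiable only up to a reciprocal scale). The shapes and data are synthetic, and this is a generic estimator rather than the authors' exact procedure; the parameter count at the end shows where the savings come from.

```python
import numpy as np

rng = np.random.default_rng(9)
s, t, n = 8, 50, 200                 # channels, time samples, labeled trials
X = rng.normal(size=(n, s, t))       # zero-mean synthetic EEG epochs

S = np.eye(s)                        # spatial covariance factor
T = np.eye(t)                        # temporal covariance factor
for _ in range(10):                  # flip-flop MLE iterations
    Ti = np.linalg.inv(T)
    S = sum(Xi @ Ti @ Xi.T for Xi in X) / (n * t)
    Si = np.linalg.inv(S)
    T = sum(Xi.T @ Si @ Xi for Xi in X) / (n * s)

# Full covariance of the vectorized epoch would be kron(S, T); compare
# the number of free parameters in the two parametrizations.
full_params = (s * t) * (s * t + 1) // 2
kron_params = s * (s + 1) // 2 + t * (t + 1) // 2
print("parameters: full =", full_params, " Kronecker =", kron_params)
```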
The Effect of Roughness Model on Scattering Properties of Ice Crystals.
NASA Technical Reports Server (NTRS)
Geogdzhayev, Igor V.; Van Diedenhoven, Bastiaan
2016-01-01
We compare stochastic models of microscale surface roughness assuming uniform and Weibull distributions of crystal facet tilt angles to calculate scattering by roughened hexagonal ice crystals using the geometric optics (GO) approximation. Both distributions are governed by similar roughness parameters, while the Weibull model depends on an additional shape parameter. Calculations were performed for two visible wavelengths (864 nm and 410 nm), for roughness values between 0.2 and 0.7, and for Weibull shape parameters between 0 and 1.0, for crystals with aspect ratios of 0.21, 1 and 4.8. For this range of parameters we find that, at a given roughness level, varying the Weibull shape parameter can change the asymmetry parameter by up to about 0.05. The largest effect of the shape parameter variation on the phase function is found in the backscattering region, while the degree of linear polarization is most affected at side-scattering angles. For high roughness, scattering properties calculated using the uniform and Weibull models are in relatively close agreement for a given roughness parameter, especially when a Weibull shape parameter of 0.75 is used. For smaller roughness values, a shape parameter close to unity provides better agreement. Notable differences are observed in the phase function over the scattering angle range from 5° to 20°, where the uniform roughness model produces a plateau while the Weibull model does not.
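A hedged sketch of the two tilt-angle models follows; the exact parameterization used in the paper may differ, and here the roughness parameter is simply treated as a common scale for both distributions.

    import numpy as np

    rng = np.random.default_rng(0)

    def tilt_uniform(sigma, size):
        # uniform model: tilts drawn uniformly up to a roughness-set maximum
        return rng.uniform(0.0, sigma, size)

    def tilt_weibull(sigma, shape, size):
        # Weibull model: same scale, plus the extra shape parameter
        return sigma * rng.weibull(shape, size)

    u = tilt_uniform(0.5, 100_000)
    w = tilt_weibull(0.5, 0.75, 100_000)
    print(u.mean(), w.mean())   # different tilt statistics at equal "roughness"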
SEPARABLE FACTOR ANALYSIS WITH APPLICATIONS TO MORTALITY DATA
Fosdick, Bailey K.; Hoff, Peter D.
2014-01-01
Human mortality data sets can be expressed as multiway data arrays, the dimensions of which correspond to categories by which mortality rates are reported, such as age, sex, country and year. Regression models for such data typically assume an independent error distribution or an error model that allows for dependence along at most one or two dimensions of the data array. However, failing to account for other dependencies can lead to inefficient estimates of regression parameters, inaccurate standard errors and poor predictions. An alternative to assuming independent errors is to allow for dependence along each dimension of the array using a separable covariance model. However, the number of parameters in this model increases rapidly with the dimensions of the array and, for many arrays, maximum likelihood estimates of the covariance parameters do not exist. In this paper, we propose a submodel of the separable covariance model that estimates the covariance matrix for each dimension as having factor analytic structure. This model can be viewed as an extension of factor analysis to array-valued data, as it uses a factor model to estimate the covariance along each dimension of the array. We discuss properties of this model as they relate to ordinary factor analysis, describe maximum likelihood and Bayesian estimation methods, and provide a likelihood ratio testing procedure for selecting the factor model ranks. We apply this methodology to the analysis of data from the Human Mortality Database, and show in a cross-validation experiment how it outperforms simpler methods. Additionally, we use this model to impute mortality rates for countries that have no mortality data for several years. Unlike other approaches, our methodology is able to estimate similarities between the mortality rates of countries, time periods and sexes, and use this information to assist with the imputations.
Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.
2017-01-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate that existing models assuming methods detect organisms independently produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probabilities of detecting martens that were present, but that snow tracking also produced false-positive marten detections that could substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.
Software Review: A program for testing capture-recapture data for closure
Stanley, Thomas R.; Richards, Jon D.
2005-01-01
Capture-recapture methods are widely used to estimate population parameters of free-ranging animals. Closed-population capture-recapture models, which assume there are no additions to or losses from the population over the period of study (i.e., the closure assumption), are preferred for population estimation over the open-population models, which do not assume closure, because heterogeneity in detection probabilities can be accounted for and this improves estimates. In this paper we introduce CloseTest, a new Microsoft® Windows-based program that computes the Otis et al. (1978) and Stanley and Burnham (1999) closure tests for capture-recapture data sets. Information on CloseTest features and where to obtain the program are provided.
An Improved Statistical Solution for Global Seismicity by the HIST-ETAS Approach
NASA Astrophysics Data System (ADS)
Chu, A.; Ogata, Y.; Katsura, K.
2010-12-01
For long-term global seismic model fitting, recent work by Chu et al. (2010) applied the spatial-temporal ETAS model (Ogata 1998) to global data partitioned into tectonic zones based on geophysical characteristics (Bird 2003), and showed substantial improvement in model fit compared with a single overall global model. While the ordinary ETAS model assumes constant parameter values across the whole region analyzed, the hierarchical space-time ETAS model (HIST-ETAS, Ogata 2004) allows regional variation of the parameters for more accurate seismic prediction. As the HIST-ETAS model has been fit to regional data from Japan (Ogata 2010), our work applies the model to describe global seismicity. Employing Akaike's Bayesian Information Criterion (ABIC) as an assessment method, we compare the maximum likelihood results obtained with zone divisions to those obtained with a single overall global model. Location-dependent parameters of the model and Gutenberg-Richter b-values are optimized, and seismological interpretations are discussed.
Axisymmetric magnetic modes of neutron stars having mixed poloidal and toroidal magnetic fields
NASA Astrophysics Data System (ADS)
Lee, Umin
2018-05-01
We calculate axisymmetric magnetic modes of a neutron star possessing a mixed poloidal and toroidal magnetic field, where the toroidal field is assumed to be proportional to a dimensionless parameter ζ0. Here, we assume an isentropic structure for the neutron star and consider no effects of rotation. Ignoring the equilibrium deformation due to the magnetic field, we employ a polytrope of the index n = 1 as the background model for our modal analyses. For the mixed poloidal and toroidal magnetic field with ζ0
Logistic regression for dichotomized counts.
Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W
2016-12-01
Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren.
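A minimal sketch of such a two-part likelihood, assuming a logit link for the zero part and a zero-truncated Poisson for the positive counts (the design matrices and names are illustrative, not the authors' specification):

    import numpy as np
    from scipy.special import gammaln

    def hurdle_loglik(beta, gamma, X, Z, y):
        # X, Z: design matrices for the binary and count parts; y: counts
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # P(Y > 0), logit link
        lam = np.exp(Z @ gamma)               # mean of the untruncated Poisson
        pos = y > 0
        ll = np.sum(np.log1p(-p[~pos]))       # contribution of the zeros
        # zero-truncated Poisson: P(Y=y | Y>0) = Poisson(y; lam) / (1 - exp(-lam))
        yp, lp = y[pos], lam[pos]
        ll += np.sum(np.log(p[pos]) + yp * np.log(lp) - lp
                     - gammaln(yp + 1) - np.log1p(-np.exp(-lp)))
        return ll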
Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries
Lu, Zhiming
2018-01-30
Sensitivity analysis is an important component of many modeling activities in hydrology. Numerous studies have been conducted in calculating various sensitivities. Most of these sensitivity analyses focus on the sensitivity of state variables (e.g., hydraulic head) to parameters representing medium properties, such as hydraulic conductivity, or to prescribed values, such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to shape parameters or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, we derive the equation (and its associated initial and boundary conditions) for the sensitivity of hydraulic head to shape parameters using the continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general or analytically in some simplified cases. Finally, the approach is demonstrated through two examples, and the results compare favorably with those from analytical solutions or from numerical finite difference methods with perturbed model domains, while the numerical shortcomings of the finite difference method are avoided.
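The idea can be conveyed by a one-dimensional toy problem: steady flow between two fixed-head boundaries, where the sensitivity of head to the location L of the downstream boundary is known analytically and can be checked against a perturbed-domain finite difference. This illustrates the concept only; it is not the paper's CSE implementation.

    import numpy as np

    h0, hL, L = 10.0, 5.0, 100.0
    x = np.linspace(0.0, L, 11)

    def head(x, L):                      # steady 1-D flow between fixed heads
        return h0 + (hL - h0) * x / L

    dh_dL_analytic = -(hL - h0) * x / L**2     # differentiate head w.r.t. L
    dL = 1e-4
    dh_dL_fd = (head(x, L + dL) - head(x, L - dL)) / (2 * dL)  # perturbed domain
    print(np.max(np.abs(dh_dL_analytic - dh_dL_fd)))           # ~1e-11, they agree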
Robust image modeling techniques with an image restoration application
NASA Astrophysics Data System (ADS)
Kashyap, Rangasami L.; Eom, Kie-Bum
1988-08-01
A robust parameter-estimation algorithm for a nonsymmetric half-plane (NSHP) autoregressive model, where the driving noise is a mixture of a Gaussian and an outlier process, is presented. The convergence of the estimation algorithm is proved. An algorithm to estimate parameters and original image intensity simultaneously from the impulse-noise-corrupted image, where the model governing the image is not available, is also presented. The robustness of the parameter estimates is demonstrated by simulation. Finally, an algorithm to restore realistic images is presented. The entire image generally does not obey a simple image model, but a small portion (e.g., 8 x 8) of the image is assumed to obey an NSHP model. The original image is divided into windows and the robust estimation algorithm is applied for each window. The restoration algorithm is tested by comparing it to traditional methods on several different images.
The reconstruction of tachyon inflationary potentials
NASA Astrophysics Data System (ADS)
Fei, Qin; Gong, Yungui; Lin, Jiong; Yi, Zhu
2017-08-01
We derive a lower bound on the field excursion for tachyon inflation, which is determined by the amplitude of the scalar perturbation and the number of e-folds before the end of inflation. Using the relations between observables such as ns and r and the slow-roll parameters, we reconstruct three classes of tachyon potentials. The model parameters are determined from the observations before the potentials are reconstructed, and the observations prefer a concave potential. We also discuss the constraints from the reheating phase preceding radiation domination for the three classes of models, assuming the equation of state parameter wre during reheating is constant. Depending on the model parameters and the value of wre, the constraints on Nre and Tre differ. As ns increases, the allowed reheating epoch becomes longer for wre = -1/3, 0 and 1/6, while it becomes shorter for wre = 2/3.
NASA Astrophysics Data System (ADS)
Lerner, Paul; Marchal, Olivier; Lam, Phoebe J.; Anderson, Robert F.; Buesseler, Ken; Charette, Matthew A.; Edwards, R. Lawrence; Hayes, Christopher T.; Huang, Kuo-Fang; Lu, Yanbin; Robinson, Laura F.; Solow, Andrew
2016-07-01
Thorium is a highly particle-reactive element that possesses different measurable radio-isotopes in seawater, with well-constrained production rates and very distinct half-lives. As a result, Th has emerged as a key tracer for the cycling of marine particles and of their chemical constituents, including particulate organic carbon. Here two different versions of a model of Th and particle cycling in the ocean are tested using an unprecedented data set from station GT11-22 of the U.S. GEOTRACES North Atlantic Section: (i) 228,230,234Th activities of dissolved and particulate fractions, (ii) 228Ra activities, (iii) 234,238U activities estimated from salinity data and an assumed 234U/238U ratio, and (iv) particle concentrations, below a depth of 125 m. The two model versions assume a single class of particles but rely on different assumptions about the rate parameters for sorption reactions and particle processes: a first version (V1) assumes vertically uniform parameters (a popular description), whereas the second (V2) does not. Both versions are tested by fitting to the GT11-22 data using generalized nonlinear least squares and by analyzing residuals normalized to the data errors. We find that model V2 displays a significantly better fit to the data than model V1. Thus, the mere allowance of vertical variations in the rate parameters can lead to a significantly better fit to the data, without the need to modify the structure or add any new processes to the model. To understand how the better fit is achieved we consider two parameters, K = k1/(k-1 + β-1) and K/P, where k1 is the adsorption rate constant, k-1 the desorption rate constant, β-1 the remineralization rate constant, and P the particle concentration. We find that the rate constant ratio K is large (≥ 0.2) in the upper 1000 m and decreases to a nearly uniform value of ca. 0.12 below 2000 m, implying that the specific rate at which Th attaches to particles relative to that at which it is released from particles is higher in the upper ocean than in the deep ocean. In contrast, K/P increases with depth below 500 m. The parameters K and K/P display significant positive and negative monotonic relationships with P, respectively, which is collectively consistent with a particle concentration effect.
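The two diagnostic parameters transcribe directly into code; the rate-constant values below are placeholders chosen only to land near the deep-ocean value of ca. 0.12 quoted above, and the units are assumed consistent.

    def K(k1, k_1, beta_1):
        # k1: adsorption, k_1: desorption, beta_1: remineralization rate constants
        return k1 / (k_1 + beta_1)

    def K_over_P(k1, k_1, beta_1, P):
        # P: particle concentration
        return K(k1, k_1, beta_1) / P

    print(K(0.5, 2.0, 2.0))            # 0.125, near the deep value of ca. 0.12
    print(K_over_P(0.5, 2.0, 2.0, 25.0))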
Updating finite element dynamic models using an element-by-element sensitivity methodology
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Hemez, Francois M.
1993-01-01
A sensitivity-based methodology for improving the finite element model of a given structure using test modal data and a few sensors is presented. The proposed method searches for both the location and the sources of the mass and stiffness errors and does not interfere with the theory behind the finite element model while correcting these errors. The updating algorithm is derived from the unconstrained minimization of the squared L2 norms of the modal dynamic residuals via an iterative two-step staggered procedure. At each iteration, the measured mode shapes are first expanded assuming that the model is error free, then the model parameters are corrected assuming that the expanded mode shapes are exact. The numerical algorithm is implemented in an element-by-element fashion and is capable of 'zooming' in on the detected error locations. Several simulation examples which demonstrate the potential of the proposed methodology are discussed.
MIMICKING COUNTERFACTUAL OUTCOMES TO ESTIMATE CAUSAL EFFECTS.
Lok, Judith J
2017-04-01
In observational studies, treatment may be adapted to covariates at several times without a fixed protocol, in continuous time. Treatment influences covariates, which influence treatment, which influences covariates, and so on. Then even time-dependent Cox-models cannot be used to estimate the net treatment effect. Structural nested models have been applied in this setting. Structural nested models are based on counterfactuals: the outcome a person would have had had treatment been withheld after a certain time. Previous work on continuous-time structural nested models assumes that counterfactuals depend deterministically on observed data, while conjecturing that this assumption can be relaxed. This article proves that one can mimic counterfactuals by constructing random variables, solutions to a differential equation, that have the same distribution as the counterfactuals, even given past observed data. These "mimicking" variables can be used to estimate the parameters of structural nested models without assuming the treatment effect to be deterministic.
Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model
NASA Astrophysics Data System (ADS)
Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.
2013-12-01
We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. Posterior distributions show good predictive capabilities for the calibrated model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then proceed to calibrate the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians with prescribed daily variance according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. A set of 4 other parameters, Nitrogen use efficiency (NUE), base rate for maintenance respiration (BR_MR), growth respiration fraction (RG_FRAC), and allocation to plant stem pool (ASTEM) contribute between 5% and 12% to the variance in average NEE, while the rest of the parameters have smaller contributions. The posterior distributions, sampled with a Markov Chain Monte Carlo algorithm, exhibit significant correlations between model parameters. However LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities. The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global sensitivity analysis results.
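The kind of variance attribution reported above (e.g., LEAFALL explaining about 25% of the variance of average NEE) can be illustrated generically. The sketch below estimates a first-order (main-effect) sensitivity index by conditional-mean binning; the stand-in model and parameter roles are invented for the example and bear no relation to the actual ecosystem model.

    import numpy as np

    rng = np.random.default_rng(7)

    def model(theta):                  # stand-in for the model's average NEE
        leafall, nue, br_mr = theta.T
        return 2.0*leafall + 0.8*nue + 0.3*br_mr + 0.1*leafall*nue

    n = 100_000
    X = rng.uniform(0.0, 1.0, (n, 3))  # parameters scaled to [0, 1]
    y = model(X)

    # first-order index of parameter 0: Var(E[Y | X0]) / Var(Y), by binning X0
    bins = np.digitize(X[:, 0], np.linspace(0.0, 1.0, 51))
    cond_means = np.array([y[bins == b].mean() for b in range(1, 51)])
    print(cond_means.var() / y.var())  # fraction of output variance from X0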
A theory for modeling ground-water flow in heterogeneous media
Cooley, Richard L.
2004-01-01
Construction of a ground-water model for a field area is not a straightforward process. Data are virtually never complete or detailed enough to allow substitution into the model equations and direct computation of the results of interest. Formal model calibration through optimization, statistical, and geostatistical methods is being applied to an increasing extent to deal with this problem and provide for quantitative evaluation and uncertainty analysis of the model. However, these approaches are hampered by two pervasive problems: 1) nonlinearity of the solution of the model equations with respect to some of the model (or hydrogeologic) input variables (termed in this report system characteristics) and 2) detailed and generally unknown spatial variability (heterogeneity) of some of the system characteristics such as log hydraulic conductivity, specific storage, recharge and discharge, and boundary conditions. A theory is developed in this report to address these problems. The theory allows construction and analysis of a ground-water model of flow (and, by extension, transport) in heterogeneous media using a small number of lumped or smoothed system characteristics (termed parameters). The theory fully addresses both nonlinearity and heterogeneity in such a way that the parameters are not assumed to be effective values. The ground-water flow system is assumed to be adequately characterized by a set of spatially and temporally distributed discrete values, θ, of the system characteristics. This set contains both small-scale variability that cannot be described in a model and large-scale variability that can. The spatial and temporal variability in θ are accounted for by imagining θ to be generated by a stochastic process wherein θ is normally distributed, although normality is not essential. Because θ has too large a dimension to be estimated using the data normally available, for modeling purposes θ is replaced by a smoothed or lumped approximation yβ (where y is a spatial and temporal interpolation matrix). Set yβ has the same form as the expected value of θ, yβ̄, where β̄ is the set of drift parameters of the stochastic process; β is a best-fit vector to θ. A model function f(θ), such as a computed hydraulic head or flux, is assumed to accurately represent an actual field quantity, but the same function written using yβ, f(yβ), contains error from lumping or smoothing of θ using yβ. Thus, the replacement of θ by yβ yields nonzero-mean model errors of the form E(f(θ) - f(yβ)) throughout the model and covariances between model errors at points throughout the model. These nonzero means and covariances are evaluated to third- and fifth-order accuracy, respectively, using Taylor series expansions. They can have a significant effect on the construction and interpretation of a model that is calibrated by estimating β. Vector β is estimated as β̂ using weighted nonlinear least squares techniques to fit a set of model functions f(yβ̂) to a corresponding set of observations of f(θ), Y. These observations are assumed to be corrupted by zero-mean, normally distributed observation errors, although, as for θ, normality is not essential. An analytical approximation of the nonlinear least squares solution is obtained using Taylor series expansions and perturbation techniques that assume model and observation errors to be small. This solution is used to evaluate biases and other results to second-order accuracy in the errors.
The correct weight matrix to use in the analysis is shown to be the inverse of the second-moment matrix E[(Y - f(yβ))(Y - f(yβ))'], but the weight matrix is assumed to be arbitrary in most developments. The best diagonal approximation is the inverse of the matrix of diagonal elements of E[(Y - f(yβ))(Y - f(yβ))'], and a method of estimating this diagonal matrix when it is unknown is developed using a special objective function to compute β̂. When considered to be an estimate of f
Analytical Solution for Reactive Solute Transport Considering Incomplete Mixing
NASA Astrophysics Data System (ADS)
Bellin, A.; Chiogna, G.
2013-12-01
The laboratory experiments of Gramling et al. (2002) showed that incomplete mixing at the pore scale exerts a significant impact on transport of reactive solutes and that assuming complete mixing leads to overestimation of product concentration in bimolecular reactions. We consider here the family of equilibrium reactions for which the concentrations of the reactants and the product can be expressed as functions of the mixing ratio, the concentration of a fictitious nonreactive solute. For this type of reaction we propose, in agreement with previous studies, to model the effect of incomplete mixing at scales smaller than the Darcy scale by assuming that the mixing ratio is distributed within an REV according to a Beta distribution. We compute the parameters of the Beta model by imposing that the mean concentration equal the value that the concentration assumes at the continuum Darcy scale, while the variance decays with time as a power law. We show that our model reproduces the concentration profiles of the reaction product measured in the Gramling et al. (2002) experiments using the transport parameters obtained from conservative experiments and instantaneous reaction kinetics. The results are obtained by applying analytical solutions for both conservative and reactive solute transport, thereby providing a method for handling the effect of incomplete mixing on multispecies reactive solute transport that is simpler than other previously developed methods. Gramling, C. M., C. F. Harvey, and L. C. Meigs (2002), Reactive transport in porous media: A comparison of model prediction with laboratory visualization, Environ. Sci. Technol., 36(11), 2508-2514.
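A compact numerical sketch of the Beta-mixing idea follows. The mapping from mixing ratio to product concentration assumes an instantaneous bimolecular reaction A + B -> C with equal inlet concentrations, a simplification of the general equilibrium case treated above; the variance values are arbitrary (and must stay below mean*(1 - mean) for a valid Beta).

    import numpy as np
    from scipy import stats
    from scipy.integrate import trapezoid

    def beta_params(mean, var):
        # moment-match a Beta(a, b) to the desired mean and variance
        nu = mean * (1.0 - mean) / var - 1.0
        return mean * nu, (1.0 - mean) * nu

    def product_conc(mean, var, c0=1.0):
        a, b = beta_params(mean, var)
        x = np.linspace(1e-6, 1.0 - 1e-6, 2001)
        cp = c0 * np.minimum(x, 1.0 - x)        # product at mixing ratio x
        return trapezoid(cp * stats.beta.pdf(x, a, b), x)

    # near-complete mixing (tiny variance) vs. incomplete mixing, same mean:
    print(product_conc(0.5, 1e-4), product_conc(0.5, 0.05))  # ~0.50 vs ~0.31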
Pellejero-Ibanez, Marco; Chuang, Chia-Hsun; Rubino-Martin, J. A.; ...
2016-03-28
Here, we develop a new methodology called double-probe analysis with the aim of minimizing informative priors in the estimation of cosmological parameters. We extract dark-energy-model-independent cosmological constraints from the joint data sets of the Baryon Oscillation Spectroscopic Survey (BOSS) galaxy sample and Planck cosmic microwave background (CMB) measurements. We measure the mean values and covariance matrix of {R, la, Ωbh², ns, log(As), Ωk, H(z), DA(z), f(z)σ8(z)}, which give an efficient summary of the Planck data and two-point statistics from the BOSS galaxy sample, where R = √(ΩmH0²) r(z*) and la = πr(z*)/rs(z*); z* is the redshift at the last scattering surface, and r(z*) and rs(z*) denote our comoving distance to z* and the sound horizon at z*, respectively. The advantage of this method is that we do not need to put informative priors on the cosmological parameters that galaxy clustering is not able to constrain well, i.e. Ωbh² and ns. Using our double-probe results, we obtain Ωm = 0.304 ± 0.009, H0 = 68.2 ± 0.7, and σ8 = 0.806 ± 0.014 assuming ΛCDM; and Ωk = 0.002 ± 0.003 and w = -1.00 ± 0.07 assuming owCDM. The results show no tension with the flat ΛCDM cosmological paradigm. By comparing with full-likelihood analyses with fixed dark energy models, we demonstrate that the double-probe method provides robust cosmological parameter constraints which can be conveniently used to study dark energy models. We extend our study to measure the sum of neutrino masses and obtain Σmν < 0.10/0.22 (68%/95%) assuming ΛCDM and Σmν < 0.26/0.52 (68%/95%) assuming wCDM. This paper is part of a set that analyses the final galaxy clustering dataset from BOSS.
Supervised nonlinear spectral unmixing using a postnonlinear mixing model for hyperspectral imagery.
Altmann, Yoann; Halimi, Abderrahim; Dobigeon, Nicolas; Tourneret, Jean-Yves
2012-06-01
This paper presents a nonlinear mixing model for hyperspectral image unmixing. The proposed model assumes that the pixel reflectances are nonlinear functions of pure spectral components contaminated by an additive white Gaussian noise. These nonlinear functions are approximated using polynomial functions leading to a polynomial postnonlinear mixing model. A Bayesian algorithm and optimization methods are proposed to estimate the parameters involved in the model. The performance of the unmixing strategies is evaluated by simulations conducted on synthetic and real data.
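A generative sketch in the spirit of the polynomial postnonlinear model described above: a linear mixture pushed through a second-order polynomial plus Gaussian noise. The band and endmember counts, the nonlinearity amplitude and the noise level are assumed for illustration.

    import numpy as np

    rng = np.random.default_rng(3)
    L, R = 50, 3                       # spectral bands, endmembers (assumed)
    M = rng.random((L, R))             # endmember spectra as columns
    a = rng.dirichlet(np.ones(R))      # abundances: non-negative, sum to one
    b = 0.3                            # nonlinearity amplitude (assumed)

    lin = M @ a                        # linear mixing part
    y = lin + b * lin**2 + 0.01 * rng.standard_normal(L)   # observed pixel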
Urdapilleta, E; Bellotti, M; Bonetto, F J
2006-10-01
In this paper we present a model to describe the electrical properties of a confluent cell monolayer cultured on gold microelectrodes, for use with the electric cell-substrate impedance sensing technique. This model was developed from microscopic considerations (distributed effects) and by assuming that the monolayer is an element with mean electrical characteristics (specific lumped parameters). No assumptions were made about cell morphology. The model has only three adjustable parameters. This model and other models currently used for data analysis are compared with data we obtained from electrical measurements of confluent monolayers of Madin-Darby Canine Kidney cells. One important parameter is the cell-substrate height, and we found that estimates of this quantity differ strongly depending on the model used for the analysis. We analyze the origin of the discrepancies, concluding that the estimates from the different models can be considered as limits for the true value of the cell-substrate height.
Estimated effects of temperature on secondary organic aerosol concentrations.
Sheehan, P E; Bowman, F M
2001-06-01
The temperature dependence of secondary organic aerosol (SOA) concentrations is explored using an absorptive-partitioning model under a variety of simplified atmospheric conditions. Experimentally determined partitioning parameters for high-yield aromatics are used. Variation of vapor pressures with temperature is assumed to be the main source of temperature effects. Known semivolatile products are used to define a modeling range of vaporization enthalpy of 10-25 kcal/mol. The effect of diurnal temperature variations on model predictions for various assumed vaporization enthalpies, precursor emission rates, and primary organic concentrations is explored. Results show that temperature is likely to have a significant influence on SOA partitioning and resulting SOA concentrations. A 10 °C decrease in temperature is estimated to increase SOA yields by 20-150%, depending on the assumed vaporization enthalpy. In model simulations, high daytime temperatures tend to reduce SOA concentrations by 16-24%, while cooler nighttime temperatures lead to a 22-34% increase, compared to constant-temperature conditions. Results suggest that currently available constant-temperature partitioning coefficients do not adequately represent atmospheric SOA partitioning behavior. Air quality models neglecting the temperature dependence of partitioning are expected to underpredict peak SOA concentrations as well as mistime their occurrence.
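The assumed mechanism (vapor pressures varying with temperature) corresponds to the standard Clausius-Clapeyron-type correction of an absorptive partitioning coefficient, sketched below with an assumed vaporization enthalpy near the middle of the quoted range; the reference value and temperatures are illustrative.

    import numpy as np

    R_GAS = 8.314e-3   # kJ/(mol K)

    def Kp(T, Kp_ref, T_ref=298.15, dHvap=75.0):
        # dHvap in kJ/mol (~18 kcal/mol); Kp_ref is the coefficient at T_ref
        return Kp_ref * (T / T_ref) * np.exp((dHvap / R_GAS) * (1.0/T - 1.0/T_ref))

    print(Kp(288.15, 1.0) / Kp(298.15, 1.0))   # ~2.8x larger when 10 K cooler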
NASA Astrophysics Data System (ADS)
Chiogna, Gabriele; Bellin, Alberto
2013-05-01
The laboratory experiments of Gramling et al. (2002) showed that incomplete mixing at the pore scale exerts a significant impact on transport of reactive solutes and that assuming complete mixing leads to overestimation of product concentration in bimolecular reactions. Subsequently, several attempts have been made to model this experiment, either considering spatial segregation of the reactants, non-Fickian transport via a Continuous Time Random Walk (CTRW), or an effective upscaled time-dependent kinetic reaction term. Previous analyses of these experimental results showed that, at the Darcy scale, conservative solute transport is well described by a standard advection-dispersion equation, which assumes complete mixing at the pore scale. However, reactive transport is significantly affected by incomplete mixing at smaller scales, i.e., within a reference elementary volume (REV). We consider here the family of equilibrium reactions for which the concentrations of the reactants and the product can be expressed as functions of the mixing ratio, the concentration of a fictitious nonreactive solute. For this type of reaction we propose, in agreement with previous studies, to model the effect of incomplete mixing at scales smaller than the Darcy scale by assuming that the mixing ratio is distributed within an REV according to a Beta distribution. We compute the parameters of the Beta model by imposing that the mean concentration equal the value that the concentration assumes at the continuum Darcy scale, while the variance decays with time as a power law. We show that our model reproduces the concentration profiles of the reaction product measured in the Gramling et al. (2002) experiments using the transport parameters obtained from conservative experiments and instantaneous reaction kinetics. The results are obtained by applying analytical solutions for both conservative and reactive solute transport, thereby providing a method for handling the effect of incomplete mixing on multispecies reactive solute transport that is simpler than other previously developed methods.
Drake, Andrew W; Klakamp, Scott L
2007-01-10
A new 4-parameter nonlinear equation based on the standard multiple independent binding site (MIBS) model is presented for fitting cell-based ligand titration data in order to calculate the ligand/cell-receptor equilibrium dissociation constant and the number of receptors per cell. The most commonly used linear (Scatchard plot) or nonlinear 2-parameter model (a single binding site model found in commercial programs like Prism®) used for analysis of ligand/receptor binding data assumes that only the KD influences the shape of the titration curve. We demonstrate using simulated data sets that, depending upon the cell-surface receptor expression level, the number of cells titrated, and the magnitude of the KD being measured, this assumption of always being under KD-controlled conditions can be erroneous and can lead to unreliable estimates of the binding parameters. We also compare and contrast the fitting of simulated data sets to the commonly used cell-based binding equation versus our more rigorous 4-parameter nonlinear MIBS model. It is shown through these simulations that the new 4-parameter MIBS model, when used for cell-based titrations under optimal conditions, yields highly accurate estimates of all binding parameters and hence should be the preferred model for fitting cell-based nonlinear titration data.
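The contrast between the two fitting models comes down to receptor depletion. A sketch of the exact quadratic mass-balance solution versus the simple one-site hyperbola (this is a generic illustration with assumed concentrations, not the authors' 4-parameter equation):

    import numpy as np

    def bound_exact(L0, R0, Kd):
        # equilibrium complex from mass balance; valid for any R0 (same units)
        s = L0 + R0 + Kd
        return (s - np.sqrt(s * s - 4.0 * L0 * R0)) / 2.0

    def bound_hyperbola(L0, R0, Kd):
        return R0 * L0 / (Kd + L0)        # assumes R0 << Kd (KD-controlled)

    L0 = np.logspace(-12, -6, 7)          # titrated ligand, molar
    print(bound_exact(L0, R0=1e-9, Kd=1e-10))      # receptor-controlled regime
    print(bound_hyperbola(L0, R0=1e-9, Kd=1e-10))  # noticeably different here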
Steve P. Verrill; Frank C. Owens; David E. Kretschmann; Rubin Shmulsky
2017-01-01
It is common practice to assume that a two-parameter Weibull probability distribution is suitable for modeling lumber properties. Verrill and co-workers demonstrated theoretically and empirically that the modulus of rupture (MOR) distribution of visually graded or machine stress rated (MSR) lumber is not distributed as a Weibull. Instead, the tails of the MOR...
Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach
NASA Astrophysics Data System (ADS)
Billman, Caleb; Gonthier, P. L.; Harding, A. K.
2012-01-01
We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters such as initial period and magnetic field, and radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.
Tiktak, A; Boesten, J J T I; van der Linden, A M A; Vanclooster, M
2006-01-01
To support EU policy, indicators of pesticide leaching at the European level are required. For this reason, a metamodel of the spatially distributed European pesticide leaching model EuroPEARL was developed. EuroPEARL considers transient flow and solute transport and assumes Freundlich adsorption, first-order degradation and passive plant uptake of pesticides. Physical parameters are depth dependent, while (bio)chemical parameters are depth, temperature, and moisture dependent. The metamodel is based on an analytical expression that describes the mass fraction of pesticide leached. The metamodel ignores vertical parameter variations and assumes steady flow. The calibration dataset was generated with EuroPEARL and consisted of approximately 60,000 simulations done for 56 pesticides with different half-lives and partitioning coefficients. The target variable was the 80th percentile of the annual average leaching concentration at 1-m depth from a time series of 20 yr. The metamodel explains over 90% of the variation of the original model with only four independent spatial attributes. These parameters are available in European soil and climate databases, so the calibrated metamodel could be applied to generate maps of the predicted leaching concentration in the European Union. Maps generated with the metamodel showed good similarity with the maps obtained with EuroPEARL, which was confirmed by means of quantitative performance indicators.
Thermal Evolution of Charon and the Major Satellites of Uranus: Constraints on Early Differentiation
NASA Astrophysics Data System (ADS)
Spohn, T.; Multhaup, K.
2007-12-01
A thermal history model developed for medium-sized icy satellites containing silicate rock at low volume fractions is applied to Charon and the satellites of Uranus: Ariel, Umbriel, Titania, Oberon, and Miranda. The model assumes homogeneously accreted satellites. To calculate the initial temperature profile we assume that infalling planetesimals deposit a fraction h of their kinetic energy as heat at the instantaneous surface of the growing satellites. The parameter h is varied between models. The model continuously checks for convectively unstable shells in the interior by updating the temperature profile and calculating the Rayleigh number and the temperature-dependent viscosity. The viscosity parameter values are taken to be those of ice I, although the satellites under consideration likely contain admixtures of lighter constituents. Their effects, and those of rock, on the viscosity are discussed. Convective heat transport is calculated assuming the stagnant-lid model for strongly temperature-dependent viscosity. In convectively stable regions heat transfer is by conduction with a temperature-dependent thermal conductivity. Thermal evolution calculations considering radiogenic heating by the long-lived radiogenic isotopes of U, Th, and K suggest that Ariel, Umbriel, Titania, Oberon, and Charon may have started to differentiate after a few hundred million years of evolution. With short-lived isotopes -- if present in sizeable concentrations -- this time will move earlier. Results for Miranda -- the smallest satellite of Uranus -- indicate that it never convected or differentiated if heated by the long-lived isotopes alone. Miranda's interior temperature was found to be not even close to the melting temperatures of reasonable mixtures of water and ammonia. This finding is in contrast to its heavily modified surface and supports theories that propose alternative heating mechanisms, such as the decay of short-lived isotopes or early tidal heating.
NASA Astrophysics Data System (ADS)
Pellejero-Ibanez, Marcos; Chuang, Chia-Hsun; Rubiño-Martín, J. A.; Cuesta, Antonio J.; Wang, Yuting; Zhao, Gongbo; Ross, Ashley J.; Rodríguez-Torres, Sergio; Prada, Francisco; Slosar, Anže; Vazquez, Jose A.; Alam, Shadab; Beutler, Florian; Eisenstein, Daniel J.; Gil-Marín, Héctor; Grieb, Jan Niklas; Ho, Shirley; Kitaura, Francisco-Shu; Percival, Will J.; Rossi, Graziano; Salazar-Albornoz, Salvador; Samushia, Lado; Sánchez, Ariel G.; Satpathy, Siddharth; Seo, Hee-Jong; Tinker, Jeremy L.; Tojeiro, Rita; Vargas-Magaña, Mariana; Brownstein, Joel R.; Nichol, Robert C.; Olmstead, Matthew D.
2017-07-01
We develop a new computationally efficient methodology called double-probe analysis with the aim of minimizing informative priors (those coming from extra probes) in the estimation of cosmological parameters. Using our new methodology, we extract the dark energy model-independent cosmological constraints from the joint data sets of the Baryon Oscillation Spectroscopic Survey (BOSS) galaxy sample and Planck cosmic microwave background (CMB) measurements. We measure the mean values and covariance matrix of {R, la, Ωbh², ns, log(As), Ωk, H(z), DA(z), f(z)σ8(z)}, which give an efficient summary of the Planck data and two-point statistics from the BOSS galaxy sample. The CMB shift parameters are R = √(ΩmH0²) r(z*) and la = πr(z*)/rs(z*), where z* is the redshift at the last scattering surface, and r(z*) and rs(z*) denote our comoving distance to z* and the sound horizon at z*, respectively; Ωb is the baryon fraction at z = 0. This approximate methodology guarantees that we will not need to put informative priors on the cosmological parameters that galaxy clustering is unable to constrain, i.e. Ωbh² and ns. The main advantage is that the computational time required for extracting these parameters is decreased by a factor of 60 with respect to exact full-likelihood analyses. The results obtained show no tension with the flat Λ cold dark matter (ΛCDM) cosmological paradigm. By comparing with the full-likelihood exact analysis with fixed dark energy models, on one hand we demonstrate that the double-probe method provides robust cosmological parameter constraints that can be conveniently used to study dark energy models, and on the other hand we provide a reliable set of measurements assuming dark energy models to be used, for example, in distance estimations. We extend our study to measure the sum of the neutrino masses using different methodologies, including double-probe analysis (introduced in this study), full-likelihood analysis and single-probe analysis. From full-likelihood analysis, we obtain Σmν < 0.12 (68 per cent) assuming ΛCDM and Σmν < 0.20 (68 per cent) assuming owCDM. We also find that there is degeneracy between observational systematics and neutrino masses, which suggests that one should take great care when estimating these parameters in the case of not having control over the systematics of a given sample.
On selecting a prior for the precision parameter of Dirichlet process mixture models
Dorazio, R.M.
2009-01-01
In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
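When reasoning about priors for the precision parameter, the standard link between α and the expected number of clusters among n observations is useful; a direct transcription follows (the α values are arbitrary examples).

    import numpy as np

    def expected_clusters(alpha, n):
        # E[number of clusters] = sum_{i=1..n} alpha / (alpha + i - 1)
        i = np.arange(n)
        return np.sum(alpha / (alpha + i))

    for alpha in (0.1, 1.0, 5.0):
        print(alpha, expected_clusters(alpha, n=100))  # larger alpha, more clusters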
NASA Astrophysics Data System (ADS)
Kari, Leif
2017-09-01
The constitutive equations of chemically and physically ageing rubber in the audible frequency range are modelled as functions of ageing temperature, ageing time, actual temperature, time and frequency. The constitutive equations are derived by assuming a nearly incompressible material with elastic spherical response and viscoelastic deviatoric response, using a Mittag-Leffler relaxation function of fractional-derivative type, the main advantage being the small number of material parameters needed to successfully fit experimental data over a broad frequency range. The material is furthermore assumed essentially entropic and thermo-mechanically simple, while using a modified Williams-Landel-Ferry shift function to take into account temperature dependence and physical ageing, with fractional free volume evolution modelled by a nonlinear, fractional differential equation with relaxation time identical to that of the stress response and related to the fractional free volume by the Doolittle equation. Physical ageing is a reversible ageing process, including trapping and freeing of polymer chain ends, polymer chain reorganizations and free volume changes. In contrast, chemical ageing is an irreversible process, mainly attributed to oxygen reacting with the polymer network, either damaging the network by scission or forming new polymer links. The chemical ageing is modelled by inner variables that are determined by inner fractional evolution equations. Finally, the model parameters are fitted to measurement results for natural rubber over a broad audible frequency range, and various parameter studies are performed, including comparison with results obtained by ordinary, non-fractional ageing evolution differential equations.
Bayesian Analysis of Non-Gaussian Long-Range Dependent Processes
NASA Astrophysics Data System (ADS)
Graves, T.; Franzke, C.; Gramacy, R. B.; Watkins, N. W.
2012-12-01
Recent studies have strongly suggested that surface temperatures exhibit long-range dependence (LRD). The presence of LRD would hamper the identification of deterministic trends and the quantification of their significance. It is well established that LRD processes exhibit stochastic trends over rather long periods of time. Thus, accurate methods for discriminating between physical processes that possess long memory and those that do not are an important adjunct to climate modeling. We have used Markov Chain Monte Carlo algorithms to perform a Bayesian analysis of Auto-Regressive Fractionally-Integrated Moving-Average (ARFIMA) processes, which are capable of modeling LRD. Our principal aim is to obtain inference about the long memory parameter, d, with secondary interest in the scale and location parameters. We have developed a reversible-jump method enabling us to integrate over different model forms for the short memory component. We initially assume Gaussianity, and have tested the method on both synthetic and physical time series, such as the Central England Temperature. Many physical processes, for example the Faraday time series from Antarctica, are highly non-Gaussian. We have therefore extended this work by weakening the Gaussianity assumption. Specifically, we assume a symmetric α-stable distribution for the innovations. Such processes provide good, flexible, initial models for non-Gaussian processes with long memory. We will present a study of the dependence of the posterior variance σd of the memory parameter d on the length of the time series considered. This will be compared with equivalent error diagnostics for other measures of d.
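For readers who want to experiment, an ARFIMA(0, d, 0) series can be simulated by truncating the moving-average expansion of (1 - B)^(-d); replacing the Gaussian innovations with symmetric α-stable draws would give the non-Gaussian variant discussed above. A minimal sketch (truncation length assumed):

    import numpy as np

    def arfima_0d0(n, d, rng, trunc=5000):
        # psi_k = Gamma(k + d) / (Gamma(d) * k!) via the recursion psi_k = psi_{k-1}(k-1+d)/k
        k = np.arange(1, trunc)
        psi = np.concatenate(([1.0], np.cumprod((k - 1 + d) / k)))
        eps = rng.standard_normal(n + trunc)   # Gaussian innovations
        return np.convolve(eps, psi, mode="full")[trunc:trunc + n]

    rng = np.random.default_rng(42)
    x = arfima_0d0(4096, d=0.3, rng=rng)         # 0 < d < 0.5: stationary LRD
    print(np.corrcoef(x[:-100], x[100:])[0, 1])  # slowly decaying autocorrelation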
Chowell, Gerardo; Viboud, Cécile
2016-10-01
The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing models that capture the baseline transmission characteristics in order to generate reliable epidemic forecasts. Improved models for epidemic forecasting could be achieved by identifying signature features of epidemic growth, which could inform the design of models of disease spread and reveal important characteristics of the transmission process. In particular, it is often taken for granted that the early growth phase of different growth processes in nature follows exponential growth dynamics. In the context of infectious disease spread, this assumption is often convenient to describe a transmission process with mass-action kinetics using differential equations and to generate analytic expressions and estimates of the reproduction number. In this article, we carry out a simulation study to illustrate the impact of incorrectly assuming an exponential-growth model to characterize the early phase (e.g., 3-5 disease generation intervals) of an infectious disease outbreak that follows near-exponential growth dynamics. Specifically, we assess the impact on: (1) goodness of fit, (2) bias in the growth parameter, and (3) short-term epidemic forecasts. Designing transmission models and statistical approaches that more flexibly capture the profile of epidemic growth could lead to enhanced model fit, improved estimates of key transmission parameters, and more realistic epidemic forecasts.
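A minimal illustration of the contrast: the generalized-growth model dC/dt = r C^p with 0 < p < 1 is one common way to represent near-exponential early growth (parameter values assumed), and it quickly diverges from pure exponential growth over a few generation intervals.

    import numpy as np

    def ggm_cumulative(t, r, p, c0=1.0):
        # closed-form solution of dC/dt = r C^p for p < 1
        return ((1.0 - p) * r * t + c0**(1.0 - p)) ** (1.0 / (1.0 - p))

    t = np.arange(0.0, 30.0, 5.0)                # ~ a few generation intervals
    print(ggm_cumulative(t, r=0.5, p=0.8))       # polynomial-like growth
    print(np.exp(0.5 * t))                       # exponential overshoots quickly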
THM modelling of hydrothermal circulation in deep geothermal reservoirs
NASA Astrophysics Data System (ADS)
Magnenet, Vincent; Fond, Christophe; Schmittbuhl, Jean; Genter, Albert
2014-05-01
Numerous models have been developed for describing deep geothermal reservoirs. Using the open-source finite element software ASTER developed by EDF R&D, we carried out 2D simulations of the hydrothermal circulation in the deep geothermal reservoir of Soultz-sous-Forêts. The model is based on the effective description of Thermo-Hydro-Mechanical (THM) coupling at large scale. Such a model has a fourfold interest: a) the physical integration of laboratory measurements (rock physics), well logging, well head parameters, geological description, and geophysics field measurements; b) the construction of a mechanically based direct model for geophysical inversion: fluid flow, fluid pressure, temperature profile, seismicity monitoring, deformation of the ground surface (InSAR/GPS) related to reservoir modification, gravity or electromagnetic geophysical measurements; c) the sensitivity analysis of the parameters involved in the hydrothermal circulation and identification of the dominant ones; d) the development of a decision tool for drilling planning, stimulation and exploitation. In our model, we introduced extended Thermo-Hydro-Mechanical coupling including not only poro-elastic behavior but also the sensitivity of the fluid density, viscosity, and heat capacity to temperature and pressure. The behavior of solid rock grains is assumed to be thermo-elastic and linear. Hydraulic and thermal phenomena are governed by the Darcy and Fourier laws, respectively, and most rock properties (like the specific heat at constant stress csσ(T), or the thermal conductivity Λ(T,φ)) are assumed to depend on the temperature T and/or porosity φ. The radioactivity of the rocks is taken into account through a heat source term in the balance equation of enthalpy. To characterize as precisely as possible the convective movement of water and the associated heat flow, the water properties (specific mass ρw(T,pw), specific enthalpy hmw(T,pw), dynamic viscosity μw(T), thermal dilation αw(T), and specific heat cwp(T)) are assumed to depend on pressure and/or temperature. The entire set of material properties is extracted from references dealing with investigations at Soultz-sous-Forêts where these exist. The reservoir is described at large scale (about 10 km in width and 5 km in height) and the medium is assumed to be homogeneous, porous, and saturated with a single-phase fluid (considering homogenized effective porous and/or fractured layers and neglecting the details of the fracture networks). We performed a feasibility study and show that a large-scale convection regime is possible using realistic parameters. The size of the convection cells (2.8 km) is shown to be compatible with field observations.
Dependence of tropical cyclone development on coriolis parameter: A theoretical model
NASA Astrophysics Data System (ADS)
Deng, Liyuan; Li, Tim; Bi, Mingyu; Liu, Jia; Peng, Melinda
2018-03-01
A simple theoretical model was formulated to investigate how tropical cyclone (TC) intensification depends on the Coriolis parameter. The theoretical framework includes a two-layer free atmosphere and an Ekman boundary layer at the bottom. The linkage between the free atmosphere and the boundary layer is through the Ekman pumping vertical velocity, which is proportional to the vorticity at the top of the boundary layer. The closure of this linear system assumes a simple relationship between the free-atmosphere diabatic heating and the boundary layer moisture convergence. Under a set of realistic atmospheric parameter values, the model suggests that the most preferred latitude for TC development is around 5° when other factors are not considered. The theoretical result is confirmed by high-resolution WRF model simulations in a zero-mean-flow and constant-SST environment on an f-plane with different Coriolis parameters. Given an initially balanced weak vortex, the TC-like vortex intensifies most rapidly at the reference latitude of 5°. Thus, the WRF model simulations confirm the f-dependent characteristics of the TC intensification rate suggested by the theoretical model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landkamer, Lee L.; Harvey, Ronald W.; Scheibe, Timothy D.
A new colloid transport model is introduced that is conceptually simple but captures the essential features of the complicated attachment and detachment behavior of colloids when conditions of secondary-minimum attachment exist. This model eliminates the empirical concept of collision efficiency; the attachment rate is computed directly from colloid filtration theory. Also, a new paradigm for colloid detachment based on colloid population heterogeneity is introduced. Assuming the dispersion coefficient can be estimated from tracer behavior, this model has only two fitting parameters: (1) the fraction of colloids that attach irreversibly and (2) the rate at which reversibly attached colloids leave the surface. These two parameters were correlated to physical parameters that control colloid transport, such as the depth of the secondary minimum and the pore water velocity. Given this correlation, the model serves as a heuristic tool for exploring the influence of physical parameters such as surface potential and fluid velocity on colloid transport. This model can be extended to heterogeneous systems characterized by both primary- and secondary-minimum deposition by simply increasing the fraction of colloids that attach irreversibly.
NASA Astrophysics Data System (ADS)
Latifi, Koorosh; Kaviani, Ayoub; Rümpker, Georg; Mahmoodabadi, Meysam; Ghassemi, Mohammad R.; Sadidkhouy, Ahmad
2018-05-01
The contribution of crustal anisotropy to the observation of SKS splitting parameters is often assumed to be negligible. Based on synthetic models, we show that the impact of crustal anisotropy on the SKS splitting parameters can be significant even in the case of moderate to weak anisotropy within the crust. In addition, real-data examples reveal that significant azimuthal variations in SKS splitting parameters can be caused by crustal anisotropy. Ps-splitting analysis of receiver functions (RF) can be used to infer the anisotropic parameters of the crust. These crustal splitting parameters may then be used to constrain the inversion of SKS apparent splitting parameters to infer the anisotropy of the mantle. The observation of SKS splitting for different azimuths is indispensable to verify the presence or absence of multiple layers of anisotropy beneath a seismic station. By combining SKS and RF observations in different azimuths at a station, we are able to uniquely decipher the anisotropic parameters of crust and upper mantle.
Revisiting gamma-ray burst afterglows with time-dependent parameters
NASA Astrophysics Data System (ADS)
Yang, Chao; Zou, Yuan-Chuan; Chen, Wei; Liao, Bin; Lei, Wei-Hua; Liu, Yu
2018-02-01
The relativistic external shock model of gamma-ray burst (GRB) afterglows has been established with five free parameters, i.e., the total kinetic energy E, the equipartition parameters for electrons ε_e and for the magnetic field ε_B, the number density of the environment n and the index of the power-law distribution of shocked electrons p. Many modified models have been constructed to account for the variety of GRB afterglows, such as the wind-medium environment, obtained by letting n change with radius, or the energy injection model, obtained by letting the kinetic energy change with time. In this paper, by assuming that all four parameters (except p) change with time, we obtain a set of formulas for the dynamics and radiation, which can be used as a reference for modeling GRB afterglows. Some interesting results are obtained. For example, in some spectral segments, the radiated flux density does not depend on the number density or the profile of the environment. As an application, through modeling the afterglow of GRB 060607A, we find that it can be interpreted in the framework of the time-dependent parameter model within a reasonable range.
Analysis of dark energy models in DGP braneworld
NASA Astrophysics Data System (ADS)
Jawad, Abdul
2015-12-01
In this paper, we reconsider the accelerated expansion phenomenon in the DGP braneworld scenario, whose positive branch (ɛ = +1) leads to an accelerated universe without a cosmological constant or any other form of dark energy but is no longer an attractive model. We therefore assume the DGP braneworld scenario with ɛ = -1, together with interacting Hubble-horizon and event-horizon pilgrim dark energy models. We extract various cosmological parameters in this scenario and display our results with respect to the redshift parameter. It is found that the ranges of the Hubble parameter coincide with observational results. The equation-of-state parameter lies within the suggested ranges of different observational schemes. The squared speed of sound shows stability for all present models in the DGP braneworld scenario. The ω_ϑ-ω'_ϑ planes lie in the range (ω_ϑ = -1.13^{+0.24}_{-0.25}, ω'_ϑ < 1.32) obtained through different observational schemes. It is remarked that our results for the various cosmological parameters show consistency with different observational data such as Planck, WP, BAO, H0 and SNLS.
Chimera regimes in a ring of oscillators with local nonlinear interaction
NASA Astrophysics Data System (ADS)
Shepelev, Igor A.; Zakharova, Anna; Vadivasova, Tatiana E.
2017-03-01
One of the important problems concerning chimera states is the conditions of their existence and stability. Until now, it was assumed that chimeras could arise only in ensembles with a nonlocal character of interactions. However, this assumption is not exactly right. In some special cases chimeras can be realized for a local type of coupling [1-3]. We propose a simple model of an ensemble with local coupling in which chimeras are realized. This model is a ring of linear oscillators with local nonlinear unidirectional interaction. Chimera structures in the ring are found using computer simulations for a wide range of parameter values. A diagram of the regimes in the plane of control parameters is plotted, and scenarios of chimera destruction are studied as the parameters are changed.
On the Reproduction Number of a Gut Microbiota Model.
Barril, Carles; Calsina, Àngel; Ripoll, Jordi
2017-11-01
A spatially structured linear model of the growth of intestinal bacteria is analysed from two generational viewpoints. Firstly, the basic reproduction number associated with the bacterial population, i.e. the expected number of daughter cells per bacterium, is given explicitly in terms of biological parameters. Secondly, an alternative quantity is introduced based on the number of bacteria produced within the intestine by one bacterium originally in the external media. The latter depends on the parameters in a simpler way and provides more biological insight than the standard reproduction number, allowing the design of experimental procedures. Both quantities coincide and are equal to one at the extinction threshold, below which the bacterial population becomes extinct. Optimal values of both reproduction numbers are derived assuming parameter trade-offs.
Solid phase extraction of copper(II) by fixed bed procedure on cation exchange complexing resins.
Pesavento, Maria; Sturini, Michela; D'Agostino, Girolamo; Biesuz, Raffaela
2010-02-19
The efficiency of metal ion recovery by solid phase extraction (SPE) on complexing resin columns is predicted by a simple model based on two parameters reflecting the sorption equilibria and kinetics of the metal ion on the considered resin. The parameter related to the adsorption equilibria was evaluated by the Gibbs-Donnan model, and that related to the kinetics by assuming that ion exchange is the rate-determining step of adsorption. The predicted parameters make it possible to evaluate the breakthrough volume of the considered metal ion, Cu(II), on different kinds of complexing resins and under different conditions, such as acidity and ionic composition. Copyright 2009. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Fukuda, J.; Johnson, K. M.
2017-12-01
Postseismic deformation following the 2011 Mw9.0 Tohoku-oki earthquake has been captured by both on-land GNSS and seafloor GPS/Acoustic networks. Previous studies have shown that the observed postseismic displacements can be reproduced as the sum of contributions from viscoelastic relaxation of coseismic stress changes in the upper mantle and afterslip on the plate interface surrounding the coseismic rupture. In most previous studies, viscoelastic relaxation and afterslip were modeled separately and afterslip was estimated kinematically. In this study, we develop a mechanical model of postseismic deformation in which afterslip and viscoelastic relaxation are driven by coseismic stress perturbations and are mechanically coupled. We assume that afterslip is governed by a rate-strengthening friction law characterized by a friction parameter (a-b)σ, where a-b represents the rate dependence of steady-state friction and σ is the effective normal stress. Viscoelastic relaxation of the upper mantle is modeled with a biviscous Burgers rheology characterized by steady-state and transient viscosities. We calculate the evolution of afterslip and viscoelastic relaxation using stress changes computed from an assumed coseismic slip model as the initial condition. We examine the effects of the friction parameters, mantle viscosities, elastic thickness of the slab and upper plate, and coseismic slip distribution on the model prediction and explore the range of the parameters that can fit the observed postseismic displacements. We find that the vertical postseismic displacements are particularly sensitive to these parameters. Our modeling results indicate that the on-land postseismic deformation is dominated by afterslip, whereas the seafloor postseismic deformation is dominated by viscoelastic relaxation. We also examine whether afterslip overlaps regions that ruptured seismically during M6.3-7.2 earthquakes between 2003 and 2010. We find that significant overlap between afterslip and the historical M6.3-7.2 coseismic rupture areas is required to fit the horizontal postseismic displacements.
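The rate-strengthening afterslip ingredient can be illustrated with a one-degree-of-freedom spring-slider: a coseismic stress step relaxes through slip whose rate depends exponentially on stress through (a-b)σ. The study above solves the fully coupled 3-D afterslip/viscoelastic problem; this toy omits viscoelasticity entirely, and all values are assumed.

```python
# One-degree-of-freedom sketch of rate-strengthening afterslip: a spring-
# slider relaxes a coseismic stress step dtau, with slip rate
# V = V0 * exp(tau / ((a-b)*sigma)). Not the study's 3-D coupled model;
# all parameter values are assumed for illustration.
import numpy as np
from scipy.integrate import solve_ivp

absigma = 0.5e6   # friction parameter (a-b)*sigma (Pa), assumed
k_spr   = 1.0e7   # effective fault stiffness (Pa/m), assumed
V0      = 1.0e-9  # pre-earthquake (plate) slip rate (m/s)
dtau    = 1.0e6   # coseismic stress step on the afterslip region (Pa)

def rhs(t, y):
    tau = y[0]                             # stress above steady state
    V = V0 * np.exp(tau / absigma)         # rate-strengthening slip rate
    return [-k_spr * V]                    # slip relaxes the stress step

t_end = 3.0e7                              # roughly one year, in seconds
sol = solve_ivp(rhs, (0.0, t_end), [dtau], rtol=1e-8,
                t_eval=np.linspace(0.0, t_end, 5))
for t, tau in zip(sol.t, sol.y[0]):
    print(f"t = {t/86400:6.1f} d  stress perturbation = {tau/1e6:.3f} MPa")
```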
Addressing optimality principles in DGVMs: Dynamics of Carbon allocation changes
NASA Astrophysics Data System (ADS)
Pietsch, Stephan
2017-04-01
DGVMs are designed to reproduce and quantify ecosystem processes. Based on plant-functional or species-specific parameter sets, the energy, carbon, nitrogen and water cycles of different ecosystems are assessed. These models have proven to be important tools for investigating ecosystem fluxes as driven by plant, site and environmental factors. The general model approach assumes steady-state conditions and constant model parameters. Both assumptions, however, are wrong, since: (i) no given ecosystem is ever at steady state! (ii) Ecosystems have the capability to adapt to changes in growth conditions, e.g. via changes in allocation patterns! This presentation will give examples of how these general failures within current DGVMs may be addressed.
Addressing optimality principles in DGVMs: Dynamics of Carbon allocation changes.
NASA Astrophysics Data System (ADS)
Pietsch, S.
2016-12-01
DGVMs are designed to reproduce and quantify ecosystem processes. Based on plant-functional or species-specific parameter sets, the energy, carbon, nitrogen and water cycles of different ecosystems are assessed. These models have proven to be important tools for investigating ecosystem fluxes as driven by plant, site and environmental factors. The general model approach assumes steady-state conditions and constant model parameters. Both assumptions, however, are wrong. No given ecosystem is ever at steady state! Ecosystems have the capability to adapt to changes in growth conditions, e.g. via changes in allocation patterns! This presentation will give examples of how these general failures within current DGVMs may be addressed.
Constraints on Non-flat Cosmologies with Massive Neutrinos after Planck 2015
NASA Astrophysics Data System (ADS)
Chen, Yun; Ratra, Bharat; Biesiada, Marek; Li, Song; Zhu, Zong-Hong
2016-10-01
We investigate two dark energy cosmological models (i.e., the ΛCDM and ϕCDM models) with massive neutrinos, assuming two different neutrino mass hierarchies in both the spatially flat and non-flat scenarios, where in the ϕCDM model the scalar field possesses an inverse power-law potential, V(ϕ) ∝ ϕ^{-α} (α > 0). Cosmic microwave background data from Planck 2015, baryon acoustic oscillation data from 6dFGS, SDSS-MGS, BOSS-LOWZ and BOSS CMASS-DR11, the joint light-curve analysis compilation of SNe Ia apparent magnitude observations, and the Hubble Space Telescope H_0 prior are jointly employed to constrain the model parameters. We first determine constraints assuming three species of degenerate massive neutrinos. In the spatially flat (non-flat) ΛCDM model, the sum of neutrino masses is bounded as Σm_ν < 0.165 (0.299) eV at 95% confidence level (CL). Correspondingly, in the flat (non-flat) ϕCDM model, we find Σm_ν < 0.164 (0.301) eV at 95% CL. The inclusion of spatial curvature as a free parameter results in a significant broadening of the confidence regions for Σm_ν and other parameters. In the scenario where the total neutrino mass is dominated by the heaviest neutrino mass eigenstate, we obtain conclusions similar to those obtained in the degenerate neutrino mass scenario. In addition, the results show that the bounds on Σm_ν based on the two different neutrino mass hierarchies have insignificant differences in the spatially flat case for both the ΛCDM and ϕCDM models; however, the corresponding differences are larger in the non-flat case.
Padhi, Radhakant; Bhardhwaj, Jayender R
2009-06-01
An adaptive drug delivery design is presented in this paper using neural networks for effective treatment of infectious diseases. The generic mathematical model used describes the coupled evolution of the concentrations of pathogens, plasma cells and antibodies, together with a numerical value that indicates the relative condition of an organ damaged by the disease, under the influence of external drugs. From a system-theoretic point of view, the external drugs can be interpreted as control inputs, which can be designed based on control-theoretic concepts. In this study, assuming a set of nominal parameters in the mathematical model, first a nonlinear controller (drug administration) is designed based on the principle of dynamic inversion. This nominal drug administration plan was found to be effective in curing "nominal model patients" (patients whose immunological dynamics conform exactly to the mathematical model used for the control design). However, it was found to be ineffective in curing "realistic model patients" (patients whose immunological dynamics may have off-nominal parameter values and possibly unwanted inputs) in general. Hence, to make the drug dosage design more effective for realistic model patients, a model-following adaptive control design is carried out next with the help of neural networks that are trained online. Simulation studies indicate that the adaptive controller proposed in this paper holds promise for killing the invading pathogens and healing the damaged organ even in the presence of parameter uncertainties and continued pathogen attack. Note that the computational requirements for computing the control are minimal and all associated computations (including the training of the neural networks) can be carried out online. However, the approach assumes that the required diagnosis process can be carried out at a sufficiently fast rate that all the states are available for control computation.
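The dynamic-inversion principle named above can be shown on a scalar toy system: for dx/dt = f(x) + g(x)u, the control is chosen so the closed loop follows prescribed error dynamics. This is a generic illustration of the control principle, not the paper's four-state immunological model; the dynamics and gains below are assumed.

```python
# Generic scalar illustration of dynamic inversion: choose u so that
# dx/dt = -lam * (x - x_ref). Not the paper's pathogen/plasma-cell/antibody/
# organ model; f, g and all constants are toy assumptions.
f = lambda x: 0.4 * x        # toy "pathogen growth" dynamics, assumed
g = lambda x: -1.0           # toy drug-efficacy input channel, assumed
lam, x_ref = 2.0, 0.0        # desired decay rate and target state

x, dt = 1.0, 0.01
for _ in range(500):
    u = (-lam * (x - x_ref) - f(x)) / g(x)   # dynamic-inversion control law
    u = max(u, 0.0)                          # a drug dose cannot be negative
    x += dt * (f(x) + g(x) * u)              # explicit Euler step

print(f"state after 5 time units: {x:.5f} (target {x_ref})")
```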
Estimation of variance in Cox's regression model with shared gamma frailties.
Andersen, P K; Klein, J P; Knudsen, K M; Tabanera y Palacios, R
1997-12-01
The Cox regression model with a shared frailty factor allows for unobserved heterogeneity or for statistical dependence between the observed survival times. Estimation in this model when the frailties are assumed to follow a gamma distribution is reviewed, and we address the problem of obtaining variance estimates for regression coefficients, frailty parameter, and cumulative baseline hazards using the observed nonparametric information matrix. A number of examples are given comparing this approach with fully parametric inference in models with piecewise constant baseline hazards.
Gabran, S R I; Saad, J H; Salama, M M A; Mansour, R R
2009-01-01
This paper demonstrates the electromagnetic modeling and simulation of an implanted Medtronic deep brain stimulation (DBS) electrode using the finite difference time domain (FDTD) method. The model is developed using Empire XCcel and represents the electrode surrounded by brain tissue, assuming a homogeneous and isotropic medium. The model is created to study the parameters influencing the electric field distribution within the tissue in order to provide reference and benchmarking data for DBS and intra-cortical electrode development.
Karamisheva, Ralica D; Islam, M A
2005-01-01
Assuming that settling takes place in two zones (a constant-rate zone and a variable-rate zone), a model using four parameters accounting for the nature of the water-suspension system has been proposed for describing batch sedimentation processes. The sludge volume index (SVI) has been expressed in terms of these parameters. Some disadvantages of using the SVI as a design parameter have been pointed out, and it has been shown that a relationship between zone settling velocity and sludge concentration is more consistent for describing the settling behavior and for the design of settling tanks. The permissible overflow rate has been related to the technological parameters of the secondary settling tank by simple working equations. The graphical representations of these equations could be used to optimize the design and operation of secondary settling tanks.
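The abstract's four-parameter model is not spelled out here, but the zone-settling-velocity/concentration relationship it advocates is commonly written in the classical Vesilind form, sketched below; v0 and k are assumed values, and the only design rule used is that the clarifier overflow rate must not exceed the zone settling velocity.

```python
# Classical Vesilind zone-settling relation v_s = v0 * exp(-k * X) as an
# example of the velocity-concentration design basis argued for above.
# v0 and k are assumed; the paper's own four-parameter model is not shown.
import math

v0 = 7.0    # settling velocity extrapolated to zero concentration (m/h)
k  = 0.45   # Vesilind coefficient (m^3/kg), assumed

def zone_settling_velocity(X):   # X: sludge concentration (kg/m^3)
    return v0 * math.exp(-k * X)

for X in (2.0, 3.5, 5.0):
    v = zone_settling_velocity(X)
    # the permissible overflow rate of the settling tank cannot exceed v_s
    print(f"X = {X:.1f} kg/m^3 -> v_s = {v:.2f} m/h (max overflow rate)")
```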
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kao, Jim; Flicker, Dawn; Ide, Kayo
2006-05-20
This paper builds upon our recent data assimilation work with the extended Kalman filter (EKF) method [J. Kao, D. Flicker, R. Henninger, S. Frey, M. Ghil, K. Ide, Data assimilation with an extended Kalman filter for an impact-produced shock-wave study, J. Comp. Phys. 196 (2004) 705-723]. The purpose is to test the capability of the EKF in optimizing a model's physical parameters. The problem is to simulate the evolution of a shock produced through a high-speed flyer plate. In the earlier work, we showed that the EKF allows one to estimate the evolving state of the shock wave from a single pressure measurement, assuming that all model parameters are known. In the present paper, we show that imperfectly known model parameters can also be estimated accordingly, along with the evolving model state, from the same single measurement. The model parameter optimization using the EKF can be achieved through a simple modification of the original EKF formalism by including the model parameters in an augmented state variable vector. While the regular state variables are governed by both deterministic and stochastic forcing mechanisms, the parameters are only subject to the latter. The optimally estimated model parameters are thus obtained through a unified assimilation operation. We show that improving the accuracy of the model parameters also improves the state estimate. The time variation of the optimized model parameters results from blending the data and the corresponding values generated from the model, and lies within a small range, of less than 2%, of the parameter values of the original model. The solution computed with the optimized parameters performs considerably better and has a smaller total variance than its counterpart using the original time-constant parameters. These results indicate that the model parameters play a dominant role in the performance of the shock-wave hydrodynamic code at hand.
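The state-augmentation trick described above is generic and easy to demonstrate on a scalar system: the unknown parameter is appended to the state vector, driven only by process noise, and estimated by the same filter update as the state. The sketch below uses a toy linear system in place of the shock-wave hydrocode; all values are assumed.

```python
# Sketch of EKF parameter estimation via an augmented state vector [x, a]:
# the parameter a receives only stochastic forcing, so one unified Kalman
# update estimates state and parameter together. A scalar toy system stands
# in for the shock-wave hydrodynamic code.
import numpy as np

rng = np.random.default_rng(0)
a_true, dt, q, r = -0.5, 0.1, 1e-6, 0.05**2

x_true, y = 1.0, []
for _ in range(200):                       # simulate truth and measurements
    x_true += dt * a_true * x_true
    y.append(x_true + rng.normal(0.0, np.sqrt(r)))

z = np.array([1.0, -1.0])                  # augmented state, wrong prior for a
P = np.diag([0.1, 1.0])
H = np.array([[1.0, 0.0]])                 # only x is measured
for yk in y:
    F = np.array([[1.0 + dt * z[1], dt * z[0]],   # Jacobian of the dynamics
                  [0.0,             1.0]])
    z = np.array([z[0] + dt * z[1] * z[0], z[1]]) # parameter: random walk only
    P = F @ P @ F.T + np.diag([q, q])
    S = H @ P @ H.T + r                    # innovation variance
    K = P @ H.T / S                        # Kalman gain
    z = z + (K * (yk - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"estimated a = {z[1]:.3f} (true {a_true})")
```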
Equation of state for dense nucleonic matter from metamodeling. I. Foundational aspects
NASA Astrophysics Data System (ADS)
Margueron, Jérôme; Hoffmann Casali, Rudiney; Gulminelli, Francesca
2018-02-01
Metamodeling for the nucleonic equation of state (EOS), inspired by a Taylor expansion around the saturation density of symmetric nuclear matter, is proposed and parameterized in terms of the empirical parameters. The present knowledge of nuclear empirical parameters is first reviewed in order to estimate their average values and associated uncertainties, thus defining the parameter space of the metamodeling. They are divided into isoscalar and isovector types, and ordered according to their power in the density expansion. The goodness of the metamodeling is analyzed against the predictions of the original models. In addition, since no correlation among the empirical parameters is assumed a priori, all arbitrary density dependences can be explored, which might not be accessible in existing functionals. Spurious correlations due to the assumed functional form are also removed. This meta-EOS allows direct relations between the uncertainties on the empirical parameters and the density dependence of the nuclear equation of state and its derivatives, and the mapping between the two can be done with standard Bayesian techniques. A sensitivity analysis shows that the most influential empirical parameters are the isovector parameters L_sym and K_sym, and that laboratory constraints at supersaturation densities are essential to reduce the present uncertainties. The present metamodeling of the nuclear matter EOS is proposed for further applications in neutron-star and supernova matter.
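The density expansion underlying such a metamodeling can be sketched in its standard truncated form, with the energy per nucleon expanded in x = (n - n_sat)/(3 n_sat) and the isoscalar/isovector empirical parameters as coefficients. The values below are representative magnitudes assumed for illustration, not the review's fitted averages.

```python
# Truncated meta-EOS sketch: e(n, delta) = e_IS(n) + e_sym(n) * delta**2,
# expanded in x = (n - n_sat)/(3 n_sat). Parameter values are representative
# magnitudes assumed for illustration only.
n_sat = 0.16                                  # saturation density (fm^-3)
E_sat, K_sat, Q_sat = -15.8, 230.0, -300.0    # isoscalar parameters (MeV)
E_sym, L_sym, K_sym = 32.0, 60.0, -100.0      # isovector parameters (MeV)

def energy_per_nucleon(n, delta):
    """Energy per nucleon at density n and isospin asymmetry delta."""
    x = (n - n_sat) / (3.0 * n_sat)
    e_is  = E_sat + 0.5 * K_sat * x**2 + Q_sat * x**3 / 6.0
    e_sym = E_sym + L_sym * x + 0.5 * K_sym * x**2
    return e_is + e_sym * delta**2

for n in (0.08, 0.16, 0.32):
    print(f"n = {n:.2f} fm^-3: e(SNM) = {energy_per_nucleon(n, 0.0):6.2f} MeV,"
          f"  e(PNM) = {energy_per_nucleon(n, 1.0):6.2f} MeV")
```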
Optimal ordering and production policy for a recoverable item inventory system with learning effect
NASA Astrophysics Data System (ADS)
Tsai, Deng-Maw
2012-02-01
This article presents two models for determining an optimal integrated economic order quantity and economic production quantity policy in a recoverable manufacturing environment. The models assume that the unit production time of the recovery process decreases with the increase in total units produced as a result of learning. A fixed proportion of used products are collected from customers and then recovered for reuse. The recovered products are assumed to be in good condition and acceptable to customers. Constant demand can be satisfied by utilising both newly purchased products and recovered products. The aim of this article is to show how to minimise total inventory-related cost. The total cost functions of the two models are derived and two simple search procedures are proposed to determine optimal policy parameters. Numerical examples are provided to illustrate the proposed models. In addition, sensitivity analyses have also been performed and are discussed.
The Impact of Uncertain Physical Parameters on HVAC Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Elizondo, Marcelo A.; Lu, Shuai
HVAC units are currently one of the major resources providing demand response (DR) in residential buildings. Models of HVAC units with DR functionality can improve understanding of their impact on power system operations and facilitate the deployment of DR technologies. This paper investigates the importance of various physical parameters and their distributions to the HVAC response to DR signals, which is a key step in the construction of HVAC models for a population of units with insufficient data. These parameters include the size of floors, insulation efficiency, the amount of solid mass in the house, and the efficiency of the HVAC units. These parameters are usually assumed to follow Gaussian or uniform distributions. We study the effect of uncertainty in the chosen parameter distributions on the aggregate HVAC response to DR signals, during the transient phase and in steady state. We use a quasi-Monte Carlo sampling method with linear regression and Prony analysis to evaluate the sensitivity of the DR output to the uncertainty in the distribution parameters. A significance ranking of the uncertainty sources is given as future guidance for the modeling of HVAC demand response.
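The quasi-Monte Carlo plus linear-regression part of that workflow can be sketched as follows: Sobol samples of the physical parameters are pushed through a response model and ranked by standardized regression coefficients. A toy scalar response stands in for the actual aggregate HVAC simulation, and the parameter ranges are assumed.

```python
# Sketch of quasi-Monte Carlo sensitivity ranking: Sobol samples of the
# physical parameters, a response evaluation, then standardized linear-
# regression coefficients. The response function is a toy stand-in for the
# aggregate HVAC demand-response simulation; all ranges are assumed.
import numpy as np
from scipy.stats import qmc

param_names = ["floor_area", "insulation", "thermal_mass", "hvac_efficiency"]
lower = np.array([100.0, 0.5, 1000.0, 2.0])
upper = np.array([300.0, 1.5, 4000.0, 4.0])

sampler = qmc.Sobol(d=4, scramble=True, seed=1)
X = qmc.scale(sampler.random_base2(m=8), lower, upper)   # 256 samples

def toy_response(x):    # assumed surrogate, NOT the real HVAC model
    area, ins, mass, eff = x.T
    return area / (ins * eff) + 0.002 * mass

y = toy_response(X)
Xs = (X - X.mean(0)) / X.std(0)                  # standardize inputs
ys = (y - y.mean()) / y.std()
beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)   # standardized coefficients
for name, b in sorted(zip(param_names, beta), key=lambda t: -abs(t[1])):
    print(f"{name:16s} beta = {b:+.3f}")
```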
Studies and comparison of currently utilized models for ablation in Electrothermal-chemical guns
NASA Astrophysics Data System (ADS)
Jia, Shenli; Li, Rui; Li, Xingwen
2009-10-01
Wall ablation is a key process taking place in the capillary plasma generator of Electrothermal-Chemical (ETC) guns, and its characteristics directly determine the generator's performance. In the present article, this ablation process is studied theoretically. Currently widely used mathematical models describing this process are analyzed and compared, including a recently developed kinetic model that accounts for the unsteady state in the plasma-wall transition region by dividing it into two sub-layers (a Knudsen layer and a collision-dominated non-equilibrium hydrodynamic layer), a model based on the Langmuir law, and a simplified model widely used for the arc-wall interaction process in circuit breakers, which assumes an empirically obtained proportionality factor and ablation enthalpy. The same bulk plasma state and parameters are assumed when analyzing and comparing the models, in order to isolate the differences caused by the models themselves. Finally, the ablation rate is calculated with each method and the differences are discussed.
Information spreading dynamics in hypernetworks
NASA Astrophysics Data System (ADS)
Suo, Qi; Guo, Jin-Li; Shen, Ai-Zhong
2018-04-01
Contact patterns and spreading strategies fundamentally influence the spread of information. Current mathematical methods largely assume that contacts between individuals are fixed by networks. In fact, individuals are affected by all of their neighbors across different social relationships. Here, we develop a mathematical approach to describe the information spreading process in hypernetworks. Each individual is viewed as a node, and each social relationship containing the individual is viewed as a hyperedge. Based on the SIS epidemic model, we construct two spreading models. One model is based on global transmission, corresponding to the RP strategy. The other is based on local transmission, corresponding to the CP strategy. These models degenerate into complex network models for a special parameter choice; hypernetwork models thus extend the traditional models and are more realistic. Further, we discuss the impact on the models of parameters including the structure parameters of the hypernetwork, the spreading rate, the recovery rate, and the information seed. The propagation time and the density of informed nodes reveal the overall trend of information dissemination. Comparing the two models, we find that there is no spreading threshold in RP, while there exists a spreading threshold in CP. The RP strategy induces a broader and faster information spreading process under the same parameters.
Model based estimation of sediment erosion in groyne fields along the River Elbe
NASA Astrophysics Data System (ADS)
Prohaska, Sandra; Jancke, Thomas; Westrich, Bernhard
2008-11-01
River water quality is still a vital environmental issue, even though ongoing emissions of contaminants are being reduced in several European rivers. The mobility of historically contaminated deposits is a key issue in sediment management strategy and remediation planning. Resuspension of contaminated sediments impacts water quality and is thus important for river engineering and ecological rehabilitation. The erodibility of the sediments and associated contaminants is difficult to predict due to complex time-dependent physical, chemical, and biological processes, as well as the lack of information. Therefore, in engineering practice the values of the erosion parameters are usually assumed to be constant despite their high spatial and temporal variability, which leads to large uncertainty in the erosion parameters. The goal of the presented study is to compare the deterministic approach, which assumes a constant critical erosion shear stress, with an innovative approach that treats the critical erosion shear stress as a random variable. Furthermore, the effective value of the critical erosion shear stress, its applicability in numerical models, and the erosion probability are estimated. The results presented here are based on field measurements and numerical modelling of the River Elbe groyne fields.
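The deterministic-versus-random contrast can be made concrete with a Monte Carlo sketch: an excess-shear erosion law is assumed, and the critical shear stress is drawn from a lognormal distribution. All parameter values below are illustrative, not the Elbe calibration.

```python
# Sketch of the contrast described above: erosion with a constant critical
# shear stress versus a lognormally distributed one. An excess-shear law
# E = M * max(tau/tau_c - 1, 0) is assumed; all values are illustrative.
import numpy as np

rng = np.random.default_rng(42)
M       = 0.1   # erodibility (kg/m^2/s), assumed
tau     = 1.2   # bed shear stress during the flood (Pa)
tau_c   = 1.5   # deterministic critical shear stress (Pa)
sigma_l = 0.4   # lognormal spread of tau_c, assumed

# deterministic approach: tau < tau_c, hence no erosion at all
E_det = M * max(tau / tau_c - 1.0, 0.0)

# probabilistic approach: part of the tau_c distribution lies below tau
tau_c_samples = rng.lognormal(np.log(tau_c), sigma_l, 100_000)
E_mc = M * np.maximum(tau / tau_c_samples - 1.0, 0.0)

print(f"deterministic erosion rate:   {E_det:.4f} kg/m^2/s")
print(f"mean stochastic erosion rate: {E_mc.mean():.4f} kg/m^2/s")
print(f"erosion probability:          {(tau_c_samples < tau).mean():.2%}")
```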
Multiplicative Versus Additive Filtering for Spacecraft Attitude Determination
NASA Technical Reports Server (NTRS)
Markley, F. Landis
2003-01-01
The absence of a globally nonsingular three-parameter representation of rotations forces attitude Kalman filters to estimate either a singular or a redundant attitude representation. We compare two filtering strategies using simplified kinematics and measurement models. Our favored strategy estimates a three-parameter representation of attitude deviations from a reference attitude specified by a higher-dimensional nonsingular parameterization. The deviations from the reference are assumed to be small enough to avoid any singularity or discontinuity of the three-dimensional parameterization. We point out some disadvantages of the other strategy, which directly estimates the four-parameter quaternion representation.
An inexact reverse logistics model for municipal solid waste management systems.
Zhang, Yi Mei; Huang, Guo He; He, Li
2011-03-01
This paper proposes an inexact reverse logistics model for municipal solid waste management systems (IRWM). Waste managers, suppliers, industries and distributors are involved in strategic planning and operational execution through reverse logistics management. All the parameters are assumed to be intervals to quantify the uncertainties in the optimization process and solutions of IRWM. To solve this model, a piecewise interval programming approach was developed to deal with min-min functions in both the objectives and the constraints. The application of the model was illustrated through a classical municipal solid waste management case. Two scenarios with different cost parameters for the landfill and the waste-to-energy (WTE) facility were analyzed. The IRWM can reflect the dynamic and uncertain characteristics of MSW management systems and can facilitate the generation of desired management plans. The model could be further advanced by incorporating stochastic or fuzzy parameters into its framework. The design of a multi-waste, multi-echelon, multi-uncertainty reverse logistics model for waste management networks would also be desirable. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Gibbons, D. E.; Richard, R. R.
1979-01-01
The methods used to calculate the sensitivity parameter noise equivalent reflectance of a remote-sensing scanner are explored, and the results are compared with values measured over calibrated test sites. Data were acquired on four occasions covering a span of 4 years and providing various atmospheric conditions. One of the calculated values was based on assumed atmospheric conditions, whereas two others were based on atmospheric models. Results indicate that the assumed atmospheric conditions provide useful answers adequate for many purposes. A nomograph was developed to indicate sensitivity variations due to geographic location, time of day, and season.
Dynamical features of an anisotropic cosmological model
NASA Astrophysics Data System (ADS)
Mishra, B.; Tarai, Sankarsan; Tripathy, S. K.
2018-04-01
The dynamical features of Bianchi type VI_h (BVI_h) universe are investigated in f(R, T) theory of gravity. The field equations and the physical properties of the model are derived considering a power law expansion of the universe. The effect of anisotropy on the dynamics of the universe as well as on the energy conditions are studied. The assumed anisotropy of the model is found to have substantial effects on the energy conditions and dynamical parameters.
Evaluating the Controls on Magma Ascent Rates Through Numerical Modelling
NASA Astrophysics Data System (ADS)
Thomas, M. E.; Neuberg, J. W.
2015-12-01
The estimation of the magma ascent rate is a key factor in predicting styles of volcanic activity and relies on understanding how strongly the ascent rate is controlled by different magmatic parameters. The ability to link potential changes in such parameters to monitoring data is an essential step towards using these data as a predictive tool. We present the results of a suite of conduit flow models that assess the influence of individual model parameters, such as the magmatic water content, temperature or bulk magma composition, on the magma flow in the conduit during an extrusive dome eruption. By systematically varying these parameters we assess their relative importance to changes in ascent rate. The results indicate that potential changes to conduit geometry and excess pressure in the magma chamber are amongst the dominant controlling variables that affect the ascent rate, but the single most important parameter is the volatile content (assumed in this case to be water only). Varying this parameter across the range of reported values changes the calculated ascent velocities by up to 800%, triggering fluctuations in ascent rates that span the potential threshold between effusive and explosive eruptions.
NASA Astrophysics Data System (ADS)
Irving, J.; Koepke, C.; Elsheikh, A. H.
2017-12-01
Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest-neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach makes it possible to remove posterior bias and obtain a more realistic characterization of uncertainty.
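The projection step can be sketched schematically: the K nearest dictionary entries (in parameter space) supply a local orthonormal error basis via an SVD, and the residual component in that subspace is attributed to model error before the likelihood is evaluated. Shapes and data below are toy stand-ins for the GPR traveltime setting.

```python
# Schematic of the projection-based model-error separation described above:
# a local basis from the K nearest dictionary entries is used to split the
# residual into a model-error part and the remainder. Toy data throughout.
import numpy as np

rng = np.random.default_rng(3)
n_data, n_dict, K = 50, 40, 5

theta_dict = rng.uniform(0, 1, (n_dict, 3))      # stored parameter sets
err_dict = rng.normal(0, 1, (n_dict, n_data))    # detailed minus approximate

def model_error_basis(theta, k=K):
    """Local error basis from the K nearest dictionary entries (via SVD)."""
    idx = np.argsort(np.linalg.norm(theta_dict - theta, axis=1))[:k]
    U, _, _ = np.linalg.svd(err_dict[idx].T, full_matrices=False)
    return U                                     # (n_data, k), orthonormal

theta_prop = rng.uniform(0, 1, 3)                # current MCMC proposal
residual = rng.normal(0, 1, n_data)              # data minus approximate model

B = model_error_basis(theta_prop)
model_error_part = B @ (B.T @ residual)          # projection onto the basis
cleaned = residual - model_error_part            # used in the likelihood
print(f"residual norm {np.linalg.norm(residual):.2f} -> "
      f"cleaned {np.linalg.norm(cleaned):.2f}")
```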
NASA Technical Reports Server (NTRS)
Bartrand, Timothy A.
1988-01-01
During the shutdown of the space shuttle main engine, oxygen flow is shut off from the fuel preburner and helium is used to push the residual oxygen into the combustion chamber. During this process a low frequency combustion instability, or chug, occurs. This chug has resulted in damage to the engine's augmented spark igniter due to backflow of the contents of the preburner combustion chamber into the oxidizer feed system. To determine possible causes and fixes for the chug, the fuel preburner was modeled as a heterogeneous stirred tank combustion chamber, a variable mass flow rate oxidizer feed system, a constant mass flow rate fuel feed system and an exit turbine. Within the combustion chamber gases were assumed perfectly mixed. To account for liquid in the combustion chamber, a uniform droplet distribution was assumed to exist in the chamber, with mean droplet diameter determined from an empirical relation. A computer program was written to integrate the resulting differential equations. Because chamber contents were assumed perfectly mixed, the fuel preburner model erroneously predicted that combustion would not take place during shutdown. The combustion rate model was modified to assume that all liquid oxygen that vaporized instantaneously combusted with fuel. Using this combustion model, the effect of engine parameters on chamber pressure oscillations during the SSME shutdown was calculated.
Disorder-induced losses in photonic crystal waveguides with line defects.
Gerace, Dario; Andreani, Lucio Claudio
2004-08-15
A numerical analysis of extrinsic diffraction losses in two-dimensional photonic crystal slabs with line defects is reported. To model disorder, a Gaussian distribution of hole radii in the triangular lattice of airholes is assumed. The extrinsic losses below the light line increase quadratically with the disorder parameter, decrease slightly with increasing core thickness, and depend weakly on the hole radius. For typical values of the disorder parameter the calculated loss values of guided modes below the light line compare favorably with available experimental results.
A program for identification of linear systems
NASA Technical Reports Server (NTRS)
Buell, J.; Kalaba, R.; Ruspini, E.; Yakush, A.
1971-01-01
A program has been written for the identification of parameters in certain linear systems. These systems appear in biomedical problems, particularly in compartmental models of pharmacokinetics. The method presented here assumes that some of the state variables are regularly modified by jump conditions. This simulates the administration of drugs following some prescribed dosing regimen. Parameters are identified by a least-squares fit of the linear differential system to a set of experimental observations. The method is especially suited to cases where the interval of observation of the system is very long.
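A minimal modern version of this setting is sketched below: a one-compartment pharmacokinetic model whose state jumps at each bolus dose, fitted by least squares. The dose schedule, parameter values and noise level are hypothetical, and superposition of exponentials replaces explicit integration of the linear system.

```python
# Least-squares identification of a one-compartment model with repeated
# bolus doses (the "jump conditions" above). All doses, times and parameter
# values are hypothetical; superposition of exponentials is used since the
# system is linear.
import numpy as np
from scipy.optimize import least_squares

dose_times, dose = np.array([0.0, 8.0, 16.0]), 100.0   # bolus schedule (mg)
t_obs = np.linspace(1.0, 24.0, 12)                     # sampling times (h)

def concentration(t, k_el, V):
    """Each dose adds (dose/V) * exp(-k_el * (t - td)) for t >= td."""
    t = np.atleast_1d(t)
    return sum((dose / V) * np.exp(-k_el * (t - td)) * (t >= td)
               for td in dose_times)

k_true, V_true = 0.25, 40.0
rng = np.random.default_rng(7)
y_obs = concentration(t_obs, k_true, V_true) \
        * (1.0 + 0.05 * rng.normal(size=t_obs.size))   # noisy observations

fit = least_squares(lambda p: concentration(t_obs, *p) - y_obs,
                    x0=[0.1, 20.0], bounds=([1e-3, 1.0], [2.0, 200.0]))
print(f"k_el = {fit.x[0]:.3f} (true {k_true}), "
      f"V = {fit.x[1]:.1f} (true {V_true})")
```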
An approximate generalized linear model with random effects for informative missing data.
Follmann, D; Wu, M
1995-03-01
This paper develops a class of models to deal with missing data from longitudinal studies. We assume that separate models for the primary response and missingness (e.g., number of missed visits) are linked by a common random parameter. Such models have been developed in the econometrics (Heckman, 1979, Econometrica 47, 153-161) and biostatistics (Wu and Carroll, 1988, Biometrics 44, 175-188) literature for a Gaussian primary response. We allow the primary response, conditional on the random parameter, to follow a generalized linear model and approximate the generalized linear model by conditioning on the data that describes missingness. The resultant approximation is a mixed generalized linear model with possibly heterogeneous random effects. An example is given to illustrate the approximate approach, and simulations are performed to critique the adequacy of the approximation for repeated binary data.
Ellis, John; Evans, Jason L.; Mustafayev, Azar; ...
2016-10-28
Here, we revisit minimal supersymmetric SU(5) grand unification (GUT) models in which the soft supersymmetry-breaking parameters of the minimal supersymmetric Standard Model (MSSM) are universal at some input scale, M_in, above the supersymmetric gauge-coupling unification scale, M_GUT. As in the constrained MSSM (CMSSM), we assume that the scalar masses and gaugino masses have common values, m_0 and m_1/2, respectively, at M_in, as do the trilinear soft supersymmetry-breaking parameters A_0. Going beyond previous studies of such a super-GUT CMSSM scenario, we explore the constraints imposed by the lower limit on the proton lifetime and the LHC measurement of the Higgs mass, m_h. We find regions of m_0, m_1/2, A_0 and the parameters of the SU(5) superpotential that are compatible with these and other phenomenological constraints, such as the density of cold dark matter, which we assume to be provided by the lightest neutralino. Typically, these allowed regions appear for m_0 and m_1/2 in the multi-TeV region, for suitable values of the unknown SU(5) GUT-scale phases and superpotential couplings, and with the ratio of supersymmetric Higgs vacuum expectation values tan β ≲ 6.
State and Parameter Estimation for a Coupled Ocean--Atmosphere Model
NASA Astrophysics Data System (ADS)
Ghil, M.; Kondrashov, D.; Sun, C.
2006-12-01
The El-Nino/Southern-Oscillation (ENSO) dominates interannual climate variability and plays, therefore, a key role in seasonal-to-interannual prediction. Much is known by now about the main physical mechanisms that give rise to and modulate ENSO, but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean--atmosphere model of ENSO. The coupled model consists of an upper-ocean, reduced-gravity model of the Tropical Pacific and a steady-state atmospheric response to the sea surface temperature (SST). The model errors are assumed to be mainly in the atmospheric wind stress, and assimilated data are equatorial Pacific SSTs. Model behavior is very sensitive to two key parameters: (i) μ, the ocean-atmosphere coupling coefficient between SST and wind stress anomalies; and (ii) δs, the surface-layer coefficient. Previous work has shown that δs determines the period of the model's self-sustained oscillation, while μ measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward propagating mode. Estimation of these parameters is tested first on synthetic data and allows us to recover the delayed-oscillator mode starting from model parameter values that correspond to the westward-propagating case. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean--atmosphere GCMs will be discussed.
Kosmidis, Kosmas; Argyrakis, Panos; Macheras, Panos
2003-07-01
To verify the Higuchi law and study drug release from cylindrical and spherical matrices by means of Monte Carlo computer simulation. A one-dimensional matrix, based on the theoretical assumptions of the derivation of the Higuchi law, was simulated and its time evolution was monitored. Cylindrical and spherical three-dimensional lattices were simulated, with sites at the boundary of the lattice denoted as leak sites. Particles were allowed to move inside the lattice using the random walk model. Excluded-volume interactions between the particles were assumed. We monitored the system's time evolution for different lattice sizes and different initial particle concentrations. The Higuchi law was verified using the Monte Carlo technique in a one-dimensional lattice. It was found that Fickian drug release from cylindrical matrices can be approximated nicely with the Weibull function. A simple linear relation between the Weibull function parameters and the specific surface of the system was found. Drug release from a matrix, as a result of a diffusion process assuming excluded-volume interactions between the drug molecules, can be described using a Weibull function. This model, although approximate and semiempirical, has the benefit of providing a simple physical connection between the model parameters and the system geometry, which was something missing from other semiempirical models.
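A compact version of this kind of experiment is sketched below: independent random walkers on a 1-D lattice leak out at the boundaries and the release curve is fitted with a Weibull function. For brevity the excluded-volume interactions of the study are omitted, so this is the dilute limit only; lattice size and particle counts are arbitrary.

```python
# Compact Monte Carlo release experiment in the spirit of the study above:
# independent walkers on a 1-D lattice with leak sites at both ends; the
# release curve is fitted with M_t/M_inf = 1 - exp(-a * t**b). Excluded-
# volume interactions are omitted (dilute limit).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
L, n_particles, n_steps = 101, 2000, 4000
pos = rng.integers(1, L - 1, n_particles)      # initial interior positions
alive = np.ones(n_particles, bool)             # still inside the matrix

released = np.zeros(n_steps)
for t in range(n_steps):
    step = rng.choice((-1, 1), n_particles)    # unbiased random walk
    pos = np.where(alive, pos + step, pos)
    escaped = alive & ((pos <= 0) | (pos >= L - 1))   # hit a leak site
    alive &= ~escaped
    released[t] = n_particles - alive.sum()    # cumulative release

frac = released / n_particles
tt = np.arange(1, n_steps + 1, dtype=float)
weibull = lambda t, a, b: 1.0 - np.exp(-a * t**b)
(a, b), _ = curve_fit(weibull, tt, frac, p0=(1e-3, 0.7))
print(f"Weibull fit: a = {a:.2e}, b = {b:.3f}")
```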
Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H
2016-05-01
The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate first, the abundance, and second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.
Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection
NASA Astrophysics Data System (ADS)
Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan
2017-08-01
Standard approaches to Bayesian parameter inference in large-scale structure (LSS) assume a Gaussian functional form (chi-squared form) for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without making any assumptions about the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and the data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model using the HOD and a distance metric based on galaxy number density, the two-point correlation function and the galaxy group multiplicity function, we constrain the HOD parameters of a mock observation generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to constraints we obtain using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
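The core ABC logic (prior draw, forward simulation, distance, tolerance) is easy to show in its simplest rejection form. The sketch below uses a toy Gaussian model in place of the HOD and mock-catalogue machinery, and plain rejection instead of the population Monte Carlo refinement used in the paper.

```python
# Minimal ABC-rejection sketch of the inference logic described above:
# draw from the prior, simulate with the forward model, keep draws whose
# summary statistics fall within a tolerance of the observed ones. A toy
# Gaussian model stands in for the HOD + mock-catalogue pipeline.
import numpy as np

rng = np.random.default_rng(11)
theta_true = 1.5
data = rng.normal(theta_true, 1.0, 500)            # "observed" data
obs_summary = np.array([data.mean(), data.std()])  # summary statistics

def forward(theta):                                # generative forward model
    sim = rng.normal(theta, 1.0, 500)
    return np.array([sim.mean(), sim.std()])

def distance(s1, s2):                              # metric on summaries
    return np.linalg.norm(s1 - s2)

prior_draws = rng.uniform(-5.0, 5.0, 20_000)       # flat prior on theta
accepted = [th for th in prior_draws
            if distance(forward(th), obs_summary) < 0.1]

post = np.array(accepted)
print(f"accepted {post.size} draws; posterior mean = {post.mean():.3f} "
      f"(true {theta_true})")
```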
Analyzing ROC curves using the effective set-size model
NASA Astrophysics Data System (ADS)
Samuelson, Frank W.; Abbey, Craig K.; He, Xin
2018-03-01
The Effective Set-Size model has been used to describe uncertainty in various signal detection experiments. The model regards images as if they were an effective number (M*) of searchable locations, where the observer treats each location as a location-known-exactly detection task with signals having average detectability d'. The model assumes a rational observer behaves as if he searches an effective number of independent locations and follows signal detection theory at each location. Thus the location-known-exactly detectability (d') and the effective number of independent locations M* fully characterize search performance. In this model the image rating in a single-response task is assumed to be the maximum response that the observer would assign to these many locations. The model has been used by a number of other researchers, and is well corroborated. We examine this model as a way of differentiating imaging tasks that radiologists perform. Tasks involving more searching or location uncertainty may have higher estimated M* values. In this work we applied the Effective Set-Size model to a number of medical imaging data sets. The data sets include radiologists reading screening and diagnostic mammography with and without computer-aided diagnosis (CAD), and breast tomosynthesis. We developed an algorithm to fit the model parameters using two-sample maximum-likelihood ordinal regression, similar to the classic bi-normal model. The resulting model ROC curves are rational and fit the observed data well. We find that the distributions of M* and d' differ significantly among these data sets, and differ between pairs of imaging systems within studies. For example, on average tomosynthesis increased readers' d' values, while CAD reduced the M* parameters. We demonstrate that the model parameters M* and d' are correlated. We conclude that the Effective Set-Size model may be a useful way of differentiating location uncertainty from the diagnostic uncertainty in medical imaging tasks.
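The ROC curve implied by the max-of-M* decision rule can be written down directly, assuming unit-variance Gaussian responses and a single signal location per signal image: the noise-image rating is the maximum of M* noise draws, the signal-image rating the maximum of M*-1 noise draws and one draw shifted by d'. This reconstruction is a sketch under those assumptions, not the authors' ordinal-regression fitting code.

```python
# ROC curve of the max-of-M* rule, assuming unit-variance Gaussians and one
# signal location per signal image: FPF(t) = 1 - Phi(t)**M and
# TPF(t) = 1 - Phi(t)**(M-1) * Phi(t - dprime). A sketch, not the authors'
# maximum-likelihood fitting procedure.
import numpy as np
from scipy.stats import norm

def set_size_roc(dprime, M, t=np.linspace(-4.0, 8.0, 400)):
    fpf = 1.0 - norm.cdf(t) ** M
    tpf = 1.0 - norm.cdf(t) ** (M - 1) * norm.cdf(t - dprime)
    return fpf, tpf

for dprime, M in ((2.0, 1), (2.0, 10), (3.0, 10)):
    fpf, tpf = set_size_roc(dprime, M)
    auc = np.trapz(tpf[::-1], fpf[::-1])   # integrate with FPF increasing
    print(f"d' = {dprime}, M* = {M:2d}: AUC = {auc:.3f}")
```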
Time-Related Decay or Interference-Based Forgetting in Working Memory?
ERIC Educational Resources Information Center
Portrat, Sophie; Barrouillet, Pierre; Camos, Valerie
2008-01-01
The time-based resource-sharing model of working memory assumes that memory traces suffer from a time-related decay when attention is occupied by concurrent activities. Using complex continuous span tasks in which temporal parameters are carefully controlled, P. Barrouillet, S. Bernardin, S. Portrat, E. Vergauwe, & V. Camos (2007) recently…
Ionizing Shocks in Argon. Part 1: Collisional-Radiative Model and Steady-State Structure (Preprint)
2010-09-09
The absorption oscillator strength is given by f_ij^abs = (g_j / g_i) · A_ji / (3γ). (43) Contributions to the parameter γ have been assumed to result from a combination of… …discretization, the Saha temperatures of the higher states overshoot T_e and relax with T_h, indicating over…
Calibration of Response Data Using MIRT Models with Simple and Mixed Structures
ERIC Educational Resources Information Center
Zhang, Jinming
2012-01-01
It is common to assume during a statistical analysis of a multiscale assessment that the assessment is composed of several unidimensional subtests or that it has simple structure. Under this assumption, the unidimensional and multidimensional approaches can be used to estimate item parameters. These two approaches are equivalent in parameter…
Poisson mixture model for measurements using counting.
Miller, Guthrie; Justus, Alan; Vostrotin, Vadim; Dry, Donald; Bertelli, Luiz
2010-03-01
Starting with the basic Poisson statistical model of a counting measurement process, 'extra-Poisson' variance or 'overdispersion' is included by assuming that the Poisson parameter representing the mean number of counts itself comes from another distribution. The Poisson parameter is assumed to be given by the quantity of interest in the inference process multiplied by a lognormally distributed normalising coefficient, plus an additional lognormal background that might be correlated with the normalising coefficient (shared uncertainty). The example of a lognormal environmental background in uranium urine data is discussed. An additional uncorrelated background is also included. The uncorrelated background is estimated from a background count measurement using Bayesian arguments. The rather complex formulas are validated using Monte Carlo. An analytical expression is obtained for the probability distribution of gross counts coming from the uncorrelated background, which allows straightforward calculation of a classical decision level in the form of a gross-count alarm point with a desired false-positive rate. The main purpose of this paper is to derive formulas for exact likelihood calculations in the case of various kinds of backgrounds.
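The gross-count alarm point described above can also be obtained by brute force: simulate Poisson counts whose mean is lognormally distributed and take the quantile matching the desired false-positive rate. The paper derives this analytically; the sketch below is a Monte Carlo cross-check with assumed parameter values.

```python
# Monte Carlo version of the decision-level calculation: Poisson counts with
# a lognormally distributed background mean (overdispersion), alarm point set
# at the quantile giving the desired false-positive rate. Parameter values
# are assumed; the paper treats this analytically.
import numpy as np

rng = np.random.default_rng(2)
gm_bkg  = 10.0    # geometric mean of the background mean counts, assumed
gsd_bkg = 1.5     # geometric standard deviation, assumed
fpr     = 0.01    # desired false-positive rate

mu = rng.lognormal(np.log(gm_bkg), np.log(gsd_bkg), 1_000_000)
gross = rng.poisson(mu)                 # overdispersed background counts

alarm = int(np.quantile(gross, 1.0 - fpr)) + 1
print(f"alarm point: {alarm} gross counts "
      f"(false-positive rate ~ {(gross >= alarm).mean():.4f})")
```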
Resonant Tidal Excitation of Internal Waves in the Earth's Fluid Core
NASA Technical Reports Server (NTRS)
Tyler, Robert H.; Kuang, Weijia
2014-01-01
It has long been speculated that there is a stably stratified layer below the core-mantle boundary, and two recent studies have improved the constraints on the parameters describing this stratification. Here we consider the dynamical implications of this layer using a simplified model. We first show that the stratification in this surface layer has sensitive control over the rate at which tidal energy is transferred to the core. We then show that when the stratification parameters from the recent studies are used in this model, a resonant configuration arises whereby tidal forces perform elevated rates of work in exciting core flow. Specifically, the internal wave speeds derived from the two independent studies (150 and 155 m/s) are in remarkable agreement with the speed (152 m/s) required for excitation of the primary normal mode of oscillation, as calculated from full solutions of the Laplace Tidal Equations applied to a reduced-gravity idealized model representing the stratified layer. In evaluating this agreement it is noteworthy that the idealized model assumed may be regarded as the most reduced representation of the stratified dynamics of the layer, in that there are no non-essential dynamical terms in the assumed governing equations. While it is certainly possible that a more realistic treatment may require additional dynamical terms or coupling, it is also clear that this reduced representation includes no freedom for coercing the correlation described. This suggests that one must accept either (1) that tidal forces resonantly excite core flow and this is predicted by a simple model, or (2) that either the independent estimates or the dynamical model does not accurately portray the core surface layer and there has simply been an unlikely coincidence between three estimates of a stratification parameter which would otherwise have a broad plausible range.
NASA Astrophysics Data System (ADS)
Bajargaan, Ruchi; Patel, Arvind
2018-04-01
One-dimensional unsteady adiabatic flow behind an exponential shock wave propagating in a self-gravitating, rotating, axisymmetric dusty gas with heat conduction and radiation heat flux, which has exponentially varying azimuthal and axial fluid velocities, is investigated. The shock wave is driven out by a piston moving with time according to an exponential law. The dusty gas is taken to be a mixture of a non-ideal gas and small solid particles. The density of the ambient medium is assumed to be constant. The equilibrium flow conditions are maintained, and the energy, which is continuously supplied by the piston, varies exponentially. The heat conduction is expressed in terms of Fourier's law, and the radiation is assumed to be of the diffusion type for an optically thick grey gas model. The thermal conductivity and the absorption coefficient are assumed to vary with temperature and density according to power laws. The effects of the variation of the heat transfer parameters, the gravitation parameter and the dusty gas parameters on the shock strength, the distance between the piston and the shock front, and the flow variables are studied in detail. It is interesting to note that the similarity solution exists under constant initial angular velocity, and that the shock strength is independent of self-gravitation, heat conduction and radiation heat flux.
NASA Astrophysics Data System (ADS)
Gao, K.; van Dommelen, J. A. W.; Göransson, P.; Geers, M. G. D.
2015-09-01
In this paper, a homogenization method is proposed to obtain the parameters of Biot's poroelastic theory from a multiscale perspective. It is assumed that the behavior of a macroscopic material point can be captured through the response of a microscopic Representative Volume Element (RVE) consisting of both a solid skeleton and a gaseous fluid. The macroscopic governing equations are assumed to be Biot's poroelastic equations and the RVE is governed by the conservation of linear momentum and the adopted linear constitutive laws under the isothermal condition. With boundary conditions relying on the macroscopic solid displacement and fluid pressure, the homogenized solid stress and fluid displacement are obtained based on energy consistency. This homogenization framework offers an approach to obtain Biot's parameters directly through the response of the RVE in the regime of Darcy's flow where the pressure gradient is dominating. A numerical experiment is performed in the form of a sound absorption test on a porous material with an idealized partially open microstructure that is described by Biot's equations where the parameters are obtained through the proposed homogenization approach. The result is evaluated by comparison with Direct Numerical Simulations (DNS), showing a superior performance of this approach compared to an alternative semi-phenomenological model for estimating Biot's parameters of the studied porous material.
NASA Astrophysics Data System (ADS)
Xia, Jun-Qing; Yu, Hai; Wang, Guo-Jian; Tian, Shu-Xun; Li, Zheng-Xiang; Cao, Shuo; Zhu, Zong-Hong
2017-01-01
In this paper, we use a recently compiled data set, which comprises 118 galactic-scale strong gravitational lensing (SGL) systems, to constrain the statistical properties of the SGL systems as well as the curvature of the universe without assuming any fiducial cosmological model. Based on the singular isothermal ellipsoid (SIE) model of the SGL system, we obtain that the constrained curvature parameter Ω_k is close to zero from the SGL data, which is consistent with the latest result of the Planck measurement. More interestingly, we find that the parameter f in the SIE model is strongly correlated with the curvature Ω_k. Neglecting this correlation in the analysis will significantly overestimate the constraining power of SGL data on the curvature. Furthermore, the obtained constraint on f is different from previous results: f = 1.105 ± 0.030 (68% confidence level [C.L.]), which means that the standard singular isothermal sphere (SIS) model (f = 1) is disfavored by the current SGL data at more than a 3σ C.L. We also divide all of the SGL data into two parts according to the centric stellar velocity dispersion σ_c and find that the larger the value of σ_c for the subsample, the more favored the standard SIS model is. Finally, we extend the SIE model by assuming power-law density profiles for the total mass density, ρ = ρ_0 (r/r_0)^{-α}, and luminosity density, ν = ν_0 (r/r_0)^{-δ}, and obtain the constraints on the power-law indices: α = 1.95 ± 0.04 and δ = 2.40 ± 0.13 at a 68% C.L. When assuming the power-law index α = δ = γ, this scenario is totally disfavored by the current SGL data, χ²_min,γ − χ²_min,SIE ≃ 53.
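For background on how Ω_k enters such analyses: in an FRW universe the dimensionless comoving distances obey the sum rule d_ls = d_s sqrt(1 + Ω_k d_l²) − d_l sqrt(1 + Ω_k d_s²), and SGL systems constrain the ratio D_ls/D_s. A minimal sketch of this relation follows (the distances are placeholders, not values from the compiled data set):

import numpy as np

def d_ls(d_l, d_s, omega_k):
    """Dimensionless comoving lens-source distance from the FRW distance sum rule."""
    return d_s * np.sqrt(1 + omega_k * d_l**2) - d_l * np.sqrt(1 + omega_k * d_s**2)

# Placeholder dimensionless distances d = H0 D / c (hypothetical lens and source).
d_l, d_s = 0.35, 0.80
for ok in (-0.2, 0.0, 0.2):
    print(f"Omega_k = {ok:+.1f}: D_ls/D_s = {d_ls(d_l, d_s, ok) / d_s:.4f}")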
Interactive Reliability Model for Whisker-toughened Ceramics
NASA Technical Reports Server (NTRS)
Palko, Joseph L.
1993-01-01
Wider use of ceramic matrix composites (CMC) will require the development of advanced structural analysis technologies. The use of an interactive model to predict the time-independent reliability of a component subjected to multiaxial loads is discussed. The deterministic, three-parameter Willam-Warnke failure criterion serves as the theoretical basis for the reliability model. The strength parameters defining the model are assumed to be random variables, thereby transforming the deterministic failure criterion into a probabilistic criterion. The ability of the model to account for multiaxial stress states with the same unified theory is an improvement over existing models. The new model was coupled with a public-domain finite element program through an integrated design program. This allows a design engineer to predict the probability of failure of a component. A simple structural problem is analyzed using the new model, and the results are compared to existing models.
Estimation of Snow Parameters from Dual-Wavelength Airborne Radar
NASA Technical Reports Server (NTRS)
Liao, Liang; Meneghini, Robert; Iguchi, Toshio; Detwiler, Andrew
1997-01-01
Estimation of snow characteristics from airborne radar measurements would complement in situ measurements. While in situ data provide more detailed information than radar, they are limited in their space-time sampling. In the absence of significant cloud water contents, dual-wavelength radar data can be used to estimate two parameters of a drop size distribution if the snow density is assumed. To estimate, rather than assume, a snow density is difficult, however, and represents a major limitation in the radar retrieval. There are a number of ways that this problem can be investigated: direct comparisons with in situ measurements, examination of the large-scale characteristics of the retrievals and their comparison to cloud model outputs, use of LDR measurements, and comparisons to the theoretical results of Passarelli (1978) and others. In this paper we address the first approach and, in part, the second.
A water-vapor radiometer error model. [for ionosphere in geodetic microwave techniques
NASA Technical Reports Server (NTRS)
Beckman, B.
1985-01-01
The water-vapor radiometer (WVR) is used to calibrate unpredictable delays in the wet component of the troposphere in geodetic microwave techniques such as very-long-baseline interferometry (VLBI) and Global Positioning System (GPS) tracking. Based on experience with Jet Propulsion Laboratory (JPL) instruments, the current level of accuracy in wet-troposphere calibration limits the accuracy of local vertical measurements to 5-10 cm. The goal for the near future is 1-3 cm. Although the WVR is currently the best calibration method, many instruments are prone to systematic error. In this paper, a treatment of WVR data is proposed and evaluated. This treatment reduces the effect of WVR systematic errors by estimating parameters that specify an assumed functional form for the error. The assumed form of the treatment is evaluated by comparing the results of two similar WVR's operating near each other. Finally, the observability of the error parameters is estimated by covariance analysis.
Non-ignorable missingness item response theory models for choice effects in examinee-selected items.
Liu, Chen-Wei; Wang, Wen-Chung
2017-11-01
Examinee-selected item (ESI) design, in which examinees are required to respond to a fixed number of items in a given set, always yields incomplete data (i.e., when only the selected items are answered, data are missing for the others) that are likely non-ignorable in likelihood inference. Standard item response theory (IRT) models become infeasible when ESI data are missing not at random (MNAR). To solve this problem, the authors propose a two-dimensional IRT model that posits one unidimensional IRT model for observed data and another for nominal selection patterns. The two latent variables are assumed to follow a bivariate normal distribution. In this study, the mirt freeware package was adopted to estimate parameters. The authors conduct an experiment to demonstrate that ESI data are often non-ignorable and to determine how to apply the new model to the data collected. Two follow-up simulation studies are conducted to assess the parameter recovery of the new model and the consequences for parameter estimation of ignoring MNAR data. The results of the two simulation studies indicate good parameter recovery of the new model and poor parameter recovery when non-ignorable missing data were mistakenly treated as ignorable. © 2017 The British Psychological Society.
Cellular signaling identifiability analysis: a case study.
Roper, Ryan T; Pia Saccomani, Maria; Vicini, Paolo
2010-05-21
Two primary purposes for mathematical modeling in cell biology are (1) simulation for making predictions of experimental outcomes and (2) parameter estimation for drawing inferences from experimental data about unobserved aspects of biological systems. While the former purpose has become common in the biological sciences, the latter is less common, particularly when studying cellular and subcellular phenomena such as signaling, the focus of the current study. Data are difficult to obtain at this level. Therefore, even models of only modest complexity can contain parameters for which the available data are insufficient for estimation. In the present study, we use a set of published cellular signaling models to address issues related to global parameter identifiability. That is, we address the following question: assuming known time courses for some model variables, which parameters is it theoretically impossible to estimate, even with continuous, noise-free data? Following an introduction to this problem and its relevance, we perform a full identifiability analysis on a set of cellular signaling models using DAISY (Differential Algebra for the Identifiability of SYstems). We use our analysis to bring to light important issues related to parameter identifiability in ordinary differential equation (ODE) models. We contend that this is, as of yet, an under-appreciated issue in biological modeling and, more particularly, cell biology. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
A Regionalization Approach to select the final watershed parameter set among the Pareto solutions
NASA Astrophysics Data System (ADS)
Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.
2017-12-01
The calibration of hydrological models often results in model parameters that are inconsistent with those from neighboring basins. Considering that physical similarity exists within neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrological Model (RDHM), using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships incorporated as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude some of the parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity of a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins. The regionalization approach chooses the Pareto parameter sets that minimize the closeness measure of the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
Mapping the parameter space of a T2-dependent model of water diffusion MR in brain tissue.
Hansen, Brian; Vestergaard-Poulsen, Peter
2006-10-01
We present a new model for describing the diffusion-weighted (DW) proton nuclear magnetic resonance signal obtained from normal grey matter. Our model is analytical and, in some respects, is an extension of earlier model schemes. We model tissue as composed of three separate compartments with individual properties of diffusion and transverse relaxation. Our study assumes slow exchange between compartments. We attempt to take cell morphology into account, along with its effect on water diffusion in tissues. Using this model, we simulate diffusion-sensitive MR signals and compare model output to experimental data from human grey matter. In doing this comparison, we perform a global search for good fits in the parameter space of the model. The characteristic nonmonoexponential behavior of the signal as a function of experimental b value is reproduced quite well, along with established values for tissue-specific parameters such as volume fraction, tortuosity and apparent diffusion coefficient. We believe that the presented approach to modeling diffusion in grey matter adds new aspects to the treatment of a longstanding problem.
Dettmer, Jan; Dosso, Stan E
2012-10-01
This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing the model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.
Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C
2016-12-01
With the increasing popularity of optimal design in drug development, it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs, based on simulations/estimations, was investigated by computing the bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block-diagonal FIM optimal designs when assuming true parameter values. However, the FO-approximated block-diagonal FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
Langlois, C; Simon, L; Lécuyer, Ch
2003-12-01
A time-dependent box model is developed to calculate oxygen isotope compositions of bone phosphate as a function of environmental and physiological parameters. Input and output oxygen fluxes related to body water and bone reservoirs are scaled to the body mass. The oxygen fluxes are evaluated by stoichiometric scaling to the calcium accretion and resorption rates, assuming a pure hydroxylapatite composition for the bone and tooth mineral. The model shows how the diet composition, body mass, ambient relative humidity and temperature may control the oxygen isotope composition of bone phosphate. The model also computes how bones and teeth record short-term variations in relative humidity, air temperature and δ18O of drinking water, depending on body mass. The documented diversity of oxygen isotope fractionation equations for vertebrates is accounted for by our model when, for each specimen, the physiological and diet parameters are adjusted within the living range of environmental conditions.
Langmuir probe analysis in electronegative plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bredin, Jerome, E-mail: jerome.bredin@lpp.polytechnique.fr; Chabert, Pascal; Aanesland, Ane
2014-12-15
This paper compares two methods to analyze Langmuir probe data obtained in electronegative plasmas. The techniques are developed to allow investigations in plasmas where the electronegativity α_0 = n_−/n_e (the ratio between the negative ion and electron densities) varies strongly. The first technique uses an analytical model to express the Langmuir probe current-voltage (I-V) characteristic and its second derivative as a function of the electron and ion densities (n_e, n_+, n_−), temperatures (T_e, T_+, T_−), and masses (m_e, m_+, m_−). The analytical curves are fitted to the experimental data by adjusting these variables and parameters. To reduce the number of fitted parameters, the ion masses are assumed constant within the source volume, and quasi-neutrality is assumed everywhere. In this theory, Maxwellian distributions are assumed for all charged species. We show that this data analysis can predict the various plasma parameters within 5-10%, including the ion temperatures when α_0 > 100. However, the method is tedious, time consuming, and requires a precise measurement of the energy distribution function. A second technique is therefore developed for easier access to the electron and ion densities, but it does not give access to the ion temperatures. Here, only the measured I-V characteristic is needed. The electron density, temperature, and ion saturation current for positive ions are determined by classical probe techniques. The electronegativity α_0 and the ion densities are deduced via an iterative method, since these variables are coupled via the modified Bohm velocity. For both techniques, a Child-Law sheath model for cylindrical probes has been developed and is presented to emphasize the importance of this model for small cylindrical Langmuir probes.
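The iterative step of the second technique can be pictured as a fixed-point loop: the measured positive-ion saturation current fixes the product n_+ u_B(α_0), while the Bohm velocity itself depends on α_0. The sketch below illustrates such a loop using the commonly quoted electronegative Bohm velocity u_B = sqrt((kT_e/m_+)(1 + α)/(1 + αγ)), with γ = T_e/T_−; the specific expression and sheath factors used in the paper may differ, and all numbers here are hypothetical.

import numpy as np

e, kTe_J = 1.6e-19, 3.0 * 1.6e-19   # electron charge; electron temperature 3 eV (hypothetical)
m_ion = 32 * 1.67e-27               # O2+ ion mass, kg (hypothetical)
gamma = 15.0                        # Te/T- ratio (hypothetical)
A_probe = 1e-6                      # effective collection area, m^2 (hypothetical)
I_sat = 40e-6                       # measured positive-ion saturation current, A (hypothetical)
n_e = 1.0e16                        # electron density from classical probe analysis, m^-3

def u_bohm(alpha):
    # A common textbook form of the modified Bohm velocity in electronegative plasmas.
    return np.sqrt((kTe_J / m_ion) * (1 + alpha) / (1 + alpha * gamma))

alpha, n_pos = 0.0, n_e
for _ in range(50):                 # fixed-point iteration; converges quickly here
    n_pos = I_sat / (0.61 * e * A_probe * u_bohm(alpha))   # 0.61: assumed sheath-edge factor
    alpha = max(n_pos / n_e - 1.0, 0.0)                    # quasi-neutrality: n+ = ne + n-
print(f"alpha0 = {alpha:.2f}, n+ = {n_pos:.2e} m^-3")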
Clare, John; McKinney, Shawn T; DePue, John E; Loftin, Cynthia S
2017-10-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters. © 2017 by the Ecological Society of America.
The reconstruction of tachyon inflationary potentials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fei, Qin; Gong, Yungui; Lin, Jiong
We derive a lower bound on the field excursion for the tachyon inflation, which is determined by the amplitude of the scalar perturbation and the number of e-folds before the end of inflation. Using the relation between observables like n_s and r and the slow-roll parameters, we reconstruct three classes of tachyon potentials. The model parameters are determined from the observations before the potentials are reconstructed, and the observations prefer the concave potential. We also discuss the constraints from the reheating phase preceding the radiation domination for the three classes of models by assuming that the equation of state parameter w_re during reheating is a constant. Depending on the model parameters and the value of w_re, the constraints on N_re and T_re are different. As n_s increases, the allowed reheating epoch becomes longer for w_re = −1/3, 0 and 1/6, while the allowed reheating epoch becomes shorter for w_re = 2/3.
Aftershock Energy Distribution by Statistical Mechanics Approach
NASA Astrophysics Data System (ADS)
Daminelli, R.; Marcellini, A.
2015-12-01
The aim of our work is to find the most probable distribution of the energy of aftershocks. We started by applying one of the fundamental principles of statistical mechanics, which, in the case of aftershock sequences, can be expressed as: the greater the number of different ways in which the energy of aftershocks can be arranged among the energy cells in phase space, the more probable the distribution. We assume that each cell in phase space has the same possibility of being occupied, and that more than one cell in the phase space can have the same energy. Since seismic energy is proportional to products of different parameters, a number of different combinations of parameters can produce the same energy (e.g., different combinations of stress drop and fault area can release the same seismic energy). Let us assume that there are g_i cells in the aftershock phase space characterised by the same released energy ε_i. We can therefore assume that Maxwell-Boltzmann statistics can be applied to aftershock sequences, with the proviso that the judgment on the validity of this hypothesis is the agreement with the data. The aftershock energy distribution can then be written as follows: n(ε) = A g(ε) exp(−βε), where n(ε) is the number of aftershocks with energy ε, and A and β are constants. Under the above hypothesis, we can assume g(ε) is proportional to ε. We selected and analysed different aftershock sequences (data extracted from the earthquake catalogs of SCEC, INGV-CNT and other institutions) with a minimum retained magnitude ML = 2 (in some cases ML = 2.6) and a time window of 35 days. The results of our model are in agreement with the data, except in the very low energy band, where our model results in a moderate overestimation.
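As a hedged illustration of fitting this distribution: with g(ε) ∝ ε, the model n(ε) = A ε exp(−βε) can be fitted to a binned aftershock-energy histogram by least squares, as sketched below on synthetic data (the catalogs named above are not reproduced here, and all parameter values are illustrative).

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
# Synthetic energies drawn from a Gamma(shape=2) law, whose density ~ eps*exp(-beta*eps).
beta_true = 2.5e-3
energies = rng.gamma(shape=2.0, scale=1.0 / beta_true, size=5000)

counts, edges = np.histogram(energies, bins=40)
centers = 0.5 * (edges[:-1] + edges[1:])

def model(eps, A, beta):
    return A * eps * np.exp(-beta * eps)   # n(eps) = A g(eps) exp(-beta eps), g ~ eps

mask = counts > 0
(A_fit, beta_fit), _ = curve_fit(model, centers[mask], counts[mask],
                                 p0=(counts.max(), 1.0 / centers.mean()))
print(f"fitted beta = {beta_fit:.2e} (true {beta_true:.2e})")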
Performance analysis of wideband data and television channels. [space shuttle communications
NASA Technical Reports Server (NTRS)
Geist, J. M.
1975-01-01
Several aspects of space shuttle communications are discussed, including the return link (shuttle-to-ground) relayed through a satellite repeater (TDRS). The repeater exhibits nonlinear amplification and an amplitude-dependent phase shift. Models were developed for various link configurations, and computer simulation programs based on these models are described. Certain analytical results on system performance were also obtained. For the system parameters assumed, the results indicate approximately 1 dB degradation relative to a link employing a linear repeater. While this degradation is dependent upon the repeater, filter bandwidths, and modulation parameters used, the programs can accommodate changes to any of these quantities. Thus the programs can be applied to determine the performance with any given set of parameters, or used as an aid in link design.
Accounting for measurement error in log regression models with applications to accelerated testing.
Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M
2018-01-01
In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when it is actually additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches for modeling accelerated testing data with both simulations and real data.
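As a schematic of the estimation step, iteratively re-weighted least squares alternates a weighted linear solve with a weight update. The minimal sketch below assumes a generic weight function of the residuals and a hypothetical design matrix; it is not the specific weighting derived from the reduced Eyring model in the paper.

import numpy as np

def irls(X, y, weight_fn, n_iter=25, tol=1e-8):
    """Iteratively Re-weighted Least Squares for a linear model y ~ X beta.
    weight_fn maps residuals to nonnegative weights (model-specific)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # ordinary least squares start
    for _ in range(n_iter):
        w = weight_fn(y - X @ beta)
        WX = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ WX, WX.T @ y)   # weighted normal equations
        if np.max(np.abs(beta_new - beta)) < tol:
            break
        beta = beta_new
    return beta

# Toy usage with noise variance increasing in the regressor (a stand-in for the paper's weights).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.uniform(0, 1, 200)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.1, 200) * (1 + X[:, 1])
print(irls(X, y, lambda r: 1.0 / (1e-3 + r**2)))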
Modeling interface roughness scattering in a layered seabed for normal-incident chirp sonar signals.
Tang, Dajun; Hefner, Brian T
2012-04-01
Downward-looking sonar, such as the chirp sonar, is widely used as a sediment survey tool in shallow water environments. Inversion of geo-acoustic parameters from such sonar data requires the availability of forward models. An exact numerical model is developed to initiate the simulation of the acoustic field produced by such a sonar in the presence of multiple rough interfaces. The sediment layers are assumed to be fluid layers with non-intersecting rough interfaces.
Optimal distribution of science funding
NASA Astrophysics Data System (ADS)
Huang, Ding-wei
2018-07-01
We propose a new model to investigate the theoretical implications of a novel funding system. We introduce new parameters to model the accumulated advantage. We assume that all scientists are equal and follow the same regulations. The model presents three distinct regimes. In regime (I), the fluidity of funding is significant. The funding distribution is continuous. The concentration of funding is effectively suppressed. In both regimes (II) and (III), a small group of scientists emerges as a circle of elites. Large funding is acquired by a small number of scientists.
NASA Astrophysics Data System (ADS)
Singh, S. Surendra
2018-05-01
Considering the locally rotationally symmetric (LRS) Bianchi type-I metric with cosmological constant Λ, Einstein's field equations are discussed in the background of an anisotropic fluid. We assume the condition A = B^{1/m} for the metric potentials A and B, where m is a positive constant, to obtain a viable model of the Universe. It is found that Λ(t) is positive and inversely proportional to time. The values of the matter-energy density Ωm, dark energy density ΩΛ and deceleration parameter q are found to be consistent with the WMAP observations. Statefinder parameters and the anisotropic deviation parameter are also investigated. It is also observed that the derived model is an accelerating, shearing and non-rotating Universe. Some of the asymptotic and geometrical behaviors of the derived models are investigated together with the age of the Universe.
Liao, Yuxi; Li, Hongbao; Zhang, Qiaosheng; Fan, Gong; Wang, Yiwen; Zheng, Xiaoxiang
2014-01-01
Decoding algorithms in motor Brain-Machine Interfaces translate neural signals into movement parameters. They usually assume the connection between neural firings and movements to be stationary, which is not true according to recent studies that observe time-varying neuron tuning properties. This property results from neural plasticity, motor learning, etc., and leads to degeneration of the decoding performance when the model is fixed. To track the non-stationary neuron tuning during decoding, we propose a dual-model approach based on a Monte Carlo point process filtering method that also enables estimation of the dynamic tuning parameters. When applied to both simulated neural signals and in vivo BMI data, the proposed adaptive method performs better than the one with static tuning parameters, which suggests a promising way to design a long-term-performing model for Brain-Machine Interface decoders.
Tikekar superdense stars in electric fields
NASA Astrophysics Data System (ADS)
Komathiraj, K.; Maharaj, S. D.
2007-04-01
We present exact solutions to the Einstein-Maxwell system of equations with a specified form of the electric field intensity by assuming that the hypersurfaces t = constant are spheroidal. The solution of the Einstein-Maxwell system is reduced to a recurrence relation with variable rational coefficients which can be solved in general using mathematical induction. New classes of solutions of linearly independent functions are obtained by restricting the spheroidal parameter K and the electric field intensity parameter α. Consequently, it is possible to find exact solutions in terms of elementary functions, namely, polynomials and algebraic functions. Our result contains previously found models, including the superdense Tikekar neutron star model [J. Math. Phys. 31, 2454 (1990)] when K = −7 and α = 0. Our class of charged spheroidal models generalizes the uncharged isotropic Maharaj and Leach solutions [J. Math. Phys. 37, 430 (1996)]. In particular, we find an explicit relationship directly relating the spheroidal parameter K to the electromagnetic field.
Global sensitivity analysis of groundwater transport
NASA Astrophysics Data System (ADS)
Cvetkovic, V.; Soltani, S.; Vigouroux, G.
2015-12-01
In this work we address the model and parametric sensitivity of groundwater transport using the Lagrangian-Stochastic Advection-Reaction (LaSAR) methodology. The 'attenuation index' is used as a relevant and convenient measure of the coupled transport mechanisms. The coefficients of variation (CV) for seven uncertain parameters are assumed to be between 0.25 and 3.5, the highest value being for the lower bound of the mass transfer coefficient k0. In almost all cases, the uncertainties in the macro-dispersion (CV = 0.35) and in the mass transfer rate k0 (CV = 3.5) are most significant. The global sensitivity analyses using Sobol and derivative-based indices yield consistent rankings of the significance of different models and/or parameter ranges. The results presented here are generic; however, the proposed methodology can be easily adapted to specific conditions where uncertainty ranges in models and/or parameters can be estimated from field and/or laboratory measurements.
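For readers unfamiliar with the index, a minimal Monte Carlo estimate of first-order Sobol indices (a Saltelli-style pick-freeze estimator) on a toy function is sketched below; the LaSAR attenuation-index model itself is not reproduced, and the toy function and sample sizes are illustrative only.

import numpy as np

def sobol_first_order(f, n_vars, n=100_000, rng=None):
    """First-order Sobol indices via the Saltelli pick-freeze estimator."""
    rng = rng or np.random.default_rng(0)
    A = rng.uniform(size=(n, n_vars))
    B = rng.uniform(size=(n, n_vars))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(n_vars)
    for i in range(n_vars):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # "pick" column i from B, "freeze" the rest
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Toy additive model with unequal weights; exact indices are (9, 4, 1)/14.
w = np.array([3.0, 2.0, 1.0])
print(sobol_first_order(lambda X: X @ w, 3))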
3-D transient hydraulic tomography in unconfined aquifers with fast drainage response
NASA Astrophysics Data System (ADS)
Cardiff, M.; Barrash, W.
2011-12-01
We investigate, through numerical experiments, the viability of three-dimensional transient hydraulic tomography (3DTHT) for identifying the spatial distribution of groundwater flow parameters (primarily, hydraulic conductivity K) in permeable, unconfined aquifers. To invert the large amount of transient data collected from 3DTHT surveys, we utilize an iterative geostatistical inversion strategy in which outer iterations progressively increase the number of data points fitted and inner iterations solve the quasi-linear geostatistical formulas of Kitanidis. In order to base our numerical experiments around realistic scenarios, we utilize pumping rates, geometries, and test lengths similar to those attainable during 3DTHT field campaigns performed at the Boise Hydrogeophysical Research Site (BHRS). We also utilize hydrologic parameters that are similar to those observed at the BHRS and in other unconsolidated, unconfined fluvial aquifers. In addition to estimating K, we test the ability of 3DTHT to estimate both average storage values (specific storage Ss and specific yield Sy) as well as spatial variability in storage coefficients. The effects of model conceptualization errors during unconfined 3DTHT are investigated including: (1) assuming constant storage coefficients during inversion and (2) assuming stationary geostatistical parameter variability. Overall, our findings indicate that estimation of K is slightly degraded if storage parameters must be jointly estimated, but that this effect is quite small compared with the degradation of estimates due to violation of "structural" geostatistical assumptions. Practically, we find for our scenarios that assuming constant storage values during inversion does not appear to have a significant effect on K estimates or uncertainty bounds.
Photospheres of hot stars. IV - Spectral type O4
NASA Technical Reports Server (NTRS)
Bohannan, Bruce; Abbott, David C.; Voels, Stephen A.; Hummer, David G.
1990-01-01
The basic stellar parameters of a supergiant (Zeta Pup) and two main-sequence stars, 9 Sgr and HD 46223, at spectral class O4 are determined using line profile analysis. The stellar parameters are determined by comparing high signal-to-noise hydrogen and helium line profiles with those from stellar atmosphere models which include the effect of radiation scattered back onto the photosphere from an overlying stellar wind, an effect referred to as wind blanketing. At spectral class O4, the inclusion of wind-blanketing in the model atmosphere reduces the effective temperature by an average of 10 percent. This shift in effective temperature is also reflected by shifts in several other stellar parameters relative to previous O4 spectral-type calibrations. It is also shown through the analysis of the two O4 V stars that scatter in spectral type calibrations is introduced by assuming that the observed line profile reflects the photospheric stellar parameters.
NASA Astrophysics Data System (ADS)
Egozcue, J. J.; Pawlowsky-Glahn, V.; Ortego, M. I.
2005-03-01
Standard practice of wave-height hazard analysis often pays little attention to the uncertainty of assessed return periods and occurrence probabilities. This fact favors the opinion that, when large events happen, the hazard assessment should change accordingly. However, the uncertainty of the hazard estimates is normally able to hide the effect of those large events. This is illustrated using data from the Mediterranean coast of Spain, where the last few years have been extremely disastrous. It is thus possible to compare the hazard assessment based on data prior to those years with the analysis including them. With our approach, no significant change is detected when the statistical uncertainty is taken into account. The hazard analysis is carried out with a standard model. Time-occurrence of events is assumed to be Poisson distributed. The wave height of each event is modelled as a random variable whose upper tail follows a Generalized Pareto Distribution (GPD). Moreover, wave heights are assumed independent from event to event and also independent of their occurrence in time. A threshold for excesses is assessed empirically. The other three parameters (Poisson rate, shape and scale parameters of the GPD) are jointly estimated using Bayes' theorem. The prior distribution accounts for physical features of ocean waves in the Mediterranean sea and experience with these phenomena. The posterior distribution of the parameters allows one to obtain posterior distributions of other derived parameters like occurrence probabilities and return periods. Predictive distributions are also available. Computations are carried out using the program BGPE v2.0.
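Given point estimates of the Poisson rate λ and the GPD scale and shape (σ, ξ) above a threshold u, the T-year return level follows from setting the annual exceedance rate λ[1 + ξ(z − u)/σ]^{−1/ξ} equal to 1/T, giving z_T = u + (σ/ξ)[(λT)^ξ − 1]. The sketch below evaluates this relation with purely illustrative parameter values (and ignores the posterior uncertainty that is the paper's main point):

import numpy as np

def return_level(T, lam, u, sigma, xi):
    """T-year return level for a Poisson(lam)/GPD(u, sigma, xi) exceedance model."""
    if abs(xi) < 1e-9:                     # exponential-tail limit as xi -> 0
        return u + sigma * np.log(lam * T)
    return u + (sigma / xi) * ((lam * T) ** xi - 1.0)

# Illustrative values only (not fitted to the Mediterranean data).
lam, u, sigma, xi = 2.0, 3.0, 0.8, 0.1    # events/yr, threshold (m), scale (m), shape
for T in (10, 50, 100):
    print(f"{T:4d}-yr wave height ~ {return_level(T, lam, u, sigma, xi):.2f} m")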
NASA Astrophysics Data System (ADS)
Vallier, Bérénice; Magnenet, Vincent; Fond, Christophe; Schmittbuhl, Jean
2017-04-01
Many numerical models have been developed in deep geothermal reservoir engineering to interpret field measurements of the natural hydro-thermal circulations or to predict exploitation scenarios. They typically aim at analyzing the Thermo-Hydro-Mechanical and Chemical (THMC) coupling, including complex rheologies of the rock matrix like thermo-poro-elasticity. Few approaches address in detail the role of the fluid rheology and, more specifically, the non-linear sensitivity of the brine rheology to temperature and pressure. Here we use the finite element Code_Aster to solve the balance equations of a 2D THM model of the Soultz-sous-Forêts reservoir. The brine properties are assumed to depend on the fluid pressure and the temperature as in Magnenet et al. (2014). A sensitive parameter is the thermal dilatation of the brine, which is assumed to depend quadratically on temperature as proposed by the experimental measurements of Rowe and Chou (1970). The rock matrix is homogenized at the scale of the equation resolution, assuming a representative elementary volume of the fractured medium smaller than the mesh size. We chose four main geological units for adjusting the rock physics parameters at large scale: thermal conductivity, permeability, radioactive source production rate, and elastic and Biot parameters. We obtain a three-layer solution with a large hydro-thermal convection below the cover-basement transition. Interestingly, the geothermal gradient in the sedimentary layer is controlled by the radioactive production rate in the upper altered granite. The second part of the study deals with an inversion approach for the homogenized solid and fluid parameters at large scale using our direct THM model. The goal is to compare the large-scale inverted estimates of the rock and brine properties with direct laboratory measurements on cores and to discuss their upscaling in the context of a hydraulically active fracture network. Magnenet V., Fond C., Genter A. and Schmittbuhl J.: Two-dimensional THM modelling of the large-scale natural hydrothermal circulation at Soultz-sous-Forêts, Geothermal Energy (2014), 2, 1-17. Rowe A.M. and Chou J.C.S.: Pressure-volume-temperature-concentration relation of aqueous NaCl solutions, J. Chem. Eng. Data (1970), 15, 61-66.
A parametric model and estimation techniques for the inharmonicity and tuning of the piano.
Rigaud, François; David, Bertrand; Daudet, Laurent
2013-05-01
Inharmonicity of piano tones is an essential property of their timbre that strongly influences the tuning, leading to the so-called octave stretching. It is proposed in this paper to jointly model the inharmonicity and tuning of pianos over the whole compass. While using a small number of parameters, these models are able to reflect both the specificities of instrument design and the tuner's practice. An estimation algorithm is derived that can run either on a set of isolated note recordings or on chord recordings, assuming that the played notes are known. It is applied to extract parameters highlighting some tuners' choices on different piano types and to propose tuning curves for out-of-tune pianos or piano synthesizers.
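The stiff-string partial-frequency law f_n = n f_0 sqrt(1 + B n²), with B the inharmonicity coefficient, underlies models of this kind; the sketch below shows how a nonzero B stretches the partials and hence the octaves. The B value is hypothetical, and the paper's whole-compass parametrization of B and f_0 is not reproduced here.

import numpy as np

def partial_freqs(f0, B, n_partials=8):
    """Stiff-string partials f_n = n * f0 * sqrt(1 + B * n^2)."""
    n = np.arange(1, n_partials + 1)
    return n * f0 * np.sqrt(1.0 + B * n**2)

f0, B = 261.63, 4e-4        # middle C, hypothetical inharmonicity coefficient
f = partial_freqs(f0, B)
# Octave stretching: a tuner matching C5's fundamental to the 2nd partial of C4
# sets C5 sharp of an exact octave (2*f0) by this many cents:
cents = 1200 * np.log2(f[1] / (2 * f0))
print(f"2nd partial sits {cents:.2f} cents above an exact octave")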
Generic instabilities in a fluid membrane coupled to a thin layer of ordered active polar fluid.
Sarkar, Niladri; Basu, Abhik
2013-08-01
We develop an effective two-dimensional coarse-grained description for the coupled system of a planar fluid membrane anchored to a thin layer of polar ordered active fluid below. The macroscopic orientation of the active fluid layer is assumed to be perpendicular to the attached membrane. We demonstrate that activity or nonequilibrium drive of the active fluid makes such a system generically linearly unstable for either signature of a model parameter that characterises the strength of activity. Depending upon boundary conditions and within a range of the model parameters, underdamped propagating waves may be present in our model. We discuss the phenomenological significance of our results.
Crossing the phantom divide with dissipative normal matter in the Israel-Stewart formalism
NASA Astrophysics Data System (ADS)
Cruz, Norman; Lepe, Samuel
2017-04-01
A phantom solution in the framework of the causal Israel-Stewart (IS) formalism is discussed. We assume a late-time behavior of the cosmic evolution by considering only one dominant matter fluid with viscosity. The model assumes a bulk viscosity of the form ξ = ξ_0 ρ^{1/2}, where ρ is the energy density of the fluid. We evaluate and discuss the behavior of the thermodynamical parameters associated with this solution, such as the temperature, entropy production rate, entropy, relaxation time, effective pressure and effective EoS. A discussion of the near-equilibrium assumption of the formalism and the accelerated expansion of the solution is presented. The solution allows crossing of the phantom divide without invoking an exotic matter fluid, and the effective EoS parameter is always less than −1 and time independent. A future singularity (big rip) occurs, but it differs from the Type I (big rip) solution classified in S. Nojiri, S.D. Odintsov and S. Tsujikawa (2005) [2] if we consider other thermodynamic parameters such as, for example, the effective pressure in the presence of viscosity or the relaxation time.
Seabed roughness parameters from joint backscatter and reflection inversion at the Malta Plateau.
Steininger, Gavin; Holland, Charles W; Dosso, Stan E; Dettmer, Jan
2013-09-01
This paper presents estimates of seabed roughness and geoacoustic parameters and uncertainties on the Malta Plateau, Mediterranean Sea, by joint Bayesian inversion of mono-static backscatter and spherical wave reflection-coefficient data. The data are modeled using homogeneous fluid sediment layers overlying an elastic basement. The scattering model assumes a randomly rough water-sediment interface with a von Karman roughness power spectrum. Scattering and reflection data are inverted simultaneously using a population of interacting Markov chains to sample roughness and geoacoustic parameters as well as residual error parameters. Trans-dimensional sampling is applied to treat the number of sediment layers and the order (zeroth or first) of an autoregressive error model (to represent potential residual correlation) as unknowns. Results are considered in terms of marginal posterior probability profiles and distributions, which quantify the effective data information content to resolve scattering/geoacoustic structure. Results indicate well-defined scattering (roughness) parameters in good agreement with existing measurements, and a multi-layer sediment profile over a high-speed (elastic) basement, consistent with independent knowledge of sand layers over limestone.
NASA Astrophysics Data System (ADS)
Hecksher, Tina; Olsen, Niels Boye; Dyre, Jeppe C.
2017-04-01
This paper presents data for supercooled squalane's frequency-dependent shear modulus covering frequencies from 10 mHz to 30 kHz and temperatures from 168 K to 190 K; measurements are also reported for the glass phase down to 146 K. The data reveal a strong mechanical beta process. A model is proposed for the shear response of the metastable equilibrium liquid phase of supercooled liquids. The model is an electrical equivalent-circuit characterized by additivity of the dynamic shear compliances of the alpha and beta processes. The nontrivial parts of the alpha and beta processes are each represented by a "Cole-Cole retardation element" defined as a series connection of a capacitor and a constant-phase element, resulting in the Cole-Cole compliance function well-known from dielectrics. The model, which assumes that the high-frequency decay of the alpha shear compliance loss varies with the angular frequency as ω^{-1/2}, has seven parameters. Assuming time-temperature superposition for the alpha and beta processes separately, the number of parameters varying with temperature is reduced to four. The model provides a better fit to the data than an equally parametrized Havriliak-Negami type model. From the temperature dependence of the best-fit model parameters, the following conclusions are drawn: (1) the alpha relaxation time conforms to the shoving model; (2) the beta relaxation loss-peak frequency is almost temperature independent; (3) the alpha compliance magnitude, which in the model equals the inverse of the instantaneous shear modulus, is only weakly temperature dependent; (4) the beta compliance magnitude decreases by a factor of three upon cooling in the temperature range studied. The final part of the paper briefly presents measurements of the dynamic adiabatic bulk modulus covering frequencies from 10 mHz to 10 kHz in the temperature range from 172 K to 200 K. The data are qualitatively similar to the shear modulus data by having a significant beta process. A single-order-parameter framework is suggested to rationalize these similarities.
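A hedged numerical sketch of the compliance-additive structure: each process contributes a Cole-Cole retardation compliance ΔJ/(1 + (iωτ)^a), with a = 1/2 for the alpha term to mimic the assumed ω^{-1/2} high-frequency decay, and the modulus follows as G*(ω) = 1/J(ω). All parameter values below are hypothetical, not the fitted ones from the paper.

import numpy as np

def cole_cole_J(omega, dJ, tau, a):
    """Cole-Cole retardation compliance of a single process."""
    return dJ / (1.0 + (1j * omega * tau) ** a)

# Hypothetical parameters (order-of-magnitude placeholders only).
Ginf, eta = 1e9, 1e7                 # instantaneous shear modulus (Pa), shear viscosity (Pa s)
omega = 2 * np.pi * np.logspace(-2, 4.5, 400)
J = (1.0 / Ginf
     + cole_cole_J(omega, 2e-9, 1.0, 0.5)      # alpha process: a = 1/2 gives omega^{-1/2} decay
     + cole_cole_J(omega, 5e-10, 1e-4, 0.4)    # beta process
     + 1.0 / (1j * omega * eta))               # steady-flow term
G = 1.0 / J                                    # shear modulus from compliance additivity
print(f"peak shear loss G'' = {G.imag.max():.2e} Pa")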
Gamal El-Dien, Omnia; Ratcliffe, Blaise; Klápště, Jaroslav; Porth, Ilga; Chen, Charles; El-Kassaby, Yousry A.
2016-01-01
The open-pollinated (OP) family testing combines the simplest known progeny evaluation and quantitative genetics analyses as candidates’ offspring are assumed to represent independent half-sib families. The accuracy of genetic parameter estimates is often questioned as the assumption of “half-sibling” in OP families may often be violated. We compared the pedigree- vs. marker-based genetic models by analysing 22-yr height and 30-yr wood density for 214 white spruce [Picea glauca (Moench) Voss] OP families represented by 1694 individuals growing on one site in Quebec, Canada. Assuming half-sibling, the pedigree-based model was limited to estimating the additive genetic variances which, in turn, were grossly overestimated as they were confounded by very minor dominance and major additive-by-additive epistatic genetic variances. In contrast, the implemented genomic pairwise realized relationship models allowed the disentanglement of additive from all nonadditive factors through genetic variance decomposition. The marker-based models produced more realistic narrow-sense heritability estimates and, for the first time, allowed estimating the dominance and epistatic genetic variances from OP testing. In addition, the genomic models showed better prediction accuracies compared to pedigree models and were able to predict individual breeding values for new individuals from untested families, which was not possible using the pedigree-based model. Clearly, the use of marker-based relationship approach is effective in estimating the quantitative genetic parameters of complex traits even under simple and shallow pedigree structure. PMID:26801647
Modeling and Control of Intelligent Flexible Structures
1994-03-26
can be approximated as a simply supported beam in transverse vibration. Assuming that the Euler-Bernoulli beam assumptions hold, linear equations of... The assumptions made during the derivation are that the element can be modeled as an Euler-Bernoulli beam, that the cross-section is symmetric, and... parameters and input matrices... The closed-loop system, equation (7), is stable when the... output gain matrices...
Ground Vehicle System Integration (GVSI) and Design Optimization Model.
1996-07-30
number of stowed kills; the same basic load lasts longer. Gun/ammo parameters impact system weight and under-armor volume requirements. Round volume... internal volume is reduced, the model assumes that the crew's ability to operate while under armor will be impaired. If the size of a vehicle crew is... changing swept volume will alter under-armor volume requirements for the total system; if system volume is fixed, changing swept volume will...
Skew-t partially linear mixed-effects models for AIDS clinical studies.
Lu, Tao
2016-01-01
We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by asymmetric distributions to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset, and comparisons with alternative models are performed.
LFsGRB: Binary neutron star merger rate via the luminosity function of short gamma-ray bursts
NASA Astrophysics Data System (ADS)
Paul, Debdutta
2018-04-01
LFsGRB models the luminosity function (LF) of short gamma-ray bursts (sGRBs) using the available catalog data of all sGRBs detected through 2017 October, estimating luminosities via pseudo-redshifts obtained from the Yonetoku correlation and then assuming a standard delay distribution between the cosmic star formation rate and the production rate of their progenitors. The data are fit well by both exponential-cutoff power-law and broken power-law models. Using the derived parameters of these models along with conservative values of the jet opening angles seen from afterglow observations, the true rate of short GRBs is derived. Assuming that a short GRB is produced by each binary neutron star merger (BNSM), the rate of gravitational wave (GW) detections from these mergers is derived for past, present and future configurations of the GW detector networks.
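The step from the observed to the true rate is a beaming correction: only a fraction 1 − cos θ_j of randomly oriented double-sided jets point toward us, so R_true = R_obs/(1 − cos θ_j). A one-line sketch follows, with illustrative θ_j and R_obs values rather than the catalog-derived ones:

import numpy as np

def beaming_factor(theta_j_deg):
    """Inverse of the solid-angle fraction covered by a double-sided jet."""
    return 1.0 / (1.0 - np.cos(np.radians(theta_j_deg)))

R_obs = 8.0                     # observed sGRB rate density, Gpc^-3 yr^-1 (illustrative)
for theta in (6.0, 10.0, 16.0): # assumed jet half-opening angles, degrees
    print(f"theta_j = {theta:4.1f} deg -> R_true ~ {R_obs * beaming_factor(theta):7.1f} Gpc^-3 yr^-1")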
Nakanishi, Allen S.; Lilly, Michael R.
1998-01-01
MODFLOW, a finite-difference model of ground-water flow, was used to simulate the flow of water between the aquifer and the Chena River at Fort Wainwright, Alaska. The model was calibrated by comparing simulated ground-water hydrographs to those recorded in wells during periods of fluctuating river levels. The best fit between simulated and observed hydrographs occurred for the following: 20 feet per day for vertical hydraulic conductivity, 400 feet per day for horizontal hydraulic conductivity, 1:20 for anisotropy (vertical to horizontal hydraulic conductivity), and 350 per foot for riverbed conductance. These values include a 30 percent adjustment for geometry effects. The estimated values for hydraulic conductivities of the alluvium are based on assumed values of 0.25 for specific yield and 0.000001 per foot for specific storage of the alluvium; the values assumed for bedrock are 0.1 foot per day for horizontal hydraulic conductivity, 0.005 foot per day for vertical hydraulic conductivity, and 0.0000001 per foot for specific storage. The resulting diffusivity for the alluvial aquifer is 1,600 feet per day. The estimated values of these hydraulic properties are nearly proportional to the assumed value of specific yield. These values were not found to be sensitive to the assumed values for bedrock. The hydrologic parameters estimated using the cross-sectional model are only valid when taken in context with the other values (both estimated and assumed) used in this study. The model simulates horizontal and vertical flow directions near the river during periods of varying river stage. This information is useful for interpreting bank-storage effects, including the flow of contaminants in the aquifer near the river.
Modeling NAPL dissolution from pendular rings in idealized porous media
NASA Astrophysics Data System (ADS)
Huang, Junqi; Christ, John A.; Goltz, Mark N.; Demond, Avery H.
2015-10-01
The dissolution rate of nonaqueous phase liquid (NAPL) often governs the remediation time frame at subsurface hazardous waste sites. Most formulations for estimating this rate are empirical and assume that the NAPL is the nonwetting fluid. However, field evidence suggests that some waste sites might be organic wet. Thus, formulations that assume the NAPL is nonwetting may be inappropriate for estimating the rates of NAPL dissolution. An exact solution to the Young-Laplace equation, assuming NAPL resides as pendular rings around the contact points of porous media idealized as spherical particles in a hexagonal close packing arrangement, is presented in this work to provide a theoretical prediction for NAPL-water interfacial area. This analytic expression for interfacial area is then coupled with an exact solution to the advection-diffusion equation in a capillary tube assuming Hagen-Poiseuille flow to provide a theoretical means of calculating the mass transfer rate coefficient for dissolution at the NAPL-water interface in an organic-wet system. A comparison of the predictions from this theoretical model with predictions from empirically derived formulations from the literature for water-wet systems showed a consistent range of values for the mass transfer rate coefficient, despite the significant differences in model foundations (water wetting versus NAPL wetting, theoretical versus empirical). This finding implies that, under these system conditions, the important parameter is interfacial area, with a lesser role played by NAPL configuration.
Recharge characteristics of an unconfined aquifer from the rainfall-water table relationship
NASA Astrophysics Data System (ADS)
Viswanathan, M. N.
1984-02-01
The determination of recharge levels of unconfined aquifers recharged entirely by rainfall is accomplished by developing a model for the aquifer that estimates the water-table levels from the history of rainfall observations and past water-table levels. In the present analysis, the model parameters that influence recharge were assumed not only to be time dependent but also to vary at different rates for the various parameters. Such a model is solved by use of a recursive least-squares method. The variable-rate parameter variation is incorporated using a random walk model. From the field tests conducted at Tomago Sandbeds, Newcastle, Australia, it was observed that the assumption of variable rates of time dependency of the recharge parameters produced better estimates of water-table levels than that with constant recharge parameters. It was observed that considerable recharge due to rainfall occurred on the same day as the rainfall; the increase in water-table level was insignificant on subsequent days. The level of recharge depends strongly on the intensity and history of rainfall. Isolated rainfalls, even of the order of 25 mm per day, had no significant effect on the water-table levels.
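The estimation scheme described can be pictured as recursive least squares in which the parameter covariance is inflated each step by a random-walk term, with a separate drift rate per parameter. Below is a minimal Python version under those assumptions; the regressor layout, drift variances and noise levels are hypothetical, not those fitted at Tomago Sandbeds.

import numpy as np

def rls_random_walk(Phi, y, q_diag, r=1.0):
    """Recursive least squares with per-parameter random-walk drift.
    Phi: (T, p) regressors (e.g. same-day rainfall, lagged water-table levels),
    q_diag: per-parameter random-walk variances (larger -> faster adaptation)."""
    T, p = Phi.shape
    theta = np.zeros(p)
    P = np.eye(p) * 1e3                    # diffuse initial uncertainty
    Q = np.diag(q_diag)
    est = np.empty((T, p))
    for t in range(T):
        P = P + Q                          # random-walk inflation (time update)
        phi = Phi[t]
        k = P @ phi / (r + phi @ P @ phi)  # gain
        theta = theta + k * (y[t] - phi @ theta)
        P = P - np.outer(k, phi @ P)
        est[t] = theta
    return est

# Toy usage: water table responds to same-day rainfall with a slowly drifting gain.
rng = np.random.default_rng(3)
T = 300
rain = rng.exponential(5.0, T)
gain = 0.02 + 0.01 * np.sin(np.linspace(0, 3, T))   # slowly varying recharge gain
h = gain * rain + rng.normal(0, 0.05, T)
est = rls_random_walk(np.column_stack([rain]), h, q_diag=[1e-6])
print(est[-1])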
Landkamer, Lee L.; Harvey, Ronald W.; Scheibe, Timothy D.; Ryan, Joseph N.
2013-01-01
A colloid transport model is introduced that is conceptually simple yet captures the essential features of colloid transport and retention in saturated porous media when colloid retention is dominated by the secondary minimum because an electrostatic barrier inhibits substantial deposition in the primary minimum. This model is based on conventional colloid filtration theory (CFT) but eliminates the empirical concept of attachment efficiency. The colloid deposition rate is computed directly from CFT by assuming all predicted interceptions of colloids by collectors result in at least temporary deposition in the secondary minimum. Also, a new paradigm for colloid re-entrainment based on colloid population heterogeneity is introduced. To accomplish this, the initial colloid population is divided into two fractions. One fraction, by virtue of physicochemical characteristics (e.g., size and charge), will always be re-entrained after capture in a secondary minimum. The remaining fraction of colloids, again as a result of physicochemical characteristics, will be retained “irreversibly” when captured by a secondary minimum. Assuming the dispersion coefficient can be estimated from tracer behavior, this model has only two fitting parameters: (1) the fraction of the initial colloid population that will be retained “irreversibly” upon interception by a secondary minimum, and (2) the rate at which reversibly retained colloids leave the secondary minimum. These two parameters were correlated to the depth of the Derjaguin-Landau-Verwey-Overbeek (DLVO) secondary energy minimum and pore-water velocity, two physical forces that influence colloid transport. Given this correlation, the model serves as a heuristic tool for exploring the influence of physical parameters such as surface potential and fluid velocity on colloid transport.
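A minimal kinetic sketch of the two-parameter retention idea (a batch analogue with hypothetical rate values, not the authors' implementation): a CFT-style interception rate k_dep splits captured colloids into an irreversible fraction f_irr and a reversible pool that re-entrains at rate k_rel.

```python
import numpy as np
from scipy.integrate import solve_ivp

def colloid_kinetics(t, u, k_dep, f_irr, k_rel):
    """u = [C, S_rev, S_irr]: aqueous colloids, reversibly retained,
    and irreversibly retained colloids.
    k_dep: CFT interception rate; f_irr: fraction retained irreversibly;
    k_rel: re-entrainment rate from the secondary minimum."""
    C, S_rev, S_irr = u
    dC = -k_dep * C + k_rel * S_rev
    dS_rev = (1.0 - f_irr) * k_dep * C - k_rel * S_rev
    dS_irr = f_irr * k_dep * C
    return [dC, dS_rev, dS_irr]

# hypothetical rates: k_dep = 0.3, f_irr = 0.4, k_rel = 0.05 (per unit time)
sol = solve_ivp(colloid_kinetics, (0.0, 50.0), [1.0, 0.0, 0.0],
                args=(0.3, 0.4, 0.05), dense_output=True)
print(sol.y[:, -1])   # at late time, most mass sits in the irreversible pool
```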
NASA Astrophysics Data System (ADS)
Ford, Eric B.
2009-05-01
We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce GTX 280 and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating nsys > 1024 model planetary systems, each containing npl = 4 planets, and assuming nobs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
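The core numerical kernel, solving Kepler's equation E - e sin E = M, can be sketched in a few lines of vectorized Python (the paper's code is CUDA; this is only an illustrative CPU analogue). The comment marks the mean-anomaly reduction step that, per the abstract, is where plain single precision breaks down.

```python
import numpy as np

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Vectorized Newton iteration for the eccentric anomaly E solving
    Kepler's equation E - e*sin(E) = M, for 0 <= e < 1."""
    # Reducing M = 2*pi*(t - t0)/P to [0, 2*pi) is the step the abstract
    # flags as requiring double (or compensated) precision for real data.
    M = np.mod(np.asarray(M, dtype=np.float64), 2.0 * np.pi)
    E = M + e * np.sin(M)                       # starting guess
    for _ in range(max_iter):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

print(solve_kepler([0.5, 2.0], 0.3))
```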
A Note on the Reliability Coefficients for Item Response Model-Based Ability Estimates
ERIC Educational Resources Information Center
Kim, Seonghoon
2012-01-01
Assuming item parameters on a test are known constants, the reliability coefficient for item response theory (IRT) ability estimates is defined for a population of examinees in two different ways: as (a) the product-moment correlation between ability estimates on two parallel forms of a test and (b) the squared correlation between the true…
Corridor of existence of thermodynamically consistent solution of the Ornstein-Zernike equation.
Vorob'ev, V S; Martynov, G A
2007-07-14
We obtain the exact equation for a correction to the Ornstein-Zernike (OZ) equation based on the assumption of the uniqueness of thermodynamic functions. We show that this equation reduces to a differential equation with one arbitrary parameter for the hard sphere model. The compressibility factor, within narrow limits of this parameter's variation, can either coincide with one of the formulas obtained on the basis of analytical solutions of the OZ equation or assume any intermediate value lying in a corridor between these solutions. In particular, we find the value of this parameter for which the thermodynamically consistent compressibility factor corresponds to the Carnahan-Starling formula.
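For reference, the Carnahan-Starling compressibility factor mentioned above is a one-liner; a sketch:

```python
def z_carnahan_starling(eta):
    """Hard-sphere compressibility factor Z = pV/(NkT) at packing
    fraction eta (Carnahan-Starling equation of state)."""
    return (1.0 + eta + eta**2 - eta**3) / (1.0 - eta) ** 3

print(z_carnahan_starling(0.3))   # ~3.97
```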
Mixture Rasch model for guessing group identification
NASA Astrophysics Data System (ADS)
Siow, Hoo Leong; Mahdi, Rasidah; Siew, Eng Ling
2013-04-01
Several alternative dichotomous Item Response Theory (IRT) models have been introduced to account for the guessing effect in multiple-choice assessment. The guessing effect in these models has been considered to be item-related. In the most classic case, pseudo-guessing in the three-parameter logistic IRT model is modeled to be the same for all subjects but may vary across items. This is not realistic because subjects can guess worse or better than the pseudo-guessing value. Derivatives of the three-parameter logistic IRT model improve the situation by incorporating ability in guessing; however, they do not model non-monotone functions. This paper proposes to study guessing from a subject-related aspect, namely guessing as test-taking behavior. A mixture Rasch model is employed to detect latent groups. A hybrid of the mixture Rasch and three-parameter logistic IRT models is proposed to model behavior-based guessing from the subjects' ways of responding to the items. The guessing subjects are assumed to simply choose a response at random. An information criterion is proposed to identify the behavior-based guessing group. Results show that the proposed model selection criterion provides a promising method to identify the guessing group modeled by the hybrid model.
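A sketch of the two ingredients combined in the hybrid described above (hypothetical function names, not the authors' code): the 3PL item response function, and a latent guessing class whose members respond purely at random among the options.

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """Three-parameter logistic IRT model: probability of a correct
    response given ability theta, discrimination a, difficulty b,
    and pseudo-guessing floor c."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def p_hybrid(theta, a, b, c, in_guessing_class, n_options=4):
    """Conceptual hybrid: members of the latent guessing class answer
    at random (1/n_options, independent of ability); all others follow
    the item response model."""
    return 1.0 / n_options if in_guessing_class else p_3pl(theta, a, b, c)

print(p_3pl(0.0, a=1.2, b=-0.5, c=0.2), p_hybrid(0.0, 1.2, -0.5, 0.2, True))
```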
NASA Astrophysics Data System (ADS)
Jolly, A.; Jousset, P.; Neuberg, J.
2003-04-01
We determine locations for low-frequency earthquakes occurring prior to a collapse on June 25th, 1997 using signal amplitudes from a 7-station local seismograph network at the Soufriere Hills volcano on Montserrat, West Indies. Locations are determined by averaging the signal amplitude over the event waveform and inverting these data using an assumed amplitude decay model comprising geometrical spreading and attenuation. The resulting locations are centered beneath the active dome from 500 to 2000 m below sea level, assuming body-wave geometrical spreading and a quality factor of Q=22. Locations for the same events shift systematically shallower by about 500 m when surface-wave geometrical spreading is assumed. The locations are consistent with results obtained using arrival-time methods. The validity of the method is tested against synthetic low-frequency events constructed from a 2-D finite-difference model including visco-elastic properties. Two example events are tested: one from a point source triggered in a low-velocity conduit extending from 100 to 1100 m below the surface, and the second triggered in a conduit located 1500-2500 m below the surface. The resulting seismograms have emergent onsets and extended codas and include the effect of conduit resonance. Employing geometrical spreading and attenuation from the finite-difference modelling, we obtain locations within the respective model conduits, validating our approach. The location depths are sensitive to the assumed geometrical spreading and Q model. We can distinguish between two sources separated by about 1000 m only if we know the decay parameters.
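The amplitude-based location scheme lends itself to a compact grid-search sketch. The following is an illustrative reconstruction, not the study's code; the frequency and velocity defaults are placeholders (the study used Q = 22 with body-wave spreading).

```python
import numpy as np

def locate_from_amplitudes(stations, A_obs, grid, f=2.0, Q=22.0, v=1500.0, n=1.0):
    """Grid-search source location from waveform-averaged amplitudes using
    A(r) = A0 * r**-n * exp(-pi*f*r/(Q*v)); n=1 for body waves, n=0.5 for
    surface waves. The unknown source amplitude A0 is eliminated by a
    least-squares fit in log space at each trial point."""
    best, best_misfit = None, np.inf
    for xs in grid:
        r = np.linalg.norm(stations - xs, axis=1)
        shape = -n * np.log(r) - np.pi * f * r / (Q * v)   # log decay shape
        logA0 = np.mean(np.log(A_obs) - shape)             # best-fit scale
        misfit = np.sum((np.log(A_obs) - logA0 - shape) ** 2)
        if misfit < best_misfit:
            best, best_misfit = xs, misfit
    return best
```

Re-running the search with n = 0.5 instead of n = 1 reproduces the kind of systematic depth shift described in the abstract.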
NASA Astrophysics Data System (ADS)
Diamantopoulos, Efstathios; Durner, Wolfgang
2013-09-01
The description of soil water movement in the unsaturated zone requires knowledge of the soil hydraulic properties, i.e. the water retention and hydraulic conductivity functions. A great number of parameterizations for these can be found in the literature, the majority of which represent the complex pore space of soils as a bundle of cylindrical capillary tubes of various sizes. The assumption of a zero contact angle between the water and the grain surfaces is also made. However, these assumptions limit the predictive capabilities of such models, often leading to errors in the prediction of water dynamics in soils. We present a pore-scale analysis of equilibrium liquid configuration in angular pores, taking pore-scale hysteresis and the effect of contact angle into account. Furthermore, we propose a derivation of the hydraulic conductivity function, again as a function of the contact angle. An additional parameter was added to the conductivity function in order to take into account effects which are not included in the analysis. Finally, we upscale our model from the pore to the sample scale by assuming a gamma statistical distribution of the pore sizes. Closed-form expressions are derived for both the water retention and conductivity functions. The new model was tested against experimental data from multistep inflow/outflow (MSI/MSO) experiments for a sandy material, conducted using ethanol and water as the wetting liquids. Ethanol was assumed to form a zero contact angle with the soil grains. By keeping constant the parameters fitted from the ethanol MSO experiment, we could predict the ethanol MSI dynamics based on our theory. Furthermore, by keeping constant the pore size distribution parameters from the ethanol experiments, we could also predict very well the water dynamics for the MSO experiment. Lastly, we could predict the imbibition dynamics for the water MSI experiment by introducing a finite value of the contact angle. Most importantly, the predictions for both ethanol and water MSI/MSO dynamics were made assuming a unique pore-size distribution.
NASA Astrophysics Data System (ADS)
Kruijt, B.; Jans, W.; Vasconcelos, S.; Tribuzy, E. S.; Felsemburgh, C.; Eliane, M.; Rowland, L.; da Costa, A. C. L.; Meir, P.
2014-12-01
In many dynamic vegetation models, degradation of tropical forests is induced because the models assume that productivity falls rapidly when temperatures rise into the region of 30-40°C. Apart from plant respiration, this is due to the assumptions on the temperature optima of photosynthetic capacity, which are low and differ widely between models, whereas in fact hardly any empirical information is available for tropical forests. Even less is known about the possibility that photosynthesis will acclimate to changing temperatures. The objective of this study is to provide better estimates for these optima, as well as to determine whether any acclimation to temperature change is to be expected. We present both new and hitherto unpublished data on the temperature response of photosynthesis of Amazon rainforest trees, encompassing three sites, several species and five field campaigns. Leaf photosynthesis and its parameters were determined at a range of temperatures. To study the long-term (seasonal) acclimation of this response, this was combined with an artificial, in situ, multi-season leaf heating experiment. The data show that, on average for all non-heated cases, the photosynthetic parameter Vcmax peaks weakly between 35 and 40°C, while heating does not have a clearly significant effect. Results for Jmax are slightly different, with sharper peaks. Scatter was relatively high, which could indicate weak overall temperature dependence. The combined results were used to fit new parameters to the various temperature response curve functions in a range of DGVMs. The figure shows a typical example: while the default JULES model assumes a temperature optimum for Vcmax at around 33°C, the data suggest that Vcmax keeps rising up to at least 40°C. Of course, calculated photosynthesis, obtained by applying this Vcmax in the Farquhar model, peaks at a lower temperature. Finally, the implications of these new model parameters for modelled climate change impact on Amazon forests will be assessed; it is expected that predicted die-back will be less severe.
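A common peaked-Arrhenius form used for Vcmax and Jmax in DGVMs makes the role of the fitted temperature optimum explicit. The sketch below is illustrative only; the Ha, Hd, and dS values are hypothetical, chosen so the optimum lands near 36.5°C, within the 35-40°C range reported above.

```python
import numpy as np

R = 8.314  # J mol-1 K-1

def vcmax_peaked(T_k, v25, Ha=65e3, Hd=200e3, dS=640.0):
    """Peaked Arrhenius response for Vcmax (or Jmax): activation energy Ha,
    deactivation energy Hd, entropy term dS. The optimum temperature is
    T_opt = Hd / (dS - R*log(Ha / (Hd - Ha))), here ~309.7 K (~36.5 C)."""
    arrh = np.exp(Ha * (T_k - 298.15) / (298.15 * R * T_k))
    num = 1.0 + np.exp((298.15 * dS - Hd) / (298.15 * R))
    den = 1.0 + np.exp((T_k * dS - Hd) / (T_k * R))
    return v25 * arrh * num / den

T = np.linspace(293.15, 318.15, 6)      # 20-45 C
print(vcmax_peaked(T, v25=50.0))        # rises toward the optimum, then falls
```

Refitting Ha, Hd, and dS shifts the optimum; a model tuned to peak at 33°C versus 40°C gives the contrast between the default curve and the data described above.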
NASA Astrophysics Data System (ADS)
Stumpp, C.; Nützmann, G.; Maciejewski, S.; Maloszewski, P.
2009-09-01
In this paper, five model approaches with different physical and mathematical concepts, varying in their complexity and requirements, were applied to identify the transport processes in the unsaturated zone. The applicability of these model approaches was compared and evaluated by investigating two tracer breakthrough curves (bromide, deuterium) in a cropped, free-draining lysimeter experiment under natural atmospheric boundary conditions. The data set consisted of time series of water balance, depth-resolved water contents, pressure heads and resident concentrations measured during 800 days. The tracer transport parameters were determined using a simple stochastic (stream tube) model, three lumped-parameter models (constant water content model, multi-flow dispersion model, variable flow dispersion model) and a transient model approach. All of them were able to fit the tracer breakthrough curves. The identified transport parameters of each model approach were compared. Despite the differing physical and mathematical concepts, the resulting parameters (mean water contents, mean water flux, dispersivities) of the five model approaches were all in the same range. The results indicate that the flow processes are also describable assuming steady-state conditions. Homogeneous matrix flow is dominant, and a small pore volume with enhanced flow velocities near saturation was identified with the variably saturated flow and transport approach. The multi-flow dispersion model also identified preferential flow and additionally suggested a third, less mobile flow component. Due to the high fitting accuracy and parameter similarity, all model approaches gave reliable results.
NASA Technical Reports Server (NTRS)
Doty, Keith L
1992-01-01
The author has formulated a new, general model for specifying the kinematic properties of serial manipulators. The new model kinematic parameters do not suffer discontinuities when nominally parallel adjacent axes deviate from exact parallelism. From this new theory the author develops a first-order, lumped-parameter, calibration-model for the ARID manipulator. Next, the author develops a calibration methodology for the ARID based on visual and acoustic sensing. A sensor platform, consisting of a camera and four sonars attached to the ARID end frame, performs calibration measurements. A calibration measurement consists of processing one visual frame of an accurately placed calibration image and recording four acoustic range measurements. A minimum of two measurement protocols determine the kinematics calibration-model of the ARID for a particular region: assuming the joint displacements are accurately measured, the calibration surface is planar, and the kinematic parameters do not vary rapidly in the region. No theoretical or practical limitations appear to contra-indicate the feasibility of the calibration method developed here.
Guilhaumon, François; Gimenez, Olivier; Gaston, Kevin J.; Mouillot, David
2008-01-01
Species-area relationships (SARs) are fundamental to the study of key and high-profile issues in conservation biology and are particularly widely used in establishing the broad patterns of biodiversity that underpin approaches to determining priority areas for biological conservation. Classically, the SAR has been argued in general to conform to a power-law relationship, and this form has been widely assumed in most applications in the field of conservation biology. Here, using nonlinear regressions within an information-theoretic model selection framework, we included uncertainty regarding both model selection and parameter estimation in SAR modeling and conducted a global-scale analysis of the form of SARs for vascular plants and major vertebrate groups across 792 terrestrial ecoregions representing almost 97% of Earth's inhabited land. The results revealed a high level of uncertainty in model selection across biomes and taxa, and that the power-law model is clearly the most appropriate in only a minority of cases. Incorporating this uncertainty into a hotspots analysis using multimodel SARs led to the identification of a dramatically different set of global richness hotspots than when the power-law SAR was assumed. Our findings suggest that the results of analyses that assume a power-law model may be at severe odds with real ecological patterns, raising significant concerns for conservation priority-setting schemes and biogeographical studies. PMID:18832179
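The model-selection step can be illustrated with a toy comparison of the power-law SAR against one saturating alternative under AIC (hypothetical data and a two-model candidate set; the study compared a larger set within a multimodel framework).

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(A, c, z):
    return c * A**z                 # classic SAR form S = c * A^z

def saturating(A, smax, k):
    return smax * A / (k + A)       # one saturating alternative

def aic(y, yhat, n_par):
    """Gaussian AIC computed from the residual sum of squares."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * n_par

# hypothetical richness/area data for one ecoregion set
A = np.array([1., 5., 10., 50., 100., 500., 1000.])
S = np.array([12., 30., 41., 80., 95., 160., 190.])

for name, f, p0 in [("power", power_law, (10.0, 0.25)),
                    ("saturating", saturating, (200.0, 100.0))]:
    p, _ = curve_fit(f, A, S, p0=p0, maxfev=10000)
    print(name, p, aic(S, f(A, *p), len(p)))
```

Model-averaged ("multimodel") predictions weight each fitted curve by its relative AIC support rather than committing to the single best form.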
A statin a day keeps the doctor away: comparative proverb assessment modelling study
Mizdrak, Anja; Scarborough, Peter
2013-01-01
Objective To model the effect on UK vascular mortality of all adults over 50 years old being prescribed either a statin or an apple a day. Design Comparative proverb assessment modelling study. Setting United Kingdom. Population Adults aged over 50 years. Intervention Either a statin a day for people not already taking a statin or an apple a day for everyone, assuming 70% compliance and no change in calorie consumption. The modelling used routinely available UK population datasets; parameters describing the relations between statins, apples, and health were derived from meta-analyses. Main outcome measure Mortality due to vascular disease. Results The estimated annual reduction in deaths from vascular disease of a statin a day, assuming 70% compliance and a reduction in vascular mortality of 12% (95% confidence interval 9% to 16%) per 1.0 mmol/L reduction in low density lipoprotein cholesterol, is 9400 (7000 to 12 500). The equivalent reduction from an apple a day, modelled using the PRIME model (assuming an apple weighs 100 g and that overall calorie consumption remains constant) is 8500 (95% credible interval 6200 to 10 800). Conclusions Both nutritional and pharmaceutical approaches to the prevention of vascular disease may have the potential to reduce UK mortality significantly. With similar reductions in mortality, a 150-year-old health promotion message is able to match modern medicine and is likely to have fewer side effects.
Advanced materials for 193-nm resists
NASA Astrophysics Data System (ADS)
Ushirogouchi, Tohru; Asakawa, Koji; Shida, Naomi; Okino, Takeshi; Saito, Satoshi; Funaki, Yoshinori; Takaragi, Akira; Tsutsumi, Kentaro; Nakano, Tatsuya
2000-06-01
Acrylate monomers containing alicyclic side chains featuring a series of polar substituent groups were assumed to be model compounds. Solubility parameters were calculated for the corresponding acrylate polymers. These acrylate monomers were synthesized using a novel aerobic oxidation reaction employing N-hydroxyphthalimide (NHPI) as a catalyst, and then polymerized. These reactions were confirmed to be applicable to the mass production of those compounds. The calculation results agreed with the hydrophilic parameters measured experimentally. Moreover, the relationship between resist performance and the above-mentioned solubility parameter has been studied. As a result, a correlation between resist performance and the calculated solubility parameter was observed. Finally, 0.13-micron patterns, based on the 1G DRAM design rule, could be successfully resolved by optimizing the solubility parameter and the resist composition.
What is Neptune's D/H ratio really telling us about its water abundance?
NASA Astrophysics Data System (ADS)
Ali-Dib, Mohamad; Lakhlani, Gunjan
2018-05-01
We investigate the deep water abundance of Neptune using a simple two-component (core + envelope) toy model. The free parameters of the model are the total mass of heavy elements in the planet (Z), the mass fraction of Z in the envelope (fenv), and the D/H ratio of the accreted building blocks (D/Hbuild). We systematically search the allowed parameter space on a grid and constrain it using Neptune's bulk carbon abundance, D/H ratio, and interior structure models. Assuming a solar C/O ratio and a cometary D/H for the building blocks forming the planet, we can fit all of the constraints if less than ~15 per cent of Z is in the envelope (median fenv ~ 7 per cent) and the rest is locked in a solid core. This model predicts a maximum bulk oxygen abundance in Neptune of 65× the solar value. If we assume a C/O of 0.17, corresponding to clathrate-hydrate building blocks, we predict a maximum oxygen abundance of 200× the solar value with a median value of ~140. Thus, both cases lead to oxygen abundances significantly lower than the preferred value of Cavalié et al. (~540× solar), inferred from model-dependent deep CO observations. Such high water abundances are excluded by our simple but robust model. We attribute this discrepancy to our imperfect understanding of either the interior structure of Neptune or the chemistry of the primordial protosolar nebula.
A mixed-effects regression model for longitudinal multivariate ordinal data.
Liu, Li C; Hedeker, Donald
2006-03-01
A mixed-effects item response theory model that allows for three-level multivariate ordinal outcomes and accommodates multiple random subject effects is proposed for analysis of multivariate ordinal outcomes in longitudinal studies. This model allows for the estimation of different item factor loadings (item discrimination parameters) for the multiple outcomes. The covariates in the model do not have to follow the proportional odds assumption and can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is proposed utilizing multidimensional Gauss-Hermite quadrature for integration of the random effects. An iterative Fisher scoring solution, which provides standard errors for all model parameters, is used. An analysis of a longitudinal substance use data set, where four items of substance use behavior (cigarette use, alcohol use, marijuana use, and getting drunk or high) are repeatedly measured over time, is used to illustrate application of the proposed model.
Mixed Poisson distributions in exact solutions of stochastic autoregulation models.
Iyer-Biswas, Srividya; Jayaprakash, C
2014-11-01
In this paper we study the interplay between stochastic gene expression and system design using simple stochastic models of autoactivation and autoinhibition. Using the Poisson representation, a technique whose particular usefulness in the context of nonlinear gene regulation models we elucidate, we find exact results for these feedback models in the steady state. Further, we exploit this representation to analyze the parameter spaces of each model, determine which dimensionless combinations of rates are the shape determinants for each distribution, and thus demarcate where in the parameter space qualitatively different behaviors arise. These behaviors include power-law-tailed distributions, bimodal distributions, and sub-Poisson distributions. We also show how these distribution shapes change when the strength of the feedback is tuned. Using our results, we reexamine how well the autoinhibition and autoactivation models serve their conventionally assumed roles as paradigms for noise suppression and noise exploitation, respectively.
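A quick numerical illustration of a mixed Poisson distribution (a generic demonstration, not tied to the paper's specific feedback models): drawing the Poisson rate from a Gamma yields a negative binomial marginal, whose variance exceeds its mean, in contrast to the sub-Poisson regimes discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_mixed_poisson(shape, scale, size):
    """Sample a mixed Poisson distribution: the rate is drawn from a
    Gamma(shape, scale), then a Poisson count is drawn at that rate.
    Marginally this is a negative binomial, a super-Poissonian case."""
    lam = rng.gamma(shape, scale, size)
    return rng.poisson(lam)

x = gamma_mixed_poisson(2.0, 3.0, 100_000)
# Fano factor var/mean = 1 + scale for this mixture, here ~4 (> 1)
print(x.mean(), x.var() / x.mean())
```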
Electrical description of N2 capacitively coupled plasmas with the global model
NASA Astrophysics Data System (ADS)
Cao, Ming-Lu; Lu, Yi-Jia; Cheng, Jia; Ji, Lin-Hong; Engineering Design Team
2016-10-01
N2 discharges in a commercial capacitively coupled plasma reactor are modelled by a combination of an equivalent circuit and the global model, over a range of gas pressures of 1-4 Torr. The ohmic and inductive plasma bulk and the capacitive sheath are represented as LCR elements, with electrical characteristics determined by plasma parameters. The electron density and electron temperature are obtained from the global model, in which a Maxwellian electron distribution is assumed. Voltages and currents are recorded by a VI probe installed after the match network. Using the measured voltage as an input, the current flowing through the discharge volume is calculated from the electrical model and shows excellent agreement with the measurements. The experimentally verified electrical model provides a simple and accurate description of the relationship between the external electrical parameters and the plasma properties, which can serve as a guideline for process window planning in industrial applications.
Ultracold Nonreactive Molecules in an Optical Lattice: Connecting Chemistry to Many-Body Physics.
Doçaj, Andris; Wall, Michael L; Mukherjee, Rick; Hazzard, Kaden R A
2016-04-01
We derive effective lattice models for ultracold bosonic or fermionic nonreactive molecules (NRMs) in an optical lattice, analogous to the Hubbard model that describes ultracold atoms in a lattice. In stark contrast to the Hubbard model, which is commonly assumed to accurately describe NRMs, we find that the single on-site interaction parameter U is replaced by a multichannel interaction, whose properties we elucidate. Because this arises from complex short-range collisional physics, it requires no dipolar interactions and thus occurs even in the absence of an electric field or for homonuclear molecules. We find a crossover between coherent few-channel models and fully incoherent single-channel models as the lattice depth is increased. We show that the effective model parameters can be determined in lattice modulation experiments, which, consequently, measure molecular collision dynamics with a vastly sharper energy resolution than experiments in a free-space ultracold gas.
A general software reliability process simulation technique
NASA Technical Reports Server (NTRS)
Tausworthe, Robert C.
1991-01-01
The structure and rationale of the generalized software reliability process, together with the design and implementation of a computer program that simulates this process are described. Given assumed parameters of a particular project, the users of this program are able to generate simulated status timelines of work products, numbers of injected anomalies, and the progress of testing, fault isolation, repair, validation, and retest. Such timelines are useful in comparison with actual timeline data, for validating the project input parameters, and for providing data for researchers in reliability prediction modeling.
Gravitational wave signature of a mini creation event (MCE)
NASA Astrophysics Data System (ADS)
Dhurandhar, S. V.; Narlikar, J. V.
2018-07-01
In light of the recent discoveries of binary black hole events and one neutron star event by the advanced LIGO (aLIGO) and advanced Virgo (aVirgo) detectors, we propose a new astrophysical source, namely the mini creation event (MCE), as a possible source of gravitational waves (GW) to be detected by advanced detectors. The MCE is at the heart of the quasi-steady state cosmology (QSSC) and is not expected to occur in standard cosmology. Generically, the MCE is anisotropic, and we assume a Bianchi Type I model for its description. We compute its signature waveform and assume masses and distances analogous to the events detected. The striking feature of the waveform associated with this model of the MCE is that it depends on only one amplitude parameter and thus allows for simpler data analysis. By matched filtering the signal we find that, for a broad range of model parameters, the signal-to-noise ratio of the randomly oriented MCE is sufficiently high for a confident detection by aLIGO and aVirgo. We therefore propose the MCE as a viable astrophysical source of GW. The detection or non-detection of such a source also holds implications for QSSC, namely, whether it is a viable cosmology or not.
Oracle estimation of parametric models under boundary constraints.
Wong, Kin Yau; Goldberg, Yair; Fine, Jason P
2016-12-01
In many classical estimation problems, the parameter space has a boundary. In most cases, the standard asymptotic properties of the estimator do not hold when some of the underlying true parameters lie on the boundary. However, without knowledge of the true parameter values, confidence intervals constructed assuming that the parameters lie in the interior are generally over-conservative. A penalized estimation method is proposed in this article to address this issue. An adaptive lasso procedure is employed to shrink the parameters to the boundary, yielding oracle inference that adapts to whether or not the true parameters are on the boundary. When the true parameters are on the boundary, the inference is equivalent to that which would be achieved with a priori knowledge of the boundary, while if the converse is true, the inference is equivalent to that obtained in the interior of the parameter space. The method is demonstrated under two practical scenarios, namely the frailty survival model and linear regression with order-restricted parameters. Simulation studies and real data analyses show that the method performs well with realistic sample sizes and exhibits certain advantages over standard methods.
Hidden Markov model for dependent mark loss and survival estimation
Laake, Jeffrey L.; Johnson, Devin S.; Diefenbach, Duane R.; Ternent, Mark A.
2014-01-01
Mark-recapture estimators assume no loss of marks to provide unbiased estimates of population parameters. We describe a hidden Markov model (HMM) framework that integrates a mark loss model with a Cormack–Jolly–Seber model for survival estimation. Mark loss can be estimated with single-marked animals as long as a sub-sample of animals has a permanent mark. Double-marking provides an estimate of mark loss assuming independence but dependence can be modeled with a permanently marked sub-sample. We use a log-linear approach to include covariates for mark loss and dependence which is more flexible than existing published methods for integrated models. The HMM approach is demonstrated with a dataset of black bears (Ursus americanus) with two ear tags and a subset of which were permanently marked with tattoos. The data were analyzed with and without the tattoo. Dropping the tattoos resulted in estimates of survival that were reduced by 0.005–0.035 due to tag loss dependence that could not be modeled. We also analyzed the data with and without the tattoo using a single tag. By not using.
Bayesian Hierarchical Random Intercept Model Based on Three Parameter Gamma Distribution
NASA Astrophysics Data System (ADS)
Wirawati, Ika; Iriawan, Nur; Irhamah
2017-06-01
Hierarchical data structures are common throughout many areas of research. In the past, the existence of this type of structure was often overlooked in analysis. The appropriate statistical analysis for handling such data is the hierarchical linear model (HLM). This article focuses only on the random intercept model (RIM), a subclass of the HLM. This model assumes that the intercepts of the lowest-level models vary across those models while their slopes are fixed. The differences among intercepts are assumed to be affected by some variables at the upper level; the intercepts are therefore regressed against those upper-level variables as predictors. The purpose of this paper is to demonstrate the proposed two-level RIM by modeling per capita household expenditure in Maluku Utara, with five characteristics at the first level and three characteristics of districts/cities at the second level. The per capita household expenditure data at the first level were captured by the three-parameter Gamma distribution. The model is therefore more complex, due to the interaction of the many parameters representing the hierarchical structure and the distribution pattern of the data. To simplify the parameter estimation, a computational Bayesian method coupled with the Markov Chain Monte Carlo (MCMC) algorithm and its Gibbs sampling is employed.
NASA Astrophysics Data System (ADS)
Silva, F. E. O. E.; Naghettini, M. D. C.; Fernandes, W.
2014-12-01
This paper evaluated the uncertainties associated with the estimation of the parameters of a conceptual rainfall-runoff model, through the use of Bayesian inference techniques by Monte Carlo simulation. The Pará River sub-basin, located in the upper São Francisco river basin in southeastern Brazil, was selected for the study. We used the Rio Grande conceptual hydrologic model (EHR/UFMG, 2001) and the Markov Chain Monte Carlo simulation method named DREAM (VRUGT, 2008a). Two probabilistic models for the residuals were analyzed: (i) the classic one [Normal likelihood, r ~ N(0, σ²)]; and (ii) a generalized likelihood (SCHOUPS & VRUGT, 2010), in which it is assumed that the differences between observed and simulated flows are correlated, non-stationary, and distributed as a Skew Exponential Power density. The assumptions made for both models were checked to ensure that the estimation of uncertainties in the parameters was not biased. The results showed that the Bayesian approach was adequate for the proposed objectives, reinforcing the importance of assessing the uncertainties associated with hydrological modeling.
Garcia, Tanya P; Ma, Yanyuan
2017-10-01
We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.
The Classification of Universes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bjorken, J
2004-04-09
We define a universe as the contents of a spacetime box with comoving walls, large enough to contain essentially all phenomena that can be conceivably measured. The initial time is taken as the epoch when the lowest CMB modes undergo horizon crossing, and the final time is taken when the wavelengths of CMB photons are comparable with the Hubble scale, i.e. with the nominal size of the universe. This allows the definition of a local ensemble of similarly constructed universes, using only modest extrapolations of the observed behavior of the cosmos. We then assume that further out in spacetime, similar universes can be constructed but containing different standard model parameters. Within this multiverse ensemble, it is assumed that the standard model parameters are strongly correlated with size, i.e. with the value of the inverse Hubble parameter at the final time, in a manner as previously suggested. This allows an estimate of the range of sizes which allow life as we know it, and invites a speculation regarding the most natural distribution of sizes. If small sizes are favored, this in turn allows some understanding of the hierarchy problems of particle physics. Subsequent sections of the paper explore other possible implications. In all cases, the approach is as bottom-up and as phenomenological as possible, and suggests that theories of the multiverse so constructed may in fact lay some claim to being scientific.
NASA Astrophysics Data System (ADS)
Zarlenga, Antonio; de Barros, Felipe; Fiori, Aldo
2016-04-01
We present a probabilistic framework for assessing human health risk due to groundwater contamination. Our goal is to quantify how physical hydrogeological and biochemical parameters control the magnitude and uncertainty of human health risk. Our methodology captures the whole risk chain, from aquifer contamination to the ingestion of tap water by the human population. The contaminant concentration, the key parameter for risk estimation, is governed by the interplay between large-scale advection, caused by heterogeneity, and the degradation processes, which are strictly related to local-scale dispersion. The core of the hazard identification, and of the methodology, is the reactive transport model: the erratic displacement of contaminant in groundwater, due to the spatial variability of hydraulic conductivity (K), is characterized by a first-order Lagrangian stochastic model; different dynamics are considered as possible pathways of biodegradation in aerobic and anaerobic conditions. With the goal of quantifying uncertainty, a Beta distribution is assumed for the concentration probability density function (pdf), while different levels of approximation are explored for the estimation of the one-point concentration moments. The information pertaining to flow and transport is connected with a proper dose-response assessment, which generally involves the estimation of physiological parameters of the exposed population. Human health response depends on the exposed individual's metabolism (e.g. variability) and is subject to uncertainty; the health parameters are therefore intrinsically stochastic. As a consequence, we provide an integrated, global probabilistic human health risk framework which allows the propagation of uncertainty from multiple sources. The final result, the health risk pdf, is expressed as a function of a few relevant, physically based parameters, such as the size of the injection area, the Péclet number, the K structure metrics and covariance shape, reaction parameters pertaining to aerobic and anaerobic degradation processes, and the dose-response parameters. Even though the final result assumes a relatively simple form, a few numerical quadratures are required in order to evaluate the trajectory moments of the solute plume. In order to perform a sensitivity analysis, we apply the methodology to a hypothetical case study. The scenario investigated consists of an aquifer that constitutes a water supply for a population, in which a continuous source of NAPL contaminant feeds a steady plume. The risk analysis is limited to carcinogenic compounds, for which the well-known linear relation for human risk is assumed. The analysis shows some interesting findings: the risk distribution is strictly dependent on the pore-scale dynamics that trigger dilution and mixing, and biodegradation may lead to a significant reduction of the risk.
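A stripped-down Monte Carlo version of the risk chain described above, assuming the linear dose-response relation mentioned for carcinogens; all distributions and parameter values here are hypothetical placeholders, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Concentration at the tap: Beta-distributed pdf (as in the framework),
# rescaled here to a hypothetical maximum of 0.05 mg/L.
conc = 0.05 * rng.beta(2.0, 5.0, n)
intake = rng.lognormal(np.log(2.0), 0.2, n)     # tap-water intake, L/day
weight = rng.normal(70.0, 10.0, n)              # body weight, kg
slope = rng.lognormal(np.log(0.01), 0.5, n)     # slope factor, (mg/kg/day)^-1

cdi = conc * intake / weight                    # chronic daily intake
risk = slope * cdi                              # linear carcinogenic model
print(risk.mean(), np.percentile(risk, 95))     # risk pdf summaries
```

Sampling the physiological parameters alongside the concentration is what propagates both hydrogeological and dose-response uncertainty into the final risk pdf.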
Matsuoka, Hiroshi
2012-11-28
For a deeply supercooled liquid near its glass transition temperature, we suggest a possible way to connect the temperature dependence of its molar excess entropy to that of its viscosity by constructing a macroscopic model, where the deeply supercooled liquid is assumed to be a mixture of solid-like and liquid-like micro regions. In this model, we assume that the mole fraction x of the liquid-like micro regions tends to zero as the temperature T of the liquid is decreased and extrapolated to a temperature T_g*, which we assume to be below but close to the lowest glass transition temperature T_g attainable with the slowest possible cooling rate for the liquid. Without referring to any specific microscopic nature of the solid-like and liquid-like micro regions, we also assume that near T_g, the molar enthalpy of the solid-like micro regions is lower than that of the liquid-like micro regions. We then show that the temperature dependence of x is directly related to that of the molar excess entropy. Close to T_g, we assume that an activated motion of the solid-like micro regions controls the viscosity and that this activated motion is a collective motion involving practically all of the solid-like micro regions, so that the molar activation free energy Δg_a for the activated motion is proportional to the mole fraction, 1 - x, of the solid-like micro regions. The temperature dependence of the viscosity is thus connected to that of the molar excess entropy s_e through the temperature dependence of the mole fraction x. As an example, we apply our model to a class of glass formers for which s_e at temperatures near T_g is well approximated by s_e ∝ 1 - T_K/T with T_K < T_g ≅ T_g*, and find their viscosities to be well approximated by the Vogel-Fulcher-Tammann equation for temperatures very close to T_g. We also find that a parameter a appearing in the temperature dependence of x for a glass former in this class is a measure of its fragility. As this class includes both fragile and strong glass formers, our model applies to both fragile and strong glass formers. We estimate the values of the three parameters in our model for three glass formers in this class, o-terphenyl, 3-bromopentane, and Pd40Ni40P20, which is the least fragile among the three. Finally, we also suggest a way to test our assumption about the solid-like and liquid-like micro regions by means of molecular dynamics simulations of model liquids.
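Fitting the Vogel-Fulcher-Tammann equation to viscosity data is the natural empirical check on this class of models; a minimal sketch with made-up data points (not the paper's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def log_vft(T, logA, B, T0):
    """Vogel-Fulcher-Tammann form: log10(eta) = logA + B / (T - T0),
    diverging at the Vogel temperature T0 (analogous to T_K above)."""
    return logA + B / (T - T0)

# hypothetical viscosity data (log10 Pa*s) for a fragile glass former
T = np.array([250., 260., 270., 285., 300., 320.])
log_eta = np.array([11.5, 9.0, 7.1, 5.0, 3.4, 1.9])

p, _ = curve_fit(log_vft, T, log_eta, p0=(-5.0, 600.0, 200.0))
print("logA=%.2f  B=%.0f K  T0=%.0f K" % tuple(p))
```

The smaller the gap between T0 and T_g, the more fragile the liquid, which is the sense in which a single fitted parameter can serve as a fragility measure.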
Retrieval of ammonia abundances and cloud opacities on Jupiter from Voyager IRIS spectra
NASA Technical Reports Server (NTRS)
Conrath, B. J.; Gierasch, P. J.
1986-01-01
Gaseous ammonia abundances and cloud opacities are retrieved from Voyager IRIS 5- and 45-micron data on the basis of a simplified atmospheric model and a two-stream radiative transfer approximation, assuming a single cloud layer with 680-mbar base pressure and 0.14 gas scale height. Brightness temperature measurements obtained as a function of emission angle from selected planetary locations are used to verify the model and constrain a number of its parameters.
ZMOTTO- MODELING THE INTERNAL COMBUSTION ENGINE
NASA Technical Reports Server (NTRS)
Zeleznik, F. J.
1994-01-01
The ZMOTTO program was developed to model mathematically a spark-ignited internal combustion engine. ZMOTTO is a large, general purpose program whose calculations can be established at five levels of sophistication. These five models range from an ideal cycle requiring only thermodynamic properties, to a very complex representation demanding full combustion kinetics, transport properties, and poppet valve flow characteristics. ZMOTTO is a flexible and computationally economical program based on a system of ordinary differential equations for cylinder-averaged properties. The calculations assume that heat transfer is expressed in terms of a heat transfer coefficient and that the cylinder average of kinetic plus potential energies remains constant. During combustion, the pressures of burned and unburned gases are assumed equal and their heat transfer areas are assumed proportional to their respective mass fractions. Even the simplest ZMOTTO model provides for residual gas effects, spark advance, exhaust gas recirculation, supercharging, and throttling. In the more complex models, 1) finite rate chemistry replaces equilibrium chemistry in descriptions of both the flame and the burned gases, 2) poppet valve formulas represent fluid flow instead of a zero pressure drop flow, and 3) flame propagation is modeled by mass burning equations instead of as an instantaneous process. Input to ZMOTTO is determined by the model chosen. Thermodynamic data is required for all models. Transport properties and chemical kinetics data are required only as the model complexity grows. Other input includes engine geometry, working fluid composition, operating characteristics, and intake/exhaust data. ZMOTTO accommodates a broad spectrum of reactants. The program will calculate many Otto cycle performance parameters for a number of consecutive cycles (a cycle being an interval of 720 crankangle degrees). A typical case will have a number of initial ideal cycles and progress through levels of nonideal cycles. ZMOTTO has restart capabilities and permits multicycle calculations with parameters varying from cycle to cycle. ZMOTTO is written in FORTRAN IV (IBM Level H) but has also been compiled with IBM VSFORTRAN (1977 standard). It was developed on an IBM 3033 under the TSS operating system and has also been implemented under MVS. Approximately 412K of 8 bit bytes of central memory are required in a nonpaging environment. ZMOTTO was developed in 1985.
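At the simplest of ZMOTTO's five levels, performance reduces to air-standard relations; for instance, the ideal Otto cycle efficiency depends only on the compression ratio and the heat-capacity ratio (an illustrative sketch, not code from the program):

```python
def otto_efficiency(r, gamma=1.4):
    """Ideal (air-standard) Otto cycle thermal efficiency for
    compression ratio r and heat-capacity ratio gamma -- the
    'ideal cycle' end of the modeling spectrum described above."""
    return 1.0 - r ** (1.0 - gamma)

print(otto_efficiency(9.5))   # ~0.59 for a typical spark-ignition engine
```

The more complex levels replace this closed form with the system of ordinary differential equations for cylinder-averaged properties described above.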
Geodynamic inversion to constrain the non-linear rheology of the lithosphere
NASA Astrophysics Data System (ADS)
Baumann, T. S.; Kaus, Boris J. P.
2015-08-01
One of the main methods to determine the strength of the lithosphere is by estimating its effective elastic thickness. This method assumes that the lithosphere is a thin elastic plate that floats on the mantle and uses both topography and gravity anomalies to estimate the plate thickness. Whereas this seems to work well for oceanic plates, it has given controversial results in continental collision zones. For most of these locations, additional geophysical data sets such as receiver functions and seismic tomography exist that constrain the geometry of the lithosphere and often show that it is rather complex. Yet, lithospheric geometry by itself is insufficient to understand the dynamics of the lithosphere as this also requires knowledge of the rheology of the lithosphere. Laboratory experiments suggest that rocks deform in a viscous manner if temperatures are high and stresses low, or in a plastic/brittle manner if the yield stress is exceeded. Yet, the experimental results show significant variability between various rock types and there are large uncertainties in extrapolating laboratory values to nature, which leaves room for speculation. An independent method is thus required to better understand the rheology and dynamics of the lithosphere in collision zones. The goal of this paper is to discuss such an approach. Our method relies on performing numerical thermomechanical forward models of the present-day lithosphere with an initial geometry that is constructed from geophysical data sets. We employ experimentally determined creep laws for the various parts of the lithosphere, but assume that the parameters of these creep laws as well as the temperature structure of the lithosphere are uncertain. This is used as a priori information to formulate a Bayesian inverse problem that employs topography, gravity, and horizontal and vertical surface velocities to invert for the unknown material parameters and temperature structure. In order to test the general methodology, we first perform a geodynamic inversion of a synthetic forward model of intraoceanic subduction with known parameters. This requires solving an inverse problem with 14-16 parameters, depending on whether temperature is assumed to be known or not. With the help of a massively parallel direct search combined with a Markov Chain Monte Carlo method, solving the inverse problem becomes feasible. Results show that the rheological parameters and particularly the effective viscosity structure of the lithosphere can be reconstructed in a probabilistic sense. This also holds, with somewhat larger uncertainties, for the case where the temperature distribution is parametrized. Finally, we apply the method to a cross-section of the India-Asia collision system. In this case, the number of parameters is larger, which requires solving around 1.9 × 10^6 forward models. The resulting models fit the data within their respective uncertainty bounds, and show that the Indian mantle lithosphere must have a high viscosity. Results for the Tibetan plateau are less clear, and both models with a weak Asian mantle lithosphere and with a weak Asian lower crust fit the data nearly equally well.
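The forward models rest on creep laws of the kind referenced above; as an illustration, the effective viscosity implied by a dislocation creep law can be computed directly. The parameter values below are hypothetical, loosely olivine-like placeholders, not those used in the paper.

```python
import numpy as np

R = 8.314  # J mol-1 K-1

def effective_viscosity(strain_rate, T, A, n, E):
    """Effective viscosity for a dislocation-creep law
    eps_dot = A * sigma**n * exp(-E / (R*T)), defined via
    eta = sigma / (2 * eps_dot)."""
    sigma = (strain_rate / (A * np.exp(-E / (R * T)))) ** (1.0 / n)
    return sigma / (2.0 * strain_rate)

# hypothetical parameters: A in Pa^-n s^-1, n dimensionless, E in J/mol
print(effective_viscosity(1e-15, 1600.0, 1e-16, 3.5, 530e3))  # ~1e20 Pa*s
```

Because the viscosity depends exponentially on E and T, modest uncertainty in the creep-law parameters or the thermal structure translates into orders of magnitude in strength, which is why the inversion targets both jointly.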
A Bayesian model for time-to-event data with informative censoring
Kaciroti, Niko A.; Raghunathan, Trivellore E.; Taylor, Jeremy M. G.; Julius, Stevo
2012-01-01
Randomized trials with dropouts or censored data and discrete time-to-event type outcomes are frequently analyzed using the Kaplan–Meier or product limit (PL) estimation method. However, the PL method assumes that the censoring mechanism is noninformative and when this assumption is violated, the inferences may not be valid. We propose an expanded PL method using a Bayesian framework to incorporate informative censoring mechanism and perform sensitivity analysis on estimates of the cumulative incidence curves. The expanded method uses a model, which can be viewed as a pattern mixture model, where odds for having an event during the follow-up interval (t_{k-1}, t_k], conditional on being at risk at t_{k-1}, differ across the patterns of missing data. The sensitivity parameters relate the odds of an event, between subjects from a missing-data pattern with the observed subjects for each interval. The large number of the sensitivity parameters is reduced by considering them as random and assumed to follow a log-normal distribution with prespecified mean and variance. Then we vary the mean and variance to explore sensitivity of inferences. The missing at random (MAR) mechanism is a special case of the expanded model, thus allowing exploration of the sensitivity to inferences as departures from the inferences under the MAR assumption. The proposed approach is applied to data from the TRial Of Preventing HYpertension. PMID:22223746
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maurya, D. Ch., E-mail: dcmaurya563@gmail.com; Zia, R., E-mail: rashidzya@gmail.com; Pradhan, A., E-mail: pradhan.anirudh@gmail.com
We discuss spatially homogeneous and anisotropic string cosmological models in the Brans–Dicke theory of gravitation. For a spatially homogeneous metric, it is assumed that the expansion scalar θ is proportional to the shear scalar σ. This condition leads to A = kB^m, where k and m are constants. With these assumptions, and also assuming a variable scale factor a = a(t), we find solutions of the Brans–Dicke field equations. Various phenomena like the Big Bang, the expanding universe, and the shift from anisotropy to isotropy are observed in the model. It can also be seen that in the early stage of the evolution of the universe, strings dominate over particles, whereas the universe is dominated by massive strings at late times. Some physical and geometrical behaviors of the models are also discussed and observed to be in good agreement with recent observations of SNe Ia supernovae.
NASA Technical Reports Server (NTRS)
Wang, Qinglin; Gogineni, S. P.
1991-01-01
A numerical procedure is described for estimating the true scattering coefficient, sigma^0, from measurements made using wide-beam antennas. The use of wide-beam antennas results in an inaccurate estimate of sigma^0 if the narrow-beam approximation is used in the retrieval process. To reduce this error, a correction procedure is proposed that estimates the error resulting from the narrow-beam approximation and uses it to obtain a more accurate estimate of sigma^0. An exponential model is assumed to account for the variation of sigma^0 with incidence angle, and the model parameters are estimated from measured data. Based on the model and knowledge of the antenna pattern, the procedure calculates the error due to the narrow-beam approximation. The procedure is shown to provide a significant improvement in the estimation of sigma^0 obtained with wide-beam antennas, and to be insensitive to the assumed sigma^0 model.
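The essence of the correction can be sketched numerically: with an exponential sigma^0(theta) model and an assumed Gaussian beam pattern (both illustrative; the procedure above works with the actual antenna pattern), the gain-weighted average seen by the wide beam is compared with the boresight value to form a correction factor.

```python
import numpy as np

def wide_beam_bias(theta0, b, beamwidth_deg):
    """Ratio of the gain-weighted sigma^0 seen by a wide Gaussian beam to
    the true boresight value, for sigma^0(theta) = A * exp(-b * theta).
    Dividing a wide-beam measurement by this ratio corrects it."""
    bw = np.radians(beamwidth_deg)
    theta = np.linspace(theta0 - 3.0 * bw, theta0 + 3.0 * bw, 2001)
    gain = np.exp(-4.0 * np.log(2.0) * ((theta - theta0) / bw) ** 2)
    w = gain / gain.sum()                       # normalized beam weighting
    return (w * np.exp(-b * theta)).sum() / np.exp(-b * theta0)

# hypothetical case: 30-degree incidence, steep decay, 20-degree beam
print(wide_beam_bias(np.radians(30.0), 8.0, 20.0))   # > 1: overestimate
```

Iterating between fitting (A, b) to the corrected data and recomputing the bias converges when the model and measurements agree.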
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aaboud, M.; Aad, G.; Abbott, B.
A search for squarks and gluinos in final states containing hadronic jets, missing transverse momentum but no electrons or muons is presented. The data were recorded in 2015 by the ATLAS experiment in √s=13 TeV proton–proton collisions at the Large Hadron Collider. No excess above the Standard Model background expectation was observed in 3.2 fb^-1 of analyzed data. Results are interpreted within simplified models that assume R-parity is conserved and the neutralino is the lightest supersymmetric particle. An exclusion limit at the 95% confidence level on the mass of the gluino is set at 1.51 TeV for a simplified model incorporating only a gluino octet and the lightest neutralino, assuming the lightest neutralino is massless. For a simplified model involving the strong production of mass-degenerate first- and second-generation squarks, squark masses below 1.03 TeV are excluded for a massless lightest neutralino. Finally, these limits substantially extend the region of supersymmetric parameter space excluded by previous measurements with the ATLAS detector.
NASA Astrophysics Data System (ADS)
Behmanesh, Iman; Yousefianmoghadam, Seyedsina; Nozari, Amin; Moaveni, Babak; Stavridis, Andreas
2018-07-01
This paper investigates the application of Hierarchical Bayesian model updating for uncertainty quantification and response prediction of civil structures. In this updating framework, structural parameters of an initial finite element (FE) model (e.g., stiffness or mass) are calibrated by minimizing error functions between the identified modal parameters and the corresponding parameters of the model. These error functions are assumed to have Gaussian probability distributions with unknown parameters to be determined. The estimated parameters of the error functions represent the uncertainty of the calibrated model in predicting the building's response (modal parameters here). The focus of this paper is to answer whether the model uncertainties quantified using dynamic measurements at the building's reference/calibration state can be used to improve the model's prediction accuracy at a different structural state, e.g., the damaged structure. Also, the effect of prediction error bias on the uncertainty of the predicted values is studied. The test structure considered here is a ten-story concrete building located in Utica, NY. The modal parameters of the building at its reference state are identified from ambient vibration data and used to calibrate parameters of the initial FE model as well as the error functions. Before demolishing the building, six of its exterior walls were removed, and ambient vibration measurements were also collected from the structure after the wall removal. These data are not used to calibrate the model; they are only used to assess the predicted results. The model updating framework proposed in this paper is applied to estimate the modal parameters of the building at its reference state as well as at two damaged states: moderate damage (removal of four walls) and severe damage (removal of six walls). Good agreement is observed between the model-predicted modal parameters and those identified from vibration tests. Moreover, it is shown that including prediction error bias in the updating process, instead of the commonly used zero-mean error function, can significantly reduce the prediction uncertainties.
Theoretical study of gas hydrate decomposition kinetics--model development.
Windmeier, Christoph; Oellrich, Lothar R
2013-10-10
In order to provide an estimate of the order of magnitude of intrinsic gas hydrate dissolution and dissociation kinetics, the "Consecutive Desorption and Melting Model" (CDM) is developed by applying only theoretical considerations. The process of gas hydrate decomposition is assumed to comprise two consecutive and repetitive quasi-chemical reaction steps: desorption of the guest molecule followed by local solid-body melting. The individual kinetic steps are modeled according to the "Statistical Rate Theory of Interfacial Transport" and the Wilson-Frenkel approach. All remaining required model parameters are linked directly to geometric considerations and a thermodynamic gas hydrate equilibrium model.
Hole superconductivity in a generalized two-band model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, X.Q.; Hirsch, J.E.
1992-06-01
We study superconductivity in a two-band model that generalizes the model introduced by Suhl, Matthias, and Walker: All possible interaction terms coupling both bands are included. The pairing interaction is assumed to originate in the momentum dependence of the intraband interactions that arises in the model of hole superconductivity. The model generically displays a single critical temperature and two gaps, with the larger gap associated with the band with strongest holelike character to the carriers. The dependence of the critical temperature and of the magnitudes of the gaps on the various parameters in the Hamiltonian is studied.
NASA Astrophysics Data System (ADS)
Tubino, Federica
2018-03-01
The effect of human-structure interaction in the vertical direction for footbridges is studied based on a probabilistic approach. The bridge is modeled as a continuous dynamic system, while pedestrians are schematized as moving single-degree-of-freedom systems with random dynamic properties. The non-dimensional form of the equations of motion allows the results to be applied to a very wide set of cases. An extensive Monte Carlo simulation campaign is performed, varying the main non-dimensional parameters identified, and the mean values and coefficients of variation of the damping ratio and of the non-dimensional natural frequency of the coupled system are reported. The results obtained can be interpreted from two different points of view. If the characterization of pedestrians' equivalent dynamic parameters is regarded as uncertain, as a review of the current literature suggests, then the paper provides a range of possible variations of the coupled system's damping ratio and natural frequency as a function of the pedestrians' parameters. If instead a reliable characterization of pedestrians' dynamic parameters is assumed to be available (which is not the case at present, but could be in the future), the results presented can be adopted to estimate the damping ratio and natural frequency of the coupled footbridge-pedestrian system for a very wide range of real structures.
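A hedged sketch of the Monte Carlo idea (all bridge and pedestrian values are assumed for illustration; the paper works with non-dimensional parameters and calibrated distributions): the coupled footbridge-pedestrian system is reduced to two degrees of freedom, and the damping ratio and frequency of the bridge-like mode are extracted by complex-eigenvalue analysis.

```python
# Monte Carlo estimate of the coupled bridge-pedestrian damping ratio and
# frequency from a 2-DOF reduction (illustrative values, not the paper's).
import numpy as np

rng = np.random.default_rng(1)

# Bridge modal properties (assumed): 2 Hz mode, 0.5% damping, 50 t modal mass
m_s, f_s, z_s = 50e3, 2.0, 0.005
k_s = m_s * (2 * np.pi * f_s) ** 2
c_s = 2 * z_s * np.sqrt(k_s * m_s)

def coupled_modal_props(m_p, f_p, z_p):
    """Complex-eigenvalue analysis of the bridge+pedestrian 2-DOF system."""
    k_p = m_p * (2 * np.pi * f_p) ** 2
    c_p = 2 * z_p * np.sqrt(k_p * m_p)
    M = np.diag([m_s, m_p])
    C = np.array([[c_s + c_p, -c_p], [-c_p, c_p]])
    K = np.array([[k_s + k_p, -k_p], [-k_p, k_p]])
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
    lam = np.linalg.eigvals(A)
    lam = lam[np.imag(lam) > 0]                # one eigenvalue per mode
    zeta = -np.real(lam) / np.abs(lam)
    freq = np.abs(lam) / (2 * np.pi)
    i = np.argmin(np.abs(freq - f_s))          # mode closest to the bridge mode
    return zeta[i], freq[i]

# Monte Carlo over random pedestrian dynamic properties (assumed ranges)
zetas, freqs = [], []
for _ in range(5000):
    m_p = rng.normal(10 * 75.0, 10 * 7.5)      # ~10 pedestrians of ~75 kg
    f_p = rng.normal(2.0, 0.3)                 # pedestrian natural frequency, Hz
    z_p = rng.uniform(0.2, 0.5)                # pedestrian damping ratio
    z, f = coupled_modal_props(m_p, f_p, z_p)
    zetas.append(z)
    freqs.append(f)

print("coupled damping ratio: mean %.4f, CoV %.2f" %
      (np.mean(zetas), np.std(zetas) / np.mean(zetas)))
print("coupled frequency: mean %.3f Hz" % np.mean(freqs))
```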
Caruso, Geoffrey; Cavailhès, Jean; Peeters, Dominique; Thomas, Isabelle; Frankhauser, Pierre; Vuidel, Gilles
2015-01-01
This paper describes a dataset of 6284 land transaction prices and plot surfaces in 3 medium-sized French cities (Besançon, Dijon and Brest). The dataset includes road accessibility as obtained from a minimization algorithm, and the amount of green space available to households in the neighborhood of each transaction, as evaluated from a land cover dataset. Beyond the data presentation, the paper describes how these variables can be used to estimate the non-observable parameters of a residential choice function explicitly derived from a microeconomic model. The estimates are used by Caruso et al. (2015) to run a calibrated microeconomic urban growth simulation model in which households are assumed to trade off accessibility and local green space amenities. PMID:26958606
Compact stars in the non-minimally coupled electromagnetic fields to gravity
NASA Astrophysics Data System (ADS)
Sert, Özcan
2018-03-01
We investigate gravitational models with non-minimal Y(R)F^2 coupling of electromagnetic fields to gravity, in order to describe charged compact stars, where Y(R) denotes a function of the Ricci curvature scalar R and F^2 denotes the Maxwell invariant term. We determine a two-parameter family of exact spherically symmetric static solutions and the corresponding non-minimal model without assuming any relation between the energy density of matter and the pressure. We give the mass-radius and electric charge-radius ratios and the surface gravitational redshift, which are obtained from the boundary conditions. We reach a wide range of possibilities for the parameters k and α in these solutions. Lastly, we show that the models can describe compact stars even in the simpler case α = 3.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jiali; Han, Yuefeng; Stein, Michael L.
2016-02-10
The Weather Research and Forecast (WRF) model downscaling skill in extreme maximum daily temperature is evaluated by using the generalized extreme value (GEV) distribution. While the GEV distribution has been used extensively in climatology and meteorology for estimating probabilities of extreme events, accurately estimating GEV parameters based on data from a single pixel can be difficult, even with fairly long data records. This work proposes a simple method assuming that the shape parameter, the most difficult of the three parameters to estimate, does not vary over a relatively large region. This approach is applied to evaluate 31-year WRF-downscaled extreme maximum temperature through comparison with North American Regional Reanalysis (NARR) data. Uncertainty in GEV parameter estimates and the statistical significance in the differences of estimates between WRF and NARR are accounted for by conducting bootstrap resampling. Despite certain biases over parts of the United States, overall, WRF shows good agreement with NARR in the spatial pattern and magnitudes of GEV parameter estimates. Both WRF and NARR show a significant increase in extreme maximum temperature over the southern Great Plains and southeastern United States in January and over the western United States in July. The GEV model shows clear benefits from the regionally constant shape parameter assumption, for example, leading to estimates of the location and scale parameters of the model that show coherent spatial patterns.
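A minimal sketch of the regionally constant shape idea, under an assumed two-step procedure (the authors' estimator and bootstrap machinery are more involved); note that scipy's genextreme shape c equals minus the usual GEV shape xi.

```python
# Two-step GEV fit with a shared regional shape parameter (assumed procedure,
# illustrated on synthetic annual maxima, not WRF/NARR data).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)

# Synthetic stand-in for 31 years of annual maximum temperature at 50 pixels
true_c = -0.1
region = [genextreme.rvs(true_c, loc=rng.uniform(30, 35), scale=2.0,
                         size=31, random_state=rng) for _ in range(50)]

# Step 1: one shape for the whole region, from the pooled (centered) maxima
pooled = np.concatenate([x - x.mean() for x in region])
c_hat, _, _ = genextreme.fit(pooled)

# Step 2: per-pixel location/scale with the shape held fixed (fc=c_hat)
params = [genextreme.fit(x, fc=c_hat) for x in region]
locs = np.array([p[1] for p in params])
scales = np.array([p[2] for p in params])
print("regional shape %.3f; loc range %.1f-%.1f; mean scale %.2f"
      % (c_hat, locs.min(), locs.max(), scales.mean()))
```

Bootstrap resampling of years within each pixel, as mentioned above, would then give confidence intervals on these estimates.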
NASA Astrophysics Data System (ADS)
Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan
2016-12-01
This study investigated the multiple-choice test of understanding of vectors (TUV) by applying item response theory (IRT). The difficulty, discrimination, and guessing parameters of the TUV items were fitted with the three-parameter logistic model of IRT, using the PARSCALE program. The TUV ability parameter was estimated assuming unidimensionality and local independence. Moreover, all distractors of the TUV were analyzed from item response curves (IRC), which represent a simplified form of IRT. Data were gathered on 2392 science and engineering freshmen from three universities in Thailand. The results revealed IRT analysis to be useful in assessing the test, since its item parameters are independent of the ability parameters. The IRT framework reveals item-level information and indicates appropriate ability ranges for the test. Moreover, the IRC analysis can be used to assess the effectiveness of the test's distractors. Both IRT and IRC approaches reveal test characteristics beyond those revealed by classical test analysis methods. Test developers can apply these methods to diagnose and evaluate the features of items at various ability levels of test takers.
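For readers unfamiliar with the three-parameter logistic (3PL) model, the sketch below shows its item response function and a naive per-item maximum-likelihood fit; abilities are treated as known here for simplicity, whereas PARSCALE estimates items and abilities jointly (all data are simulated).

```python
# 3PL item response model: P(correct | theta) = c + (1-c)/(1+exp(-1.7 a (theta-b)))
# where a = discrimination, b = difficulty, c = guessing.
import numpy as np
from scipy.optimize import minimize

def p3pl(theta, a, b, c):
    """Probability of a correct response for ability theta."""
    return c + (1.0 - c) / (1.0 + np.exp(-1.7 * a * (theta - b)))

# Simulate one item answered by 2392 examinees (sample size as in the study)
rng = np.random.default_rng(3)
theta = rng.normal(size=2392)                 # assumed-known abilities
a_true, b_true, c_true = 1.2, 0.3, 0.2
y = rng.uniform(size=theta.size) < p3pl(theta, a_true, b_true, c_true)

def negloglik(params):
    a, b, c = params
    if a <= 0 or not (0 <= c < 1):
        return np.inf
    p = np.clip(p3pl(theta, a, b, c), 1e-9, 1 - 1e-9)
    return -np.sum(np.where(y, np.log(p), np.log(1 - p)))

fit = minimize(negloglik, x0=[1.0, 0.0, 0.15], method="Nelder-Mead")
print("a, b, c estimates:", np.round(fit.x, 3))
```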
Image informative maps for component-wise estimating parameters of signal-dependent noise
NASA Astrophysics Data System (ADS)
Uss, Mykhail L.; Vozel, Benoit; Lukin, Vladimir V.; Chehdi, Kacem
2013-01-01
We deal with the problem of blind parameter estimation of signal-dependent noise from mono-component image data. Multispectral or color images can be processed in a component-wise manner. The main results rest on the assumption that the image texture and noise parameter estimation problems are interdependent. A two-dimensional fractal Brownian motion (fBm) model is used for locally describing image texture. A polynomial model is assumed for describing the dependence of the signal-dependent noise variance on image intensity. Using the maximum likelihood approach, estimates of both fBm-model and noise parameters are obtained. It is demonstrated that the Fisher information (FI) on noise parameters contained in an image is distributed nonuniformly over intensity coordinates (an image's intensity range). It is also shown how to find the most informative intensities and the corresponding image areas for a given noisy image. The proposed estimator benefits from these detected areas to improve the estimation accuracy of the signal-dependent noise parameters. Finally, the potential estimation accuracy (Cramér-Rao Lower Bound, or CRLB) of the noise parameters is derived, providing confidence intervals for these estimates for a given image. In the experiment, the proposed and existing state-of-the-art noise variance estimators are compared for a large image database using CRLB-based statistical efficiency criteria.
NASA Astrophysics Data System (ADS)
Chen, Y.; Li, J.; Xu, H.
2015-10-01
Physically based distributed hydrological models discretize the terrain of the whole catchment into a number of grid cells at fine resolution, assimilate different terrain data and precipitation to different cells, and are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. Early on, physically based distributed hydrological models were assumed to derive their parameters directly from terrain properties, so that no parameter calibration would be needed; unfortunately, the uncertainty associated with this parameter derivation is very high, which has limited their application in flood forecasting, so parameter optimization may still be necessary. This study has two main purposes: first, to propose a parameter optimization method for physically based distributed hydrological models (PBDHMs) in catchment flood forecasting using a particle swarm optimization (PSO) algorithm, to test its competence, and to improve its performance; second, to explore the possibility of improving the flood-forecasting capability of such models through parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that can be used for all PBDHMs. Then, with the Liuxihe model (a physically based distributed hydrological model proposed for catchment flood forecasting) as the study model, an improved PSO algorithm is developed for its parameter optimization. The improvements are a linearly decreasing inertia weight strategy and an arccosine function strategy for adjusting the acceleration coefficients. The method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used effectively for Liuxihe model parameter optimization and can largely improve model capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood-forecasting capability of physically based distributed hydrological models. It was also found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
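The abstract does not give the exact schedules, so the arccosine form below is our assumption; the particle number (20) and generation count (30) follow the reported values, and the objective is a stand-in for the model's flood-forecast error.

```python
# Improved PSO sketch: linearly decreasing inertia weight plus an assumed
# arccosine schedule for the acceleration coefficients, on a toy objective.
import numpy as np

rng = np.random.default_rng(4)

def objective(x):                              # stand-in for the Liuxihe
    return np.sum((x - 1.5) ** 2, axis=-1)     # model's forecast-error measure

n, dim, iters = 20, 5, 30                      # 20 particles, 30 generations
w_max, w_min = 0.9, 0.4
c_start, c_end = 2.5, 0.5                      # assumed coefficient bounds

x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), objective(x)
gbest = pbest[np.argmin(pbest_f)].copy()

for t in range(iters):
    w = w_max - (w_max - w_min) * t / iters    # linear decreasing inertia
    s = np.arccos(1 - 2 * t / iters) / np.pi   # arccosine schedule in [0, 1]
    c1 = c_start + (c_end - c_start) * s       # cognitive: large -> small
    c2 = c_end + (c_start - c_end) * s         # social:    small -> large
    r1, r2 = rng.uniform(size=(2, n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = objective(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best parameters:", np.round(gbest, 3), "error:", objective(gbest))
```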
Pseudo and conditional score approach to joint analysis of current count and current status data.
Wen, Chi-Chung; Chen, Yi-Hau
2018-04-17
We develop a joint analysis approach for recurrent and nonrecurrent event processes subject to case I interval censorship, which are known in the literature as current count and current status data, respectively. We use a shared frailty to link the recurrent and nonrecurrent event processes, while leaving the distribution of the frailty fully unspecified. Conditional on the frailty, the recurrent event is assumed to follow a nonhomogeneous Poisson process, and the mean function of the recurrent event and the survival function of the nonrecurrent event are assumed to follow some general form of semiparametric transformation models. Estimation of the models is based on the pseudo-likelihood and the conditional score techniques. The resulting estimators for the regression parameters and the unspecified baseline functions are shown to be consistent, with rates of the square and cubic roots of the sample size, respectively. Asymptotic normality with a closed-form asymptotic variance is derived for the estimator of the regression parameters. We apply the proposed method to fracture-osteoporosis survey data to identify risk factors jointly for fracture and osteoporosis in the elderly, while accounting for the association between the two events within a subject. © 2018, The International Biometric Society.
Shehla, Romana; Khan, Athar Ali
2016-01-01
Models with a bathtub-shaped hazard function have been widely accepted in the fields of reliability and medicine and are particularly useful in reliability-related decision making and cost analysis. In this paper, the exponential power model, capable of assuming an increasing as well as a bathtub-shaped hazard, is studied. This article makes a Bayesian study of the model and shows how posterior simulations based on Markov chain Monte Carlo algorithms can be straightforward and routine in R. The study is carried out for complete as well as censored data, under the assumption of weakly informative priors for the parameters. In addition, inference interest focuses on the posterior distribution of non-linear functions of the parameters. The model is also extended to include continuous explanatory variables, and the R code is well illustrated. Two real data sets are considered for illustrative purposes.
Exploring natural supersymmetry at the LHC
NASA Astrophysics Data System (ADS)
Nasir, Fariha
This dissertation demonstrates how a variety of supersymmetric grand unified theories can resolve the little hierarchy problem in the minimal supersymmetric standard model and also explain the observed deviation in the anomalous magnetic moment of the muon. The origin of the little hierarchy problem lies in the sensitive manner in which the Z boson mass depends on parameters that can be much larger than its mass. Large values of these parameters imply that a large fine tuning is required to obtain the correct Z boson mass. With large fine tuning supersymmetry appears unnatural which is why models that attempt to resolve this problem are referred to as natural SUSY models. We show that a possible way to exhibit natural supersymmetry is to assume non-universal gauginos in a class of supersymmetric grand unified models. We further show that considering non-universal gauginos in a class of supersymmetric models can help explain the apparent anomaly in the magnetic moment of the muon.
The nature of the continuous non-equilibrium phase transition of Axelrod's model
NASA Astrophysics Data System (ADS)
Peres, Lucas R.; Fontanari, José F.
2015-09-01
Axelrod's model in the square lattice with nearest-neighbors interactions exhibits culturally homogeneous as well as culturally fragmented absorbing configurations. In the case in which the agents are characterized by F = 2 cultural features and each feature assumes k states drawn from a Poisson distribution of parameter q, these regimes are separated by a continuous transition at q_c = 3.10 ± 0.02. Using Monte Carlo simulations and finite-size scaling we show that the mean density of cultural domains μ is an order parameter of the model that vanishes as μ ∼ (q - q_c)^β with β = 0.67 ± 0.01 at the critical point. In addition, for the correlation length critical exponent we find ν = 1.63 ± 0.04 and for Fisher's exponent, τ = 1.76 ± 0.01. This set of critical exponents places the continuous phase transition of Axelrod's model apart from the known universality classes of non-equilibrium lattice models.
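A minimal Monte Carlo sketch of the model variant described above (square lattice, F = 2, k drawn from a Poisson distribution of parameter q); the sweep count and the distinct-culture count used as a proxy for the domain density are illustrative choices, not the paper's finite-size-scaling protocol.

```python
# Axelrod model on an L x L lattice with periodic boundaries: agents interact
# with probability equal to their cultural overlap and copy one differing
# feature. Illustrative sketch; no absorbing-state detection is attempted.
import numpy as np

rng = np.random.default_rng(5)
L, F, q = 20, 2, 3.0
k = max(2, rng.poisson(q))                     # number of states per feature
culture = rng.integers(0, k, size=(L, L, F))

def neighbor(i, j):
    di, dj = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(4)]
    return (i + di) % L, (j + dj) % L          # periodic boundaries

for _ in range(200000):
    i, j = rng.integers(L, size=2)
    ni, nj = neighbor(i, j)
    same = culture[i, j] == culture[ni, nj]
    overlap = same.mean()
    if 0 < overlap < 1 and rng.uniform() < overlap:
        f = rng.choice(np.flatnonzero(~same))  # copy one differing feature
        culture[i, j, f] = culture[ni, nj, f]

# Proxy for the order parameter: how many distinct cultures remain
n_cultures = len({tuple(c) for c in culture.reshape(-1, F)})
print(f"k={k}, distinct cultures: {n_cultures} on {L*L} sites")
```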
Kron-Branin modelling of ultra-short pulsed signal microelectrode
NASA Astrophysics Data System (ADS)
Xu, Zhifei; Ravelo, Blaise; Liu, Yang; Zhao, Lu; Delaroche, Fabien; Vurpillot, Francois
2018-06-01
An uncommon circuit model of a microelectrode for ultra-short signal propagation is developed. The proposed model is based on the Tensorial Analysis of Networks (TAN) using the Kron-Branin (KB) formalism. The systemic graph topology equivalent to the considered structure is established by taking the branch currents as the unknown variables. The TAN mathematical solution is determined after identification of the KB characteristic matrix. The TAN can integrate various physical parameters of the structure. As a proof of concept, via-hole-ended microelectrodes implemented on a Kapton substrate were designed, fabricated and tested. The 0.1-MHz-to-6-GHz S-parameter KB model, simulation and measurement are in good agreement. In addition, time-domain analyses with nanosecond-duration pulse signals were carried out to predict the microelectrode signal integrity. The modelled microstrip electrode is typically integrated in atom probe tomography. The proposed KB method is particularly beneficial with respect to computation speed and adaptability to various structures.
Kumar, Anup; Guria, Chandan; Chitres, G; Chakraborty, Arunangshu; Pathak, A K
2016-10-01
A comprehensive mathematical model involving NPK-10:26:26 fertilizer, NaCl, NaHCO3, light and temperature operating variables for Dunaliella tertiolecta cultivation is formulated to predict microalgae-biomass and lipid productivity. The proposed model includes Monod/Andrews kinetics for the absorption of essential nutrients into the algae-biomass and the Droop model involving internal nutrient cell quota for microalgae growth, assuming the algae-biomass is composed of sugar, a functional pool and neutral lipid. Biokinetic model parameters are determined by minimizing the residual sum of square errors between experimental and computed microalgae-biomass and lipid productivity using a genetic algorithm. The developed model is validated with experiments on Dunaliella tertiolecta cultivation using an air-agitated sintered-disk chromatographic glass bubble column, and the effects of the operating variables on microalgae-biomass and lipid productivity are investigated. Finally, a parametric sensitivity analysis is carried out to assess the sensitivity of the results to the model parameters over the input parameter space. The proposed model may be helpful in scale-up studies and in implementing model-based control strategies in large-scale algal cultivation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Modeling the dynamics of DDT in a remote tropical floodplain: indications of post-ban use?
Mendez, Annelle; Ng, Carla A; Torres, João Paulo Machado; Bastos, Wanderley; Bogdal, Christian; Dos Reis, George Alexandre; Hungerbuehler, Konrad
2016-06-01
Significant knowledge gaps exist regarding the fate and transport of persistent organic pollutants like dichlorodiphenyltrichloroethane (DDT) in tropical environments. In Brazil, indoor residual spraying (IRS) with DDT to combat malaria and leishmaniasis began in the 1950s and was banned in 1998. Nonetheless, high concentrations of DDT and its metabolites were recently detected in human breast milk in the community of Lake Puruzinho in the Brazilian Amazon. In this work, we couple analysis of soils and sediments from 2005 to 2014 at Puruzinho with a novel dynamic floodplain model to investigate the movement and distribution of DDT and its transformation products (dichlorodiphenyldichloroethylene (DDE) and dichlorodiphenyldichloroethane (DDD)) and the implications for human exposure. The model results are in good agreement with the accumulation pattern observed in the measurements, in which DDT, DDE, and DDD (collectively, DDX) accumulate primarily in upland soils and sediments. However, a significant increase was observed in DDX concentrations in soil samples from 2005 to 2014, coupled with a decrease of DDT/DDE ratios, which do not agree with model results assuming a post-ban regime. These observations strongly suggest recent use. We used the model to investigate possible re-emissions after the ban through two scenarios: one assuming DDT use for IRS and the other assuming use against termites and leishmaniasis. Median DDX concentrations and p,p'-DDT/p,p'-DDE ratios from both of these scenarios agreed with measurements in soils, suggesting that the soil parameterization in our model was appropriate. Measured DDX concentrations in sediments were between the two re-emission scenarios. Therefore, both soil and sediment comparisons suggest re-emissions indeed occurred between 2005 and 2014, but additional measurements would be needed to better understand the actual re-emission patterns. Monte Carlo analysis revealed that model predictions for sediments were very sensitive to highly uncertain parameters associated with DDT degradation and partitioning. With this model as a tool for understanding inter-media cycling, additional research to refine these parameters would improve our understanding of DDX fate and transport in tropical sediments.
NASA Astrophysics Data System (ADS)
Gross, Lutz; Tyson, Stephen
2015-04-01
Fracture density and orientation are key parameters controlling the productivity of coal seam gas reservoirs. Seismic anisotropy can help to identify and quantify fracture characteristics. In particular, wide-offset land seismic recordings with dense azimuthal coverage offer the opportunity to recover anisotropy parameters. In many coal seam gas reservoirs (e.g., the Walloon Subgroup in the Surat Basin, Queensland, Australia (Esterle et al. 2013)) the thicknesses of coal beds and interbeds (e.g., mudstone) are well below the seismic wavelength (0.3-1 m versus 5-15 m). In these situations, the observed seismic anisotropy parameters represent effective elastic properties of the composite medium formed of fractured, anisotropic coal and isotropic interbeds. As a consequence, the observed seismic anisotropy cannot be linked directly to fracture characteristics but requires a more careful interpretation. In this paper we discuss techniques to estimate effective seismic anisotropy parameters from well log data, with the objective of improving the interpretation for the case of thin layered coal beds. In the first step we use sonic log data to reconstruct the elasticity parameters as a function of depth (at the resolution of the sonic log). It is assumed that within a sample the fractures are sparse, of the same size and orientation, penny-shaped and equally spaced. Following the classical fracture model, this can be modeled as an elastic horizontally transversely isotropic (HTI) medium (Schoenberg & Sayers 1995). Under the additional assumption of dry fractures, the normal and tangential fracture weaknesses are estimated from the slow and fast shear wave velocities of the sonic log. In the second step we apply Backus-style upscaling to construct effective anisotropy parameters on an appropriate length scale. In order to honor the HTI anisotropy present in each layer we have developed a new extension of the classical Backus averaging for layered isotropic media (Backus 1962). Our new method assumes layered HTI media with constant anisotropy orientation as recovered in the first step. It leads to an effective horizontal orthorhombic elastic model. From this model, Thomsen-style anisotropy parameters are calculated to derive azimuth-dependent normal moveout (NMO) velocities (see Grechka & Tsvankin 1998). In our presentation we will show results of our approach from sonic well logs in the Surat Basin to investigate the potential of reconstructing S-wave velocity anisotropy and fracture density from azimuth-dependent NMO velocity profiles.
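The HTI extension is the authors' contribution and is not reproduced here; the sketch below implements only the classical Backus (1962) average for thinly layered isotropic media, the starting point of the second step, with assumed coal/interbed properties.

```python
# Classical Backus average: effective VTI stiffnesses for a stack of thin
# isotropic layers, from per-layer Vp, Vs, density and thickness.
import numpy as np

def backus_isotropic(vp, vs, rho, h):
    """Return effective (C11, C13, C33, C44, C66) and density."""
    vp, vs, rho, h = map(np.asarray, (vp, vs, rho, h))
    mu = rho * vs ** 2
    lam = rho * vp ** 2 - 2 * mu
    w = h / h.sum()                            # thickness-weighted average <.>
    avg = lambda x: np.sum(w * x)
    C33 = 1.0 / avg(1.0 / (lam + 2 * mu))
    F_ratio = avg(lam / (lam + 2 * mu))
    C13 = C33 * F_ratio
    C11 = avg(4 * mu * (lam + mu) / (lam + 2 * mu)) + C33 * F_ratio ** 2
    C44 = 1.0 / avg(1.0 / mu)
    C66 = avg(mu)
    return C11, C13, C33, C44, C66, avg(rho)

# Thin coal in a thicker mudstone interbed, assumed illustrative properties (SI)
vp = [2400.0, 3200.0]
vs = [1100.0, 1800.0]
rho = [1400.0, 2400.0]
h = [0.5, 5.0]                                 # m, coal bed vs interbed
C11, C13, C33, C44, C66, rho_eff = backus_isotropic(vp, vs, rho, h)
epsilon = (C11 - C33) / (2 * C33)              # Thomsen-style epsilon
gamma = (C66 - C44) / (2 * C44)                # Thomsen-style gamma
print(f"epsilon={epsilon:.3f}, gamma={gamma:.3f}")
```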
Failure time analysis with unobserved heterogeneity: Earthquake duration time of Turkey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ata, Nihal, E-mail: nihalata@hacettepe.edu.tr; Kadilar, Gamze Özel, E-mail: gamzeozl@hacettepe.edu.tr
Failure time models assume that all units are subject to the same risks embodied in the hazard functions. In this paper, unobserved sources of heterogeneity that are not captured by covariates are included in the failure time models. Destructive earthquakes in Turkey since 1900 are used to illustrate the models, with the inter-event time between two consecutive earthquakes defined as the failure time. The paper demonstrates how seismicity and tectonic/physical parameters can potentially influence the spatio-temporal variability of earthquakes, and the approach presents several advantages compared to more traditional ones.
Accounting For Gains And Orientations In Polarimetric SAR
NASA Technical Reports Server (NTRS)
Freeman, Anthony
1992-01-01
Calibration method accounts for characteristics of real radar equipment invalidating standard 2 X 2 complex-amplitude R (receiving) and T (transmitting) matrices. Overall gain in each combination of transmitting and receiving channels assumed different even when only one transmitter and one receiver used. One characterizes departure of polarimetric Synthetic Aperture Radar (SAR) system from simple 2 X 2 model in terms of single parameter used to transform measurements into format compatible with simple 2 X 2 model. Data processed by applicable one of several prior methods based on simple model.
A mathematical model to optimize the drain phase in gravity-based peritoneal dialysis systems.
Akonur, Alp; Lo, Ying-Cheng; Cizman, Borut
2010-01-01
Use of patient-specific drain-phase parameters has previously been suggested to improve peritoneal dialysis (PD) adequacy. Improving management of the drain period may also help to minimize intraperitoneal volume (IPV). A typical gravity-based drain profile consists of a relatively constant initial fast-flow period, followed by a transition period and a decaying slow-flow period. That profile was modeled using the equation V_D(t) = (V_D0 - Q_MAX t) φ + (V_D0 e^(-αt)) (1 - φ), where V_D(t) is the time-dependent dialysate volume; V_D0, the dialysate volume at the start of the drain; Q_MAX, the maximum drain flow rate; α, the exponential drain constant; and φ, the unit step function with respect to the flow transition. We simulated the effects of the assumed patient-specific maximum drain flow (Q_MAX) and transition volume (ψ, the peritoneal volume percentage at which the transition occurs) for fixed device-specific drain parameters. Average patient transport parameters were assumed during a 5-exchange therapy with 10 L of PD solution. Changes in therapy performance strongly depended on the drain parameters. Comparing Q_MAX/ψ of 400 mL/min at 85% with 200 mL/min at 65%, drain time (7.5 min vs. 13.5 min) and IPV (2769 mL vs. 2355 mL) increased when the initial drain flow was low and the transition quick. Ultrafiltration and solute clearances remained relatively similar. Such differences were augmented, up to a drain time of 22 minutes and an IPV of more than 3 L, when Q_MAX was 100 mL/min. The ability to model individual drain conditions together with water and solute transport may help to prevent patient discomfort with gravity-based PD. However, it is essential to note that practical difficulties such as displaced catheters and obstructed flow paths cause variability in drain characteristics, even for the same patient, limiting the clinical applicability of this model.
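A numerical sketch of the quoted drain-profile equation (all patient/device values are assumed; for illustration, α is chosen so the two branches meet at the transition volume, a continuity choice the paper does not necessarily make):

```python
# Evaluate the piecewise drain profile V_D(t) and a simple drain-time metric.
import numpy as np

V_D0 = 2500.0                                  # mL, volume at start of drain
Q_MAX = 200.0                                  # mL/min, initial fast-flow rate
psi = 0.65                                     # transition at 65% of V_D0
t_star = V_D0 * (1.0 - psi) / Q_MAX            # end of the fast-flow period
alpha = -np.log(psi) / t_star                  # 1/min, continuity at t_star

def drain_volume(t):
    """V_D(t) = (V_D0 - Q_MAX*t)*phi + V_D0*exp(-alpha*t)*(1 - phi)."""
    phi = t < t_star                           # unit step at the transition
    return np.where(phi, V_D0 - Q_MAX * t, V_D0 * np.exp(-alpha * t))

t = np.linspace(0.0, 40.0, 401)
v = drain_volume(t)
drain_time = t[np.argmax(v < 0.05 * V_D0)]     # time to drain 95% (assumed target)
print(f"transition at {t_star:.1f} min, ~95% drained by {drain_time:.1f} min")
```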
Model estimation of claim risk and premium for motor vehicle insurance by using Bayesian method
NASA Astrophysics Data System (ADS)
Sukono; Riaman; Lesmana, E.; Wulandari, R.; Napitupulu, H.; Supian, S.
2018-01-01
Risk models need to be estimated by the insurance company in order to predict the magnitude of claims and determine the premiums charged to the insured. This is intended to prevent losses in the future. In this paper, we discuss the estimation of a claim risk model and motor vehicle insurance premiums using a Bayesian approach. It is assumed that the claim frequency follows a Poisson distribution, while the claim amounts are assumed to follow a Gamma distribution. The parameters of the frequency and claim-amount distributions are estimated using Bayesian methods. Furthermore, the estimated distributions of frequency and claim amounts are used to estimate the aggregate risk model as well as its mean and variance. The mean and variance estimators of the aggregate risk were then used to predict the premium chargeable to the insured. Based on the analysis, the claim frequency follows a Poisson distribution with parameter λ = 5.827, while the claim amounts follow a Gamma distribution with parameters p = 7.922 and θ = 1.414. The resulting mean and variance of the aggregate claims are IDR 32,667,489.88 and IDR 38,453,900,000,000.00, respectively. The predicted pure premium chargeable to the insured is IDR 2,722,290.82. The predictions of aggregate claims and premiums can be used as a reference for the insurance company's decision-making in the management of reserves and premiums for motor vehicle insurance.
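The aggregate model is a compound Poisson sum with Gamma claim amounts, so its mean and variance follow in closed form; the sketch below uses the parameter values quoted above, while the monetary scaling and the premium loading are our assumptions.

```python
# Compound Poisson aggregate risk S = X1 + ... + XN, N ~ Poisson(lam),
# Xi ~ Gamma(shape=p, scale=theta). Closed-form moments:
#   E[S] = lam * E[X],  Var[S] = lam * E[X^2].
lam = 5.827                                    # Poisson claim-frequency rate
p, theta = 7.922, 1.414                        # Gamma shape and scale

mean_claim = p * theta                         # E[X]
second_moment = p * (p + 1) * theta ** 2       # E[X^2]
mean_S = lam * mean_claim
var_S = lam * second_moment

# One common premium principle (an assumption here, not the paper's method):
# expected value plus a loading proportional to the standard deviation.
loading = 0.1
premium = mean_S + loading * var_S ** 0.5
print(f"E[S]={mean_S:.3f}, Var[S]={var_S:.3f}, premium={premium:.3f}")
```

The printed values are in the (unstated) monetary units of the fitted Gamma scale; the paper's IDR figures imply an additional currency scaling not reproduced here.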
Mahalingam, Arun; Gawandalkar, Udhav Ulhas; Kini, Girish; Buradi, Abdulrajak; Araki, Tadashi; Ikeda, Nobutaka; Nicolaides, Andrew; Laird, John R; Saba, Luca; Suri, Jasjit S
2016-06-01
Local hemodynamics plays an important role in atherogenesis and the progression of coronary atherosclerosis disease (CAD). The primary biological effect of blood turbulence is the change in wall shear stress (WSS) on the endothelial cell membrane, while the local oscillatory nature of the blood flow affects physiological changes in the coronary artery. In coronary arteries, the blood flow Reynolds number ranges from a few tens to several hundreds, and hence the flow is generally assumed to be laminar when calculating WSS. However, pulsatile blood flow through coronary arteries under stenotic conditions could result in a transition from laminar to turbulent flow. In the present work, the onset of turbulent transition during pulsatile flow through coronary arteries for varying degrees of stenosis (0%, 30%, 50% and 70%) is quantitatively analyzed by calculating the turbulence parameters distal to the stenosis. Also, the effect of the turbulence transition on hemodynamic parameters such as WSS and the oscillatory shear index (OSI) for varying degrees of stenosis is quantified. The validated transitional shear stress transport (SST) k-ω model used in the present investigation is the Reynolds-averaged Navier-Stokes turbulence model best suited to capture the turbulent transition. The arterial wall is assumed to be rigid, and the dynamic curvature effect of myocardial contraction on the blood flow has been neglected. Our observations show that for stenoses of 50% and above, the WSSavg, WSSmax and OSI calculated using the turbulence model deviate from the laminar values by more than 10%, and the flow disturbances seem to increase significantly only after 70% stenosis. The model is shown to be reliable and fully validated. Blood flow through stenosed coronary arteries appears to be turbulent in nature for area stenosis above 70%, and the transition to turbulent flow begins from 50% stenosis.
Convenient models of the atmosphere: optics and solar radiation
NASA Astrophysics Data System (ADS)
Alexander, Ginsburg; Victor, Frolkis; Irina, Melnikova; Sergey, Novikov; Dmitriy, Samulenkov; Maxim, Sapunov
2017-11-01
Simple optical models of the clear and cloudy atmosphere are proposed. Four versions of atmospheric aerosol content are considered: a complete lack of aerosols, a low background concentration (500 cm-3), a high concentration (2000 cm-3) and a very high particle content (5000 cm-3). In the cloud scenario, an external-mixture model is assumed. The values of optical thickness and single scattering albedo are calculated for 13 wavelengths in the shortwave range 0.28-0.90 µm, with the molecular absorption bands simulated with a triangle function. A comparison of the proposed optical parameters with the results of various measurements and retrievals (lidar measurements, sampling, processed radiation measurements) is presented. For the cloudy atmosphere, single-layer and two-layer models are proposed. The cloud optical parameters obtained under the external-mixture assumption agree with values retrieved from airborne observations. Hemispherical fluxes of the reflected and transmitted solar radiation and the radiative divergence are calculated with the delta-Eddington approach. The calculation is done for surface albedo values of 0, 0.5 and 0.9 and for the spectral albedo of a sandy surface. Four solar zenith angles (0°, 30°, 40° and 60°) are considered. The obtained values are compared with data from airborne radiation observations. Estimates of the local instantaneous radiative forcing of atmospheric aerosols and clouds for the considered models are presented, together with the heating rate.
A source-specific model for lossless compression of global Earth data
NASA Astrophysics Data System (ADS)
Kess, Barbara Lynne
A Source Specific Model for Global Earth Data (SSM-GED) is a lossless compression method for large images that captures global redundancy in the data and achieves a significant improvement over CALIC and DCXT-BT/CARP, two leading lossless compression schemes. The Global Land 1-Km Advanced Very High Resolution Radiometer (AVHRR) data, which contains 662 Megabytes (MB) per band, is an example of a large data set that requires decompression of regions of the data. For this reason, SSM-GED compresses the AVHRR data as a collection of subwindows. This approach defines the statistical parameters for the model prior to compression. Unlike universal models that assume no a priori knowledge of the data, SSM-GED captures global redundancy that exists among all of the subwindows of data. The overlap in parameters among subwindows of data enables SSM-GED to improve the compression rate by increasing the number of parameters and maintaining a small model cost for each subwindow of data. This lossless compression method is applicable to other large volumes of image data such as video.
Atmospheric mold spore counts in relation to meteorological parameters
NASA Astrophysics Data System (ADS)
Katial, R. K.; Zhang, Yiming; Jones, Richard H.; Dyer, Philip D.
Fungal spore counts of Cladosporium, Alternaria, and Epicoccum were studied over 8 years in Denver, Colorado. Fungal spore counts were obtained daily during the pollinating season with a Rotorod sampler. Weather data were obtained from the National Climatic Data Center. Daily averages of temperature, relative humidity, daily precipitation, barometric pressure, and wind speed were studied. A time series analysis was performed on the data to mathematically model the spore counts in relation to the weather parameters. Using SAS PROC ARIMA software, a regression analysis was performed, regressing the spore counts on the weather variables while assuming an autoregressive moving average (ARMA) error structure. Cladosporium counts were found to be positively correlated (P<0.02) with average daily temperature and relative humidity, and negatively correlated with precipitation. Alternaria and Epicoccum did not show increased predictability with weather variables. A mathematical model was derived for Cladosporium spore counts using the annual seasonal cycle and the significant weather variables. The models for Alternaria and Epicoccum incorporated the annual seasonal cycle only. Fungal spore counts can thus be modeled by time series analysis and related to meteorological parameters while controlling for seasonality; such modeling can provide estimates of exposure to fungal aeroallergens.
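A hedged sketch of this regression-with-ARMA-errors setup (statsmodels' SARIMAX stands in for SAS PROC ARIMA; the weather series and coefficients are simulated, not the Denver data):

```python
# Regress (log) spore counts on weather covariates with ARMA(1,1) errors.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(6)
n = 8 * 120                                    # ~8 seasons of daily data
day = np.arange(n)
temp = 20 + 8 * np.sin(2 * np.pi * day / 365) + rng.normal(0, 2, n)
humid = 50 + rng.normal(0, 10, n)
precip = rng.exponential(1.0, n)

# Simulated log-counts: weather effects plus AR(1) errors
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.normal(0, 0.3)
y = pd.Series(3 + 0.05 * temp + 0.01 * humid - 0.1 * precip + e)

X = pd.DataFrame({"temp": temp, "humid": humid, "precip": precip})
fit = SARIMAX(y, exog=X, order=(1, 0, 1), trend="c").fit(disp=False)
print(fit.params.round(3))                     # regression + ARMA parameters
```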
NASA Astrophysics Data System (ADS)
Flores, J. C.
2015-12-01
For ancient civilizations, the shift from disorder to organized urban settlements is viewed as a phase-transition simile. The number of monumental constructions, assumed to be a signature of civilization processes, corresponds to the order parameter, and effective connectivity becomes related to the control parameter. Based on parameter estimations from archaeological and paleo-climatological data, this study analyzes the rise and fall of the ancient Caral civilization on the South Pacific coast during a period of small ENSO fluctuations (approximately 4500 BP). Other examples considered include the civilizations on Easter Island and in the Maya Lowlands. This work considers a typical nonlinear third-order evolution equation and numerical simulations.
Nonstandard Analysis and Jump Conditions for Converging Shock Waves
NASA Technical Reports Server (NTRS)
Baty, Roy S.; Farassat, Fereidoun; Tucker, Don H.
2008-01-01
Nonstandard analysis is an area of modern mathematics which studies abstract number systems containing both infinitesimal and infinite numbers. This article applies nonstandard analysis to derive jump conditions for one-dimensional, converging shock waves in a compressible, inviscid, perfect gas. It is assumed that the shock thickness occurs on an infinitesimal interval and the jump functions in the thermodynamic and fluid dynamic parameters occur smoothly across this interval. Predistributions of the Heaviside function and the Dirac delta measure are introduced to model the flow parameters across a shock wave. The equations of motion expressed in nonconservative form are then applied to derive unambiguous relationships between the jump functions for the flow parameters.
Knights, Jonathan; Rohatagi, Shashank
2015-12-01
Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound, and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV % ranging from ~20 to 60 %, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled around 10 % on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times.
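A sketch of the kind of simulation described, with assumed specifics: IV-bolus monoexponential kinetics, a 12-hour dosing interval against a 24-hour half-life, a 1-hour standard deviation on recorded dose times, and a naive least-squares re-fit from three sparse samples.

```python
# Effect of dosing-time error on one-compartment parameter estimation.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)
D, V_true, k_true = 100.0, 50.0, np.log(2) / 24.0   # dose, L, 1/h (t1/2 = 24 h)
tau, n_dose = 12.0, 10                               # 12-h interval < half-life

def conc(t_sample, V, k, dose_times):
    """Superposition of monoexponential decays from each past dose."""
    t = t_sample[:, None] - dose_times[None, :]
    return np.sum(np.where(t >= 0, (D / V) * np.exp(-k * t), 0.0), axis=1)

true_times = np.arange(n_dose) * tau
reported = true_times + rng.normal(0, 1.0, n_dose)   # 1-h SD reporting error
t_sparse = np.array([100.0, 108.0, 116.0])           # sparse sampling (assumed)
y = conc(t_sparse, V_true, k_true, true_times) * np.exp(rng.normal(0, 0.1, 3))

# Re-fit V and k using the erroneous reported dosing times
popt, _ = curve_fit(lambda t, V, k: conc(t, V, k, reported),
                    t_sparse, y, p0=[40.0, 0.05])
print("V, k errors: %.1f%%, %.1f%%" % (100 * (popt[0] / V_true - 1),
                                       100 * (popt[1] / k_true - 1)))
```

Repeating this over many replicates would give the distribution of estimation inaccuracy that the study characterizes.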
NASA Astrophysics Data System (ADS)
Norton, P. A., II
2015-12-01
The U.S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds and is used for the NHM application. For PRMS, each watershed is divided into hydrologic response units (HRUs); by default each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values commonly are adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that capture variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets such as streamflow, snow water equivalent (SWE) and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g., the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.
Estimation in a discrete tail rate family of recapture sampling models
NASA Technical Reports Server (NTRS)
Gupta, Rajan; Lee, Larry D.
1990-01-01
In the context of recapture sampling design for debugging experiments the problem of estimating the error or hitting rate of the faults remaining in a system is considered. Moment estimators are derived for a family of models in which the rate parameters are assumed proportional to the tail probabilities of a discrete distribution on the positive integers. The estimators are shown to be asymptotically normal and fully efficient. Their fixed sample properties are compared, through simulation, with those of the conditional maximum likelihood estimators.
Fuzzy multinomial logistic regression analysis: A multi-objective programming approach
NASA Astrophysics Data System (ADS)
Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan
2017-05-01
Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large, well-balanced datasets, Maximum Likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely, or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate the parameters of multinomial logistic regression. The study assumes that the parameters of the multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and on Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the ML approach. Results show that the proposed model outperforms ML for small datasets.
Nonparametric Transfer Function Models
Liu, Jun M.; Chen, Rong; Yao, Qiwei
2009-01-01
In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between ‘input’ and ‘output’ time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modeling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example. PMID:20628584
NON-HOMOGENEOUS POISSON PROCESS MODEL FOR GENETIC CROSSOVER INTERFERENCE.
Leu, Szu-Yun; Sen, Pranab K
2014-01-01
The genetic crossover interference is usually modeled with a stationary renewal process to construct the genetic map. We propose two non-homogeneous, and dependent, Poisson process models applied to the known physical map. The crossover process is assumed to start from an origin and to occur sequentially along the chromosome. The increment rate depends on the position of the markers and the number of crossover events occurring between the origin and the markers. We show how to obtain parameter estimates for the process and use simulation studies and real Drosophila data to examine the performance of the proposed models.
Complex discrete dynamics from simple continuous population models.
Gamarra, Javier G P; Solé, Ricard V
2002-05-01
Nonoverlapping generations have classically been modelled with difference equations in order to account for the discrete nature of reproductive events. However, other events, such as resource consumption or mortality, are continuous and take place in within-generation time. We therefore assume a realistic hybrid model: a two-dimensional ODE system for resources and consumers, with discrete events for reproduction. Numerical and analytical approaches show that the resulting dynamics resemble a Ricker map, including the period-doubling route to chaos. Stochastic simulations with a handling-time parameter for indirect competition of juveniles may affect the qualitative behaviour of the model.
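A minimal hybrid sketch in the spirit of this model (not the authors' exact equations): resources and consumers evolve by ODEs within a season, reproduction is a discrete event with a handling-time-like saturation, and the year-to-year map shows Ricker-like boom-bust dynamics for the assumed parameter values.

```python
# Hybrid continuous/discrete consumer-resource model (illustrative only).
import numpy as np
from scipy.integrate import solve_ivp

a, m, b, h, R0, T = 0.02, 0.05, 60.0, 0.5, 10.0, 1.0   # assumed parameters

def season(t, y, N0):
    R, N = y
    dR = -a * R * N                            # within-season resource consumption
    dN = -m * N0 * N                           # interference mortality ~ initial density
    return [dR, dN]

def next_generation(N0):
    sol = solve_ivp(season, (0.0, T), [R0, N0], args=(N0,), rtol=1e-8)
    R_end, N_end = sol.y[:, -1]
    intake = (R0 - R_end) / max(N0, 1e-12)     # per-capita consumption
    # Discrete reproduction event with a handling-time-like saturation:
    return b * N_end * intake / (h + intake)

N = 5.0
traj = []
for _ in range(60):
    N = next_generation(N)
    traj.append(N)
print(np.round(traj[-8:], 3))                  # irregular boom-bust (Ricker-like)
```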
Cooling and solidification of liquid-metal drops in a gaseous atmosphere
NASA Technical Reports Server (NTRS)
Mccoy, J. K.; Markworth, A. J.; Collings, E. W.; Brodkey, R. S.
1992-01-01
The free fall of a liquid-metal drop, heat transfer from the drop to its environment, and solidification of the drop are described for both gaseous and vacuum atmospheres. A simple model, in which the drop is assumed to fall rectilinearly and to behave like a rigid particle, is developed to describe the cooling behavior. Recalescence of supercooled drops is assumed to occur instantaneously when a specified temperature is passed. The effects of solidification and of the experimental parameters on drop cooling are calculated and discussed. Major results include temperature as a function of time and of drag, the time to complete solidification, and drag as a function of the fraction of the drop solidified.
NASA Astrophysics Data System (ADS)
Khan, Mair; Malik, M. Y.; Salahuddin, T.; Hussian, Arif.
2018-03-01
The present analysis is devoted to exploring the computational solution of the problem of variable viscosity and inclined Lorentz force effects on Williamson nanofluid flow over a stretching sheet. The viscosity is assumed to vary as a linear function of temperature. The governing mathematical model, a system of nonlinear PDEs, is converted into ODEs by applying suitable transformations. Computational solutions of the problem are achieved via the efficient shooting numerical technique. The effects of the controlling parameters, i.e., the stretching index, inclination angle, Hartmann number, Weissenberg number, variable viscosity parameter, mixed convection parameter, Brownian motion parameter, Prandtl number, Lewis number, thermophoresis parameter and chemically reactive species, on the concentration, temperature and velocity profiles are examined. Additionally, the friction factor coefficient, Nusselt number and Sherwood number are described with the help of graphics as well as tables versus the flow controlling parameters.
Tachyon constant-roll inflation
NASA Astrophysics Data System (ADS)
Mohammadi, A.; Saaidi, Kh.; Golanbari, T.
2018-04-01
Constant-roll inflation is studied with the inflaton taken to be a tachyon field. In this approach, the second slow-roll parameter is taken as a constant, which leads to a differential equation for the Hubble parameter. Finding an exact solution for the Hubble parameter is difficult, which leads us to solve for it numerically. On the other hand, since in this formalism the slow-roll parameter η is constant and cannot be assumed to be necessarily small, the perturbation parameters should be reconsidered; this, in turn, results in new terms appearing in the amplitude of scalar perturbations and the scalar spectral index. Utilizing the numerical solution for the Hubble parameter, we estimate the perturbation parameters at the horizon exit time and compare them with observational data. The results show that, for specific values of the constant parameter η, we can have an almost scale-invariant amplitude of scalar perturbations. Finally, the attractor behavior of the solutions of the model is presented, and we determine that this feature is properly satisfied.
A fuzzy mathematical model of West Java population with logistic growth model
NASA Astrophysics Data System (ADS)
Nurkholipah, N. S.; Amarti, Z.; Anggriani, N.; Supriatna, A. K.
2018-03-01
In this paper we develop a mathematical model of population growth in West Java Province, Indonesia. The model takes the form of a logistic differential equation. We parameterize the model using several triples of data and choose the best triple, namely the one with the smallest Mean Absolute Percentage Error (MAPE). The resulting model predicts the historical data with high accuracy and is also able to predict the future population number. Predicting the future population is among the important factors to consider in preparing good management for the population. Several experiments are done to look at the effect of impreciseness in the data. This is done by considering a fuzzy initial value in the crisp model, assuming that the model propagates the fuzziness of the independent variable to the dependent variable. We assume a triangular fuzzy number representing the impreciseness of the data. We found that the fuzziness may disappear in the long term. Other scenarios are also investigated, such as the effect of fuzzy parameters with a crisp initial value of the population. The solution of the model is obtained numerically using the fourth-order Runge-Kutta scheme.
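Because the crisp logistic equation has a closed-form solution that is monotone in the initial value, a fuzzy initial condition can be propagated exactly through alpha-cuts; the sketch below (illustrative r and K, not the fitted West Java values) shows the fuzziness shrinking over time.

```python
# Propagate a triangular fuzzy initial population through the logistic model.
import numpy as np

r, K = 0.02, 60e6                              # growth rate, carrying capacity (assumed)
P0_tri = (42e6, 43e6, 44e6)                    # triangular fuzzy initial value (assumed)

def logistic(P0, t):
    """Closed-form logistic solution P(t) = K / (1 + (K/P0 - 1) e^{-rt})."""
    return K / (1.0 + (K / P0 - 1.0) * np.exp(-r * t))

for alpha in (0.0, 0.5, 1.0):
    # Alpha-cut of a triangular number: an interval shrinking toward the peak
    lo = P0_tri[0] + alpha * (P0_tri[1] - P0_tri[0])
    hi = P0_tri[2] - alpha * (P0_tri[2] - P0_tri[1])
    for t in (0.0, 50.0, 200.0):
        width = logistic(hi, t) - logistic(lo, t)   # valid: P(t) monotone in P0
        print(f"alpha={alpha:.1f} t={t:5.0f} width={width/1e6:.3f} million")
# The interval width shrinks as t grows: the fuzziness disappears in the long
# term because every trajectory converges to the carrying capacity K.
```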
Zaikin, Alexey; Míguez, Joaquín
2017-01-01
We compare three state-of-the-art Bayesian inference methods for the estimation of the unknown parameters in a stochastic model of a genetic network. In particular, we introduce a stochastic version of the paradigmatic synthetic multicellular clock model proposed by Ullner et al., 2007. By introducing dynamical noise in the model and assuming that the partial observations of the system are contaminated by additive noise, we enable a principled mechanism to represent experimental uncertainties in the synthesis of the multicellular system and pave the way for the design of probabilistic methods for the estimation of any unknowns in the model. Within this setup, we tackle the Bayesian estimation of a subset of the model parameters. Specifically, we compare three Monte Carlo based numerical methods for the approximation of the posterior probability density function of the unknown parameters given a set of partial and noisy observations of the system. The schemes we assess are the particle Metropolis-Hastings (PMH) algorithm, the nonlinear population Monte Carlo (NPMC) method and the approximate Bayesian computation sequential Monte Carlo (ABC-SMC) scheme. We present an extensive numerical simulation study, which shows that while the three techniques can effectively solve the problem there are significant differences both in estimation accuracy and computational efficiency. PMID:28797087
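Of the three schemes, the simplest to sketch is plain ABC rejection, the non-sequential ancestor of ABC-SMC; the stochastic model, summary statistics and tolerance below are toy assumptions, not the multicellular clock model.

```python
# ABC rejection for a toy stochastic model with noisy partial observations.
import numpy as np

rng = np.random.default_rng(8)

def simulate(theta, n=50):
    """Toy stochastic trace: discretized OU-like dynamics with rate theta."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = x[t - 1] - theta * x[t - 1] * 0.1 + rng.normal(0, 0.1)
    return x

theta_true = 0.8
data = simulate(theta_true) + rng.normal(0, 0.05, 50)  # additive obs noise

def summary(x):
    return np.array([np.std(x), np.corrcoef(x[:-1], x[1:])[0, 1]])

s_obs = summary(data)
accepted = []
for _ in range(20000):
    theta = rng.uniform(0.0, 2.0)              # draw from the prior
    s_sim = summary(simulate(theta) + rng.normal(0, 0.05, 50))
    if np.linalg.norm(s_sim - s_obs) < 0.1:    # tolerance (assumed)
        accepted.append(theta)

if accepted:
    print(f"posterior mean {np.mean(accepted):.2f} from {len(accepted)} samples")
else:
    print("no acceptances; loosen the tolerance")
```

ABC-SMC improves on this by shrinking the tolerance over a sequence of weighted particle populations, which is one source of the efficiency differences the paper quantifies.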
Development of a second order closure model for computation of turbulent diffusion flames
NASA Technical Reports Server (NTRS)
Varma, A. K.; Donaldson, C. D.
1974-01-01
A typical eddy box model for the second-order closure of turbulent, multispecies, reacting flows was developed. The model structure was quite general and was valid for an arbitrary number of species. For the case of a reaction involving three species, the nine model parameters were determined from equations for nine independent first- and second-order correlations. The model enabled calculation of any higher-order correlation involving mass fractions, temperatures, and reaction rates in terms of first- and second-order correlations. Model predictions for the reaction rate were in very good agreement with exact solutions of the reaction rate equations for a number of assumed flow distributions.
Adaptive nodes enrich nonlinear cooperative learning beyond traditional adaptation by links.
Sardi, Shira; Vardi, Roni; Goldental, Amir; Sheinin, Anton; Uzan, Herut; Kanter, Ido
2018-03-23
Physical models typically assume time-independent interactions, whereas neural networks and machine learning incorporate interactions that function as adjustable parameters. Here we demonstrate a new type of abundant cooperative nonlinear dynamics where learning is attributed solely to the nodes, instead of the network links, whose number is significantly larger. The nodal, neuronal, fast adaptation follows its relative anisotropic (dendritic) input timings, as indicated experimentally, similarly to the slow learning mechanism currently attributed to the links, the synapses. It represents a non-local learning rule, where effectively many incoming links to a node concurrently undergo the same adaptation. The network dynamics is now, counterintuitively, governed by the weak links, which previously were assumed to be insignificant. This cooperative nonlinear dynamic adaptation presents a self-controlled mechanism to prevent divergence or vanishing of the learning parameters, as opposed to learning by links, and also supports self-oscillations of the effective learning parameters. It hints at a hierarchical computational complexity of nodes, following their number of anisotropic inputs, and opens new horizons for advanced deep learning algorithms and artificial-intelligence-based applications, as well as a new mechanism for enhanced and fast learning by neural networks.
Constraints on Average Radial Anisotropy in the Lower Mantle
NASA Astrophysics Data System (ADS)
Trampert, J.; De Wit, R. W. L.; Kaeufl, P.; Valentine, A. P.
2014-12-01
Quantifying uncertainties in seismological models is challenging, yet ideally quality assessment is an integral part of the inverse method. We invert centre frequencies for spheroidal and toroidal modes for three parameters of average radial anisotropy, density, and P- and S-wave velocities in the lower mantle. We adopt a Bayesian machine learning approach to extract the information on the earth model that is available in the normal mode data. The method is flexible and allows us to infer probability density functions (pdfs), which provide a quantitative description of our knowledge of the individual earth model parameters. The parameters describing shear- and P-wave anisotropy show little deviation from isotropy, but the intermediate parameter η carries robust information on a negative anisotropy of ~1% below 1900 km depth. The mass density in the deep mantle (below 1900 km) shows clear positive deviations from existing models. The other parameters (P- and shear-wave velocities) are close to PREM. Our results require that the average mantle is about 150 K colder than commonly assumed adiabats and consists of a mixture of about 60% perovskite and 40% ferropericlase containing 10-15% iron. The anisotropy favours a specific orientation of the two minerals. This observation has important consequences for the nature of mantle flow.
Arihood, Leslie D.
2009-01-01
In 2005, the U.S. Geological Survey began a pilot study for the National Assessment of Water Availability and Use Program to assess the availability of water and water use in the Great Lakes Basin. Part of the study involves constructing a ground-water flow model for the Lake Michigan part of the Basin. Most ground-water flow occurs in the glacial sediments above the bedrock formations; therefore, adequate representation by the model of the horizontal and vertical hydraulic conductivity of the glacial sediments is important to the accuracy of model simulations. This work processed and analyzed well records to provide the hydrogeologic parameters of horizontal and vertical hydraulic conductivity and ground-water levels for the model layers used to simulated ground-water flow in the glacial sediments. The methods used to convert (1) lithology descriptions into assumed values of horizontal and vertical hydraulic conductivity for entire model layers, (2) aquifer-test data into point values of horizontal hydraulic conductivity, and (3) static water levels into water-level calibration data are presented. A large data set of about 458,000 well driller well logs for monitoring, observation, and water wells was available from three statewide electronic data bases to characterize hydrogeologic parameters. More than 1.8 million records of lithology from the well logs were used to create a lithologic-based representation of horizontal and vertical hydraulic conductivity of the glacial sediments. Specific-capacity data from about 292,000 well logs were converted into horizontal hydraulic conductivity values to determine specific values of horizontal hydraulic conductivity and its aerial variation. About 396,000 well logs contained data on ground-water levels that were assembled into a water-level calibration data set. A lithology-based distribution of hydraulic conductivity was created by use of a computer program to convert well-log lithology descriptions into aquifer or nonaquifer categories and to calculate equivalent horizontal and vertical hydraulic conductivities (K and KZ, respectively) for each of the glacial layers of the model. The K was based on an assumed value of 100 ft/d (feet per day) for aquifer materials and 1 ft/d for nonaquifer materials, whereas the equivalent KZ was based on an assumed value of 10 ft/d for aquifer materials and 0.001 ft/d for nonaquifer materials. These values were assumed for convenience to determine a relative contrast between aquifer and nonaquifer materials. The point values of K and KZ from wells that penetrate at least 50 percent of a model layer were interpolated into a grid of values. The K distribution was based on an inverse distance weighting equation that used an exponent of 2. The KZ distribution used inverse distance weighting with an exponent of 4 to represent the abrupt change in KZ that commonly occurs between aquifer and nonaquifer materials. The values of equivalent hydraulic conductivity for aquifer sediments needed to be adjusted to actual values in the study area for the ground-water flow modeling. The specific-capacity data (discharge, drawdown, and time data) from the well logs were input to a modified version of the Theis equation to calculate specific capacity based horizontal hydraulic conductivity values (KSC). The KSC values were used as a guide for adjusting the assumed value of 100 ft/d for aquifer deposits to actual values used in the model. 
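One standard way to turn specific-capacity data into hydraulic conductivity is a Theis-based fixed-point iteration (the report's modified Theis procedure may differ in detail; all example values are assumed):

```python
# Convert specific-capacity data (Q, s, t) to hydraulic conductivity K via a
# Theis-based fixed-point iteration: T = Q/(4*pi*s) * ln(2.25*T*t/(r^2*S)).
import math

def k_from_specific_capacity(Q, s, t, r, S, b, iters=50):
    """Q [ft3/d], drawdown s [ft], pumping time t [d], well radius r [ft],
    storativity S [-], aquifer thickness b [ft] -> K [ft/d]."""
    T = Q / s                                  # initial transmissivity guess
    for _ in range(iters):
        # T appears on both sides of the Theis-based relation; iterate
        T = Q / (4 * math.pi * s) * math.log(2.25 * T * t / (r ** 2 * S))
    return T / b

# Example: 100 gal/min for 2 hours with 20 ft of drawdown (assumed values)
Q = 100 * 192.5                                # gal/min -> ft3/d
K = k_from_specific_capacity(Q, s=20.0, t=2 / 24, r=0.25, S=1e-4, b=50.0)
print(f"K ~ {K:.1f} ft/d")
```

Applying such a conversion well-by-well is what yields the KSC values used to adjust the assumed 100 ft/d aquifer value toward actual conditions.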
Water levels from well logs were processed to improve the reliability of water levels used for comparison with simulated water levels in a model layer during model calibration. Water levels were interpolated by kriging to determine a composite water-level surface. The difference between the kriged surface and individual water levels was used to identify outlier water levels. Examination of the well-log lithology data in map form revealed that the data were useful not only for model input but also for understanding the hydrogeologic framework of the glacial sediments.
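The two-exponent interpolation scheme described above lends itself to a short illustration. The sketch below is hypothetical: the well locations and the idw helper are illustrative, with only the assumed aquifer/nonaquifer values and the exponents of 2 and 4 taken from the report.

```python
# Minimal sketch of the inverse-distance-weighting step described above:
# K uses an exponent of 2; KZ uses an exponent of 4 to preserve the sharp
# contrast between aquifer and nonaquifer materials.
import numpy as np

def idw(xy_known, values, xy_grid, power):
    """Interpolate point values onto grid locations by inverse distance weighting."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                 # guard against coincident points
    w = 1.0 / d**power
    return (w * values).sum(axis=1) / w.sum(axis=1)

wells = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # hypothetical locations
k_vals = np.array([100.0, 1.0, 100.0])       # ft/d, assumed aquifer/nonaquifer K
kz_vals = np.array([10.0, 0.001, 10.0])      # ft/d, assumed aquifer/nonaquifer KZ
grid = np.array([[0.5, 0.5]])
k_grid = idw(wells, k_vals, grid, power=2)   # smooth horizontal-K field
kz_grid = idw(wells, kz_vals, grid, power=4) # sharper vertical-KZ field
```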
Holographic dark energy in higher derivative gravity with time-varying model parameter c²
NASA Astrophysics Data System (ADS)
Borah, B.; Ansari, M.
2015-01-01
The purpose of this paper is to study holographic dark energy in higher derivative gravity, assuming the model parameter c² to be a slowly varying function of time. Since dark energy emerges as the combined effect of linear and non-linear curvature terms, it is important to examine holographic dark energy in higher derivative gravity, where the action contains both linear and non-linear terms of the Ricci curvature R. We consider a non-interacting scenario of holographic dark energy with dark matter in a spatially flat universe and obtain the evolution of the equation of state parameter. We also determine the deceleration parameter and the evolution of the dark energy density to describe the expansion of the universe. Further, we investigate the validity of the generalized second law of thermodynamics in this scenario. Finally, we present a cosmological application of this work by deriving a relation for the equation of state of holographic dark energy at low redshifts containing the c² correction.
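For orientation, holographic dark energy conventionally assigns the density below; this is a standard expression stated as background, not quoted from the paper, with M_p the reduced Planck mass and L the infrared cutoff:

```latex
\rho_{\Lambda} = 3\, c^{2} M_{p}^{2} L^{-2} ,
```

so treating c² as slowly varying in time introduces correction terms, proportional to its time derivative, into the equation of state; this is the c² correction referred to above.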
Predictive process simulation of cryogenic implants for leading edge transistor design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gossmann, Hans-Joachim; Zographos, Nikolas; Park, Hugh
2012-11-06
Two cryogenic implant TCAD modules have been developed: (i) a continuum-based compact model targeted towards a TCAD production environment, calibrated against an extensive data set for all common dopants. Ion-specific calibration parameters related to damage generation and dynamic annealing were used and resulted in excellent fits to the calibration data set. (ii) A kinetic Monte Carlo (kMC) model including the full time dependence of the ion exposure that a particular spot on the wafer experiences, as well as the resulting temperature-versus-time profile of this spot. It was calibrated by adjusting damage generation and dynamic annealing parameters. The kMC simulations clearly demonstrate the importance of the time structure of the beam for the amorphization process: assuming an average dose rate does not capture all of the physics and may lead to incorrect conclusions. The model enables optimization of the amorphization process through tool parameters such as scan speed or beam height.
Global-scale combustion sources of organic aerosols: sensitivity to formation and removal mechanisms
NASA Astrophysics Data System (ADS)
Tsimpidi, Alexandra P.; Karydis, Vlassis A.; Pandis, Spyros N.; Lelieveld, Jos
2017-06-01
Organic compounds from combustion sources such as biomass burning and fossil fuel use are major contributors to the global atmospheric load of aerosols. We analyzed the sensitivity of model-predicted global-scale organic aerosols (OA) to parameters that control primary emissions, photochemical aging, and the scavenging efficiency of organic vapors. We used a computationally efficient module for the description of OA composition and evolution in the atmosphere (ORACLE) of the global chemistry-climate model EMAC (ECHAM/MESSy Atmospheric Chemistry). A global dataset of aerosol mass spectrometer (AMS) measurements was used to evaluate simulated primary (POA) and secondary (SOA) OA concentrations. Model results are sensitive to the emission rates of intermediate-volatility organic compounds (IVOCs) and POA. Assuming enhanced reactivity of semi-volatile organic compounds (SVOCs) and IVOCs with OH substantially improved the model performance for SOA. The use of a hybrid approach for the parameterization of the aging of IVOCs had a small effect on predicted SOA levels. The model performance improved by assuming that freshly emitted organic compounds are relatively hydrophobic and become increasingly hygroscopic due to oxidation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkattraman, Ayyaswamy
2013-11-15
The post-breakdown characteristics of field emission driven microplasma are studied theoretically and numerically. A cathode fall model assuming a linearly varying electric field is used to obtain equations governing the operation of steady-state field emission driven microplasmas. The results obtained from the model by solving these equations are compared with particle-in-cell with Monte Carlo collisions simulation results for parameters including the plasma potential, cathode fall thickness, ion number density in the cathode fall, and current density versus voltage curves. The model shows good overall agreement with the simulations but slightly overpredicts the plasma potential and the cathode fall thickness, which is attributed to the assumed electric field profile. The current density versus voltage curves show an arc region characterized by a negative slope as well as an abnormal glow discharge characterized by a positive slope in gaps as small as 10 μm operating at atmospheric pressure. The model also recovers the traditional macroscale current-voltage theory in the absence of field emission.
NASA Technical Reports Server (NTRS)
Fontenla, J. M.; Avrett, E. H.; Loeser, R.
1990-01-01
The energy balance in the lower transition region is analyzed by constructing theoretical models which satisfy the energy balance constraint. The energy balance is achieved by balancing the radiative losses against the energy flowing downward from the corona. This energy flow is mainly in two forms: conductive heat flow and hydrogen ionization energy flow due to ambipolar diffusion. Hydrostatic equilibrium is assumed, and, in a first calculation, local mechanical heating and Joule heating are ignored. In a second model, some mechanical heating compatible with chromospheric energy-balance calculations is introduced. The models are computed with a partial non-LTE approach in which radiation departs strongly from LTE but particles depart from Maxwellian distributions only to first order. The results, which apply to cases where the magnetic field is either absent or uniform and vertical, are compared with the observed Lyman lines and continuum from the average quiet Sun. The approximate agreement suggests that this type of model can roughly explain the observed intensities in a physically meaningful way, assuming only a few free parameters specified as chromospheric boundary conditions.
NASA Astrophysics Data System (ADS)
Doutres, Olivier; Atalla, Noureddine; Dong, Kevin
2013-02-01
This paper proposes simple semi-phenomenological models to predict the sound absorption efficiency of highly porous polyurethane foams from microstructure characterization. In a previous paper [J. Appl. Phys. 110, 064901 (2011)], the authors presented a 3-parameter semi-phenomenological model linking the microstructure properties of fully and partially reticulated isotropic polyurethane foams (i.e., strut length l, strut thickness t, and reticulation rate Rw) to the macroscopic non-acoustic parameters involved in the classical Johnson-Champoux-Allard model (i.e., porosity φ, airflow resistivity σ, tortuosity α∞, and viscous Λ and thermal Λ' characteristic lengths). The model was based on existing scaling laws, validated for fully reticulated polyurethane foams, and improved using both geometrical and empirical approaches to account for the presence of membranes closing the pores. This 3-parameter model is applied to six polyurethane foams in this paper and is found to be highly sensitive to the microstructure characterization, particularly to the strut dimensions. A simplified micro/macro model is then presented. It is based on the cell size Cs and reticulation rate Rw only, assuming that the geometric ratio between strut length l and strut thickness t is known. This simplified model, called the 2-parameter model, considerably simplifies the microstructure characterization procedure. A comparison of the two proposed semi-phenomenological models is presented using six polyurethane foams that are either fully or partially reticulated, isotropic or anisotropic. It is shown that the 2-parameter model is less sensitive to measurement uncertainties than the original model and allows a better estimation of the sound absorption behavior of polyurethane foams.
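To illustrate where such micro/macro models feed into absorption predictions, the sketch below covers the final step only: given effective properties from a JCA-type calculation (stubbed here with placeholder complex values, since the abstract does not reproduce the JCA expressions or foam parameters), the normal-incidence absorption of a hard-backed layer follows from its surface impedance.

```python
# Minimal sketch: normal-incidence absorption of a hard-backed porous layer.
# rho_eff and K_eff would come from a JCA-type model fed by the micro/macro
# relations discussed above; the numbers here are placeholders.
import numpy as np

rho0, c0 = 1.2, 343.0            # air density (kg/m^3) and sound speed (m/s)
Z0 = rho0 * c0                   # characteristic impedance of air
d = 0.05                         # layer thickness (m), assumed

def absorption(f, rho_eff, K_eff, phi):
    """Normal-incidence absorption coefficient of a hard-backed layer."""
    omega = 2 * np.pi * f
    Zc = np.sqrt(rho_eff * K_eff) / phi      # characteristic impedance of the foam
    kc = omega * np.sqrt(rho_eff / K_eff)    # complex wavenumber in the foam
    Zs = -1j * Zc / np.tan(kc * d)           # surface impedance, rigid backing
    R = (Zs - Z0) / (Zs + Z0)                # pressure reflection coefficient
    return 1.0 - np.abs(R)**2

print(absorption(1000.0, rho_eff=1.5 + 0.5j, K_eff=1.4e5 + 1e4j, phi=0.97))
```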
Forward and backward uncertainty propagation: an oxidation ditch modelling example.
Abusam, A; Keesman, K J; van Straten, G
2003-01-01
In the field of water technology, forward uncertainty propagation is used frequently, whereas backward uncertainty propagation is rarely used. In forward uncertainty analysis, one moves from a given (or assumed) parameter subspace towards the corresponding distribution of the output or objective function. In backward uncertainty propagation, one moves in the reverse direction, from the distribution function towards the parameter subspace. Backward uncertainty propagation, which is a generalisation of parameter estimation error analysis, gives information essential for designing experimental or monitoring programmes and for tighter bounding of parameter uncertainty intervals. The procedure for carrying out backward uncertainty propagation is illustrated in this technical note by a worked example for an oxidation ditch wastewater treatment plant. The results demonstrate that essential information can be obtained by carrying out backward uncertainty propagation analysis.
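Forward propagation, as characterized here, has a direct Monte Carlo reading: sample the given (or assumed) parameter subspace and push each sample through the model. The sketch below uses a made-up two-parameter model, not the oxidation ditch model of this note.

```python
# Sketch of forward uncertainty propagation by Monte Carlo sampling.
# The two-parameter model g() is a stand-in, not the oxidation ditch model.
import numpy as np

rng = np.random.default_rng(0)

def g(theta):
    """Toy output function of a two-parameter model."""
    k, tau = theta
    return k * (1.0 - np.exp(-1.0 / tau))

# Given (or assumed) parameter subspace: a uniform box around nominal values.
samples = rng.uniform(low=[0.8, 0.5], high=[1.2, 2.0], size=(10_000, 2))
outputs = np.array([g(th) for th in samples])

# The resulting empirical distribution of the output/objective function:
print(outputs.mean(), np.percentile(outputs, [2.5, 97.5]))
```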
The effect of the dynamic wet troposphere on radio interferometric measurements
NASA Technical Reports Server (NTRS)
Treuhaft, R. N.; Lanyi, G. E.
1987-01-01
A statistical model of water vapor fluctuations is used to describe the effect of the dynamic wet troposphere on radio interferometric measurements. It is assumed that the spatial structure of refractivity is approximated by Kolmogorov turbulence theory and that the temporal fluctuations are caused by spatial patterns carried over a site by the wind; these assumptions are examined for the VLBI delay and delay rate observables. The results suggest that the delay rate measurement error is usually dominated by water vapor fluctuations, and water-vapor-induced VLBI parameter errors and correlations are determined as a function of the delay observable errors. A method is proposed for including the water vapor fluctuations in the parameter estimation method to obtain improved parameter estimates and parameter covariances.
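The two assumptions named here have compact standard statements, reproduced below for orientation (the paper's own notation may differ): a Kolmogorov structure function for the wet refractivity χ and a frozen-flow hypothesis for its advection at the wind velocity v.

```latex
D_{\chi}(r) \equiv \left\langle \left[ \chi(\mathbf{x}+\mathbf{r}) - \chi(\mathbf{x}) \right]^{2} \right\rangle \propto r^{2/3},
\qquad
\chi(\mathbf{x},\, t+\tau) = \chi(\mathbf{x} - \mathbf{v}\tau,\, t),
```

so temporal statistics at a site follow directly from the spatial structure function evaluated at the wind-carried displacement vτ.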
Effects of mass variation on structures of differentially rotating polytropic stars
NASA Astrophysics Data System (ADS)
Kumar, Sunil; Saini, Seema; Singh, Kamal Krishan
2018-07-01
A method is proposed for determining equilibrium structures and various physical parameters of differentially rotating polytropic models of stars, taking into account the effect of mass variation inside the star and on its equipotential surfaces. The law of differential rotation is assumed to have the form ω²(s) = b₁ + b₂s² + b₃s⁴. The proposed method utilizes the averaging approach of Kippenhahn and Thomas and the concept of Roche equipotentials to incorporate the effects of differential rotation on the equilibrium structures of polytropic stellar models. Mathematical expressions for determining the equipotential surfaces, volume, surface area, and other physical parameters are also obtained under the effects of mass variation inside the star. Some significant conclusions are drawn.
Brillouin light scattering study of Fe 15 Å/Pd x multilayers
NASA Astrophysics Data System (ADS)
From, M.; Cheng, Li; Altounian, Zaven
2004-03-01
Brillouin light scattering (BLS) measurements have been carried out on a series of sputtered Fe/Pd multilayers. The Fe thickness in all samples was 15 Å and the Pd spacer thickness ranged from 6 to 43 Å. We compared the composition and magnetic field dependence of the BLS spectra with a single-parameter fit of a new BLS model calculation by John Cochran (Phys. Rev. B 64 (2001) 134406). The data are consistent with a surface anisotropy fit parameter of K_S = 0.35 ± 0.05 ergs/cm², except at the thinnest Pd thickness, where it is perhaps not surprising that there is some discrepancy with the model since it assumes zero intermixing between the Fe and Pd layers.
Constraints On the Emission Geometries and Spin Evolution Of Gamma-Ray Millisecond Pulsars
NASA Technical Reports Server (NTRS)
Johnson, T. J.; Venter, C.; Harding, A. K.; Guillemot, L.; Smith, D. A.; Kramer, M.; Celik, O.; den Hartog, P. R.; Ferrara, E. C.; Hou, X.;
2014-01-01
Millisecond pulsars (MSPs) are a growing class of gamma-ray emitters. Pulsed gamma-ray signals have been detected from more than 40 MSPs with the Fermi Large Area Telescope (LAT). The wider radio beams and more compact magnetospheres of MSPs enable studies of emission geometries over a broader range of phase space than non-recycled radio-loud gamma-ray pulsars. We have modeled the gamma-ray light curves of 40 LAT-detected MSPs using geometric emission models assuming a vacuum retarded-dipole magnetic field. We modeled the radio profiles using a single-altitude hollow-cone beam, with a core component when indicated by polarimetry; however, for MSPs with gamma-ray and radio light curve peaks occurring at nearly the same rotational phase, we assume that the radio emission is co-located with the gamma rays and caustic in nature. The best-fit parameters and confidence intervals are determined using a maximum likelihood technique. We divide the light curves into three model classes, with gamma-ray peaks trailing (Class I), aligned (Class II), or leading (Class III) the radio peaks. Outer gap and slot gap (two-pole caustic) models best fit roughly equal numbers of Class I and II, while Class III are exclusively fit with pair-starved polar cap models. Distinguishing between the model classes based on typical derived parameters is difficult. We explore the evolution of the magnetic inclination angle with period and spin-down power, finding possible correlations. While the presence of significant off-peak emission can often be used as a discriminator between outer gap and slot gap models, a hybrid model may be needed.
Yu-Kang, Tu
2016-12-01
Network meta-analysis for multiple treatment comparisons has been a major development in evidence synthesis methodology. The validity of a network meta-analysis, however, can be threatened by inconsistency in evidence within the network. One particular issue is how to directly evaluate the inconsistency between direct and indirect evidence with regard to the difference in effects between two treatments. A Bayesian node-splitting model was first proposed, and a similar frequentist side-splitting model has been put forward recently. Yet, assigning the inconsistency parameter to one or the other of the two treatments, or splitting the parameter symmetrically between the two treatments, can yield different results when multi-arm trials are involved in the evaluation. We aimed to show that a side-splitting model can be viewed as a special case of the design-by-treatment interaction model, and that different parameterizations correspond to different design-by-treatment interactions. We demonstrated how to evaluate the side-splitting model using the arm-based generalized linear mixed model, and an example data set was used to compare results from the arm-based models with those from the contrast-based models. The three parameterizations of side-splitting make slightly different assumptions: the symmetrical method assumes that both treatments in a treatment contrast contribute to inconsistency between direct and indirect evidence, whereas the other two parameterizations assume that only one of the two treatments contributes to this inconsistency. With this understanding in mind, meta-analysts can then make a choice about how to implement the side-splitting method for their analysis. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Suzuki, Takashi; Takao, Hiroyuki; Suzuki, Takamasa; Suzuki, Tomoaki; Masuda, Shunsuke; Dahmani, Chihebeddine; Watanabe, Mitsuyoshi; Mamori, Hiroya; Ishibashi, Toshihiro; Yamamoto, Hideki; Yamamoto, Makoto; Murayama, Yuichi
2017-01-01
In most simulations of intracranial aneurysm hemodynamics, blood is assumed to be a Newtonian fluid. However, blood is a non-Newtonian fluid, and its viscosity profile differs among individuals, so the common viscosity assumption may not be valid for all patients. This study tests the suitability of the common viscosity assumption. Blood viscosity datasets were obtained from two healthy volunteers. Three simulations were performed for each of three different-sized aneurysms: two using measured-value-based non-Newtonian models and one using a Newtonian model. The parameters proposed to predict an aneurysmal rupture obtained using the non-Newtonian models were compared with those obtained using the Newtonian model. The largest difference (25%) in the normalized wall shear stress (NWSS) was observed in the smallest aneurysm. Comparing the two non-Newtonian models in terms of their NWSS ratios relative to the Newtonian model, the ratios differed by 17.3%. Irrespective of aneurysm size, computational fluid dynamics simulations with either the common Newtonian or a non-Newtonian viscosity assumption can yield hemodynamic parameters, such as the NWSS, that differ from those of a patient-specific viscosity model.
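The abstract does not specify the functional form fitted to the measured viscosity data; as an illustration of how a shear-thinning law enters such a simulation, here is a minimal sketch using the Carreau model, a common choice for blood, with typical literature constants rather than the volunteers' measured values.

```python
# Sketch of a Carreau shear-thinning viscosity law, often used for blood.
# Parameter values are illustrative literature defaults, not measured data.
import numpy as np

def carreau_viscosity(gamma_dot, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
    """Apparent viscosity (Pa s) as a function of shear rate (1/s)."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * gamma_dot)**2)**((n - 1.0) / 2.0)

shear_rates = np.logspace(-2, 3, 6)        # 0.01 to 1000 1/s
print(carreau_viscosity(shear_rates))      # viscosity approaches mu_inf at high shear
```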
Modulation instability in high power laser amplifiers.
Rubenchik, Alexander M; Turitsyn, Sergey K; Fedoruk, Michail P
2010-01-18
The modulation instability (MI) is one of the main factors responsible for the degradation of beam quality in high-power laser systems. The so-called B-integral restriction is commonly used as the criterion for MI control in passive optics devices. For amplifiers, the adiabatic model, which assumes locally the Bespalov-Talanov expression for MI growth, is commonly used to estimate the destructive impact of the instability. We present here the exact solution of MI development in amplifiers. We determine the parameters which control the effect of MI in amplifiers and calculate the MI growth rate as a function of those parameters. The safe range of operational parameters is presented. The results of the exact calculations are compared with the adiabatic model, and the range of validity of the latter is determined. We demonstrate that for practical situations the adiabatic approximation noticeably overestimates MI. The additional design margin for laser systems is quantified.
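For reference, the B-integral invoked above is the accumulated nonlinear phase along the propagation path; this is a standard definition stated for orientation, not a formula quoted from the paper:

```latex
B = \frac{2\pi}{\lambda} \int_{0}^{L} n_{2}\, I(z)\, dz ,
```

where n₂ is the nonlinear refractive index and I(z) the local intensity; bounding B bounds the growth that transverse perturbations can accumulate, which is why it serves as the MI control criterion in passive optics.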
Saturn systems holddown acoustic efficiency and normalized acoustic power spectrum.
NASA Technical Reports Server (NTRS)
Gilbert, D. W.
1972-01-01
Saturn systems field acoustic data are used to derive mid- and far-field prediction parameters for rocket engine noise. The data were obtained during Saturn vehicle launches at the Kennedy Space Center. The data base is a sorted set of acoustic data measured during the period 1961 through 1971 for Saturn system launches SA-1 through AS-509. The model assumes hemispherical radiation from a simple source located at the intersection of the longitudinal axis of each booster and the engine exit plane. The model parameters are evaluated only during vehicle holddown. The acoustic normalized power spectrum and efficiency for each system are isolated as a composite from the data using linear numerical methods; the specific definitions of each allow their separation. The resulting power spectra are nondimensionalized as a function of rocket engine parameters. The nondimensional Saturn system acoustic spectra and efficiencies are compared as a function of Strouhal number with power spectra from other systems.
Transmission Parameters of the 2001 Foot and Mouth Epidemic in Great Britain
Chis Ster, Irina; Ferguson, Neil M.
2007-01-01
Despite intensive ongoing research, key aspects of the spatial-temporal evolution of the 2001 foot and mouth disease (FMD) epidemic in Great Britain (GB) remain unexplained. Here we develop a Markov Chain Monte Carlo (MCMC) method for estimating epidemiological parameters of the 2001 outbreak for a range of simple transmission models. We make the simplifying assumption that infectious farms were completely observed in 2001, equivalent to assuming that farms that were proactively culled but not diagnosed with FMD were not infectious, even if some were infected. We estimate how transmission parameters varied through time, highlighting the impact of the control measures on the progression of the epidemic. We demonstrate statistically significant evidence for assortative contact patterns between animals of the same species. Predictive risk maps of the transmission potential in different geographic areas of GB are presented for the fitted models. PMID:17551582
NASA Technical Reports Server (NTRS)
Demarest, H. H., Jr.
1972-01-01
The elastic constants and the entire frequency spectrum were calculated up to high pressure for the alkali halides in the NaCl lattice, based on an assumed functional form of the interatomic potential. The quasiharmonic approximation is used to calculate the vibrational contribution to the pressure and the elastic constants at arbitrary temperature. By explicitly accounting for the effects of thermal and zero-point motion, the adjustable parameters in the potential are determined to a high degree of accuracy from the elastic constants and their pressure derivatives measured at zero pressure. The calculated Grüneisen parameter, the elastic constants, and their pressure derivatives are in good agreement with experimental results up to about 600 K. The model predicts that for some alkali halides the Grüneisen parameter may decrease monotonically with pressure, while for others it may increase with pressure after an initial decrease.
NASA Astrophysics Data System (ADS)
Bassiouni, Maoya; Higgins, Chad W.; Still, Christopher J.; Good, Stephen P.
2018-06-01
Vegetation controls on soil moisture dynamics are challenging to measure and translate into scale- and site-specific ecohydrological parameters for simple soil water balance models. We hypothesize that empirical probability density functions (pdfs) of relative soil moisture or soil saturation encode sufficient information to determine these ecohydrological parameters. Further, these parameters can be estimated through inverse modeling of the analytical equation for soil saturation pdfs, derived from the commonly used stochastic soil water balance framework. We developed a generalizable Bayesian inference framework to estimate ecohydrological parameters consistent with empirical soil saturation pdfs derived from observations at point, footprint, and satellite scales. We applied the inference method to four sites with different land cover and climate, assuming (i) an annual rainfall pattern and (ii) a wet-season rainfall pattern with a dry season of negligible rainfall. The Nash-Sutcliffe efficiencies of the analytical model's fit to soil observations ranged from 0.89 to 0.99. The coefficient of variation of the posterior parameter distributions ranged from less than 1% to 15%. Parameter identifiability was not significantly improved in the more complex seasonal model; however, small differences in parameter values indicate that the annual model may have absorbed dry-season dynamics. Parameter estimates were most constrained at scales and locations where soil water dynamics are more sensitive to the fitted ecohydrological parameters of interest. In these cases, model inversion converged more slowly but ultimately provided a better goodness of fit and lower uncertainty. Results were robust using as few as 100 daily observations randomly sampled from the full records, demonstrating the advantage of analyzing soil saturation pdfs instead of time series to estimate ecohydrological parameters from sparse records. Our work combines modeling and empirical approaches in ecohydrology and provides a simple framework to obtain scale- and site-specific analytical descriptions of soil moisture dynamics consistent with soil moisture observations.
NASA Astrophysics Data System (ADS)
Shue, J.; Jhuang, B.; Song, P.; Safrankova, J.; Nemecek, Z.; Russell, C. T.; Chen, S.
2008-12-01
The solar wind dynamic pressure is reduced when the solar wind flows around the magnetosphere due to the diversion of the flow. The magnetopause is the boundary where the reduced dynamic pressure is balanced by the magnetic pressure of the magnetosphere compressed by the solar wind. The size and shape of the magnetopause have long been considered among the most important parameters in solar-terrestrial physics. Previous models of the size and shape of the magnetopause often assumed axisymmetry of the magnetopause with respect to the Sun-Earth line. With a large number of magnetopause crossings by ISEE-1 and -2, AMPTE/IRM, Hawkeye, Geotail, Interball-1, and Magion-4, we are able to consider the asymmetry of the magnetopause. In the Shue et al. [1997] model, the magnetopause was described by two parameters, r0 and alpha, representing the subsolar standoff distance and the flaring level of the magnetopause, respectively. Parameter alpha was assumed to be independent of phi in the Shue et al. [1997] model, where phi is the angle between the Z axis and the projection of the radial vector of the magnetopause onto the YZ plane. In the present study we allow alpha to be a function of phi. We separate crossings with different values of phi and fit those in each bin to the functional form proposed by Shue et al. [1997]. We find that the magnetopause is symmetric in the dawn-dusk direction for northward IMF. However, its size on the dawnside becomes larger when the IMF is southward. The function of alpha in terms of phi can be combined with the 2-D Shue et al. [1997] model into a 3-D magnetopause model. (Shue, J.-H., J. K. Chao, H. C. Fu, C. T. Russell, P. Song, K. K. Khurana, and H. J. Singer, A new functional form to study the solar wind control of the magnetopause size and shape, J. Geophys. Res., 102, 9497, 1997.)
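For reference, the Shue et al. [1997] functional form that the present study generalizes is, in the notation above,

```latex
r(\theta) = r_{0} \left( \frac{2}{1 + \cos\theta} \right)^{\alpha},
\qquad \alpha \rightarrow \alpha(\phi) \ \text{in the present study},
```

where θ is measured from the Sun-Earth line; the functional form is from the cited paper, while letting the flaring exponent α depend on φ is this study's extension.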
A Bayesian approach to modeling 2D gravity data using polygon states
NASA Astrophysics Data System (ADS)
Titus, W. J.; Titus, S.; Davis, J. R.
2015-12-01
We present a Bayesian Markov chain Monte Carlo (MCMC) method for the 2D gravity inversion of a localized subsurface object with constant density contrast. Our models have four parameters: the density contrast, the number of vertices in a polygonal approximation of the object, an upper bound on the ratio of the perimeter squared to the area, and the vertices of a polygon container that bounds the object. Reasonable parameter values can be estimated prior to inversion using a forward model and geologic information. In addition, we assume that the field data have a common random uncertainty that lies between two bounds but no systematic uncertainty. Finally, we assume that there is no uncertainty in the spatial locations of the measurement stations. For any set of model parameters, we use MCMC methods to generate an approximate probability distribution of polygons for the object. We then compute various probability distributions for the object, including the variance between the observed and predicted fields (an important quantity in the MCMC method), the area, the center of area, and the occupancy probability (the probability that a spatial point lies within the object). In addition, we compare the probabilities of different models using parallel tempering, a technique which also mitigates trapping in the local optima that can occur in certain model geometries. We apply our method to several synthetic data sets generated from objects of varying shape and location. We also analyze a natural data set collected across the Rio Grande Gorge Bridge in New Mexico, where the object (i.e., the air below the bridge) is known and the canyon is approximately 2D. Although there are many ways to view the results, the occupancy probability proves quite powerful. We also find that the choice of the container is important. In particular, large containers should be avoided, because the more closely a container confines the object, the better the predictions match the properties of the object.
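The accept/reject core of such a polygon-state MCMC can be sketched compactly. Everything below is schematic: forward_gravity() and propose() stand in for the authors' forward model and proposal, and the Gaussian misfit and perimeter²/area bound merely illustrate the stated prior constraints.

```python
# Schematic Metropolis update for a polygon-state gravity inversion.
import numpy as np

rng = np.random.default_rng(1)

def area(p):
    """Polygon area by the shoelace formula; p is an (n, 2) vertex array."""
    x, y = p[:, 0], p[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def perimeter(p):
    """Total edge length of the closed polygon."""
    return np.linalg.norm(np.roll(p, -1, axis=0) - p, axis=1).sum()

def metropolis_step(poly, data, sigma, forward_gravity, propose, ratio_bound):
    """One Metropolis update of the polygon state."""
    cand = propose(poly, rng)                       # e.g., perturb one vertex
    if perimeter(cand)**2 / area(cand) > ratio_bound:
        return poly                                 # reject: violates shape prior
    def log_like(p):
        r = data - forward_gravity(p)               # field residuals at stations
        return -0.5 * np.sum(r**2) / sigma**2       # Gaussian misfit
    if np.log(rng.uniform()) < log_like(cand) - log_like(poly):
        return cand                                 # accept candidate polygon
    return poly
```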
Study of the ablative effects on tektites. [wake shielding during atmospheric entry]
NASA Technical Reports Server (NTRS)
Sepri, P.; Chen, K. K.
1976-01-01
Equations are presented which provide approximate parameters describing surface heating and tektite deceleration during atmospheric passage. Numerical estimates of these parameters using typical initial and ambient conditions support the conclusion that the commonly assumed trajectories would not have produced some of the observed surface markings. It is suggested that tektites did not enter the atmosphere singly but rather in a swarm dense enough to afford wake shielding, according to a proposed shock-envelope model. A further aerodynamic mechanism is described which is compatible with the hemispherical pits occurring on tektite surfaces.
PUSHing core-collapse simulations to explosion
NASA Astrophysics Data System (ADS)
Fröhlich, C.; Perego, A.; Hempe, M.; Ebinger, K.; Eichler, M.; Casanova, J.; Liebendörfer, M.; Thielemann, F.-K.
2018-01-01
We report on the PUSH method for artificially triggering core-collapse supernova explosions of massive stars in spherical symmetry. The PUSH method increases the energy deposition in the gain region proportionally to the heavy-flavor neutrino fluxes. We summarize the parameter dependence of the method and calibrate PUSH to reproduce SN 1987A observables. We identify a best-fit progenitor and set of parameters that fit the explosion properties of SN 1987A, assuming 0.1 M⊙ of fallback. For the explored progenitor range of 18-21 M⊙, we find correlations between explosion properties and the compactness of the progenitor model.
Combined natural gamma ray spectral/litho-density measurements applied to complex lithologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quirein, J.A.; Gardner, J.S.; Watson, J.T.
1982-09-01
Well log data have long been used to provide lithological descriptions of complex formations. Historically, most of the approaches used have been restrictive because they assumed fixed, known, and distinct lithologies for specified zones. The approach described in this paper attempts to alleviate this restriction by estimating the ''probability of a model'' for the models suggested as most likely by the reservoir geology. Lithological variables are simultaneously estimated from response equations for each model and combined in accordance with the probability of each respective model. The initial application of this approach has been the estimation of calcite, quartz, and dolomite in the presence of clays, feldspars, anhydrite, or salt. Estimations were made by using natural gamma ray spectra, photoelectric effect, bulk density, and neutron porosity information. For each model, response equations and parameter selections are obtained from the thorium vs potassium crossplot and the apparent matrix density vs apparent volumetric photoelectric cross section crossplot. The thorium and potassium response equations are used to estimate the volumes of clay and feldspar. The apparent matrix density and volumetric cross section response equations can then be corrected for the presence of clay and feldspar. A test ensures that the clay correction lies within the limits for the assumed lithology model. Results are presented for varying lithologies. For one test well, 6,000 feet were processed in a single pass, without zoning and without adjusting more than one parameter pick. The program recognized sand, limestone, dolomite, clay, feldspar, anhydrite, and salt without analyst intervention.
Effects of anisotropies in turbulent magnetic diffusion in mean-field solar dynamo models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pipin, V. V.; Kosovichev, A. G.
2014-04-10
We study how anisotropies of turbulent diffusion affect the evolution of large-scale magnetic fields and the dynamo process on the Sun. The effect of anisotropy is calculated in a mean-field magnetohydrodynamics framework, assuming that triple correlations provide relaxation to the turbulent electromotive force (the so-called 'minimal τ-approximation'). We examine two types of mean-field dynamo models: the well-known benchmark flux-transport model and a distributed-dynamo model with a subsurface rotational shear layer. For both models, we investigate the effects of the double- and triple-cell meridional circulation recently suggested by helioseismology and numerical simulations. To characterize the anisotropy effects, we introduce a parameter of anisotropy as the ratio of the radial and horizontal intensities of turbulent mixing. It is found that the anisotropy affects the distribution of magnetic fields inside the convection zone. The concentration of the magnetic flux near the bottom and top boundaries of the convection zone is greater when the anisotropy is stronger. It is shown that the critical dynamo number and the dynamo period approach constant values for large values of the anisotropy parameter. The anisotropy reduces the overlap of toroidal magnetic fields generated in subsequent dynamo cycles in the time-latitude 'butterfly' diagram. If we assume that sunspots are formed in the vicinity of the subsurface shear layer, then the distributed-dynamo model with anisotropic diffusivity satisfies the observational constraints from helioseismology and is consistent with the value of effective turbulent diffusion estimated from the dynamics of surface magnetic fields.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dixon, John A.
2010-02-10
The SUSY breaking in Cybersusy is proportional to the VEV that breaks the gauge symmetry SU(2)xU(1) down to U(1), and it is rather specific to models like the SSM. Assuming full breaking, as explained below, for the leptons, Cybersusy predicts a spectrum of SUSY breaking that is in accord with experimental results so far. In particular, for the choice of parameters below, Cybersusy predicts that the lowest-mass superpartner for the charged leptons is a charged vector boson lepton (the Velectron), which has a mass of 316 GeV. The Selectron has a mass of 771 GeV for that choice of parameters. The theory also leads to a zero cosmological constant after SUSY breaking. The mechanism generates equations that restrict models like the SSM. This version of this paper incorporates recent results and changes discovered subsequent to the talk.
Fundamental Rotorcraft Acoustic Modeling From Experiments (FRAME)
NASA Technical Reports Server (NTRS)
Greenwood, Eric
2011-01-01
A new methodology is developed for the construction of helicopter source noise models for use in mission-planning tools from experimental measurements of helicopter external noise radiation. The models are constructed by applying a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This new method allows the individual rotor harmonic noise sources to be identified and characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made across operating conditions from a small number of measurements. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.
An improved computer model for prediction of axial gas turbine performance losses
NASA Technical Reports Server (NTRS)
Jenkins, R. M.
1984-01-01
The calculation model performs a rapid preliminary pitchline optimization of axial gas turbine annular flowpath geometry, as well as an initial estimate of blade profile shapes, given only a minimum of thermodynamic cycle requirements. No geometric parameters need be specified. The following preliminary design data are determined: (1) the optimum flowpath geometry, within mechanical stress limits; (2) initial estimates of cascade blade shapes; and (3) predictions of expected turbine performance. The model uses an inverse calculation technique whereby blade profiles are generated by designing channels to yield a specified velocity distribution on the two walls. Velocity distributions are then used to calculate the cascade loss parameters. Calculated blade shapes are used primarily to determine whether the assumed velocity loadings are physically realistic. Model verification is accomplished by comparison of predicted turbine geometry and performance with an array of seven NASA single-stage axial gas turbine configurations.
Experimental Investigation of the Formation of Complex Craters
NASA Astrophysics Data System (ADS)
Martellato, E.; Dörfler, M. A.; Schuster, B.; Wünnemman, K.; Kenkmann, T.
2017-09-01
The formation of complex impact craters is still poorly understood, because standard material models fail to explain the gravity-driven collapse at the observed size-range of a bowl-shaped transient crater into a flat-floored crater structure with a central peak or ring and terraced rim. To explain such a collapse the so-called Acoustic Fluidization (AF) model has been proposed. The AF assumes that heavily fractured target rocks surrounding the transient crater are temporarily softened by an acoustic field in the wake of an expanding shock wave generated upon impact. The AF has been successfully employed in numerous modeling studies of complex crater formation; however, there is no clear relationship between model parameters and observables. In this study, we present preliminary results of laboratory experiments aiming at relating the AF parameters to observables such as the grain size, average wave length of the acoustic field and its decay time τ relative to the crater formation time.
NASA Astrophysics Data System (ADS)
Günther, U.; Moniz, P.; Zhuk, A.
2003-08-01
We consider multidimensional gravitational models with a nonlinear scalar curvature term and form fields in the action functional. In our scenario it is assumed that the higher dimensional spacetime undergoes a spontaneous compactification to a warped product manifold. Particular attention is paid to models with quadratic scalar curvature terms and a Freund-Rubin-like ansatz for solitonic form fields. It is shown that for certain parameter ranges the extra dimensions are stabilized. In particular, stabilization is possible for any sign of the internal space curvature, the bulk cosmological constant, and of the effective four-dimensional cosmological constant. Moreover, the effective cosmological constant can satisfy the observable limit on the dark energy density. Finally, we discuss the restrictions on the parameters of the considered nonlinear models and how they follow from the connection between the D-dimensional and the four-dimensional fundamental mass scales.
A power-law coupled three-form dark energy model
NASA Astrophysics Data System (ADS)
Yao, Yan-Hong; Yan, Yang-Jie; Meng, Xin-He
2018-02-01
We consider a field theory model of coupled dark energy which treats dark energy as a three-form field and dark matter as a spinor field. By assuming the effective mass of dark matter to be a power-law function of the three-form field and neglecting the potential term of dark energy, we obtain three solutions of the autonomous system of evolution equations: a de Sitter attractor, a tracking solution, and an approximate solution. To constrain the strength of the coupling, we confront the model with the latest Type Ia supernova, baryon acoustic oscillation, and cosmic microwave background radiation observations, with the conclusion that the combination of these three databases, marginalized over the present dark matter density parameter Ω_m0 and the present three-form field κX₀, gives stringent constraints on the coupling constant, -0.017 < λ < 0.047 (2σ confidence level), by which we present the model's applicable parameter range.
A new method of measurement of tension on a moving magnetic tape
NASA Technical Reports Server (NTRS)
Kurtinaytis, A. K.; Lauzhinskas, Y. S.
1973-01-01
The possibility of no-contact measurement of the tension on a moving magnetic tape, assuming the tape is uniform, is discussed. A scheme for calculation of the natural frequency of transverse vibrations of magnetic tape is shown. Mathematical models are developed to show the relationships of the parameters. The method is applicable to the analysis of accurate tape feed mechanisms design.
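The physical basis for no-contact tension measurement is the vibrating-string relation for a uniform tape; the standard result (stated here as background, with ρ_ℓ the linear mass density and L the free span between guides) is:

```latex
f_{n} = \frac{n}{2L} \sqrt{\frac{T}{\rho_{\ell}}}
\qquad \Longrightarrow \qquad
T = 4 \rho_{\ell} L^{2} f_{1}^{2} ,
```

so detecting the fundamental transverse frequency f₁ of the moving tape yields the tension T without mechanical contact.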
Fractal scaling laws of black carbon aerosol and their influence on spectral radiative properties
NASA Astrophysics Data System (ADS)
Tiwari, S.; Chakrabarty, R. K.; Heinson, W.
2016-12-01
Current estimates of the direct radiative forcing for black carbon (BC) aerosol span a poorly constrained range between 0.2 and 1 W m⁻². To reduce this large uncertainty, tighter constraints need to be placed on BC's key wavelength-dependent optical properties, namely the absorption (MAC) and scattering (MSC) cross sections per unit mass and the hemispherical upscatter fraction (β, a dimensionless scattering directionality parameter). These parameters are very sensitive to changes in particle morphology and complex refractive index n. Their interplay determines the magnitude of net positive or negative radiative forcing efficiencies. The current approach among climate modelers for estimating MAC and MSC values of BC is to compute its optical cross sections assuming spherical particle morphology with a homogeneous, constant-valued refractive index in the visible solar spectrum; β values are typically assumed to be constant across this spectrum. This approach, while computationally inexpensive and convenient, ignores the inherent fractal morphology of BC, its scaling behaviors, and the resulting optical properties. In this talk, I will present recent results from my laboratory on the determination of the fractal scaling laws of BC aggregate packing density and complex refractive index for sizes spanning three orders of magnitude, and their effects on the spectral (visible-infrared) scaling of MAC, MSC, and β values. Our experiments synergistically combined novel BC generation techniques, aggregation models, contact-free multi-wavelength optical measurements, and electron microscopy analysis. The scale dependence of n on aggregate size followed power-law exponents of -1.4 and -0.5 for sub- and super-micron size aggregates, respectively. The spherical Rayleigh-optics approximation limits, used by climate models for spectral extrapolation of BC optical cross sections and deconvolution of multi-species mixing ratios, are redefined using the concept of the phase shift parameter. I will highlight the importance of size-dependent β values and their role in offsetting the strong light-absorbing nature of BC. Finally, the errors introduced into forcing-efficiency calculations of BC by assuming a spherical homogeneous morphology will be evaluated.
Modeling the Atmospheric Phase Effects of a Digital Antenna Array Communications System
NASA Technical Reports Server (NTRS)
Tkacenko, A.
2006-01-01
In an antenna array system such as that used in the Deep Space Network (DSN) for satellite communication, it is often necessary to account for the effects of the atmosphere. Typically, the atmosphere induces amplitude and phase fluctuations on the transmitted downlink signal that invalidate the assumed stationarity of the signal model. The degree to which these perturbations affect the stationarity of the model depends both on parameters of the atmosphere, including wind speed and turbulence strength, and on parameters of the communication system, such as the sampling rate used. In this article, we focus on modeling the atmospheric phase fluctuations in a digital antenna array communications system. Based on a continuous-time statistical model for the atmospheric phase effects, we show how to obtain a related discrete-time model by sampling the continuous-time process. The effects of the nonstationarity of the resulting signal model are investigated using the sample matrix inversion (SMI) algorithm for minimum mean-squared error (MMSE) equalization of the received signal.
Mad cows and computer models: the U.S. response to BSE.
Ackerman, Frank; Johnecheck, Wendy A
2008-01-01
The proportion of slaughtered cattle tested for BSE is much smaller in the U.S. than in Europe and Japan, leaving the U.S. heavily dependent on statistical models to estimate both the current prevalence and the spread of BSE. We examine the models relied on by USDA, finding that the prevalence model provides only a rough estimate, due to limited data availability. Reassuring forecasts from the model of the spread of BSE depend on the arbitrary constraint that worst-case values are assumed by only one of 17 key parameters at a time. In three of the six published scenarios with multiple worst-case parameter values, there is at least a 25% probability that BSE will spread rapidly. In public policy terms, reliance on potentially flawed models can be seen as a gamble that no serious BSE outbreak will occur. Statistical modeling at this level of abstraction, with its myriad, compound uncertainties, is no substitute for precautionary policies to protect public health against the threat of epidemics such as BSE.
NASA Astrophysics Data System (ADS)
Montzka, S. A.; Butler, J. H.; Dutton, G.; Thompson, T. M.; Hall, B.; Mondeel, D. J.; Elkins, J. W.
2005-05-01
The El Niño-Southern Oscillation (ENSO) dominates interannual climate variability and therefore plays a key role in seasonal-to-interannual prediction. Much is known by now about the main physical mechanisms that give rise to and modulate ENSO, but the values of several parameters that enter these mechanisms are an important unknown. We apply extended Kalman filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean-atmosphere model of ENSO. The coupled model consists of an upper-ocean, reduced-gravity model of the tropical Pacific and a steady-state atmospheric response to the sea surface temperature (SST). The model errors are assumed to be mainly in the atmospheric wind stress, and the assimilated data are equatorial Pacific SSTs. Model behavior is very sensitive to two key parameters: (i) μ, the ocean-atmosphere coupling coefficient between SST and wind stress anomalies; and (ii) δs, the surface-layer coefficient. Previous work has shown that δs determines the period of the model's self-sustained oscillation, while μ measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward-propagating mode. Estimation of these parameters is tested first on synthetic data and allows us to recover the delayed-oscillator mode starting from model parameter values that correspond to the westward-propagating case. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean-atmosphere GCMs will be discussed.
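Joint state and parameter estimation with the EKF is usually implemented by state augmentation, appending μ and δs to the state vector with random-walk dynamics. A minimal sketch of the generic mechanics follows; f, F_jac, and H are placeholders, not the coupled ENSO model of the abstract.

```python
# Sketch of EKF joint state/parameter estimation via state augmentation.
# f(), F_jac(), and H are placeholders; mu and delta_s ride along in the
# state vector with random-walk (identity) dynamics.
import numpy as np

def ekf_step(z, x, P, f, F_jac, H, Q, R):
    """One predict/update cycle on the augmented state x = [state, mu, delta_s]."""
    # Predict: propagate the augmented state and its covariance.
    x_pred = f(x)
    F = F_jac(x)                                   # Jacobian of f at x
    P_pred = F @ P @ F.T + Q
    # Update with the SST observation z.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```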
Quantile Regression Models for Current Status Data
Ou, Fang-Shu; Zeng, Donglin; Cai, Jianwen
2016-01-01
Current status data arise frequently in demography, epidemiology, and econometrics, where the exact failure time cannot be determined but is only known to have occurred before or after a known observation time. We propose a quantile regression model to analyze current status data, because it does not require distributional assumptions and the coefficients can be interpreted as direct regression effects on the distribution of failure time in the original time scale. Our model assumes that the conditional quantile of failure time is a linear function of covariates. We assume conditional independence between the failure time and the observation time. An M-estimator is developed for parameter estimation; it is computed using the concave-convex procedure, and its confidence intervals are constructed using a subsampling method. Asymptotic properties of the estimator are derived and proven using modern empirical process theory. The small-sample performance of the proposed method is demonstrated via simulation studies. Finally, we apply the proposed method to analyze data from the Mayo Clinic Study of Aging. PMID:27994307
Glover, Susan M
2009-10-01
Traditionally, models of resource extraction assume individuals act as if they form strategies based on complete information. In reality, gathering information about environmental parameters may be costly. An efficient information gathering strategy is to observe the foraging behavior of others, termed public information. However, media can exploit this strategy by appearing to supply accurate information while actually shaping information to manipulate people to behave in ways that benefit the media or their clients. Here, I use Central Place Foraging (CPF) models to investigate how newspaper propaganda shaped ore foraging strategies of late nineteenth-century Colorado silver prospectors. Data show that optimistic values of silver ore published in local newspapers led prospectors to place mines at a much greater distance than was profitable. Models assuming perfect information neglect the possibility of misinformation among investors, and may underestimate the extent and degree of human impacts on areas of resource extraction.
Evolution Models with Conditional Mutation Rates: Strange Plateaus in Population Distribution
NASA Astrophysics Data System (ADS)
Saakian, David B.
2017-08-01
Cancer is related to clonal evolution with a strongly nonlinear, collective behavior. Here we investigate a slightly advanced version of the popular Crow-Kimura evolution model, suggested recently, by simply assuming a conditional mutation rate. We investigated the steady-state solution and found a highly intriguing plateau in the distribution. There are selective and nonselective phases, with a rather narrow plateau in the distribution at the peak in the first phase, and a wide plateau for many Hamming classes (a collection of genomes with the same number of mutations from the reference genome) in the second phase. We analytically solved the steady state distribution in the selective and nonselective phases, calculating the widths of the plateaus. Numerically, we also found an intermediate phase with several plateaus in the steady-state distribution, related to large finite-genome-length corrections. We assume that the newly observed phenomena should exist in other versions of evolution dynamics when the parameters of the model are conditioned to the population distribution.
Nonlinear saturation of the slab ITG instability and zonal flow generation with fully kinetic ions
NASA Astrophysics Data System (ADS)
Miecnikowski, Matthew T.; Sturdevant, Benjamin J.; Chen, Yang; Parker, Scott E.
2018-05-01
Fully kinetic turbulence models are of interest for their potential to validate or replace gyrokinetic models in plasma regimes where the gyrokinetic expansion parameters are marginal. Here, we demonstrate fully kinetic ion capability by simulating the growth and nonlinear saturation of the ion-temperature-gradient instability in shearless slab geometry assuming adiabatic electrons and including zonal flow dynamics. The ion trajectories are integrated using the Lorentz force, and the cyclotron motion is fully resolved. Linear growth and nonlinear saturation characteristics show excellent agreement with analogous gyrokinetic simulations across a wide range of parameters. The fully kinetic simulation accurately reproduces the nonlinearly generated zonal flow. This work demonstrates nonlinear capability, resolution of weak gradient drive, and zonal flow physics, which are critical aspects of modeling plasma turbulence with full ion dynamics.
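Integrating ion trajectories with the Lorentz force while fully resolving the cyclotron motion is commonly done with the Boris push; whether this particular code uses it is not stated, so the sketch below is a generic illustration with uniform stand-in fields.

```python
# Minimal Boris push: one full velocity and position step per call.
# E and B are fixed vectors standing in for interpolated field values.
import numpy as np

def boris_push(x, v, E, B, q, m, dt):
    """Advance a particle of charge q and mass m by one Boris step."""
    qmdt2 = q * dt / (2.0 * m)
    v_minus = v + qmdt2 * E                  # first half electric kick
    t = qmdt2 * B                            # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # full magnetic rotation
    v_new = v_plus + qmdt2 * E               # second half electric kick
    return x + v_new * dt, v_new

x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])    # m, m/s
B = np.array([0.0, 0.0, 1.0])                      # T
E = np.zeros(3)                                    # V/m
for _ in range(10):                                # resolves the gyro-orbit
    x, v = boris_push(x, v, E, B, q=1.6e-19, m=1.67e-27, dt=1e-9)
```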
Hadorn, Daniela C; Haracic, Sabina Seric; Stärk, Katharina DC
2008-01-01
Background: Globalization and subsequent growth in international trade in animals and animal products has increased the importance of international disease reporting. Efficient and reliable surveillance systems are needed in order to document the disease status of a population at a given time. In this context, passive surveillance plays an important role in early warning systems. However, it is not yet routinely integrated in the assessment of disease surveillance systems, because factors such as the disease awareness (DA) of people reporting suspect cases influence the detection performance of passive surveillance. In this paper, we used scenario tree methodology to evaluate and compare the quality and benefit of abortion testing (ABT) for Brucella melitensis (Bm) between the disease-free situation in Switzerland (CH) and a hypothetical disease-free situation in Bosnia and Herzegovina (BH), taking into account DA levels assumed for the current endemic situation in BH. Results: The structure and input parameters of the scenario tree were identical for CH and BH with the exception of the population data for small ruminants and the DA of farmers and veterinarians. The sensitivity analysis of the stochastic scenario tree model showed that the small ruminant population structure and the DA of farmers were important influential parameters with regard to the unit sensitivity of ABT in both CH and BH. The DA of both farmers and veterinarians was assumed to be higher in BH than in CH due to the current endemic situation in BH. Although the same DA cannot necessarily be assumed for the modelled hypothetical disease-free situation as for the actual endemic situation, this shows the importance of the higher vigilance of people reporting suspect cases for the probability that an average unit processed in the ABT component would test positive. Conclusion: The actual sensitivity of passive surveillance approaches depends heavily on the context in which they are applied. Scenario tree modelling allows for the evaluation of such passive surveillance system components under an assumed disease-free situation. Despite data gaps, this is a real opportunity to compare different situations and to explore the consequences of changes that could be made. PMID:19099610
An Initial Non-Equilibrium Porous-Media Model for CFD Simulation of Stirling Regenerators
NASA Technical Reports Server (NTRS)
Tew, Roy C.; Simon, Terry; Gedeon, David; Ibrahim, Mounir; Rong, Wei
2006-01-01
The objective of this paper is to define empirical parameters for an initial thermal non-equilibrium porous-media model for use in Computational Fluid Dynamics (CFD) codes for simulation of Stirling regenerators. The two codes currently used at Glenn Research Center for Stirling modeling are Fluent and CFD-ACE. The codes' porous-media models are equilibrium models, which assume the solid matrix and the fluid are in thermal equilibrium. This is believed to be a poor assumption for Stirling regenerators; the 1-D regenerator models used in Stirling design employ non-equilibrium formulations and suggest that regenerator matrix and gas average temperatures can differ by several degrees at a given axial location and time during the cycle. Experimentally based information was used to define hydrodynamic dispersion, permeability, inertial coefficient, fluid effective thermal conductivity, and the fluid-solid heat transfer coefficient. Solid effective thermal conductivity was also estimated. Determination of model parameters was based on planned use in a CFD model of Infinia's Stirling Technology Demonstration Converter (TDC), which uses a random-fiber regenerator matrix. Emphasis is on the use of available data to define the empirical parameters needed in a thermal non-equilibrium porous-media model for Stirling regenerator simulation. Such a model has not yet been implemented by the authors or their associates.
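The core of a thermal non-equilibrium (two-temperature) porous-media model is a pair of coupled energy balances exchanging heat through a fluid-solid heat transfer coefficient. A zero-dimensional sketch of that coupling, with placeholder values rather than the empirically derived parameters discussed above:

```python
import numpy as np

# 0-D sketch of the two-temperature idea: separate gas and matrix energy
# balances coupled by a volumetric fluid-solid heat transfer coefficient,
# so the two temperatures can differ over the cycle. All values assumed.
h_fs = 5.0e4              # W/(m^3 K), volumetric fluid-solid heat transfer coefficient
C_f, C_s = 1.2e3, 2.0e6   # J/(m^3 K), volumetric heat capacities of gas and matrix

def rhs(T, t, omega=2.0 * np.pi * 30.0):   # 30 Hz cycle, a Stirling-like frequency
    Tf, Ts = T
    q = h_fs * (Ts - Tf)                   # W/m^3 exchanged from matrix to gas
    drive = 1.0e6 * np.sin(omega * t)      # cyclic gas heating standing in for advection
    return np.array([(q + drive) / C_f, -q / C_s])

T, dt = np.array([300.0, 300.0]), 1.0e-5
for n in range(100_000):                   # integrate 1 s of cycling
    T += dt * rhs(T, n * dt)
print("instantaneous gas-matrix temperature difference: %.2f K" % (T[0] - T[1]))
```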
Ocean data assimilation using optimal interpolation with a quasi-geostrophic model
NASA Technical Reports Server (NTRS)
Rienecker, Michele M.; Miller, Robert N.
1991-01-01
A quasi-geostrophic (QG) stream function is analyzed by optimal interpolation (OI) over a 59-day period in a 150-km-square domain off northern California. Hydrographic observations acquired over five surveys were assimilated into a QG open boundary ocean model. Assimilation experiments were conducted separately for individual surveys to investigate the sensitivity of the OI analyses to parameters defining the decorrelation scale of an assumed error covariance function. The analyses were intercompared through dynamical hindcasts between surveys. The best hindcast was obtained using the smooth analyses produced with assumed error decorrelation scales identical to those of the observed stream function. The rms difference between the hindcast stream function and the final analysis was only 23 percent of the observation standard deviation. The two sets of OI analyses were temporally smoother than the fields from statistical objective analysis and in good agreement with the only independent data available for comparison.
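The OI analysis step itself is compact: the background field is corrected by covariance-weighted innovations. A minimal one-dimensional sketch with an assumed Gaussian error covariance of decorrelation scale Ld (all values illustrative, not the study's settings):

```python
import numpy as np

def oi_analysis(xb, grid, obs, obs_loc, sigma_b, sigma_o, Ld):
    """Optimal interpolation: xa = xb + B H^T (H B H^T + R)^(-1) (y - H xb),
    with a Gaussian background-error covariance of decorrelation scale Ld."""
    Bgo = sigma_b**2 * np.exp(-0.5 * ((grid[:, None] - obs_loc[None, :]) / Ld)**2)
    Boo = sigma_b**2 * np.exp(-0.5 * ((obs_loc[:, None] - obs_loc[None, :]) / Ld)**2)
    R = sigma_o**2 * np.eye(len(obs))
    innovation = obs - np.interp(obs_loc, grid, xb)   # observation minus background
    return xb + Bgo @ np.linalg.solve(Boo + R, innovation)

grid = np.linspace(0.0, 150.0, 151)        # km, cf. the 150-km-square domain
xb = np.zeros_like(grid)                   # background stream function (arbitrary units)
obs_loc = np.array([30.0, 75.0, 120.0])    # station positions (assumed)
obs = np.array([1.0, -0.5, 0.8])
xa = oi_analysis(xb, grid, obs, obs_loc, sigma_b=1.0, sigma_o=0.3, Ld=25.0)
```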
Thermodynamic perturbation theory for fused sphere hard chain fluids using nonadditive interactions
NASA Astrophysics Data System (ADS)
Abu-Sharkh, Basel F.; Sunaidi, Abdallah; Hamad, Esam Z.
2004-03-01
A model is developed for the equation of state of fused chains based on Wertheim thermodynamic perturbation theory and nonadditive size interactions. The model also assumes that the structure (represented by the radial distribution function) of the fused chain fluid is the same as that of the touching hard sphere chain fluid. The model is completely based on spherical additive and nonadditive size interactions. The model has the advantage of offering good agreement with simulation data while at the same time being independent of fitted parameters. The model is most accurate for short chains, small values of Δ (slightly fused spheres) and at intermediate (liquidlike) densities.
Impacts of relative permeability on CO2 phase behavior, phase distribution, and trapping mechanisms
NASA Astrophysics Data System (ADS)
Moodie, N.; McPherson, B. J. O. L.; Pan, F.
2015-12-01
A critical aspect of geologic carbon storage, a carbon-emissions reduction method under extensive review and testing, is effective multiphase CO2 flow and transport simulation. Relative permeability is a flow parameter particularly critical for accurate forecasting of the multiphase behavior of CO2 in the subsurface. The relative permeability relationship assumed, and especially the irreducible saturation of the gas phase, greatly impacts predicted CO2 trapping mechanisms and long-term plume migration behavior. A primary goal of this study was to evaluate the impact of relative permeability on the efficacy of regional-scale CO2 sequestration models. To accomplish this, we built a 2-D vertical cross-section model of the San Rafael Swell area of east-central Utah. This model simulated injection of CO2 into a brine aquifer for 30 years. The well was then shut in and the CO2 plume behavior monitored for another 970 years. We evaluated five different relative permeability relationships to quantify their relative impacts on the forecasted flow results of the model, with all other parameters maintained uniform and constant. Results of this analysis suggest that CO2 plume movement and behavior depend significantly on the specific relative permeability formulation assigned, including the assumed irreducible saturation values of CO2 and brine. More specifically, different relative permeability relationships translate to significant differences in CO2 plume behavior and corresponding trapping mechanisms.
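For orientation, a common parameterization among such relationships is the Corey form, in which the irreducible (residual) saturations set the mobile saturation window; a short sketch with assumed endpoints and exponents, not necessarily one of the five relationships compared in the study:

```python
import numpy as np

def corey_krel(Sg, Sgr=0.2, Slr=0.3, ng=2.0, nl=4.0):
    """Corey-type relative permeability curves.
    Sg: gas (CO2) saturation; Sgr, Slr: irreducible gas and brine saturations;
    ng, nl: Corey exponents. All parameter values are illustrative."""
    Se_g = np.clip((Sg - Sgr) / (1.0 - Sgr - Slr), 0.0, 1.0)   # effective gas saturation
    Se_l = np.clip((1.0 - Sg - Slr) / (1.0 - Sgr - Slr), 0.0, 1.0)
    return Se_g**ng, Se_l**nl                                   # kr_gas, kr_brine

Sg = np.linspace(0.0, 0.7, 8)
kr_g, kr_l = corey_krel(Sg)
print(np.round(kr_g, 3), np.round(kr_l, 3))
```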
NASA Astrophysics Data System (ADS)
de Farias Aires, Juarez Everton; da Silva, Wilton Pereira; de Almeida Farias Aires, Kalina Lígia Cavalcante; da Silva Júnior, Aluízio Freire; da Silva e Silva, Cleide Maria Diniz Pereira
2018-04-01
The main objective of this study is to present a numerical liquid-diffusion model for describing the convective drying of apple slices pretreated by osmotic dehydration, capable of predicting the spatial distribution of effective mass diffusivity values in apple slabs. Two models that use numerical solutions of the two-dimensional diffusion equation in Cartesian coordinates with a boundary condition of the third kind were proposed to describe drying. The first does not consider shrinkage of the product and assumes that the process parameters remain constant throughout convective drying. The second considers shrinkage of the product and assumes that the effective mass diffusivity of water varies according to the local value of the water content in the apple samples. Process parameters were estimated from experimental data through an optimizer coupled to the numerical solutions. The osmotic pretreatment did not reduce the drying time relative to the fresh fruits when the drying temperature was 40 °C; a temperature of 60 °C led to a reduction in the drying time. The model that considers the variations in the dimensions of the product and the variation in the effective mass diffusivity proved more adequate for describing the process.
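For the first model (constant parameters, no shrinkage), an explicit finite-difference scheme for the 2-D diffusion equation with a third-kind boundary condition can be written in a few lines; the following sketch uses placeholder values for the diffusivity, mass transfer coefficient, and equilibrium moisture content:

```python
import numpy as np

# Explicit finite differences for dM/dt = D (Mxx + Myy) on a slab cross-section
# with the third-kind condition -D dM/dn = h (M - M_eq) on all faces.
nx, ny = 21, 11
dx = dy = 1.0e-3                  # m, grid spacing
D = 5.0e-10                       # m^2/s, effective mass diffusivity (assumed)
h = 1.0e-7                        # m/s, convective mass transfer coefficient (assumed)
M_eq = 0.1                        # equilibrium moisture content (assumed)
dt = 0.2 * dx**2 / D              # satisfies the explicit stability limit

M = np.full((ny, nx), 1.0)        # initial moisture content (dry basis)
for _ in range(5000):
    Mi = M.copy()
    M[1:-1, 1:-1] += D * dt * (
        (Mi[1:-1, 2:] - 2.0 * Mi[1:-1, 1:-1] + Mi[1:-1, :-2]) / dx**2 +
        (Mi[2:, 1:-1] - 2.0 * Mi[1:-1, 1:-1] + Mi[:-2, 1:-1]) / dy**2)
    # Third-kind boundary by first-order ghost-node elimination on all faces:
    for edge, inner, spacing in ((0, 1, dy), (-1, -2, dy)):
        M[edge, :] = (M[inner, :] + (h * spacing / D) * M_eq) / (1.0 + h * spacing / D)
    for edge, inner, spacing in ((0, 1, dx), (-1, -2, dx)):
        M[:, edge] = (M[:, inner] + (h * spacing / D) * M_eq) / (1.0 + h * spacing / D)
print("mean moisture after %.0f s: %.3f" % (5000 * dt, M.mean()))
```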
NASA Astrophysics Data System (ADS)
Rasa, E.; Foglia, L.; Mackay, D. M.; Ginn, T. R.; Scow, K. M.
2009-12-01
A numerical groundwater fate and transport model was developed for analyses of data from field experiments evaluating the impacts of ethanol on the natural attenuation of benzene, toluene, ethylbenzene, and xylenes (BTEX) and methyl tert-butyl ether (MTBE) at Vandenberg Air Force Base, Site 60. We used the U.S. Geological Survey (USGS) groundwater flow (MODFLOW2000) and transport (MT3DMS) models in conjunction with the USGS universal inverse modeling code (UCODE) to jointly determine flow and transport parameters using bromide tracer data from multiple experiments in the same location. The key flow and transport parameters include the hydraulic conductivity of aquifer and aquitard layers, porosity, and transverse and longitudinal dispersivity. Aquifer and aquitard layers were assumed homogeneous in this study; therefore, the calibration parameters were not spatially variable within each layer. A total of 162 monitoring wells in seven transects perpendicular to the mean flow direction were monitored over the course of ten months, resulting in 1,766 bromide concentration data points and 149 head values used as observations for the inverse modeling. The results showed the significance of the concentration observation data in predicting the flow model parameters and indicated the sensitivity of the hydraulic conductivity of different zones in the aquifer, including the excavated former contaminant zone. The model has already been used to evaluate alternative designs for further experiments on in situ bioremediation of the tert-butyl alcohol (TBA) plume remaining at the site. We describe the recent applications of the model and future work, including adding reaction submodels to the calibrated flow model.
NASA Astrophysics Data System (ADS)
Buczkowski, M.; Fisz, J. J.
2008-07-01
In this paper the possibility of numerical data modelling in the case of angle- and time-resolved fluorescence spectroscopy is investigated. The asymmetric fluorescence probes are assumed to undergo restricted rotational diffusion in a host medium. This process is described quantitatively by the diffusion tensor and the aligning potential. The evolution of the system is expressed in terms of the Smoluchowski equation with an appropriate time-developing operator. A matrix representation of this operator is calculated, then symmetrized and diagonalized. The resulting propagator is used to generate a synthetic noisy data set that imitates the results of experimental measurements. The data set serves as the groundwork for the χ2 optimization, performed by a genetic algorithm followed by a gradient search, in order to recover the model parameters, which are the diagonal elements of the diffusion tensor, the aligning potential expansion coefficients, and the directions of the electronic dipole moments. The whole procedure properly identifies the model parameters, showing that the outlined formalism should be taken into account when analysing real experimental data.
NASA Astrophysics Data System (ADS)
Fang, Fei; Xia, Guanghui; Wang, Jianguo
2018-02-01
The nonlinear dynamics of cantilevered piezoelectric beams is investigated under simultaneous parametric and external excitations. The beam is composed of a substrate and two piezoelectric layers and assumed as an Euler-Bernoulli model with inextensible deformation. A nonlinear distributed parameter model of cantilevered piezoelectric energy harvesters is proposed using the generalized Hamilton's principle. The proposed model includes geometric and inertia nonlinearity, but neglects the material nonlinearity. Using the Galerkin decomposition method and harmonic balance method, analytical expressions of the frequency-response curves are presented when the first bending mode of the beam plays a dominant role. Using these expressions, we investigate the effects of the damping, load resistance, electromechanical coupling, and excitation amplitude on the frequency-response curves. We also study the difference between the nonlinear lumped-parameter and distributed-parameter model for predicting the performance of the energy harvesting system. Only in the case of parametric excitation, we demonstrate that the energy harvesting system has an initiation excitation threshold below which no energy can be harvested. We also illustrate that the damping and load resistance affect the initiation excitation threshold.
NASA Astrophysics Data System (ADS)
Kuzmina, N. P.; Zhurbas, N. V.; Emelianov, M. V.; Pyzhevich, M. L.
2014-09-01
Interleaving models of pure thermohaline and baroclinic frontal zones are applied to describe intrusions at the fronts found in the upper part of the Deep Polar Water (DPW) when the stratification was absolutely stable. It is assumed that differential mixing is the main mechanism of the intrusion formation. Important parameters of the interleaving, such as the growth rate, vertical scale, and slope of the most unstable modes relative to the horizontal plane, are calculated. It was found that the interleaving model for a pure thermohaline front satisfactorily describes the important intrusion parameters observed at the frontal zone. In the case of a baroclinic front, satisfactory agreement over all the interleaving parameters is observed between the model calculations and observations, provided that the vertical momentum diffusivity significantly exceeds the corresponding coefficient of mass diffusivity. Under specific (reasonable) constraints on the vertical momentum diffusivity, the most unstable mode has a vertical scale approximately two to three times smaller than the vertical scale of the observed intrusions. A thorough discussion of the results is presented.
Müller, Dirk K; Pampel, André; Möller, Harald E
2013-05-01
Quantification of magnetization-transfer (MT) experiments is typically based on the assumption of the binary spin-bath model. This model allows for the extraction of up to six parameters (relative pool sizes, relaxation times, and exchange rate constants) for the characterization of macromolecules, which are coupled via exchange processes to the water in tissues. Here, an approach is presented for estimating MT parameters from data acquired with arbitrary saturation schemes and imaging pulse sequences. It uses matrix algebra to solve the Bloch-McConnell equations without unwarranted simplifications, such as assuming steady-state conditions for pulsed saturation schemes or neglecting imaging pulses. The algorithm achieves sufficient efficiency for voxel-by-voxel MT parameter estimation by using a polynomial interpolation technique. Simulations, as well as experiments in agar gels with continuous-wave and pulsed MT preparation, were performed for validation and for assessing approximations in previous modeling approaches. In vivo experiments in the normal human brain yielded results that were consistent with published data. Copyright © 2013 Elsevier Inc. All rights reserved.
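The matrix-algebra idea can be illustrated on the longitudinal part of a two-pool Bloch-McConnell system, whose evolution over a free interval is an exact matrix exponential; the pool parameters below are illustrative, and the full method of course also handles transverse components and saturation pulses:

```python
import numpy as np
from scipy.linalg import expm

# Two-pool (free water f, macromolecular m) longitudinal Bloch-McConnell
# evolution solved exactly with a matrix exponential, reduced to Mz only.
R1f, R1m = 1.0, 1.0          # 1/s, longitudinal relaxation rates (assumed)
kfm = 2.0                    # 1/s, exchange rate f -> m (assumed)
M0f, M0m = 0.9, 0.1          # relative pool sizes (assumed)
kmf = kfm * M0f / M0m        # back-exchange from detailed balance

# d/dt [Mzf, Mzm, 1] = A @ [Mzf, Mzm, 1]; the constant column carries R1*M0.
A = np.array([[-R1f - kfm, kmf,        R1f * M0f],
              [kfm,        -R1m - kmf, R1m * M0m],
              [0.0,        0.0,        0.0]])

Mz = np.array([0.0, 0.0, 1.0])   # fully saturated start, homogeneous coordinate
Mz = expm(A * 0.5) @ Mz          # exact free evolution for 0.5 s
print("Mzf, Mzm after recovery:", Mz[:2])
```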
A computational model for biosonar echoes from foliage.
Ming, Chen; Gupta, Anupam Kumar; Lu, Ruijin; Zhu, Hongxiao; Müller, Rolf
2017-01-01
Since many bat species thrive in densely vegetated habitats, echoes from foliage are likely to be of prime importance to the animals' sensory ecology, be it as clutter that masks prey echoes or as sources of information about the environment. To better understand the characteristics of foliage echoes, a new model for the process that generates these signals has been developed. This model takes leaf size and orientation into account by representing the leaves as circular disks of varying diameter. The two added leaf parameters are of potential importance to the sensory ecology of bats, e.g., with respect to landmark recognition and flight guidance along vegetation contours. The full model is specified by a total of three parameters: leaf density, average leaf size, and average leaf orientation. It assumes that all leaf parameters are independently and identically distributed. Leaf positions were drawn from a uniform probability density function, sizes and orientations each from a Gaussian probability function. The model was found to reproduce the first-order amplitude statistics of measured example echoes and showed time-variant echo properties that depended on foliage parameters. Parameter estimation experiments using lasso regression have demonstrated that a single foliage parameter can be estimated with high accuracy if the other two parameters are known a priori. If only one parameter is known a priori, the other two can still be estimated, but with a reduced accuracy. Lasso regression did not support simultaneous estimation of all three parameters. Nevertheless, these results demonstrate that foliage echoes contain accessible information on foliage type and orientation that could play a role in supporting sensory tasks such as landmark identification and contour following in echolocating bats.
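Drawing a foliage realization under the model's distributional assumptions takes only a few lines; the box size, parameter values, and the crude projected-area echo proxy at the end are illustrative additions, not the model's actual scattering computation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Leaf positions i.i.d. uniform; sizes and orientations i.i.d. Gaussian,
# per the model's three parameters: density, mean size, mean orientation.
density = 50.0                      # leaves per m^3 (assumed)
box = np.array([2.0, 2.0, 2.0])     # m, insonified volume (assumed)
n_leaves = rng.poisson(density * box.prod())

positions = rng.uniform(0.0, 1.0, size=(n_leaves, 3)) * box
diameters = rng.normal(0.05, 0.01, size=n_leaves)                 # m, leaf size
orientations = rng.normal(np.deg2rad(30.0), np.deg2rad(10.0), size=n_leaves)

# Crude echo proxy: each disk contributes an amplitude ~ projected area at a
# delay set by its range; a real model would replace this with disk scattering.
c = 343.0                           # m/s, speed of sound
delays = 2.0 * np.linalg.norm(positions, axis=1) / c
amps = (np.pi * (diameters / 2.0)**2) * np.abs(np.cos(orientations))
print(f"{n_leaves} leaves, first echo at {delays.min() * 1e3:.1f} ms")
```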
Testing anthropic reasoning for the cosmological constant with a realistic galaxy formation model
NASA Astrophysics Data System (ADS)
Sudoh, Takahiro; Totani, Tomonori; Makiya, Ryu; Nagashima, Masahiro
2017-01-01
The anthropic principle is one of the possible explanations for the cosmological constant (Λ) problem. In previous studies, a dark halo mass threshold comparable with that of our Galaxy had to be assumed in galaxy formation to obtain a reasonably large probability P(<Λobs) of finding the observed small value, even though stars are found in much smaller galaxies as well. Here we examine the anthropic argument by using a semi-analytic model of cosmological galaxy formation, which can reproduce many observations such as galaxy luminosity functions. We calculate the probability distribution of Λ by running the model code for a wide range of Λ, while other cosmological parameters and model parameters for baryonic processes of galaxy formation are kept constant. Assuming that the prior probability distribution is flat per unit Λ, and that the number of observers is proportional to stellar mass, we find P(<Λobs) = 6.7 per cent without introducing any galaxy mass threshold. We also investigate the effect of metallicity; we find P(<Λobs) = 9.0 per cent if observers exist only in galaxies whose metallicity is higher than the solar abundance. If the number of observers is proportional to metallicity, we find P(<Λobs) = 9.7 per cent. Since these probabilities are not extremely small, we conclude that the anthropic argument is a viable explanation, if the value of Λ observed in our Universe is determined by a probability distribution.
Mathematical Model of Naive T Cell Division and Survival IL-7 Thresholds.
Reynolds, Joseph; Coles, Mark; Lythe, Grant; Molina-París, Carmen
2013-01-01
We develop a mathematical model of the peripheral naive T cell population to study the change in human naive T cell numbers from birth to adulthood, incorporating thymic output and the availability of interleukin-7 (IL-7). The model is formulated as three ordinary differential equations: two describe T cell numbers, in a resting state and progressing through the cell cycle. The third is introduced to describe changes in IL-7 availability. Thymic output is a decreasing function of time, representative of the thymic atrophy observed in aging humans. Each T cell is assumed to possess two interleukin-7 receptor (IL-7R) signaling thresholds: a survival threshold and a second, higher, proliferation threshold. If the IL-7R signaling strength is below its survival threshold, a cell may undergo apoptosis. When the signaling strength is above the survival threshold, but below the proliferation threshold, the cell survives but does not divide. Signaling strength above the proliferation threshold enables entry into cell cycle. Assuming that individual cell thresholds are log-normally distributed, we derive population-average rates for apoptosis and entry into cell cycle. We have analyzed the adiabatic change in homeostasis as thymic output decreases. With a parameter set representative of a healthy individual, the model predicts a unique equilibrium number of T cells. In a parameter range representative of persistent viral or bacterial infection, where naive T cell cycle progression is impaired, a decrease in thymic output may result in the collapse of the naive T cell repertoire.
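Given log-normally distributed thresholds, the population fractions in the three signaling regimes follow directly from log-normal CDFs; below is a sketch treating the two thresholds via their marginal distributions (a simplification of the per-cell picture), with assumed parameter values:

```python
import numpy as np
from scipy.stats import lognorm

# For a given IL-7R signaling strength, the fraction of cells whose threshold
# lies below it is a log-normal CDF. All distribution parameters are assumed.
sig_surv, med_surv = 0.5, 1.0    # shape and median of the survival threshold
sig_prol, med_prol = 0.5, 2.0    # proliferation threshold distribution sits higher

def state_fractions(signal):
    below_surv = lognorm.cdf(signal, s=sig_surv, scale=med_surv)
    below_prol = lognorm.cdf(signal, s=sig_prol, scale=med_prol)
    apoptosis_prone = 1.0 - below_surv    # signal under the survival threshold
    resting = below_surv - below_prol     # survives but does not enter cycle
    cycling = below_prol                  # signal above the proliferation threshold
    return apoptosis_prone, resting, cycling

print(state_fractions(1.5))
```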
Analysis of a model of gambiense sleeping sickness in humans and cattle.
Ndondo, A M; Munganga, J M W; Mwambakana, J N; Saad-Roy, C M; van den Driessche, P; Walo, R O
2016-01-01
Human African Trypanosomiasis (HAT) and Nagana in cattle, commonly called sleeping sickness, is caused by trypanosome protozoa transmitted by bites of infected tsetse flies. We present a deterministic model for the transmission of HAT caused by Trypanosoma brucei gambiense between human hosts, cattle hosts, and tsetse flies. The model takes into account the growth of the tsetse fly, from its larval stage to the adult stage. Disease in the tsetse fly population is modeled by three compartments, and both the human and cattle populations are modeled by four compartments incorporating the two stages of HAT. We provide a rigorous derivation of the basic reproduction number R0. For R0 < 1, the disease-free equilibrium is globally asymptotically stable, thus HAT dies out; whereas (assuming no return to susceptibility) for R0 > 1, HAT persists. Elasticity indices for R0 with respect to different parameters are calculated with baseline parameter values appropriate for HAT in West Africa, indicating the parameters that are most important for control strategies aiming to bring R0 below 1. Numerical simulations with R0 > 1 show values for the infected populations at the endemic equilibrium, and indicate that with certain parameter values, HAT could not persist in the human population in the absence of cattle.
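Elasticity indices of the kind computed above, E_p = (p/R0)(dR0/dp), are easy to approximate by central differences; the sketch below uses a toy Ross-Macdonald-style R0 as a stand-in for the paper's full host-vector expression, with all names and values assumed:

```python
import numpy as np

def elasticity(R0, params, name, h=1e-6):
    """Central-difference elasticity E = (p / R0) * dR0/dp for parameter `name`."""
    p = dict(params)
    p[name] = params[name] * (1 + h)
    up = R0(**p)
    p[name] = params[name] * (1 - h)
    down = R0(**p)
    return (up - down) / (2 * h * R0(**params))

# Toy vector-borne R0, only to exercise the function (not the paper's R0):
R0_toy = lambda a, b, c, m, g, r: np.sqrt(m * a**2 * b * c / (g * r))
pars = dict(a=0.3, b=0.5, c=0.5, m=10.0, g=0.1, r=0.05)
print(elasticity(R0_toy, pars, "a"))   # biting-rate elasticity (= 1 for this form)
```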
Cell membrane water exchange effects in prostate DCE-MRI
NASA Astrophysics Data System (ADS)
Li, Xin; Priest, Ryan A.; Woodward, William J.; Siddiqui, Faisal; Beer, Tomasz M.; Garzotto, Mark G.; Rooney, William D.; Springer, Charles S.
2012-05-01
Prostate Dynamic-Contrast-Enhanced (DCE) MRI often exhibits fast and extensive global contrast reagent (CR) extravasation, measured by Ktrans, a pharmacokinetic parameter proportional to its rate. This implies that the CR concentration [CR] is high in the extracellular, extravascular space (EES) during a large portion of the DCE-MRI study. Since CR is detected indirectly, through water proton signal change, the effects of equilibrium transcytolemmal water exchange may be significant in the data and thus should be admitted in DCE-MRI pharmacokinetic modeling. The implications for parameter values were investigated through simulations, and analyses of actual prostate data, with different models. Model parameter correlation and precision were also explored. A near-optimal version of the exchange-sensitized model was found. Our results indicate that ΔKtrans (the difference between the Ktrans returned by this version and by a model assuming exchange to be effectively infinitely fast) may be a very useful biomarker for discriminating malignant from benign prostate tissue. Using an exchange-sensitized model, we find that the mean intracellular water lifetime (τi), an exchange measure, can be meaningfully mapped for the prostate. Our results show prostate glandular zone differences in τi values.
NASA Astrophysics Data System (ADS)
Ahmadian, A.; Ismail, F.; Salahshour, S.; Baleanu, D.; Ghaemi, F.
2017-12-01
The analysis of the behavior of physical phenomena is important for discovering significant features of the character and structure of mathematical models. Frequently, the unknown parameters involved in the models are assumed to be unvarying over time. In reality, some of them are uncertain and implicitly depend on several factors. In this study, to account for such uncertainty, the variables of the models are characterized based on the fuzzy notion. We propose here a new model based on fractional calculus to deal with the Kelvin-Voigt (KV) equation and a non-Newtonian fluid behavior model with fuzzy parameters. A new and accurate numerical algorithm using a spectral tau technique based on the generalized fractional Legendre polynomials (GFLPs) is developed to solve those problems under uncertainty. Numerical simulations are carried out and the analysis of the results highlights the significant features of the new technique in comparison with previous findings. A detailed error analysis is also carried out and discussed.
Discrete-to-continuum modelling of weakly interacting incommensurate two-dimensional lattices.
Español, Malena I; Golovaty, Dmitry; Wilber, J Patrick
2018-01-01
In this paper, we derive a continuum variational model for a two-dimensional deformable lattice of atoms interacting with a two-dimensional rigid lattice. The starting point is a discrete atomistic model for the two lattices which are assumed to have slightly different lattice parameters and, possibly, a small relative rotation. This is a prototypical example of a three-dimensional system consisting of a graphene sheet suspended over a substrate. We use a discrete-to-continuum procedure to obtain the continuum model which recovers both qualitatively and quantitatively the behaviour observed in the corresponding discrete model. The continuum model predicts that the deformable lattice develops a network of domain walls characterized by large shearing, stretching and bending deformation that accommodates the misalignment and/or mismatch between the deformable and rigid lattices. Two integer-valued parameters, which can be identified with the components of a Burgers vector, describe the mismatch between the lattices and determine the geometry and the details of the deformation associated with the domain walls.
NASA Astrophysics Data System (ADS)
Shabani, Hamid
In this paper, we investigate the cosmological consequences and statefinder diagnostics of a scenario for the recently reported accelerated expansion of the universe in the framework of f(R,T) = R + h(T) gravity theories. In these models, R and T denote the Ricci curvature scalar and the trace of the energy-momentum tensor (EMT), respectively. Our scenario assumes that the generalized Chaplygin gas (GCG), along with the baryonic matter, is responsible for this observed phenomenon. We consider three classes of Chaplygin gas models, corresponding to three different forms of the f(R,T) function: models that employ the standard CG (SCG), models that use the GCG in high-pressure regimes, and models for high-density regimes in the presence of the GCG. We also test these models against recent Hubble parameter and type Ia supernova data. Finally, we compare the present values of the statefinder parameters predicted by these models to the astronomical data.
NASA Astrophysics Data System (ADS)
Chougule, Abhijit; Mann, Jakob; Kelly, Mark; Larsen, Gunner C.
2018-06-01
A spectral-tensor model of non-neutral, atmospheric-boundary-layer turbulence is evaluated using Eulerian statistics from single-point measurements of the wind speed and temperature at heights up to 100 m, assuming constant vertical gradients of mean wind speed and temperature. The model has been previously described in terms of the dissipation rate ε, the length scale of energy-containing eddies L, a turbulence anisotropy parameter Γ, the Richardson number Ri, and the normalized rate of destruction of temperature variance η_θ ≡ ε_θ/ε. Here, the latter two parameters are collapsed into a single atmospheric stability parameter z/L using Monin-Obukhov similarity theory, where z is the height above the Earth's surface and L is the Obukhov length corresponding to Ri and η_θ. Model outputs of the one-dimensional velocity spectra, as well as cospectra of the streamwise and/or vertical velocity components and/or temperature, and cross-spectra for the spatial separation of all three velocity components and temperature, are compared with measurements. As a function of the four model parameters, spectra and cospectra are reproduced quite well, but horizontal temperature fluxes are slightly underestimated in stable conditions. In moderately unstable stratification, our model reproduces spectra only up to a scale of ~1 km. The model also overestimates coherences for vertical separations, though the overestimation is less severe in unstable than in stable cases.
Mueller, Christina J; White, Corey N; Kuchinke, Lars
2017-11-27
The goal of this study was to replicate findings of diffusion model parameters capturing emotion effects in a lexical decision task and to investigate whether these findings extend to other tasks of implicit emotion processing. Additionally, we were interested in the stability of diffusion model parameters across emotional stimuli and tasks for individual subjects. Responses to words in a lexical decision task were compared with responses to faces in a gender categorization task for stimuli of the emotion categories happy, neutral, and fear. Main effects of emotion as well as the stability of emerging response style patterns, as evident in diffusion model parameters, were analyzed across these tasks. Based on earlier findings, drift rates were assumed to be more similar in response to stimuli of the same emotion category than to stimuli of a different emotion category. Results showed that the emotion effects differed between tasks, with a processing advantage for happy followed by neutral and fear-related words in the lexical decision task and a processing advantage for neutral followed by happy and fearful faces in the gender categorization task. Both emotion effects were captured in the estimated drift rate parameters and, in the case of the lexical decision task, also in the non-decision time parameters. A principal component analysis showed that, contrary to our hypothesis, drift rates were more similar within a specific task context than within a specific emotion category. Individual response patterns of subjects across tasks were evident in significant correlations of diffusion model parameters, including response styles, non-decision times, and information accumulation.
NASA Astrophysics Data System (ADS)
Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven
2017-04-01
Long-term hydrological data are key to understanding catchment behaviour and for decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information, beyond locally observed discharge, can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment, assumed to be ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes, using two efficiency measures, a statistical consistency measure, a spread measure, and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular unrealistic representations of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low-flow statistics did not reject. The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships, proved useful for constraining model uncertainty for a basin assumed to be ungauged. This shows that our method is promising for reconstructing long-term flow data for ungauged catchments on the Pacific side of Central America, and that similar methods can be developed for ungauged basins in other regions where climate variability exerts a strong control on streamflow variability.
LHC constraints on color octet scalars
NASA Astrophysics Data System (ADS)
Hayreter, Alper; Valencia, German
2017-08-01
We extract constraints on the parameter space of the Manohar and Wise model by comparing the cross sections for dijet, top-pair, dijet-pair, t t̄ t t̄, and b b̄ b b̄ production at the LHC with the strongest available experimental limits from ATLAS or CMS at 8 or 13 TeV. Overall we find mass limits around 1 TeV in the most sensitive regions of parameter space, and lower elsewhere. This is at odds with generic limits for color octet scalars often quoted in the literature, where much larger production cross sections are assumed. The constraints that can be placed on coupling constants are typically weaker than those from existing theoretical considerations, with the exception of the parameter η_D.
Ion-acoustic double-layers in a magnetized plasma with nonthermal electrons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rios, L. A.; Galvão, R. M. O.; Instituto de Física, Universidade de São Paulo, 05508-900 São Paulo
2013-11-15
In the present work we investigate the existence of obliquely propagating ion-acoustic double layers in magnetized two-electron plasmas. The fluid model is used to describe the ion dynamics, and the hot electron population is modeled via a κ distribution function, which has been proved to be appropriate for modeling non-Maxwellian plasmas. A quasineutral condition is assumed to investigate these nonlinear structures, which leads to the formation of double-layers propagating with slow ion-acoustic velocity. The problem is investigated numerically, and the influence of parameters such as nonthermality is discussed.
Propulsion mechanisms for Leidenfrost solids on ratchets.
Baier, Tobias; Dupeux, Guillaume; Herbert, Stefan; Hardt, Steffen; Quéré, David
2013-02-01
We propose a model for the propulsion of Leidenfrost solids on ratchets based on viscous drag due to the flow of evaporating vapor. The model assumes pressure-driven flow described by the Navier-Stokes equations and is mainly studied in lubrication approximation. A scaling expression is derived for the dependence of the propulsive force on geometric parameters of the ratchet surface and properties of the sublimating solid. We show that the model results as well as the scaling law compare favorably with experiments and are able to reproduce the experimentally observed scaling with the size of the solid.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, W.J.; Kalasinski, L.A.
In this paper, a generalized logistic regression model for correlated observations is used to analyze epidemiologic data on the frequency of spontaneous abortion among a group of women office workers. The results are compared to those obtained from the use of the standard logistic regression model that assumes statistical independence among all the pregnancies contributed by one woman. In this example, the correlation among pregnancies from the same woman is fairly small and did not have a substantial impact on the magnitude of estimates of parameters of the model. This is due at least partly to the small average number of pregnancies contributed by each woman.
Berezinskii-Kosterlitz-Thouless transition in the time-reversal-symmetric Hofstadter-Hubbard model
NASA Astrophysics Data System (ADS)
Iskin, M.
2018-01-01
Assuming that two-component Fermi gases with opposite artificial magnetic fields on a square optical lattice are well described by the so-called time-reversal-symmetric Hofstadter-Hubbard model, we explore the thermal superfluid properties along with the critical Berezinskii-Kosterlitz-Thouless (BKT) transition temperature in this model over a wide range of its parameters. In particular, since our self-consistent BCS-BKT approach takes the multiband butterfly spectrum explicitly into account, it unveils how dramatically the interband contribution to the phase stiffness dominates the intraband one with an increasing interaction strength for any given magnetic flux.
Seismic quiescence in a frictional earthquake model
NASA Astrophysics Data System (ADS)
Braun, Oleg M.; Peyrard, Michel
2018-04-01
We investigate the origin of seismic quiescence with a generalized version of the Burridge-Knopoff model for earthquakes and show that it can be generated by a multipeaked probability distribution of the thresholds at which contacts break. Such a distribution is not assumed a priori but naturally results from the aging of the contacts. We show that the model can exhibit quiescence as well as enhanced foreshock activity, depending on the value of some parameters. This provides a generic understanding for seismic quiescence, which encompasses earlier specific explanations and could provide a pathway for a classification of faults.
A model of transverse fuel injection applied to the computation of supersonic combustor flow
NASA Technical Reports Server (NTRS)
Rogers, R. C.
1979-01-01
A two-dimensional, nonreacting flow model of the aerodynamic interaction of a transverse hydrogen jet within a supersonic mainstream has been developed. The model assumes profile shapes of mass flux, pressure, flow angle, and hydrogen concentration and produces downstream profiles of the other flow parameters under the constraints of the integrated conservation equations. These profiles are used as starting conditions for an existing finite difference parabolic computer code for the turbulent supersonic combustion of hydrogen. Integrated mixing and flow profile results obtained from the computer code compare favorably with existing data for the supersonic combustion of hydrogen.
Vector-based model of elastic bonds for simulation of granular solids.
Kuzkin, Vitaly A; Asonov, Igor E
2012-11-01
A model (further referred to as the V model) for the simulation of granular solids, such as rocks, ceramics, concrete, nanocomposites, and agglomerates, composed of bonded particles (rigid bodies), is proposed. It is assumed that the bonds, usually representing some additional gluelike material connecting particles, cause both forces and torques acting on the particles. Vectors rigidly connected with the particles are used to describe the deformation of a single bond. The expression for potential energy of the bond and corresponding expressions for forces and torques are derived. Formulas connecting parameters of the model with longitudinal, shear, bending, and torsional stiffnesses of the bond are obtained. It is shown that the model makes it possible to describe any values of the bond stiffnesses exactly; that is, the model is applicable for the bonds with arbitrary length/thickness ratio. Two different calibration procedures depending on bond length/thickness ratio are proposed. It is shown that parameters of the model can be chosen so that under small deformations the bond is equivalent to either a Bernoulli-Euler beam or a Timoshenko beam or short cylinder connecting particles. Simple analytical expressions, relating parameters of the V model with geometrical and mechanical characteristics of the bond, are derived. Two simple examples of computer simulation of thin granular structures using the V model are given.
Structural identifiability analysis of a cardiovascular system model.
Pironet, Antoine; Dauby, Pierre C; Chase, J Geoffrey; Docherty, Paul D; Revie, James A; Desaive, Thomas
2016-05-01
The six-chamber cardiovascular system model of Burkhoff and Tyberg has been used in several theoretical and experimental studies. However, this cardiovascular system model (and others derived from it) is not identifiable from every output set. In this work, two such cases of structural non-identifiability are first presented. These cases occur when the model output set contains only a single type of information (pressure or volume). A specific output set is thus chosen, mixing pressure and volume information and containing only a limited number of clinically available measurements. Then, by manipulating the model equations involving these outputs, it is demonstrated that the six-chamber cardiovascular system model is structurally globally identifiable. A further simplification is made by assuming known cardiac valve resistances. Because of the poor practical identifiability of these four parameters, this assumption is common. Under this hypothesis, the six-chamber cardiovascular system model is structurally identifiable from an even smaller dataset. As a consequence, parameter values computed from limited but well-chosen datasets are theoretically unique. This means that the parameter identification procedure can safely be performed on the model from such a well-chosen dataset. Thus, the model may be considered suitable for use in diagnosis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
Bayes factors for the linear ballistic accumulator model of decision-making.
Evans, Nathan J; Brown, Scott D
2018-04-01
Evidence accumulation models of decision-making have led to advances in several different areas of psychology. These models provide a way to integrate response time and accuracy data, and to describe performance in terms of latent cognitive processes. Testing important psychological hypotheses using cognitive models requires a method for making inferences about different versions of the models, which assume different parameters cause the observed effects. The task of model-based inference using noisy data is difficult, and has proven especially problematic with current model selection methods based on parameter estimation. We provide a method for computing Bayes factors through Monte-Carlo integration for the linear ballistic accumulator (LBA; Brown and Heathcote, 2008), a widely used evidence accumulation model. Bayes factors are used frequently for inference with simpler statistical models, and they do not require parameter estimation. In order to overcome the computational burden of estimating Bayes factors via brute-force integration, we exploit general-purpose graphics processing units; we provide free code for this. This approach allows estimation of Bayes factors via Monte-Carlo integration within a practical time frame. We demonstrate the method using both simulated and real data. We investigate the stability of the Monte-Carlo approximation, and the LBA's inferential properties, in simulation studies.
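The brute-force Monte-Carlo integration at the heart of this approach averages the likelihood over prior draws; below is a minimal sketch with a trivial Gaussian likelihood standing in for the LBA (so the example stays self-contained), not the authors' GPU implementation:

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(1)

def log_marginal(y, sample_prior, loglik, n=200_000):
    """log p(y|M) ~= log mean_i p(y|theta_i), theta_i drawn from the prior."""
    theta = sample_prior(n)
    return logsumexp(loglik(y, theta)) - np.log(n)

# Toy stand-in for the LBA likelihood: y ~ N(mu, 1), with mu ~ N(0, 1) under
# M1 and mu fixed at 0 under M0. The Bayes factor is p(y|M1) / p(y|M0).
y = np.array([0.3, -0.1, 0.5])
loglik = lambda y, mu: np.sum(-0.5 * (y[None, :] - mu[:, None])**2
                              - 0.5 * np.log(2.0 * np.pi), axis=1)
lm1 = log_marginal(y, lambda n: rng.normal(0.0, 1.0, n), loglik)
lm0 = loglik(y, np.zeros(1))[0]
print("Bayes factor M1 vs M0:", np.exp(lm1 - lm0))
```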
NASA Astrophysics Data System (ADS)
Kump, P.; Vogel-Mikuš, K.
2018-05-01
Two fundamental-parameter (FP) based models for quantification of 2D elemental distribution maps of intermediate-thick biological samples by synchrotron low energy μ-X-ray fluorescence spectrometry (SR-μ-XRF) are presented and applied to elemental analysis in experiments with monochromatic focused photon beam excitation at two low energy X-ray fluorescence beamlines—TwinMic, Elettra Sincrotrone Trieste, Italy, and ID21, ESRF, Grenoble, France. The models assume intermediate-thick biological samples composed of the measured elements, which are the sources of the measurable spectral lines, and of a residual matrix, which affects the measured intensities through absorption. In the first model a fixed residual matrix of the sample is assumed, while in the second model the residual matrix is obtained by iterative refinement of the elemental concentrations and an adjusted residual matrix. The absorption of the incident focused beam in the biological sample at each scanned pixel position, determined from the output of a photodiode or a CCD camera, is applied as a control in the iteration procedure of quantification.
Zare, Yasser; Rhim, Sungsoo; Garmabi, Hamid; Rhee, Kyong Yop
2018-04-01
The networks of nanoparticles in nanocomposites cause solid-like behavior, demonstrated by a constant storage modulus at low frequencies. This study examines the storage modulus of poly (lactic acid)/poly (ethylene oxide)/carbon nanotube (CNT) nanocomposites. The experimental data for the storage modulus in the plateau regions are obtained by a frequency sweep test. In addition, a simple model is developed to predict the constant storage modulus from assumed properties of the interphase regions and the CNT networks. The model calculations are compared with the experimental results, and parametric analyses are applied to validate the predictability of the developed model. The calculations agree well with the experimental data at all polymer and CNT concentrations. Moreover, all parameters acceptably modulate the constant storage modulus. The percentage of networked CNT, the modulus of the networks, and the thickness and modulus of the interphase regions directly govern the storage modulus of nanocomposites. The outputs reveal the important roles of the interphase properties in the storage modulus. Copyright © 2018 Elsevier Ltd. All rights reserved.
Parameter Estimation for Thurstone Choice Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vojnovic, Milan; Yun, Seyoung
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on a given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when in expectation each comparison set of given cardinality occurs the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
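For the Luce choice model over top-1 lists, the maximum likelihood strengths can be found by gradient ascent on log-strengths; the following sketch on synthetic data is illustrative and not the estimator analysis of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Luce model: P(i wins set S) = w_i / sum_{j in S} w_j, with w_i = exp(theta_i).
n_items, n_obs, k = 8, 5000, 4              # sizes assumed for illustration
w_true = np.exp(rng.normal(0.0, 1.0, n_items))

sets = np.array([rng.choice(n_items, size=k, replace=False) for _ in range(n_obs)])
p = w_true[sets]; p /= p.sum(axis=1, keepdims=True)
winners = np.array([rng.choice(sets[i], p=p[i]) for i in range(n_obs)])

theta = np.zeros(n_items)                    # log-strengths
for _ in range(500):
    w = np.exp(theta)
    # Gradient of the log-likelihood: count of wins minus expected wins.
    grad = np.bincount(winners, minlength=n_items).astype(float)
    probs = w[sets] / w[sets].sum(axis=1, keepdims=True)
    np.add.at(grad, sets, -probs)
    theta += 0.5 * grad / n_obs
    theta -= theta.mean()                    # fix the scale (identifiability)
print("correlation with truth:", np.corrcoef(theta, np.log(w_true))[0, 1])
```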
The power to detect linkage in complex disease by means of simple LOD-score analyses.
Greenberg, D A; Abreu, P; Hodge, S E
1998-01-01
Maximum-likelihood analysis (via LOD score) provides the most powerful method for finding linkage when the mode of inheritance (MOI) is known. However, because one must assume an MOI, the application of LOD-score analysis to complex disease has been questioned. Although it is known that one can legitimately maximize the maximum LOD score with respect to genetic parameters, this approach raises three concerns: (1) multiple testing, (2) effect on power to detect linkage, and (3) adequacy of the approximate MOI for the true MOI. We evaluated the power of LOD scores to detect linkage when the true MOI was complex but a LOD score analysis assumed simple models. We simulated data from 14 different genetic models, including dominant and recessive at high (80%) and low (20%) penetrances, intermediate models, and several additive two-locus models. We calculated LOD scores by assuming two simple models, dominant and recessive, each with 50% penetrance, then took the higher of the two LOD scores as the raw test statistic and corrected for multiple tests. We call this test statistic "MMLS-C." We found that the ELODs for MMLS-C are ≥80% of the ELOD under the true model when the ELOD for the true model is ≥3. Similarly, the power to reach a given LOD score was usually ≥80% of that of the true model, when the power under the true model was ≥60%. These results underscore that a critical factor in LOD-score analysis is the MOI at the linked locus, not that of the disease or trait per se. Thus, a limited set of simple genetic models in LOD-score analysis can work well in testing for linkage. PMID:9718328
Bayesian Analysis of Non-Gaussian Long-Range Dependent Processes
NASA Astrophysics Data System (ADS)
Graves, Timothy; Watkins, Nicholas; Franzke, Christian; Gramacy, Robert
2013-04-01
Recent studies [e.g. the Antarctic study of Franzke, J. Climate, 2010] have strongly suggested that surface temperatures exhibit long-range dependence (LRD). The presence of LRD would hamper the identification of deterministic trends and the quantification of their significance. It is well established that LRD processes exhibit stochastic trends over rather long periods of time. Thus, accurate methods for discriminating between physical processes that possess long memory and those that do not are an important adjunct to climate modeling. As we briefly review, the LRD idea originated at the same time as H-self-similarity, so it is often not realised that a model does not have to be H-self-similar to show LRD [e.g. Watkins, GRL Frontiers, 2013]. We have used Markov Chain Monte Carlo algorithms to perform a Bayesian analysis of Auto-Regressive Fractionally-Integrated Moving-Average ARFIMA(p,d,q) processes, which are capable of modeling LRD. Our principal aim is to obtain inference about the long-memory parameter, d, with secondary interest in the scale and location parameters. We have developed a reversible-jump method enabling us to integrate over different model forms for the short-memory component. We initially assume Gaussianity, and have tested the method on both synthetic and physical time series. Many physical processes, for example the Faraday Antarctic time series, are significantly non-Gaussian. We have therefore extended this work by weakening the Gaussianity assumption, assuming an alpha-stable distribution for the innovations, and performing joint inference on d and alpha. Such a modified ARFIMA(p,d,q) process is a flexible, initial model for non-Gaussian processes with long memory. We will present a study of the dependence of the posterior variance of the memory parameter d on the length of the time series considered. This will be compared with equivalent error diagnostics for other measures of d.
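The long-memory component of an ARFIMA(p,d,q) process comes from the fractional difference operator (1-B)^d, whose AR(infinity) weights obey a simple recursion; here is a sketch simulating the ARFIMA(0,d,0) core with Gaussian innovations (alpha-stable draws, e.g. via scipy.stats.levy_stable, would give the non-Gaussian extension discussed):

```python
import numpy as np

rng = np.random.default_rng(3)

def fd_weights(d, n):
    """Coefficients pi_k of (1 - B)^d, via pi_k = pi_{k-1} (k - 1 - d) / k."""
    k = np.arange(1, n)
    return np.concatenate(([1.0], np.cumprod((k - 1 - d) / k)))

d, n = 0.3, 2000               # memory parameter and series length (assumed)
pi = fd_weights(d, n)
eps = rng.standard_normal(n)   # Gaussian innovations
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):          # (1-B)^d x_t = eps_t  =>  AR(infinity) recursion
    x[t] = eps[t] - pi[1:t + 1] @ x[t - 1::-1]
```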
NASA Astrophysics Data System (ADS)
Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn; Prata, Fred; Kazahaya, Ryunosuke; Nakamichi, Haruhisa; Iguchi, Masato
2017-12-01
Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions from Sakurajima Volcano, Japan. Six infrasound stations deployed from 12-20 February 2015 recorded the explosions. We compute numerical Green's functions using 3-D Finite Difference Time Domain modeling and a high-resolution digital elevation model. The inversion, assuming a simple acoustic monopole source, provides realistic eruption masses and excellent fit to the data for the majority of the explosions. The inversion results are compared to independent eruption masses derived from ground-based ash collection and volcanic gas measurements. Assuming realistic flow densities, our infrasound-derived eruption masses for ash-rich eruptions compare favorably to the ground-based estimates, with agreement ranging from within a factor of two to one order of magnitude. Uncertainties in the time-dependent flow density and acoustic propagation likely contribute to the mismatch between the methods. Our results suggest that realistic and accurate infrasound-based eruption mass and mass flow rate estimates can be computed using the method employed here. If accurate volcanic flow parameters are known, application of this technique could be broadly applied to enable near real-time calculation of eruption mass flow rates and total masses. These critical input parameters for volcanic eruption modeling and monitoring are not currently available.
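The core of the monopole assumption is that the far-field pressure is proportional to the time derivative of the mass flow rate, so the flow history follows from time integration of the recorded trace. A minimal sketch under that assumption (half-space radiation, no topographic Green's functions, hypothetical pulse shape and range):

```python
import numpy as np

def monopole_mass_history(p, dt, r):
    """Invert a pressure trace p(t) [Pa] recorded at range r [m] for mass
    flow rate and cumulative erupted mass, assuming a compact monopole
    radiating into a half space:
        p(r, t) = qdot(t - r/c) / (2*pi*r)
    A free-space source would use 4*pi*r; the topographic and 3-D
    propagation effects handled numerically in the study are ignored."""
    q = 2.0 * np.pi * r * np.cumsum(p) * dt    # mass flow rate [kg/s]
    mass = np.cumsum(q) * dt                   # cumulative mass [kg]
    return q, mass

# Hypothetical N-wave-like pulse recorded 4 km from the vent.
dt = 0.01
t = np.arange(0.0, 60.0, dt)
p = -40.0 * (t - 10.0) * np.exp(-((t - 10.0) / 2.0) ** 2)
q, mass = monopole_mass_history(p, dt, r=4000.0)
print(f"total erupted mass: {mass[-1]:.3e} kg")
```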
Predictions from star formation in the multiverse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bousso, Raphael; Leichenauer, Stefan
2010-03-15
We compute trivariate probability distributions in the landscape, scanning simultaneously over the cosmological constant, the primordial density contrast, and spatial curvature. We consider two different measures for regulating the divergences of eternal inflation, and three different models for observers. In one model, observers are assumed to arise in proportion to the entropy produced by stars; in the others, they arise at a fixed time (5 or 10 × 10^9 years) after star formation. The star formation rate, which underlies all our observer models, depends sensitively on the three scanning parameters. We employ a recently developed model of star formation in the multiverse, a considerable refinement over previous treatments of the astrophysical and cosmological properties of different pocket universes. For each combination of observer model and measure, we display all single and bivariate probability distributions, both with the remaining parameter(s) held fixed and marginalized. Our results depend only weakly on the observer model but more strongly on the measure. Using the causal diamond measure, the observed parameter values (or bounds) lie within the central 2σ of nearly all probability distributions we compute, and always within 3σ. This success is encouraging and rather nontrivial, considering the large size and dimension of the parameter space. The causal patch measure gives similar results as long as curvature is negligible. If curvature dominates, the causal patch leads to a novel runaway: it prefers a negative value of the cosmological constant, with the smallest magnitude available in the landscape.
Riley, Richard D; Ensor, Joie; Jackson, Dan; Burke, Danielle L
2017-01-01
Many meta-analysis models contain multiple parameters, for example due to multiple outcomes, multiple treatments or multiple regression coefficients. In particular, meta-regression models may contain multiple study-level covariates, and one-stage individual participant data meta-analysis models may contain multiple patient-level covariates and interactions. Here, we propose how to derive percentage study weights for such situations, in order to reveal the (otherwise hidden) contribution of each study toward the parameter estimates of interest. We assume that studies are independent, and utilise a decomposition of Fisher's information matrix to decompose the total variance matrix of parameter estimates into study-specific contributions, from which percentage weights are derived. This approach generalises how percentage weights are calculated in a traditional, single parameter meta-analysis model. Application is made to one- and two-stage individual participant data meta-analyses, meta-regression and network (multivariate) meta-analysis of multiple treatments. These reveal percentage study weights toward clinically important estimates, such as summary treatment effects and treatment-covariate interactions, and are especially useful when some studies are potential outliers or at high risk of bias. We also derive percentage study weights toward methodologically interesting measures, such as the magnitude of ecological bias (difference between within-study and across-study associations) and the amount of inconsistency (difference between direct and indirect evidence in a network meta-analysis).
The massive halos of spiral galaxies
NASA Technical Reports Server (NTRS)
Zaritsky, Dennis; White, Simon D. M.
1994-01-01
We use a sample of satellite galaxies to demonstrate the existence of extended massive dark halos around spiral galaxies. Isolated spirals with rotation velocities near 250 km/s have a typical halo mass within 200 kpc of 1.5-2.6 × 10^12 solar masses (90% confidence range for H_0 = 75 km/s/Mpc). This result is most easily derived using standard mass estimator techniques, but such techniques do not account for the strong observational selection effects in the sample, nor for the extended mass distributions that the data imply. These complications can be addressed using scale-free models similar to those previously employed to study binary galaxies. When satellite velocities are assumed isotropic, both methods imply massive and extended halos. However, the derived masses depend sensitively on the assumed shape of satellite orbits. Furthermore, both methods ignore the fact that many of the satellites in the sample have orbital periods comparable to the Hubble time. The orbital phases of such satellites cannot be random, and their distribution in radius cannot be freely adjusted; rather these properties reflect ongoing infall onto the outer halos of their primaries. We use detailed dynamical models for halo formation to evaluate these problems, and we devise a maximum likelihood technique for estimating the parameters of such models from the data. The most strongly constrained parameter is the mass within 200-300 kpc, giving the confidence limits quoted above. The eccentricity, e, of satellite orbits is also strongly constrained, 0.50 < e < 0.88 at 90% confidence, implying a near-isotropic distribution of satellite velocities. The cosmic density parameter in the vicinity of our isolated halos exceeds 0.13 at 90% confidence, with preferred values exceeding 0.3.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, Jason W.; Linscott, Ethan; Shporer, Avi, E-mail: jwbarnes@uidaho.edu
We model the asymmetry of the KOI-13.01 transit lightcurve assuming a gravity-darkened rapidly rotating host star in order to constrain the system's spin-orbit alignment and transit parameters. We find that our model can reproduce the Kepler lightcurve for KOI-13.01 with a sky-projected alignment of λ = 23° ± 4° and with the star's north pole tilted away from the observer by 48° ± 4° (assuming M_* = 2.05 M_Sun). With both these determinations, we calculate that the net misalignment between this planet's orbit normal and its star's rotational pole is 56° ± 4°. Degeneracies in our geometric interpretation also allow a retrograde spin-orbit angle of 124° ± 4°. This is the first spin-orbit measurement to come from gravity darkening and is one of only a few measurements of the full (not just the sky-projected) spin-orbit misalignment of an extrasolar planet. We also measure accurate transit parameters incorporating stellar oblateness and gravity darkening: R_* = 1.756 ± 0.014 R_Sun, R_p = 1.445 ± 0.016 R_Jup, and i = 85.9° ± 0.4°. The new lower planetary radius falls within the planetary mass regime for plausible interior models for the transiting body. A simple initial calculation shows that KOI-13.01's circular orbit is apparently inconsistent with the Kozai mechanism having driven its spin-orbit misalignment; planet-planet scattering and stellar spin migration remain viable mechanisms. Future Kepler data will improve the precision of the KOI-13.01 transit lightcurve, allowing more precise determination of transit parameters and the opportunity to use the photometric Rossiter-McLaughlin effect to resolve the prograde/retrograde orbit determination degeneracy.
NASA Astrophysics Data System (ADS)
Kirschner, A.; Tskhakaya, D.; Brezinsek, S.; Borodin, D.; Romazanov, J.; Ding, R.; Eksaeva, A.; Linsmeier, Ch
2018-01-01
Main processes of plasma-wall interaction and impurity transport in fusion devices and their impact on the availability of the devices are presented and modelling tools, in particular the three-dimensional Monte-Carlo code ERO, are introduced. The capability of ERO is demonstrated on the example of tungsten erosion and deposition modelling. The dependence of tungsten deposition on plasma temperature and density is studied by simulations with a simplified geometry assuming (almost) constant plasma parameters. The amount of deposition increases with increasing electron temperature and density. Up to 100% of eroded tungsten can be promptly deposited near the location of erosion at very high densities (~1 × 10^14 cm^-3, expected e.g. in the divertor of ITER). The effect of the sheath characteristics on tungsten prompt deposition is investigated by using particle-in-cell (PIC) simulations to spatially resolve the plasma parameters inside the sheath. Applying PIC data instead of a non-resolved sheath leads in general to smaller tungsten deposition, which is mainly due to a density and temperature decrease towards the surface within the sheath. Two-dimensional tungsten erosion/deposition simulations, assuming symmetry in the toroidal direction but poloidally varying plasma parameter profiles, have been carried out for the JET divertor. The simulations reveal, similar to experimental findings, that tungsten gross erosion in H-mode plasmas is dominated by the intra-ELM phases. However, due to deposition, the net tungsten erosion can be similar within intra- and inter-ELM phases if the inter-ELM electron temperature is high enough. Also, the simulated deposition fraction of about 84% in between ELMs is in line with spectroscopic observations, from which a lower limit of 50% has been estimated.
NASA Astrophysics Data System (ADS)
Xu, Zhuocan; Mace, Jay; Avalone, Linnea; Wang, Zhien
2015-04-01
The extreme variability of ice particle habits in precipitating clouds affects our understanding of these cloud systems in every aspect (e.g. radiative transfer, dynamics, precipitation rate) and largely contributes to the uncertainties in the model representation of related processes. Ice particle mass-dimensional power law relationships, M = a·D^b, are commonly assumed in models and retrieval algorithms, while very little knowledge exists regarding the uncertainties of these M-D parameters in real-world situations. In this study, we apply Optimal Estimation (OE) methodology to infer the ice particle mass-dimensional relationship from ice particle size distributions and bulk water contents independently measured on board the University of Wyoming King Air during the Colorado Airborne Multi-Phase Cloud Study (CAMPS). We also utilize W-band radar reflectivity obtained on the same platform (King Air), offering a further constraint on this ill-posed problem (Heymsfield et al. 2010). In addition to the values of the retrieved M-D parameters, the associated uncertainties are conveniently acquired in the OE framework, within the limitations of assumed Gaussian statistics. We find, given the constraints provided by the bulk water measurement and in situ radar reflectivity, that the relative uncertainty of the mass-dimensional power law prefactor (a) is approximately 80% and the relative uncertainty of the exponent (b) is 10-15%. With this level of uncertainty, the forward model uncertainty in radar reflectivity would be on the order of 4 dB, or a factor of approximately 2.5 in ice water content. The implication of this finding is that inferences of bulk water from either remote or in situ measurements of particle spectra cannot be more certain than this when the mass-dimensional relationships are not known a priori, which is almost never the case.
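The retrieval machinery referred to here is, in essence, a prior-regularized Gauss-Newton iteration that returns both the state estimate and its posterior covariance. A schematic Python sketch with a toy two-moment forward model standing in for the real size-distribution and radar operators; all numbers are hypothetical:

```python
import numpy as np

def oe_retrieval(y, forward, jacobian, x_a, S_a, S_e, n_iter=20):
    """Gauss-Newton optimal-estimation iteration (Rodgers-style n-form);
    returns the retrieved state and its posterior covariance."""
    x = x_a.copy()
    Sa_inv, Se_inv = np.linalg.inv(S_a), np.linalg.inv(S_e)
    for _ in range(n_iter):
        K = jacobian(x)
        S_hat = np.linalg.inv(Sa_inv + K.T @ Se_inv @ K)
        x = x_a + S_hat @ K.T @ Se_inv @ (y - forward(x) + K @ (x - x_a))
    return x, S_hat

# Toy use: retrieve log(a) and b of M = a*D**b from two noisy bulk moments.
D = np.array([0.3, 0.6, 1.2, 2.4])          # bin sizes [mm], hypothetical
N = np.array([800.0, 300.0, 60.0, 5.0])     # per-bin number concentrations
def forward(x):                             # log of two bulk moments
    a, b = np.exp(x[0]), x[1]
    return np.log(np.array([np.sum(N * a * D**b), np.sum(N * a * D**(b + 1))]))
def jacobian(x, eps=1e-6):                  # forward-difference Jacobian
    return np.column_stack([(forward(x + eps * e) - forward(x)) / eps
                            for e in np.eye(2)])

x_a = np.array([np.log(0.005), 2.0])        # prior mean
y = forward(np.array([np.log(0.007), 2.2])) + 0.01   # "measured" moments
x_hat, S_hat = oe_retrieval(y, forward, jacobian, x_a,
                            S_a=np.diag([1.0, 0.25]), S_e=np.eye(2) * 0.02**2)
print("a =", np.exp(x_hat[0]), "b =", x_hat[1])
```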
NASA Astrophysics Data System (ADS)
Bratchikov, A. N.; Glukhov, I. P.
1992-02-01
An analysis is made of a theoretical model of an interference fiber channel for the transmission of microwave signals. It is assumed that the channel consists of a multimode fiber waveguide with a step or graded refractive-index profile. Typical statistics for the longitudinal distribution of inhomogeneities are also assumed. Calculations are reported of the interference losses, the spectral profile of the output radio signal, the signal/noise ratio in the channel, and of the dependences of these parameters on the type, diameter, and length of the multimode fiber waveguide; the spectral width of the radiation source; and the frequency offset between the interfering optical signals.
Mood states determine the degree of task shielding in dual-task performance.
Zwosta, Katharina; Hommel, Bernhard; Goschke, Thomas; Fischer, Rico
2013-01-01
Current models of multitasking assume that dual-task performance and the degree of multitasking are affected by cognitive control strategies. In particular, cognitive control is assumed to regulate the amount of shielding of the prioritised task from crosstalk from the secondary task. We investigated whether and how task shielding is influenced by mood states. Participants were exposed to two short film clips, one inducing high and one inducing low arousal, of either negative or positive content. Negative mood led to stronger shielding of the prioritised task (i.e., less crosstalk) than positive mood, irrespective of arousal. These findings support the assumption that emotional states determine the parameters of cognitive control and play an important role in regulating dual-task performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, B.
1994-12-31
This paper describes an elastic-plastic fracture mechanics (EPFM) study of shallow weld-toe cracks. Two limiting crack configurations, a plane strain edge crack and a semi-circular surface crack in a fillet welded T-butt plate joint, were analyzed using the finite element method. Crack depths ranging from 2 to 40% of the plate thickness were considered. The elastic-plastic analysis, assuming a power-law hardening relationship and the Mises yield criterion, was based on incremental plasticity theory. The applied tension and bending loads were monotonically increased to a level causing relatively large scale yielding at the crack tip. Effects of weld-notch geometry and ductile material modeling on the prediction of the fracture mechanics characterizing parameter were assessed. It was found that the weld-notch effect reduces and the effect of material modeling increases as crack depth increases. Material modeling is less important than geometric modeling in the analysis of very shallow cracks but is more important for relatively deeper cracks, e.g. crack depths of more than 20% of the thickness. The effect of material modeling can be assessed using a simplified structural model. Weld magnification factors derived assuming linear elastic conditions can be applied to EPFM characterization.
NASA Technical Reports Server (NTRS)
Bird, P.; Baumgardner, J.
1984-01-01
To determine the correct fault rheology of the Transverse Ranges area of California, a new finite element to represent faults and a mantle drag element are introduced into a set of 63 simulation models of anelastic crustal strain. It is shown that a slip rate weakening rheology for faults is not valid in California. Assuming that mantle drag effects on the crust's base are minimal, the optimal coefficient of friction in the seismogenic portion of the fault zones is 0.4-0.6 (less than Byerlee's law, assumed to apply elsewhere). Depending on how the southern California upper mantle seismic velocity anomaly is interpreted, model results are improved or degraded. It is found that the location of the mantle plate boundary is the most important secondary parameter, and that the best model is either a low-stress model (fault friction = 0.3) or a high-stress model (fault friction = 0.85), each of which has strong mantle drag. It is concluded that at least the fastest moving faults in southern California have a low friction coefficient (approximately 0.3) because they contain low strength hydrated clay gouges throughout the low-temperature seismogenic zone.
Hilltop supernatural inflation and gravitino problem
NASA Astrophysics Data System (ADS)
Kohri, Kazunori; Lin, Chia-Min
2010-11-01
In this paper, we explore the parameter space of the hilltop supernatural inflation model and show the regime within which there is no gravitino problem even if we consider both thermal and nonthermal production mechanisms. We make plots of the allowed reheating temperature as a function of the gravitino mass under constraints from big-bang nucleosynthesis. We also plot the constraint when the gravitino is assumed to be stable and plays the role of dark matter.
The Changeable Block Distance System Analysis
NASA Astrophysics Data System (ADS)
Lewiński, Andrzej; Toruń, Andrzej
The paper presents an efficiency analysis of the Changeable Block Distance (CBD) system, which relies on wireless train positioning and control. The analysis is based on modelling a typical ERTMS line and comparison with actual and future traffic. The calculations use assumed railway traffic parameters corresponding to the real timetable of the Psary - Góra Włodowska section of the CMK line equipped with classic, ETCS Level 1, and ETCS with CBD systems.
Two Back Stress Hardening Models in Rate Independent Rigid Plastic Deformation
NASA Astrophysics Data System (ADS)
Yun, Su-Jin
In the present work, the constitutive relations based on the combination of two back stresses are developed using the Armstrong-Frederick, Phillips and Ziegler’s type hardening rules. Various evolutions of the kinematic hardening parameter can be obtained by means of a simple combination of back stress rate using the rule of mixtures. Thus, a wide range of plastic deformation behavior can be depicted depending on the dominant back stress evolution. The ultimate back stress is also determined for the present combined kinematic hardening models. Since a kinematic hardening rule is assumed in the finite deformation regime, the stress rate is co-rotated with respect to the spin of substructure obtained by incorporating the plastic spin concept. A comparison of the various co-rotational rates is also included. Assuming rigid plasticity, the continuum body consists of the elastic deformation zone and the plastic deformation zone to form a hybrid finite element formulation. Then, the plastic deformation behavior is investigated under various loading conditions with an assumption of the J2 deformation theory. The plastic deformation localization turns out to be strongly dependent on the description of back stress evolution and its associated hardening parameters. The analysis for the shear deformation with fixed boundaries is carried out to examine the deformation localization behavior and the evolution of state variables.
Vogl, Claus; Das, Aparup; Beaumont, Mark; Mohanty, Sujata; Stephan, Wolfgang
2003-11-01
Population subdivision complicates analysis of molecular variation. Even if neutrality is assumed, three evolutionary forces need to be considered: migration, mutation, and drift. Simplification can be achieved by assuming that the process of migration among and drift within subpopulations is occurring fast compared to mutation and drift in the entire population. This allows a two-step approach in the analysis: (i) analysis of population subdivision and (ii) analysis of molecular variation in the migrant pool. We model population subdivision using an infinite island model, where we allow the migration/drift parameter Theta to vary among populations. Thus, central and peripheral populations can be differentiated. For inference of Theta, we use a coalescence approach, implemented via a Markov chain Monte Carlo (MCMC) integration method that allows estimation of allele frequencies in the migrant pool. The second step of this approach (analysis of molecular variation in the migrant pool) uses the estimated allele frequencies in the migrant pool for the study of molecular variation. We apply this method to a Drosophila ananassae sequence data set. We find little indication of isolation by distance, but large differences in the migration parameter among populations. The population as a whole seems to be expanding. A population from Bogor (Java, Indonesia) shows the highest variation and seems closest to the species center.
Transient response of an active nonlinear sandwich piezolaminated plate
NASA Astrophysics Data System (ADS)
Oveisi, Atta; Nestorović, Tamara
2017-04-01
In this paper, the dynamic modelling and active vibration control of a piezolaminated plate with geometrical nonlinearities are investigated using a semi-analytical approach. For active vibration control purposes, the core orthotropic elastic layer is assumed to be perfectly bonded with two piezo-layers on its top and bottom surfaces which act as sensor and actuator, respectively. In the modelling procedure, the piezo-layers are assumed to be connected via a proportional derivative (PD) feedback control law. Hamilton's principle is employed to acquire the strong form of the dynamic equation in terms of additional higher order strain expressions by means of von Karman strain-displacement correlation. The obtained nonlinear partial differential equation (NPDE) is converted to a system of nonlinear ordinary differential equations (NODEs) by engaging Galerkin method and using the orthogonality of shape functions for the simply supported boundary conditions. Then, the resulting system of NODEs is solved numerically by employing the built-in Mathematica function, "NDSolve". Next, the vibration attenuation performance is evaluated and sensitivity of the closed-loop system is investigated for several control parameters and the external disturbance parameters. The proposed solution in open loop configuration is validated by finite element (FE) package ABAQUS both in the spatial domain and for the time-/frequency-dependent response.
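The modelling chain (von Karman nonlinearity, Galerkin projection, PD feedback, numerical time integration) can be caricatured by a single-mode reduction, which yields a Duffing-type oscillator under PD control. A sketch with invented modal constants, not the paper's plate data:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single-mode Galerkin reduction of a von Karman-type plate: the cubic
# stiffness beta stands in for the geometric nonlinearity, and the piezo
# sensor/actuator pair is idealized as PD state feedback.
omega, zeta, beta = 2 * np.pi * 40.0, 0.01, 5.0e6   # hypothetical modal data
kp, kd = 0.5 * omega**2, 0.1 * omega                # PD feedback gains

def rhs(t, s):
    q, qdot = s
    force = 1.0e3 * np.sin(2 * np.pi * 35.0 * t)    # external disturbance
    control = -(kp * q + kd * qdot)                 # PD law via piezo pair
    return [qdot, force + control - 2 * zeta * omega * qdot
            - omega**2 * q - beta * q**3]

sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0], max_step=1e-4)
print("peak modal amplitude with control on:", np.max(np.abs(sol.y[0])))
```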
Individual heterogeneity and identifiability in capture-recapture models
Link, W.A.
2004-01-01
Individual heterogeneity in detection probabilities is a far more serious problem for capture-recapture modeling than has previously been recognized. In this note, I illustrate that population size is not an identifiable parameter under the general closed population mark-recapture model Mh. The problem of identifiability is obvious if the population includes individuals with pi = 0, but persists even when it is assumed that individual detection probabilities are bounded away from zero. Identifiability may be attained within parametric families of distributions for pi, but not among parametric families of distributions. Consequently, in the presence of individual heterogeneity in detection probability, capture-recapture analysis is strongly model dependent.
Stefanutti, Luca; Robusto, Egidio; Vianello, Michelangelo; Anselmi, Pasquale
2013-06-01
A formal model is proposed that decomposes the implicit association test (IAT) effect into three process components: stimuli discrimination, automatic association, and termination criterion. Both response accuracy and reaction time are considered. Four independent and parallel Poisson processes, one for each of the four label categories of the IAT, are assumed. The model parameters are the rate at which information accrues on the counter of each process and the amount of information that is needed before a response is given. The aim of this study is to present the model and an illustrative application in which the process components of a Coca-Pepsi IAT are decomposed.
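Because each counter accrues Poisson counts at a constant rate until a fixed criterion is reached, a response time in this framework is a Gamma-distributed first-passage time, and accuracy follows from which counter finishes first. A toy two-counter race in Python (rates and threshold invented, not fitted IAT values):

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_race_trial(rate_correct, rate_error, threshold):
    """One trial as a race between two Poisson counters: information accrues
    at a constant rate on each counter, and a response is given when one
    counter first collects `threshold` counts (a Gamma waiting time)."""
    t_correct = rng.gamma(shape=threshold, scale=1.0 / rate_correct)
    t_error = rng.gamma(shape=threshold, scale=1.0 / rate_error)
    return (t_correct < t_error), min(t_correct, t_error)

# Compatible block: strong automatic association boosts the correct rate;
# incompatible block: weaker correct rate, same termination criterion.
for name, rate in [("compatible", 12.0), ("incompatible", 8.0)]:
    trials = [poisson_race_trial(rate, 4.0, threshold=20) for _ in range(5000)]
    acc = np.mean([ok for ok, _ in trials])
    rt = np.mean([t for _, t in trials])
    print(f"{name}: accuracy={acc:.3f}, mean RT={rt:.2f} s")
```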
NASA Technical Reports Server (NTRS)
Venable, D. D.
1980-01-01
A radiative transfer computer model was developed to characterize the total flux of chlorophyll a fluoresced or backscattered photons when laser radiation is incident on turbid water that contains a non-homogeneous suspension of inorganic sediments and phytoplankton. The radiative transfer model is based on the Monte Carlo technique and assumes that: (1) the aquatic medium can be represented by a stratified concentration profile; and (2) appropriate optical parameters can be defined for each layer. The model was designed to minimize the required computer resources and run time. Results are presented for an Anacystis marinus culture.
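The heart of such a simulation is a photon random walk through stratified layers, each with its own absorption and scattering coefficients. A deliberately minimal 1D Python sketch (placeholder optical properties; free paths that cross an interface are sampled from the starting layer's attenuation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Layered water column: per-layer absorption and scattering coefficients
# [1/m]; the values are placeholders, not measured optical properties.
layer_bottoms = np.array([2.0, 5.0, 10.0])    # layer interface depths [m]
absorb = np.array([0.10, 0.25, 0.40])
scatter = np.array([0.30, 0.60, 0.20])

def trace_photon():
    """1D Monte Carlo walk: sample a free path from the local attenuation,
    then absorb or scatter (isotropically in +/- z)."""
    z, mu = 0.0, 1.0                          # depth and direction
    while True:
        i = int(np.searchsorted(layer_bottoms, z))
        c = absorb[i] + scatter[i]
        z += mu * rng.exponential(1.0 / c)
        if z < 0.0:
            return "backscattered"
        if z >= layer_bottoms[-1]:
            return "transmitted"
        if rng.random() < absorb[i] / c:
            return "absorbed"
        mu = rng.choice([-1.0, 1.0])          # isotropic 1D scatter

fates = [trace_photon() for _ in range(20000)]
for f in ("backscattered", "absorbed", "transmitted"):
    print(f, fates.count(f) / len(fates))
```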
Graphical Models for Ordinal Data
Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji
2014-01-01
A graphical model for ordinal variables is considered, where it is assumed that the data are generated by discretizing the marginal distributions of a latent multivariate Gaussian distribution. The relationships between these ordinal variables are then described by the underlying Gaussian graphical model and can be inferred by estimating the corresponding concentration matrix. Direct estimation of the model is computationally expensive, but an approximate EM-like algorithm is developed to provide an accurate estimate of the parameters at a fraction of the computational cost. Numerical evidence based on simulation studies shows the strong performance of the algorithm, which is also illustrated on data sets on movie ratings and an educational survey.
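The assumed data-generating process is easy to reproduce, which is also how simulation studies for such models are typically set up: draw from a latent Gaussian whose concentration matrix encodes the graph, then discretize through thresholds. A short sketch with an invented sparse concentration matrix:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed generating process: latent multivariate Gaussian with a sparse
# concentration (inverse covariance) matrix, discretized through
# per-variable thresholds to give 4-level ordinal responses.
K = np.array([[1.0, 0.4, 0.0],
              [0.4, 1.2, -0.3],
              [0.0, -0.3, 1.0]])          # concentration matrix (the graph)
Sigma = np.linalg.inv(K)
thresholds = np.array([-1.0, 0.0, 1.0])   # common cut points, for simplicity

latent = rng.multivariate_normal(np.zeros(3), Sigma, size=1000)
ordinal = np.searchsorted(thresholds, latent)   # values in {0, 1, 2, 3}
print(np.corrcoef(ordinal, rowvar=False).round(2))
```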
Development of a parameter optimization technique for the design of automatic control systems
NASA Technical Reports Server (NTRS)
Whitaker, P. H.
1977-01-01
Parameter optimization techniques for the design of linear automatic control systems that are applicable to both continuous and digital systems are described. The model performance index is used as the optimization criterion because of the physical insight that can be attached to it. The design emphasis is to start with the simplest system configuration that experience indicates would be practical. Design parameters are specified, and a digital computer program is used to select that set of parameter values which minimizes the performance index. The resulting design is examined, and complexity, through the use of more complex information processing or more feedback paths, is added only if performance fails to meet operational specifications. System performance specifications are assumed to be such that the desired step function time response of the system can be inferred.
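The described loop (fix a simple structure, tune its free gains to minimize a model performance index) can be sketched generically: simulate the closed loop, score the integrated squared deviation from a reference model's step response, and hand that index to a numerical optimizer. The plant, reference model, and gains below are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

dt, steps = 1e-3, 3000   # 3 s of simulated step response

def model_response():
    """Desired step response: second-order model, wn = 4 rad/s, zeta = 0.7."""
    wn, zeta = 4.0, 0.7
    x, out = np.zeros(2), np.empty(steps)
    for k in range(steps):
        x += dt * np.array([x[1], wn**2 * (1 - x[0]) - 2 * zeta * wn * x[1]])
        out[k] = x[0]
    return out

def closed_loop_response(gains):
    """Plant 1/(s(s+1)) under PD control, Euler-integrated step response."""
    kp, kd = gains
    x, out = np.zeros(2), np.empty(steps)
    for k in range(steps):
        u = kp * (1 - x[0]) - kd * x[1]
        x += dt * np.array([x[1], -x[1] + u])
        out[k] = x[0]
    return out

desired = model_response()
def performance_index(gains):   # model PI: integral of squared model error
    return np.sum((closed_loop_response(gains) - desired) ** 2) * dt

# An exact match exists at kp = wn^2 = 16, kd = 2*zeta*wn - 1 = 4.6.
res = minimize(performance_index, x0=[5.0, 1.0], method="Nelder-Mead")
print("optimal gains:", res.x, " PI:", res.fun)
```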
Fault Detection of Bearing Systems through EEMD and Optimization Algorithm
Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan
2017-01-01
This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD) based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner-race, outer-race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IMFs) and extracting statistical features is introduced to develop a damage-sensitive parameter vector. Finally, PCA and Isomap algorithm are used to classify and visualize this parameter vector, to separate damage characteristics from healthy bearing components. Moreover, the PSO-based optimization algorithm improves the classification performance by selecting proper weightings for the parameter vector, to maximize the visualization effect of separating and grouping of parameter vectors in three-dimensional space.
Uncertainty in dual permeability model parameters for structured soils.
Arora, B; Mohanty, B P; McGuire, J T
2012-01-01
Successful application of dual permeability models (DPM) to predict contaminant transport is contingent upon measured or inversely estimated soil hydraulic and solute transport parameters. The difficulty in unique identification of parameters for the additional macropore- and matrix-macropore interface regions, and knowledge about requisite experimental data for DPM, has not been resolved to date. Therefore, this study quantifies uncertainty in dual permeability model parameters of experimental soil columns with different macropore distributions (single macropore, and low- and high-density multiple macropores). Uncertainty evaluation is conducted using adaptive Markov chain Monte Carlo (AMCMC) and conventional Metropolis-Hastings (MH) algorithms while assuming 10 out of 17 parameters to be uncertain or random. Results indicate that AMCMC resolves parameter correlations and exhibits fast convergence for all DPM parameters while MH displays large posterior correlations for various parameters. This study demonstrates that the choice of parameter sampling algorithms is paramount in obtaining unique DPM parameters when information on covariance structure is lacking, or else additional information on parameter correlations must be supplied to resolve the problem of equifinality of DPM parameters. This study also highlights the placement and significance of the matrix-macropore interface in flow experiments of soil columns with different macropore densities. Histograms for certain soil hydraulic parameters display tri-modal characteristics, implying that macropores are drained first, followed by the interface region, and then by pores of the matrix domain in drainage experiments. Results indicate that the hydraulic properties and behavior of the matrix-macropore interface are not only a function of the saturated hydraulic conductivity of the macropore-matrix interface (Ksa) and macropore tortuosity (lf) but also of other parameters of the matrix and macropore domains.
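The contrast drawn above between adaptive MCMC and plain Metropolis-Hastings comes down to the proposal covariance: the adaptive sampler learns it from the chain itself, which is what resolves strong parameter correlations. A Haario-style sketch on a stand-in correlated posterior (a real application would evaluate the flow/transport model inside log_post):

```python
import numpy as np

rng = np.random.default_rng(4)

def log_post(theta):
    # Stand-in log-posterior with strongly correlated parameters; a real
    # DPM application would run the flow/transport model here.
    C = np.array([[1.0, 0.9], [0.9, 1.0]])
    return -0.5 * theta @ np.linalg.solve(C, theta)

def adaptive_metropolis(n=10000, adapt_every=200, eps=1e-8):
    """Haario-style adaptive Metropolis: the Gaussian proposal covariance
    is periodically refreshed from the chain's empirical covariance."""
    d = 2
    sd = 2.4 ** 2 / d                      # standard AM scaling factor
    chain = np.zeros((n, d))
    cov = 0.1 * np.eye(d)
    lp, accept = log_post(chain[0]), 0
    for i in range(1, n):
        if i % adapt_every == 0:
            cov = sd * (np.cov(chain[:i].T) + eps * np.eye(d))
        prop = rng.multivariate_normal(chain[i - 1], cov)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            chain[i], lp = prop, lp_prop
            accept += 1
        else:
            chain[i] = chain[i - 1]
    return chain, accept / (n - 1)

chain, rate = adaptive_metropolis()
print("acceptance rate:", round(rate, 2))
print("posterior covariance estimate:\n", np.cov(chain[2000:].T).round(2))
```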
Kim, Eun Sook; Wang, Yan
2017-01-01
Population heterogeneity in growth trajectories can be detected with growth mixture modeling (GMM). It is common that researchers compute composite scores of repeated measures and use them as multiple indicators of growth factors (baseline performance and growth) assuming measurement invariance between latent classes. Considering that the assumption of measurement invariance does not always hold, we investigate the impact of measurement noninvariance on class enumeration and parameter recovery in GMM through a Monte Carlo simulation study (Study 1). In Study 2, we examine the class enumeration and parameter recovery of the second-order growth mixture modeling (SOGMM) that incorporates measurement models at the first order level. Thus, SOGMM estimates growth trajectory parameters with reliable sources of variance, that is, common factor variance of repeated measures and allows heterogeneity in measurement parameters between latent classes. The class enumeration rates are examined with information criteria such as AIC, BIC, sample-size adjusted BIC, and hierarchical BIC under various simulation conditions. The results of Study 1 showed that the parameter estimates of baseline performance and growth factor means were biased to the degree of measurement noninvariance even when the correct number of latent classes was extracted. In Study 2, the class enumeration accuracy of SOGMM depended on information criteria, class separation, and sample size. The estimates of baseline performance and growth factor mean differences between classes were generally unbiased but the size of measurement noninvariance was underestimated. Overall, SOGMM is advantageous in that it yields unbiased estimates of growth trajectory parameters and more accurate class enumeration compared to GMM by incorporating measurement models.
Elaboration of the α-model derived from the BCS theory of superconductivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, David C.
2013-10-14
The single-band α-model of superconductivity (Padamsee et al 1973 J. Low Temp. Phys. 12 387) is a popular model that was adapted from the single-band Bardeen–Cooper–Schrieffer (BCS) theory of superconductivity mainly to allow fits to electronic heat capacity versus temperature T data that deviate from the BCS prediction. The model assumes that the normalized superconducting order parameter Δ(T)/Δ(0) and therefore the normalized London penetration depth λL(T)/λL(0) are the same as in BCS theory, calculated using the BCS value αBCS ≈ 1.764 of α ≡ Δ(0)/kBTc, where kB is Boltzmann's constant and Tc is the superconducting transition temperature. On the other hand, to calculate the electronic free energy, entropy, heat capacity and thermodynamic critical field versus T, the α-model takes α to be an adjustable parameter. Here we write the BCS equations and limiting behaviors for the superconducting state thermodynamic properties explicitly in terms of α, as needed for calculations within the α-model, and present plots of the results versus T and α that are compared with the respective BCS predictions. Mechanisms such as gap anisotropy and strong coupling that can cause deviations of the thermodynamics from the BCS predictions, especially the heat capacity jump at Tc, are considered. Extensions of the α-model that have appeared in the literature, such as the two-band model, are also discussed. Tables of values of Δ(T)/Δ(0), the normalized London parameter Λ(T)/Λ(0) and λL(T)/λL(0) calculated from the BCS theory using α = αBCS are provided, which are the same in the α-model by assumption. Tables of values of the entropy, heat capacity and thermodynamic critical field versus T for seven values of α, including αBCS, are also presented.
Modeling of the Multiparameter Inverse Task of Transient Thermography
NASA Technical Reports Server (NTRS)
Plotnikov, Y. A.
1998-01-01
Transient thermography exploits surface temperature variations of a preheated workpiece that are perturbed by delaminations, cracks, voids, corroded regions, etc. Often, it is enough to detect these changes to declare a defect in a workpiece. It is also desirable to obtain additional information about the defect from the thermal response. The planar size, depth, and thermal resistance of the detected defects are the parameters of interest. In this paper a digital image processing technique is applied to simulated thermal responses in order to obtain the geometry of inclusion-type defects in a flat panel. A three-dimensional finite difference model in Cartesian coordinates is used for the numerical simulations. Typical physical properties of polymer graphite composites are assumed. Using different informative parameters of the thermal response for depth estimation is discussed.
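The forward part of such simulations is a transient heat-conduction solve in which a defect appears as a locally altered thermal property; the surface cooling curve is the informative signal. A compact 1D explicit finite-difference sketch (generic composite-like property values; variable diffusivity is handled in the simplest non-conservative way):

```python
import numpy as np

# Explicit 1D finite-difference model of flash thermography: a surface heat
# pulse diffuses into a plate; a subsurface defect is modeled as a layer of
# reduced diffusivity. Property values are generic, not calibrated ones.
L, nz = 4e-3, 80                       # plate thickness [m], grid points
dz = L / nz
alpha = np.full(nz, 4e-7)              # thermal diffusivity [m^2/s]
alpha[30:34] = 4e-8                    # buried low-diffusivity defect
dt = 0.2 * dz**2 / alpha.max()         # within the explicit stability limit
T = np.zeros(nz)
T[0] = 1.0                             # normalized flash heating at surface

surface = []
for step in range(20000):
    lap = np.zeros(nz)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
    T += dt * alpha * lap              # insulated ends (lap = 0 there)
    if step % 200 == 0:
        surface.append(T[0])
print(np.array(surface[:10]).round(4))  # start of the surface cooling curve
```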
On the formalism of dark energy accretion onto black- and worm-holes
NASA Astrophysics Data System (ADS)
Martín-Moruno, Prado
2008-01-01
In this work a general formalism for the accretion of dark energy onto astronomical objects, black holes and wormholes, is considered. It is shown that in models with four dimensions or more, any singularity with a divergence in the Hubble parameter may be avoided by a big trip, if it is assumed that there is no coupling between the bulk and this accreting object. If this is not the case in more than four dimensions, the evolution of the cosmological object depends on the particular model.
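For orientation, such accretion formalisms build on a test-fluid accretion rate of the Babichev et al. type; the form below is supplied here as an assumption for context, since the abstract itself gives no equations:

```latex
% Test-fluid dark-energy accretion rate of the Babichev-Dokuchaev-Eroshenko
% type (units with G = c = 1; A is a dimensionless constant of order unity
% fixed by the flow solution):
\dot{M} = 4 \pi A M^{2} \left[ \rho_\infty + p(\rho_\infty) \right]
% For phantom energy, rho + p < 0, so a black hole loses mass, while a
% wormhole throat (opposite overall sign) grows; that growth is the runaway
% behind the "big trip" scenario discussed above.
```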
Axion induced oscillating electric dipole moments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, Christopher T.
In this study, the axion electromagnetic anomaly induces an oscillating electric dipole for any magnetic dipole. This is a low energy theorem which is a consequence of the space-time dependent cosmic background field of the axion. The electron will acquire an oscillating electric dipole of frequency m_a and strength ~10^-32 e-cm, within four orders of magnitude of the present standard model DC limit, and two orders of magnitude above the nucleon, assuming standard axion model and dark matter parameters. This may suggest sensitive new experimental venues for the axion dark matter search.
NASA Astrophysics Data System (ADS)
Riabkov, Dmitri
Compartment modeling of dynamic medical image data implies that the concentration of the tracer over time in a particular region of the organ of interest is well-modeled as a convolution of the tissue response with the tracer concentration in the blood stream. The tissue response is different for different tissues while the blood input is assumed to be the same for different tissues. The kinetic parameters characterizing the tissue responses can be estimated by blind identification methods. These algorithms use the simultaneous measurements of concentration in separate regions of the organ; if the regions have different responses, the measurement of the blood input function may not be required. In this work it is shown that the blind identification problem has a unique solution for two-compartment model tissue response. For two-compartment model tissue responses in dynamic cardiac MRI imaging conditions with gadolinium-DTPA contrast agent, three blind identification algorithms are analyzed here to assess their utility: Eigenvector-based Algorithm for Multichannel Blind Deconvolution (EVAM), Cross Relations (CR), and Iterative Quadratic Maximum Likelihood (IQML). Comparisons of accuracy with conventional (not blind) identification techniques where the blood input is known are made as well. The statistical accuracies of estimation for the three methods are evaluated and compared for multiple parameter sets. The results show that the IQML method gives more accurate estimates than the other two blind identification methods. A proof is presented here that three-compartment model blind identification is not unique in the case of only two regions. It is shown that it is likely unique for the case of more than two regions, but this has not been proved analytically. For the three-compartment model the tissue responses in dynamic FDG PET imaging conditions are analyzed with the blind identification algorithms EVAM and Separable variables Least Squares (SLS). A method of identification that assumes that FDG blood input in the brain can be modeled as a function of time and several parameters (IFM) is analyzed also. Nonuniform sampling SLS (NSLS) is developed due to the rapid change of the FDG concentration in the blood during the early postinjection stage. Comparisons of accuracy of EVAM, SLS, NSLS and IFM identification techniques are made.
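Of the blind identification methods compared, the cross-relations idea is the most transparent: two regions driven by the same unmeasured input satisfy h2*y1 = h1*y2, which is linear in the unknown responses and solvable from the null space of a stacked convolution matrix. A numpy sketch with invented mono-exponential responses (the paper's tissue responses are compartment-model sums of exponentials):

```python
import numpy as np
from scipy.linalg import toeplitz, svd

def conv_matrix(y, L):
    """Full-convolution matrix: conv_matrix(y, L) @ h == np.convolve(y, h)."""
    col = np.r_[y, np.zeros(L - 1)]
    row = np.r_[y[0], np.zeros(L - 1)]
    return toeplitz(col, row)

def cross_relations(y1, y2, L):
    """Estimate two FIR tissue responses (length L) from two regions driven
    by the same, unmeasured blood input. The cross relation h2*y1 = h1*y2
    gives [T(y1), -T(y2)] [h2; h1] = 0, solved by the right singular vector
    of the smallest singular value (answer is up to a common scale/sign)."""
    A = np.hstack([conv_matrix(y1, L), -conv_matrix(y2, L)])
    _, _, Vt = svd(A)
    h = Vt[-1]
    return h[L:], h[:L]     # (h1, h2)

# Hypothetical responses and an unknown blood input curve.
t = np.arange(40, dtype=float)
h1_true = 0.8 * np.exp(-0.15 * t[:12])
h2_true = 0.5 * np.exp(-0.05 * t[:12])
x = np.exp(-0.5 * ((t - 8.0) / 3.0) ** 2)          # unmeasured input
y1, y2 = np.convolve(x, h1_true), np.convolve(x, h2_true)
h1, h2 = cross_relations(y1, y2, L=12)
print(np.corrcoef(h1, h1_true)[0, 1])              # ~ +/-1 up to scale/sign
```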
Fieselmann, Andreas; Dennerlein, Frank; Deuerling-Zheng, Yu; Boese, Jan; Fahrig, Rebecca; Hornegger, Joachim
2011-06-21
Filtered backprojection is the basis for many CT reconstruction tasks. It assumes constant attenuation values of the object during the acquisition of the projection data. Reconstruction artifacts can arise if this assumption is violated. For example, contrast flow in perfusion imaging with C-arm CT systems, which have acquisition times of several seconds per C-arm rotation, can cause this violation. In this paper, we derived and validated a novel spatio-temporal model to describe these kinds of artifacts. The model separates the temporal dynamics due to contrast flow from the scan and reconstruction parameters. We introduced derivative-weighted point spread functions to describe the spatial spread of the artifacts. The model allows prediction of reconstruction artifacts for given temporal dynamics of the attenuation values. Furthermore, it can be used to systematically investigate the influence of different reconstruction parameters on the artifacts. We have shown that with optimized redundancy weighting function parameters the spatial spread of the artifacts around a typical arterial vessel can be reduced by about 70%. Finally, an inversion of our model could be used as the basis for novel dynamic reconstruction algorithms that further minimize these artifacts.
Tug of war of molecular motors: the effects of uneven load sharing
NASA Astrophysics Data System (ADS)
Bouzat, Sebastián; Falo, Fernando
2011-12-01
We analyze theoretically the problem of cargo transport along microtubules by motors of two species with opposite polarities. We consider two different one-dimensional models previously developed in the literature: a quite widespread model which assumes equal force sharing, here referred to as the mean field model (MFM), and a stochastic model (SM) which considers individual motor-cargo links. We find that in generic situations, the MFM predicts larger cargo mean velocity, smaller mean run time and less frequent reversions than the SM. These phenomena are found to be the consequences of the load sharing assumptions and can be interpreted in terms of the probabilities of the different motility states. We also explore the influence of the viscosity in both models and the role of the stiffness of the motor-cargo links within the SM. Our results show that the mean cargo velocity is independent of the stiffness, while the mean run time decreases with such a parameter. We explore the case of symmetric forward and backward motors considering kinesin-1 parameters, and the problem of transport by kinesin-1 and cytoplasmic dyneins considering two different sets of parameters previously proposed for dyneins.
NASA Astrophysics Data System (ADS)
Smith, B. D.; White, J.; Kress, W. H.; Clark, B. R.; Barlow, J.
2016-12-01
Hydrogeophysical surveys have become an integral part of understanding the hydrogeological frameworks used in groundwater models. Regional models cover a large area where water well data are, at best, scattered and irregular. Since budgets are finite, priorities must be assigned to select optimal areas for geophysical surveys. For airborne electromagnetic (AEM) geophysical surveys, optimization of mapping depth and line spacing needs to take into account the objectives of the groundwater models. The approach discussed here uses a first-order, second-moment (FOSM) uncertainty analysis, which assumes an approximately linear relation between model parameters and observations. This assumption allows FOSM analyses to be applied to estimate the value of increased parameter knowledge in reducing forecast uncertainty. FOSM is used to facilitate optimization of yet-to-be-completed geophysical surveying to reduce model forecast uncertainty. The main objective of geophysical surveying is assumed to be estimating the values and spatial variation of hydrologic parameters (i.e., hydraulic conductivity), as well as mapping lower-permeability layers that influence the spatial distribution of recharge flux. The proposed data worth analysis was applied to the Mississippi Embayment Regional Aquifer Study (MERAS), which is being updated. The objective of MERAS is to assess the groundwater availability (status and trends) of the Mississippi embayment aquifer system. The study area covers portions of eight states including Alabama, Arkansas, Illinois, Kentucky, Louisiana, Mississippi, Missouri, and Tennessee. The active model grid covers approximately 70,000 square miles, and incorporates some 6,000 miles of major rivers and over 100,000 water wells. In the FOSM analysis, a dense network of pilot points was used to capture uncertainty in hydraulic conductivity and recharge. To simulate the effect of AEM flight lines, the prior uncertainty for hydraulic conductivity and recharge pilot points along potential flight lines was reduced. The FOSM forecast uncertainty estimates were then recalculated and compared to the base forecast uncertainty estimates. The resulting reduction in forecast uncertainty is a measure of the effect of the AEM survey on the model. Iterating through this process optimizes the flight line locations.
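Under the FOSM linearity assumption, data worth reduces to linear algebra: the posterior parameter covariance follows from the Jacobian and the prior covariances, and candidate AEM flight lines are scored by how much they shrink the forecast variance. A schematic numpy sketch with random stand-in sensitivities:

```python
import numpy as np

def fosm_forecast_var(J, C_obs, C_par, s):
    """Linear-Bayes (FOSM) posterior forecast variance: J is the observation
    sensitivity (Jacobian) matrix, C_obs and C_par the observation-noise and
    prior parameter covariances, s the forecast's parameter sensitivity."""
    post_cov = np.linalg.inv(J.T @ np.linalg.solve(C_obs, J)
                             + np.linalg.inv(C_par))
    return s @ post_cov @ s

rng = np.random.default_rng(5)
npar = 20                                 # pilot-point K / recharge values
J_base = rng.normal(size=(30, npar))      # existing head/flux observations
s = rng.normal(size=npar)                 # water-availability forecast sens.
C_obs, C_par = np.eye(30), np.eye(npar) * 4.0

base = fosm_forecast_var(J_base, C_obs, C_par, s)
# "Flying" an AEM survey: append rows that directly inform parameters 0-9,
# i.e. reduce prior uncertainty along the candidate flight lines.
J_aem = np.vstack([J_base, np.eye(npar)[:10] * 2.0])
flown = fosm_forecast_var(J_aem, np.eye(30 + 10), C_par, s)
print(f"forecast variance: base={base:.2f}, with AEM={flown:.2f}")
```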
Systematic simulations of modified gravity: chameleon models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brax, Philippe; Davis, Anne-Christine; Li, Baojiu
2013-04-01
In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 h Mpc^-1, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as a guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.
C-field cosmological models: revisited
NASA Astrophysics Data System (ADS)
Yadav, Anil Kumar; Tawfiq Ali, Ahmad; Ray, Saibal; Rahaman, Farook; Hossain Sardar, Iftikar
2016-12-01
We investigate plane symmetric spacetime filled with perfect fluid in the C-field cosmology of Hoyle and Narlikar. A new class of exact solutions has been obtained by considering the creation field C as a function of time only. To get the deterministic solution, it has been assumed that the rate of creation of matter-energy density is proportional to the strength of the existing C-field energy density. Several physical aspects and geometrical properties of the models are discussed in detail, especially showing that some of our solutions of C-field cosmology are free from singularity in contrast to the Big Bang cosmology. A comparative study has been carried out between two models, one singular and the other nonsingular, by contrasting the behaviour of the physical parameters. We note that the model in a unique way represents both the features of the accelerating as well as decelerating universe depending on the parameters and thus seems to provide glimpses of the oscillating or cyclic model of the universe without invoking any other agent or theory in allowing cyclicity.
Enrichment in a stoichiometric model of two producers and one consumer.
Lin, Laurence Hao-Ran; Peckham, Bruce B; Stech, Harlan W; Pastor, John
2012-01-01
We consider a stoichiometric population model of two producers and one consumer. Stoichiometry can be thought of as the tracking of food quality in addition to food quantity. Our model assumes a reduced rate of conversion of biomass from producer to consumer when food quality is low. The model is open for carbon but closed for nutrient. The introduction of the second producer, which competes with the first, leads to new equilibria, new limit cycles, and new bifurcations. The focus of this paper is on the bifurcations which are the result of enrichment. The primary parameters we vary are the growth rates of both producers. Secondary variable parameters are the total nutrients in the system, and the producer nutrient uptake rates. The possible equilibria are: no-life, one-producer, coexistence of both producers, the consumer coexisting with either producer, and the consumer coexisting with both producers. We observe limit cycles in the latter three coexistence combinations. Bifurcation diagrams along with corresponding representative time series summarize the behaviours observed for this model.
The effect of friction in the hold down post spherical bearings on hold down post loads
NASA Technical Reports Server (NTRS)
Richardson, James A.
1990-01-01
The effect of friction at the connection of the Solid Rocket Booster (SRB) aft skirt and the mobile launch platform (MLP) hold down posts was analyzed. A simplified model of the shuttle response during the Space Shuttle Main Engine (SSME) buildup was constructed. The model included the effect of stick-slip friction for the rotation of the skirt about the spherical bearing. Current finite element models assume the joint is completely frictionless in rotation and therefore no moment is transferred between the skirt and the hold down posts. The model was partially verified against test data and preliminary parameter studies were performed. The parameter studies indicated that the coefficient of friction strongly influenced the moment on the hold down posts. The coefficient of friction had little effect on hold down post vertical loads, however. Further calibration of the model is necessary before the effect of friction on the hold down post horizontal loads can be analyzed.
NASA Astrophysics Data System (ADS)
Aizawa, Hirohito; Kuroki, Kazuhiko
2018-03-01
We present a first-principles band calculation for the quasi-one-dimensional (Q1D) organic superconductor (TMTSF)2ClO4. An effective tight-binding model with the TMTSF molecule regarded as the site is derived from a calculation based on maximally localized Wannier orbitals. We apply a two-particle self-consistent (TPSC) analysis by using a four-site Hubbard model, which is composed of the tight-binding model and an onsite (intramolecular) repulsive interaction, which serves as a variable parameter. We assume that the pairing mechanism is mediated by the spin fluctuation, and the sign of the superconducting gap changes between the inner and outer Fermi surfaces, which corresponds to a d-wave gap function in a simplified Q1D model. With the parameters we adopt, the critical temperature for superconductivity estimated by the TPSC approach is approximately 1 K, which is consistent with experiment.
Fine-structure constant constraints on dark energy. II. Extending the parameter space
NASA Astrophysics Data System (ADS)
Martins, C. J. A. P.; Pinho, A. M. M.; Carreira, P.; Gusart, A.; López, J.; Rocha, C. I. S. A.
2016-01-01
Astrophysical tests of the stability of fundamental couplings, such as the fine-structure constant α, are a powerful probe of new physics. Recently these measurements, combined with local atomic clock tests and Type Ia supernova and Hubble parameter data, were used to constrain the simplest class of dynamical dark energy models where the same degree of freedom is assumed to provide both the dark energy and (through a dimensionless coupling, ζ, to the electromagnetic sector) the α variation. One caveat of these analyses was that they were based on fiducial models where the dark energy equation of state was described by a single parameter (effectively its present-day value, w0). Here we relax this assumption and study broader dark energy model classes, including the Chevallier-Polarski-Linder and early dark energy parametrizations. Even in these extended cases we find that the current data constrain the coupling ζ at the 10^-6 level and w0 to a few percent (marginalizing over other parameters), thus confirming the robustness of earlier analyses. On the other hand, the additional parameters are typically not well constrained. We also highlight the implications of our results for constraints on violations of the weak equivalence principle and improvements to be expected from forthcoming measurements with high-resolution ultrastable spectrographs.
NASA Astrophysics Data System (ADS)
Alkharji, Mohammed N.
Most fracture characterization methods provide a general description of the fracture parameters as part of the reservoir parameters; the fracture interaction and geometry within the reservoir are given less attention. T-Matrix and Linear Slip effective medium fracture models are implemented to invert the elastic tensor for the parameters and geometries of the fractures within the reservoir. The fracture inverse problem is an ill-posed, overdetermined, underconstrained, rank-deficient system of equations. Least-squares inverse methods are used to solve the problem. A good starting initial model for the parameters is a key factor in the reliability of the inversion. Most methods assume that the starting parameters are close to the solution to avoid inaccurate local minimum solutions. Prior knowledge of the fracture parameters and their geometry is not available. We develop a hybrid, enumerative and Gauss-Newton, method that estimates the fracture parameters and geometry from the elastic tensor with no prior knowledge of the initial parameter values. The fracture parameters are separated into two groups. The first group contains the fracture parameters with no prior information, and the second group contains the parameters with known prior information. Different models are generated from the first-group parameters by sampling the solution space over a predefined range of possible solutions for each parameter. Each model generated by the first group is fixed and used as a starting model to invert for the second group of parameters using the Gauss-Newton method. The least-squares residual between the observed elastic tensor and the estimated elastic tensor is calculated for each model. The model parameters that yield the smallest least-squares residual correspond to the correct fracture reservoir parameters and geometry. Two synthetic examples of fractured reservoirs with oil and gas saturations were inverted with no prior information about the fracture properties. The results showed that the hybrid algorithm successfully predicted the fracture parametrization, geometry, and fluid content within the modeled reservoir. The method was also applied to an elastic tensor extracted from the Weyburn field in Saskatchewan, Canada. The solution suggested no presence of fractures but only a VTI system caused by the shale layering in the targeted reservoir; this interpretation is supported by other Weyburn field data.
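The hybrid search described above can be written down generically: enumerate a coarse grid over the parameters lacking prior information, run a local Gauss-Newton style least-squares fit for the remainder at each grid node, and keep the node with the smallest residual. A toy Python sketch with an invented smooth forward model standing in for the elastic-tensor computation:

```python
import numpy as np
from scipy.optimize import least_squares

def forward(group1, group2):
    """Stand-in forward model mapping 'fracture parameters' to a flattened
    'elastic tensor' (here just a smooth nonlinear map for illustration)."""
    g = np.r_[group1, group2]
    return np.array([g[0] * np.cos(g[2]), g[0] * np.sin(g[2]),
                     g[1] * g[2], g[0] + g[1] ** 2, g[1] * np.exp(-g[2])])

observed = forward([1.3], [0.7, 0.9])      # synthetic "observed" tensor

best = (np.inf, None, None)
# Group 1 (no prior information): enumerate a coarse grid over its range.
for g1 in np.linspace(0.1, 3.0, 30):
    # Group 2 (prior information available): local Gauss-Newton style
    # least squares from a fixed starting guess, with group 1 held fixed.
    res = least_squares(lambda g2: forward([g1], g2) - observed,
                        x0=[0.5, 0.5], method="lm")
    if res.cost < best[0]:
        best = (res.cost, g1, res.x)
print("residual:", best[0], "group1:", best[1], "group2:", best[2])
```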
Sensitivity analysis for the coupling of a subglacial hydrology model with a 3D ice-sheet model.
NASA Astrophysics Data System (ADS)
Bertagna, L.; Perego, M.; Gunzburger, M.; Hoffman, M. J.; Price, S. F.
2017-12-01
When studying the movement of ice sheets, one of the most important factors that influence the velocity of the ice is the amount of friction against the bedrock. Usually, this is modeled by a friction coefficient that may depend on the bed geometry and other quantities, such as the temperature and/or water pressure at the ice-bedrock interface. These quantities are often assumed to be known (either by indirect measurements or by means of parameter estimation) and constant in time. Here, we present a 3D computational model for the simulation of the ice dynamics which incorporates a 2D model proposed by Hewitt (2011) for the subglacial water pressure. The hydrology model is fully coupled with the Blatter-Pattyn model for the ice sheet flow, as the subglacial water pressure appears in the expression for the ice friction coefficient, and the ice velocity appears as a source term in the hydrology model. We will present results on real geometries, and perform a sensitivity analysis with respect to the hydrology model parameters.
Modelling of hydrogen conditioning, retention and release in Tore Supra
NASA Astrophysics Data System (ADS)
Grisolia, C.; Horton, L. D.; Ehrenberg, J. K.
1995-04-01
A model based on local mixing, previously developed at JET to explain the recovery of tritium after the first PTE experiment, is extended with a 0D plasma particle balance model and applied to data from Tore Supra wall saturation experiments. With only two free parameters, representing the diffusion of hydrogen atoms and the volume recombination of hydrogen atoms into molecules, the model can reproduce the experimental data. The time evolution of the after-shot outgassing and the integral amount of particles recovered after the shot (assuming 13 m² of interacting surface between plasma and walls) are in good agreement with the experimental observations. The same set of parameters allows the model to simulate the after-shot outgassing of five consecutive discharges. However, the model fails to predict the observed saturation of the walls by the plasma. Results from helium glow discharge (HeGD) can only be partially described: good agreement with the experimental hydrogen release and its time evolution during HeGD is observed, but the model fails to describe the stability of a saturated graphite wall.
Simple model of surface roughness for binary collision sputtering simulations
NASA Astrophysics Data System (ADS)
Lindsey, Sloan J.; Hobler, Gerhard; Maciążek, Dawid; Postawa, Zbigniew
2017-02-01
It has been shown that surface roughness can strongly influence the sputtering yield, especially at glancing incidence angles, where the inclusion of surface roughness leads to an increase in sputtering yields. In this work, we propose a simple one-parameter model (the "density gradient model") which imitates surface roughness effects. In the model, the target's atomic density is assumed to vary linearly between the actual material density and zero. The layer width is the sole model parameter. The model has been implemented in the binary collision simulator IMSIL and has been evaluated against various geometric surface models for 5 keV Ga ions impinging on an amorphous Si target. To aid the construction of a realistic rough surface topography, we have performed MD simulations of sequential 5 keV Ga impacts on an initially crystalline Si target. We show that the new model effectively reproduces the sputtering yield, with only minor variations in the energy and angular distributions of sputtered particles. The success of the density gradient model is attributed to a reduction of the reflection coefficient, leading to increased sputtering yields, similar in effect to surface roughness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos, M. Vargas dos; Reis, R.R.R.; Waga, I., E-mail: vargas@if.ufrj.br, E-mail: ribamar@if.ufrj.br, E-mail: ioav@if.ufrj.br
2016-02-01
We revisit the kink-like parametrization of the deceleration parameter q(z) [1], which considers a transition, at redshift z_t, from cosmic deceleration to acceleration. In this parametrization the initial (z >> z_t) value of the deceleration parameter is q_i, its final value (at z = -1) is q_f, and the duration of the transition is parametrized by τ. Assuming a flat space geometry, we obtain constraints on the free parameters of the model using recent data from type Ia supernovae (SN Ia), baryon acoustic oscillations (BAO), the cosmic microwave background (CMB) and the Hubble parameter H(z). The use of H(z) data introduces an explicit dependence of the combined likelihood on the present value of the Hubble parameter H_0, allowing us to explore the influence of different priors when marginalizing over this parameter. We also study the importance of the CMB information in the results by considering data from WMAP7, WMAP9 (Wilkinson Microwave Anisotropy Probe, 7 and 9 years) and Planck 2015. We show that the contours and best fit do not depend much on the CMB data used and that the new BAO data are responsible for most of the improvement in the results. Assuming a flat space geometry, q_i = 1/2, and expressing the present value of the deceleration parameter q_0 as a function of the other three free parameters, we obtain z_t = 0.67^{+0.10}_{-0.08}, τ = 0.26^{+0.14}_{-0.10} and q_0 = -0.48^{+0.11}_{-0.13} at the 68% confidence level, with a uniform prior on H_0. If in addition we fix q_f = -1, as in flat ΛCDM, DGP and Chaplygin quartessence (special models described by our parametrization), we get z_t = 0.66^{+0.03}_{-0.04}, τ = 0.33^{+0.04}_{-0.04} and q_0 = -0.54^{+0.05}_{-0.07}, in excellent agreement with flat ΛCDM, for which τ = 1/3. For flat wCDM, another dark energy model described by our parametrization, we obtain the constraint on the equation of state parameter -1.22 < w < -0.78 at more than 99% confidence level.
Deletion Diagnostics for the Generalised Linear Mixed Model with independent random effects
Ganguli, B.; Roy, S. Sen; Naskar, M.; Malloy, E. J.; Eisen, E. A.
2015-01-01
The Generalised Linear Mixed Model (GLMM) is widely used for modelling environmental data. However, such data are prone to influential observations, which can distort the estimated exposure-response curve, particularly in regions of high exposure. Deletion diagnostics for iterative estimation schemes commonly derive the deleted estimates from a single iteration of the full system, holding certain pivotal quantities, such as the information matrix, constant. In this paper, we present an approximate formula for the deleted estimates and Cook's distance for the GLMM which does not assume that the estimates of variance parameters are unaffected by deletion. The procedure allows the user to calculate standardised DFBETAs for mean as well as variance parameters. In certain cases, such as when the GLMM is used as a device for smoothing, such residuals for the variance parameters are interesting in their own right. In general, the procedure leads to deleted estimates of mean parameters which are corrected for the effect of deletion on variance components, as estimation of the two sets of parameters is interdependent. The probabilistic behaviour of these residuals is investigated and a simulation-based procedure is suggested for their standardisation. The method is used to identify influential individuals in an occupational cohort exposed to silica. The results show that failure to conduct post-model-fitting diagnostics for variance components can lead to erroneous conclusions about the fitted curve and unstable confidence intervals. PMID:26626135
NASA Astrophysics Data System (ADS)
Chen, Y.; Li, J.; Xu, H.
2016-01-01
Physically based distributed hydrological models (hereafter PBDHMs) divide the terrain of a catchment into a number of grid cells at fine resolution and assign terrain data and precipitation to individual cells. They are regarded as having the potential to improve catchment hydrological process simulation and prediction. In principle, PBDHMs derive model parameters directly from terrain properties, so no parameter calibration is needed; unfortunately, the uncertainties associated with this derivation are very high, which has limited their application in flood forecasting, so parameter optimization may still be necessary. This study has two main purposes: the first is to propose a parameter optimization method for PBDHMs in catchment flood forecasting using a particle swarm optimization (PSO) algorithm, to test its competence and to improve its performance; the second is to explore the possibility of improving PBDHM capability in catchment flood forecasting through parameter optimization. Based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that can be used for all PBDHMs. Then, with the Liuxihe model, a PBDHM proposed for catchment flood forecasting, as the study model, an improved PSO algorithm is developed for its parameter optimization. The improvements include the adoption of a linearly decreasing inertia weight strategy and an arccosine function strategy for adjusting the acceleration coefficients. The method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used effectively for Liuxihe model parameter optimization and can substantially improve model capability in catchment flood forecasting, thus demonstrating that parameter optimization is necessary to improve the flood forecasting capability of PBDHMs. It was also found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
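A minimal sketch of a PSO calibrator of this kind, assuming (since the abstract does not give the exact formulas) a linearly decreasing inertia weight and an arccosine-based schedule for the acceleration coefficients; all names and default values are illustrative.

    import numpy as np

    def pso_calibrate(loss, bounds, n_particles=20, n_iter=30,
                      w_max=0.9, w_min=0.4, c_start=2.5, c_end=0.5):
        """PSO with a linearly decreasing inertia weight and an assumed
        arccosine schedule for the acceleration coefficients."""
        lo = np.array([b[0] for b in bounds])
        hi = np.array([b[1] for b in bounds])
        rng = np.random.default_rng(0)
        x = rng.uniform(lo, hi, (n_particles, len(bounds)))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([loss(p) for p in x])
        g = pbest[pbest_f.argmin()].copy()
        for t in range(n_iter):
            frac = t / max(n_iter - 1, 1)
            w = w_max - (w_max - w_min) * frac       # linear decrease
            s = np.arccos(2 * frac - 1) / np.pi      # arccos schedule, 1 -> 0
            c1 = c_end + (c_start - c_end) * s       # cognitive term shrinks
            c2 = c_start - (c_start - c_end) * s     # social term grows
            r1, r2 = rng.random((2, *x.shape))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([loss(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[pbest_f.argmin()].copy()
        return g, pbest_f.min()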
NASA Astrophysics Data System (ADS)
Mansoori Kermani, Maryam; Dehestani, Maryam
2018-06-01
We modeled a one-dimensional actuator including the Casimir and electrostatic forces, perturbed by an external force with fractional damping. The movable electrode was assumed to oscillate under an anharmonic elastic force originating from a Murrell-Mottram or Lippincott potential. The nonlinear equations were solved via the Adomian decomposition method. The displacement of the electrode from its equilibrium position, its velocity, and its acceleration were described versus time. The changes in the displacement were also investigated as functions of the frequency of the external force and the voltage of the electrostatic force. The convergence of the Adomian method and the effect of the expansion order on the displacement versus time, frequency, and voltage were discussed. The pull-in parameter was obtained and compared with other models in the literature, and was described versus the equilibrium position and the anharmonicity constant.
Cracking on anisotropic neutron stars
NASA Astrophysics Data System (ADS)
Setiawan, A. M.; Sulaksono, A.
2017-07-01
We study the effect of cracking of a locally anisotropic neutron star (NS) due to small density fluctuations. The neutron star core is assumed to consist of leptons, nucleons and hyperons. A relativistic mean field model is used to describe the equation of state (EOS) of the core. For the crust, we use the EOS introduced by Miyatsu et al. [1]. Furthermore, two models are used to describe the pressure anisotropy in neutron star matter: one proposed by Doneva and Yazadjiev (DY) [2] and the other by Herrera and Barreto (HB) [3]. The anisotropy parameters of the DY and HB models are adjusted so that the predicted maximum masses are compatible with the masses of PSR J1614-2230 [4] and PSR J0348+0432 [5]. We find that cracking can potentially occur in the region close to the neutron star surface, and that the instability due to cracking is quite sensitive to the NS mass and the anisotropy parameter used.
NASA Astrophysics Data System (ADS)
Quimque, Mark Tristan J.; Jimenez, Marvin C.; Acas, Meg Ina S.; Indoc, Danrelle Keth L.; Gomez, Enjelyn C.; Tabuñag, Jenny Syl D.
2017-01-01
Manganese is a common contaminant in drinking water, along with other metal pollutants. This paper investigates the use of chitin, extracted from crab shells obtained as restaurant waste, as an adsorbent for removing manganese ions from aqueous media. In particular, it aims to optimize the adsorption parameters and examine the kinetics of the process. The adsorption experiments employed the batch equilibration method. In the optimization, the following parameters were considered: pH and concentration of the Mn(II) sorbate solution, particle size and dosage of the chitin adsorbent, and adsorbent-adsorbate contact time. At the optimal condition, the order of the adsorption reaction was estimated using the kinetic model that best describes the process. It was found that the adsorption of aqueous Mn(II) ions onto chitin obeys the pseudo-second-order model, which assumes that adsorption occurs via chemisorption.
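For reference, the pseudo-second-order model invoked above is the standard Ho-McKay rate law, whose integrated linear form is commonly used to fit batch data (q_t and q_e are the amounts adsorbed at time t and at equilibrium, and k_2 is the rate constant):

    \[
    \frac{dq_t}{dt} = k_2\,(q_e - q_t)^2
    \quad\Longrightarrow\quad
    \frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e}.
    \]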
Cosmology and accelerator tests of strongly interacting dark matter
Berlin, Asher; Blinov, Nikita; Gori, Stefania; ...
2018-03-23
A natural possibility for dark matter is that it is composed of the stable pions of a QCD-like hidden sector. Existing literature largely assumes that pion self-interactions alone control the early universe cosmology. We point out that processes involving vector mesons typically dominate the physics of dark matter freeze-out and significantly widen the viable mass range for these models. The vector mesons also give rise to striking signals at accelerators. For example, in most of the cosmologically favored parameter space, the vector mesons are naturally long-lived and produce standard model particles in their decays. Electron and proton beam fixed-target experiments such as HPS, SeaQuest, and LDMX can exploit these signals to explore much of the viable parameter space. We also comment on dark matter decay inherent in a large class of previously considered models and explain how to ensure dark matter stability.
Local Infrasound Variability Related to In Situ Atmospheric Observation
NASA Astrophysics Data System (ADS)
Kim, Keehoon; Rodgers, Arthur; Seastrand, Douglas
2018-04-01
Local infrasound is widely used to constrain source parameters of near-surface events (e.g., chemical explosions and volcanic eruptions). While atmospheric conditions are critical to infrasound propagation and source parameter inversion, local atmospheric variability is often ignored by assuming homogeneous atmospheres, and its impact on the source inversion uncertainty has never been accounted for, owing to the lack of a quantitative understanding of infrasound variability. We investigate atmospheric impacts on local infrasound propagation through repeated explosion experiments with a dense acoustic network and in situ atmospheric measurements. We perform full 3-D waveform simulations with local atmospheric data and a numerical weather forecast model to quantify atmosphere-dependent infrasound variability and to assess the advantages and limitations of local weather data and numerical weather models for sound propagation simulation. Numerical simulations with stochastic atmosphere models also showed a nonnegligible influence of atmospheric heterogeneity on infrasound amplitude, suggesting an important role for local turbulence.
Nakasaki, Kiyohiko; Ohtaki, Akihito
2002-01-01
Using dog food as a model of the organic waste that comprises composting raw material, the degradation pattern of organic materials was examined by continuously measuring the quantity of CO2 evolved during the composting process in both batch and fed-batch operations. A simple numerical model was constructed on the basis of three assumptions to describe organic matter decomposition in the batch operation. First, a certain quantity of carbon in the dog food was assumed to be recalcitrant to degradation in the composting reactor within the retention time allowed. Second, the decomposition rate of carbon was assumed to be proportional to the quantity of easily degradable carbon, that is, the total carbon remaining in the dog food minus the recalcitrant carbon. Third, a certain lag time was assumed to occur before the start of active decomposition of organic matter in the dog food, corresponding to the time required for microorganisms to proliferate and become active. It was then ascertained that the decomposition pattern of the organic matter in the dog food during the fed-batch operation could be predicted by the numerical model with the parameters obtained from the batch operation. The numerical model was then modified so that the change in dry weight of the composting materials could be obtained. The modified model was found suitable for describing the organic matter decomposition pattern in an actual fed-batch composting operation of garbage obtained from a restaurant, with approximately 10 kg d(-1) loading for 60 d.
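A minimal numerical sketch of the three assumptions above (a recalcitrant carbon pool, first-order decay of the easily degradable pool, and a lag time before active decomposition); the parameter values in the example run are illustrative, not taken from the paper.

    import numpy as np

    def co2_evolution(c_total, f_recalcitrant, k, t_lag, t_end, dt=0.1):
        """CO2-C evolved from a batch composting run, mirroring the
        model description: recalcitrant pool, first-order decay of the
        degradable pool, and a lag before active decomposition."""
        c_rec = f_recalcitrant * c_total    # not degraded within retention time
        c_deg = c_total - c_rec             # easily degradable carbon
        times = np.arange(0.0, t_end, dt)
        evolved = np.where(times < t_lag, 0.0,
                           c_deg * (1.0 - np.exp(-k * (times - t_lag))))
        return times, evolved

    # Illustrative run: 100 g C, 30% recalcitrant, k = 0.05 h^-1, 12 h lag.
    t, q = co2_evolution(100.0, 0.3, 0.05, 12.0, 200.0)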
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Herbst, Michael; Weihermüller, Lutz; Verhoef, Anne; Vereecken, Harry
2017-07-01
Agroecosystem models, regional and global climate models, and numerical weather prediction models require adequate parameterization of soil hydraulic properties. These properties are fundamental for describing and predicting water and energy exchange processes at the transition zone between the solid earth and the atmosphere, and they regulate evapotranspiration, infiltration and runoff generation. Hydraulic parameters describing the soil water retention curve (WRC) and hydraulic conductivity curve (HCC) are typically derived from soil texture via pedotransfer functions (PTFs). Resampling of these parameters for specific model grids is typically performed by aggregation approaches such as spatial averaging or the use of dominant textural properties or soil classes. These aggregation approaches introduce uncertainty, bias and parameter inconsistencies across spatial scales, owing to the nonlinear relationships between hydraulic parameters and soil texture. We therefore present a method to scale hydraulic parameters to individual model grids and provide a global data set that overcomes these problems. The approach is based on Miller-Miller scaling in the relaxed form of Warrick, which fits the WRC parameters through all sub-grid WRCs to provide an effective parameterization for the grid cell at model resolution; at the same time it preserves the information on sub-grid variability of the water retention curve by deriving local scaling parameters. Based on the Mualem-van Genuchten approach, we also derive the unsaturated hydraulic conductivity from the water retention functions, assuming that the local parameters are also valid for this function. In addition, via the Warrick scaling parameter λ, information on global sub-grid scaling variance is provided, which enables modellers to improve dynamical downscaling of (regional) climate models or to perturb hydraulic parameters for model ensemble generation. The present analysis is based on the ROSETTA PTF of Schaap et al. (2001) applied to the SoilGrids1km data set of Hengl et al. (2014). The example data set is provided at a global resolution of 0.25° at https://doi.org/10.1594/PANGAEA.870605.
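The Mualem-van Genuchten closed forms that underlie such WRC/HCC parameterizations can be sketched as follows; this is the textbook formulation with illustrative loam-like parameter values, not code from the data set's processing chain.

    import numpy as np

    def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
        """Water retention curve theta(h); h is suction head (positive, cm)."""
        m = 1.0 - 1.0 / n
        se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
        return theta_r + (theta_s - theta_r) * se

    def mualem_k(h, k_s, alpha, n, l=0.5):
        """Unsaturated hydraulic conductivity K(h) from the Mualem model."""
        m = 1.0 - 1.0 / n
        se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)
        return k_s * se ** l * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

    # Example: loam-like soil (illustrative values; K_s in cm/day).
    h = np.logspace(0, 4, 50)                           # 1 to 10^4 cm suction
    theta = van_genuchten_theta(h, 0.078, 0.43, 0.036, 1.56)
    k = mualem_k(h, 25.0, 0.036, 1.56)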
Heat transfer in porous medium embedded with vertical plate: Non-equilibrium approach - Part B
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quadir, G. A., E-mail: Irfan-magami@Rediffmail.com, E-mail: gaquadir@gmail.com; Badruddin, Irfan Anjum
2016-06-08
This work is a continuation of Part A. Owing to the large number of results, the paper is divided into two sections, with Section A (Part A) discussing the effect of various parameters, such as the heat transfer coefficient parameter and thermal conductivity ratio, on streamlines and isothermal lines. Section B highlights the heat transfer characteristics in terms of the Nusselt number. The Darcy model is employed to simulate the flow inside the medium. It is assumed that heat transfer takes place by convection and radiation. The governing partial differential equations are converted into non-dimensional form and solved numerically using the finite element method.
Parameter identifiability of linear dynamical systems
NASA Technical Reports Server (NTRS)
Glover, K.; Willems, J. C.
1974-01-01
It is assumed that the system matrices of a stationary linear dynamical system are parametrized by a set of unknown parameters. The question considered here is: when can such a set of unknown parameters be identified from the observed data? Conditions for local identifiability of a parametrization are derived in three situations: (1) when input/output observations are made, (2) when there is an unknown feedback matrix in the system, and (3) when the system is assumed to be driven by white noise and only output observations are made. A sufficient condition for global identifiability is also derived.
NASA Astrophysics Data System (ADS)
Bianchi Janetti, Emanuela; Riva, Monica; Guadagnini, Alberto
2017-04-01
We perform a variance-based global sensitivity analysis to assess the impact of the uncertainty associated with (a) the spatial distribution of hydraulic parameters, e.g., hydraulic conductivity, and (b) the conceptual model adopted to describe the system on the characterization of a regional-scale aquifer. We do so in the context of inverse modeling of the groundwater flow system. The study aquifer lies within the provinces of Bergamo and Cremona (Italy) and covers a planar extent of approximately 785 km2. Analysis of available sedimentological information allows identifying a set of main geo-materials (facies/phases) which constitute the geological makeup of the subsurface system. We parameterize the conductivity field following two diverse conceptual schemes. The first one is based on the representation of the aquifer as a Composite Medium. In this conceptualization the system is composed by distinct (five, in our case) lithological units. Hydraulic properties (such as conductivity) in each unit are assumed to be uniform. The second approach assumes that the system can be modeled as a collection of media coexisting in space to form an Overlapping Continuum. A key point in this model is that each point in the domain represents a finite volume within which each of the (five) identified lithofacies can be found with a certain volumetric percentage. Groundwater flow is simulated with the numerical code MODFLOW-2005 for each of the adopted conceptual models. We then quantify the relative contribution of the considered uncertain parameters, including boundary conditions, to the total variability of the piezometric level recorded in a set of 40 monitoring wells by relying on the variance-based Sobol indices. The latter are derived numerically for the investigated settings through the use of a model-order reduction technique based on the polynomial chaos expansion approach.
Cross Sections, relic abundance, and detection rates for neutralino dark matter
NASA Technical Reports Server (NTRS)
Griest, Kim
1988-01-01
Neutralino annihilation and elastic scattering cross sections are derived which differ in important ways from previous work. These are applied to relic abundance calculations and to direct detection of neutralino dark matter from the galactic halo. Assuming the neutralino to be the lightest supersymmetric particle and less massive than the Z0, we find relic densities of neutralinos greater than 4 percent of the critical density for almost all values of the supersymmetric parameters. We constrain the parameter space using results from PETRA (chargino mass less than 23 GeV) and ASP, and then, assuming a critical density of neutralinos, display event rates in a cryogenic detector for a variety of models. A new term implies spin-independent elastic scattering even for Majorana particles, and inclusion of propagator momenta increases detection rates by 10 to 300 percent for pure photinos. Z0-squark interference leads to very low detection rates for some values of the parameters. The new term in the elastic cross section dominates for heavy, mostly spinless materials and mitigates the negative interference cancellations in light materials, except in the pure photino and pure higgsino cases, where it does not contribute. In general, the rates can differ substantially from the pure photino and pure higgsino special cases usually considered.
Relative effects of survival and reproduction on the population dynamics of emperor geese
Schmutz, Joel A.; Rockwell, Robert F.; Petersen, Margaret R.
1997-01-01
Populations of emperor geese (Chen canagica) in Alaska declined sometime between the mid-1960s and the mid-1980s and have increased little since. To promote recovery of this species to former levels, managers need to know how much their perturbations of survival and/or reproduction would affect population growth rate (λ). We constructed an individual-based population model to evaluate the relative effect of altering mean values of various survival and reproductive parameters on λ and fall age structure (AS, defined as the proportion of juveniles), assuming additive rather than compensatory relations among parameters. Altering survival of adults had markedly greater relative effects on λ than did equally proportionate changes in either juvenile survival or reproductive parameters. We found the opposite pattern for relative effects on AS. Due to concerns about bias in the initial parameter estimates used in our model, we used 5 additional sets of parameter estimates with this model structure. We found that estimates of survival based on aerial survey data gathered each fall resulted in models that corresponded more closely to independent estimates of λ than did models that used mark-recapture estimates of survival. This disparity suggests that mark-recapture estimates of survival are biased low. To further explore how parameter estimates affected estimates of λ, we used values of survival and reproduction found in other goose species, and we examined the effect of a hypothesized correlation between an individual's clutch size and the subsequent survival of her young. The rank order of parameters in their relative effects on λ was consistent for all 6 parameter sets we examined. The observed variation in relative effects on λ among the 6 parameter sets is indicative of how relative effects on λ may vary among goose populations. With this knowledge of the relative effects of survival and reproductive parameters on λ, managers can make more informed decisions about which parameters to influence through management or to target for future study.
The Relationship Between School Holidays and Transmission of Influenza in England and Wales
Jackson, Charlotte; Vynnycky, Emilia; Mangtani, Punam
2016-01-01
School closure is often considered as an influenza control measure, but its effects on transmission are poorly understood. We used 2 approaches to estimate how school holidays affect the contact parameter (the per capita rate of contact sufficient for infection transmission) for influenza, using primary care data from England and Wales (1967-2000). Firstly, we fitted an age-structured susceptible-infectious-recovered model to each year's data to estimate the proportional change in the contact parameter during school holidays as compared with termtime. Secondly, we calculated the percentage difference in the contact parameter between holidays and termtime from weekly values of the contact parameter, estimated directly from simple mass-action models. Estimates were combined using random-effects meta-analysis, where appropriate. From fitting to the data, the difference in the contact parameter among children aged 5-14 years during holidays as compared with termtime ranged from a 36% reduction to a 17% increase; estimates were too heterogeneous for meta-analysis. Based on the simple mass-action model, the contact parameter was 17% (95% confidence interval: 10, 25) lower during holidays than during termtime. Results were robust to the assumed proportion of infections that were reported and the proportion of individuals who were susceptible when the influenza season started. We conclude that school closure may reduce transmission during influenza outbreaks. PMID:27744384
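A minimal sketch of the second approach, assuming a simple mass-action model in which weekly new infections satisfy C_{t+1} = beta_t * S_t * I_t, so that beta_t can be read off week by week and compared between holiday and termtime weeks; all names are illustrative.

    import numpy as np

    def weekly_contact_parameter(new_cases, susceptibles, infectious):
        """Direct estimate of the mass-action contact parameter beta_t
        from weekly counts, assuming C_{t+1} = beta_t * S_t * I_t."""
        c = np.asarray(new_cases, dtype=float)
        s = np.asarray(susceptibles, dtype=float)
        i = np.asarray(infectious, dtype=float)
        return c[1:] / (s[:-1] * i[:-1])

    def holiday_effect(beta, holiday_mask):
        """Percent difference in mean beta, holiday vs. termtime weeks.
        holiday_mask is a boolean array aligned with beta."""
        b_hol = beta[holiday_mask].mean()
        b_term = beta[~holiday_mask].mean()
        return 100.0 * (b_hol - b_term) / b_term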
Lindley frailty model for a class of compound Poisson processes
NASA Astrophysics Data System (ADS)
Kadilar, Gamze Özel; Ata, Nihal
2013-10-01
The Lindley distribution has gained importance in survival analysis owing to its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model when misspecified or omitted covariates are described by an unobservable random variable. Although the frailty distribution is generally assumed to be continuous, in some circumstances it is appropriate to consider discrete frailty distributions. In this paper, frailty models with a discrete compound Poisson process for Lindley distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. Then, the fit of the models to an earthquake data set from Turkey is examined.
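For reference, the Lindley distribution mentioned above has the standard density and survival function (for θ > 0, t > 0):

    \[
    f(t) = \frac{\theta^{2}}{1+\theta}\,(1+t)\,e^{-\theta t},
    \qquad
    S(t) = \frac{1+\theta+\theta t}{1+\theta}\,e^{-\theta t}.
    \]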
The Impact of Cell Density and Mutations in a Model of Multidrug Resistance in Solid Tumors
Greene, James; Lavi, Orit; Gottesman, Michael M.; Levy, Doron
2016-01-01
In this paper we develop a mathematical framework for describing multidrug resistance in cancer. To reflect the complexity of the underlying interplay between cancer cells and the therapeutic agent, we assume that the resistance level is a continuous parameter. Our model is written as a system of integro-differential equations that are parametrized by the resistance level. This model incorporates the cell-density and mutation dependence. Analysis and simulations of the model demonstrate how the dynamics evolves to a selection of one or more traits corresponding to different levels of resistance. The emerging limit distribution with nonzero variance is the desirable modeling outcome as it represents tumor heterogeneity. PMID:24553772
Liu, Chun; Bridges, Melissa E; Kaundun, Shiv S; Glasgow, Les; Owen, Micheal Dk; Neve, Paul
2017-02-01
Simulation models are useful tools for predicting and comparing the risk of herbicide resistance in weed populations under different management strategies. Most existing models assume a monogenic mechanism governing herbicide resistance evolution. However, growing evidence suggests that herbicide resistance is often inherited in a polygenic or quantitative fashion. Therefore, we constructed a generalised modelling framework to simulate the evolution of quantitative herbicide resistance in summer annual weeds. Real-field management parameters based on Amaranthus tuberculatus (Moq.) Sauer (syn. rudis) control with glyphosate and mesotrione in Midwestern US maize-soybean agroecosystems demonstrated that the model can represent evolved herbicide resistance on realistic timescales. Sensitivity analyses showed that genetic and management parameters had a strong impact on the rate of quantitative herbicide resistance evolution, whilst biological parameters such as emergence and seed bank mortality were less important. The simulation model provides a robust and widely applicable framework for predicting the evolution of quantitative herbicide resistance in summer annual weed populations. The sensitivity analyses identified weed characteristics that would favour herbicide resistance evolution, including high annual fecundity, large resistance phenotypic variance and pre-existing herbicide resistance. Implications for herbicide resistance management and potential uses of the model are discussed. © 2016 Society of Chemical Industry.
Estimation of primordial spectrum with post-WMAP 3-year data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shafieloo, Arman; Souradeep, Tarun
2008-07-15
In this paper we implement an improved (error-sensitive) Richardson-Lucy deconvolution algorithm on the angular power spectrum measured from the Wilkinson Microwave Anisotropy Probe (WMAP) 3-year data to determine the primordial power spectrum, assuming different points in the cosmological parameter space for a flat ΛCDM cosmological model. We also present preliminary results of cosmological parameter estimation assuming a free-form primordial spectrum, for a reasonably large volume of the parameter space. The recovered spectrum for a considerably large number of points in the cosmological parameter space has a likelihood far better than a 'best fit' power-law spectrum, up to Δχ²_eff ≈ -30. We use the discrete wavelet transform (DWT) to smooth the raw spectrum recovered from the binned data. The results obtained here reconfirm and sharpen the conclusions drawn from our previous analysis of the WMAP first-year data. A sharp cutoff around the horizon scale and a bump after the horizon scale appear to be common features of all the reconstructed primordial spectra. We show that although the WMAP 3-year data prefer a lower value of the matter density for a power-law primordial spectrum, for a free-form spectrum we can obtain a very good likelihood for higher values of the matter density. We have also shown that even a flat cold dark matter model, allowing a free-form primordial spectrum, can give a very high-likelihood fit to the data. Theoretical interpretation of the results is open to the cosmology community. However, this work provides strong evidence that the data retain discriminatory power in the cosmological parameter space even when there is full freedom in choosing the primordial spectrum.
Ensemble Forecasting of Coronal Mass Ejections Using the WSA-ENLIL with CONED Model
NASA Technical Reports Server (NTRS)
Emmons, D.; Acebal, A.; Pulkkinen, A.; Taktakishvili, A.; MacNeice, P.; Odstricil, D.
2013-01-01
The combination of the Wang-Sheeley-Arge (WSA) coronal model, ENLIL heliospheric model version 2.7, and CONED Model version 1.3 (WSA-ENLIL with CONED Model) was employed to form ensemble forecasts for 15 halo coronal mass ejections (halo CMEs). The input parameter distributions were formed from 100 sets of CME cone parameters derived from the CONED Model. The CONED Model used image processing along with a bootstrap approach to automatically calculate cone parameter distributions from SOHO/LASCO imagery, based on techniques described by Pulkkinen et al. (2010). The input parameter distributions were used as input to WSA-ENLIL to calculate the temporal evolution of the CMEs, which was analyzed to determine the propagation times to the L1 Lagrangian point and the maximum Kp indices due to the impact of the CMEs on the Earth's magnetosphere. The Newell et al. (2007) Kp index formula was employed to calculate the maximum Kp indices from the predicted solar wind parameters near Earth, assuming two magnetic field orientations: a completely southward magnetic field, and a uniformly distributed clock angle in the Newell et al. (2007) Kp index formula. The forecasts for 5 of the 15 events were accurate in the sense that the actual propagation time fell within the ensemble average plus or minus one standard deviation. Under the completely southward magnetic field assumption, 10 of the 15 events contained the actual maximum Kp index within the range of the ensemble forecast, compared to 9 of the 15 events when using a uniformly distributed clock angle.
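A sketch of the solar wind coupling step, assuming the widely used Newell et al. (2007) coupling function dΦ_MP/dt = v^{4/3} B_T^{2/3} sin^{8/3}(θ_c/2); the published regression coefficients that map the coupling function to Kp are not reproduced here, so the final mapping is left as a placeholder.

    import numpy as np

    def newell_coupling(v, by, bz):
        """Newell et al. (2007) solar wind-magnetosphere coupling function.
        v in km/s, by/bz (GSM) in nT; returns d(Phi_MP)/dt (arbitrary units)."""
        b_t = np.sqrt(by**2 + bz**2)        # transverse IMF magnitude
        theta_c = np.arctan2(by, bz)        # IMF clock angle
        return v**(4/3) * b_t**(2/3) * np.abs(np.sin(theta_c / 2))**(8/3)

    def kp_from_coupling(dphi_dt, coeffs):
        """Placeholder regression Kp(dPhi/dt); the published coefficients
        from Newell et al. (2007) would be supplied here."""
        a, b = coeffs
        return np.clip(a + b * dphi_dt, 0.0, 9.0)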
Validating a two-high-threshold measurement model for confidence rating data in recognition.
Bröder, Arndt; Kellen, David; Schütz, Julia; Rohrmeier, Constanze
2013-01-01
Signal Detection models as well as the Two-High-Threshold model (2HTM) have been used successfully as measurement models in recognition tasks to disentangle memory performance and response biases. A popular method in recognition memory is to elicit confidence judgements about the presumed old/new status of an item, allowing for the easy construction of ROCs. Since the 2HTM assumes fewer latent memory states than response options are available in confidence ratings, the 2HTM has to be extended by a mapping function which models individual rating scale usage. Unpublished data from 2 experiments in Bröder and Schütz (2009) validate the core memory parameters of the model, and 3 new experiments show that the response mapping parameters are selectively affected by manipulations intended to affect rating scale use, and this is independent of overall old/new bias. Comparisons with SDT show that both models behave similarly, a case that highlights the notion that both modelling approaches can be valuable (and complementary) elements in a researcher's toolbox.
Non-stationary noise estimation using dictionary learning and Gaussian mixture models
NASA Astrophysics Data System (ADS)
Hughes, James M.; Rockmore, Daniel N.; Wang, Yang
2014-02-01
Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters; consequently, assuming stationarity is often a matter of convenience rather than an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise distributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accurately modeling certain kinds of non-stationary noise processes, allowing space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.
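As an illustration of the general idea (not the authors' dictionary-learning pipeline), the sketch below clusters patch-wise noise statistics with a Gaussian mixture to segment an image by local noise level; the feature choice and all names are assumptions.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def patch_noise_features(img, patch=8):
        """Local noise proxies: per-patch std of a high-pass residual."""
        pad = np.pad(img, 1, mode="edge")
        local_mean = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                      pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
        resid = img - local_mean            # simple high-pass residual
        h, w = img.shape
        feats, coords = [], []
        for i in range(0, h - patch + 1, patch):
            for j in range(0, w - patch + 1, patch):
                feats.append(resid[i:i+patch, j:j+patch].std())
                coords.append((i, j))
        return np.array(feats).reshape(-1, 1), coords

    def segment_by_noise(img, n_components=2):
        """Cluster patches into noise regimes with a Gaussian mixture."""
        feats, coords = patch_noise_features(img)
        labels = GaussianMixture(n_components, random_state=0).fit_predict(feats)
        return labels, coords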
NASA Technical Reports Server (NTRS)
Deshpande, Manohar D.; Dudley, Kenneth
2003-01-01
A simple method is presented to estimate the complex dielectric constants of individual layers of a multilayer composite material. Using the MATLAB Optimization Tools, simple MATLAB scripts are written to search for the electric properties of individual layers so as to match the measured and calculated S-parameters. Single-layer composite materials of different thicknesses, formed from materials such as Bakelite, Nomex Felt, Fiber Glass, Woven Composite B and G, Nano Material #0, Cork, and Garlock, are tested using the present approach. Assuming the thicknesses of the samples to be unknown, the approach is shown to work well in estimating both the dielectric constants and the thicknesses. A number of two-layer composite materials formed by various combinations of the above individual materials are also tested. However, the present approach could not provide estimates close to the true values when the thicknesses of the individual layers were assumed to be unknown. This is attributed to the difficulty of modelling air gaps between the layers during the S-parameter measurements. A few examples of three-layer composites are also presented.
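A sketch of this kind of search, assuming a normal-incidence dielectric-slab forward model for the S-parameters and scipy's least_squares in place of the MATLAB optimizer; the forward model and all names are illustrative, not the paper's scripts.

    import numpy as np
    from scipy.optimize import least_squares

    C0 = 299792458.0  # speed of light, m/s

    def slab_s_params(eps_r, d, freqs):
        """S11/S21 of a single dielectric slab at normal incidence
        (standard transmission-line result; mu_r assumed to be 1)."""
        k0 = 2 * np.pi * freqs / C0
        n = np.sqrt(eps_r)                  # complex refractive index
        gamma = (1 - n) / (1 + n)           # interface reflection coefficient
        p = np.exp(-1j * k0 * n * d)        # propagation through the slab
        denom = 1 - gamma**2 * p**2
        return gamma * (1 - p**2) / denom, p * (1 - gamma**2) / denom

    def fit_eps(freqs, s11_meas, s21_meas, d):
        """Search eps' and eps'' to match measured S-parameters."""
        def resid(x):
            s11, s21 = slab_s_params(x[0] - 1j * x[1], d, freqs)
            return np.concatenate([(s11 - s11_meas).view(float),
                                   (s21 - s21_meas).view(float)])
        out = least_squares(resid, x0=[3.0, 0.1],
                            bounds=([1.0, 0.0], [20.0, 5.0]))
        return out.x[0] - 1j * out.x[1]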
NASA Astrophysics Data System (ADS)
Jha, Mayank Shekhar; Dauphin-Tanguy, G.; Ould-Bouamama, B.
2016-06-01
The paper's main objective is to address the problem of health monitoring of system parameters in the Bond Graph (BG) modeling framework, exploiting its structural and causal properties. The system, in a feedback control loop, is considered globally uncertain, with parametric uncertainty modeled in interval form. A system parameter undergoes degradation (the prognostic candidate) and its degradation model is assumed to be known a priori. The commencement of degradation is detected in a passive manner using interval-valued robust adaptive thresholds over the nominal part of the uncertain BG-derived interval-valued analytical redundancy relations (I-ARRs), which form an efficient diagnostic module. The prognostics problem is cast as a joint state-parameter estimation problem, a hybrid prognostic approach in which the fault model is constructed from the statistical degradation model of the system parameter (the prognostic candidate), and the observation equation is constructed from the nominal part of the I-ARR. Using particle filter (PF) algorithms, the state of health (the state of the prognostic candidate) and the associated hidden time-varying degradation progression parameters are estimated in probabilistic terms. A simplified variance adaptation scheme is proposed. Associated uncertainties, arising from noisy measurements, the parametric degradation process, environmental conditions, etc., are effectively managed by the PF. This allows effective predictions of the remaining useful life of the prognostic candidate with suitable confidence bounds. The effectiveness of the methodology is demonstrated through simulations and experiments on a mechatronic system.
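A minimal sketch of the joint state-parameter estimation step, assuming a bootstrap particle filter over an augmented state (health state plus an unknown degradation rate) with a Gaussian likelihood built from a scalar residual; the models and names are illustrative, not the paper's I-ARR construction.

    import numpy as np

    def bootstrap_pf(y, n_particles, f, h, q_std, r_std, rng=None):
        """Bootstrap particle filter over an augmented state [health, rate].
        f: state transition (degradation model), h: observation model."""
        rng = rng or np.random.default_rng(0)
        x = np.column_stack([np.ones(n_particles),                 # health ~ nominal
                             rng.normal(1e-3, 5e-4, n_particles)]) # unknown rate
        estimates = []
        for yk in y:
            x = f(x) + rng.normal(0, q_std, x.shape)               # propagate + jitter
            w = np.exp(-0.5 * ((yk - h(x)) / r_std) ** 2)          # Gaussian likelihood
            w /= w.sum()
            estimates.append((w[:, None] * x).sum(axis=0))         # posterior mean
            x = x[rng.choice(n_particles, n_particles, p=w)]       # resample
        return np.array(estimates)

    # Illustrative models: exponential-like degradation of the health state.
    f = lambda x: np.column_stack([x[:, 0] * (1 - x[:, 1]), x[:, 1]])
    h = lambda x: x[:, 0]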
Krissansen-Totton, Joshua; Catling, David C
2017-05-22
The relative influences of tectonics, continental weathering and seafloor weathering in controlling the geological carbon cycle are unknown. Here we develop a new carbon cycle model that explicitly captures the kinetics of seafloor weathering to investigate carbon fluxes and the evolution of atmospheric CO2 and ocean pH since 100 Myr ago. We compare model outputs to proxy data, and rigorously constrain model parameters using Bayesian inverse methods. Assuming our forward model is an accurate representation of the carbon cycle, to fit proxies the temperature dependence of continental weathering must be weaker than commonly assumed. We find that 15-31 °C (1σ) surface warming is required to double the continental weathering flux, versus 3-10 °C in previous work. In addition, continental weatherability has increased 1.7-3.3 times since 100 Myr ago, demanding explanation by uplift and sea-level changes. The average Earth system climate sensitivity is K (1σ) per CO2 doubling, which is notably higher than fast-feedback estimates. These conclusions are robust to assumptions about outgassing, modern fluxes and seafloor weathering kinetics.
NASA Technical Reports Server (NTRS)
Bakuckas, J. G., Jr.; Johnson, W. S.
1994-01-01
In this research, a methodology to predict damage initiation, damage growth, fatigue life, and residual strength in titanium matrix composites (TMC) is outlined. Emphasis was placed on micromechanics-based engineering approaches. Damage initiation was predicted using a local effective strain approach. A finite element analysis verified the prevailing assumptions made in the formulation of this model. Damage growth, namely, fiber-bridged matrix crack growth, was evaluated using a fiber bridging (FB) model which accounts for thermal residual stresses. This model combines continuum fracture mechanics and micromechanics analyses yielding stress-intensity factor solutions for fiber-bridged matrix cracks. It is assumed in the FB model that fibers in the wake of the matrix crack are idealized as a closure pressure, and an unknown constant frictional shear stress is assumed to act along the debond length of the bridging fibers. This frictional shear stress was used as a curve fitting parameter to the available experimental data. Fatigue life and post-fatigue residual strength were predicted based on the axial stress in the first intact 0 degree fiber calculated using the FB model and a three-dimensional finite element analysis.
Adaptive model-based assistive control for pneumatic direct driven soft rehabilitation robots.
Wilkening, Andre; Ivlev, Oleg
2013-06-01
Assistive behavior and inherent compliance are assumed to be essential properties for effective robot-assisted therapy in neurological as well as orthopedic rehabilitation. This paper presents two adaptive model-based assistive controllers for pneumatically direct-driven soft rehabilitation robots. They are based on separate models of the soft robot and the patient's extremity, in order to take into account the individual patient's behavior, effort and ability during control, which is assumed to be essential for relearning lost motor functions in neurological rehabilitation and for facilitating muscle reconstruction in orthopedic rehabilitation. The high inherent compliance of soft actuators allows for natural human-robot interaction and provides the basis for effective and dependable assistive control. An inverse model of the soft robot with estimated parameters is used to achieve robot transparency during treatment, and inverse adaptive models of the individual patient's extremity allow the controllers to learn the individual patient's behavior and effort online and to assist the patient only as much as needed. The effectiveness of the controllers is evaluated with unimpaired subjects using a first prototype of a soft robot for elbow training. Advantages and disadvantages of both controllers are analyzed and discussed.
Royston, Thomas J.; Dai, Zoujun; Chaunsali, Rajesh; Liu, Yifei; Peng, Ying; Magin, Richard L.
2011-01-01
Previous studies by the first author and others have focused on low audible frequency (<1 kHz) shear and surface wave motion in and on a viscoelastic material comprised of or representative of soft biological tissue. A specific case considered has been surface (Rayleigh) wave motion caused by a circular disk located on the surface and oscillating normal to it. Different approaches to identifying the type and coefficients of a viscoelastic model of the material based on these measurements have been proposed. One approach has been to optimize coefficients in an assumed viscoelastic model type to match measurements of the frequency-dependent Rayleigh wave speed. Another approach has been to optimize coefficients in an assumed viscoelastic model type to match the complex-valued frequency response function (FRF) between the excitation location and points at known radial distances from it. In the present article, the relative merits of these approaches are explored theoretically, computationally, and experimentally. It is concluded that matching the complex-valued FRF may provide a better estimate of the viscoelastic model type and parameter values; though, as the studies herein show, there are inherent limitations to identifying viscoelastic properties based on surface wave measurements. PMID:22225067
Quantitative model validation of manipulative robot systems
NASA Astrophysics Data System (ADS)
Kartowisastro, Iman Herwidiana
This thesis is concerned with applying the distortion quantitative validation technique to a robot manipulative system with revolute joints. Using the distortion technique to validate a model quantitatively, the model parameter uncertainties are taken into account in assessing the faithfulness of the model, an approach that is more objective than the more common visual comparison method. The industrial robot is represented by the TQ MA2000 robot arm. Details of the mathematical derivation of the distortion technique are given, explaining the required distortion of the constant parameters within the model and the assessment of model adequacy. Due to the complexity of a robot model, only the first three degrees of freedom are considered, and all links are assumed rigid. The modelling uses the Newton-Euler approach to obtain the dynamics model, and the Denavit-Hartenberg convention is used throughout the work. A conventional feedback control system is used in developing the model. The system's sensitivity to parameter changes is investigated, as some parameters are redundant. This analysis makes it possible to select the most important parameters to distort, leading to a new term: the fundamental parameters. The transfer function approach was chosen to validate the industrial robot quantitatively against measured data because of its practicality. Initially, assessment of the model fidelity criterion indicated that the model was not capable of explaining the transient record in terms of the model parameter uncertainties. Further investigation led to significant improvements of the model and a better understanding of its properties. After several improvements, the fidelity criterion was almost satisfied. Although the fidelity criterion is slightly less than unity, it has been shown that the distortion technique can be applied to a robot manipulative system. Using the validated model, the importance of friction terms in the model was highlighted with the aid of the partition control technique. It was also shown that the conventional feedback control scheme was insufficient for a robot manipulative system, due to the high nonlinearity inherent in the robot manipulator.
NASA Astrophysics Data System (ADS)
Albert, Carlo; Ulzega, Simone; Stoop, Ruedi
2016-04-01
Measured time series of both precipitation and runoff are known to exhibit highly non-trivial statistical properties. For making reliable probabilistic predictions in hydrology, it is therefore desirable to have stochastic models with output distributions that share these properties. When the parameters of such models have to be inferred from data, we also need to quantify the associated parametric uncertainty. For non-trivial stochastic models, however, this latter step is typically very demanding, both conceptually and numerically, and is almost never done in hydrology. Here, we demonstrate that methods developed in statistical physics make a large class of stochastic differential equation (SDE) models amenable to full-fledged Bayesian parameter inference. For concreteness, we demonstrate these methods by means of a simple yet non-trivial toy SDE model. We consider a natural catchment that can be described by a linear reservoir at the scale of observation. All neglected processes are assumed to happen at much shorter time scales and are therefore modeled with a Gaussian white noise term, the standard deviation of which is assumed to scale linearly with the system state (the water volume in the catchment). Even for constant input, the outputs of this simple non-linear SDE model show a wealth of desirable statistical properties, such as fat-tailed distributions and long-range correlations. Standard algorithms for Bayesian inference fail for models of this kind because their likelihood functions are extremely high-dimensional intractable integrals over all possible model realizations. The use of Kalman filters is illegitimate due to the non-linearity of the model. Particle filters could be used but become increasingly inefficient with a growing number of data points. Hamiltonian Monte Carlo algorithms allow us to translate this inference problem into the problem of simulating the dynamics of a statistical mechanics system, giving us access to the most sophisticated methods developed in the statistical physics community over the last few decades. We demonstrate that such methods, along with automatic differentiation algorithms, allow us to perform full-fledged Bayesian inference for a large class of SDE models in a highly efficient and largely automated manner. Furthermore, our algorithm is highly parallelizable. For our toy model, discretized with a few hundred points, a full Bayesian inference can be performed in a matter of seconds on a standard PC.
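A minimal sketch of the toy model itself: a linear reservoir driven by an input, with a noise amplitude that scales linearly with the stored volume, discretized with an Euler-Maruyama scheme (parameter values are illustrative).

    import numpy as np

    def simulate_reservoir(v0, inflow, k, sigma, t_end, dt, rng=None):
        """Euler-Maruyama simulation of dV = (I(t) - V/k) dt + sigma * V dW.
        The noise standard deviation scales linearly with the state V, as
        assumed for the neglected short-time-scale processes."""
        rng = rng or np.random.default_rng(0)
        n = int(t_end / dt)
        v = np.empty(n + 1)
        v[0] = v0
        for i in range(n):
            dw = rng.normal(0.0, np.sqrt(dt))
            v[i + 1] = v[i] + (inflow(i * dt) - v[i] / k) * dt + sigma * v[i] * dw
            v[i + 1] = max(v[i + 1], 0.0)   # volumes stay non-negative
        return v

    # Constant input; runoff is Q = V / k.
    v = simulate_reservoir(v0=10.0, inflow=lambda t: 1.0, k=10.0,
                           sigma=0.1, t_end=100.0, dt=0.01)
    runoff = v / 10.0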
Bayesian parameter estimation for nonlinear modelling of biological pathways.
Ghasemi, Omid; Lindsey, Merry L; Yang, Tianyi; Nguyen, Nguyen; Huang, Yufei; Jin, Yu-Fang
2011-01-01
The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred format for representing the reaction rate in differential equation frameworks, due to their simple structure and their capability for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of this high nonlinearity, adaptive parameter estimation algorithms developed for linearly parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. We used the Runge-Kutta method to transform differential equations to difference equations, assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal-to-noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly parameterized dynamic systems. Our proposed Bayesian algorithm successfully estimated parameters in nonlinear mathematical models for biological pathways. This method can be further extended to higher-order systems and thus provides a useful tool to analyze biological dynamics and extract information using temporal data.
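To make the estimation scheme concrete, here is a minimal stand-in (ours, not the authors' code) that discretizes a single Hill-type ODE with classical Runge-Kutta and samples two Hill parameters with random-walk Metropolis MCMC from synthetic noisy data; the model, noise level, and tuning constants are all assumed for illustration.

```python
import numpy as np

# Toy pathway model: dx/dt = Vmax * u^n / (K^n + u^n) - d*x, with the
# Hill parameters (Vmax, K) unknown and (n, d, u) assumed known.
def hill_rhs(x, u, Vmax, K, n=2.0, d=0.1):
    return Vmax * u**n / (K**n + u**n) - d * x

def rk4_step(x, u, dt, Vmax, K):
    # classical 4th-order Runge-Kutta: ODE -> difference equation
    k1 = hill_rhs(x, u, Vmax, K)
    k2 = hill_rhs(x + 0.5*dt*k1, u, Vmax, K)
    k3 = hill_rhs(x + 0.5*dt*k2, u, Vmax, K)
    k4 = hill_rhs(x + dt*k3, u, Vmax, K)
    return x + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)

def simulate(Vmax, K, u=1.0, x0=0.0, dt=0.1, n=200):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(rk4_step(xs[-1], u, dt, Vmax, K))
    return np.array(xs)

rng = np.random.default_rng(1)
true = (1.0, 0.5)
data = simulate(*true) + rng.normal(0, 0.05, 200)   # synthetic noisy series

def log_post(theta):
    Vmax, K = theta
    if Vmax <= 0 or K <= 0:
        return -np.inf                               # flat prior on positives
    resid = data - simulate(Vmax, K)
    return -0.5 * np.sum(resid**2) / 0.05**2         # Gaussian likelihood

theta, lp, samples = np.array([0.5, 1.0]), None, []
lp = log_post(theta)
for _ in range(5000):                                # random-walk Metropolis
    prop = theta + rng.normal(0, 0.02, 2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
# posterior means of (Vmax, K) should land near the assumed true values
```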
NASA Astrophysics Data System (ADS)
Berrittella, C.; van Huissteden, J.
2011-10-01
Marine Isotope Stage 3 (MIS 3) interstadials are marked by a sharp increase in the atmospheric methane (CH4) concentration, as recorded in ice cores. Wetlands are assumed to be the major source of this CH4, although several other hypotheses have been advanced. Modelling of CH4 emissions is crucial to quantify CH4 sources for past climates. Vegetation effects are usually highly generalized in models of past and present-day CH4 fluxes, but should not be neglected. Plants strongly affect the soil-atmosphere exchange of CH4, and the net primary production of the vegetation supplies organic matter as substrate for methanogens. For modelling past CH4 fluxes from northern wetlands, assumptions on vegetation are highly relevant, since paleobotanical data indicate large differences in Last Glacial (LG) wetland vegetation composition as compared to modern wetland vegetation. Besides the vegetation being more cold-adapted, Sphagnum mosses appear to have been much less dominant during large parts of the LG than at present, which particularly affects CH4 oxidation and transport. To evaluate the effect of vegetation parameters, we used the PEATLAND-VU wetland CO2/CH4 model to simulate emissions from wetlands in continental Europe during LG and modern climates. We tested the effect of parameters influencing oxidation during plant transport (fox), vegetation net primary production (NPP, parameter symbol Pmax), plant transport rate (Vtransp), maximum rooting depth (Zroot) and root exudation rate (fex). Our model results show that modelled CH4 fluxes are sensitive to fox and Zroot in particular. The effects of Pmax, Vtransp and fex are of lesser relevance. Interactions with water table modelling are significant for Vtransp. We conducted experiments with different wetland vegetation types for MIS 3 stadial and interstadial climates and the present-day climate, by coupling PEATLAND-VU to high-resolution climate model simulations for Europe. Experiments assuming dominance of one vegetation type (Sphagnum vs. Carex vs. shrubs) show that Carex-dominated vegetation can increase CH4 emissions by 50% to 78% over Sphagnum-dominated vegetation depending on the modelled climate, while for shrubs this increase ranges from 42% to 72%. Consequently, during the LG northern wetlands may have had CH4 emissions similar to their present-day counterparts, despite a colder climate. Changes in dominant wetland vegetation, therefore, may drive changes in wetland CH4 fluxes, in the past as well as in the future.
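A schematic of the one-at-a-time sensitivity experiments described above (a generic sketch; PEATLAND-VU itself is not reproduced here, and the response function and baseline values are placeholders):

```python
# One-at-a-time (OAT) sensitivity scan, sketched with a stand-in flux
# function in place of PEATLAND-VU; parameter names follow the abstract
# (fox, Pmax, Vtransp, Zroot, fex) but the baseline values and the
# response surface are purely illustrative.
def ch4_flux(fox, Pmax, Vtransp, Zroot, fex):
    # placeholder response surface, NOT the PEATLAND-VU model
    return Pmax * fex * Zroot * Vtransp * (1.0 - fox)

baseline = {"fox": 0.5, "Pmax": 1.0, "Vtransp": 1.0, "Zroot": 0.3, "fex": 0.1}
ref = ch4_flux(**baseline)

for name in baseline:
    for factor in (0.5, 1.5):                    # perturb each parameter +/-50%
        perturbed = dict(baseline, **{name: baseline[name] * factor})
        flux = ch4_flux(**perturbed)
        print(f"{name} x{factor}: flux changes by {100*(flux/ref - 1):+.0f}%")
```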
Optimizing cosmological surveys in a crowded market
NASA Astrophysics Data System (ADS)
Bassett, Bruce A.
2005-04-01
Optimizing the major next-generation cosmological surveys (such as SNAP and KAOS) is a key problem given our ignorance of the physics underlying cosmic acceleration and the plethora of surveys planned. We propose a Bayesian design framework which (1) maximizes the discrimination power of a survey without assuming any underlying dark-energy model, (2) finds the best niche survey geometry given current data and future competing experiments, (3) maximizes the cross section for serendipitous discoveries and (4) can be adapted to answer specific questions (such as “is dark energy dynamical?”). Integrated parameter-space optimization (IPSO) is a design framework that integrates projected parameter errors over an entire dark energy parameter space and then extremizes a figure of merit (such as the Shannon entropy gain, which we show is stable to off-diagonal covariance matrix perturbations) as a function of survey parameters using analytical, grid or MCMC techniques. We discuss examples where the optimization can be performed analytically. IPSO is thus a general, model-independent and scalable framework that allows us to appropriately use prior information to design the best possible surveys.
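One way to make the entropy-gain figure of merit concrete (our reading of the standard information-theoretic definition, not a formula quoted from the paper): for Gaussian parameter errors with prior covariance \(C_0\) and projected posterior covariance \(C(s)\) for survey configuration \(s\), the Shannon entropy gain is

$$
\Delta H(s) \;=\; \tfrac{1}{2}\,\ln\!\frac{\det C_{0}}{\det C(s)},
$$

and an IPSO-style figure of merit integrates this gain over the dark-energy parameter space, \( \mathrm{FoM}(s) = \int \Delta H(\theta; s)\, p(\theta)\, \mathrm{d}\theta \), which is then extremized over the survey parameters \(s\).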
NASA Astrophysics Data System (ADS)
Asfahani, J.; Tlas, M.
2015-10-01
An easy and practical method for interpreting residual gravity anomalies due to simple geometrically shaped models, such as cylinders and spheres, is proposed in this paper. The method is based on both the deconvolution technique and the simplex algorithm for linear optimization, and estimates the model parameters, e.g., the depth from the surface to the center of a buried structure (sphere or horizontal cylinder) or the depth from the surface to the top of a buried object (vertical cylinder), and the amplitude coefficient, from the residual gravity anomaly profile. The method was tested on synthetic data sets corrupted by different levels of white Gaussian random noise to demonstrate its capability and reliability. The results show that the parameter values estimated by the proposed method are close to the assumed true parameter values. The validity of the method is also demonstrated using real field residual gravity anomalies from Cuba and Sweden; the results derived by this method show comparable and acceptable agreement with these field data.
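As a simple illustration of fitting such a profile with a simplex optimizer (a plain least-squares sketch of our own, standing in for the paper's deconvolution-plus-simplex scheme; all values are assumed):

```python
import numpy as np
from scipy.optimize import minimize

# Classical residual gravity anomaly of a buried sphere along a profile x:
#   g(x) = A * z / (x^2 + z^2)^(3/2),
# with amplitude coefficient A and depth z to the sphere's centre.
def sphere_anomaly(x, A, z):
    return A * z / (x**2 + z**2) ** 1.5

x = np.linspace(-50.0, 50.0, 101)                  # profile coordinates
rng = np.random.default_rng(0)
observed = sphere_anomaly(x, A=2000.0, z=10.0) + rng.normal(0, 0.05, x.size)

def misfit(p):
    A, z = p
    return np.sum((observed - sphere_anomaly(x, A, z)) ** 2)

# Nelder-Mead is the classic derivative-free simplex algorithm
fit = minimize(misfit, x0=[1000.0, 5.0], method="Nelder-Mead")
A_est, z_est = fit.x   # should land near the assumed true values (2000, 10)
```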
NASA Astrophysics Data System (ADS)
Chowdhary, Girish; Mühlegg, Maximilian; Johnson, Eric
2014-08-01
In model reference adaptive control (MRAC) the modelling uncertainty is often assumed to be parameterised with time-invariant unknown ideal parameters. The convergence of the parameters of the adaptive element to these ideal parameters is beneficial, as it guarantees exponential stability and makes an online learned model of the system available. Most MRAC methods, however, require persistent excitation (PE) of the states to guarantee that the adaptive parameters converge to the ideal values. Enforcing PE may be resource intensive and is often infeasible in practice. This paper presents theoretical analysis and illustrative examples of an adaptive control method that leverages the increasing ability to record and process data online, by using specifically selected and online recorded data concurrently with instantaneous data for adaptation. It is shown that when the system uncertainty can be modelled as a combination of known nonlinear bases, simultaneous exponential tracking and parameter error convergence can be guaranteed if the system states are exciting over finite intervals such that rich data can be recorded online; PE is not required. Furthermore, the rate of convergence is directly proportional to the minimum singular value of the matrix containing online recorded data. Consequently, an online algorithm to record and forget data is presented and its effects on the resulting switched closed-loop dynamics are analysed. It is also shown that when radial basis function neural networks (NNs) are used as adaptive elements, the method guarantees exponential convergence of the NN parameters to a compact neighbourhood of their ideal values without requiring PE. Flight test results on a fixed-wing unmanned aerial vehicle demonstrate the effectiveness of the method.
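The following scalar toy simulation (our own sketch under simplifying assumptions, not the paper's algorithm or flight code) illustrates the concurrent-learning idea: the weight update combines the instantaneous tracking error with prediction errors replayed on a small buffer of recorded regressor/uncertainty pairs, so the weights can converge without persistent excitation. The plant, basis, gains, and data-selection rule are all assumed.

```python
import numpy as np

W_true = np.array([1.0, -0.5])                   # ideal weights (unknown to controller)
phi = lambda x: np.array([x, np.abs(x) * x])     # known nonlinear basis

dt, gamma = 0.001, 5.0
x, x_ref, W = 0.0, 0.0, np.zeros(2)
buffer = []                                      # recorded (phi_j, Delta_j) pairs

for k in range(50_000):
    r = 1.0 if (k * dt) % 10 < 5 else -1.0       # square-wave reference
    e = x - x_ref
    u = -2.0 * e + r - W @ phi(x)                # tracking + adaptive cancellation
    Delta = W_true @ phi(x)                      # true uncertainty (simulation only;
                                                 # in practice estimated from xdot data)
    # record data points whose regressors are sufficiently new
    if len(buffer) < 20 and all(np.linalg.norm(phi(x) - p) > 0.1 for p, _ in buffer):
        buffer.append((phi(x).copy(), Delta))
    # concurrent-learning update: instantaneous error + replayed buffer errors
    dW = gamma * phi(x) * e
    for p, d in buffer:
        dW -= gamma * p * (W @ p - d)
    W += dW * dt
    x += (-x + u + Delta) * dt                   # plant: xdot = -x + u + Delta(x)
    x_ref += (-x_ref + r) * dt                   # reference model
# W should now be close to W_true, without persistent excitation
```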
An algorithm for hyperspectral remote sensing of aerosols: 1. Development of theoretical framework
NASA Astrophysics Data System (ADS)
Hou, Weizhen; Wang, Jun; Xu, Xiaoguang; Reid, Jeffrey S.; Han, Dong
2016-07-01
This paper describes the first part of a series of investigations to develop algorithms for simultaneous retrieval of aerosol parameters and surface reflectance from a newly developed hyperspectral instrument, the GEOstationary Trace gas and Aerosol Sensor Optimization (GEO-TASO), by taking full advantage of the available hyperspectral measurement information in the visible bands. We describe the theoretical framework of an inversion algorithm for the hyperspectral remote sensing of aerosol optical properties, in which the major principal components (PCs) of surface reflectance are assumed known, and the spectrally dependent aerosol refractive indices are assumed to follow a power-law approximation with four unknown parameters (two for the real and two for the imaginary part of the refractive index). New capabilities for computing the Jacobians of the four Stokes parameters of reflected solar radiation at the top of the atmosphere with respect to these unknown aerosol parameters and the weighting coefficients for each PC of surface reflectance are added into the UNified Linearized Vector Radiative Transfer Model (UNL-VRTM), which in turn facilitates the optimization in the inversion process. Theoretical derivations of the formulas for these new capabilities are provided, and the analytical solutions of the Jacobians are validated against finite-difference calculations with relative errors less than 0.2%. Finally, a self-consistency check of the inversion algorithm is conducted for the idealized green-vegetation and rangeland surfaces that were spectrally characterized by the U.S. Geological Survey digital spectral library. This check shows that the first six PCs can yield a reconstruction of spectral surface reflectance with errors less than 1%. Assuming that aerosol properties can be accurately characterized, the inversion yields a retrieval of hyperspectral surface reflectance with an uncertainty of 2% (and a root-mean-square error of less than 0.003), which suggests self-consistency in the inversion framework. The next step of using this framework to study the aerosol information content in GEO-TASO measurements is also discussed.
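A plausible explicit form of the power-law spectral dependence described above (our notation; the paper's exact parameterization may differ): with reference wavelength \(\lambda_0\), the real and imaginary parts of the refractive index are each modeled by an amplitude and an exponent,

$$
m_r(\lambda) = a_r \left(\frac{\lambda}{\lambda_0}\right)^{-b_r},
\qquad
m_i(\lambda) = a_i \left(\frac{\lambda}{\lambda_0}\right)^{-b_i},
$$

giving the four unknown aerosol parameters \((a_r, b_r, a_i, b_i)\) with respect to which the UNL-VRTM Jacobians are computed.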
Demographic estimation methods for plants with unobservable life-states
Kery, M.; Gregg, K.B.; Schaub, M.
2005-01-01
Demographic estimation of vital parameters in plants with an unobservable dormant state is complicated, because time of death is not known. Conventional methods assume that death occurs at a particular time after a plant has last been seen aboveground, but the consequences of assuming a particular duration of dormancy have never been tested. Capture-recapture methods do not make assumptions about time of death; however, problems with parameter estimability have not yet been resolved. To date, a critical comparative assessment of these methods is lacking. We analysed data from a 10-year study of Cleistes bifaria, a terrestrial orchid with frequent dormancy, and compared demographic estimates obtained by five varieties of the conventional methods and two capture-recapture methods. All conventional methods produced spurious unity survival estimates for some years or some states, and estimates of demographic rates sensitive to the time-of-death assumption. In contrast, capture-recapture methods are more parsimonious in terms of assumptions, are based on well-founded theory, and did not produce spurious estimates. In Cleistes, dormant episodes lasted for 1-4 years (mean 1.4, SD 0.74). The capture-recapture models estimated ramet survival rate at 0.86 (SE ≈ 0.01), ranging from 0.77-0.94 (SEs ≤ 0.1) in any one year. The average fraction dormant was estimated at 30% (SE 1.5), ranging from 16-47% (SEs ≤ 5.1) in any one year. Multistate capture-recapture models showed that survival rates were positively related to precipitation in the current year, but transition rates were more strongly related to precipitation in the previous year than in the current year, with more ramets going dormant following dry years. Not all capture-recapture models of interest have estimable parameters; for instance, without excavating plants in years when they do not appear aboveground, it is not possible to obtain independent time-specific survival estimates for dormant plants. We introduce rigorous computer algebra methods to identify the parameters that are estimable in principle. As life-states are a prominent feature of plant life cycles, multistate capture-recapture models are a natural framework for analysing the population dynamics of plants with dormancy.
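As a toy illustration of the multistate idea (a generic hidden-Markov forward computation of our own, not the authors' model or software): with states aboveground, dormant, and dead, annual survival s and dormancy probability d, and detection assumed perfect for aboveground ramets and impossible otherwise, the likelihood of one capture history can be computed as follows.

```python
import numpy as np

# Forward-algorithm likelihood of a capture history under a three-state
# model (aboveground, dormant, dead). s is annual survival; d is the
# probability that a surviving ramet is dormant in a given year.
def history_likelihood(history, s=0.86, d=0.30):
    T = np.array([[s*(1-d), s*d, 1-s],     # transitions from aboveground
                  [s*(1-d), s*d, 1-s],     # transitions from dormant
                  [0.0,     0.0, 1.0]])    # dead is absorbing
    p_detect = np.array([1.0, 0.0, 0.0])   # detection probability per state
    alpha = np.array([1.0, 0.0, 0.0])      # condition on the first sighting
    for obs in history[1:]:
        alpha = alpha @ T                  # propagate state probabilities
        alpha = alpha * (p_detect if obs else 1.0 - p_detect)
    return alpha.sum()

# a ramet seen, missing for two years (dormant? dead?), then seen again:
# the only surviving path is dormant-dormant-aboveground, s^3 * d^2 * (1-d)
print(history_likelihood([1, 0, 0, 1]))
```

Summing such likelihoods over all observed histories and maximizing with respect to (s, d) is the essence of the multistate capture-recapture approach; the estimability issues noted in the abstract correspond to directions in (s, d)-space along which this likelihood is flat.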