Sample records for reasonable parameter values

  1. Secure and Efficient Signature Scheme Based on NTRU for Mobile Payment

    NASA Astrophysics Data System (ADS)

    Xia, Yunhao; You, Lirong; Sun, Zhe; Sun, Zhixin

    2017-10-01

    Mobile payment is becoming increasingly popular; however, traditional public-key encryption algorithms place high demands on hardware, which makes them unsuitable for mobile terminals with limited computing resources. In addition, these public-key encryption algorithms are not resistant to quantum computing. This paper studies the quantum-resistant public-key algorithm NTRU by analyzing the influence of the parameters q and k on the probability of generating a reasonable signature value. Two methods are proposed to improve this probability: first, increasing the value of the parameter q; second, adding, during the signing phase, an authentication condition that checks whether the reasonable-signature requirements are met. Experimental results show that the proposed signature scheme achieves zero leakage of private-key information from the signature value and increases the probability of generating a reasonable signature value. It also improves the signature rate and avoids the propagation of invalid signatures in the network, although the scheme places certain restrictions on parameter selection.

  2. Planning Robot-Control Parameters With Qualitative Reasoning

    NASA Technical Reports Server (NTRS)

    Peters, Stephen F.

    1993-01-01

    Qualitative-reasoning planning algorithm helps to determine quantitative parameters controlling motion of robot. Algorithm regarded as performing search in multidimensional space of control parameters from starting point to goal region in which desired result of robotic manipulation achieved. Makes use of directed graph representing qualitative physical equations describing task, and interacts, at each sampling period, with history of quantitative control parameters and sensory data, to narrow search for reliable values of quantitative control parameters.

  3. Prediction and typicality in multiverse cosmology

    NASA Astrophysics Data System (ADS)

    Azhar, Feraz

    2014-02-01

    In the absence of a fundamental theory that precisely predicts values for observable parameters, anthropic reasoning attempts to constrain probability distributions over those parameters in order to facilitate the extraction of testable predictions. The utility of this approach has been vigorously debated of late, particularly in light of theories that claim we live in a multiverse, where parameters may take differing values in regions lying outside our observable horizon. Within this cosmological framework, we investigate the efficacy of top-down anthropic reasoning based on the weak anthropic principle. We argue, contrary to recent claims, that it is not clear one can either dispense with notions of typicality altogether or presume typicality, in comparing resulting probability distributions with observations. We show, in a concrete top-down setting related to dark matter, that assumptions about typicality can dramatically affect predictions, thereby providing a guide to how errors in reasoning regarding typicality translate to errors in the assessment of predictive power. We conjecture that this dependence on typicality is an integral feature of anthropic reasoning in broader cosmological contexts, and argue in favour of the explicit inclusion of measures of typicality in schemes invoking anthropic reasoning, with a view to extracting predictions from multiverse scenarios.

  4. Outdoor ground impedance models.

    PubMed

    Attenborough, Keith; Bashir, Imran; Taherzadeh, Shahram

    2011-05-01

    Many models for the acoustical properties of rigid-porous media require knowledge of parameter values that are not available for outdoor ground surfaces. The relationship used between tortuosity and porosity for stacked spheres results in five characteristic impedance models that require not more than two adjustable parameters. These models and hard-backed-layer versions are considered further through numerical fitting of 42 short range level difference spectra measured over various ground surfaces. For all but eight sites, slit-pore, phenomenological and variable porosity models yield lower fitting errors than those given by the widely used one-parameter semi-empirical model. Data for 12 of 26 grassland sites and for three beech wood sites are fitted better by hard-backed-layer models. Parameter values obtained by fitting slit-pore and phenomenological models to data for relatively low flow resistivity grounds, such as forest floors, porous asphalt, and gravel, are consistent with values that have been obtained non-acoustically. Three impedance models yield reasonable fits to a narrow band excess attenuation spectrum measured at short range over railway ballast but, if extended reaction is taken into account, the hard-backed-layer version of the slit-pore model gives the most reasonable parameter values.

  5. THEORETICAL RESEARCH OF THE OPTICAL SPECTRA AND EPR PARAMETERS FOR Cs2NaYCl6:Dy3+ CRYSTAL

    NASA Astrophysics Data System (ADS)

    Dong, Hui-Ning; Dong, Meng-Ran; Li, Jin-Jin; Li, Deng-Feng; Zhang, Yi

    2013-09-01

    The important material Cs2NaYCl6 doped with rare earth ions has received much attention because of its excellent optical and magnetic properties. Based on the superposition model, in this paper the crystal field energy levels, the electron paramagnetic resonance (EPR) g factors of Dy3+, and the hyperfine structure constants of the 161Dy3+ and 163Dy3+ isotopes in Cs2NaYCl6 crystal are studied by diagonalizing the 42 × 42 energy matrix. In the calculations, the contributions of various admixtures and interactions, such as the J-mixing, the mixtures among the states with the same J-value, and the covalence, are all considered. The calculated EPR parameters are in reasonable agreement with the observed values. The results are discussed.

  6. UK audit of analysis of quantitative parameters from renography data generated using a physical phantom.

    PubMed

    Nijran, Kuldip S; Houston, Alex S; Fleming, John S; Jarritt, Peter H; Heikkinen, Jari O; Skrypniuk, John V

    2014-07-01

    In this second UK audit of quantitative parameters obtained from renography, phantom simulations were used in cases in which the 'true' values could be estimated, allowing the accuracy of the parameters measured to be assessed. A renal physical phantom was used to generate a set of three phantom simulations (six kidney functions) acquired on three different gamma camera systems. A total of nine phantom simulations and three real patient studies were distributed to UK hospitals participating in the audit. Centres were asked to provide results for the following parameters: relative function and time-to-peak (whole kidney and cortical region). As with previous audits, a questionnaire collated information on methodology. Errors were assessed as the root mean square deviation from the true value. Sixty-one centres responded to the audit, with some hospitals providing multiple sets of results. Twenty-one centres provided a complete set of parameter measurements. Relative function and time-to-peak showed a reasonable degree of accuracy and precision in most UK centres. The overall average root mean squared deviation of the results for (i) the time-to-peak measurement for the whole kidney and (ii) the relative function measurement from the true value was 7.7 and 4.5%, respectively. These results showed a measure of consistency in the relative function and time-to-peak that was similar to the results reported in a previous renogram audit by our group. Analysis of audit data suggests a reasonable degree of accuracy in the quantification of renography function using relative function and time-to-peak measurements. However, it is reasonable to conclude that the objectives of the audit could not be fully realized because of the limitations of the mechanical phantom in providing true values for renal parameters.
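The audit's error metric, the root mean square deviation of centre results from the phantom's 'true' value, can be sketched as follows (the numbers below are illustrative, not audit data):

```python
import math

def rmsd(measured, true_value):
    """Root mean square deviation of centre results from the known phantom value."""
    return math.sqrt(sum((m - true_value) ** 2 for m in measured) / len(measured))

# Hypothetical relative-function results (%) from four centres, against a
# phantom 'true' split of 50%.
results = [48.0, 53.0, 50.5, 46.0]
error = rmsd(results, 50.0)
```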

  7. [Temporal and spatial heterogeneity analysis of optimal value of sensitive parameters in ecological process model: The BIOME-BGC model as an example].

    PubMed

    Li, Yi Zhe; Zhang, Ting Long; Liu, Qiu Yu; Li, Ying

    2018-01-01

    Ecological process models are powerful tools for studying the water and carbon cycles of terrestrial ecosystems. However, these models have many parameters, and whether reasonable values are taken for these parameters has an important impact on the simulation results. The sensitivity and optimization of model parameters have been analyzed and discussed in many previous studies, but the temporal and spatial heterogeneity of the optimal parameter values has received less attention. In this paper, the BIOME-BGC model was used as an example. For evergreen broad-leaved forest, deciduous broad-leaved forest and C3 grassland, the sensitive parameters of the model were selected by constructing a sensitivity judgment index, with two experimental sites chosen under each vegetation type. An objective function was constructed using the simulated annealing algorithm combined with flux data to obtain the monthly optimal values of the sensitive parameters at each site. We then constructed a temporal heterogeneity judgment index, a spatial heterogeneity judgment index, and a combined temporal-spatial heterogeneity judgment index to quantitatively analyze the heterogeneity of the optimal values of the model's sensitive parameters. The results showed that the sensitivity of the BIOME-BGC model parameters differed among vegetation types, but the selected sensitive parameters were mostly consistent. The optimal values of the sensitive parameters mostly presented temporal and spatial heterogeneity to different degrees, varying with vegetation type. Sensitive parameters related to vegetation physiology and ecology had relatively little temporal and spatial heterogeneity, while those related to environment and phenology generally had larger heterogeneity. In addition, the temporal heterogeneity of the optimal values showed a significant linear correlation with the spatial heterogeneity under the three vegetation types. According to the temporal and spatial heterogeneity of the optimal values, the parameters of the BIOME-BGC model could be classified in order to adopt different parameter strategies in practical application. These conclusions help in understanding the parameters and optimal values of ecological process models, and provide a reference for obtaining reasonable parameter values in model applications.

  8. Simulation tests of the optimization method of Hopfield and Tank using neural networks

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.

    1988-01-01

    The method proposed by Hopfield and Tank for using the Hopfield neural network with continuous valued neurons to solve the traveling salesman problem is tested by simulation. Several researchers have apparently been unable to successfully repeat the numerical simulation documented by Hopfield and Tank. However, as suggested to the author by Adams, it appears that the reason for those difficulties is that a key parameter value is reported erroneously (by four orders of magnitude) in the original paper. When a reasonable value is used for that parameter, the network performs generally as claimed. Additionally, a new method of using feedback to control the input bias currents to the amplifiers is proposed and successfully tested. This eliminates the need to set the input currents by trial and error.

  9. Application of nonlinear least-squares regression to ground-water flow modeling, west-central Florida

    USGS Publications Warehouse

    Yobbi, D.K.

    2000-01-01

    A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest were estimated by nonlinear regression. Optimal parameter estimates range from about 140 times greater than to about 0.01 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first and then, using the optimized values for these parameters, estimate the entire data set.
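The regression idea above, minimizing the differences between measured and simulated water levels, can be sketched with Gauss-Newton iteration on a toy two-parameter model (the exponential decline model and all numbers are assumptions for illustration, not the Florida model):

```python
import math

def simulate(params, times):
    # Toy "flow model": water level declines exponentially toward zero.
    a, k = params
    return [a * math.exp(-k * t) for t in times]

def gauss_newton(params, times, observed, steps=20):
    """Minimise the sum of squared residuals between observed and simulated levels."""
    a, k = params
    for _ in range(steps):
        resid = [o - s for o, s in zip(observed, simulate((a, k), times))]
        # Analytic Jacobian of the model with respect to (a, k).
        J = [(math.exp(-k * t), -a * t * math.exp(-k * t)) for t in times]
        # Normal equations (J^T J) delta = J^T r, solved directly for 2 parameters.
        g11 = sum(j[0] * j[0] for j in J)
        g12 = sum(j[0] * j[1] for j in J)
        g22 = sum(j[1] * j[1] for j in J)
        b1 = sum(j[0] * r for j, r in zip(J, resid))
        b2 = sum(j[1] * r for j, r in zip(J, resid))
        det = g11 * g22 - g12 * g12
        a += (b1 * g22 - b2 * g12) / det
        k += (g11 * b2 - g12 * b1) / det
    return a, k

times = [0.0, 1.0, 2.0, 4.0, 8.0]
observed = [10.0 * math.exp(-0.3 * t) for t in times]  # synthetic, noise-free data
a_fit, k_fit = gauss_newton((5.0, 0.1), times, observed)
```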

  10. New procedure for the determination of Hansen solubility parameters by means of inverse gas chromatography.

    PubMed

    Adamska, K; Bellinghausen, R; Voelkel, A

    2008-06-27

    The Hansen solubility parameter (HSP) seems to be a useful tool for the thermodynamic characterization of different materials. Unfortunately, estimating HSP values can cause some problems. In this work, different inverse gas chromatography procedures are presented for calculating the solubility parameters of pharmaceutical excipients. The proposed new procedure, based on the methodology of Lindvig et al., in which experimental values of the Flory-Huggins interaction parameter are used, can be a reasonable alternative for the estimation of HSP values. The advantage of this method is that the values of the Flory-Huggins interaction parameter chi for all test solutes are used in the calculation, so that diverse interactions between the test solutes and the material are taken into consideration.

  11. Parameterization of aquatic ecosystem functioning and its natural variation: Hierarchical Bayesian modelling of plankton food web dynamics

    NASA Astrophysics Data System (ADS)

    Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede

    2017-10-01

    Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
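The contrast between the global and hierarchical formulations can be illustrated with a minimal precision-weighted sketch (Gaussian assumptions throughout; the per-dataset estimates, variances, and between-dataset variance tau2 are hypothetical, not values from the study):

```python
# Per-dataset estimates of some rate parameter (hypothetical numbers) with
# their sampling variances.
estimates = [0.8, 1.2, 1.0, 1.5]
variances = [0.04, 0.04, 0.04, 0.04]

# Global analysis: a single shared value (precision-weighted mean).
global_mean = (sum(e / v for e, v in zip(estimates, variances))
               / sum(1.0 / v for v in variances))

# Hierarchical analysis: each dataset's estimate is shrunk toward the shared
# distribution; tau2 is the assumed between-dataset (hyperparameter) variance.
tau2 = 0.02
shrunk = [(e / v + global_mean / tau2) / (1.0 / v + 1.0 / tau2)
          for e, v in zip(estimates, variances)]
# Separate analysis would simply keep `estimates` unchanged (tau2 -> infinity).
```

Each shrunk value lies between the dataset's own estimate and the pooled mean, which is the "variation arising from a shared distribution" of formulation 3 in miniature.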

  12. Recommended Parameter Values for GENII Modeling of Radionuclides in Routine Air and Water Releases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, Sandra F.; Arimescu, Carmen; Napier, Bruce A.

    The GENII v2 code is used to estimate dose to individuals or populations from the release of radioactive materials into air or water. Numerous parameter values are required as input to this code. User-defined parameters cover the spectrum from chemical data and meteorological data to agricultural and behavioral data. This document is a summary of parameter values that reflect conditions in the United States. Reasonable regional and age-dependent data are summarized. Data availability and quality vary. The set of parameters described addresses scenarios for chronic air emissions or chronic releases to public waterways. Considerations for the special tritium and carbon-14 models are briefly addressed. GENII v2.10.0 is the current software version that this document supports.

  13. Principles of parametric estimation in modeling language competition

    PubMed Central

    Zhang, Menghan; Gong, Tao

    2013-01-01

    It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka–Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data. PMID:23716678
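The Lotka-Volterra competition dynamics underlying the model can be sketched with a simple Euler integration (the growth rates, capacities, and competition coefficients below are illustrative assumptions, not values estimated by the authors' principles):

```python
def step(x, y, dt, r=(0.5, 0.4), K=(1.0, 1.0), a=(0.9, 1.1)):
    """One Euler step of Lotka-Volterra competition between two languages.

    x, y are speaker fractions; r are growth rates, K carrying capacities,
    and a the cross-impact (competition) coefficients.
    """
    dx = r[0] * x * (1 - (x + a[0] * y) / K[0])
    dy = r[1] * y * (1 - (y + a[1] * x) / K[1])
    return x + dt * dx, y + dt * dy

x, y = 0.4, 0.4  # equal initial speaker fractions (illustrative)
for _ in range(5000):
    x, y = step(x, y, 0.05)
# With a[0] < 1 < a[1], language x outcompetes language y over time.
```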

  14. Principles of parametric estimation in modeling language competition.

    PubMed

    Zhang, Menghan; Gong, Tao

    2013-06-11

    It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka-Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data.

  15. The polarization of continuum radiation in sunspots. I - Rayleigh and Thomson scattering

    NASA Technical Reports Server (NTRS)

    Finn, G. D.; Jefferies, J. T.

    1974-01-01

    Expressions are derived for the Stokes parameters of light scattered by a layer of free electrons and hydrogen atoms in a sunspot. A physically reasonable sunspot model was found for which the direction of the calculated linear polarization agrees reasonably with observations. The magnitude of the calculated linear polarization agrees generally with values observed in the continuum at 5830 A. Circular polarization in the continuum also accompanies electron scattering in spot regions; however, for commonly accepted values of the longitudinal magnetic field, the predicted circular polarization is much smaller than observed.

  16. The "covariation method" for estimating the parameters of the standard Dynamic Energy Budget model II: Properties and preliminary patterns

    NASA Astrophysics Data System (ADS)

    Lika, Konstadia; Kearney, Michael R.; Kooijman, Sebastiaan A. L. M.

    2011-11-01

    The covariation method for estimating the parameters of the standard Dynamic Energy Budget (DEB) model provides a single-step method of accessing all the core DEB parameters from commonly available empirical data. In this study, we assess the robustness of this parameter estimation procedure and analyse the role of pseudo-data using elasticity coefficients. In particular, we compare the performance of Maximum Likelihood (ML) vs. Weighted Least Squares (WLS) approaches and find that the two approaches tend to converge in performance as the number of uni-variate data sets increases, but that WLS is more robust when data sets comprise single points (zero-variate data). The efficiency of the approach is shown to be high, and the prior parameter estimates (pseudo-data) have very little influence if the real data contain information about the parameter values. For instance, the effect of the pseudo-value for the allocation fraction κ is reduced when there is information on both growth and reproduction, that for the energy conductance is reduced when information on age at birth and puberty is given, and the effect of the pseudo-value for the maturity maintenance rate coefficient is insignificant. The estimation of some parameters (e.g., the zoom factor and the shape coefficient) requires little information, while that of others (e.g., maturity maintenance rate, puberty threshold and reproduction efficiency) requires data at several food levels. The generality of the standard DEB model, in combination with the estimation of all of its parameters, allows comparison of species on the basis of parameter values. We discuss a number of preliminary patterns emerging from the present collection of parameter estimates across a wide variety of taxa. We make the observation that the estimated value of the fraction κ of mobilised reserve that is allocated to soma is far from the value that maximises reproduction. We recognise this as the reason why two very different parameter sets must exist that fit most data sets reasonably well, and give arguments why, in most cases, the set with the large value of κ should be preferred. The continued development of a parameter database through the estimation procedures described here will provide a strong basis for understanding evolutionary patterns in metabolic organisation across the diversity of life.

  17. Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel.

    PubMed

    Dansirikul, Chantaratsamon; Choi, Malcolm; Duffull, Stephen B

    2005-06-01

    This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment-model-dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel spreadsheet was implemented with the use of Solver and Visual Basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained with those from a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed- and random-effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated by the BA method were similar to those of the NONMEM estimation.
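For the one-compartment IV bolus case, the conversion from non-compartmental variables to compartment-model parameters has a closed form, which a back-analysis sketch can illustrate (all values below are illustrative; the paper's Solver-based spreadsheet also handles the two-compartment case, which has no such closed form):

```python
import math

def one_compartment_from_nca(dose, auc, t_half):
    """One-compartment IV bolus parameters from non-compartmental variables:
    CL = Dose / AUC, k = ln(2) / t_half, V = CL / k."""
    cl = dose / auc
    k = math.log(2) / t_half
    v = cl / k
    return cl, v, k

# Illustrative values: 100 mg dose, AUC 50 mg*h/L, half-life 4 h.
cl, v, k = one_compartment_from_nca(100.0, 50.0, 4.0)
```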

  18. Bayesian Parameter Estimation for Heavy-Duty Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Eric; Konan, Arnaud; Duran, Adam

    2017-03-28

    Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly-controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses a Monte Carlo approach to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of each parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history gives a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets, and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
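The acceptance rule described above (accept a proposed parameter set with the probability ratio to the current state) is a Metropolis step, which can be sketched against a simplified road-load equation. The equation form, bounds, noise level, step sizes, and all numbers here are assumptions for illustration, not the paper's setup:

```python
import math
import random

random.seed(1)

def road_load(m, cda, crr, v, a, rho=1.2, g=9.81):
    """Simplified road-load force: inertia + aerodynamic drag + rolling resistance."""
    return m * a + 0.5 * rho * cda * v ** 2 + m * g * crr

# Synthetic "logged" data generated from an assumed true parameter set.
true_params = (1800.0, 0.65, 0.008)  # mass [kg], Cd*A [m^2], rolling coeff.
obs = [(v, a, road_load(*true_params, v, a))
       for v in (5, 10, 20, 30) for a in (0.0, 0.5)]

def log_prob(params, sigma=50.0):
    """Gaussian measurement model with flat priors inside loose bounds."""
    m, cda, crr = params
    if not (500 < m < 5000 and 0.1 < cda < 2.0 and 0.001 < crr < 0.05):
        return -math.inf
    return -sum((f - road_load(m, cda, crr, v, a)) ** 2
                for v, a, f in obs) / (2 * sigma ** 2)

state = (1500.0, 0.5, 0.01)  # deliberately wrong starting guess
lp = log_prob(state)
chain = []
for _ in range(20000):
    proposal = (state[0] + random.gauss(0, 20.0),
                state[1] + random.gauss(0, 0.01),
                state[2] + random.gauss(0, 0.0005))
    lp_prop = log_prob(proposal)
    # Metropolis step: accept with the probability ratio to the current state.
    if math.log(random.random()) < lp_prop - lp:
        state, lp = proposal, lp_prop
    chain.append(state)

# The chain history gives a distribution over parameter sets, not a single value.
mean_mass = sum(s[0] for s in chain[10000:]) / 10000
```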

  19. Oxidation of edible animal fats. Comparison of the performance of different quantification methods and of a proposed new semi-objective colour scale-based method.

    PubMed

    Méndez-Cid, Francisco J; Lorenzo, José M; Martínez, Sidonia; Carballo, Javier

    2017-02-15

    The agreement among the results determined for the main parameters used in the evaluation of the fat auto-oxidation was investigated in animal fats (butter fat, subcutaneous pig back-fat and subcutaneous ham fat). Also, graduated colour scales representing the colour change during storage/ripening were developed for the three types of fat, and the values read in these scales were correlated with the values observed for the different parameters indicating fat oxidation. In general good correlation among the values of the different parameters was observed (e.g. TBA value correlated with the peroxide value: r=0.466 for butter and r=0.898 for back-fat). A reasonable correlation was observed between the values read in the developed colour scales and the values for the other parameters determined (e.g. values of r=0.320 and r=0.793 with peroxide value for butter and back-fat, respectively, and of r=0.767 and r=0.498 with TBA value for back-fat and ham fat, respectively). Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Why anthropic reasoning cannot predict Lambda.

    PubMed

    Starkman, Glenn D; Trotta, Roberto

    2006-11-17

    We revisit anthropic arguments purporting to explain the measured value of the cosmological constant. We argue that different ways of assigning probabilities to candidate universes lead to totally different anthropic predictions. As an explicit example, we show that weighting different universes by the total number of possible observations leads to an extremely small probability for observing a value of Lambda equal to or greater than what we now measure. We conclude that anthropic reasoning within the framework of probability as frequency is ill-defined and that in the absence of a fundamental motivation for selecting one weighting scheme over another the anthropic principle cannot be used to explain the value of Lambda, nor, likely, any other physical parameters.

  1. Optimization and evaluation of metal injection molding by using X-ray tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Shidi; Zhang, Ruijie; Qu, Xuanhui, E-mail: quxh@ustb.edu.cn

    2015-06-15

    6061 aluminum alloy and 316L stainless steel green bodies were obtained using different injection parameters (injection pressure, speed and temperature). After the injection process, the green bodies were scanned by X-ray tomography. The projection and reconstruction images show the different kinds of defects produced by improper injection parameters. 3D renderings of the Al alloy green bodies were then used to demonstrate the spatial morphology of the serious defects. Based on the scanned and calculated results, it is convenient to obtain proper injection parameters for the Al alloy. The reasons for defect formation are then discussed. During mold filling, the serious defects mainly formed in the case of low injection temperature and high injection speed. According to the gray value distribution of the projection image, a threshold gray value was obtained to evaluate whether the quality of a green body meets the desired standard. The proper injection parameters of 316L stainless steel can be obtained efficiently by using the method of analyzing the Al alloy injection. - Highlights: • Different types of defects in green bodies were scanned by using X-ray tomography. • Reasons for the defect formation were discussed. • Optimization of the injection parameters can be simplified greatly by the way of X-ray tomography. • Evaluation standard of the injection process can be obtained by using the gray value distribution of projection image.
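The abstract does not specify how the threshold gray value is applied; one plausible acceptance test, sketched here as an assumption (the dark-pixel-fraction criterion and all numbers are hypothetical):

```python
def passes_quality(gray_values, threshold, max_dark_fraction=0.01):
    """Accept a projection image if the fraction of pixels whose gray value
    falls below the defect threshold is small (assumed acceptance criterion)."""
    dark = sum(1 for g in gray_values if g < threshold)
    return dark / len(gray_values) <= max_dark_fraction

# A mostly-bright image with one low-gray (defect-like) pixel out of 100.
ok = passes_quality([200] * 99 + [10], threshold=50)
```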

  2. Theory-restricted resonant x-ray reflectometry of quantum materials

    NASA Astrophysics Data System (ADS)

    Fürsich, Katrin; Zabolotnyy, Volodymyr B.; Schierle, Enrico; Dudy, Lenart; Kirilmaz, Ozan; Sing, Michael; Claessen, Ralph; Green, Robert J.; Haverkort, Maurits W.; Hinkov, Vladimir

    2018-04-01

    The delicate interplay of competing phases in quantum materials is dominated by parameters such as the crystal field potential, the spin-orbit coupling, and, in particular, the electronic correlation strength. Whereas small quantitative variations of the parameter values can thus qualitatively change the material, these values can hitherto hardly be obtained with reasonable precision, be it theoretically or experimentally. Here we propose a solution combining resonant x-ray reflectivity (RXR) with multiplet ligand field theory (MLFT). We first perform ab initio DFT calculations within the MLFT framework to get initial parameter values, which we then use in a fit of the theoretical model to RXR. To validate our method, we apply it to NiO and SrTiO3 and obtain parameter values, which are amended by as much as 20 % compared to the ab initio results. Our approach is particularly useful to investigate topologically trivial and nontrivial correlated insulators, staggered moments in magnetically or orbitally ordered materials, and reconstructed interfaces.

  3. Failure analysis of parameter-induced simulation crashes in climate models

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y.

    2013-01-01

    Simulations using IPCC-class climate models are subject to fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We apply support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicts model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures are determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations are the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.
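The classification task above, predicting failure from parameter values, can be sketched with a minimal Pegasos-style linear SVM trained by sub-gradient descent on a toy two-parameter "crash region"; the data-generating rule, dimensionality, and hyperparameters are invented for illustration (the study used a committee of SVM classifiers over 18 POP2 parameters):

```python
import random

random.seed(0)

# Toy stand-in for the ensemble: runs "crash" (label -1) when a viscosity-like
# parameter combination exceeds a threshold (illustrative rule, not POP2).
data = []
for _ in range(200):
    x1, x2 = random.uniform(0, 1), random.uniform(0, 1)
    label = -1.0 if x1 + x2 > 1.1 else 1.0
    data.append(((x1, x2), label))

def train_linear_svm(samples, lam=0.01, epochs=200):
    """Pegasos-style sub-gradient training of a linear SVM with a bias term."""
    w, b, t = [0.0, 0.0], 0.0, 0
    for _ in range(epochs):
        random.shuffle(samples)
        for x, y in samples:
            t += 1
            eta = 1.0 / (lam * t)
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            w = [wi * (1 - eta * lam) for wi in w]  # shrink (regularization)
            if margin < 1:  # hinge-loss sub-gradient step
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                b += eta * y
    return w, b

w, b = train_linear_svm(data)
correct = sum(1 for x, y in data
              if y * (w[0] * x[0] + w[1] * x[1] + b) > 0)
accuracy = correct / len(data)
```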

  4. Failure analysis of parameter-induced simulation crashes in climate models

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y.

    2013-08-01

    Simulations using IPCC (Intergovernmental Panel on Climate Change)-class climate models can fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We applied support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicted model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures were determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations were the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.

  5. A comment on the validity of fragmentation parameters measured in nuclear emulsions. [cosmic ray nuclei]

    NASA Technical Reports Server (NTRS)

    Waddington, C. J.

    1978-01-01

    Evidence is reexamined which has been cited as suggesting serious errors in the use of fragmentation parameters appropriate to an airlike medium deduced from measurements made in nuclear emulsions to evaluate corrections for certain effects in balloon-borne observations of cosmic-ray nuclei. Fragmentation parameters for hydrogenlike interactions are calculated and shown to be in overall good agreement with those obtained previously for air. Experimentally measured fragmentation parameters in emulsion are compared with values computed semiempirically, and reasonable agreement is indicated.

  6. Bayesian Hypothesis Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, Stephen A.; Sigeti, David E.

    These are a set of slides about Bayesian hypothesis testing, where many hypotheses are tested. The conclusions are the following: The value of the Bayes factor obtained when using the median of the posterior marginal is almost the minimum value of the Bayes factor. The value of τ² which minimizes the Bayes factor is a reasonable choice for this parameter. This allows a likelihood ratio to be computed which is the least favorable to H₀.

  7. Brain natriuretic peptide in cardiac surgery--influence of aprotinin.

    PubMed

    Bail, Dorothee H L; Ziemer, Gerhard

    2009-08-14

    This study was designed to examine plasma concentrations of BNP and their correlation with the perioperative course in patients undergoing cardiopulmonary bypass (CPB). Sixty-five patients with coronary artery disease (CAD) undergoing CPB were examined. Pre-, intra- and postoperative BNP values and hemodynamic parameters were measured. The BNP peak correlated neither with any hemodynamic parameter, with the duration of aortic cross-clamping (AXCL) or of the operation, with CPB time, nor with the use of catecholamines. BNP values were significantly higher with the perioperative use of aprotinin. These results confirm that the metabolism and biological activity of BNP may differ following CPB. An additional reason for increased postoperative BNP values might be the use of aprotinin.

  8. Tribological Properties of PVD Ti/C-N Nanocoatings

    NASA Astrophysics Data System (ADS)

    Leitans, A.; Lungevics, J.; Rudzitis, J.; Filipovs, A.

    2017-04-01

    The present paper discusses and analyses tribological properties of various coatings that increase surface wear resistance. Four Ti/C-N nanocoatings deposited with different settings are analysed. Tribological and metrological tests on the samples are performed: 2D and 3D surface roughness parameters are measured with a modern profilometer, and the friction coefficient is measured with CSM Instruments equipment. The roughness parameters Ra, Sa, Sz, Str, Sds, Vmp, Vmc and the friction coefficient at 6 N load are determined during the experiment. The examined samples have many pores, which is the main reason for the relatively large values of the roughness parameters. Slight wear is identified in all four samples as well; the friction coefficient values range from 0.21 to 0.29. Wear rate values are not calculated for the investigated coatings, as no pronounced tribotracks are detected on the coating surface.

  9. Spatial interpolation of monthly mean air temperature data for Latvia

    NASA Astrophysics Data System (ADS)

    Aniskevich, Svetlana

    2016-04-01

    Temperature data with high spatial resolution are essential for appropriate and qualitative local characteristics analysis. Nowadays the surface observation station network in Latvia consists of 22 stations recording daily air temperature, so in order to analyze very specific and local features in the spatial distribution of temperature values across the whole of Latvia, a high quality spatial interpolation method is required. Until now inverse distance weighted interpolation was used for the interpolation of air temperature data at the meteorological and climatological service of the Latvian Environment, Geology and Meteorology Centre, and no additional topographical information was taken into account. This method made it almost impossible to reasonably assess the actual temperature gradient and distribution between the observation points. During this project a new interpolation method was applied and tested, considering auxiliary explanatory parameters. In order to spatially interpolate monthly mean temperature values, kriging with external drift was used over a grid of 1 km resolution, which contains parameters such as 5 km mean elevation, continentality, distance from the Gulf of Riga and the Baltic Sea, biggest lakes and rivers, and population density. As the most appropriate of these parameters, based on a complex situation analysis, mean elevation and continentality were chosen. In order to validate interpolation results, several statistical indicators of the differences between predicted values and the values actually observed were used. Overall, the introduced model visually and statistically outperforms the previous interpolation method and provides a meteorologically reasonable result, taking into account factors that influence the spatial distribution of the monthly mean temperature.
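
The baseline method the abstract mentions, inverse distance weighted interpolation, is simple to state: each station contributes its value with weight 1/d^p. A minimal sketch with made-up station data follows; kriging with external drift, the method actually adopted, additionally regresses on covariates such as elevation and continentality and is not shown here.

```python
import math

def idw(stations, target, power=2.0):
    """Inverse distance weighted estimate at `target` from
    (x, y, value) station tuples."""
    num = den = 0.0
    for x, y, v in stations:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return v                  # exactly on a station
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# three made-up "stations" with monthly mean temperatures (deg C)
stations = [(0.0, 0.0, 5.0), (10.0, 0.0, 7.0), (0.0, 10.0, 6.0)]
t = idw(stations, (5.0, 5.0))   # equidistant from all three: plain average
```

Note the estimate is always bounded by the observed station values, which is one reason IDW cannot reproduce the elevation-driven gradients the paper targets.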

  10. The causal structure of utility conditionals.

    PubMed

    Bonnefon, Jean-François; Sloman, Steven A

    2013-01-01

    The psychology of reasoning is increasingly considering agents' values and preferences, achieving greater integration with judgment and decision making, social cognition, and moral reasoning. Some of this research investigates utility conditionals, ''if p then q'' statements where the realization of p or q or both is valued by some agents. Various approaches to utility conditionals share the assumption that reasoners make inferences from utility conditionals based on the comparison between the utility of p and the expected utility of q. This article introduces a new parameter in this analysis, the underlying causal structure of the conditional. Four experiments showed that causal structure moderated utility-informed conditional reasoning. These inferences were strongly invited when the underlying structure of the conditional was causal, and significantly less so when the underlying structure of the conditional was diagnostic. This asymmetry was only observed for conditionals in which the utility of q was clear, and disappeared when the utility of q was unclear. Thus, an adequate account of utility-informed conditional reasoning requires three components: utility, probability, and causal structure. Copyright © 2012 Cognitive Science Society, Inc.

  11. Good Models Gone Bad: Quantifying and Predicting Parameter-Induced Climate Model Simulation Failures

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Klein, R.; Tannahill, J.; Brandon, S.; Covey, C. C.; Domyancic, D.; Ivanova, D. P.

    2012-12-01

    Simulations using IPCC-class climate models can fail or crash for a variety of reasons. Statistical analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation failures of the Parallel Ocean Program (POP2). About 8.5% of our POP2 runs failed for numerical reasons at certain combinations of parameter values. We apply support vector machine (SVM) classification from the fields of pattern recognition and machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. The SVM classifiers readily predict POP2 failures in an independent validation ensemble, and are subsequently used to determine the causes of the failures via a global sensitivity analysis. Four parameters related to ocean mixing and viscosity are identified as the major sources of POP2 failures. Our method can be used to improve the robustness of complex scientific models to parameter perturbations and to better steer UQ ensembles. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).

  12. On designing for quality

    NASA Technical Reports Server (NTRS)

    Vajingortin, L. D.; Roisman, W. P.

    1991-01-01

    The problem of ensuring the required quality of products and/or technological processes often becomes more difficult because there is no general theory for determining the optimal sets of values of the primary factors, i.e., of the output parameters of the parts and units comprising an object, that ensure the correspondence of the object's parameters to the quality requirements. This is the main reason for the amount of time taken to finish a complex, vital article. To create this theory, one has to overcome a number of difficulties and to solve the following tasks: the creation of reliable and stable mathematical models showing the influence of the primary factors on the output parameters; finding a new technique of assigning tolerances for primary factors with regard to economical, technological, and other criteria, the technique being based on the solution of the main problem; and well-reasoned assignment of nominal values for primary factors which serve as the basis for creating tolerances. Each of the above listed tasks is of independent importance. An attempt is made to give solutions for this problem. The above problem of quality assurance, in its mathematically formalized aspect, is called the multiple inverse problem.

  13. Optimisation of shock absorber process parameters using failure mode and effect analysis and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal

    2013-07-01

    The various process parameters affecting the quality characteristics of the shock absorber during the process were identified using the Ishikawa diagram and by failure mode and effect analysis. The identified process parameters are welding process parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing process parameters (load, hydraulic pressure, air pressure, and fixture height), washing process parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting process parameters (flowability, coating thickness, pointage, and temperature). In this paper, the process parameters, namely, the painting and washing process parameters, are optimized by the Taguchi method. Though the defects are reasonably minimized by the Taguchi method, in order to achieve zero defects during the processes, the genetic algorithm technique is applied to the optimized parameters obtained by the Taguchi method.
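
A genetic algorithm refines a population of candidate parameter sets by selection, crossover, and mutation. The sketch below is a generic real-coded GA, not the paper's implementation: the actual shock-absorber defect-rate objective is not public, so it minimizes a made-up quadratic stand-in objective over a two-parameter search space.

```python
import random

def ga_minimize(f, bounds, pop_size=40, gens=60, mut=0.1, seed=0):
    """Minimal real-coded genetic algorithm: tournament selection,
    uniform crossover, Gaussian mutation, with elitism."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=f)
    for _ in range(gens):
        new_pop = []
        for _ in range(pop_size):
            p1 = min(rng.sample(pop, 3), key=f)    # tournament of 3
            p2 = min(rng.sample(pop, 3), key=f)
            child = []
            for j, (lo, hi) in enumerate(bounds):
                gene = p1[j] if rng.random() < 0.5 else p2[j]  # crossover
                if rng.random() < mut:                         # mutation
                    gene += rng.gauss(0.0, 0.1 * (hi - lo))
                child.append(min(hi, max(lo, gene)))           # clip to bounds
            new_pop.append(child)
        pop = new_pop
        best = min(pop + [best], key=f)            # keep the best ever seen
    return best

# stand-in objective: quadratic bowl with optimum at (3, -1)
obj = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2
best = ga_minimize(obj, [(-5.0, 5.0), (-5.0, 5.0)])
```

In the paper's workflow, the GA population would be seeded around the Taguchi-optimized levels rather than uniformly over the whole space.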

  14. Models based on value and probability in health improve shared decision making.

    PubMed

    Ortendahl, Monica

    2008-10-01

    Diagnostic reasoning and treatment decisions are a key competence of doctors. A model based on values and probability provides a conceptual framework for clinical judgments and decisions, and also facilitates the integration of clinical and biomedical knowledge into a diagnostic decision. Both value and probability are usually estimated values in clinical decision making. Therefore, model assumptions and parameter estimates should be continually assessed against data, and models should be revised accordingly. Introducing parameter estimates for both value and probability, which usually pertain in clinical work, gives the model labelled subjective expected utility. Estimated values and probabilities are involved sequentially for every step in the decision-making process. Introducing decision-analytic modelling gives a more complete picture of variables that influence the decisions carried out by the doctor and the patient. A model revised for perceived values and probabilities by both the doctor and the patient could be used as a tool for engaging in a mutual and shared decision-making process in clinical work.
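
The subjective expected utility model mentioned above combines a value (utility) estimate and a probability estimate for each outcome: an option's score is the probability-weighted sum of utilities. A minimal illustration with hypothetical numbers (not from the article):

```python
def expected_utility(outcomes):
    """Subjective expected utility: sum of probability * utility
    over the possible outcomes of one decision option."""
    return sum(p * u for p, u in outcomes)

# hypothetical treatment choice:
# option A: 70% chance of full recovery (utility 1.0), 30% side effects (0.4)
# option B: 90% chance of partial recovery (0.6), 10% no effect (0.2)
seu_a = expected_utility([(0.7, 1.0), (0.3, 0.4)])
seu_b = expected_utility([(0.9, 0.6), (0.1, 0.2)])
```

Under these assumed numbers option A scores higher; in shared decision making, the doctor and patient would revise both the probabilities and the utilities and recompute.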

  15. Mathematical Model of Three Species Food Chain Interaction with Mixed Functional Response

    NASA Astrophysics Data System (ADS)

    Ws, Mada Sanjaya; Mohd, Ismail Bin; Mamat, Mustafa; Salleh, Zabidin

    In this paper, we study a mathematical model of ecology with a tritrophic food chain composed of a classical Lotka-Volterra functional response for prey and predator, and a Holling type-III functional response for predator and super predator. There are two equilibrium points of the system. In the parameter space, there are passages from instability to stability, which are called Hopf bifurcation points. For the first equilibrium point, it is possible to find bifurcation points analytically and to prove that the system has periodic solutions around these points. Furthermore, the dynamical behaviors of this model are investigated. For biologically reasonable parameter values, the model exhibits stable and unstable periodic solutions and limit cycles. The dynamical behavior is found to be very sensitive to parameter values as well as to the parameters of practical life. Computer simulations are carried out to explain the analytical findings.
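
The structure described (Lotka-Volterra coupling between prey and predator, Holling type-III coupling between predator and super predator) can be integrated numerically with a fixed-step RK4 scheme. The equations and parameter values below are an illustrative reconstruction, not the paper's actual model or parameters.

```python
def food_chain(state, a=1.0, b=1.0, c=1.0, d=1.0, e=1.0, f=0.5, g=0.5, h=1.0):
    """Tritrophic food chain: Lotka-Volterra terms between prey x and
    predator y, Holling type-III terms between predator y and super
    predator z. All names and values are illustrative assumptions."""
    x, y, z = state
    holling = y * y / (y * y + h * h)   # type-III functional response
    dx = a * x - b * x * y
    dy = -c * y + d * x * y - e * holling * z
    dz = -f * z + g * holling * z
    return (dx, dy, dz)

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = food_chain(state)
    k2 = food_chain(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = food_chain(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = food_chain(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a1 + 2 * a2 + 2 * a3 + a4)
                 for s, a1, a2, a3, a4 in zip(state, k1, k2, k3, k4))

state = (0.5, 0.5, 0.5)
for _ in range(2000):          # integrate to t = 20 with dt = 0.01
    state = rk4_step(state, 0.01)
```

With these particular values the prey-predator pair oscillates while the super predator slowly decays; sweeping the parameters is how the Hopf bifurcation points described in the abstract would be located numerically.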

  16. Modeling of laser-induced ionization of solid dielectrics for ablation simulations: role of effective mass

    NASA Astrophysics Data System (ADS)

    Gruzdev, Vitaly

    2010-11-01

    Modeling of laser-induced ionization and heating of conduction-band electrons by laser radiation frequently serves as a basis for simulations supporting experimental studies of laser-induced ablation and damage of solid dielectrics. Together with the band gap and the electron-particle collision rate, the effective electron mass is one of the material parameters employed for ionization modeling. The exact value of the effective mass is not known for many materials frequently utilized in experiments, e.g., fused silica and glasses. For that reason, the value of the effective mass is arbitrarily varied around "reasonable values" in ionization modeling. In fact, it is utilized as a fitting parameter to fit experimental data on the dependence of ablation or damage threshold on laser parameters. In this connection, we study how strongly variations of the effective mass influence the value of the conduction-band electron density. We consider the influence of the effective mass on the photo-ionization rate and the rate of impact ionization. In particular, it is shown that the photo-ionization rate can vary by 2-4 orders of magnitude with variation of the effective mass by 50%. Impact ionization shows a much weaker dependence on the effective mass, but it significantly enhances the variations of seed-electron density produced by the photo-ionization. Utilizing those results, we demonstrate that variation of the effective mass by 50% produces variations of conduction-band electron density by 6 orders of magnitude. Finally, we discuss general issues of the current models of laser-induced ionization.

  17. An Investigation into Performance Modelling of a Small Gas Turbine Engine

    DTIC Science & Technology

    2012-10-01

    Nomenclature: b = combustor part-load constant; f = fuel-to-mass-flow ratio or scale factor; h = enthalpy; F = force; P = pressure; T = temperature; W = mass flow ...

    HP engine performance parameters [5,6]:

        Parameter         Condition (ISA, SLS)    Value
        Thrust            108000 rpm              230 N
        Pressure ratio    108000 rpm              4
        Mass flow rate    ...

    ... system. The reasons for removing the electric starter were to ensure uniform flow through the bellmouth for mass flow rate measurement, eliminate a

  18. SCS-CN parameter determination using rainfall-runoff data in heterogeneous watersheds - the two-CN system approach

    NASA Astrophysics Data System (ADS)

    Soulis, K. X.; Valiantzas, J. D.

    2012-03-01

    The Soil Conservation Service Curve Number (SCS-CN) approach is widely used as a simple method for predicting direct runoff volume for a given rainfall event. The CN parameter values corresponding to various soil, land cover, and land management conditions can be selected from tables, but it is preferable to estimate the CN value from measured rainfall-runoff data if available. However, previous researchers indicated that the CN values calculated from measured rainfall-runoff data vary systematically with the rainfall depth. Hence, they suggested the determination of a single asymptotic CN value observed for very high rainfall depths to characterize the watersheds' runoff response. In this paper, the hypothesis that the observed correlation between the calculated CN value and the rainfall depth in a watershed reflects the effect of soils and land cover spatial variability on its hydrologic response is being tested. Based on this hypothesis, the simplified concept of a two-CN heterogeneous system is introduced to model the observed CN-rainfall variation by reducing the CN spatial variability into two classes. The behaviour of the CN-rainfall function produced by the simplified two-CN system is approached theoretically, it is analysed systematically, and it is found to be similar to the variation observed in natural watersheds. Synthetic data tests, natural watersheds examples, and detailed study of two natural experimental watersheds with known spatial heterogeneity characteristics were used to evaluate the method. The results indicate that the determination of CN values from rainfall-runoff data using the proposed two-CN system approach provides reasonable accuracy and it outperforms the previous methods based on the determination of a single asymptotic CN value. Although the suggested method increases the number of unknown parameters to three (instead of one), a clear physical reasoning for them is presented.
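
The standard SCS-CN relations behind the method are Q = (P - Ia)^2 / (P - Ia + S) for P > Ia (else Q = 0), with the potential retention S = 25400/CN - 254 in millimetres and the initial abstraction commonly taken as Ia = 0.2 S. The two-CN heterogeneous system can then be read, in simplified form, as an area-weighted mixture of two such responses; the CN values and area fractions below are hypothetical.

```python
def scs_runoff(p_mm, cn, ia_ratio=0.2):
    """Standard SCS-CN direct runoff (mm) for a storm rainfall p_mm."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = ia_ratio * s                 # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

def two_cn_runoff(p_mm, cn1, cn2, area_frac1):
    """Composite runoff of a two-CN heterogeneous system: the watershed is
    split into two areal fractions, each with its own CN (a simplified
    reading of the paper's concept)."""
    return (area_frac1 * scs_runoff(p_mm, cn1)
            + (1.0 - area_frac1) * scs_runoff(p_mm, cn2))

q = scs_runoff(100.0, 80.0)            # single-CN runoff for a 100 mm storm
q2 = two_cn_runoff(100.0, 90.0, 60.0, 0.5)
```

Because the runoff function is nonlinear in CN, the composite response differs from that of a single averaged CN, which is exactly the rainfall-dependent behaviour the paper models.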

  19. Lateral and longitudinal stability and control parameters for the space shuttle discovery as determined from flight test data

    NASA Technical Reports Server (NTRS)

    Suit, William T.; Schiess, James R.

    1988-01-01

    The Discovery vehicle was found to have longitudinal and lateral aerodynamic characteristics similar to those of the Columbia and Challenger vehicles. The values of the lateral and longitudinal parameters are compared with the preflight data book. The lateral parameters showed the same trends as the data book. With the exception of C_lβ for Mach numbers greater than 15, and C_nδr for Mach numbers greater than 2 and for Mach numbers less than 1.5, where the variation boundaries were not well defined, ninety percent of the extracted values of the lateral parameters fell within the predicted variations. The longitudinal parameters showed more scatter, but scattered about the preflight predictions. With the exception of the Mach 1.5 to 0.5 region of the flight envelope, the preflight predictions seem a reasonable representation of the Shuttle aerodynamics. The models determined accounted for ninety percent of the actual flight time histories.

  20. Image parameters for maturity determination of a composted material containing sewage sludge

    NASA Astrophysics Data System (ADS)

    Kujawa, S.; Nowakowski, K.; Tomczak, R. J.; Boniecki, P.; Dach, J.

    2013-07-01

    Composting is one of the best methods for management of sewage sludge. In a reasonably conducted composting process it is important to identify early the moment at which the material reaches the young compost stage. The objective of this study was to determine parameters contained in images of samples of composted material that can be used for evaluation of the degree of compost maturity. The study focused on two types of compost: sewage sludge with corn straw and sewage sludge with rapeseed straw. The photographing of the samples was carried out on a stand prepared for image acquisition using VIS, UV-A and mixed (VIS + UV-A) light. In the case of UV-A light, three values of the exposure time were assumed. The values of 46 parameters were estimated for each of the images extracted from the photographs of the composted material samples. Exemplary averaged values of selected parameters obtained from the images of the composted material on the successive sampling days were presented. All of the parameters obtained from the images are the basis for preparation of the training, validation and test data sets necessary for the development of neural models for classification of the young compost stage.

  1. Happiness Inequality: How Much Is Reasonable?

    ERIC Educational Resources Information Center

    Gandelman, Nestor; Porzecanski, Rafael

    2013-01-01

    We compute the Gini indexes for income, happiness and various simulated utility levels. Due to decreasing marginal utility of income, happiness inequality should be lower than income inequality. We find that happiness inequality is about half that of income inequality. To compute the utility levels we need to assume values for a key parameter that…

  2. Structural and elastic properties of A^IB^IIIC^VI_2 semiconductors

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Singh, Bhanu P.

    2018-01-01

    The plane wave pseudo-potential method within density functional theory has been used to calculate the structural and elastic properties of A^IB^IIIC^VI_2 semiconductors. The electronic band structure, density of states, lattice constants (a and c), internal parameter (u), tetragonal distortion (η), energy gap (Eg), and bond lengths of the A-C (dAC) and B-C (dBC) bonds in A^IB^IIIC^VI_2 semiconductors have been calculated. The values of elastic constants (Cij), bulk modulus (B), shear modulus (G), Young's modulus (Y), Poisson's ratio (υ), Zener anisotropy factor (A), Debye temperature (Θ_D) and G/B ratio have also been calculated. The values of all 15 parameters of the CuTlS2 and CuTlSe2 compounds, and 8 parameters of 20 compounds of the A^IB^IIIC^VI_2 family, except AgInS2 and AgInSe2, have been calculated for the first time. Reasonably good agreement has been obtained between the calculated, reported and available experimental values.
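
As a small illustration of how the derived quantities follow from the elastic constants: for the simpler cubic case (three independent Cij; the chalcopyrite compounds in the abstract are tetragonal and need six), the Voigt-Reuss-Hill average yields the polycrystalline bulk modulus, shear modulus, Young's modulus, and Poisson's ratio. The numerical constants below are made up, not values from the paper.

```python
def cubic_moduli(c11, c12, c44):
    """Voigt-Reuss-Hill polycrystalline moduli (GPa) from the three
    independent elastic constants of a cubic crystal. Shown only to
    illustrate how B, G, Y, and Poisson's ratio follow from the Cij."""
    b = (c11 + 2.0 * c12) / 3.0                       # bulk modulus
    g_voigt = (c11 - c12 + 3.0 * c44) / 5.0           # Voigt shear bound
    g_reuss = (5.0 * (c11 - c12) * c44
               / (4.0 * c44 + 3.0 * (c11 - c12)))     # Reuss shear bound
    g = 0.5 * (g_voigt + g_reuss)                     # Hill average
    y = 9.0 * b * g / (3.0 * b + g)                   # Young's modulus
    nu = (3.0 * b - 2.0 * g) / (2.0 * (3.0 * b + g))  # Poisson's ratio
    return b, g, y, nu

b, g, y, nu = cubic_moduli(100.0, 50.0, 50.0)   # made-up constants (GPa)
```

The G/B (Pugh) ratio that the abstract also reports then follows directly as g / b.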

  3. Some observed seasonal changes in extratropical general circulation: A study in terms of vorticity. [seasonal migrations of extra tropical frontal jet streams

    NASA Technical Reports Server (NTRS)

    Srivatsangam, S.; Reiter, E. R.

    1973-01-01

    Extratropical eddy distributions in four months typical of the four seasons are treated in terms of temporal mean and temporal r.m.s. values of the geostrophic relative vorticity. The geographical distributions of these parameters at the 300 mb level show that the arithmetic mean fields are highly biased representatives of the extratropical eddy distributions. The zonal arithmetic means of these parameters are also presented. These show that the zonal-and-time mean relative vorticity is but a small fraction of the zonal mean of the temporal r.m.s. relative vorticity, K. The reasons for considering the r.m.s. values as the temporal normal values of vorticity in the extratropics are given in considerable detail. The parameter K is shown to be of considerable importance in locating the extratropical frontal jet streams (EFJ) in time-and-zonal average distributions. The study leads to an understanding of the seasonal migrations of the EFJ which have not been explored until now.

  4. Convergence properties of η → 3π decays in chiral perturbation theory

    NASA Astrophysics Data System (ADS)

    Kolesár, Marián; Novotný, Jiří

    2017-01-01

    The convergence of the decay widths and some of the Dalitz plot parameters of the decay η → 3π seems problematic in low energy QCD. In the framework of resummed chiral perturbation theory, we explore the question of compatibility of experimental data with a reasonable convergence of a carefully defined chiral series. By treating the uncertainties in the higher orders statistically, we numerically generate a large set of theoretical predictions, which are then confronted with experimental information. In the case of the decay widths, the experimental values can be reconstructed for a reasonable range of the free parameters and thus no tension is observed, in spite of what some of the traditional calculations suggest. The Dalitz plot parameters a and d can be described very well too. Where the parameters b and α are concerned, we find a mild tension for the whole range of the free parameters, at less than 2σ C.L. This can be interpreted in two ways: either some of the higher order corrections are indeed unexpectedly large, or there is a specific configuration of the remainders, which is, however, not completely improbable.

  5. Graphical User Interface for Simulink Integrated Performance Analysis Model

    NASA Technical Reports Server (NTRS)

    Durham, R. Caitlyn

    2009-01-01

    The J-2X Engine (built by Pratt & Whitney Rocketdyne), in the Upper Stage of the Ares I Crew Launch Vehicle, will only start within a certain range of temperature and pressure for Liquid Hydrogen and Liquid Oxygen propellants. The purpose of the Simulink Integrated Performance Analysis Model is to verify that in all reasonable conditions the temperature and pressure of the propellants are within the required J-2X engine start boxes. In order to run the simulation, test variables must be entered at all reasonable values of parameters such as heat leak and mass flow rate. To make this testing process as efficient as possible in order to save the maximum amount of time and money, and to show that the J-2X engine will start when it is required to do so, a graphical user interface (GUI) was created to allow the input of values to be used as parameters in the Simulink Model, without opening or altering the contents of the model. The GUI must allow for test data to come from Microsoft Excel files, allow those values to be edited before testing, place those values into the Simulink Model, and get the output from the Simulink Model. The GUI was built using MATLAB, and will run the Simulink simulation when the Simulate option is activated. After running the simulation, the GUI will construct a new Microsoft Excel file, as well as a MATLAB matrix file, using the output values for each test of the simulation so that they may be graphed and compared to other values.

  6. Investigating the relationship between a soils classification and the spatial parameters of a conceptual catchment-scale hydrological model

    NASA Astrophysics Data System (ADS)

    Dunn, S. M.; Lilly, A.

    2001-10-01

    There are now many examples of hydrological models that utilise the capabilities of Geographic Information Systems to generate spatially distributed predictions of behaviour. However, the spatial variability of hydrological parameters relating to distributions of soils and vegetation can be hard to establish. In this paper, the relationship between a soil hydrological classification Hydrology of Soil Types (HOST) and the spatial parameters of a conceptual catchment-scale model is investigated. A procedure involving inverse modelling using Monte-Carlo simulations on two catchments is developed to identify relative values for soil related parameters of the DIY model. The relative values determine the internal variability of hydrological processes as a function of the soil type. For three out of the four soil parameters studied, the variability between HOST classes was found to be consistent across two catchments when tested independently. Problems in identifying values for the fourth 'fast response distance' parameter have highlighted a potential limitation with the present structure of the model. The present assumption that this parameter can be related simply to soil type rather than topography appears to be inadequate. With the exclusion of this parameter, calibrated parameter sets from one catchment can be converted into equivalent parameter sets for the alternate catchment on the basis of their HOST distributions, to give a reasonable simulation of flow. Following further testing on different catchments, and modifications to the definition of the fast response distance parameter, the technique provides a methodology whereby it is possible to directly derive spatial soil parameters for new catchments.

  7. SCS-CN parameter determination using rainfall-runoff data in heterogeneous watersheds. The two-CN system approach

    NASA Astrophysics Data System (ADS)

    Soulis, K. X.; Valiantzas, J. D.

    2011-10-01

    The Soil Conservation Service Curve Number (SCS-CN) approach is widely used as a simple method for predicting direct runoff volume for a given rainfall event. The CN values can be selected from tables. However, it is more accurate to estimate the CN value from measured rainfall-runoff data (assumed available) in a watershed. Previous researchers indicated that the CN values calculated from measured rainfall-runoff data vary systematically with the rainfall depth. They suggested the determination of a single asymptotic CN value observed for very high rainfall depths to characterize the watersheds' runoff response. In this paper, the novel hypothesis that the observed correlation between the calculated CN value and the rainfall depth in a watershed reflects the effect of the inevitable presence of soil-cover complex spatial variability along watersheds is being tested. Based on this hypothesis, the simplified concept of a two-CN heterogeneous system is introduced to model the observed CN-rainfall variation by reducing the CN spatial variability into two classes. The behavior of the CN-rainfall function produced by the proposed two-CN system concept is approached theoretically, it is analyzed systematically, and it is found to be similar to the variation observed in natural watersheds. Synthetic data tests, natural watersheds examples, and detailed study of two natural experimental watersheds with known spatial heterogeneity characteristics were used to evaluate the method. The results indicate that the determination of CN values from rainfall-runoff data using the proposed two-CN system approach provides reasonable accuracy and it outperforms the previous method based on the determination of a single asymptotic CN value. Although the suggested method increases the number of unknown parameters to three (instead of one), a clear physical reasoning for them is presented.

  8. Three-dimensional whole-brain perfusion quantification using pseudo-continuous arterial spin labeling MRI at multiple post-labeling delays: accounting for both arterial transit time and impulse response function.

    PubMed

    Qin, Qin; Huang, Alan J; Hua, Jun; Desmond, John E; Stevens, Robert D; van Zijl, Peter C M

    2014-02-01

    Measurement of the cerebral blood flow (CBF) with whole-brain coverage is challenging in terms of both acquisition and quantitative analysis. In order to fit arterial spin labeling-based perfusion kinetic curves, an empirical three-parameter model which characterizes the effective impulse response function (IRF) is introduced, which allows the determination of CBF, the arterial transit time (ATT) and T(1,eff). The accuracy and precision of the proposed model were compared with those of more complicated models with four or five parameters through Monte Carlo simulations. Pseudo-continuous arterial spin labeling images were acquired on a clinical 3-T scanner in 10 normal volunteers using a three-dimensional multi-shot gradient and spin echo scheme at multiple post-labeling delays to sample the kinetic curves. Voxel-wise fitting was performed using the three-parameter model and other models that contain two, four or five unknown parameters. For the two-parameter model, T(1,eff) values close to tissue and blood were assumed separately. Standard statistical analysis was conducted to compare these fitting models in various brain regions. The fitted results indicated that: (i) the estimated CBF values using the two-parameter model show appreciable dependence on the assumed T(1,eff) values; (ii) the proposed three-parameter model achieves the optimal balance between the goodness of fit and model complexity when compared among the models with explicit IRF fitting; (iii) both the two-parameter model using fixed blood T1 values for T(1,eff) and the three-parameter model provide reasonable fitting results. Using the proposed three-parameter model, the estimated CBF (46 ± 14 mL/100 g/min) and ATT (1.4 ± 0.3 s) values averaged from different brain regions are close to the literature reports; the estimated T(1,eff) values (1.9 ± 0.4 s) are higher than the tissue T1 values, possibly reflecting a contribution from the microvascular arterial blood compartment. 
Copyright © 2013 John Wiley & Sons, Ltd.

  9. Basal glycogenolysis in mouse skeletal muscle: in vitro model predicts in vivo fluxes

    NASA Technical Reports Server (NTRS)

    Lambeth, Melissa J.; Kushmerick, Martin J.; Marcinek, David J.; Conley, Kevin E.

    2002-01-01

A previously published kinetic model of mammalian skeletal muscle glycogenolysis, built from in vitro parameters taken from the literature, was modified by substituting mouse-specific Vmax values. The model demonstrates that glycogen breakdown to lactate is under ATPase control. Our criterion for testing whether in vitro parameters could reproduce in vivo dynamics was the ability of the model to fit phosphocreatine (PCr) and inorganic phosphate (Pi) dynamic NMR data from ischemic basal mouse hindlimbs and predict biochemically assayed lactate concentrations. Fitting was accomplished by optimizing four parameters: the ATPase rate coefficient, the fraction of activated glycogen phosphorylase, and the equilibrium constants of creatine kinase and adenylate kinase (due to the absence of pH in the model). The optimized parameter values were physiologically reasonable, the resulting model fit the [PCr] and [Pi] time courses well, and the model predicted the final measured lactate concentration. This result demonstrates that additional features of in vivo enzyme binding are not necessary for a quantitative description of glycogenolytic dynamics.

  10. Method for Calculating the Optical Diffuse Reflection Coefficient for the Ocular Fundus

    NASA Astrophysics Data System (ADS)

    Lisenko, S. A.; Kugeiko, M. M.

    2016-07-01

We have developed a method for calculating the optical diffuse reflection coefficient for the ocular fundus, taking into account multiple scattering of light in its layers (retina, epithelium, choroid) and multiple reflection of light between layers. The method is based on formulas for the optical "combination" of the layers of the medium, in which the optical parameters of the layers (absorption and scattering coefficients) are replaced by effective values that differ for directional and diffuse illumination of the layer. The coefficients relating the effective optical parameters of the layers to the actual values were established from the results of a Monte Carlo numerical simulation of radiation transport in the medium. We estimate the uncertainties in the retrieval of the structural and morphological parameters of the fundus from its diffuse reflectance spectrum using our method. We show that the simulated spectra correspond to the experimental data and that the estimates of the fundus parameters obtained by solving the inverse problem are reasonable.

  11. Evaluation of fatty proportion in fatty liver using least squares method with constraints.

    PubMed

    Li, Xingsong; Deng, Yinhui; Yu, Jinhua; Wang, Yuanyuan; Shamdasani, Vijay

    2014-01-01

Backscatter and attenuation parameters are not easily measured in clinical applications due to tissue inhomogeneity in the region of interest (ROI). A least squares method (LSM) that fits the echo signal power spectra from a ROI to a 3-parameter tissue model was used to obtain attenuation coefficient images of fatty liver. Since the attenuation value of fat is higher than that of normal liver parenchyma, a reasonable threshold was chosen to evaluate the fatty proportion in fatty liver. Experimental results using clinical data of fatty liver illustrate that the least squares method yields accurate attenuation estimates. The results show that the attenuation values have a positive correlation with the fatty proportion, which can be used to evaluate fatty liver syndrome.
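The thresholding step can be sketched as follows; the attenuation map, the parenchyma/fat values, and the threshold of 0.8 are illustrative stand-ins, not the clinical settings used in the paper:

```python
import numpy as np

def fatty_proportion(atten_map: np.ndarray, threshold: float) -> float:
    """Fraction of valid ROI pixels whose attenuation exceeds the threshold."""
    roi = atten_map[np.isfinite(atten_map)]   # ignore invalid estimates
    return float(np.mean(roi > threshold))

rng = np.random.default_rng(0)
liver = rng.normal(0.55, 0.05, size=(64, 64))         # parenchyma-like values
liver[:16, :] = rng.normal(1.0, 0.08, size=(16, 64))  # fat-like patch (25% of ROI)
print(f"fatty proportion: {fatty_proportion(liver, threshold=0.8):.2f}")
```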

  12. Characteristics of middle and upper tropospheric clouds as deduced from rawinsonde data

    NASA Technical Reports Server (NTRS)

    Starr, D. D. O.; Cox, S. K.

    1982-01-01

The static environment of middle and upper tropospheric clouds is characterized. Computed relative humidity with respect to ice is used to diagnose the presence of a cloud layer. The deduced seasonal mean cloud cover estimates based on this technique are shown to be reasonable. The cases are stratified by season and pressure thickness, and the dry static stability, vertical wind speed shear, and Richardson number are computed for three layers for each case. Mean values for each parameter are presented for each stratification and layer. The relative frequency of occurrence of various structures is presented for each stratification. The observed values and the observed structure of each parameter are quite variable. Structures corresponding to any of a number of different conceptual models may be found. Moist adiabatic conditions are not commonly observed, and the stratification based on thickness yields substantially different results for each group.

  13. Electron-phonon interaction in the binary superconductor lutetium carbide LuC2 via first-principles calculations

    NASA Astrophysics Data System (ADS)

    Dilmi, S.; Saib, S.; Bouarissa, N.

    2018-06-01

Structural, electronic, electron-phonon coupling and superconducting properties of the intermetallic compound LuC2 are investigated by means of an ab initio pseudopotential plane-wave method within the generalized gradient approximation. The calculated equilibrium lattice parameters are in very good accord with experiment. There is no imaginary phonon frequency in the whole Brillouin zone, thus supporting the dynamical stability of the material of interest. The average electron-phonon coupling parameter is found to be 0.59, thus indicating a weak-coupling BCS superconductor. Using a reasonable value of μ* = 0.12 for the effective Coulomb repulsion parameter, the superconducting critical temperature Tc is found to be 3.324 K, which is in excellent agreement with the experimental value of 3.33 K. The effect of the spin-orbit coupling on the superconducting properties of the material of interest has been examined and found to be weak.
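The role of μ* in a Tc estimate of this kind can be illustrated with McMillan's formula. The coupling λ = 0.59 and μ* = 0.12 are the values quoted above; the Debye temperature θ_D = 234 K is an assumed illustrative input (not a quoted LuC2 datum), chosen so the sketch lands near the reported Tc:

```python
from math import exp

def mcmillan_tc(theta_d: float, lam: float, mu_star: float) -> float:
    """McMillan's formula for the superconducting critical temperature (K)."""
    return (theta_d / 1.45) * exp(
        -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam))
    )

tc = mcmillan_tc(theta_d=234.0, lam=0.59, mu_star=0.12)
print(f"Tc ~ {tc:.2f} K")
```

Note how strongly Tc depends on λ through the exponent; this is why a "reasonable" choice of μ* matters for weak-coupling materials.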

  14. Determining optimal parameters in magnetic spacecraft stabilization via attitude feedback

    NASA Astrophysics Data System (ADS)

    Bruni, Renato; Celani, Fabio

    2016-10-01

The attitude control of a spacecraft using magnetorquers can be achieved by a feedback control law that has four design parameters. However, the practical determination of appropriate values for these parameters is a critical open issue. We propose an innovative systematic approach for finding these values: they should be those that minimize the convergence time to the desired attitude. This is a particularly difficult optimization problem, for several reasons: 1) the convergence time cannot be expressed in analytical form as a function of the parameters and initial conditions; 2) the design parameters may range over very wide intervals; 3) the convergence time also depends on the initial conditions of the spacecraft, which are not known in advance. To overcome these difficulties, we present a solution approach based on derivative-free optimization. Such algorithms do not require an analytical expression of the objective function; they only need to evaluate it at a number of points. We also propose a fast probing technique to identify which regions of the search space have to be explored densely. Finally, we formulate a min-max model to find robust parameters, namely design parameters that minimize the convergence time under the worst initial conditions. Results are very promising.
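The min-max idea can be illustrated with a toy derivative-free search: pick design parameters that minimize the worst-case convergence time over a set of sampled initial conditions. The "convergence time" below is a synthetic surrogate function, not the spacecraft attitude dynamics of the paper, and the random search stands in for the more sophisticated probing technique:

```python
import random

def convergence_time(params, x0):
    """Surrogate cost: slow for small gains, penalized for large gains."""
    k1, k2 = params
    return abs(x0) / (k1 + 1e-9) + 0.1 * (k1 + k2) + 1.0 / (k2 + 1e-9)

def worst_case(params, initial_conditions):
    """Worst convergence time over the sampled initial conditions."""
    return max(convergence_time(params, x0) for x0 in initial_conditions)

def minmax_random_search(initial_conditions, n_iter=2000, seed=1):
    """Derivative-free min-max: only function evaluations, no gradients."""
    rng = random.Random(seed)
    best_p, best_t = None, float("inf")
    for _ in range(n_iter):
        p = (rng.uniform(0.01, 10.0), rng.uniform(0.01, 10.0))
        t = worst_case(p, initial_conditions)
        if t < best_t:
            best_p, best_t = p, t
    return best_p, best_t

params, t_worst = minmax_random_search(initial_conditions=[0.5, 1.0, 2.0, 5.0])
print(params, round(t_worst, 3))
```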

  15. Small field models with gravitational wave signature supported by CMB data

    PubMed Central

    Brustein, Ramy

    2018-01-01

We study the scale dependence of the cosmic microwave background (CMB) power spectrum in a class of small, single-field models of inflation which lead to a high value of the tensor-to-scalar ratio. The inflaton potentials that we consider are degree-5 polynomials, for which we precisely calculate the power spectrum and extract the cosmological parameters: the scalar index ns, the running of the scalar index nrun and the tensor-to-scalar ratio r. We find that for non-vanishing nrun and for r as small as r = 0.001, the precisely calculated values of ns and nrun deviate significantly from what the standard analytic treatment predicts. We study these deviations in detail and discuss their probable causes. As such, all previously considered models of this kind are based upon inaccurate assumptions. We scan the possible values of the potential parameters for which the cosmological parameters are within the range allowed by observations. The five-parameter class is able to reproduce all of the allowed values of ns and nrun for values of r as high as 0.001. This study thus refutes previous models of this kind built using the analytical Stewart-Lyth term, and revives the small-field class by constructing models that yield an appreciable r while conforming to known CMB observables. PMID:29795608

  16. Can the Equivalent Sphere Model Approximate Organ Doses in Space?

    NASA Technical Reports Server (NTRS)

    Lin, Zi-Wei

    2007-01-01

For space radiation protection it is often useful to calculate dose or dose equivalent in blood-forming organs (BFO). It has been customary to use a 5 cm equivalent sphere to simulate the BFO dose. However, many previous studies have concluded that a 5 cm sphere gives very different dose values from the exact BFO values. One study [1] concludes that a 9 cm sphere is a reasonable approximation for BFO doses in solar particle event environments. In this study we use a deterministic radiation transport code [2] to investigate the reason behind these observations and to extend earlier studies. We take different space radiation environments, including seven galactic cosmic ray environments and six large solar particle events, and calculate the dose and dose equivalent in the skin, eyes and BFO using their thickness distribution functions from the CAM (Computerized Anatomical Man) model [3]. The organ doses have been evaluated with a water or aluminum shielding of an areal density from 0 to 20 g/sq cm. We then compare with results from the equivalent sphere model and determine in which cases and at what radius parameters the equivalent sphere model is a reasonable approximation. Furthermore, we address why the equivalent sphere model is not a good approximation in some cases. For solar particle events, we find that the radius parameters for the organ dose equivalent increase significantly with the shielding thickness, and the model works marginally for BFO but is unacceptable for the eye or the skin. For galactic cosmic ray environments, the equivalent sphere model with an organ-specific constant radius parameter works well for the BFO dose equivalent, marginally well for the BFO dose and the dose equivalent of the eye or the skin, but is unacceptable for the dose of the eye or the skin.
The ranges of the radius parameters were also investigated, and the BFO radius parameters are found to be significantly larger than 5 cm in all cases, consistent with the conclusion of an earlier study [1]. The radius parameters for the dose equivalent in GCR environments are approximately between 10 and 11 cm for the BFO, 3.7 to 4.8 cm for the eye, and 3.5 to 5.6 cm for the skin; while the radius parameters are between 10 and 13 cm for the BFO dose.
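The equivalent-sphere concept can be sketched with a toy depth-dose model: average the dose over an organ's thickness distribution, then find the sphere radius whose center dose matches. The exponential falloff, thicknesses, and weights below are illustrative stand-ins for the transport-code results, not CAM-model data:

```python
import numpy as np

def dose_at_depth(t_cm):
    """Hypothetical depth-dose falloff (relative units)."""
    return np.exp(-t_cm / 6.0)

thicknesses = np.array([2.0, 4.0, 7.0, 10.0, 14.0])  # organ shielding (cm)
weights = np.array([0.1, 0.2, 0.3, 0.25, 0.15])      # area fractions (sum to 1)

organ_dose = float(weights @ dose_at_depth(thicknesses))

# radius of the sphere whose center dose equals the organ-averaged dose
r_equiv = -6.0 * np.log(organ_dose)
print(f"organ dose = {organ_dose:.3f}, equivalent radius = {r_equiv:.1f} cm")
```

Because the depth-dose curve is nonlinear, the equivalent radius differs from the mean thickness and shifts with the assumed falloff, which is the qualitative reason the radius parameter depends on the radiation environment and shielding.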

  17. Simulated discharge trends indicate robustness of hydrological models in a changing climate

    NASA Astrophysics Data System (ADS)

    Addor, Nans; Nikolova, Silviya; Seibert, Jan

    2016-04-01

Assessing the robustness of hydrological models under contrasting climatic conditions should be part of any hydrological model evaluation. Robust models are particularly important for climate impact studies, as models performing well under current conditions are not necessarily capable of correctly simulating hydrological perturbations caused by climate change. A pressing issue is the usually assumed stationarity of parameter values over time. Modeling experiments using conceptual hydrological models revealed that assuming transposability of parameter values to changing climatic conditions can lead to significant biases in discharge simulations. This raises the question of whether parameter values should be modified over time to reflect changes in hydrological processes induced by climate change. Such a question denotes a focus on the contribution of internal processes (i.e., catchment processes) to discharge generation. Here we adopt a different perspective and explore the contribution of external forcing (i.e., changes in precipitation and temperature) to changes in discharge. We argue that in a robust hydrological model, discharge variability should be induced by changes in the boundary conditions, and not by changes in parameter values. In this study, we explore how well the conceptual hydrological model HBV captures transient changes in hydrological signatures over the period 1970-2009. Our analysis focuses on research catchments in Switzerland undisturbed by human activities. The precipitation and temperature forcing are extracted from recently released 2 km gridded data sets. We use a genetic algorithm to calibrate HBV for the whole 40-year period and for the eight successive 5-year periods to assess possible trends in parameter values. Model calibration is run multiple times to account for parameter uncertainty.
We find that in alpine catchments showing a significant increase of winter discharge, this trend can be captured reasonably well with constant parameter values over the whole reference period. Further, preliminary results suggest that some trends in parameter values do not reflect changes in hydrological processes, as reported by others previously, but instead might stem from a modeling artifact related to the parameterization of evapotranspiration, which is overly sensitive to temperature increase. We adopt a trading-space-for-time approach to better understand whether robust relationships between parameter values and forcing can be established, and to critically explore the rationale behind time-dependent parameter values in conceptual hydrological models.

  18. a Comparison Between Two Ols-Based Approaches to Estimating Urban Multifractal Parameters

    NASA Astrophysics Data System (ADS)

    Huang, Lin-Shan; Chen, Yan-Guang

Multifractal theory provides a new spatial analytical tool for urban studies, but many basic problems remain to be solved. Among the various pending issues, the most significant is how to obtain proper multifractal dimension spectra. If an algorithm is used improperly, the parameter spectra will be abnormal. This paper investigates two ordinary least squares (OLS)-based approaches for estimating urban multifractal parameters. Using an empirical study and comparative analysis, we demonstrate how to utilize the adequate linear regression to calculate multifractal parameters. The OLS regression analysis has two different approaches: in one the intercept is fixed to zero, and in the other the intercept is not restricted. The results of the comparative study show that the zero-intercept regression yields proper multifractal parameter spectra within a certain range of moment orders, while the common regression method often leads to abnormal multifractal parameter values. We conclude that fixing the intercept to zero is the more advisable regression method for multifractal parameter estimation, and that the shapes of the spectral curves and the value ranges of the fractal parameters can be employed to diagnose urban problems. This research helps scientists understand multifractal models and apply a more reasonable technique to multifractal parameter calculations.
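The two OLS variants differ only in whether an intercept is estimated. A minimal comparison on synthetic log-log scaling data (the "true" exponent 1.7 and the noise level are illustrative, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(1.0, 6.0, 30)        # e.g. -log(scale) in a scaling analysis
d_true = 1.7                         # true scaling exponent
y = d_true * x + rng.normal(0.0, 0.05, x.size)

# (a) regression through the origin: slope = sum(x*y) / sum(x^2)
slope_zero = float(x @ y / (x @ x))

# (b) ordinary regression with a free intercept
A = np.column_stack([x, np.ones_like(x)])
(slope_free, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(round(slope_zero, 3), round(slope_free, 3), round(intercept, 3))
```

When the scaling relation genuinely passes through the origin, the zero-intercept estimator uses that constraint and is less prone to trading slope against a spurious intercept, which is the paper's argument for fixing the intercept.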

  19. On the precise determination of the Tsallis parameters in proton–proton collisions at LHC energies

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, T.; Cleymans, J.; Marques, L.; Mogliacci, S.; Paradza, M. W.

    2018-05-01

A detailed analysis is presented of the precise values of the Tsallis parameters obtained in p–p collisions for identified particles, pions, kaons and protons at the LHC at three beam energies √s = 0.9, 2.76 and 7 TeV. Interpolated data at √s = 5.02 TeV have also been included. It is shown that the Tsallis formula provides reasonably good fits to the pT distributions in p–p collisions at the LHC using three parameters dN/dy, T and q. However, the parameters T and q depend on the particle species and are different for pions, kaons and protons. As a consequence there is no mT scaling and also no universality of the parameters for different particle species.
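A fit of this kind can be sketched with one common Tsallis parameterization of the pT spectrum (conventions vary in the literature; this variant and all numerical values are illustrative, not the paper's fitted LHC parameters):

```python
import numpy as np
from scipy.optimize import curve_fit

M_PION = 0.13957  # charged-pion mass (GeV)

def tsallis(pt, c, t, q):
    """dN/dpT ~ C * pT * [1 + (q-1)*mT/T]^(-1/(q-1)), mT = sqrt(pT^2 + m^2)."""
    mt = np.sqrt(pt ** 2 + M_PION ** 2)
    return c * pt * (1.0 + (q - 1.0) * mt / t) ** (-1.0 / (q - 1.0))

# synthetic "measured" spectrum with illustrative parameter values
pt = np.linspace(0.2, 5.0, 40)
rng = np.random.default_rng(7)
y = tsallis(pt, c=100.0, t=0.085, q=1.15) * rng.normal(1.0, 0.02, pt.size)

# fit in log space so the steeply falling tail still constrains T and q
popt, _ = curve_fit(
    lambda p, c, t, q: np.log(tsallis(p, c, t, q)),
    pt, np.log(y),
    p0=[50.0, 0.1, 1.1],
    bounds=([1.0, 0.01, 1.01], [1e4, 1.0, 2.0]),
)
c_fit, t_fit, q_fit = popt
print(f"T = {t_fit:.3f} GeV, q = {q_fit:.3f}")
```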

  20. Simulations of potential future conditions in the cache critical groundwater area, Arkansas

    USGS Publications Warehouse

    Rashid, Haveen M.; Clark, Brian R.; Mahdi, Hanan H.; Rifai, Hanadi S.; Al-Shukri, Haydar J.

    2015-01-01

A three-dimensional finite-difference model for part of the Mississippi River Valley alluvial aquifer in the Cache Critical Groundwater Area of eastern Arkansas was constructed to simulate potential future conditions of groundwater flow. The objectives of this study were to test different pilot point distributions to find reasonable estimates of aquifer properties for the alluvial aquifer, to simulate flux from rivers, and to demonstrate how changes in pumping rates for different scenarios affect areas of long-term water-level declines over time. The model was calibrated using a parameter estimation code. Additional calibration was achieved using pilot points with regularization and singular value decomposition. Pilot point parameter values were estimated at a number of discrete locations in the study area to obtain reasonable estimates of aquifer properties. Nine pumping scenarios for the years 2011 to 2020 were tested and compared to the simulated water-level heads from 2010. Hydraulic conductivity values from pilot point calibration ranged between 42 and 173 m/d. Specific yield values ranged between 0.19 and 0.337. Recharge rates ranged between 0.00009 and 0.0006 m/d. The model was calibrated using 2,322 hydraulic head measurements for the years 2000 to 2010 from 150 observation wells located in the study area. For all scenarios, the volume of water depleted ranged between 5.7 and 23.3 percent, except in Scenario 2 (minimum pumping rates), in which the volume increased by 2.5 percent.

  1. Form of prior for constrained thermodynamic processes with uncertainty

    NASA Astrophysics Data System (ADS)

    Aneja, Preety; Johal, Ramandeep S.

    2015-05-01

We consider quasi-static thermodynamic processes with constraints, but with additional uncertainty about the control parameters. Motivated by inductive reasoning, we assign a prior distribution that provides a rational guess about the likely values of the uncertain parameters. The priors are derived explicitly for both entropy-conserving and energy-conserving processes. The proposed form is useful when the constraint equation cannot be treated analytically. The inference is performed using spin-1/2 systems as models for heat reservoirs. Analytical results are derived in the high-temperature limit. An agreement beyond linear response is found between the estimates of thermal quantities and their optimal values obtained from extremum principles. We also seek an intuitive interpretation of the prior and the estimated value of temperature obtained therefrom. We find that the prior over temperature becomes uniform over the quantity kept conserved in the process.

  2. Approximate, computationally efficient online learning in Bayesian spiking neurons.

    PubMed

    Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André

    2014-03-01

Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM), which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initializations of the parameter estimates far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.

  3. On the radiated EMI current extraction of dc transmission line based on corona current statistical measurements

    NASA Astrophysics Data System (ADS)

    Yi, Yong; Chen, Zhengying; Wang, Liming

    2018-05-01

Corona discharge on DC transmission lines is the main source of the radiated electromagnetic interference (EMI) field in the vicinity of the lines. A joint time-frequency analysis technique is proposed to extract the radiated EMI current (excitation current) of DC corona from statistical measurements of the corona current. A reduced-scale experimental platform was set up to measure the statistical distributions of the current waveform parameters of an aluminum conductor steel-reinforced (ACSR) conductor. Based on the measured results, the peak, root-mean-square and average values of the 0.5 MHz radiated EMI current with 9 kHz and 200 Hz bandwidths were calculated by the proposed technique and validated against the conventional excitation function method. Radio interference (RI) was calculated from the radiated EMI current, and a wire-to-plate platform was built to validate the RI computation results. The reasons for the remaining deviation between computations and measurements are analyzed in detail.

  4. A pattern-mixture model approach for handling missing continuous outcome data in longitudinal cluster randomized trials.

    PubMed

    Fiero, Mallorie H; Hsu, Chiu-Hsieh; Bell, Melanie L

    2017-11-20

    We extend the pattern-mixture approach to handle missing continuous outcome data in longitudinal cluster randomized trials, which randomize groups of individuals to treatment arms, rather than the individuals themselves. Individuals who drop out at the same time point are grouped into the same dropout pattern. We approach extrapolation of the pattern-mixture model by applying multilevel multiple imputation, which imputes missing values while appropriately accounting for the hierarchical data structure found in cluster randomized trials. To assess parameters of interest under various missing data assumptions, imputed values are multiplied by a sensitivity parameter, k, which increases or decreases imputed values. Using simulated data, we show that estimates of parameters of interest can vary widely under differing missing data assumptions. We conduct a sensitivity analysis using real data from a cluster randomized trial by increasing k until the treatment effect inference changes. By performing a sensitivity analysis for missing data, researchers can assess whether certain missing data assumptions are reasonable for their cluster randomized trial. Copyright © 2017 John Wiley & Sons, Ltd.
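The scan over the sensitivity parameter k can be sketched in miniature: impute missing outcomes, multiply the imputed values by k, and watch how the treatment comparison changes. The data, dropout mechanism, and naive single imputation below are illustrative; the paper uses multilevel multiple imputation on clustered data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 200
treat = rng.normal(1.0, 2.0, n)          # treated-arm outcomes
ctrl = rng.normal(0.3, 2.0, n)           # control-arm outcomes
missing = rng.random(n) < 0.3            # ~30% dropout in the treated arm

imputed_base = float(np.mean(treat[~missing]))   # naive single imputation

pvals = {}
for k in (1.0, 0.8, 0.6, 0.4, 0.2):
    treat_k = treat.copy()
    treat_k[missing] = k * imputed_base  # k < 1: dropouts assumed to do worse
    pvals[k] = float(stats.ttest_ind(treat_k, ctrl).pvalue)
    print(f"k = {k:.1f}  p = {pvals[k]:.4f}")
```

As k shrinks, the imputed outcomes pull the treated-arm mean toward the control arm and the p-value rises; the k at which the inference flips indicates how severe a missing-not-at-random departure the trial's conclusion can tolerate.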

  5. Heterogeneity in Hydraulic Conductivity and Its Role on the Macroscale Transport of a Solute Plume from a Landfill: From Measurements to a Practical Application of Stochastic Flow and Transport Theory

    NASA Astrophysics Data System (ADS)

    Sudicky, E. A.; Illman, W. A.; Goltz, I. K.; Adams, J. J.; McLaren, R. G.

    2008-12-01

The spatial variability of hydraulic conductivity in a shallow unconfined aquifer located at North Bay, Ontario composed of glacial-lacustrine and glacial-fluvial sands is examined in exceptional detail and characterized geostatistically. A total of 1878 permeameter measurements were performed at 0.05 m vertical intervals along cores taken from 20 boreholes along two intersecting transect lines. Simultaneous three-dimensional fitting of ln K variogram data to an exponential model yielded geostatistical parameters for the estimation of bulk hydraulic conductivity and solute dispersion parameters. The analysis revealed a ln K variance equal to about 2.0 and three-dimensional anisotropy of the correlation structure of the heterogeneity (λ1, λ2 and λ3 equal to 17.19 m, 7.39 m and 1.0 m, respectively). Effective values of the hydraulic conductivity tensor and the value of the longitudinal macrodispersivity were calculated using the theoretical expressions of Gelhar and Axness (1983). The magnitude of the longitudinal macrodispersivity is reasonably consistent with the observed degree of longitudinal dispersion of the landfill plume along the principal path of migration. The prediction of the transverse dispersion suggests that the transverse-mixing process at the field scale is essentially controlled by local dispersion and diffusion. Variably-saturated 3D flow modeling using the statistically-derived effective hydraulic conductivity tensor allowed a reasonably close calibration to the measured water table and the observed heads at various depths in an array of piezometers. Concomitant transport modeling using the calculated longitudinal macrodispersivity, as well as local-scale values of the transverse dispersion parameters, reasonably predicted the extent and migration rates of the observed contaminant plume that was monitored using a network of multi-level samplers over a period of about 5 years. 
This study demonstrates that the use of statistically-derived parameters based on stochastic theories results in reliable large-scale 3D flow and transport models for complex hydrogeological systems. This is in agreement with the conclusions reached by Sudicky (1986) at the site of an elaborate tracer test conducted in the aquifer at the Canadian Forces Base Borden. This study represents one of the few attempts at validating stochastic theories of groundwater flow and solute transport in three-dimensions at a site where extensive field data have been collected.
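The variogram-fitting step described above can be sketched as a least-squares fit of the exponential model γ(h) = sill · (1 − exp(−h/a)). The synthetic sill and range below are chosen near the reported ln K variance (~2.0) and horizontal correlation length, purely for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_variogram(h, sill, a):
    """Exponential variogram model: gamma(h) = sill * (1 - exp(-h / a))."""
    return sill * (1.0 - np.exp(-h / a))

h = np.linspace(0.5, 60.0, 40)   # lag distance (m)
rng = np.random.default_rng(11)
gamma = exp_variogram(h, sill=2.0, a=17.19) * rng.normal(1.0, 0.05, h.size)

(sill_fit, a_fit), _ = curve_fit(exp_variogram, h, gamma, p0=[1.0, 10.0])
print(f"sill = {sill_fit:.2f}, range parameter = {a_fit:.1f} m")
```

The fitted sill estimates the ln K variance and the range parameter the correlation length, which are exactly the quantities fed into the Gelhar-Axness expressions for effective conductivity and macrodispersivity.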

  6. On the remote sensing of cloud properties from satellite infrared sounder data

    NASA Technical Reports Server (NTRS)

    Yeh, H. Y. M.

    1984-01-01

A method for the remote sensing of cloud parameters using infrared sounder data has been developed on the basis of a parameterized infrared transfer equation applicable to cloudy atmospheres. The method is used to retrieve the cloud height, amount, and emissivity in the 11 μm region. Numerical analyses and retrieval experiments have been carried out using synthetic sounder data for the theoretical study. The sensitivity of the numerical procedures to measurement and instrument errors is also examined. The retrieved results are discussed physically and compared numerically with the model atmospheres. Comparisons reveal that the recovered cloud parameters agree reasonably well with the pre-assumed values. However, for cases with relatively thin clouds and/or small cloud fractional cover within a field of view, the recovered cloud parameters show considerable fluctuations. Experiments on the proposed algorithm were carried out using High Resolution Infrared Sounder (HIRS/2) data from NOAA 6 and TIROS-N. The results show reasonably good agreement with surface reports and GOES satellite images.

  7. Estimating stellar effective temperatures and detected angular parameters using stochastic particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Chuan-Xin; Yuan, Yuan; Zhang, Hao-Wei; Shuai, Yong; Tan, He-Ping

    2016-09-01

    Considering features of stellar spectral radiation and sky surveys, we established a computational model for stellar effective temperatures, detected angular parameters and gray rates. Using known stellar flux data in some bands, we estimated stellar effective temperatures and detected angular parameters using stochastic particle swarm optimization (SPSO). We first verified the reliability of SPSO, and then determined reasonable parameters that produced highly accurate estimates under certain gray deviation levels. Finally, we calculated 177 860 stellar effective temperatures and detected angular parameters using data from the Midcourse Space Experiment (MSX) catalog. These derived stellar effective temperatures were accurate when compared to known values from the literature. This research makes full use of catalog data and presents an original technique for studying stellar characteristics. It proposes a novel method for calculating stellar effective temperatures and detected angular parameters, and provides theoretical and practical data for finding information about radiation in any band.
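The estimation idea can be sketched with a minimal particle swarm: recover an effective temperature and a geometric dilution factor (proportional to the squared angular radius) by matching band fluxes to a Planck model. The band centers, noise level, swarm settings, and the "true" values are all illustrative, not the MSX pipeline's:

```python
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)

def planck(lam_m, t):
    """Spectral radiance B_lambda(T) (SI units)."""
    return 2 * H * C**2 / lam_m**5 / np.expm1(H * C / (lam_m * K * t))

LAM = np.array([0.44, 0.55, 0.70, 1.25, 2.20]) * 1e-6  # band centers (m)
T_TRUE, SCALE_TRUE = 5800.0, 1e-16
rng0 = np.random.default_rng(5)
F_OBS = SCALE_TRUE * planck(LAM, T_TRUE) * rng0.normal(1.0, 0.01, LAM.size)

def loss(t, log_s):
    model = 10.0 ** log_s * planck(LAM, t)
    return np.sum((np.log(model) - np.log(F_OBS)) ** 2)

def pso(n_particles=30, n_iter=200, seed=5):
    rng = np.random.default_rng(seed)
    lo, hi = np.array([3000.0, -20.0]), np.array([12000.0, -12.0])
    x = rng.uniform(lo, hi, (n_particles, 2))       # positions: (T, log10 scale)
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([loss(*p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([loss(*p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest

t_est, log_s_est = pso()
print(f"T_eff estimate: {t_est:.0f} K")
```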

  8. The DPAC Compensation Model: An Introductory Handbook.

    DTIC Science & Technology

    1987-04-01

    introductory and advanced economics courses at the US Air Force Academy, he served for four years as an analyst and action officer in the ...introduces new users to the ACOL framework and provides some guidelines for choosing reasonable values for the four long-run parameters required to run the ...regression coefficients for ACOL and the civilian unemployment rate; for pilots, the number of " new " pilot

  9. Heterogeneity in hydraulic conductivity and its role on the macroscale transport of a solute plume: From measurements to a practical application of stochastic flow and transport theory

    NASA Astrophysics Data System (ADS)

    Sudicky, E. A.; Illman, W. A.; Goltz, I. K.; Adams, J. J.; McLaren, R. G.

    2010-01-01

    The spatial variability of hydraulic conductivity in a shallow unconfined aquifer located at North Bay, Ontario, composed of glacial-lacustrine and glacial-fluvial sands, is examined in exceptional detail and characterized geostatistically. A total of 1878 permeameter measurements were performed at 0.05 m vertical intervals along cores taken from 20 boreholes along two intersecting transect lines. Simultaneous three-dimensional (3-D) fitting of Ln(K) variogram data to an exponential model yielded geostatistical parameters for the estimation of bulk hydraulic conductivity and solute dispersion parameters. The analysis revealed a Ln(K) variance equal to about 2.0 and 3-D anisotropy of the correlation structure of the heterogeneity (λ1, λ2, and λ3 equal to 17.19, 7.39, and 1.0 m, respectively). Effective values of the hydraulic conductivity tensor and the value of the longitudinal macrodispersivity were calculated using the theoretical expressions of Gelhar and Axness (1983). The magnitude of the longitudinal macrodispersivity is reasonably consistent with the observed degree of longitudinal dispersion of the landfill plume along the principal path of migration. Variably saturated 3-D flow modeling using the statistically derived effective hydraulic conductivity tensor allowed a reasonably close prediction of the measured water table and the observed heads at various depths in an array of piezometers. Concomitant transport modeling using the calculated longitudinal macrodispersivity reasonably predicted the extent and migration rates of the observed contaminant plume that was monitored using a network of multilevel samplers over a period of about 5 years. It was further demonstrated that the length of the plume is relatively insensitive to the value of the longitudinal macrodispersivity under the conditions of a steady flow in 3-D and constant source strength. 
This study demonstrates that the use of statistically derived parameters based on stochastic theories results in reliable large-scale 3-D flow and transport models for complex hydrogeological systems. This is in agreement with the conclusions reached by Sudicky (1986) at the site of an elaborate tracer test conducted in the aquifer at the Canadian Forces Base Borden.
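    The exponential variogram model used in the study above has the closed form γ(h) = σ²(1 − e^(−h/λ)). A minimal sketch of fitting it by least squares; the lag values are hypothetical, while the sill (2.0) and the longest correlation length (17.19 m) are the values reported in the abstract:

```python
import numpy as np

def exp_variogram(h, sill, lam):
    """Exponential variogram model: gamma(h) = sill * (1 - exp(-h/lambda))."""
    return sill * (1.0 - np.exp(-h / lam))

# Hypothetical experimental Ln(K) variogram points (lag in metres).
lags = np.array([1, 2, 4, 8, 12, 16, 24, 32, 48], dtype=float)
sill_true, lam_true = 2.0, 17.19          # values reported in the abstract
gamma_obs = exp_variogram(lags, sill_true, lam_true)

# Least-squares fit by grid search (no optimizer dependency).
sills = np.linspace(0.5, 4.0, 351)
lams = np.linspace(1.0, 40.0, 391)
S, L = np.meshgrid(sills, lams, indexing="ij")
resid = ((exp_variogram(lags[None, None, :], S[..., None], L[..., None])
          - gamma_obs)**2).sum(axis=-1)
i, j = np.unravel_index(np.argmin(resid), resid.shape)
sill_fit, lam_fit = sills[i], lams[j]
```

    With noiseless synthetic data the grid search recovers the sill and correlation length to within the grid resolution.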

  10. Significance of modeling internal damping in the control of structures

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Inman, D. J.

    1992-01-01

    Several simple systems are examined to illustrate the importance of the estimation of damping parameters in closed-loop system performance and stability. The negative effects of unmodeled damping are particularly pronounced in systems that do not use collocated sensors and actuators. An example is considered for which even the actuators (a tip jet nozzle and flexible hose) for a simple beam produce significant damping which, if ignored, results in a model that cannot yield a reasonable time response using physically meaningful parameter values. It is concluded that correct damping modeling is essential in structure control.

  11. Analysis and Thermodynamic Prediction of Hydrogen Solution in Solid and Liquid Multicomponent Aluminum Alloys

    NASA Astrophysics Data System (ADS)

    Anyalebechi, P. N.

    Reported experimentally determined values of hydrogen solubility in liquid and solid Al-H and Al-H-X (where X = Cu, Si, Zn, Mg, Li, Fe or Ti) systems have been critically reviewed and analyzed in terms of Wagner's interaction parameter. An attempt has been made to use Wagner's interaction parameter and statistical linear regression models derived from reported hydrogen solubility limits for binary aluminum alloys to predict the hydrogen solubility limits in liquid and solid (commercial) multicomponent aluminum alloys. Reasons for the observed poor agreement between the predicted and experimentally determined hydrogen solubility limits are discussed.

  12. Prediction of the wetting-induced collapse behaviour using the soil-water characteristic curve

    NASA Astrophysics Data System (ADS)

    Xie, Wan-Li; Li, Ping; Vanapalli, Sai K.; Wang, Jia-Ding

    2018-01-01

    Collapsible soils go through three distinct phases in response to matric suction decrease during wetting: a pre-collapse phase, a collapse phase and a post-collapse phase. It is reasonable and conservative to consider a strain path that includes a pre-collapse phase, in which constant volume is maintained, and a collapse phase that extends to the final matric suction to be experienced by the soil during wetting. Based on this assumption, a method is proposed for predicting the collapse behaviour due to wetting. The method requires two parameters: the critical suction and the collapse rate. The former is the suction value below which significant collapse deformations take place in response to matric suction decrease, and the latter is the rate at which the void ratio decreases with matric suction in the collapse phase. The value of the critical suction can be estimated from the water-entry value, taking account of both the microstructure characteristics and the collapse mechanism of fine-grained collapsible soils; the wetting soil-water characteristic curve can thus be used as a tool. Five sets of data from wetting tests on both compacted and natural collapsible soils reported in the literature were used to validate the proposed method. The critical suction values were estimated from the water-entry value with the parameter a, which is suggested to vary between 0.10 and 0.25 for compacted soils and to be lower for natural collapsible soils. The results of a field permeation test in collapsible loess soils were also used to validate the proposed method. The relatively good agreement between the measured and estimated collapse deformations suggests that the proposed method can provide reasonable predictions of the collapse behaviour due to wetting.
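    The two-parameter strain path described above can be sketched as a void-ratio function of suction: constant above the critical suction ψ_cr = a·ψ_w, then decreasing at the collapse rate per log cycle of suction. All numerical values below (water-entry suction, a, initial void ratio, collapse rate) are illustrative assumptions, not data from the paper:

```python
import numpy as np

def void_ratio_path(psi, psi_w, a=0.2, e0=0.95, rate=0.08):
    """Two-phase strain path: constant void ratio above the critical
    suction psi_cr = a * psi_w, then e decreasing linearly with
    log10(suction) at the given collapse rate (per log cycle)."""
    psi_cr = a * psi_w               # critical suction from water-entry value
    psi = np.asarray(psi, dtype=float)
    e = np.where(psi >= psi_cr, e0,
                 e0 - rate * np.log10(psi_cr / np.maximum(psi, 1e-6)))
    return psi_cr, e

# Wetting path from 200 kPa down to 1 kPa (hypothetical numbers).
psi_cr, e = void_ratio_path([200.0, 50.0, 10.0, 1.0], psi_w=100.0)
```

    In this sketch the void ratio stays at 0.95 until the suction drops below 20 kPa, then decreases with further wetting.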

  13. Robust Diagnosis Method Based on Parameter Estimation for an Interturn Short-Circuit Fault in Multipole PMSM under High-Speed Operation.

    PubMed

    Lee, Jewon; Moon, Seokbae; Jeong, Hyeyun; Kim, Sang Woo

    2015-11-20

    This paper proposes a diagnosis method for a multipole permanent magnet synchronous motor (PMSM) under an interturn short-circuit fault. Previous works in this area have suffered from uncertainties in the PMSM parameters, which can lead to misdiagnosis. The proposed method estimates the q-axis inductance (Lq) of the faulty PMSM to solve this problem. It also estimates the faulty phase and the value of G, which serves as an index of the severity of the fault. The q-axis current is used to estimate the faulty phase and the values of G and Lq; to this end, two open-loop observers and a particle-swarm-based optimization method are implemented. The q-axis current of a healthy PMSM is estimated by the open-loop observer with the parameters of a healthy PMSM. The Lq estimation significantly compensates for the estimation errors in high-speed operation. The experimental results demonstrate that the proposed method can estimate the faulty phase, G, and Lq while exhibiting robustness against parameter uncertainties.

  14. Acoustic energy relations in Mudejar-Gothic churches.

    PubMed

    Zamarreño, Teófilo; Girón, Sara; Galindo, Miguel

    2007-01-01

    Extensive objective energy-based parameters have been measured in 12 Mudejar-Gothic churches in the south of Spain. Measurements took place in unoccupied churches according to the ISO 3382 standard. Monaural objective measures were obtained in the 125-4000 Hz frequency range, together with their spatial distributions. The acoustic parameters clarity (C80), definition (D50), sound strength (G) and center time (Ts) were deduced by impulse response analysis, using a maximum length sequence measurement system in each church. These parameters, spectrally averaged according to the criteria most widely used for auditoria to assess acoustic quality, were studied as a function of source-receiver distance. The experimental results were compared with predictions given by classical theory and other existing models proposed for concert halls and churches. An analytical semi-empirical model based on the measured values of the C80 parameter is proposed in this work for these spaces. The good agreement between predicted values and experimental data for definition, sound strength and center time in the churches analyzed shows that the model can be used for design predictions and other purposes with reasonable accuracy.
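    The energy parameters named above have standard ISO 3382 definitions: C80 is the early-to-late energy ratio with an 80 ms split (in dB), D50 the fraction of energy in the first 50 ms, and Ts the energy-weighted centre time. A minimal sketch computing them from a synthetic exponentially decaying impulse response (the sampling rate and decay time are illustrative):

```python
import numpy as np

def energy_parameters(ir, fs):
    """Clarity C80 (dB), definition D50 (-) and centre time Ts (ms)
    from a room impulse response, per the ISO 3382 definitions."""
    p2 = ir.astype(float)**2                  # squared pressure
    t = np.arange(len(ir)) / fs
    n50, n80 = int(0.050*fs), int(0.080*fs)
    c80 = 10.0*np.log10(p2[:n80].sum() / p2[n80:].sum())
    d50 = p2[:n50].sum() / p2.sum()
    ts = 1000.0 * (t * p2).sum() / p2.sum()
    return c80, d50, ts

# Synthetic impulse response: white noise under an exponential decay
# corresponding to a reverberation time of about 1.5 s.
fs = 8000
t = np.arange(int(2.0*fs)) / fs
rng = np.random.default_rng(1)
ir = rng.standard_normal(len(t)) * np.exp(-6.91 * t / 1.5)

c80, d50, ts = energy_parameters(ir, fs)
```

    For a pure exponential decay these three parameters are analytically linked to the reverberation time, which is one reason the semi-empirical model above can predict D50 and Ts from C80.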

  15. Development of genetic algorithm-based optimization module in WHAT system for hydrograph analysis and model application

    NASA Astrophysics Data System (ADS)

    Lim, Kyoung Jae; Park, Youn Shik; Kim, Jonggun; Shin, Yong-Chul; Kim, Nam Won; Kim, Seong Joon; Jeon, Ji-Hong; Engel, Bernard A.

    2010-07-01

    Many hydrologic and water quality computer models have been developed and applied to assess hydrologic and water quality impacts of land use changes. These models are typically calibrated and validated prior to their application. The Long-Term Hydrologic Impact Assessment (L-THIA) model was applied to the Little Eagle Creek (LEC) watershed and compared with the filtered direct runoff using BFLOW and the Eckhardt digital filter (with a default BFImax value of 0.80 and filter parameter value of 0.98), both available in the Web GIS-based Hydrograph Analysis Tool, called WHAT. The R2 value and the Nash-Sutcliffe coefficient were 0.68 and 0.64 with BFLOW, and 0.66 and 0.63 with the Eckhardt digital filter. Although these results indicate that the L-THIA model estimates direct runoff reasonably well, the filtered direct runoff values using BFLOW and the Eckhardt digital filter with the default BFImax and filter parameter values do not reflect the hydrological and hydrogeological situation in the LEC watershed. Thus, a BFImax GA-Analyzer module (BFImax Genetic Algorithm-Analyzer module) was developed and integrated into the WHAT system for determination of the optimum BFImax parameter and filter parameter of the Eckhardt digital filter. With the automated recession curve analysis method and the BFImax GA-Analyzer module of the WHAT system, the optimum BFImax value of 0.491 and filter parameter value of 0.987 were determined for the LEC watershed. The comparison of L-THIA estimates with filtered direct runoff using the optimized BFImax and filter parameter resulted in an R2 value of 0.66 and a Nash-Sutcliffe coefficient of 0.63. However, L-THIA estimates calibrated with the optimized BFImax and filter parameter increased by 33% and estimated NPS pollutant loadings increased by more than 20%. 
This indicates L-THIA model direct runoff estimates can be incorrect by 33% and NPS pollutant loading estimation by more than 20%, if the accuracy of the baseflow separation method is not validated for the study watershed prior to model comparison. This study shows the importance of baseflow separation in hydrologic and water quality modeling using the L-THIA model.
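    The Eckhardt filter referenced above is a two-parameter recursive digital filter that separates baseflow from total streamflow. A minimal sketch using the default BFImax of 0.80 and filter parameter of 0.98 quoted in the abstract; the hydrograph values are made up:

```python
import numpy as np

def eckhardt_filter(q, bfi_max=0.80, a=0.98):
    """Eckhardt recursive digital filter: baseflow b_k from streamflow q_k,
    b_k = ((1-BFImax)*a*b_{k-1} + (1-a)*BFImax*q_k) / (1 - a*BFImax),
    constrained so that baseflow never exceeds streamflow."""
    b = np.zeros_like(q, dtype=float)
    b[0] = bfi_max * q[0]                     # simple initialisation
    for k in range(1, len(q)):
        b[k] = ((1 - bfi_max)*a*b[k-1] + (1 - a)*bfi_max*q[k]) / (1 - a*bfi_max)
        b[k] = min(b[k], q[k])                # physical constraint
    return b

q = np.array([5, 5, 30, 80, 40, 20, 12, 8, 6, 5], dtype=float)  # storm hydrograph
baseflow = eckhardt_filter(q)
direct_runoff = q - baseflow
```

    Changing BFImax (e.g. to the optimized 0.491 found for the LEC watershed) directly rescales the separated baseflow, which is why the abstract's calibrated direct-runoff estimates shifted by tens of percent.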

  16. Prediction of solubility parameters and miscibility of pharmaceutical compounds by molecular dynamics simulations.

    PubMed

    Gupta, Jasmine; Nunes, Cletus; Vyas, Shyam; Jonnalagadda, Sriramakamal

    2011-03-10

    The objectives of this study were (i) to develop a computational model based on molecular dynamics technique to predict the miscibility of indomethacin in carriers (polyethylene oxide, glucose, and sucrose) and (ii) to experimentally verify the in silico predictions by characterizing the drug-carrier mixtures using thermoanalytical techniques. Molecular dynamics (MD) simulations were performed using the COMPASS force field, and the cohesive energy density and the solubility parameters were determined for the model compounds. The magnitude of the difference in the solubility parameters of drug and carrier is indicative of their miscibility. The MD simulations predicted indomethacin to be miscible with polyethylene oxide, borderline miscible with sucrose, and immiscible with glucose. The solubility parameter values obtained from the MD simulations were in reasonable agreement with those calculated using group contribution methods. Differential scanning calorimetry showed melting-point depression of polyethylene oxide with increasing levels of indomethacin, accompanied by peak broadening, confirming miscibility. In contrast, thermal analysis of blends of indomethacin with sucrose and glucose verified general immiscibility. The findings demonstrate that molecular modeling is a powerful technique for determining solubility parameters and predicting the miscibility of pharmaceutical compounds.
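    The link between cohesive energy density and miscibility can be illustrated with the Hildebrand relation δ = (E_coh/V)^(1/2). The numbers below are illustrative, not the study's COMPASS results, and the ~7 MPa^0.5 miscibility threshold is a commonly quoted rule of thumb rather than a value from this paper:

```python
import numpy as np

def solubility_parameter(e_coh, molar_volume):
    """Hildebrand solubility parameter, delta = sqrt(E_coh / V).

    e_coh        -- cohesive energy in kJ/mol
    molar_volume -- molar volume in cm^3/mol
    Returns delta in MPa**0.5.
    """
    return np.sqrt(e_coh * 1e3 / (molar_volume * 1e-6)) * 1e-3

# Illustrative drug/carrier values (NOT the paper's COMPASS results).
delta_drug = solubility_parameter(60.0, 200.0)
delta_carrier = solubility_parameter(55.0, 190.0)

# Rule of thumb: |delta1 - delta2| below roughly 7 MPa**0.5 suggests miscibility.
miscible = abs(delta_drug - delta_carrier) < 7.0
```

    In an MD workflow, E_coh and V would come from the equilibrated simulation cell rather than being supplied by hand.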

  17. Cosmological attractor inflation from the RG-improved Higgs sector of finite gauge theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elizalde, Emilio; Odintsov, Sergei D.; Pozdeeva, Ekaterina O.

    2016-02-01

    The possibility to construct an inflationary scenario for renormalization-group improved potentials corresponding to the Higgs sector of finite gauge models is investigated. Taking into account quantum corrections to the renormalization-group potential, which sums all leading logs of perturbation theory, is essential for a successful realization of the inflationary scenario, with very reasonable parameter values. The inflationary models thus obtained are seen to be in good agreement with the most recent and accurate observational data. More specifically, the values of the relevant inflationary parameters, n_s and r, are close to the corresponding ones in the R² and Higgs-driven inflation scenarios. It is shown that the model here constructed and Higgs-driven inflation belong to the same class of cosmological attractors.

  18. Monte-Carlo Method Application for Precising Meteor Velocity from TV Observations

    NASA Astrophysics Data System (ADS)

    Kozak, P.

    2014-12-01

    The Monte-Carlo method (method of statistical trials) as applied to the processing of meteor observations was developed in the author's Ph.D. thesis in 2005 and first used in his works in 2008. The idea is that if we generate random values of the input data - the equatorial coordinates of the meteor head in a sequence of TV frames - in accordance with their statistical distributions, we can plot the probability density distributions of all kinematical parameters and obtain their mean values and dispersions. This also opens a theoretical possibility of refining the most important parameter - the geocentric velocity of a meteor - which has the strongest influence on the precision of the calculated heliocentric orbit elements. In the classical approach the velocity vector is calculated in two stages: first, its direction is obtained as the vector product of the poles of the meteor-trajectory great circles determined from the two observation points; then the absolute value of the velocity is calculated independently from each observation point, and one of the two values is selected, on some grounds, as the final parameter. In the method presented here, we propose instead to obtain the statistical distribution of the velocity's absolute value as the intersection of the two distributions corresponding to the velocity values obtained from the different points. We expect this approach to substantially increase the precision of the meteor velocity calculation and to remove subjective inaccuracies.
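    The proposed "intersection" of the two per-station velocity distributions can be sketched as the pointwise product of the two densities, renormalised; for Gaussian inputs this reproduces the familiar inverse-variance-weighted combination. The per-station means and dispersions below are hypothetical:

```python
import numpy as np

# Hypothetical per-station velocity estimates (km/s): mean and std. deviation.
m1, s1 = 59.4, 1.2
m2, s2 = 60.1, 0.8

# Numerical "intersection": pointwise product of the two probability
# densities, renormalised on a velocity grid.
v = np.linspace(50.0, 70.0, 20001)
dv = v[1] - v[0]
pdf = lambda x, m, s: np.exp(-0.5*((x - m)/s)**2) / (s*np.sqrt(2*np.pi))
combined = pdf(v, m1, s1) * pdf(v, m2, s2)
combined /= combined.sum() * dv
v_mean = (v * combined).sum() * dv
v_std = np.sqrt(((v - v_mean)**2 * combined).sum() * dv)

# For Gaussian inputs the product is again Gaussian (inverse-variance weights).
w1, w2 = 1.0/s1**2, 1.0/s2**2
v_closed = (w1*m1 + w2*m2) / (w1 + w2)
```

    The combined dispersion is smaller than either input dispersion, which is the sense in which the intersection "precises" the velocity; with Monte-Carlo samples instead of analytic pdfs, the same product would be formed from histogram or kernel density estimates.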

  19. Surface pretreatment of plastics with an atmospheric pressure plasma jet - Influence of generator power and kinematics

    NASA Astrophysics Data System (ADS)

    Moritzer, E.; Leister, C.

    2014-05-01

    The industrial use of atmospheric pressure plasmas in the plastics processing industry has increased significantly in recent years. Users of this treatment process have the possibility to influence the target values (e.g. bond strength or surface energy) with the help of kinematic and electrical parameters. Until now, systematic procedures have been used with which the parameters can be adapted to the process or product requirements but only by very time-consuming methods. For this reason, the relationship between influencing values and target values will be examined based on the example of a pretreatment in the bonding process with the help of statistical experimental design. Because of the large number of parameters involved, the analysis is restricted to the kinematic and electrical parameters. In the experimental tests, the following factors are taken as parameters: gap between nozzle and substrate, treatment velocity (kinematic data), voltage and duty cycle (electrical data). The statistical evaluation shows significant relationships between the parameters and surface energy in the case of polypropylene. An increase in the voltage and duty cycle increases the polar proportion of the surface energy, while a larger gap and higher velocity leads to lower energy levels. The bond strength of the overlapping bond is also significantly influenced by the voltage, velocity and gap. The direction of their effects is identical with those of the surface energy. In addition to the kinematic influences of the motion of an atmospheric pressure plasma jet, it is therefore especially important that the parameters for the plasma production are taken into account when designing the pretreatment processes.

  1. A Characterization of Dynamic Reasoning: Reasoning with Time as Parameter

    ERIC Educational Resources Information Center

    Keene, Karen Allen

    2007-01-01

    Students incorporate and use the implicit and explicit parameter time to support their mathematical reasoning and deepen their understandings as they participate in a differential equations class during instruction on solutions to systems of differential equations. Therefore, dynamic reasoning is defined as developing and using conceptualizations…

  2. Predicting bioactive glass properties from the molecular chemical composition: glass transition temperature.

    PubMed

    O'Donnell, Matthew D

    2011-05-01

    The glass transition temperature (T(g)) of inorganic glasses is an important parameter that can be correlated with other glass properties, such as dissolution rate, which governs in vitro and in vivo bioactivity. Seven bioactive glass compositional series reported in the literature (77 compositions in total) were analysed here, with T(g) values obtained by a number of different methods: differential thermal analysis, differential scanning calorimetry and dilatometry. An iterative least-squares fitting method was used to correlate T(g) from thermal analysis of these compositions with the levels of the individual oxide and fluoride components in the glasses. When all seven series were fitted, a reasonable correlation was found between calculated and experimental values (R(2)=0.89). When the two compositional series that were designed in weight percentages (the remaining five were designed in molar percentages) were removed from the model, an improved fit was achieved (R(2)=0.97). This study shows that T(g) for a wide range of compositions (e.g. SiO(2) content of 37.3-68.4 mol.%) can be predicted with reasonable accuracy, enabling processing parameters such as annealing, fibre-drawing and sintering temperatures to be estimated.
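    The fitting approach above can be sketched as an additive model T_g = Σ c_i·x_i over component fractions, solved by least squares. The compositions and coefficients below are invented for illustration and are not the paper's fitted values:

```python
import numpy as np

# Hypothetical glass compositions as mole fractions of
# (SiO2, Na2O, CaO, P2O5) -- illustrative only.
X = np.array([
    [0.46, 0.26, 0.25, 0.03],
    [0.50, 0.22, 0.24, 0.04],
    [0.54, 0.20, 0.23, 0.03],
    [0.58, 0.16, 0.21, 0.05],
    [0.62, 0.14, 0.20, 0.03],
])

# Illustrative per-component contributions to Tg (degrees C per unit fraction).
tg_true_coeff = np.array([900.0, 200.0, 650.0, 500.0])
tg = X @ tg_true_coeff                      # synthetic "measured" Tg values

# Least-squares estimate of the per-component contributions.
coeff, *_ = np.linalg.lstsq(X, tg, rcond=None)
tg_pred = X @ coeff
r2 = 1.0 - ((tg - tg_pred)**2).sum() / ((tg - tg.mean())**2).sum()
```

    With real, noisy data the R² would drop below 1, as in the abstract, and mixing weight-percent and mole-percent series would degrade the additive fit further.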

  3. An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model

    NASA Astrophysics Data System (ADS)

    Tiernan, E. D.; Hodges, B. R.

    2017-12-01

    The stormwater management model (SWMM) is a lumped model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine uses a preliminary sensitivity analysis to reduce the dimensions of the parameter space, at which point a multi-objective genetic algorithm (a modified Non-dominated Sorting Genetic Algorithm II) determines the Pareto front for the objective functions within the parameter space. The solutions on this Pareto front represent optimized parameter value sets for the catchment behavior that could not have been reasonably obtained through manual calibration.
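    The core selection step of an NSGA-II-style routine, extracting the non-dominated (Pareto) set from a population of candidate parameter sets, can be sketched as follows; the two objective values per candidate are hypothetical calibration errors, both to be minimised:

```python
import numpy as np

def pareto_front(costs):
    """Indices of non-dominated rows (all objectives minimised).
    A row i is dominated if some row is <= it in every objective
    and strictly < in at least one."""
    n = len(costs)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        dominated_by = (np.all(costs <= costs[i], axis=1)
                        & np.any(costs < costs[i], axis=1))
        if dominated_by.any():
            keep[i] = False
    return np.flatnonzero(keep)

# Hypothetical (peak-flow error, volume error) pairs for five parameter sets.
costs = np.array([[0.9, 0.1],
                  [0.5, 0.5],
                  [0.1, 0.9],
                  [0.8, 0.8],
                  [0.6, 0.2]])
front = pareto_front(costs)
```

    In a full NSGA-II this selection is applied repeatedly (with crowding-distance tie-breaking) as the genetic operators evolve the population toward the true Pareto front.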

  4. Sensitivity study for a remotely piloted microwave-powered sailplane used as a high-altitude observation

    NASA Technical Reports Server (NTRS)

    Turriziani, R. V.

    1979-01-01

    The sensitivity of several performance characteristics of a proposed design for a microwave-powered, remotely piloted, high-altitude sailplane to changes in independently varied design parameters was investigated. Results were expressed as variations from baseline values of range, final climb altitude and onboard storage of radiated energy. Calculated range decreased with increases in either gross weight or parasite drag coefficient; it also decreased with decreases in lift coefficient, propeller efficiency, or microwave beam density. The sensitivity trends for range and final climb altitude were very similar. The sensitivity trends for stored energy were reversed from those for range, except for decreasing microwave beam density. Some study results for single parameter variations were combined to estimate the effect of the simultaneous variation of several parameters: for two parameters, this appeared to give reasonably accurate results.

  5. Manual editing of automatically recorded data in an anesthesia information management system.

    PubMed

    Wax, David B; Beilin, Yaakov; Hossain, Sabera; Lin, Hung-Mo; Reich, David L

    2008-11-01

    Anesthesia information management systems allow automatic recording of physiologic and anesthetic data. The authors investigated the prevalence of such data modification in an academic medical center. The authors queried their anesthesia information management system database of anesthetics performed in 2006 and tabulated the counts of data points for automatically recorded physiologic and anesthetic parameters as well as the subset of those data that were manually invalidated by clinicians (both with and without alternate values manually appended). Patient, practitioner, data source, and timing characteristics of recorded values were also extracted to determine their associations with editing of various parameters in the anesthesia information management system record. A total of 29,491 cases were analyzed, 19% of which had one or more data points manually invalidated. Among 58 attending anesthesiologists, each invalidated data in a median of 7% of their cases when working as a sole practitioner. A minority of invalidated values were manually appended with alternate values. Pulse rate, blood pressure, and pulse oximetry were the most commonly invalidated parameters. Data invalidation usually resulted in a decrease in parameter variance. Factors independently associated with invalidation included extreme physiologic values, American Society of Anesthesiologists physical status classification, emergency status, timing (phase of the procedure/anesthetic), presence of an intraarterial catheter, resident or certified registered nurse anesthetist involvement, and procedure duration. Editing of physiologic data automatically recorded in an anesthesia information management system is a common practice and results in decreased variability of intraoperative data. Further investigation may clarify the reasons for and consequences of this behavior.

  6. Joining direct and indirect inverse calibration methods to characterize karst, coastal aquifers

    NASA Astrophysics Data System (ADS)

    De Filippis, Giovanna; Foglia, Laura; Giudici, Mauro; Mehl, Steffen; Margiotta, Stefano; Negri, Sergio

    2016-04-01

    Parameter estimation is extremely relevant for accurate simulation of groundwater flow. Parameter values for models of large-scale catchments are usually derived from a limited set of field observations, which can rarely be obtained in a straightforward way from field tests or laboratory measurements on samples, due to a number of factors, including measurement errors and inadequate sampling density. Indeed, a wide gap exists between the local scale, at which most of the observations are taken, and the regional or basin scale, at which the planning and management decisions are usually made. For this reason, the use of geologic information and field data is generally made by zoning the parameter fields. However, pure zoning does not perform well in the case of fairly complex aquifers and this is particularly true for karst aquifers. In fact, the support of the hydraulic conductivity measured in the field is normally much smaller than the cell size of the numerical model, so it should be upscaled to a scale consistent with that of the numerical model discretization. Automatic inverse calibration is a valuable procedure to identify model parameter values by conditioning on observed, available data, limiting the subjective evaluations introduced with the trial-and-error technique. Many approaches have been proposed to solve the inverse problem. Generally speaking, inverse methods fall into two groups: direct and indirect methods. Direct methods allow determination of hydraulic conductivities from the groundwater flow equations which relate the conductivity and head fields. Indirect methods, instead, can handle any type of parameters, independently from the mathematical equations that govern the process, and condition parameter values and model construction on measurements of model output quantities, compared with the available observation data, through the minimization of an objective function. Both approaches have pros and cons, depending also on model complexity. 
For this reason, a joint procedure is proposed by merging both direct and indirect approaches, thus taking advantage of their strengths, first among them the possibility to get a hydraulic head distribution all over the domain, instead of a zonation. Pros and cons of such an integrated methodology, so far unexplored to the authors' knowledge, are derived after application to a highly heterogeneous karst, coastal aquifer located in southern Italy.

  7. The ethical dimensions of wildlife disease management in an evolutionary context.

    PubMed

    Crozier, Gkd; Schulte-Hostedde, Albrecht I

    2014-08-01

    Best practices in wildlife disease management require robust evolutionary ecological research (EER). This means not only basing management decisions on evolutionarily sound reasoning, but also conducting management in a way that actively contributes to the on-going development of that research. Because good management requires good science, and good science is 'good' science (i.e., effective science is often science conducted ethically), good management therefore also requires practices that accord with sound ethical reasoning. To that end, we propose a two-part framework to assist decision makers to identify ethical pitfalls of wildlife disease management. The first part consists of six values - freedom, fairness, well-being, replacement, reduction, and refinement; these values, developed for the ethical evaluation of EER practices, are also well suited for evaluating the ethics of wildlife disease management. The second part consists of a decision tree to help identify the ethically salient dimensions of wildlife disease management and to guide managers toward ethically responsible practices in complex situations. While ethical reasoning cannot be used to deduce from first principles what practices should be undertaken in every given set of circumstances, it can establish parameters that bound what sorts of practices will be acceptable or unacceptable in certain types of scenarios.

  8. Genetic Algorithm for Optimization: Preprocessing with n Dimensional Bisection and Error Estimation

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam Ali

    2006-01-01

    Knowledge of appropriate values for the parameters of a genetic algorithm (GA), such as the population size, the shrunken search space containing the solution, and the crossover and mutation probabilities, is not available a priori for a general optimization problem. Recommended here is a polynomial-time preprocessing scheme, including an n-dimensional bisection, that determines the foregoing parameters before deciding upon an appropriate GA for all problems of a similar nature and type. Such preprocessing is not only fast but also enables us to obtain the global optimal solution and reasonably narrow error bounds for it with a high degree of confidence.

  9. Measures of GCM Performance as Functions of Model Parameters Affecting Clouds and Radiation

    NASA Astrophysics Data System (ADS)

    Jackson, C.; Mu, Q.; Sen, M.; Stoffa, P.

    2002-05-01

    This abstract is one of three related presentations at this meeting dealing with several issues surrounding optimal parameter and uncertainty estimation for model predictions of climate. Uncertainty in model predictions of climate depends in part on the uncertainty produced by model approximations or parameterizations of unresolved physics. Evaluating these uncertainties is computationally expensive because one needs to evaluate how arbitrary choices for any given combination of model parameters affect model performance. Because the computational effort grows exponentially with the number of parameters being investigated, it is important to choose parameters carefully. Evaluating whether a parameter is worth investigating depends on two considerations: 1) do reasonable choices of parameter values produce a large range in model response relative to observational uncertainty? and 2) does the model response depend non-linearly on various combinations of model parameters? We have decided to narrow our attention to parameters that affect clouds and radiation, as it is likely that these parameters will dominate uncertainties in model predictions of future climate. We present preliminary results of ~20 to 30 AMIP II-style climate model integrations using NCAR's CCM3.10 that show model performance as a function of individual parameters controlling 1) the critical relative humidity for cloud formation (RHMIN) and 2) the boundary-layer critical Richardson number (RICR). We also explore various definitions of model performance that include some or all observational data sources (surface air temperature and pressure, meridional and zonal winds, clouds, long- and short-wave cloud forcings, etc.) and evaluate in a few select cases whether the model's response depends non-linearly on the parameter values we have selected.

  10. Application of continuous normal-lognormal bivariate density functions in a sensitivity analysis of municipal solid waste landfill.

    PubMed

    Petrovic, Igor; Hip, Ivan; Fredlund, Murray D

    2016-09-01

    The variability of untreated municipal solid waste (MSW) shear strength parameters, namely cohesion and shear friction angle, is of primary concern for waste stability problems due to the strong heterogeneity of MSW. A large number of MSW shear strength parameters (friction angle and cohesion) were collected from the published literature and analyzed. Basic statistical analysis showed that the central tendency of both shear strength parameters fits reasonably well within the ranges of recommended values proposed by different authors. In addition, it was established that the correlation between shear friction angle and cohesion is not strong but still significant. Through a distribution fitting method it was found that the shear friction angle can be fitted by a normal probability density function, while cohesion follows a log-normal density function. The continuous normal-lognormal bivariate density function was therefore selected as an adequate model to ascertain rational boundary values ("confidence interval") for MSW shear strength parameters. It was concluded that a curve with a 70% confidence level generates a "confidence interval" within reasonable limits. With respect to the decomposition stage of the waste material, three different ranges of appropriate shear strength parameters were indicated. The defined parameters were then used as input parameters for an Alternative Point Estimated Method (APEM) stability analysis of a real case, the Jakusevec landfill, the disposal site of Zagreb, the capital of Croatia. The analysis shows that for a dry landfill the most significant factor influencing the safety factor was the shear friction angle of old, decomposed waste material, while for a landfill with a significant leachate level the most significant factor was the cohesion of old, decomposed waste material.
The analysis also showed that a satisfactory level of performance with a small probability of failure was produced for the standard practice design of waste landfills as well as an analysis scenario immediately after the landfill closure. Copyright © 2015 Elsevier Ltd. All rights reserved.
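    The sampling idea behind the normal-lognormal bivariate model above can be sketched as follows: draw friction angle from a normal distribution and cohesion from a log-normal one, imposing a correlation between friction angle and log-cohesion. The distribution parameters and correlation in the example are hypothetical placeholders, not the values fitted in the study.

```python
import math
import random

def sample_phi_cohesion(n, mu_phi, sd_phi, mu_lnc, sd_lnc, rho, seed=0):
    """Draw (friction angle, cohesion) pairs for a normal-lognormal bivariate model.

    phi ~ Normal(mu_phi, sd_phi); ln(cohesion) ~ Normal(mu_lnc, sd_lnc);
    correlation rho is imposed between phi and ln(cohesion) via a Cholesky-style
    combination of two independent standard normals.
    """
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        phi = mu_phi + sd_phi * z1            # friction angle (degrees)
        c = math.exp(mu_lnc + sd_lnc * z2)    # cohesion (kPa), always positive
        pairs.append((phi, c))
    return pairs
```

    Log-normal cohesion guarantees positivity, which is one practical reason that distribution family suits a strength parameter.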

  11. Probability Distribution Estimated From the Minimum, Maximum, and Most Likely Values: Applied to Turbine Inlet Temperature Uncertainty

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.

    2004-01-01

    Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. 
A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref.1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
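    For illustration, the widely used PERT variant of this idea fixes the sum of the beta shape parameters at six, which yields a direct algebraic solution from the minimum, most likely, and maximum values. This is a generic sketch of the three-value-to-beta construction, not NASA Glenn's in-house method from ref. 1.

```python
def pert_beta(a, m, b):
    """Standard PERT beta shape parameters on [a, b] from min a, mode m, max b.

    Fixing alpha + beta = 6 makes the mean reduce to (a + 4m + b) / 6.
    """
    alpha = 1.0 + 4.0 * (m - a) / (b - a)
    beta = 1.0 + 4.0 * (b - m) / (b - a)
    mean = (a + 4.0 * m + b) / 6.0
    return alpha, beta, mean
```

    A symmetric case (a = 0, m = 0.5, b = 1) gives alpha = beta = 3; moving the mode toward the minimum lowers alpha relative to beta, producing the skewed shapes discussed above.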

  12. Experimental Modeling of a Formula Student Carbon Composite Nose Cone

    PubMed Central

    Fellows, Neil A.

    2017-01-01

    A numerical impact study is presented on a Formula Student (FS) racing car carbon composite nose cone. The effect of material model and model parameter selection on the numerical deceleration curves is discussed in light of the experimental deceleration data. The models show reasonable correlation in terms of the shape of the deceleration-displacement curves but do not match the peak deceleration values, with errors greater than 30%. PMID:28772982

  13. Calibration of a convective parameterization scheme in the WRF model and its impact on the simulation of East Asian summer monsoon precipitation

    DOE PAGES

    Yang, Ben; Zhang, Yaocun; Qian, Yun; ...

    2014-03-26

    Reasonably modeling the magnitude, south-north gradient, and seasonal propagation of precipitation associated with the East Asian Summer Monsoon (EASM) is a challenging task for the climate community. In this study we calibrate five key parameters in the Kain-Fritsch convection scheme in the WRF model using an efficient importance-sampling algorithm to improve the EASM simulation. We also examine the impacts of the improved EASM precipitation on other physical processes. Our results suggest similar model sensitivity and optimized parameter values across years with different EASM intensities. By applying the optimal parameters, the simulated precipitation and surface energy features are generally improved. The parameters related to the downdraft and entrainment coefficients and the CAPE consumption time (CCT) most sensitively affect the precipitation and atmospheric features. A larger downdraft coefficient or CCT decreases the heavy-rainfall frequency, while a larger entrainment coefficient delays convection development but builds up more potential for heavy rainfall events, causing a possible northward shift of the rainfall distribution. The CCT is the most sensitive parameter over the wet region, and the downdraft parameter plays a more important role over the drier northern region. Long-term simulations confirm that with the optimized parameters the precipitation distributions are better simulated in both weak and strong EASM years. Due to more reasonable simulated precipitation condensational heating, the monsoon circulations are also improved. Lastly, with the optimized parameters the biases in the retreat (onset) of Mei-yu (northern China rainfall) simulated by the standard WRF model are evidently reduced, and the seasonal and sub-seasonal variations of the monsoon precipitation are remarkably improved.

  14. Interspecies scaling of a camptothecin analogue: human predictions for intravenous topotecan using animal data.

    PubMed

    Ahlawat, P; Srinivas, N R

    2008-11-01

    As a class, camptothecin analogues, via the market entry of topotecan and irinotecan, have shown promise for the treatment of various solid tumours. Topotecan, in particular, was chosen as the substrate for allometric scaling and prediction of human parameter values for both total clearance (CL) and volume of distribution (V(ss)). The availability of published data in mouse, rat, dog, and monkey paved the way for interspecies scaling via allometry. Although it appeared that, at a minimum, mouse, rat, and dog would reasonably fit a three-species allometry scale-up, the inclusion of monkey data enabled a better prediction of the human parameter values for total topotecan, e.g., CL: allometric equation: 1.5234W(0.7865); predicted value = 43.04 l h(-1); observed CL = 24-53 l h(-1); V(ss): allometric equation: 1.1939W(1.0208); predicted value = 91.29 litres; observed V(ss) = 66-146 litres. The proximity of the allometric exponent values for CL (0.7865) and V(ss) (1.0208) to the suggested values of 0.75 and 1.00 was not only encouraging but also confirmed the applicability of the interspecies scaling approach for topotecan. The data suggest that allometric scaling approaches with suitable correction factors could potentially be used to predict the human pharmacokinetics of novel CPT analogues prospectively.
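    The two allometric equations quoted above can be evaluated directly. Assuming a nominal 70 kg human body weight (my assumption; the abstract does not state the weight used), they reproduce the reported human predictions.

```python
def allometric_predictions(weight_kg):
    """Evaluate the topotecan allometric equations quoted in the abstract.

    CL  (l/h)    = 1.5234 * W**0.7865
    Vss (litres) = 1.1939 * W**1.0208
    """
    cl = 1.5234 * weight_kg ** 0.7865
    vss = 1.1939 * weight_kg ** 1.0208
    return cl, vss
```

    For W = 70 kg this gives CL ≈ 43.0 l/h and Vss ≈ 91.3 litres, matching the predicted values of 43.04 l/h and 91.29 litres and lying within the observed ranges (24-53 l/h and 66-146 litres).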

  15. 78 FR 59773 - Proposed Information Collection (VA Request for Determination of Reasonable Value) Activity...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-27

    ... Request for Determination of Reasonable Value) Activity: Comment Request AGENCY: Veterans Benefits... to determine the reasonable value of properties for guaranteed or direct home loans. DATES: Written... Reasonable Value, VA Form 26-1805 and 26-1805-1. OMB Control Number: 2900-0045. Type of Review: Revision of a...

  16. 75 FR 61249 - Proposed Information Collection (VA Request for Determination of Reasonable Value) Activity...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-04

    ... Request for Determination of Reasonable Value) Activity: Comment Request AGENCY: Veterans Benefits... information needed to determine the reasonable value of properties for guaranteed or direct home loans. DATES... Request for Determination of Reasonable Value, VA Form 26-1805 and 26-1805-1. OMB Control Number: 2900...

  17. 75 FR 61858 - Proposed Information Collection (VA Request for Determination of Reasonable Value) Activity...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-06

    ... Request for Determination of Reasonable Value) Activity: Comment Request AGENCY: Veterans Benefits... to determine the reasonable value of properties for guaranteed or direct home loans. DATES: Written... Reasonable Value, VA Form 26-1805 and 26-1805-1. OMB Control Number: 2900-0045. Type of Review: Extension of...

  18. Nonstationary Extreme Value Analysis in a Changing Climate: A Software Package

    NASA Astrophysics Data System (ADS)

    Cheng, L.; AghaKouchak, A.; Gilleland, E.

    2013-12-01

    Numerous studies show that climatic extremes increased substantially in the second half of the 20th century. For this reason, analysis of extremes under a nonstationary assumption has received a great deal of attention. This paper presents a software package developed for estimation of return levels, return periods, and risks of climatic extremes in a changing climate. This MATLAB software package offers tools for analysis of climate extremes under both stationary and nonstationary assumptions. The Nonstationary Extreme Value Analysis package (hereafter, NEVA) provides an efficient and generalized framework for analyzing extremes using Bayesian inference. NEVA estimates the extreme value parameters using a Differential Evolution Markov Chain (DE-MC), which combines the genetic algorithm Differential Evolution (DE) for global optimization over the real parameter space with the Markov Chain Monte Carlo (MCMC) approach, and which has advantages in simplicity, speed of calculation, and convergence over conventional MCMC. NEVA also provides confidence intervals and uncertainty bounds for the estimated return levels based on the sampled parameters. NEVA integrates extreme value design concepts, data analysis tools, optimization, and visualization, explicitly designed to facilitate the analysis of extremes in the geosciences. The generalized input and output files of this software package make it attractive for users across different fields. Both the stationary and nonstationary components of the package are validated for a number of case studies using empirical return levels. The results show that NEVA reliably describes extremes and their return levels.
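    The empirical return levels used above to validate NEVA are typically computed from plotting positions. A minimal sketch using the Weibull formula T = (n + 1)/rank follows; the choice of plotting position is my assumption, as the abstract does not specify which one NEVA uses.

```python
def empirical_return_periods(annual_maxima):
    """Pair each observed annual maximum with its empirical return period.

    Weibull plotting position: the i-th largest of n values has exceedance
    probability i / (n + 1), hence an empirical return period of (n + 1) / i.
    """
    n = len(annual_maxima)
    ranked = sorted(annual_maxima, reverse=True)
    return [(x, (n + 1) / i) for i, x in enumerate(ranked, start=1)]
```

    Plotting these pairs against the return-level curve from the fitted extreme value distribution is the standard visual check of stationary and nonstationary fits alike.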

  19. Characteristics and Impact Factors of Parameter Alpha in the Nonlinear Advection-Aridity Method for Estimating Evapotranspiration at Interannual Scale in the Loess Plateau

    NASA Astrophysics Data System (ADS)

    Zhou, H.; Liu, W.; Ning, T.

    2017-12-01

    Land surface actual evapotranspiration plays a key role in the global water and energy cycles. Accurate estimation of evapotranspiration is crucial for understanding the interactions between the land surface and the atmosphere, as well as for managing water resources. The nonlinear advection-aridity approach was formulated by Brutsaert in 2015 to estimate actual evapotranspiration. Subsequently, this approach has been verified, applied, and developed by many scholars. The estimation of the parameter alpha (αe) of this approach, its impact factors, and correlation analyses have become important aspects of this research. Following the principle of this approach, the potential evapotranspiration (ETpo) (taking αe as 1) and the apparent potential evapotranspiration (ETpm) were calculated using meteorological data from 123 sites on the Loess Plateau and its surrounding areas. The mean spatial values of precipitation (P), ETpm, and ETpo for 13 catchments were then obtained by a CoKriging interpolation algorithm. Based on the runoff data of the 13 catchments, actual evapotranspiration was calculated with the catchment water balance equation at the hydrological-year scale (May to April of the following year), ignoring changes in catchment water storage. The parameter was thus estimated, and its relationships with P, ETpm, and the aridity index (ETpm/P) were further analyzed. The results showed that the annual parameter values generally ranged from 0.385 to 1.085, with an average of 0.751 and a standard deviation of 0.113. The mean annual value of αe showed distinct spatial patterns, with lower values in the north and higher values in the south. The annual-scale parameter was linearly related to annual P (R2=0.89) and ETpm (R2=0.49), while it exhibited a power-function relationship with the aridity index (R2=0.83).
    Because ETpm is a variable within the nonlinear advection-aridity approach and its effect is thereby already incorporated, a relationship between precipitation and the parameter (αe=1.0×10-3*P+0.301) was developed. The value of αe in this study is lower than those in the published literature; the reason is unclear at this point and needs further investigation. The preliminary application of the nonlinear advection-aridity approach on the Loess Plateau has shown promising results.
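    The fitted precipitation relationship quoted above is trivial to apply; a sketch follows, taking P in mm (an assumption on my part, since the abstract does not state the units).

```python
def alpha_e(precip_mm):
    """Fitted annual parameter relation from the abstract: alpha_e = 1.0e-3 * P + 0.301."""
    return 1.0e-3 * precip_mm + 0.301
```

    Note that P = 450 mm recovers the reported mean parameter value of 0.751, which is consistent with the semi-arid precipitation regime of the Loess Plateau.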

  20. Industrial Test of Boron-Free Q890D High Strength Steel Plates Used for Engineering Machinery

    NASA Astrophysics Data System (ADS)

    Dong, Ruifeng; Liu, Zetian; Gao, Jun

    The chemical composition, process parameters, and test results of boron-free Q890D high strength steel plate used for engineering machinery were studied. Steel plates of 16-40 mm thickness with good mechanical properties (yield strength of 930-970 MPa, tensile strength of 978-1017 MPa, elongation of 13.5-15%, and average impact energy above 100 J) were developed by improving steel purity, adopting a reasonable controlled rolling and cooling process, and using a reasonable off-line quenching and tempering process. The test plates show good crack resistance at a 60 °C preheat temperature, as there are no cracks in the surfaces, cross-sections, or roots of the welding joints.

  1. The Threshold Bias Model: A Mathematical Model for the Nomothetic Approach of Suicide

    PubMed Central

    Folly, Walter Sydney Dutra

    2011-01-01

    Background Comparative and predictive analyses of suicide data from different countries are difficult to perform due to varying approaches and the lack of comparative parameters. Methodology/Principal Findings A simple model (the Threshold Bias Model) was tested for comparative and predictive analyses of suicide rates by age. The model comprises a six-parameter distribution that was applied to USA suicide rates by age for the years 2001 and 2002. Linear extrapolations of the parameter values obtained for these years were then performed to estimate the values corresponding to 2003. The calculated distributions agreed reasonably well with the aggregate data. The model was also used to determine the age above which suicide rates become statistically observable in the USA, Brazil, and Sri Lanka. Conclusions/Significance The Threshold Bias Model has considerable potential applications in demographic studies of suicide. Moreover, since the model can be used to predict the evolution of suicide rates based on information extracted from past data, it will be of great interest to suicidologists and other researchers in the field of mental health. PMID:21909431

  2. The threshold bias model: a mathematical model for the nomothetic approach of suicide.

    PubMed

    Folly, Walter Sydney Dutra

    2011-01-01

    Comparative and predictive analyses of suicide data from different countries are difficult to perform due to varying approaches and the lack of comparative parameters. A simple model (the Threshold Bias Model) was tested for comparative and predictive analyses of suicide rates by age. The model comprises a six-parameter distribution that was applied to USA suicide rates by age for the years 2001 and 2002. Linear extrapolations of the parameter values obtained for these years were then performed to estimate the values corresponding to 2003. The calculated distributions agreed reasonably well with the aggregate data. The model was also used to determine the age above which suicide rates become statistically observable in the USA, Brazil, and Sri Lanka. The Threshold Bias Model has considerable potential applications in demographic studies of suicide. Moreover, since the model can be used to predict the evolution of suicide rates based on information extracted from past data, it will be of great interest to suicidologists and other researchers in the field of mental health.

  3. Analysis of a parallelized nonlinear elliptic boundary value problem solver with application to reacting flows

    NASA Technical Reports Server (NTRS)

    Keyes, David E.; Smooke, Mitchell D.

    1987-01-01

    A parallelized finite difference code based on the Newton method for systems of nonlinear elliptic boundary value problems in two dimensions is analyzed in terms of computational complexity and parallel efficiency. An approximate cost function depending on 15 dimensionless parameters is derived for algorithms based on stripwise and boxwise decompositions of the domain and a one-to-one assignment of the strip or box subdomains to processors. The sensitivity of the cost functions to the parameters is explored in regions of parameter space corresponding to model small-order systems with inexpensive function evaluations and also a coupled system of nineteen equations with very expensive function evaluations. The algorithm was implemented on the Intel Hypercube, and some experimental results for the model problems with stripwise decompositions are presented and compared with the theory. In the context of computational combustion problems, multiprocessors of either message-passing or shared-memory type may be employed with stripwise decompositions to realize speedup of O(n), where n is mesh resolution in one direction, for reasonable n.

  4. Multi Response Optimization of Process Parameters Using Grey Relational Analysis for Turning of Al-6061

    NASA Astrophysics Data System (ADS)

    Deepak, Doreswamy; Beedu, Rajendra

    2017-08-01

    Al-6061 is one of the most widely used materials in product manufacturing. Its major qualities are reasonably good strength, corrosion resistance, and thermal conductivity, which make it suitable for various applications. While manufacturing such products, companies strive to reduce production cost by increasing the Material Removal Rate (MRR); meanwhile, surface quality needs to be maintained at an acceptable level. This paper aims at finding a compromise between the high-MRR and low-surface-roughness requirements by applying Grey Relational Analysis (GRA). It presents the selection of controllable parameters (longitudinal feed, cutting speed, and depth of cut) to arrive at optimum values of MRR and surface roughness (Ra). The process parameters for the experiments were selected based on Taguchi's L9 array with two replications. Grey relational analysis, being well suited to multi-response optimization, was adopted for the optimization. The results show that feed rate is the most significant factor influencing MRR and surface finish.
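    A minimal sketch of the grey relational computation for the two responses follows: larger-the-better normalization for MRR, smaller-the-better for Ra, grey relational coefficients with the conventional distinguishing coefficient ζ = 0.5, and equal-weight averaging into a grade. The numeric inputs in the example are hypothetical, not the study's measurements.

```python
def gra_grades(mrr, ra, zeta=0.5):
    """Grey relational grades for two responses measured over the same runs.

    MRR is normalized larger-the-better, Ra smaller-the-better. With the
    deviation d = 1 - normalized value, the grey relational coefficient is
    zeta / (d + zeta) (since d_min = 0 and d_max = 1 after normalization).
    """
    n_mrr = [(v - min(mrr)) / (max(mrr) - min(mrr)) for v in mrr]  # larger-better
    n_ra = [(max(ra) - v) / (max(ra) - min(ra)) for v in ra]       # smaller-better
    grades = []
    for a, b in zip(n_mrr, n_ra):
        coefs = [zeta / ((1.0 - x) + zeta) for x in (a, b)]
        grades.append(sum(coefs) / 2.0)  # equal weights for the two responses
    return grades
```

    The run with the highest grade is the multi-response optimum; unequal response weights can be substituted for the simple average where one objective matters more.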

  5. Diagnostic value of lactate, procalcitonin, ferritin, serum-C-reactive protein, and other biomarkers in bacterial and viral meningitis: A cross-sectional study.

    PubMed

    Sanaei Dashti, Anahita; Alizadeh, Shekoofan; Karimi, Abdullah; Khalifeh, Masoomeh; Shoja, Seyed Abdolmajid

    2017-09-01

    There are many difficulties distinguishing bacterial from viral meningitis that could be reasonably solved using biomarkers. The aim of this study was to evaluate lactate, procalcitonin (PCT), ferritin, serum-CRP (C-reactive protein), and other known biomarkers in differentiating bacterial meningitis from viral meningitis in children. All children aged 28 days to 14 years with suspected meningitis who were admitted to Mofid Children's Hospital, Tehran, between October 2012 and November 2013, were enrolled in this prospective cross-sectional study. Children were divided into 2 groups of bacterial and viral meningitis, based on the results of cerebrospinal fluid (CSF) culture, polymerase chain reaction, and cytochemical profile. Diagnostic values of CSF parameters (ferritin, PCT, absolute neutrophil count [ANC], white blood cell count, and lactate) and serum parameters (PCT, ferritin, CRP, and erythrocyte sedimentation rate [ESR]) were evaluated. Among 50 patients with meningitis, 12 were diagnosed with bacterial meningitis. Concentrations of all markers were significantly different between bacterial and viral meningitis, except for serum (P = .389) and CSF (P = .136) PCT. The best rates of area under the receiver operating characteristic (ROC) curve (AUC) were achieved by lactate (AUC = 0.923) and serum-CRP (AUC = 0.889). The best negative predictive values (NPV) for bacterial meningitis were attained by ANC (100%) and lactate (97.1%). The results of our study suggest that ferritin and PCT are not strong predictive biomarkers. A combination of low CSF lactate, ANC, ESR, and serum-CRP could reasonably rule out bacterial meningitis.

  6. Diagnostic value of lactate, procalcitonin, ferritin, serum-C-reactive protein, and other biomarkers in bacterial and viral meningitis

    PubMed Central

    Sanaei Dashti, Anahita; Alizadeh, Shekoofan; Karimi, Abdullah; Khalifeh, Masoomeh; Shoja, Seyed Abdolmajid

    2017-01-01

    There are many difficulties distinguishing bacterial from viral meningitis that could be reasonably solved using biomarkers. The aim of this study was to evaluate lactate, procalcitonin (PCT), ferritin, serum-CRP (C-reactive protein), and other known biomarkers in differentiating bacterial meningitis from viral meningitis in children. All children aged 28 days to 14 years with suspected meningitis who were admitted to Mofid Children's Hospital, Tehran, between October 2012 and November 2013, were enrolled in this prospective cross-sectional study. Children were divided into 2 groups of bacterial and viral meningitis, based on the results of cerebrospinal fluid (CSF) culture, polymerase chain reaction, and cytochemical profile. Diagnostic values of CSF parameters (ferritin, PCT, absolute neutrophil count [ANC], white blood cell count, and lactate) and serum parameters (PCT, ferritin, CRP, and erythrocyte sedimentation rate [ESR]) were evaluated. Among 50 patients with meningitis, 12 were diagnosed with bacterial meningitis. Concentrations of all markers were significantly different between bacterial and viral meningitis, except for serum (P = .389) and CSF (P = .136) PCT. The best rates of area under the receiver operating characteristic (ROC) curve (AUC) were achieved by lactate (AUC = 0.923) and serum-CRP (AUC = 0.889). The best negative predictive values (NPV) for bacterial meningitis were attained by ANC (100%) and lactate (97.1%). The results of our study suggest that ferritin and PCT are not strong predictive biomarkers. A combination of low CSF lactate, ANC, ESR, and serum-CRP could reasonably rule out bacterial meningitis. PMID:28858084

  7. 75 FR 78808 - Agency Information Collection (VA Request for Determination of Reasonable Value) Activity Under...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-16

    ... Request for Determination of Reasonable Value) Activity Under OMB Review AGENCY: Veterans Benefits... Determination of Reasonable Value VA Form 26- 1805 and 26-1805-1. OMB Control Number: 2900-0045. Type of Review... alterations does not exceed the reasonable value; or if the loan is for repair, alteration, or improvements of...

  8. Sensitivity and specificity of univariate MRI analysis of experimentally degraded cartilage under clinical imaging conditions.

    PubMed

    Lukas, Vanessa A; Fishbein, Kenneth W; Reiter, David A; Lin, Ping-Chang; Schneider, Erika; Spencer, Richard G

    2015-07-01

    To evaluate the sensitivity and specificity of classification of pathomimetically degraded bovine nasal cartilage at 3 Tesla and 37°C using univariate MRI measurements of both pure parameter values and intensities of parameter-weighted images. Pre- and post-trypsin-degradation values of T1, T2, T2*, magnetization transfer ratio (MTR), and apparent diffusion coefficient (ADC), and corresponding weighted images, were analyzed. Classification based on the Euclidean distance was performed, and the quality of classification was assessed through sensitivity, specificity, and accuracy (ACC). The classifiers with the highest accuracy values were ADC (ACC = 0.82 ± 0.06), MTR (ACC = 0.78 ± 0.06), T1 (ACC = 0.99 ± 0.01), T2 derived from a three-dimensional (3D) spin-echo sequence (ACC = 0.74 ± 0.05), and T2 derived from a 2D spin-echo sequence (ACC = 0.77 ± 0.06), along with two of the diffusion-weighted signal intensities (b = 333 s/mm(2): ACC = 0.80 ± 0.05; b = 666 s/mm(2): ACC = 0.85 ± 0.04). In particular, T1 values differed substantially between the groups, resulting in atypically high classification accuracy. The second-best classifier, diffusion weighting with b = 666 s/mm(2), as well as all other parameters evaluated, exhibited substantial overlap between pre- and postdegradation groups, resulting in decreased accuracies. Classification according to T1 values showed excellent test characteristics (ACC = 0.99), with several other parameters also showing reasonable performance (ACC > 0.70). Of these, diffusion weighting is particularly promising as a potentially practical clinical modality. As in previous work, we again find that highly statistically significant group mean differences do not necessarily translate into accurate clinical classification rules. © 2014 Wiley Periodicals, Inc.
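    Classification by Euclidean distance to the group means of a single MR parameter can be sketched as follows (in one dimension the Euclidean distance is just an absolute difference). This uses resubstitution accuracy for brevity, whereas the study would use a proper validation scheme; the values in the example are hypothetical, not the study's measurements.

```python
def classify_and_accuracy(pre_vals, post_vals):
    """Nearest-centroid classification of 1-D parameter values by Euclidean distance.

    Each value is assigned to the group (pre- or post-degradation) whose mean
    it is closer to; returns the fraction classified correctly.
    """
    mu_pre = sum(pre_vals) / len(pre_vals)
    mu_post = sum(post_vals) / len(post_vals)
    correct = sum(1 for v in pre_vals if abs(v - mu_pre) <= abs(v - mu_post))
    correct += sum(1 for v in post_vals if abs(v - mu_post) < abs(v - mu_pre))
    return correct / (len(pre_vals) + len(post_vals))
```

    Well-separated groups yield ACC near 1.0, as with T1 above, while overlapping distributions drag the accuracy down even when the group means differ significantly.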

  9. Neuromusculoskeletal Model Calibration Significantly Affects Predicted Knee Contact Forces for Walking

    PubMed Central

    Serrancolí, Gil; Kinney, Allison L.; Fregly, Benjamin J.; Font-Llagunes, Josep M.

    2016-01-01

    Though walking impairments are prevalent in society, clinical treatments are often ineffective at restoring lost function. For this reason, researchers have begun to explore the use of patient-specific computational walking models to develop more effective treatments. However, the accuracy with which models can predict internal body forces in muscles and across joints depends on how well relevant model parameter values can be calibrated for the patient. This study investigated how knowledge of internal knee contact forces affects calibration of neuromusculoskeletal model parameter values and subsequent prediction of internal knee contact and leg muscle forces during walking. Model calibration was performed using a novel two-level optimization procedure applied to six normal walking trials from the Fourth Grand Challenge Competition to Predict In Vivo Knee Loads. The outer-level optimization adjusted time-invariant model parameter values to minimize passive muscle forces, reserve actuator moments, and model parameter value changes with (Approach A) and without (Approach B) tracking of experimental knee contact forces. Using the current guess for model parameter values but no knee contact force information, the inner-level optimization predicted time-varying muscle activations that were close to experimental muscle synergy patterns and consistent with the experimental inverse dynamic loads (both approaches). For all the six gait trials, Approach A predicted knee contact forces with high accuracy for both compartments (average correlation coefficient r = 0.99 and root mean square error (RMSE) = 52.6 N medial; average r = 0.95 and RMSE = 56.6 N lateral). In contrast, Approach B overpredicted contact force magnitude for both compartments (average RMSE = 323 N medial and 348 N lateral) and poorly matched contact force shape for the lateral compartment (average r = 0.90 medial and −0.10 lateral). 
Approach B had statistically higher lateral muscle forces and lateral optimal muscle fiber lengths but lower medial, central, and lateral normalized muscle fiber lengths compared to Approach A. These findings suggest that poorly calibrated model parameter values may be a major factor limiting the ability of neuromusculoskeletal models to predict knee contact and leg muscle forces accurately for walking. PMID:27210105

  10. Upper Bound Radiation Dose Assessment for Military Personnel at McMurdo Station, Antarctica, between 1962 and 1979 (2REV)

    DTIC Science & Technology

    2017-09-30

    This report is intended to assist McMurdo Station veterans, their dependents, the Department of Veterans Affairs (VA), and the Naval Dosimetry Center with the VA radiogenic disease claims process. The surviving record fragments include dose-assessment parameters such as an assumed occupancy of 24 h d-1 for 180 d (noted as a reasonable assumption) and organ-dependent dose coefficients for 5-µm particles.

  11. Noncritical quadrature squeezing through spontaneous polarization symmetry breaking.

    PubMed

    Garcia-Ferrer, Ferran V; Navarrete-Benlloch, Carlos; de Valcárcel, Germán J; Roldán, Eugenio

    2010-07-01

    We discuss the possibility of generating noncritical quadrature squeezing by spontaneous polarization symmetry breaking. We first consider Type II frequency-degenerate optical parametric oscillators but discard them for a number of reasons. Then we propose a four-wave-mixing cavity, in which the polarization of the output mode is always linear but has an arbitrary orientation. We show that in such a cavity, complete noise suppression in a quadrature of the output field occurs, irrespective of the parameter values.

  12. The possibilities of using scale-selective polarization cartography in diagnostics of myocardium pathologies

    NASA Astrophysics Data System (ADS)

    Ushenko, Yu. A.; Wanchuliak, O. Y.

    2013-06-01

    The optical model of polycrystalline networks of myocardium protein fibrils is presented. A technique for determining the coordinate distribution of the polarization azimuth at the points of laser images of myocardium histological sections is suggested. The results of investigating the interrelation between the values of statistical parameters (statistical moments of the 1st to 4th orders) are presented; these parameters characterize the distributions of wavelet coefficients of polarization maps of myocardium layers and their relation to the cause of death.

  13. The A and m coefficients in the Bruun/Dean equilibrium profile equation seen from the Arctic

    USGS Publications Warehouse

    Are, F.; Reimnitz, E.

    2008-01-01

    The Bruun/Dean relation between water depth and distance from the shore with a constant profile shape factor is widely used to describe shoreface profiles in temperate environments. However, it has been shown that the sediment scale parameter (A) and the profile shape factor (m) are interrelated variables. An analysis of 63 Arctic erosional shoreface profiles shows that both coefficients are highly variable. The relative frequency of the average m value is only 16% for a class width of 0.1, and no other m value frequency exceeds 21%. Therefore, there is insufficient reason to use the average m to characterize Arctic shoreface profile shape. The shape of each profile has a definite combination of A and m values. Coefficients A and m show a distinct inverse relationship, as in temperate climates. A dependence of m values on coastal sediment grain size is seen, with m decreasing as grain size increases. With constant m = 0.67, parameter A obtains the dimension unit m^(1/3); yet A equals the water depth in meters at 1 m from the water's edge. This fact and the variability of parameter m testify that the Bruun/Dean equation is essentially an empirical formula. There is no need to give any measurement unit to parameter A, but the International System of Units (SI) has to be used in applying the Bruun/Dean equation to shoreface profiles. A comparison of the shape of Arctic shoreface profiles with those of temperate environments shows surprising similarity. Therefore, the conclusions reached in this Arctic paper seem to apply also to temperate environments.
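The equilibrium profile h = A·x^m can be fitted for an individual profile by linear regression in log-log space, which also makes the A-m interdependence easy to inspect. A minimal sketch with synthetic depth data (illustrative values, not the 63 Arctic profiles of this study):

```python
import numpy as np

def fit_dean_profile(x, h):
    """Fit h = A * x**m by linear regression in log-log space.
    Returns (A, m)."""
    m, logA = np.polyfit(np.log(x), np.log(h), 1)  # slope = m, intercept = ln(A)
    return np.exp(logA), m

# Synthetic profile generated with A = 0.12, m = 0.67 plus 1% noise
rng = np.random.default_rng(0)
x = np.linspace(1.0, 500.0, 50)           # distance from shore, m
h = 0.12 * x**0.67 * (1 + 0.01 * rng.standard_normal(50))

A, m = fit_dean_profile(x, h)
print(f"A = {A:.3f}, m = {m:.3f}")
```

Fitting each profile this way, rather than fixing m = 0.67, is what exposes the inverse A-m relationship described above.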

  14. Comparative study on the water quality status of Andra reservoir and Denkada anicut constructed on Champavati River, Vizianagaram, India

    NASA Astrophysics Data System (ADS)

    Kumar, G. V. S. R. Pavan; Krishna, K. Rama

    2017-06-01

    The present study was carried out over a period of 3 years, from 2010 to 2013, to itemize the various physico-chemical parameters, irrigation water quality parameters and heavy metals in Champavathi River waters at Andra reservoir and Denkada anicut. Water samples were collected from the chosen sampling stations of the two reservoirs every 4 months and analyzed as per APHA standard methods. The results obtained were compared with IS 10500 standards and found to be well within the prescribed values. Although the obtained values were within the prescribed standards, the water quality index and the concentrations of certain parameters such as calcium, magnesium, sodium and potassium were higher in the waters of Andra reservoir than in the Denkada anicut, while the concentration of nitrite was higher in the water sample analyzed from Denkada anicut. Except for silicon, all the other metals were found to be below the detection limits in the two reservoir waters. The reasons for these differences are examined in the present study. From the analysis reports, it was found that the water from the two reservoirs is fit for irrigation, agricultural, industrial and domestic purposes.

  15. Faraday Rotation Measurement with the SMAP Radiometer

    NASA Technical Reports Server (NTRS)

    Le Vine, D. M.; Abraham, S.

    2016-01-01

    Faraday rotation is an issue that needs to be taken into account in remote sensing of parameters such as soil moisture and ocean salinity at L-band. This is especially important for SMAP because Faraday rotation varies with azimuth around the conical scan. SMAP retrieves Faraday rotation in situ using the ratio of the third and second Stokes parameters, a procedure demonstrated successfully by Aquarius. This manuscript reports the performance of this algorithm on SMAP. Over the ocean the process works reasonably well and results compare favorably with expected values. Over land, however, the inhomogeneous nature of the scene results in much noisier and, in some cases, unreliable estimates of Faraday rotation.
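The retrieval idea, estimating the rotation angle from the ratio of the third to the second Stokes parameter via tan(2Ω) = T3/T2, can be sketched in simplified geometry (illustrative numbers; the operational SMAP algorithm includes additional corrections):

```python
import math

def faraday_rotation_angle(T2, T3):
    """Estimate the polarization rotation angle (radians) from the second
    and third Stokes parameters, via tan(2*omega) = T3 / T2."""
    return 0.5 * math.atan2(T3, T2)

def rotate_stokes(T2, T3, omega):
    """Apply a polarization-basis rotation by omega to (T2, T3)."""
    c, s = math.cos(2 * omega), math.sin(2 * omega)
    return c * T2 - s * T3, s * T2 + c * T3

# Start from an unrotated scene (T3 = 0), apply a known rotation,
# then recover it from the rotated Stokes parameters.
T2_true, T3_true = 40.0, 0.0          # kelvin, illustrative
omega_true = math.radians(7.0)
T2_obs, T3_obs = rotate_stokes(T2_true, T3_true, omega_true)

omega_est = faraday_rotation_angle(T2_obs, T3_obs)
print(f"recovered rotation: {math.degrees(omega_est):.2f} deg")
```

The assumption T3 = 0 before rotation is what makes the ratio a clean estimator over ocean; scene inhomogeneity over land violates it, which is consistent with the noisier land retrievals reported.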

  16. Rendering of HDR content on LDR displays: an objective approach

    NASA Astrophysics Data System (ADS)

    Krasula, Lukáš; Narwaria, Manish; Fliegel, Karel; Le Callet, Patrick

    2015-09-01

    Dynamic range compression (or tone mapping) of HDR content is an essential step towards rendering it on traditional LDR displays in a meaningful way. This is, however, non-trivial, one reason being that tone mapping operators (TMOs) usually need content-specific parameters to achieve the said goal. While subjective TMO parameter adjustment is the most accurate, it may not be easily deployable in many practical applications. Its subjective nature can also influence the comparison of different operators. Thus, there is a need for objective TMO parameter selection to automate the rendering process. To that end, we investigate a new objective method for TMO parameter optimization. Our method is based on quantification of contrast reversal and naturalness. As an important advantage, it does not require any prior knowledge about the input HDR image and works independently of the TMO used. Experimental results using a variety of HDR images and several popular TMOs demonstrate the value of our method in comparison to default TMO parameter settings.
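The selection loop can be illustrated as a grid search over a TMO parameter that minimizes a combined objective. Everything below is a hypothetical stand-in: a global Reinhard-style operator, a crude ordering-flip proxy for contrast reversal, and mean-brightness distance as a naturalness proxy, not the paper's actual metrics:

```python
import numpy as np

def tone_map(hdr, a):
    """Global Reinhard-style operator: scale by key 'a', then compress."""
    scaled = a * hdr / hdr.mean()
    return scaled / (1.0 + scaled)       # output in [0, 1)

def contrast_reversal(hdr, ldr):
    """Fraction of neighboring-pixel pairs whose brightness ordering flips
    after tone mapping (proxy; monotone global TMOs give 0)."""
    dh = np.sign(np.diff(hdr.ravel()))
    dl = np.sign(np.diff(ldr.ravel()))
    return np.mean(dh * dl < 0)

def naturalness_penalty(ldr, target_mean=0.5):
    """Distance of mean brightness from a mid-gray target (proxy)."""
    return abs(ldr.mean() - target_mean)

def select_parameter(hdr, candidates):
    """Pick the TMO parameter minimizing the combined objective."""
    def objective(a):
        ldr = tone_map(hdr, a)
        return contrast_reversal(hdr, ldr) + naturalness_penalty(ldr)
    return min(candidates, key=objective)

rng = np.random.default_rng(1)
hdr = np.exp(rng.normal(0.0, 2.0, size=(64, 64)))  # synthetic HDR luminance
best_a = select_parameter(hdr, candidates=np.linspace(0.1, 2.0, 20))
print("selected key value:", float(best_a))
```

The point of the sketch is the structure: both terms are computed from the input and output images alone, so no reference rendering or operator-specific knowledge is needed.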

  17. Assessing factors that influence deviations between measured and calculated reference evapotranspiration

    NASA Astrophysics Data System (ADS)

    Rodny, Marek; Nolz, Reinhard

    2017-04-01

    Evapotranspiration (ET) is a fundamental component of the hydrological cycle, but challenging to quantify. Lysimeter facilities, for example, can be installed and operated to determine ET, but they are costly and represent only point measurements. Therefore, lysimeter data are traditionally used to develop, calibrate, and validate models that allow calculating reference evapotranspiration (ET0) from meteorological data, which can be measured more easily. The standardized form of the well-known FAO Penman-Monteith equation (ASCE-EWRI) is recommended as a standard procedure for estimating ET0 and subsequently plant water requirements. Applied and validated under different climatic conditions, the Penman-Monteith equation is generally known to deliver proper results. On the other hand, several studies have documented deviations between measured and calculated ET0 depending on environmental conditions. Potential reasons are, for example, differing or varying surface characteristics of the lysimeter and the location where the weather instruments are placed. Advection of sensible heat (transport of dry and hot air from surrounding areas) might be another reason for deviating ET values. However, elaborating the causal processes is complex and requires comprehensive data of high quality and specific analysis techniques. In order to assess the influencing factors, we correlated differences between measured and calculated ET0 with pre-selected meteorological parameters and related system parameters. The basic data were hourly ET0 values from a weighing lysimeter (ET0_lys) with a surface area of 2.85 m² (reference crop: frequently irrigated grass), weather data (air and soil temperature, relative humidity, air pressure, wind velocity, and solar radiation), and soil water content at different depths. ET0_ref was calculated in hourly time steps according to the standardized procedure after ASCE-EWRI (2005).
Deviations between both datasets were calculated as ET0_lys − ET0_ref and separated into positive and negative values. For further interpretation, we calculated daily sums of these values. The respective daily difference (positive or negative) served as the independent variable (x) in a linear correlation with a selected parameter as the dependent variable (y). Quality of correlation was evaluated by means of coefficients of determination (R2). When ET0_lys > ET0_ref, the differences were only weakly correlated with the selected parameters. Hence, evaluating the causal processes leading to underestimation of measured hourly ET0 seems to require a more rigorous approach. On the other hand, when ET0_lys < ET0_ref, the differences correlated considerably with the meteorological parameters and related system parameters. Detailed interpretation of the individual correlations indicated different (or varying) surface characteristics between the irrigated lysimeter and the nearby (non-irrigated) meteorological station.
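The correlation step described above can be reproduced generically: daily deviations serve as x, a candidate meteorological parameter as y, and R2 quantifies the association separately for positive and negative deviation days. A sketch with synthetic data (not the lysimeter dataset; wind speed is constructed, for illustration, to drive ET0_ref above ET0_lys on windy days):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of a simple linear fit y = a*x + b."""
    a, b = np.polyfit(x, y, 1)
    ss_res = np.sum((y - (a * x + b)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(2)
n_days = 200
# Synthetic daily deviations ET0_lys - ET0_ref (mm/day) and wind speed (m/s)
deviation = rng.normal(0.0, 0.4, n_days)
wind = 2.0 + 2.0 * np.clip(-deviation, 0.0, None) + rng.normal(0.0, 0.2, n_days)

neg = deviation < 0                      # days with ET0_lys < ET0_ref
r2_neg = r_squared(deviation[neg], wind[neg])
r2_pos = r_squared(deviation[~neg], wind[~neg])
print(f"R2 when ET0_lys < ET0_ref: {r2_neg:.2f}")
print(f"R2 when ET0_lys > ET0_ref: {r2_pos:.2f}")
```

The asymmetry in R2 between the two subsets mirrors the finding above: only one sign of deviation carries a clear meteorological signal.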

  18. Relationship Between Earthquake b-Values and Crustal Stresses in a Young Orogenic Belt

    NASA Astrophysics Data System (ADS)

    Wu, Yih-Min; Chen, Sean Kuanhsiang; Huang, Ting-Chung; Huang, Hsin-Hua; Chao, Wei-An; Koulakov, Ivan

    2018-02-01

    It has been reported that earthquake b-values decrease linearly with differential stress in the continental crust and in subduction zones. Here we report a regression-derived relation between earthquake b-values and crustal stresses, expressed by the Anderson fault parameter (Aϕ), in the young orogenic belt of Taiwan. This regression relation is well established by using a large and complete earthquake catalog for Taiwan. The data set consists of b-values and Aϕ values derived from relocated earthquakes and focal mechanisms, respectively. Our results show that b-values decrease linearly with the Aϕ values at crustal depths, with a high correlation coefficient of −0.9. Thus, b-values could be used as stress indicators for orogenic belts. In Taiwan, moreover, the state of stress inferred from b-values correlates relatively well with the surface geological setting. Temporal variations in the b-value could constitute one of the main reasons for the spatial heterogeneity of b-values. We therefore suggest that b-values could be highly sensitive to temporal stress variations.
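The b-value side of such a regression is commonly estimated with the Aki-Utsu maximum-likelihood formula, b = log10(e) / (mean(M) − Mc), and the b-Aϕ relation then follows from a linear fit. A sketch with synthetic catalogs (the b and Aϕ numbers are illustrative, not the Taiwan data):

```python
import numpy as np

def b_value(magnitudes, mc):
    """Aki-Utsu maximum-likelihood b-value for a complete catalog."""
    return np.log10(np.e) / (magnitudes.mean() - mc)

rng = np.random.default_rng(3)
mc = 2.0
# Synthetic catalogs: magnitude excess above Mc is exponential with
# rate b*ln(10), which is the Gutenberg-Richter distribution.
true_b = np.array([1.2, 1.0, 0.9, 0.8, 0.7])
aphi = np.array([0.5, 1.0, 1.5, 2.0, 2.5])     # Anderson fault parameter
est_b = np.array([
    b_value(mc + rng.exponential(1.0 / (b * np.log(10)), 5000), mc)
    for b in true_b
])

slope, intercept = np.polyfit(aphi, est_b, 1)
r = np.corrcoef(aphi, est_b)[0, 1]
print(f"b = {slope:.3f} * Aphi + {intercept:.3f}, r = {r:.2f}")
```

Because the synthetic b-values were chosen to fall with Aϕ, the fit recovers a negative slope and a strongly negative r, the qualitative pattern reported for Taiwan.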

  19. Uncertainty analyses of the calibrated parameter values of a water quality model

    NASA Astrophysics Data System (ADS)

    Rode, M.; Suhr, U.; Lindenschmidt, K.-E.

    2003-04-01

    For river basin management, water quality models are increasingly used for the analysis and evaluation of different management measures. However, substantial uncertainties exist in parameter values depending on the available calibration data. In this paper an uncertainty analysis for a water quality model is presented which considers the impact of the available model calibration data and the variance of the input variables. The investigation was conducted on the basis of four extensive flow-time-related longitudinal surveys in the River Elbe in the years 1996 to 1999, with varying discharges and seasonal conditions. For the model calculations the deterministic model QSIM of the BfG (Germany) was used. QSIM is a one-dimensional water quality model that uses standard algorithms for hydrodynamics and phytoplankton dynamics in running waters, e.g. Michaelis-Menten/Monod kinetics, which are used in a wide range of models. The multi-objective calibration of the model was carried out with the nonlinear parameter estimator PEST. The results show that for individual flow-time-related measuring surveys very good agreement between model calculations and measured values can be obtained. If these parameters are applied to deviating boundary conditions, however, substantial errors in the model calculation can occur. These uncertainties can be decreased with an enlarged calibration database: more reliable model parameters can be identified, which supply reasonable results for broader boundary conditions. Extending the application of the parameter set to a wider range of water quality conditions leads to a slight reduction of the model precision for the specific water quality situation. Moreover, the investigations show that highly variable water quality variables like algal biomass always permit a lower forecast accuracy than variables with lower coefficients of variation, such as nitrate.

  20. The ethical dimensions of wildlife disease management in an evolutionary context

    PubMed Central

    Crozier, GKD; Schulte-Hostedde, Albrecht I

    2014-01-01

    Best practices in wildlife disease management require robust evolutionary ecological research (EER). This means not only basing management decisions on evolutionarily sound reasoning, but also conducting management in a way that actively contributes to the on-going development of that research. Because good management requires good science, and good science is ‘good’ science (i.e., effective science is often science conducted ethically), good management therefore also requires practices that accord with sound ethical reasoning. To that end, we propose a two-part framework to assist decision makers to identify ethical pitfalls of wildlife disease management. The first part consists of six values – freedom, fairness, well-being, replacement, reduction, and refinement; these values, developed for the ethical evaluation of EER practices, are also well suited for evaluating the ethics of wildlife disease management. The second part consists of a decision tree to help identify the ethically salient dimensions of wildlife disease management and to guide managers toward ethically responsible practices in complex situations. While ethical reasoning cannot be used to deduce from first principles what practices should be undertaken in every given set of circumstances, it can establish parameters that bound what sorts of practices will be acceptable or unacceptable in certain types of scenarios. PMID:25469160

  1. Quantitative property-property relationship (QPPR) approach in predicting flotation efficiency of chelating agents as mineral collectors.

    PubMed

    Natarajan, R; Nirdosh, I; Venuvanalingam, P; Ramalingam, M

    2002-07-01

    The QPPR approach has been used to model cupferrons as mineral collectors. Separation efficiencies (Es) of these chelating agents have been correlated with property parameters, namely log P, log Koc, the substituent constant sigma, and Mulliken and ESP-derived charges, using multiple regression analysis. Es of substituted cupferrons in the flotation of a uranium ore could be predicted within experimental error by either log P or log Koc together with an electronic parameter. However, when a halo, methoxy or phenyl substituent was para to the chelating group, the experimental Es was greater than the predicted values. Inclusion of a Boolean-type indicator parameter significantly improved the predictive power. The approach has been extended to 2-aminothiophenols used to float a zinc ore, and the correlations were found to be reasonably good.
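The model form described, a physicochemical descriptor, an electronic parameter, and a Boolean indicator for para substituents, amounts to an ordinary least-squares fit with a dummy variable. A sketch with synthetic descriptor values (hypothetical coefficients; the actual cupferron data are not reproduced here):

```python
import numpy as np

def fit_es_model(log_p, sigma, para_indicator, es):
    """Fit Es = c0 + c1*logP + c2*sigma + c3*I_para by least squares."""
    X = np.column_stack([np.ones_like(log_p), log_p, sigma, para_indicator])
    coef, *_ = np.linalg.lstsq(X, es, rcond=None)
    return coef

rng = np.random.default_rng(4)
n = 12
log_p = rng.uniform(1.0, 4.0, n)
sigma = rng.uniform(-0.3, 0.8, n)
para = (rng.random(n) < 0.4).astype(float)   # 1 if para halo/methoxy/phenyl
# Synthetic separation efficiencies with a genuine para boost of 8 units
es = 50 + 6.0 * log_p - 10.0 * sigma + 8.0 * para + rng.normal(0, 0.5, n)

coef = fit_es_model(log_p, sigma, para, es)
print("intercept, logP, sigma, para coefficients:", np.round(coef, 2))
```

The recovered coefficient on the indicator column captures the systematic underprediction for para-substituted compounds, which is exactly the role the Boolean parameter plays in the abstract.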

  2. Evaluation of exercise capacity after severe stroke using robotics-assisted treadmill exercise: a proof-of-concept study.

    PubMed

    Stoller, O; de Bruin, E D; Schindelholz, M; Schuster, C; de Bie, R A; Hunt, K J

    2013-01-01

    Robotics-assisted treadmill exercise (RATE) with focus on motor recovery has become popular in early post-stroke rehabilitation but low endurance for exercise is highly prevalent in these individuals. This study aimed to develop an exercise testing method using robotics-assisted treadmill exercise to evaluate aerobic capacity after severe stroke. Constant load testing (CLT) based on body weight support (BWS) control, and incremental exercise testing (IET) based on guidance force (GF) control were implemented during RATE. Analyses focussed on step change, step response kinetics, and peak performance parameters of oxygen uptake. Three subjects with severe motor impairment 16-23 days post-stroke were included. CLT yielded reasonable step change values in oxygen uptake, whereas response kinetics of oxygen uptake showed low goodness of fit. Peak performance parameters were not obtained during IET. Exercise testing in post-stroke individuals with severe motor impairments using a BWS control strategy for CLT is deemed feasible and safe. Our approach yielded reasonable results regarding cardiovascular performance parameters. IET based on GF control does not provoke peak cardiovascular performance due to uncoordinated walking patterns. GF control needs further development to optimally demand active participation during RATE. The findings warrant further research regarding the evaluation of exercise capacity after severe stroke.

  3. Investigation of Barium Ferrite, Searching for Soft Magnetic Materials in High Frequency Applications

    NASA Astrophysics Data System (ADS)

    Wu, Shuang; Kanada, Isao; Mewes, Tim; Mewes, Claudia; Mankey, Gary; Ariake, Yusuke; Suzuki, Takao

    Soft ferrites have been extensively applied in high-frequency devices. Among them, Ba-ferrites substituted with Mn and Ti are particularly attractive as future soft magnetic material candidates for advanced high-frequency device applications. However, very little is known about their intrinsic magnetic properties, such as the damping parameter, which is crucial for developing high-frequency devices. In the present study, effort has been focused on the fabrication of single-crystal Ba-ferrites and the measurement of the damping parameter by FMR. Ba-ferrite samples consisting of many grains of various sizes have been prepared. The saturation magnetization and the magnetic anisotropy field of the sample are in reasonable agreement with values in the literature. The resonance positions in the FMR spectra over a wide frequency range also comply with theoretical predictions. However, the complex resonance shapes observed make it difficult to extract the dynamic magnetic properties. Possible reasons are the demagnetization field originating from the irregular sample shape or the existence of multiple grains in the samples. S.W. acknowledges the support under the TDK Scholar Program.

  4. Proton-neutron sdg boson model and spherical-deformed phase transition

    NASA Astrophysics Data System (ADS)

    Otsuka, Takaharu; Sugita, Michiaki

    1988-12-01

    The spherical-deformed phase transition in nuclei is described in terms of the proton-neutron sdg interacting boson model. The sdg hamiltonian is introduced to model the pairing+quadrupole interaction. The phase transition is reproduced in this framework as a function of the boson number in the Sm isotopes, while all parameters in the hamiltonian are kept constant at values reasonable from the shell-model point of view. The sd IBM is derived from this model through the renormalization of g-boson effects.

  5. Installation Restoration Program. Operable Unit B1 Remedial Investigation/Feasibility Study. Appendices

    DTIC Science & Technology

    1993-06-30

    agency (California Health and Safety Code [H&SC], Section 25179.6[a][2]). 1.2.5 Porter-Cologne Water Quality Act and Related Policies The Porter-Cologne...which endangers the comfort, repose, and health and safety of the public. SMAQMD Rule 403 requires that all reasonable precautions be taken not to cause...concentration values on Figures C-1 through C-10. Differences in soil physical parameters, listed in Table C-2 for different compounds, were questioned

  6. On real statistics of relaxation in gases

    NASA Astrophysics Data System (ADS)

    Kuzovlev, Yu. E.

    2016-02-01

    Using the example of a particle interacting with an ideal gas, it is shown that the statistics of collisions in statistical mechanics at any value of the gas rarefaction parameter qualitatively differ from those conjugated with Boltzmann's hypothetical molecular chaos and kinetic equation. In reality, the probability of collisions of the particle is itself random. Because of that, the relaxation of the particle velocity acquires a power-law asymptotic behavior. An estimate of its exponent is suggested on the basis of simple kinematic reasons.

  7. Acoustic and electric signals from lightning

    NASA Technical Reports Server (NTRS)

    Balachandran, N. K.

    1983-01-01

    Observations of infrasound, apparently generated by the collapse of the electrostatic field in the thundercloud, are presented along with electric field measurements and high-frequency thunder signals. The frequency of the infrasound pulse is about 1 Hz and its amplitude a few microbars. The observations seem to confirm some of the theoretical predictions of Wilson (1920) and Dessler (1973). The signal is dominated by a compressional phase and seems to be beamed vertically. Calculation of the parameters of the charged region using the infrasound signal gives reasonable values.

  8. Robust Means for Estimating Black Carbon-Water Sorption Coefficients of Organic Contaminants in Sediments

    DTIC Science & Technology

    2015-07-01

    multiple lines of reasoning should be used to build confidence in what we believe is going on at any particular site. Hence, it will certainly be more...site depends on its Kd value (Fernandez et al., 2009). Consequently, we need to understand factors that go into the Kd sorption parameter in order...models of sorption to GAC (Kamlet et al. 1985, Luehrs et al. 1996, Poole and Poole 1997, Shih and Gschwend 2009) and multi-walled carbon nanotubes ( MWCNT

  9. Analysis of spirometry results in hospitalized patients aged over 65 years.

    PubMed

    Wróblewska, Izabela; Oleśniewicz, Piotr; Kurpas, Donata; Sołtysik, Mariusz; Błaszczuk, Jerzy

    2015-01-01

    The growing population of the elderly, as well as the occurrence of coexisting diseases and polypharmacy, is the reason why diseases of patients aged ≥65 years belong to the major issues of contemporary medicine. Respiratory system diseases are among the most frequent diseases of the elderly. They are difficult to diagnose because of the specificity of this patient group, which is the reason for increased mortality among seniors caused by underdiagnosis. The study objective was to assess the factors influencing spirometry results in hospitalized patients aged ≥65 years with respiratory system disorders. In the research, 217 (100%) patients aged ≥65 years who underwent spirometry at the Regional Medical Center of the Jelenia Góra Valley Hospital in Poland were analyzed. In the statistical analysis, the STATISTICA 9.1 program, the t-test, the Shapiro-Wilk test, the ANOVA test, and Scheffé's test were applied. The majority of the patients (59.4%) were treated in the hospital. The most frequent diagnosis was malignant neoplasm (18%). The study showed a statistically significant dependence between the forced vital capacity (FVC), forced expiratory volume in 1 second (FEV1), and FEV1/FVC parameters and the time of hospitalization, as well as between the FVC and FEV1 parameters and the age of patients. The FVC parameter values turned out to be dependent on the main diagnosis. The highest results were noted in patients with a diagnosis of sleep apnea or benign neoplasm. A low FVC index can reflect restrictive ventilation defects, which was supported by the performed analyses. The highest FEV1/FVC values were observed in nonsmokers, which confirms the influence of nicotine addiction on the incidence of respiratory system diseases. The respondents' sex and the established diagnosis statistically significantly influenced the FVC index result, and the diet influenced the FEV1/FVC parameter result.

  10. Analysis of spirometry results in hospitalized patients aged over 65 years

    PubMed Central

    Wróblewska, Izabela; Oleśniewicz, Piotr; Kurpas, Donata; Sołtysik, Mariusz; Błaszczuk, Jerzy

    2015-01-01

    Introduction and objective The growing population of the elderly, as well as the occurrence of coexisting diseases and polypharmacy, is the reason why diseases of patients aged ≥65 years belong to the major issues of contemporary medicine. Respiratory system diseases are among the most frequent diseases of the elderly. They are difficult to diagnose because of the specificity of this patient group, which is the reason for increased mortality among seniors caused by underdiagnosis. The study objective was to assess the factors influencing spirometry results in hospitalized patients aged ≥65 years with respiratory system disorders. Material and methods In the research, 217 (100%) patients aged ≥65 years who underwent spirometry at the Regional Medical Center of the Jelenia Góra Valley Hospital in Poland were analyzed. In the statistical analysis, the STATISTICA 9.1 program, the t-test, the Shapiro–Wilk test, the ANOVA test, and Scheffé's test were applied. Results The majority of the patients (59.4%) were treated in the hospital. The most frequent diagnosis was malignant neoplasm (18%). The study showed a statistically significant dependence between the forced vital capacity (FVC), forced expiratory volume in 1 second (FEV1), and FEV1/FVC parameters and the time of hospitalization, as well as between the FVC and FEV1 parameters and the age of patients. The FVC parameter values turned out to be dependent on the main diagnosis. The highest results were noted in patients with a diagnosis of sleep apnea or benign neoplasm. A low FVC index can reflect restrictive ventilation defects, which was supported by the performed analyses. The highest FEV1/FVC values were observed in nonsmokers, which confirms the influence of nicotine addiction on the incidence of respiratory system diseases. Conclusion The respondents' sex and the established diagnosis statistically significantly influenced the FVC index result, and the diet influenced the FEV1/FVC parameter result. 
PMID:26170646

  11. Evaluation of the field-effect carrier mobility in single-grain (and polycrystalline) organic semiconductors

    NASA Astrophysics Data System (ADS)

    Kwok, H. L.

    2005-08-01

    Mobility in single-grain and polycrystalline organic field-effect transistors (OFETs) is of interest because it affects the performance of these devices. While reasonable values of the hole mobility have been measured in pentacene OFETs, our understanding of the detailed transport mechanisms is, relatively speaking, somewhat weak, and there is a lack of precise knowledge of the effects of materials parameters such as the site spacing, the localization length, the rms width of the density of states (DOS), the escape frequency, etc. This work attempts to analyze the materials parameters of pentacene OFETs extracted from data reported in the literature. We developed a model for the mobility parameter from first principles and extracted the relevant materials parameters. According to our analyses, the transport mechanisms in the OFETs are fairly complex and the electrical properties are dominated by the properties of the trap states. As observed, the single-grain OFETs, having smaller rms widths of the DOS than the polycrystalline OFETs, also had higher hole mobilities. Our results showed that increasing the gate bias could have a similar but smaller effect. Potentially, increasing the escape frequency is a more effective way to raise the hole mobility, and this parameter appears to be affected by changes in the molecular structure and in the degree of "disorder".

  12. Estimated value of insurance premium due to Citarum River flood by using Bayesian method

    NASA Astrophysics Data System (ADS)

    Sukono; Aisah, I.; Tampubolon, Y. R. H.; Napitupulu, H.; Supian, S.; Subiyanto; Sidi, P.

    2018-03-01

    Citarum river floods in South Bandung, West Java, Indonesia, happen almost every year. They cause property damage, producing economic loss. The risk of loss can be mitigated by joining a flood insurance program. In this paper, we discuss the estimation of insurance premiums due to Citarum river floods by the Bayesian method. It is assumed that the flood-loss risk data follow a Pareto distribution with a fat right tail. The estimation of the distribution model parameters is done using the Bayesian method. First, parameter estimation is performed under the assumption that the prior comes from the Gamma distribution family, while the observation data follow a Pareto distribution. Second, flood loss data are simulated based on the probability of damage in each flood-affected area. The result of the analysis shows that the estimated premium value based on the pure premium principle is as follows: for a loss value of IDR 629.65 million, a premium of IDR 338.63 million; for a loss of IDR 584.30 million, a premium of IDR 314.24 million; and for a loss value of IDR 574.53 million, a premium of IDR 308.95 million. The premium estimator can serve as a reference for setting a reasonable premium, one that neither overburdens the insured nor causes a loss to the insurer.
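The Gamma-prior-plus-Pareto-likelihood setup is a standard conjugate pair: with a known scale x_m, a Gamma(a, b) prior on the Pareto shape α yields a Gamma(a + n, b + Σ ln(x_i/x_m)) posterior, and the pure premium is the expected loss α·x_m/(α − 1). A sketch with a hypothetical loss sample (illustrative numbers, not the Citarum data, and a plug-in posterior-mean estimate rather than the authors' full procedure):

```python
import math

def posterior_alpha(losses, x_m, a_prior, b_prior):
    """Conjugate update: Pareto(alpha, x_m) likelihood with a Gamma(a, b)
    prior on alpha gives a Gamma(a + n, b + sum(log(x_i / x_m))) posterior.
    Returns the posterior-mean estimate of alpha."""
    n = len(losses)
    s = sum(math.log(x / x_m) for x in losses)
    return (a_prior + n) / (b_prior + s)

def pure_premium(alpha, x_m):
    """Expected loss of a Pareto(alpha, x_m) risk (finite for alpha > 1)."""
    if alpha <= 1:
        raise ValueError("mean is infinite for alpha <= 1")
    return alpha * x_m / (alpha - 1)

# Hypothetical flood-loss sample in million IDR (not the Citarum data)
losses = [120.0, 150.0, 210.0, 300.0, 95.0, 480.0, 130.0]
x_m = 90.0                                     # assumed known minimum loss
alpha_hat = posterior_alpha(losses, x_m, a_prior=2.0, b_prior=1.0)
premium = pure_premium(alpha_hat, x_m)
print(f"posterior-mean alpha: {alpha_hat:.2f}")
print(f"pure premium: {premium:.1f} million IDR")
```

The guard on α ≤ 1 matters for fat-tailed loss data: if the posterior concentrates below 1, the Pareto mean (and hence the pure premium) does not exist.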

  13. Experimental approach to the anion problem in DFT calculation of the partial charge transfer during adsorption at electrochemical interfaces

    NASA Astrophysics Data System (ADS)

    Marichev, V. A.

    2005-08-01

    In DFT calculations of the charge transfer (ΔN), anions pose a special problem since their electron affinities are unknown. There is no method for calculating reasonable values of the absolute electronegativity (χA) and chemical hardness (ηA) for ions from data on the species themselves. We propose a new approach to the experimental measurement of χA at the condition ΔN = 0, at which the η values may be neglected and χA = χMe. Electrochemical parameters corresponding to this condition may be obtained by the contact electric resistance method during in situ investigation of anion adsorption in the particular anion-metal system.

  14. Volume and mass distribution in selected families of asteroids

    NASA Astrophysics Data System (ADS)

    Wlodarczyk, I.; Leliwa-Kopystynski, J.

    2014-07-01

    Members of five asteroid families (Vesta, Eos, Eunomia, Koronis, and Themis) were identified using the Hierarchical Clustering Method (HCM) for a data set containing 292,003 numbered asteroids. The influence of the choice of the best value of the parameter v_{cut}, which controls the distances of asteroids in the proper elements space a, e, i, was investigated with a step as small as 1 m/s. Results are given in a set of figures showing the families on the planes (a, e), (a, i), (e, i). Another form of presentation of the results is related to the secular resonances of the asteroids' motion with the giant planets, mostly Saturn. Relations among asteroid radius, albedo, and absolute magnitude allow us to calculate the volumes of individual members of an asteroid family. After summation, the volumes of the parent bodies of the families were found. This paper presents the possibility and the first results of using a combined method for asteroid family identification based on the following items: (i) parameter v_{cut} is established with precision as high as 1 m/s; (ii) the albedo (if available) of the potential members is considered for approving or rejecting the family membership; (iii) a color classification is used for the same purpose as well. A search for the most reliable parameter values for the family populations was performed by means of a consecutive application of the HCM with increasing parameter v_{cut}. The results are illustrated in the figure. Increasing v_{cut} in steps as small as 1 m/s allowed us to observe the computational strength of the HCM: the critical value of the parameter v_{cut} (see the breaking-points of the plots in the figure) separates the assemblage of potential family members from 'an ocean' of background asteroids that are not related to the family. The critical values of v_{cut} vary from 57 m/s for the Vesta family to 92 m/s for the Eos family.
If the parameter v_{cut} surpasses its critical value, the number of HCM-discovered family members increases enormously and without any physical reason.
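As a rough illustration of the procedure described above, the HCM step can be sketched as single-linkage clustering with a cutoff: a family grows from a seed asteroid by absorbing every asteroid whose distance to a current member is below v_cut. The toy distance matrix below is invented for illustration; it is not the paper's proper-element metric or data.

```python
import numpy as np

def hcm_family(d, seed, v_cut):
    """Grow a family from `seed` by repeatedly adding any asteroid whose
    distance to a current member is below v_cut (single-linkage clustering).
    `d` is a precomputed NxN matrix of mutual distances in m/s."""
    members = {seed}
    frontier = {seed}
    while frontier:
        new = set()
        for i in frontier:
            for j in np.nonzero(d[i] < v_cut)[0]:
                if int(j) not in members:
                    members.add(int(j))
                    new.add(int(j))
        frontier = new
    return members

# Scanning v_cut in small steps reveals the "breaking point": family size
# jumps sharply once v_cut exceeds its critical value.
d = np.array([[0.0, 10.0, 80.0],
              [10.0, 0.0, 75.0],
              [80.0, 75.0, 0.0]])
print(len(hcm_family(d, 0, 20)))  # 2: only the close pair is linked
print(len(hcm_family(d, 0, 90)))  # 3: the background asteroid is absorbed
```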

  15. Hemodynamic effect of bypass geometry on intracranial aneurysm: A numerical investigation.

    PubMed

    Kurşun, Burak; Uğur, Levent; Keskin, Gökhan

    2018-05-01

    Hemodynamic analyses are used in the clinical investigation and treatment of cardiovascular diseases. In the present study, the effect of bypass geometry on intracranial aneurysm hemodynamics was investigated numerically. The pressure, wall shear stress (WSS) and velocity distributions that cause the aneurysm to grow and rupture were investigated, and the optimal conditions were sought for bypassing between the basilar (BA) and left/right posterior arteries (LPCA/RPCA) for different values of the parameters. The finite volume method was used for the numerical solutions, and calculations were performed with the ANSYS-Fluent software. The SIMPLE algorithm was used to solve the discretized conservation equations, and the second-order upwind method was preferred for finding intermediate point values in the computational domain. As the blood flow velocity changes with time, the blood viscosity value also changes; for this reason, the Carreau model was used to determine the viscosity as a function of velocity. The numerical results showed that bypassing reduced pressure and wall shear stress in the aneurysm by 40-70% for all parameters; they are presented in graphs of the variation of pressure, wall shear stress and velocity streamlines in the aneurysm. Considering the numerical results for all parameter values, the most important factors affecting the pressure and WSS values in bypassing are the bypass position on the basilar artery (L_b) and the diameter of the bypass vessel (d). This demonstrates that pressure and WSS values can be greatly reduced in aneurysm treatment by bypassing in cases where clipping or coil embolization cannot be applied. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Geochemical Data Package for Performance Assessment Calculations Related to the Savannah River Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaplan, Daniel I.

    The Savannah River Site (SRS) disposes of low-level radioactive waste (LLW) and stabilizes high-level radioactive waste (HLW) tanks in the subsurface environment. Calculations used to establish the radiological limits of these facilities are referred to as Performance Assessments (PA), Special Analyses (SA), and Composite Analyses (CA). The objective of this document is to revise the existing geochemical input values used for these calculations. This work builds on earlier compilations of geochemical data (2007, 2010), referred to as geochemical data packages, and is being conducted as part of the on-going maintenance program of the SRS PA programs, which periodically updates calculations and data packages when new information becomes available. Because application of values without full understanding of their original purpose may lead to misuse, this document also provides the geochemical conceptual model, the approach used for selecting the values, the justification for selecting data, and the assumptions made to assure that the conceptual and numerical geochemical models are reasonably conservative (i.e., the recommended input values are biased to reflect conditions that will tend to predict the maximum risk to the hypothetical recipient). This document provides 1088 input values for geochemical parameters describing transport processes for 64 elements (>740 radioisotopes) potentially occurring within eight subsurface disposal or tank closure areas: Slit Trenches (ST), Engineered Trenches (ET), Low Activity Waste Vault (LAWV), Intermediate Level Vaults (ILV), Naval Reactor Component Disposal Areas (NRCDA), Components-in-Grout (CIG) Trenches, the Saltstone Facility, and Closed Liquid Waste Tanks. The geochemical parameters described here are the distribution coefficient (Kd), the apparent solubility concentration (ks), and the cementitious leachate impact factor.

  17. [Studies on optimizing preparation technics of wumeitougu oral liquid by response surface methodology].

    PubMed

    Yu, Xiao-cui; Liu, Gao-feng; Wang, Xin

    2011-02-01

    To optimize the preparation technics of wumeitougu oral liquid (WTOL) by response surface methodology. Based on single-factor tests, the number of WTOL extractions, the alcohol precipitation concentration and the pH value were selected as the three factors for a Box-Behnken central composite design, and response surface methodology was used to optimize the preparation parameters. Under the conditions of extraction time 1.5 h, number of extractions 2.772, relative density 1.12, alcohol precipitation concentration 68.704%, and pH 5.0, the theoretical maximum content of asperosaponin VI was 549.908 mg/L. Considering the practical situation, the conditions were amended to three extractions, alcohol precipitation concentration 69%, and pH 5.0; the measured content of asperosaponin VI was 548.63 mg/L, close to the theoretical value. The optimized preparation technics of WTOL by response surface methodology is reasonable and feasible.
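The response-surface step described above amounts to fitting a quadratic model to designed-experiment data and locating its stationary point. The sketch below is a generic, hypothetical illustration; the factor levels and response values are invented, not those of the WTOL study.

```python
import numpy as np

def fit_quadratic(x1, x2, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Coded factor levels (-1, 0, 1) with a known optimum at (0.5, -0.25)
x1, x2 = np.meshgrid([-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0])
x1, x2 = x1.ravel(), x2.ravel()
y = 550 - 20 * (x1 - 0.5) ** 2 - 40 * (x2 + 0.25) ** 2

beta = fit_quadratic(x1, x2, y)
# Stationary point of the fitted surface: solve the gradient equations
A = np.array([[2 * beta[3], beta[5]], [beta[5], 2 * beta[4]]])
opt = np.linalg.solve(A, -beta[1:3])
print(np.round(opt, 2))
```

Because the true response here is itself quadratic, the fitted optimum recovers (0.5, -0.25) exactly; with real experimental data the stationary point is only an estimate to be checked against practical constraints, as the authors did.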

  18. Effects of intravenous temazepam. II. A study of the long-term reproducibility of pharmacokinetics, pharmacodynamics, and concentration-effect parameters.

    PubMed

    van Steveninck, A L; Schoemaker, H C; den Hartigh, J; Pieters, M S; Breimer, D D; Cohen, A F

    1994-05-01

    To evaluate the long-term reproducibility of pharmacokinetic, pharmacodynamic, and concentration-effect parameters after intravenous administration of temazepam. Nine healthy volunteers were studied. Temazepam, 0.4 mg/kg, was infused intravenously for 30 minutes on two occasions 6 months apart. Venous plasma concentrations of temazepam were measured by HPLC in samples obtained between 0 and 24 hours. Pharmacodynamic effects were evaluated up to 8 hours for saccadic peak velocity and electroencephalogram (EEG) beta amplitudes. Subjects' state and trait anxiety were assessed by use of the Spielberger anxiety inventory. Significant correlations between occasions were found for area under the plasma concentration-time curve (AUC) values (r = 0.91; p < 0.01) but not for maximum concentration and half-life. Significant correlations were also found for area under the effect-time curve (AUEC) values of peak velocity (r = 0.88; p < 0.01) but not for peak velocity (r = 0.48; p > 0.05). Significant differences between the slopes of concentration effect plots on different occasions were observed in two subjects for EEG beta and in three subjects for peak velocity, with one subject showing a similar change for both parameters. Trait anxiety scores were higher on the first occasion (33 +/- 7) than on the second occasion (29 +/- 7; p < 0.01). A negative correlation was found between trait anxiety scores and the slopes of concentration-effect plots for peak velocity (r = -0.63; p < 0.01). For AUC and AUEC values the results indicate a reasonable long-term reproducibility of differences between subjects in the pharmacokinetics and pharmacodynamics of temazepam. However, there were limitations to the predictive value of derived concentration-effect parameters.

  19. Q estimation of seismic data using the generalized S-transform

    NASA Astrophysics Data System (ADS)

    Hao, Yaju; Wen, Xiaotao; Zhang, Bo; He, Zhenhua; Zhang, Rui; Zhang, Jinming

    2016-12-01

    Quality factor, Q, is a parameter that characterizes the energy dissipation during seismic wave propagation. The reservoir pore space is one of the main factors affecting the value of Q; in particular, when the pore space is filled with oil or gas, the rock usually exhibits a relatively low Q value. Such a low Q value has been used as a direct hydrocarbon indicator by many researchers. The conventional Q estimation method based on the spectral ratio suffers from the problem of waveform tuning; hence, many researchers have introduced time-frequency analysis techniques to tackle this problem. Unfortunately, the window functions adopted in time-frequency analysis algorithms such as the continuous wavelet transform (CWT) and the S-transform (ST) contaminate the amplitude spectra, because the seismic signal is multiplied by the window functions during time-frequency decomposition. The basic assumption of the spectral ratio method is that there is a linear relationship between the natural logarithmic spectral ratio and frequency; however, this assumption does not hold if we take the influence of the window functions into consideration. In this paper, we first employ a recently developed two-parameter generalized S-transform (GST) to obtain the time-frequency spectra of seismic traces. We then deduce the non-linear relationship between the natural logarithmic spectral ratio and frequency. Finally, we obtain a linear relationship between the natural logarithmic spectral ratio and a newly defined parameter γ by neglecting the negligible second-order term; the gradient of this linear relationship is 1/Q. Here, the parameter γ is a function of frequency and the source wavelet. Numerical examples for VSP and post-stack reflection data confirm that our algorithm is capable of yielding accurate results. The Q values estimated from field data acquired in western China agree reasonably well with the locations of oil-producing wells.
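The classic spectral-ratio estimate that this paper generalizes can be sketched in a few lines: the log spectral ratio of two recordings separated by traveltime dt is linear in frequency with slope -pi*dt/Q. The synthetic spectra below are invented for illustration; this is the conventional method, not the paper's GST variant.

```python
import numpy as np

def estimate_q(freqs, spec1, spec2, dt):
    """Classic spectral-ratio Q estimate:
    ln(S2/S1) = -pi * f * dt / Q + const, so Q = -pi * dt / slope."""
    y = np.log(spec2 / spec1)
    slope, _ = np.polyfit(freqs, y, 1)
    return -np.pi * dt / slope

# Synthetic check: attenuate a flat spectrum with Q = 50 over dt = 0.5 s
f = np.linspace(5.0, 60.0, 50)
Q_true, dt = 50.0, 0.5
s1 = np.ones_like(f)
s2 = s1 * np.exp(-np.pi * f * dt / Q_true)
print(round(estimate_q(f, s1, s2, dt), 1))  # 50.0
```

In field data the spectra are also shaped by the source wavelet and, as the paper stresses, by the analysis window, which is what breaks the simple linear-in-frequency assumption.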

  20. The dynamical core of the Aeolus 1.0 statistical-dynamical atmosphere model: validation and parameter optimization

    NASA Astrophysics Data System (ADS)

    Totz, Sonja; Eliseev, Alexey V.; Petri, Stefan; Flechsig, Michael; Caesar, Levke; Petoukhov, Vladimir; Coumou, Dim

    2018-02-01

    We present and validate a set of equations for representing the atmosphere's large-scale general circulation in an Earth system model of intermediate complexity (EMIC). These dynamical equations have been implemented in Aeolus 1.0, which is a statistical-dynamical atmosphere model (SDAM) and includes radiative transfer and cloud modules (Coumou et al., 2011; Eliseev et al., 2013). The statistical-dynamical approach is computationally efficient and thus enables us to perform climate simulations at multimillennia timescales, which is a prime aim of our model development. Further, this computational efficiency enables us to scan a large and high-dimensional parameter space to tune the model parameters, e.g., for sensitivity studies. Here, we present novel equations for the large-scale zonal-mean wind as well as those for planetary waves. Together with the synoptic parameterization (as presented by Coumou et al., 2011), these form the mathematical description of the dynamical core of Aeolus 1.0. We optimize the dynamical core parameter values by tuning all relevant dynamical fields to ERA-Interim reanalysis data (1983-2009), forcing the dynamical core with prescribed surface temperature, surface humidity and cumulus cloud fraction. We test the model's performance in reproducing the seasonal cycle and the influence of the El Niño-Southern Oscillation (ENSO). We use a simulated annealing optimization algorithm, which approximates the global minimum of a high-dimensional function. With non-tuned parameter values, the model performs reasonably in terms of its representation of zonal-mean circulation, planetary waves and storm tracks. The simulated annealing optimization improves in particular the model's representation of the Northern Hemisphere jet stream and storm tracks as well as the Hadley circulation. The regions of high azonal wind velocities (planetary waves) are accurately captured for all validation experiments. The zonal-mean zonal wind and the integrated lower-troposphere mass flux show good results, in particular in the Northern Hemisphere. In the Southern Hemisphere, the model tends to produce too-weak zonal-mean zonal winds and a too-narrow Hadley circulation. We discuss possible reasons for these model biases as well as planned future model improvements and applications.
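The simulated annealing idea used for the parameter tuning above can be sketched generically: propose a random perturbation of the parameter vector and accept cost increases with probability exp(-dE/T), cooling T over time so the search settles into a deep minimum. This is a minimal textbook sketch, not the Aeolus tuning code; the toy cost function is invented.

```python
import math
import random

def simulated_annealing(cost, x0, step, t0=1.0, cooling=0.995, iters=5000, seed=0):
    """Minimal simulated annealing: accept uphill moves with probability
    exp(-dE/T) so the search can escape local minima of `cost`."""
    rng = random.Random(seed)
    x, e = list(x0), cost(x0)
    best_x, best_e = x[:], e
    t = t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        ec = cost(cand)
        if ec < e or rng.random() < math.exp(-(ec - e) / t):
            x, e = cand, ec
            if e < best_e:
                best_x, best_e = x[:], e
        t *= cooling  # geometric cooling schedule
    return best_x, best_e

# Toy 2-D "tuning" problem: a quadratic bowl with sinusoidal local minima
cost = lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2 + 0.3 * math.sin(8 * p[0])
x, e = simulated_annealing(cost, [5.0, 5.0], step=0.5)
print(e < 2.0)
```

In the paper's setting the cost would be a misfit between simulated dynamical fields and the ERA-Interim reanalysis, and each cost evaluation is a model run, which is why the cheap SDAM core matters.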

  1. Optimal insecticide-treated bed-net coverage and malaria treatment in a malaria-HIV co-infection model.

    PubMed

    Mohammed-Awel, Jemal; Numfor, Eric

    2017-03-01

    We propose and study a mathematical model for malaria-HIV co-infection transmission and control, in which malaria treatment and insecticide-treated nets are incorporated. The existence of a backward bifurcation is established analytically, and the occurrence of such backward bifurcation is influenced by disease-induced mortality, insecticide-treated bed-net coverage and malaria treatment parameters. To further assess the impact of malaria treatment and insecticide-treated bed-net coverage, we formulate an optimal control problem with malaria treatment and insecticide-treated nets as control functions. Using reasonable parameter values, numerical simulations of the optimal control suggest the possibility of eliminating malaria and reducing HIV prevalence significantly, within a short time horizon.

  2. Groundwater budgets for Detrital, Hualapai, and Sacramento Valleys, Mohave County, Arizona, 2007-08

    USGS Publications Warehouse

    Garner, Bradley D.; Truini, Margot

    2011-01-01

    Figures 9, 10, and 11 of this report present water budgets for Detrital, Hualapai, and Sacramento Valleys in northwestern Arizona. These figures show average values for each water-budget component; uncertainty is discussed but not shown on the report figures. As an aid to readers, the figures have been implemented here as interactive, web-based figures. Water-budget parameters can be varied within reasonable bounds of uncertainty, and the effects of those changes on the water budget are shown as they are varied. This can aid in understanding sensitivity (which parameters most or least affect the water budgets) and can also provide a generally improved sense of the hydrologic cycle represented in these water budgets.

  3. Three-body final state interaction in η → 3π updated

    DOE PAGES

    Guo, P.; Danilkin, I. V.; Fernandez-Ramirez, C.; ...

    2017-06-07

    In view of the recent high-statistics KLOE-2 data for the $\eta \to \pi^+ \pi^- \pi^0$ decay, a new determination of the quark mass double ratio has been performed. Our approach relies on a unitary dispersive model that takes into account rescattering effects between the three pions; the latter are essential to reproduce the Dalitz plot distribution. A simultaneous description of the KLOE-2 and WASA-at-COSY data is achieved in terms of just two real parameters. From a global fit, we determine $Q = 21.6 \pm 0.4$. The predicted slope parameter for the neutral channel, $\alpha = -0.025 \pm 0.004$, is in reasonable agreement with the PDG average value.

  4. Nondimensional parameter for conformal grinding: combining machine and process parameters

    NASA Astrophysics Data System (ADS)

    Funkenbusch, Paul D.; Takahashi, Toshio; Gracewski, Sheryl M.; Ruckman, Jeffrey L.

    1999-11-01

    Conformal grinding of optical materials with CNC (Computer Numerical Control) machining equipment can be used to achieve precise control over complex part configurations. However, complications can arise from the need to fabricate complex geometrical shapes at reasonable production rates. For example, high machine stiffness is essential, but the need to grind 'inside' small or highly concave surfaces may require tooling with less than ideal stiffness characteristics. If grinding generates loads sufficient for significant tool deflection, the programmed removal depth will not be achieved. Moreover, since the grinding load is a function of the volumetric removal rate, the amount of load deflection can vary with location on the part, potentially producing complex figure errors. In addition to machine/tool stiffness and removal rate, load generation is a function of the process parameters. For example, by reducing the feed rate of the tool into the part, both the load and the resultant deflection/removal error can be decreased; however, this must be balanced against the need for part throughput. In this paper, a simple model that permits the combination of machine stiffness and process parameters into a single non-dimensional parameter is adapted to a conformal grinding geometry. Errors in removal can be minimized by maintaining this parameter above a critical value. Moreover, since the value of this parameter depends on the local part geometry, it can be used to optimize process settings during grinding; for example, it may guide adjustment of the feed rate as a function of location on the part to eliminate figure errors while minimizing the total grinding time required.

  5. Complex Impedance of Fast Optical Transition Edge Sensors up to 30 MHz

    NASA Astrophysics Data System (ADS)

    Hattori, K.; Kobayashi, R.; Numata, T.; Inoue, S.; Fukuda, D.

    2018-03-01

    Optical transition edge sensors (TESs) are characterized by a very fast response, of the order of μs, which is 10^3 times faster than TESs for X-rays and gamma-rays. To extract the important parameters of an optical TES, complex impedances at high frequencies (> 1 MHz) need to be measured, where the parasitic impedance in the circuit and reflections of electrical signals due to discontinuities in the characteristic impedance of the readout circuits become significant. This prevents measurement of the current sensitivity β, which can be extracted from the complex impedance. In usual setups, it is hard to build a circuit model that takes the parasitic impedances and reflections into account. In this study, we present an alternative method to estimate the transfer function without investigating the details of the entire circuit. Based on this method, the complex impedance up to 30 MHz was measured. The parameters were extracted from the impedance and compared with other measurements. Using these parameters, we calculated the theoretical limit on the energy resolution and compared it with the measured energy resolution. The reasons for the deviation of the measured value from the theoretically predicted values are discussed.

  6. Determination of Earth rotation by the combination of data from different space geodetic systems

    NASA Technical Reports Server (NTRS)

    Archinal, Brent Allen

    1987-01-01

    Formerly, Earth rotation parameters (ERP), i.e., polar motion and UT1-UTC values, have been determined using data from only one observational system at a time, or by combining parameters previously obtained in such determinations. The question arises as to whether a simultaneous solution using data from several sources would provide an improved determination of these parameters. To pursue this reasoning, fifteen days of observations have been simulated using realistic networks of Lunar Laser Ranging (LLR), Satellite Laser Ranging (SLR) to Lageos, and Very Long Baseline Interferometry (VLBI) stations. A comparison has been made of the accuracy and precision of the ERP obtained from: (1) the individual system solutions, (2) the weighted means of those values, (3) all of the data by combining the normal equations obtained in (1), and (4) a grand solution with all the data. These simulations show that solutions obtained by the normal-equation combination and grand-solution methods provide the best or nearly the best ERP for all the periods considered, but that weighted-mean solutions provide nearly the same accuracy and precision. VLBI solutions also provide similar accuracies.

  7. Testing anthropic reasoning for the cosmological constant with a realistic galaxy formation model

    NASA Astrophysics Data System (ADS)

    Sudoh, Takahiro; Totani, Tomonori; Makiya, Ryu; Nagashima, Masahiro

    2017-01-01

    The anthropic principle is one of the possible explanations for the cosmological constant (Λ) problem. In previous studies, a dark halo mass threshold comparable with our Galaxy must be assumed in galaxy formation to get a reasonably large probability of finding the observed small value, P(<Λobs), though stars are found in much smaller galaxies as well. Here we examine the anthropic argument by using a semi-analytic model of cosmological galaxy formation, which can reproduce many observations such as galaxy luminosity functions. We calculate the probability distribution of Λ by running the model code for a wide range of Λ, while other cosmological parameters and model parameters for baryonic processes of galaxy formation are kept constant. Assuming that the prior probability distribution is flat per unit Λ, and that the number of observers is proportional to stellar mass, we find P(<Λobs) = 6.7 per cent without introducing any galaxy mass threshold. We also investigate the effect of metallicity; we find P(<Λobs) = 9.0 per cent if observers exist only in galaxies whose metallicity is higher than the solar abundance. If the number of observers is proportional to metallicity, we find P(<Λobs) = 9.7 per cent. Since these probabilities are not extremely small, we conclude that the anthropic argument is a viable explanation, if the value of Λ observed in our Universe is determined by a probability distribution.

  8. Coupled carbon-water exchange of the Amazon rain forest, I. Model description, parameterization and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Simon, E.; Meixner, F. X.; Ganzeveld, L.; Kesselmeier, J.

    2005-04-01

    Detailed one-dimensional multilayer biosphere-atmosphere models, also referred to as CANVEG models, have been used for more than a decade to describe the coupled water-carbon exchange between the terrestrial vegetation and the lower atmosphere. In the present study, a modified CANVEG scheme is described. A generic parameterization and characterization of the biophysical properties of Amazon rain forest canopies is inferred using available field measurements of canopy structure, in-canopy profiles of horizontal wind speed and radiation, canopy albedo, soil heat flux and soil respiration, photosynthetic capacity and leaf nitrogen, as well as leaf-level enclosure measurements made on sunlit and shaded branches of several Amazonian tree species during the wet and dry seasons. The sensitivity of the calculated canopy energy and CO2 fluxes to the uncertainty of individual parameter values is assessed. In the companion paper, the predicted seasonal exchange of energy, CO2, ozone and isoprene is compared to observations.

    A bi-modal distribution of leaf area density with a total leaf area index of 6 is inferred from several observations in Amazonia. Predicted light attenuation within the canopy agrees reasonably well with observations made at different field sites. A comparison of predicted and observed canopy albedo shows a high model sensitivity to the leaf optical parameters for near-infrared short-wave radiation (NIR). The predictions agree much better with observations when the leaf reflectance and transmission coefficients for NIR are reduced by 25-40%. Available vertical distributions of photosynthetic capacity and leaf nitrogen concentration suggest a low but significant light acclimation of the rain forest canopy that scales nearly linearly with accumulated leaf area.

    Evaluation of the biochemical leaf model using the enclosure measurements showed that the recommended parameter values describing the photosynthetic light response have to be optimized; otherwise, predicted net assimilation is overestimated by 30-50%. Two stomatal models have been tested, both applying a well-established semi-empirical relationship between stomatal conductance and net assimilation. The models differ in the way they describe the influence of humidity on the stomatal response, but they show very similar performance within the range of observed environmental conditions. The agreement between predicted and observed stomatal conductance rates is reasonable. In general, the leaf-level data suggest seasonal physiological changes, which can be reproduced reasonably well by assuming increased stomatal conductance rates during the wet season and decreased assimilation rates during the dry season.

    The sensitivity of the predicted canopy fluxes of energy and CO2 to the parameterization of canopy structure, the leaf optical parameters, and the scaling of photosynthetic parameters is relatively low (1-12%), with respect to parameter uncertainty. In contrast, modifying leaf model parameters within their uncertainty range results in much larger changes of the predicted canopy net fluxes (5-35%).


  10. A comparative study of quantitative assessment with fluorine-18-fluorodeoxyglucose positron-emission tomography and endoscopic ultrasound in oesophageal cancer.

    PubMed

    Borakati, Aditya; Razack, Abdul; Cawthorne, Chris; Roy, Rajarshi; Usmani, Sharjeel; Ahmed, Najeeb

    2018-07-01

    This study aims to assess the correlation between PET/CT and endoscopic ultrasound (EUS) parameters in patients with oesophageal cancer. All patients who had complete PET/CT and EUS staging performed for oesophageal cancer at our centre between 2010 and 2016 were included. Images were retrieved and analysed for a range of parameters including tumour length, volume and position relative to the aortic arch. Seventy patients were included in the main analysis. A strong correlation was found between EUS and PET/CT for tumour length, volume and position relative to the aortic arch. Regression modelling showed a reasonable predictive value for PET/CT in calculating EUS parameters, with r higher than 0.585 in some cases. Given the strong correlation between EUS and PET parameters, fluorine-18-fluorodeoxyglucose (18F-FDG) PET can provide accurate information on the length and volume of the tumour in patients who either cannot tolerate EUS or have impassable strictures.

  11. Development and system identification of a light unmanned aircraft for flying qualities research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peters, M.E.; Andrisani, D. II

    This paper describes the design, construction, flight testing and system identification of a lightweight remotely piloted aircraft and its use in studying flying qualities in the longitudinal axis. The short-period approximation to the longitudinal dynamics of the aircraft was used. Parameters in this model were determined a priori using various empirical estimators and were then estimated from flight data using a maximum-likelihood parameter identification method. A comparison of the parameter values revealed that the stability derivatives obtained from the empirical estimators were reasonably close to the flight test results; however, the control derivatives determined by the empirical estimators were too large by a factor of two. The aircraft was also flown to determine how the longitudinal flying qualities of lightweight remotely piloted aircraft compare to those of full-size manned aircraft. It was shown that lightweight remotely piloted aircraft require much faster short-period dynamics to achieve Level I flying qualities in an up-and-away flight task.

  12. Interpreting the Weibull fitting parameters for diffusion-controlled release data

    NASA Astrophysics Data System (ADS)

    Ignacio, Maxime; Chubynsky, Mykyta V.; Slater, Gary W.

    2017-11-01

    We examine the diffusion-controlled release of molecules from passive delivery systems using both analytical solutions of the diffusion equation and numerically exact Lattice Monte Carlo data. For very short times, the release process follows a √t power law, typical of diffusion processes, while the long-time asymptotic behavior is exponential. The crossover time between these two regimes is determined by the boundary conditions and the initial loading of the system. We show that while the widely used Weibull function provides a reasonable fit (in terms of statistical error), it has two major drawbacks: (i) it does not capture the correct limits, and (ii) there is no direct connection between the fitting parameters and the properties of the system. Using a physically motivated interpolating fitting function that correctly includes both time regimes, we are able to predict the values of the Weibull parameters, which allows us to propose a physical interpretation.
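The point about the Weibull function can be illustrated numerically: fit M(t)/M∞ = 1 - exp(-(t/τ)^β) to an exact diffusion-release curve (the series solution for a slab, as in Crank's classic treatment) and note that the statistical error is small even though a single β cannot match both the √t and the exponential limit. This is a generic sketch under those assumptions, not the authors' code.

```python
import numpy as np

def weibull(t, tau, beta):
    """Empirical Weibull release profile M(t)/M_inf = 1 - exp(-(t/tau)**beta)."""
    return 1.0 - np.exp(-(t / tau) ** beta)

def slab_release(t, D=1.0, L=1.0, nterms=50):
    """Exact fractional release from a slab (first terms of the series solution)."""
    n = np.arange(nterms)
    lam = ((2 * n + 1) * np.pi / (2 * L)) ** 2 * D
    coef = 8.0 / ((2 * n + 1) ** 2 * np.pi ** 2)
    return 1.0 - (coef[None, :] * np.exp(-np.outer(t, lam))).sum(axis=1)

t = np.linspace(0.01, 2.0, 200)
target = slab_release(t)

# Brute-force least-squares fit over a (tau, beta) grid, numpy only
taus = np.linspace(0.1, 1.5, 141)
betas = np.linspace(0.5, 1.5, 101)
sse = np.array([[np.sum((weibull(t, a, b) - target) ** 2) for b in betas]
                for a in taus])
i, j = np.unravel_index(sse.argmin(), sse.shape)
tau, beta = taus[i], betas[j]
resid = np.max(np.abs(weibull(t, tau, beta) - target))
# The single fitted beta compromises between the sqrt(t) and exponential limits
print(tau, beta, resid)
```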

  13. What’s Driving Uncertainty? The Model or the Model Parameters (What’s Driving Uncertainty? The influences of model and model parameters in data analysis)

    DOE PAGES

    Anderson-Cook, Christine Michaela

    2017-03-01

    Here, one of the substantial improvements to the practice of data analysis in recent decades is the change from reporting just a point estimate for a parameter or characteristic, to now including a summary of uncertainty for that estimate. Understanding the precision of the estimate for the quantity of interest provides better understanding of what to expect and how well we are able to predict future behavior from the process. For example, when we report a sample average as an estimate of the population mean, it is good practice to also provide a confidence interval (or credible interval, if you are doing a Bayesian analysis) to accompany that summary. This helps to calibrate what ranges of values are reasonable given the variability observed in the sample and the amount of data that were included in producing the summary.
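
    The practice the article recommends can be sketched in a few lines; the data here are simulated and the population values are illustrative only.

```python
import numpy as np
from scipy import stats

# Report a sample mean together with a 95% confidence interval,
# rather than the point estimate alone (simulated data).
rng = np.random.default_rng(42)
sample = rng.normal(loc=100.0, scale=15.0, size=50)

mean = sample.mean()
sem = stats.sem(sample)  # standard error of the mean
lo, hi = stats.t.interval(0.95, df=sample.size - 1, loc=mean, scale=sem)
```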

  14. Body Mass Normalization for Lateral Abdominal Muscle Thickness Measurements in Adolescent Athletes.

    PubMed

    Linek, Pawel

    2017-09-01

    To determine the value of allometric parameters for ultrasound measurements of the oblique external (OE), oblique internal (OI), and transversus abdominis (TrA) muscles in adolescent athletes. The allometric parameter is the slope of the linear regression line between the log-transformed body mass and log-transformed muscle size measurement. The study included 114 male adolescent football players between the ages of 10 and 19 years. All individuals with no surgical procedures performed on the trunk area and who had played a sport for at least 2 years were included. A real-time B-mode ultrasound scanner with a linear array transducer was used to obtain images of the lateral abdominal muscles from both sides of the body. A stabilometric platform was used to assess the body mass value. The correlations between body mass and the OE, OI, and TrA muscle thicknesses were r = 0.73, r = 0.79, and r = 0.64, respectively (in all cases, P < .0001). The allometric parameters were 0.77 for the OE, 0.67 for the OI, and 0.61 for the TrA. Using these parameters, no significant correlations were found between body mass and the allometric-scaled thickness of the lateral abdominal muscles. Significant positive correlations exist between body mass and lateral abdominal muscle thickness in adolescent athletes. Therefore, it is reasonable to advise that the values of the allometric parameters for the OE, OI, and TrA muscles obtained in this study should be used, and the allometric-scaled thicknesses of those muscles should be analyzed in future research on adolescent athletes. © 2017 by the American Institute of Ultrasound in Medicine.
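
    A minimal sketch of the allometric normalization described above: the allometric parameter is estimated as the slope of a log-log regression, and dividing thickness by body mass raised to that power removes the body-mass correlation. The simulated exponent (0.7) and noise level are assumptions for illustration, not the study's data.

```python
import numpy as np

# Simulated body masses (kg) and muscle thicknesses (mm) with a
# built-in allometric exponent of 0.7 (illustrative).
rng = np.random.default_rng(1)
mass = rng.uniform(40, 90, 200)
thickness = 0.5 * mass ** 0.7 * rng.lognormal(0, 0.05, 200)

# Allometric parameter b = slope of log(thickness) vs log(mass)
b, intercept = np.polyfit(np.log(mass), np.log(thickness), 1)

# Allometric-scaled thickness should no longer correlate with mass
scaled = thickness / mass ** b
r_raw = np.corrcoef(mass, thickness)[0, 1]
r_scaled = np.corrcoef(mass, scaled)[0, 1]
```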

  15. ON THE NOTION OF WELL-DEFINED TECTONIC REGIMES FOR TERRESTRIAL PLANETS IN THIS SOLAR SYSTEM AND OTHERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lenardic, A.; Crowley, J. W., E-mail: ajns@rice.edu, E-mail: jwgcrowley@gmail.com

    2012-08-20

    A model of coupled mantle convection and planetary tectonics is used to demonstrate that history dependence can outweigh the effects of a planet's energy content and material parameters in determining its tectonic state. The mantle convection-surface tectonics system allows multiple tectonic modes to exist for equivalent planetary parameter values. The tectonic mode of the system is then determined by its specific geologic and climatic history. This implies that models of tectonics and mantle convection will not be able to uniquely determine the tectonic mode of a terrestrial planet without the addition of historical data. Historical data exist, to variable degrees, for all four terrestrial planets within our solar system. For the Earth, the planet with the largest amount of observational data, debate does still remain regarding the geologic and climatic history of Earth's deep past, but constraints are available. For planets in other solar systems, no such constraints exist at present. The existence of multiple tectonic modes, for equivalent parameter values, points to a reason why different groups have reached different conclusions regarding the tectonic state of extrasolar terrestrial planets larger than Earth ("super-Earths"). The region of multiple stable solutions is predicted to widen in parameter space for more energetic mantle convection (as would be expected for larger planets). This means that different groups can find different solutions, all potentially viable and stable, using identical models and identical system parameter values. At a more practical level, the results argue that the question of whether extrasolar terrestrial planets will have plate tectonics is unanswerable and will remain so until the temporal evolution of extrasolar planets can be constrained.

  16. Development of uncertainty-based work injury model using Bayesian structural equation modelling.

    PubMed

    Chatterjee, Snehamoy

    2014-01-01

    This paper proposed a Bayesian method-based structural equation model (SEM) of miners' work injury for an underground coal mine in India. The environmental and behavioural variables for work injury were identified and causal relationships were developed. For Bayesian modelling, prior distributions of SEM parameters are necessary to develop the model. In this paper, two approaches were adopted to obtain prior distributions for the factor loading parameters and structural parameters of the SEM. In the first approach, the prior distributions were considered as a fixed distribution function with specific parameter values, whereas in the second approach, prior distributions of the parameters were generated from experts' opinions. The posterior distributions of these parameters were obtained by applying Bayes' rule. Markov chain Monte Carlo sampling, in the form of Gibbs sampling, was applied for sampling from the posterior distribution. The results revealed that all coefficients of the structural and measurement model parameters are statistically significant with the experts' opinion-based priors, whereas two coefficients are not statistically significant when the fixed prior-based distributions are applied. The error statistics reveal that the Bayesian structural model provides a reasonably good fit of work injury, with a high coefficient of determination (0.91) and lower mean squared error compared to traditional SEM.

  17. Determination of the dissociation constants (pKa) of secondary and tertiary amines in organic media by capillary electrophoresis and their role in the electrophoretic mobility order inversion.

    PubMed

    Cantu, Marcelo Delmar; Hillebranda, Sandro; Carrilho, Emanuel

    2005-03-11

    Non-aqueous capillary electrophoresis (NACE) may provide a selectivity enhancement in separations since the analyte dissociation constants (pKa) in organic media are different from those in aqueous solutions. In this work, we have studied the inversion in mobility order observed in the separation of tertiary (imipramine (IMI) and amitriptyline (AMI)) and secondary amines (desipramine (DES) and nortriptyline (NOR)) in water, methanol, and acetonitrile. We have determined the pKa values in those solvents and the variation of the dissociation constants with temperature. From these data, and applying the van't Hoff equation, we have calculated the thermodynamic parameters ΔH and ΔS. The pKa values found in methanol for DES, NOR, IMI, and AMI were 10.80, 10.79, 10.38, and 10.33, respectively. On the other hand, in acetonitrile an opposite relation was found, since the values were 20.60, 20.67, 20.74, and 20.81 for DES, NOR, IMI, and AMI. This is the reason why a migration order inversion is observed in NACE for these solvents. The thermodynamic parameters were evaluated and presented a tendency that can be correlated with that observed for the pKa values.
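
    The van't Hoff step can be sketched as a linear fit of ln Ka against 1/T: since ln K = −ΔH/(R·T) + ΔS/R, the slope gives ΔH and the intercept gives ΔS. The pKa data below are synthetic, generated from assumed ΔH and ΔS values that are not those reported in the paper.

```python
import numpy as np

R = 8.314  # J / (mol K)

# Assumed (illustrative) thermodynamic parameters
dH_true, dS_true = -40_000.0, -330.0  # J/mol, J/(mol K)

# Synthetic pKa measurements at four temperatures
T = np.array([288.15, 298.15, 308.15, 318.15])  # K
lnK = -dH_true / (R * T) + dS_true / R
pKa = -lnK / np.log(10)  # pKa = -log10(Ka)

# Van't Hoff fit: ln Ka vs 1/T
slope, intercept = np.polyfit(1.0 / T, -pKa * np.log(10), 1)
dH_fit = -slope * R
dS_fit = intercept * R
```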

  18. Incorporation of Socio-Economic Features' Ranking in Multicriteria Analysis Based on Ecosystem Services for Marine Protected Area Planning

    PubMed Central

    Portman, Michelle E.; Shabtay-Yanai, Ateret; Zanzuri, Asaf

    2016-01-01

    Developed decades ago for spatial choice problems related to zoning in the urban planning field, multicriteria analysis (MCA) has more recently been applied to environmental conflicts and presented in several documented cases for the creation of protected area management plans. Its application is considered here for the development of zoning as part of a proposed marine protected area management plan. The case study incorporates spatially explicit conservation features while considering stakeholder preferences, expert opinion and characteristics of data quality. It involves the weighting of criteria using a modified analytical hierarchy process. Experts ranked physical attributes which include socio-economically valued physical features. The parameters used for the ranking of (physical) attributes important for socio-economic reasons are derived from the field of ecosystem services assessment. Inclusion of these feature values results in protection that emphasizes those areas closest to shore, most likely because of accessibility and familiarity parameters and because of data biases. Therefore, other spatial conservation prioritization methods should be considered to supplement the MCA, and efforts should be made to improve data about ecosystem service values farther from shore. Otherwise, the MCA method allows incorporation of expert and stakeholder preferences and ecosystem services values while maintaining the advantages of simplicity and clarity. PMID:27183224

  19. Incorporation of Socio-Economic Features' Ranking in Multicriteria Analysis Based on Ecosystem Services for Marine Protected Area Planning.

    PubMed

    Portman, Michelle E; Shabtay-Yanai, Ateret; Zanzuri, Asaf

    2016-01-01

    Developed decades ago for spatial choice problems related to zoning in the urban planning field, multicriteria analysis (MCA) has more recently been applied to environmental conflicts and presented in several documented cases for the creation of protected area management plans. Its application is considered here for the development of zoning as part of a proposed marine protected area management plan. The case study incorporates spatially explicit conservation features while considering stakeholder preferences, expert opinion and characteristics of data quality. It involves the weighting of criteria using a modified analytical hierarchy process. Experts ranked physical attributes which include socio-economically valued physical features. The parameters used for the ranking of (physical) attributes important for socio-economic reasons are derived from the field of ecosystem services assessment. Inclusion of these feature values results in protection that emphasizes those areas closest to shore, most likely because of accessibility and familiarity parameters and because of data biases. Therefore, other spatial conservation prioritization methods should be considered to supplement the MCA, and efforts should be made to improve data about ecosystem service values farther from shore. Otherwise, the MCA method allows incorporation of expert and stakeholder preferences and ecosystem services values while maintaining the advantages of simplicity and clarity.

  20. Autocorrelated residuals in inverse modelling of soil hydrological processes: a reason for concern or something that can safely be ignored?

    NASA Astrophysics Data System (ADS)

    Scharnagl, Benedikt; Durner, Wolfgang

    2013-04-01

    Models are inherently imperfect because they simplify processes that are themselves imperfectly known and understood. Moreover, the input variables and parameters needed to run a model are typically subject to various sources of error. As a consequence of these imperfections, model predictions will always deviate from corresponding observations. In most applications in soil hydrology, these deviations are clearly not random but rather show a systematic structure. From a statistical point of view, this systematic mismatch may be a reason for concern because it violates one of the basic assumptions made in inverse parameter estimation: the assumption of independence of the residuals. But what are the consequences of simply ignoring the autocorrelation in the residuals, as it is current practice in soil hydrology? Are the parameter estimates still valid even though the statistical foundation they are based on is partially collapsed? Theory and practical experience from other fields of science have shown that violation of the independence assumption will result in overconfident uncertainty bounds and that in some cases it may lead to significantly different optimal parameter values. In our contribution, we present three soil hydrological case studies, in which the effect of autocorrelated residuals on the estimated parameters was investigated in detail. We explicitly accounted for autocorrelated residuals using a formal likelihood function that incorporates an autoregressive model. The inverse problem was posed in a Bayesian framework, and the posterior probability density function of the parameters was estimated using Markov chain Monte Carlo simulation. In contrast to many other studies in related fields of science, and quite surprisingly, we found that the first-order autoregressive model, often abbreviated as AR(1), did not work well in the soil hydrological setting. 
We showed that a second-order autoregressive, or AR(2), model performs much better in these applications, leading to parameter and uncertainty estimates that satisfy all the underlying statistical assumptions. For theoretical reasons, these estimates are deemed more reliable than those based on the neglect of autocorrelation in the residuals. In compliance with theory and results reported in the literature, our results showed that parameter uncertainty bounds were substantially wider if autocorrelation in the residuals was explicitly accounted for, and the optimal parameter values were also slightly different in this case. We argue that the autoregressive model presented here should be used as a matter of routine in inverse modelling of soil hydrological processes.
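
    The core idea, an AR(2) model for the residuals, can be sketched as follows; the coefficients and noise level are illustrative, and conditional least squares stands in for the full Bayesian treatment used in the study.

```python
import numpy as np

# Simulate residuals that follow an AR(2) process:
# e_t = phi1*e_{t-1} + phi2*e_{t-2} + w_t (coefficients illustrative)
rng = np.random.default_rng(3)
phi1, phi2 = 0.6, 0.25
n = 5000
e = np.zeros(n)
for t in range(2, n):
    e[t] = phi1 * e[t - 1] + phi2 * e[t - 2] + rng.normal(0.0, 1.0)

# Conditional least-squares estimate of (phi1, phi2)
X = np.column_stack([e[1:-1], e[:-2]])
phi_hat, *_ = np.linalg.lstsq(X, e[2:], rcond=None)

# Whitened (innovation) residuals should be nearly uncorrelated,
# which is what the independence assumption actually requires
w = e[2:] - X @ phi_hat
lag1 = np.corrcoef(w[:-1], w[1:])[0, 1]
```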

  1. Selection of entropy-measure parameters for knowledge discovery in heart rate variability data

    PubMed Central

    2014-01-01

    Background Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. Methods This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. Results The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. 
Conclusions Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary. PMID:25078574
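
    A minimal sample entropy implementation illustrating the role of the threshold r = 0.2σ discussed above; the template length m = 2 and the test signals are illustrative choices, not the study's data.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy with threshold r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def match_pairs(mm):
        # All templates of length mm; count pairs whose Chebyshev
        # distance is within r (self-matches excluded)
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return (np.sum(d <= r) - len(templ)) / 2

    B = match_pairs(m)
    A = match_pairs(m + 1)
    return -np.log(A / B)

# A regular signal should yield lower entropy than an irregular one
rng = np.random.default_rng(7)
noise = rng.normal(size=300)
regular = np.sin(np.linspace(0, 30, 300))
```

As the abstract cautions, the comparison (not the absolute value) is what carries information, since entropy values cross over as r changes.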

  2. Selection of entropy-measure parameters for knowledge discovery in heart rate variability data.

    PubMed

    Mayer, Christopher C; Bachler, Martin; Hörtenhuber, Matthias; Stocker, Christof; Holzinger, Andreas; Wassertheurer, Siegfried

    2014-01-01

    Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. Some of the tests showed a dependency of the test significance on the data at hand. 
Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seljak, Uroš, E-mail: useljak@berkeley.edu

    On large scales a nonlinear transformation of the matter density field can be viewed as a biased tracer of the density field itself. A nonlinear transformation also modifies the redshift space distortions in the same limit, giving rise to a velocity bias. In models with primordial nongaussianity a nonlinear transformation generates a scale dependent bias on large scales. We derive analytic expressions for the large scale bias, the velocity bias and the redshift space distortion (RSD) parameter β, as well as the scale dependent bias from primordial nongaussianity for a general nonlinear transformation. These biases can be expressed entirely in terms of the one-point distribution function (PDF) of the final field and the parameters of the transformation. The analysis shows that one can view the large scale bias different from unity and primordial nongaussianity bias as a consequence of converting higher order correlations in density into 2-point correlations of its nonlinear transform. Our analysis allows one to devise nonlinear transformations with nearly arbitrary bias properties, which can be used to increase the signal in the large scale clustering limit. We apply the results to the ionizing equilibrium model of the Lyman-α forest, in which the Lyman-α flux F is related to the density perturbation δ via a nonlinear transformation. Velocity bias can be expressed as an average over the Lyman-α flux PDF. At z = 2.4 we predict a velocity bias of −0.1, compared to the observed value of −0.13±0.03. Bias and primordial nongaussianity bias depend on the parameters of the transformation. Measurements of bias can thus be used to constrain these parameters, and for reasonable values of the ionizing background intensity we can match the predictions to observations.
Matching to the observed values, we predict the ratio of the primordial nongaussianity bias to the bias to have the opposite sign and lower magnitude than the corresponding values for highly biased galaxies, but this depends on the model parameters and can also vanish or change sign.

  4. The Effects of Forming Parameters on Conical Ring Rolling Process

    PubMed Central

    Meng, Wen; Zhao, Guoqun; Guan, Yanjin

    2014-01-01

    The plastic penetration condition and biting-in condition of a radial conical ring rolling process with a closed die structure on the top and bottom of the driven roll, abbreviated RCRRCDS, were established. The reasonable value range of the mandrel feed rate in the rolling process was deduced. A coupled thermomechanical 3D FE model of the RCRRCDS process was established. The changing laws of the equivalent plastic strain (PEEQ) and temperature distributions with rolling time were investigated. The effects of the ring's outer radius growth rate and roll sizes on the uniformities of the PEEQ and temperature distributions, average rolling force, and average rolling moment were studied. The results indicate that the PEEQ at the inner layer and outer layer of the rolled ring is larger than that at the middle layer of the ring; the temperatures at the "obtuse angle zone" of the ring's cross-section are higher than those at the "acute angle zone"; the temperature at the central part of the ring is higher than that at the middle part of the ring's outer surfaces. As the ring's outer radius growth rate increases within its reasonable value range, the uniformities of the PEEQ and temperature distributions increase. Finally, the optimal values of the ring's outer radius growth rate and roll sizes were obtained. PMID:25202716

  5. Modeling and predicting the biofilm formation of Salmonella Virchow with respect to temperature and pH.

    PubMed

    Ariafar, M Nima; Buzrul, Sencer; Akçelik, Nefise

    2016-03-01

    Biofilm formation of Salmonella Virchow was monitored with respect to time at three different temperature (20, 25 and 27.5 °C) and pH (5.2, 5.9 and 6.6) values. As the temperature increased at a constant pH level, biofilm formation decreased, while as the pH level increased at a constant temperature, biofilm formation increased. The modified Gompertz equation, with high adjusted coefficient of determination (adjusted R²) and low mean square error (MSE) values, produced reasonable fits for the biofilm formation under all conditions. Parameters of the modified Gompertz equation could be described in terms of temperature and pH by use of a second-order polynomial function. In general, as temperature increased, maximum biofilm quantity, maximum biofilm formation rate and time of acceleration of biofilm formation decreased; whereas, as pH increased, maximum biofilm quantity, maximum biofilm formation rate and time of acceleration of biofilm formation increased. Two temperature (23 and 26 °C) and pH (5.3 and 6.3) values were used up to 24 h to predict the biofilm formation of S. Virchow. Although the predictions did not perfectly match the data, reasonable estimates were obtained. In principle, modeling and predicting the biofilm formation of different microorganisms on different surfaces under various conditions could be possible.
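
    A sketch of fitting a modified Gompertz equation to synthetic biofilm data. The Zwietering parametrization is assumed here (maximum quantity A, maximum rate mu, lag/acceleration time lam); the paper may use a different form, and all parameter values below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, A, mu, lam):
    """Modified Gompertz (Zwietering form): A = max biofilm quantity,
    mu = max formation rate, lam = time of acceleration (lag)."""
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

# Synthetic biofilm quantity over 48 h (illustrative parameters)
rng = np.random.default_rng(5)
t = np.linspace(0, 48, 25)  # hours
y = modified_gompertz(t, 1.2, 0.08, 6.0) + rng.normal(0, 0.01, t.size)

popt, _ = curve_fit(modified_gompertz, t, y, p0=(1.0, 0.05, 5.0))
A_fit, mu_fit, lam_fit = popt
```

Fitting the same form at each temperature/pH combination, and then regressing A, mu, and lam on temperature and pH, mirrors the secondary-model step the abstract describes.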

  6. Evaluation of measurement uncertainty of glucose in clinical chemistry.

    PubMed

    Berçik Inal, B; Koldas, M; Inal, H; Coskun, C; Gümüs, A; Döventas, Y

    2007-04-01

    The International Vocabulary of Basic and General Terms in Metrology (VIM) defines uncertainty of measurement as a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand. Uncertainty of measurement comprises many components. In addition to the measured parameter itself, a measurement uncertainty value should be given by all accredited institutions; this value shows the reliability of the measurement. GUM, published by NIST, contains uncertainty directions. The Eurachem/CITAC Guide CG4 was also published by the Eurachem/CITAC Working Group in the year 2000. Both offer a mathematical model with which uncertainty can be calculated. There are two types of uncertainty evaluation in measurement: type A is the evaluation of uncertainty through statistical analysis, and type B is the evaluation of uncertainty through other means, for example, a certified reference material. The Eurachem Guide uses four types of distribution functions: (1) a rectangular distribution, which gives limits without specifying a level of confidence (u(x) = a/√3); (2) a triangular distribution, for values near to the same point (u(x) = a/√6); (3) a normal distribution, in which an uncertainty is given in the form of a standard deviation s, a relative standard deviation s/√n, or a coefficient of variance CV% without specifying the distribution (a = certificate value, u = standard uncertainty); and (4) a confidence interval.
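
    The distribution-to-standard-uncertainty conversions listed above reduce to one-line formulas; the numeric limits below are invented for illustration.

```python
import math

# Half-width a of the stated limits, e.g. a certificate of ±0.05 mmol/L
a = 0.05

u_rect = a / math.sqrt(3)  # (1) rectangular: limits, no confidence level
u_tri = a / math.sqrt(6)   # (2) triangular: values cluster near the centre

# Normal distribution quoted as an expanded uncertainty U = k * u
U, k = 0.04, 2.0
u_norm = U / k
```

Note that for the same half-width a, the triangular assumption gives a smaller standard uncertainty than the rectangular one, since it concentrates probability near the centre.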

  7. Fuzzy Neural Networks for Decision Support in Negotiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakas, D. P.; Vlachos, D. S.; Simos, T. E.

    There is a large number of parameters which one can take into account when building a negotiation model. These parameters are in general uncertain, thus leading to models which represent them with fuzzy sets. On the other hand, the nature of these parameters makes them very difficult to model with precise values. During negotiation, these parameters play an important role by altering the outcomes or changing the state of the negotiators. One reasonable way to model this procedure is to accept fuzzy relations (from theory or experience). The action of these relations on fuzzy sets produces new fuzzy sets which describe the new state of the system or the modified parameters. But in the majority of these situations, the relations are multidimensional, leading to complicated models and exponentially increasing computational time. In this paper a solution to this problem is presented. The use of fuzzy neural networks is shown to be able to substitute for fuzzy relations with comparable results. Finally, a simple simulation is carried out in order to test the new method.

  8. Four-year stability of anthropometric and cardio-metabolic parameters in a prospective cohort of older adults.

    PubMed

    Jackson, Sarah E; van Jaarsveld, Cornelia Hm; Beeken, Rebecca J; Gunter, Marc J; Steptoe, Andrew; Wardle, Jane

    2015-01-01

    To examine the medium-term stability of anthropometric and cardio-metabolic parameters in the general population. Participants were 5160 men and women from the English Longitudinal Study of Ageing (age ≥50 years) assessed in 2004 and 2008. Anthropometric data included height, weight, BMI and waist circumference. Cardio-metabolic parameters included blood pressure, serum lipids (total cholesterol, HDL, LDL, triglycerides), hemoglobin, fasting glucose, fibrinogen and C-reactive protein. Stability of anthropometric variables was high (all intraclass correlations >0.92), although mean values changed slightly (-0.01 kg weight, +1.33 cm waist). Cardio-metabolic parameters showed more variation: correlations ranged from 0.43 (glucose) to 0.81 (HDL). The majority of participants (71-97%) remained in the same grouping relative to established clinical cut-offs. Over a 4-year period, anthropometric and cardio-metabolic parameters showed good stability. These findings suggest that when no means to obtain more recent data exist, a one-time sample will give a reasonable approximation to average levels over the medium-term, although reliability is reduced.

  9. Deriving aerosol parameters from in-situ spectrometer measurements for validation of remote sensing products

    NASA Astrophysics Data System (ADS)

    Riedel, Sebastian; Janas, Joanna; Gege, Peter; Oppelt, Natascha

    2017-10-01

    Uncertainties of aerosol parameters are the limiting factor for atmospheric correction over inland and coastal waters. For validating remote sensing products from these optically complex and spatially inhomogeneous waters, the spatial resolution of automated sun photometer networks like AERONET is too coarse, and additional measurements at the test site are required. We have developed a method which allows the derivation of aerosol parameters from measurements with any spectrometer of suitable spectral range and resolution. This method uses a pair of downwelling irradiance and sky radiance measurements to extract the turbidity coefficient and the aerosol Ångström exponent. The data can be acquired quickly and reliably at almost any place during a wide range of weather conditions. A comparison to aerosol parameters measured with a Cimel sun photometer provided by AERONET shows reasonable agreement for the Ångström exponent. The turbidity coefficient did not agree well with AERONET values due to fit ambiguities, indicating that future research should focus on methods to handle parameter correlations within the underlying model.

  10. Parametrization of free ion levels of four isoelectronic 4f2 systems: Insights into configuration interaction parameters

    NASA Astrophysics Data System (ADS)

    Yeung, Yau Yuen; Tanner, Peter A.

    2013-12-01

    The experimental free ion 4f2 energy level data sets comprising 12 or 13 J-multiplets of La+, Ce2+, Pr3+ and Nd4+ have been fitted by a semiempirical atomic Hamiltonian comprising 8, 10, or 12 freely-varying parameters. The root mean square errors were 16.1, 1.3, 0.3 and 0.3 cm-1, respectively for fits with 10 parameters. The fitted inter-electronic repulsion and magnetic parameters vary linearly with ionic charge, i, but better linear fits are obtained with (4-i)2, although the reason is unclear at present. The two-body configuration interaction parameters α and β exhibit a linear relation with [ΔE(bc)]-1, where ΔE(bc) is the energy difference between the 4f2 barycentre and that of the interacting configuration, namely 4f6p for La+, Ce2+, and Pr3+, and 5p54f3 for Nd4+. The linear fit provides the rationale for the negative value of α for the case of La+, where the interacting configuration is located below 4f2.

  11. Statistical Analyses of Femur Parameters for Designing Anatomical Plates.

    PubMed

    Wang, Lin; He, Kunjin; Chen, Zhengming

    2016-01-01

    Femur parameters are key prerequisites for scientifically designing anatomical plates. Meanwhile, individual differences in femurs present a challenge to designing well-fitting anatomical plates. Therefore, to design anatomical plates more scientifically, femur parameters were analyzed with statistical methods in this study. The specific steps were as follows. First, taking eight anatomical femur parameters as variables, 100 femur samples were classified into three classes with factor analysis and Q-type cluster analysis. Second, based on the mean parameter values of the three classes of femurs, three sizes of average anatomical plates corresponding to the three classes were designed. Finally, based on Bayes discriminant analysis, a new femur could be assigned to the proper class, and the average anatomical plate suitable for that femur selected from the three available sizes. Experimental results showed that the classification of femurs was quite reasonable based on the anatomical aspects of the femurs. For instance, three sizes of condylar buttress plates were designed, and 20 new femurs were assigned to their proper classes, after which suitable condylar buttress plates were determined and selected.
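
    A rough sketch of the classify-then-select pipeline. Synthetic two-dimensional "femur parameters" stand in for the eight anatomical measurements, k-means stands in for the factor/Q-type cluster analysis, and nearest-centroid assignment stands in for Bayes discriminant analysis, so this is an analogy to the workflow, not a reimplementation of the study's method.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Three synthetic size classes of "femurs" (measurements in mm,
# invented for illustration)
rng = np.random.default_rng(9)
centers = np.array([[42.0, 28.0], [46.0, 31.0], [50.0, 34.0]])
femurs = np.vstack([c + rng.normal(0, 0.8, (30, 2)) for c in centers])

# Step 1-2: cluster the samples into three classes; one plate size
# would be designed per class centroid
centroids, labels = kmeans2(femurs, k=3, seed=0, minit='++')

# Step 3: assign a new femur to the nearest class centroid
def assign(new_femur):
    """Return the class index whose centroid is nearest."""
    return int(np.argmin(np.linalg.norm(centroids - new_femur, axis=1)))

plate_class = assign(np.array([49.5, 33.8]))
```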

  12. Rheological constraints on ridge formation on Icy Satellites

    NASA Astrophysics Data System (ADS)

    Rudolph, M. L.; Manga, M.

    2010-12-01

    The processes responsible for forming ridges on Europa remain poorly understood. We use a continuum damage mechanics approach to model ridge formation. The main objectives of this contribution are to constrain (1) the choice of rheological parameters and (2) the maximum ridge size and rate of formation. The key rheological parameters to constrain appear in the evolution equation for a damage variable D, Ḋ = B⟨σ⟩^r (1-D)^(-k) - αD p/μ, and in the equation relating damage accumulation to volumetric changes, Jρ0 = δ(1-D). Similar damage evolution laws have been applied to terrestrial glaciers and to the analysis of rock mechanics experiments. However, it is reasonable to expect that, like viscosity, the rheological constants B, α, and δ depend strongly on temperature, composition, and ice grain size. In order to determine whether the damage model is appropriate for Europa’s ridges, we must find values of the unknown damage parameters that reproduce ridge topography. We perform a suite of numerical experiments to identify the region of parameter space conducive to ridge production and show the sensitivity to changes in each unknown parameter.
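    To illustrate how a damage evolution law of this form behaves, the sketch below integrates Ḋ = B⟨σ⟩^r (1-D)^(-k) - αD p/μ with forward Euler. All parameter values are illustrative placeholders, not the Europa values constrained in the study.

```python
def integrate_damage(B=1e-3, r=2.0, k=1.0, alpha=0.0, p=0.0, mu=1.0,
                     sigma=1.0, dt=1.0, steps=300):
    """Forward-Euler integration of the damage evolution law
    dD/dt = B*<sigma>^r * (1-D)^(-k) - alpha*D*p/mu,
    where <sigma> is the Macaulay bracket (positive part) of the stress.
    Parameter values here are illustrative only."""
    D, history = 0.0, [0.0]
    for _ in range(steps):
        dDdt = B * max(sigma, 0.0) ** r * (1.0 - D) ** (-k) - alpha * D * p / mu
        D = min(D + dt * dDdt, 0.999)  # cap to avoid the (1-D)^(-k) singularity
        history.append(D)
    return history

h = integrate_damage()
```

    With a constant positive stress and no healing term (alpha = 0), damage grows monotonically and accelerates as D approaches 1, which is the qualitative behavior such laws are chosen to capture.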

  13. Model for economic evaluation of high energy gas fracturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engi, D.

    1984-05-01

    The HEGF/NPV model has been developed and adapted for interactive microcomputer calculations of the economic consequences of reservoir stimulation by high energy gas fracturing (HEGF) in naturally fractured formations. This model makes use of three individual models: a model of the stimulated reservoir, a model of the gas flow in this reservoir, and a model of the discounted expected net cash flow (net present value, or NPV) associated with the enhanced gas production. Nominal values of the input parameters, based on observed data and reasonable estimates, are used to calculate the initial expected increase in the average daily rate of production resulting from the Meigs County HEGF stimulation experiment. Agreement with the observed initial increase in rate is good. On the basis of this calculation, production from the Meigs County Well is not expected to be profitable, but the HEGF/NPV model probably provides conservative results. Furthermore, analyses of the sensitivity of the expected NPV to variations in the values of certain reservoir parameters suggest that the use of HEGF stimulation in somewhat more favorable formations is potentially profitable. 6 references, 4 figures, 3 tables.
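    The NPV component of such a model reduces to discounting yearly net cash flows. A minimal sketch, with hypothetical cost and revenue figures rather than the Meigs County data:

```python
def npv(rate, flows):
    """Net present value of a series of yearly net cash flows:
    flows[0] occurs now (t = 0), flows[t] at the end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(flows))

# Hypothetical stimulation economics: up-front treatment cost,
# then five years of revenue from enhanced gas production.
flows = [-100000.0] + [30000.0] * 5
result = npv(0.10, flows)
```

    Sensitivity analyses like those in the report amount to re-evaluating npv() while varying the reservoir-driven cash flows or the discount rate.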

  14. Reducing physical size limits for low-frequency horn loudspeaker systems

    NASA Astrophysics Data System (ADS)

    Honeycutt, Richard Allison

    From 1881 until the present day, many excellent scholars have studied acoustic horns. This dissertation begins by discussing over eighty results of such study. Next, the methods of modeling horn behavior are examined with an emphasis on the prediction of throat impedance. Because of the time constraints in a product-design environment, in which the results of this study may be used, boundary-element and cascaded-section types of analysis were not considered due to their time intensiveness. Of the methods studied, an analytical process based upon Olson's adaptation of Webster's analysis is selected as the most accurate of the rapid methods, although other good methods exist. The reasons for, and extent of, its inaccuracy are discussed. The concept of interleaved horn loading is introduced: it involves using two horns of different parameters, fed by a single driver, with a view toward interleaving and thus smoothing the impedance peaks of the separate horns to produce a smoother response. The validity of the technique is demonstrated both theoretically and practically. Then the reactance annulling technique is explained and tested experimentally. It is found to work well, but the exact parameter values involved are not found to be critical. Finally, the considerations involved in building a practical working system are discussed, and a preliminary working model reviewed. Future work could be directed toward finding the optimum parameter values for the two "parallel horns" whose impedances are to be interleaved, as well as the system parameters that determine these optimum values. Also, further experimental investigation of ported loading of the back air chamber would be useful.
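    For reference, the throat impedance that the Webster-style analysis predicts for a single infinite exponential horn fits in a few lines. This is the textbook single-horn result, not the interleaved-horn technique introduced in the dissertation, and the flare constant and throat area below are arbitrary examples.

```python
import math

def exp_horn_throat_impedance(f, flare_m, S_throat, rho=1.21, c=343.0):
    """Throat acoustic impedance of an infinite exponential horn
    S(x) = S_throat * exp(flare_m * x), valid above the cutoff
    frequency f_c = flare_m * c / (4*pi) (plane-wave Webster theory)."""
    fc = flare_m * c / (4.0 * math.pi)
    if f <= fc:
        raise ValueError("below cutoff: the impedance is purely reactive")
    ratio = fc / f
    return (rho * c / S_throat) * (math.sqrt(1.0 - ratio ** 2) + 1j * ratio)

# Example: 3 m^-1 flare (cutoff near 82 Hz), 10 cm^2 throat
Z = exp_horn_throat_impedance(200.0, flare_m=3.0, S_throat=0.01)
```

    Well above cutoff the real part tends to the plane-wave value rho*c/S_throat; near cutoff it collapses, which is the behavior that interleaving two horns with different parameters aims to smooth.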

  15. Highly-Valued Reasons Muslim Caregivers Choose Evangelical Christian Schools

    ERIC Educational Resources Information Center

    Rumbaugh, Andrew E.

    2009-01-01

    This study investigated what were the most highly-valued reasons among Muslim caregivers for sending their children to Lebanese evangelical Christian schools. Muslim caregivers (N = 1,403) from four Lebanese evangelical Christian schools responded to determine what were the most highly-valued reasons for sending their children to an evangelical…

  16. 38 CFR 36.4339 - Eligibility of loans; reasonable value requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...; reasonable value requirements. 36.4339 Section 36.4339 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... Reporting § 36.4339 Eligibility of loans; reasonable value requirements. (a) Evidence of guaranty or... mortgages pursuant to 38 U.S.C. 3710(d), the loan (including any scheduled deferred interest added to...

  17. 38 CFR 36.4339 - Eligibility of loans; reasonable value requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...; reasonable value requirements. 36.4339 Section 36.4339 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... Reporting § 36.4339 Eligibility of loans; reasonable value requirements. (a) Evidence of guaranty or... mortgages pursuant to 38 U.S.C. 3710(d), the loan (including any scheduled deferred interest added to...

  18. 38 CFR 36.4339 - Eligibility of loans; reasonable value requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...; reasonable value requirements. 36.4339 Section 36.4339 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... Reporting § 36.4339 Eligibility of loans; reasonable value requirements. (a) Evidence of guaranty or... mortgages pursuant to 38 U.S.C. 3710(d), the loan (including any scheduled deferred interest added to...

  19. 38 CFR 36.4339 - Eligibility of loans; reasonable value requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...; reasonable value requirements. 36.4339 Section 36.4339 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... Reporting § 36.4339 Eligibility of loans; reasonable value requirements. (a) Evidence of guaranty or... mortgages pursuant to 38 U.S.C. 3710(d), the loan (including any scheduled deferred interest added to...

  20. 38 CFR 36.4339 - Eligibility of loans; reasonable value requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...; reasonable value requirements. 36.4339 Section 36.4339 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... Reporting § 36.4339 Eligibility of loans; reasonable value requirements. (a) Evidence of guaranty or... mortgages pursuant to 38 U.S.C. 3710(d), the loan (including any scheduled deferred interest added to...

  1. The Peculiar Electrical Properties of the 8-12th September, 2015 Massive Dust Outbreak Over the Levant

    NASA Astrophysics Data System (ADS)

    Yair, Y.; Katz, S.; Price, C.; Ziv, B.; Yaniv, R.

    2016-12-01

    Dust storms over the Levant and Eastern Mediterranean region are common and occur several times a year, depending on synoptic systems and meteorological parameters. Such storms are often accompanied by large electrical charging, most likely due to triboelectric processes (Esposito et al., 2016). The effects of dust storms on atmospheric electricity parameters such as the fair weather electric field (Ez) and current density (Jz) are well documented, but have not been extensively studied for the Levant region. We report new measurements conducted during the massive dust outbreak that occurred over the region on September 8-12, 2015. That event was one of the strongest dust storms on record and engulfed the entire region for 5 consecutive days, from Iraq through Syria, Jordan, Israel, Lebanon, the Eastern Mediterranean Sea, Cyprus and Egypt. Ground-based measurements of Ez and Jz were conducted at the Wise Observatory (WO) in Mizpe-Ramon (30°35'N, 34°45'E) and at Mt. Hermon (30°24'N, 35°51'E). The Aerosol Optical Thickness (AOT) obtained from the AERONET station in Sde-Boker reached values up to 4.0. During the dust outbreak very large fluctuations in the electrical parameters were measured at both stations; however, remarkable differences were noted. While at the Mt. Hermon station we registered strong positive values of the electric field and current density, the values registered at WO were significantly smaller and more negative. The Mt. Hermon site showed Ez and Jz values fluctuating between -460 and +570 V m-1 and between -14.5 and +18 pA m-2, respectively. In contrast, Ez values registered at WO were between -430 and +10 V m-1, while Jz fluctuated between -6 and +3 pA m-2. When compared with the February 2015 electrified dust storm reported by Yair et al. (2016), we note substantial differences in the variability, amplitude and polarity of the electrical parameters. The possible reasons for these differences will be discussed.

  2. Basic requirements for a 1000-MW(electric) class tokamak fusion-fission hybrid reactor and its blanket concept

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hatayama, Ariyoshi; Ogasawara, Masatada; Yamauchi, Michinori

    1994-08-01

    Plasma size and other basic performance parameters for 1000-MW(electric) power production are calculated with the blanket energy multiplication factor, the M value, as a parameter. The calculational model is based on the International Thermonuclear Experimental Reactor (ITER) physics design guidelines and includes overall plant power flow. Plasma size decreases as the M value increases. However, the improvement in plasma compactness and other basic performance parameters, such as the total plant power efficiency, becomes saturated above the M = 5 to 7 range. Thus, a value in the M = 5 to 7 range is a reasonable choice for 1000-MW(electric) hybrids. Typical plasma parameters for 1000-MW(electric) hybrids with a value of M = 7 are a major radius of R = 5.2 m, minor radius of a = 1.7 m, plasma current of I_p = 15 MA, and toroidal field on the axis of B_o = 5 T. The concept of a thermal fission blanket that uses light water as a coolant is selected as an attractive candidate for electricity-producing hybrids. An optimization study is carried out for this blanket concept. The result shows that a compact, simple structure with a uniform fuel composition for the fissile region is sufficient to obtain optimal conditions for suppressing the thermal power increase caused by fuel burnup. The maximum increase in the thermal power is +3.2%. The M value estimated from the neutronics calculations is ~7.0, which is confirmed to be compatible with the plasma requirement. These studies show that it is possible to use a tokamak fusion core with design requirements similar to those of ITER for a 1000-MW(electric) power reactor that uses existing thermal reactor technology for the blanket. 30 refs., 22 figs., 4 tabs.

  3. The value of multi ultra high-b-value DWI in grading cerebral astrocytomas and its association with aquaporin-4.

    PubMed

    Tan, Yan; Zhang, Hui; Wang, Xiao-Chun; Qin, Jiang-Bo; Wang, Le

    2018-06-01

    To investigate the value of multi-ultrahigh-b-value diffusion-weighted imaging (UHBV-DWI) in differentiating high-grade astrocytomas (HGAs) from low-grade astrocytomas (LGAs), and to analyze its association with aquaporin (AQP) expression, 40 astrocytomas divided into LGAs (N = 15) and HGAs (N = 25) were studied. Apparent diffusion coefficient (ADC) and UHBV-ADC values in solid parts and peritumoral edema were compared between the LGA and HGA groups by the t-test. Receiver operating characteristic curves were used to identify the better parameter, real-time polymerase chain reaction was used to assess AQP messenger ribonucleic acid (mRNA), and Spearman correlation analysis was used to assess the correlation of AQP mRNA with each parameter. ADC values in solid parts of HGAs were significantly lower than in LGAs (p = 0.02), while UHBV-ADC values of HGAs were significantly higher than in LGAs (p < 0.01). The area under the curve (AUC) of UHBV-ADC (0.810) was significantly larger than that of ADC (0.713; p = 0.041). AQP4 mRNA was significantly higher in HGAs than in LGAs (p < 0.01); there was less AQP9 mRNA and no AQP1 mRNA in the LGA and HGA groups (p > 0.05). The ADC value showed a negative correlation with AQP4 mRNA (r = -0.357; p = 0.024), whereas the UHBV-ADC value correlated positively with AQP4 mRNA (r = 0.646; p < 0.01). UHBV-DWI allowed for a more accurate grading of cerebral astrocytoma than DWI, and the UHBV-ADC value may be related to AQP4 mRNA levels; UHBV-DWI could thus be of value in the assessment of astrocytoma. Advances in knowledge: UHBV-DWI could have particular value for astrocytoma grading, and the level of AQP4 mRNA might be linked to the change of the UHBV-DWI parameter, which may explain the difference in UHBV-ADC between LGAs and HGAs.

  4. An extended X-Ray absorption fine structure (exafs) study of copper (II) sulphate pentahydrate

    NASA Astrophysics Data System (ADS)

    Joyner, Richard W.

    1980-05-01

    The EXAFS spectrum of copper (II) sulphate pentahydrate has been measured using synchrotron radiation. Comparison with the results of ab initio calculation gives a mean copper-oxygen distance of 1.95 Å, in reasonable agreement with the known value of 1.97 Å. The relation between the EXAFS Debye-Waller factor and thermal parameters measured by neutron diffraction is discussed. Absence in the EXAFS spectrum of evidence for the second-nearest neighbour oxygen atoms, at Cu-O ≈ 2.4 Å, is discussed.

  5. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems

    PubMed Central

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-01-01

    Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above-mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values.
This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, significantly outperforming all the methods previously used for these benchmark problems. PMID:17081289
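    To make the estimation problem concrete, the toy fragment below fits a two-parameter exponential decay by seeded random multistart search over a parameter box. It is a crude stand-in for the scatter-search metaheuristic described in the paper, useful only to illustrate the global-search formulation (minimize the sum of squared errors over parameter space); the model, data, and bounds are hypothetical.

```python
import math
import random

def sse(params, t_data, y_data):
    """Sum of squared errors of the model y = a*exp(-b*t)."""
    a, b = params
    return sum((y - a * math.exp(-b * t)) ** 2 for t, y in zip(t_data, y_data))

def random_multistart(t_data, y_data, n_starts=2000, seed=0):
    """Crude global search: sample parameter vectors uniformly in a box
    and keep the best fit -- a toy stand-in for scatter search."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(n_starts):
        cand = (rng.uniform(0.0, 5.0), rng.uniform(0.0, 2.0))
        c = sse(cand, t_data, y_data)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost

# Synthetic, noise-free data from the known model y = 2*exp(-0.5*t)
t_data = [0.0, 1.0, 2.0, 3.0, 4.0]
y_data = [2.0 * math.exp(-0.5 * t) for t in t_data]
(a_hat, b_hat), cost = random_multistart(t_data, y_data)
```

    A real scatter-search implementation would combine such diversified sampling with solution recombination and local refinement, which is where its efficiency gains come from.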

  6. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.

    PubMed

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-11-02

    We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above-mentioned) successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, significantly outperforming all the methods previously used for these benchmark problems.

  7. Physicochemical properties of some bottled water brands in Alexandria Governorate, Egypt.

    PubMed

    Ibrahim, Hesham Z; Mohammed, Heba A G; Hafez, Afaf M

    2014-08-01

    Many people use bottled water instead of tap water for many reasons, such as taste, ease of carrying, and the belief that it is safer than tap water. Irrespective of the reason, bottled water consumption has been growing steadily in the world for the past 30 years. In Egypt, consumption is still increasing, reaching 3.8 l/person/day despite the high price compared with tap water. The purpose of this study was to evaluate the physicochemical quality of some bottled water brands and to compare the quality with that reported on the manufacturer's labeling and with Egyptian and international standards. Fourteen bottled water brands were selected from the local markets of Alexandria city. Three bottles from each brand were randomly sampled, making a total sample size of 42 bottles. Sampling occurred between July 2012 and September 2012. Each bottle was analyzed for its physicochemical parameters and the average was calculated for each brand. The results obtained were compared with the Egyptian standard for bottled water, the Food and Drug Administration (FDA) standard, and the bottled water labels. In all bottles in the study, pH values ranged between 7.21 and 8.23, conductivity ranged between 195 and 675 μs/cm, and total dissolved solids, sulfate, chloride, and fluoride were within the range specified by the FDA. Calcium concentrations ranged between 2.7373 and 29.2183 mg/l, magnesium concentrations between 5.7886 and 17.6633 mg/l, sodium between 14.5 and 205.8 mg/l, and potassium between 6.5 and 29.8 mg/l. For heavy metals such as iron, zinc, copper, and manganese, all were in conformity with the Egyptian standards and FDA, but the nickel concentration in 11 brands was higher than the Egyptian standards. Twelve brands were higher than the Egyptian standards in cadmium concentration, but on comparison with the FDA only five brands exceeded the limits. Lead concentrations were out of range for all brands.
    Physicochemical parameters in all examined bottled water brands were consistent with the Egyptian standard and FDA, except for total dissolved solids, nickel, cadmium, and lead. Statistical analysis showed a significant difference (P<0.05) in all parameters tested between different brands, and the values on the bottled water labels were not in agreement with the analytical results.

  8. Parsimony and goodness-of-fit in multi-dimensional NMR inversion

    NASA Astrophysics Data System (ADS)

    Babak, Petro; Kryuchkov, Sergey; Kantzas, Apostolos

    2017-01-01

    Multi-dimensional nuclear magnetic resonance (NMR) experiments are often used for study of molecular structure and dynamics of matter in core analysis and reservoir evaluation. Industrial applications of multi-dimensional NMR involve a high-dimensional measurement dataset with complicated correlation structure and require rapid and stable inversion algorithms from the time domain to the relaxation rate and/or diffusion domains. In practice, applying existing inversion algorithms with a large number of parameter values leads to an infinite number of solutions with a reasonable fit to the NMR data. The interpretation of such variability of multiple solutions and selection of the most appropriate solution could be a very complex problem. In most cases the characteristics of materials have sparse signatures, and investigators would like to distinguish the most significant relaxation and diffusion values of the materials. To produce an easy-to-interpret and unique NMR distribution with a finite number of principal parameter values, we introduce a new method for NMR inversion. The method is constructed based on the trade-off between the conventional goodness-of-fit approach to multivariate data and the principle of parsimony guaranteeing inversion with the least number of parameter values. We suggest performing the inversion of NMR data using the forward stepwise regression selection algorithm. To account for the trade-off between goodness-of-fit and parsimony, the objective function is selected based on the Akaike Information Criterion (AIC). The performance of the developed multi-dimensional NMR inversion method and its comparison with conventional methods are illustrated using real data for samples with bitumen, water and clay.
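    The parsimony/goodness-of-fit trade-off via AIC can be illustrated in a few lines. The example below selects a polynomial degree rather than an NMR relaxation distribution, but the least-squares criterion AIC = n ln(RSS/n) + 2k is the same kind of score used to compare candidate solutions in a stepwise selection; the data and model family are hypothetical.

```python
import math
import numpy as np

def aic(n, rss, k):
    """Akaike Information Criterion for least-squares fits:
    AIC = n*ln(RSS/n) + 2k, trading goodness-of-fit against parsimony."""
    return n * math.log(rss / n) + 2 * k

# Choose a polynomial degree for noisy quadratic data by minimizing AIC.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x - 3.0 * x ** 2 + rng.normal(0.0, 0.05, x.size)

scores = {}
for deg in range(6):
    coeffs = np.polyfit(x, y, deg)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    scores[deg] = aic(x.size, rss, deg + 1)  # k = number of coefficients

best_deg = min(scores, key=scores.get)
```

    Degrees above the true one reduce RSS only marginally, so the 2k penalty dominates and the criterion settles on a parsimonious model, which is exactly the behavior wanted for sparse NMR signatures.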

  9. Guidelines for the Selection of Near-Earth Thermal Environment Parameters for Spacecraft Design

    NASA Technical Reports Server (NTRS)

    Anderson, B. J.; Justus, C. G.; Batts, G. W.

    2001-01-01

    Thermal analysis and design of Earth orbiting systems requires specification of three environmental thermal parameters: the direct solar irradiance, Earth's local albedo, and outgoing longwave radiance (OLR). In the early 1990s, data sets from the Earth Radiation Budget Experiment were analyzed on behalf of the Space Station Program to provide an accurate description of these parameters as a function of averaging time along the orbital path. This information, documented in SSP 30425 and, in more generic form, in NASA/TM-4527, enabled the specification of the proper thermal parameters for systems of various thermal response time constants. However, working with the engineering community and the SSP-30425 and TM-4527 products over a number of years revealed difficulties in the interpretation and application of this material. For this reason it was decided to develop this guidelines document to help resolve these issues of practical application. In the process, the data were extensively reprocessed and a new computer code, the Simple Thermal Environment Model (STEM), was developed to simplify the selection of parameters for input into extreme hot and cold thermal analyses and design specifications. As a result, greatly improved values for the cold-case OLR for high inclination orbits were derived. Thermal parameters for satellites in low, medium, and high inclination low-Earth orbit and with various system thermal time constants are recommended for analysis of extreme hot and cold conditions. Practical information on the interpretation and application of the data and an introduction to the STEM are included. Complete documentation for STEM is found in the user's manual, in preparation.

  10. 10 CFR 603.525 - Value and reasonableness of the recipient's cost sharing contribution.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Value and reasonableness of the recipient's cost sharing contribution. 603.525 Section 603.525 Energy DEPARTMENT OF ENERGY (CONTINUED) ASSISTANCE REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Pre-Award Business Evaluation Cost Sharing § 603.525 Value and reasonableness of the...

  11. Effect of Different Solar Radiation Data Sources on the Variation of Techno-Economic Feasibility of PV Power System

    NASA Astrophysics Data System (ADS)

    Alghoul, M. A.; Ali, Amer; Kannanaikal, F. V.; Amin, N.; Aljaafar, A. A.; Kadhim, Mohammed; Sopian, K.

    2017-11-01

    The aim of this study is to evaluate the variation in the techno-economic feasibility of a PV power system under different data sources of solar radiation. The HOMER simulation tool is used to predict the techno-economic feasibility parameters of a PV power system in Baghdad city, Iraq, located at (33.3128° N, 44.3615° E), as a case study. Four data sources of solar radiation, different annual capacity shortage percentages (0, 2.5, 5, and 7.5%), and a wide range of daily load profiles (10-100 kWh/day) are implemented. The analyzed techno-economic feasibility parameters are COE ($/kWh), PV array power capacity (kW), PV electrical production (kWh/year), number of batteries, and battery lifetime (year). The main results of the study revealed the following: (1) solar radiation from different data sources caused noticeable to significant variation in the values of the techno-economic feasibility parameters; therefore, careful attention must be paid to ensure the use of accurate solar input data; (2) the average solar radiation over the different data sources can be recommended as a reasonable input; (3) as the size of the PV power system increases, the effect of the solar radiation data source increases, causing significant variation in the values of the techno-economic feasibility parameters.

  12. Bayesian approach to estimate AUC, partition coefficient and drug targeting index for studies with serial sacrifice design.

    PubMed

    Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William

    2014-03-01

    The current study presents a Bayesian approach to non-compartmental analysis (NCA), which provides accurate and precise estimates of AUC(0-∞) and any AUC(0-∞)-based NCA parameter or derivation. In order to assess the performance of the proposed method, 1,000 simulated datasets were generated in different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC(0-∞) values and the tissue-to-plasma AUC(0-∞) ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied to laboratory data from a mouse brain distribution study with a serial sacrifice design for illustration. The Bayesian NCA approach is accurate and precise in point estimation of AUC(0-∞) and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application in the case study obtained a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach for sparse data analysis provides statistical inference on the variability of AUC(0-∞)-based parameters such as the partition coefficient and drug targeting index, so that the comparison of these parameters following destructive sampling becomes statistically feasible.
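    For comparison, a classic (non-Bayesian) point estimate of AUC(0-∞) is straightforward: trapezoidal integration to the last sample plus the standard C_last/λz tail extrapolation. The sketch below uses synthetic mono-exponential data; it is the deterministic Bailer-style calculation, not the Bayesian posterior machinery of the paper.

```python
import numpy as np

def auc_0_inf(times, conc):
    """AUC(0-inf) point estimate: linear trapezoidal rule to the last
    sample, plus the tail extrapolation C_last / lambda_z, with
    lambda_z taken from a log-linear fit of the last three points."""
    auc_last = float(np.sum((conc[1:] + conc[:-1]) * np.diff(times)) / 2.0)
    slope, _ = np.polyfit(times[-3:], np.log(conc[-3:]), 1)
    return auc_last + conc[-1] / (-slope)

# Mono-exponential profile C(t) = 100*exp(-0.2*t); true AUC(0-inf) = 100/0.2 = 500
t = np.linspace(0.0, 24.0, 49)
c = 100.0 * np.exp(-0.2 * t)
est = auc_0_inf(t, c)
```

    Under a serial sacrifice design each animal contributes one time point, so this point estimate alone gives no variance; that gap is what Bailer-type variance formulas and the Bayesian approach address.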

  13. Evaluation of Different Dose-Response Models for High Hydrostatic Pressure Inactivation of Microorganisms

    PubMed Central

    2017-01-01

    Modeling of microbial inactivation by high hydrostatic pressure (HHP) requires a plot of the log microbial count or survival ratio versus time under constant pressure and temperature. However, at low pressure and temperature values, very long holding times are needed to obtain measurable inactivation. Since time has a significant effect on the cost of HHP processing, it may be reasonable to fix the time at an appropriate value and quantify the inactivation with respect to pressure. Such a plot is called a dose-response curve, and it may be more beneficial than traditional inactivation modeling since short holding times with different pressure values can be selected and used for the modeling of HHP inactivation. For this purpose, 49 dose-response curves (with at least 4 log10 reduction, ≥5 data points including the atmospheric pressure value (P = 0.1 MPa), and holding time ≤10 min) for HHP inactivation of microorganisms obtained from published studies were fitted with four different models, namely the Discrete model, Shoulder model, Fermi equation, and Weibull model, and the pressure value needed for a 5-log10 reduction (P5) was calculated for each model. The Shoulder model and Fermi equation produced exactly the same parameter and P5 values, while the Discrete model produced similar or sometimes exactly the same parameter values as the Fermi equation. The Weibull model produced the worst fit (the lowest adjusted determination coefficient (R²adj) and highest mean square error (MSE) values), while the Fermi equation had the best fit (the highest R²adj and lowest MSE values). The parameters of the models and the P5 values of each model can be useful for further experimental design of HHP processing and for comparing the pressure resistance of different microorganisms. Further experiments can be done to verify the P5 values at given conditions.
    The procedure given in this study can also be extended to enzyme inactivation by HHP. PMID:28880255
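    As a worked sketch, a Fermi (logistic) dose-response curve S(P) = 1/(1 + exp((P - Pc)/k)) can be fitted by a coarse grid search and P5 read off in closed form by solving S(P) = 10^-5. The pressure grid and the Pc and k values below are synthetic illustrations, not fitted values from the surveyed studies.

```python
import math

def fermi_survival(P, Pc, k):
    """Fermi (logistic) dose-response: survival ratio at pressure P (MPa)."""
    return 1.0 / (1.0 + math.exp((P - Pc) / k))

def p5_from_fermi(Pc, k):
    """Pressure giving a 5-log10 reduction: solve S(P) = 1e-5."""
    return Pc + k * math.log(1e5 - 1.0)

# Synthetic dose-response data generated from Pc = 300 MPa, k = 25 MPa
pressures = [0.1, 100.0, 200.0, 300.0, 400.0, 500.0]
log_s = [math.log10(fermi_survival(P, 300.0, 25.0)) for P in pressures]

# Coarse grid search for (Pc, k) minimizing squared error in log10 survival
best, best_err = None, float("inf")
for Pc in range(200, 401, 5):
    for k in range(5, 51):
        err = sum((math.log10(fermi_survival(P, Pc, k)) - ls) ** 2
                  for P, ls in zip(pressures, log_s))
        if err < best_err:
            best, best_err = (Pc, k), err
Pc_hat, k_hat = best
```

    With the fit in hand, P5 = Pc + k·ln(10^5 - 1), i.e. about 588 MPa for these illustrative parameters; in practice one would verify such a value experimentally, as the abstract suggests.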

  14. Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation

    NASA Astrophysics Data System (ADS)

    Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah

    2018-04-01

    The CNC machine is controlled by manipulating cutting parameters that directly influence process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values, mainly because of a lack of knowledge of optimization techniques. Therefore, a simple yet easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. This new system consists of two stages: modelling and optimization. For modelling of input-output and in-process parameters, a hybrid of Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple yet easy-to-implement optimization technique that gives accurate results while being fast.
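    A generic particle swarm optimization loop of the kind used in the optimization stage fits in a few dozen lines. The sketch below minimizes a toy quadratic cost surface standing in for the machining performance function; it is not the paper's ELM-PSO hybrid, whose model and parameter settings are not given here.

```python
import random

def pso_minimize(f, dim=2, n_particles=20, iters=100, seed=1,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal PSO: each velocity update blends inertia, a pull toward
    the particle's own best point, and a pull toward the swarm best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for a machining cost surface, with its minimum at (1, 2)
cost = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2
best, val = pso_minimize(cost)
```

    In the system described here, f would instead be the trained surrogate model of the turning process, so each swarm evaluation is cheap and no new machining experiment is needed.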

  15. Generational differences in American students' reasons for going to college, 1971-2014: The rise of extrinsic motives.

    PubMed

    Twenge, Jean M; Donnelly, Kristin

    2016-01-01

    We examined generational differences in reasons for attending college among a nationally representative sample of college students (N = 8 million) entering college between 1971 and 2014. We validated the items on reasons for attending college against an established measure of extrinsic and intrinsic values among college students in 2014 (n = 189). Millennials (in college 2000s-2010s) and Generation X (1980s-1990s) valued extrinsic reasons for going to college ("to make more money") more, and anti-extrinsic reasons ("to gain a general education and appreciation of ideas") less, than Boomers did when they were the same age in the 1960s-1970s. Extrinsic reasons for going to college were higher in years with more income inequality, higher college enrollment, and stronger extrinsic values. These results mirror previous research finding generational increases in extrinsic values begun by GenX and continued by Millennials, suggesting that more recent generations are more likely to favor extrinsic values in their decision-making.

  16. Electromagnetic Performance Calculation of HTS Linear Induction Motor for Rail Systems

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Fang, Jin; Cao, Junci; Chen, Jie; Shu, Hang; Sheng, Long

    2017-07-01

    For a high temperature superconducting (HTS) linear induction motor (LIM) designed for rail systems, the influence of electromagnetic parameters and mechanical structure parameters on the electromagnetic horizontal thrust and vertical force of the HTS LIM and on the maximum vertical magnetic field of the HTS windings is analyzed. Through research on the vertical field of the HTS windings, the variation of the maximum input current of the HTS LIM with stator frequency and with the thickness of the secondary conductive plate is obtained. These theoretical results are of great significance for analyzing the stability of the HTS LIM. Finally, based on the theoretical analysis, an HTS LIM test platform was built and experiments were carried out under load. The experimental results show that the theoretical analysis is correct and reasonable.

  17. [Application of diffusion tensor imaging in judging infarction time of acute ischemic cerebral infarction].

    PubMed

    Dai, Zhenyu; Chen, Fei; Yao, Lizheng; Dong, Congsong; Liu, Yang; Shi, Haicun; Zhang, Zhiping; Yang, Naizhong; Zhang, Mingsheng; Dai, Yinggui

    2015-08-18

    To evaluate the clinical application value of diffusion tensor imaging (DTI) and diffusion tensor tractography (DTT) in judging the infarction time phase of acute ischemic cerebral infarction. DTI images of 52 patients with unilateral acute ischemic cerebral infarction (hyper-acute, acute, and sub-acute) from the Affiliated Yancheng Hospital of Southeast University Medical College, diagnosed clinically and by magnetic resonance imaging, were analyzed retrospectively. Regions of interest (ROIs) were set over the infarction lesion, the brain tissue adjacent to the lesion, and the corresponding contralateral normal brain tissue on DTI parameter maps of fractional anisotropy (FA), volume ratio anisotropy (VRA), average diffusion coefficient (DCavg), and exponential attenuation (Exat); the parameter values of the ROIs were recorded and the values of the infarction lesion relative to the contralateral side were calculated. Meanwhile, DTT images were reconstructed from seed points in the infarction lesion and the contralateral side. Each parameter value was compared among the infarction lesion, the adjacent brain tissue, and the contralateral side, and the differences in the relative parameter values across infarction time phases were analyzed. The DTT images of acute ischemic cerebral infarction in each time phase could show damaged fasciculi. The DCavg value of the infarction lesion was lower, and the Exat value higher, than the contralateral side in each time phase (P<0.05). The FA and VRA values of the lesion were reduced relative to the contralateral side only in acute and sub-acute infarction (P<0.05). The FA, VRA, and Exat values of the adjacent brain tissue were increased, and the DCavg value decreased, relative to the contralateral side in hyper-acute infarction (P<0.05). There were no statistically significant differences in the FA, VRA, DCavg, and Exat values of the adjacent brain tissue in acute and sub-acute infarction. 
    The relative FA and VRA values of the lesion gradually decreased from hyper-acute to sub-acute infarction (P<0.05), although there was no difference in the relative VRA value between acute and sub-acute infarction. The relative DCavg value in hyper-acute infarction differed from that in acute and sub-acute infarction (P<0.05), while there was no difference between acute and sub-acute infarction. ROC curve analysis showed that the best diagnostic cut-off values of the relative FA, VRA, and DCavg of the lesion between hyper-acute and acute infarction were 0.852, 0.886, and 0.541, respectively, and the best diagnostic cut-off value of the relative FA between acute and sub-acute infarction was 0.595. The FA, VRA, DCavg, and Exat values show specific patterns of change across the time phases of acute ischemic cerebral infarction and can be used in combination to judge the infarction time phase when the onset time is unclear, thus helping to select a reasonable treatment protocol.
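Cut-off values like those reported are typically obtained by maximizing Youden's index (sensitivity + specificity - 1) over the ROC curve; a sketch with made-up relative-FA values (illustrative only, not the study's data):

```python
def best_cutoff(positives, negatives):
    """Pick the threshold maximizing Youden's J; a case is called
    'positive' (later phase) when its value falls below the cutoff."""
    values = sorted(set(positives) | set(negatives))
    # candidate cutoffs: midpoints between consecutive observed values
    cands = [(a + b) / 2.0 for a, b in zip(values, values[1:])]
    def j(t):
        sens = sum(v < t for v in positives) / len(positives)
        spec = sum(v >= t for v in negatives) / len(negatives)
        return sens + spec - 1.0
    return max(cands, key=j)

# Hypothetical relative FA values (lower FA suggests a later phase)
acute = [0.80, 0.78, 0.84]   # "positive" group
hyper = [0.92, 0.88, 0.86]   # "negative" group
print(round(best_cutoff(acute, hyper), 3))  # → 0.85
```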

  18. Efficiently Selecting the Best Web Services

    NASA Astrophysics Data System (ADS)

    Goncalves, Marlene; Vidal, Maria-Esther; Regalado, Alfredo; Yacoubi Ayadi, Nadia

    Emerging technologies and linked-data initiatives have motivated the publication of a large number of datasets, and provide the basis for publishing Web services and tools to manage the available data. This wealth of resources opens a world of possibilities to satisfy user requests. However, Web services may have similar functionality yet exhibit different performance; it is therefore necessary to identify, among the Web services that satisfy a user request, the ones with the best quality. In this paper we propose a hybrid approach that combines reasoning tasks with ranking techniques to select the Web services that best implement a user request. Web service functionality is described in terms of input and output attributes annotated with existing ontologies, non-functional properties are represented as Quality of Service (QoS) parameters, and user requests correspond to conjunctive queries whose sub-goals impose restrictions on the functionality and quality of the services to be selected. The ontology annotations are used in different reasoning tasks to infer implicit service properties and to enlarge the service search space. Furthermore, the QoS parameters are considered by a ranking metric that classifies the services according to how well they meet a user's non-functional condition. We assume that all the QoS parameters of the non-functional condition are equally important, and apply the Top-k Skyline approach to select the k services that best meet this condition. Our proposal relies on a two-fold solution: a deductive engine that performs different reasoning tasks to discover the services satisfying the requested functionality, and an efficient implementation of the Top-k Skyline approach to compute the top-k services that meet the majority of the QoS constraints. 
    Our Top-k Skyline solution exploits the properties of the Skyline Frequency metric and identifies the top-k services by analyzing only a subset of the services that meet the non-functional condition. We report on the effects of the proposed reasoning tasks, the quality of the top-k services selected by the ranking metric, and the performance of the proposed ranking techniques. Our results suggest that the number of candidate services can be augmented by up to two orders of magnitude. In addition, our ranking techniques are able to identify services that have the best values in at least half of the QoS parameters, while performance is improved.
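The skyline over QoS vectors is the Pareto-optimal (non-dominated) set; a minimal sketch with hypothetical services and QoS attributes, all normalized so that lower is better:

```python
def dominates(a, b):
    """a dominates b if a is <= b in every QoS dimension and < in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(services):
    """Return the non-dominated services (name -> QoS vector, lower is better)."""
    return {n: q for n, q in services.items()
            if not any(dominates(o, q) for m, o in services.items() if m != n)}

# Hypothetical QoS vectors: (latency in ms, 1 - availability, cost)
services = {
    "s1": (120.0, 0.01, 5.0),
    "s2": (80.0, 0.05, 7.0),
    "s3": (130.0, 0.02, 6.0),   # dominated by s1 in every dimension
    "s4": (200.0, 0.005, 4.0),
}
print(sorted(skyline(services)))  # → ['s1', 's2', 's4']
```

In the paper's Top-k step, the skyline services would then be ranked, e.g. by how frequently each appears in the skylines of attribute subsets (the Skyline Frequency metric), to keep only k of them.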

  19. Technique of optimization of minimum temperature driving forces in the heaters of regeneration system of a steam turbine unit

    NASA Astrophysics Data System (ADS)

    Shamarokov, A. S.; Zorin, V. M.; Dai, Fam Kuang

    2016-03-01

    At the current stage of development of nuclear power engineering, high demands are made on nuclear power plants (NPPs), including on their economic performance. Under these conditions, improving NPP quality means, in particular, reasonably choosing the values of the numerous controlled parameters of the technological (heat) scheme. Furthermore, the chosen values should correspond to the economic conditions of NPP operation, which usually lie a considerable time interval beyond the moment when the parameters are chosen. This article presents a technique for optimizing the controlled parameters of the heat circuit of a steam turbine plant for the future. Its distinctive feature is that the results are obtained as a function of a complex parameter combining external economic and operating parameters that remain relatively stable under a changing economic environment. The article presents the results of optimizing, by this technique, the minimum temperature driving forces in the surface heaters of the heat regeneration system of a K-1200-6.8/50 steam turbine plant. For the optimization, the collector-screen heaters of high and low pressure developed at the OAO All-Russia Research and Design Institute of Nuclear Power Machine Building, which, in the authors' opinion, have certain advantages over other types of heaters, were chosen. The optimality criterion was the change in annual reduced costs for the NPP compared to the version accepted as the baseline. The influence on the solution of independent variables not included in the complex parameter was analyzed. The optimization problem was solved using the alternating-variable descent method. The obtained values of the minimum temperature driving forces can guide the design of new nuclear plants with a heat circuit similar to that considered here.

  20. Refining value-at-risk estimates using a Bayesian Markov-switching GJR-GARCH copula-EVT model.

    PubMed

    Sampid, Marius Galabe; Hasim, Haslifah M; Dai, Hongsheng

    2018-01-01

    In this paper, we propose a model for forecasting Value-at-Risk (VaR) using a Bayesian Markov-switching GJR-GARCH(1,1) model with skewed Student's-t innovations, copula functions, and extreme value theory (EVT). The Bayesian Markov-switching GJR-GARCH(1,1) model, which identifies non-constant volatility over time and allows the GARCH parameters to vary over time following a Markov process, is combined with copula functions and EVT to formulate the Bayesian Markov-switching GJR-GARCH(1,1) copula-EVT VaR model, which is then used to forecast the level of risk on financial asset returns. We further propose a new method for threshold selection in EVT analysis, which we term the hybrid method. Empirical and back-testing results show that the proposed VaR models capture VaR reasonably well in periods of calm and in periods of crisis.
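For context on what a VaR forecast quantifies, a far simpler historical-simulation baseline (not the authors' model, and with a made-up return series) can be sketched as:

```python
def historical_var(returns, level=0.95):
    """One-period historical-simulation VaR: the loss threshold exceeded
    with probability (1 - level) in the empirical loss distribution."""
    losses = sorted(-r for r in returns)   # losses as positive numbers
    k = int(level * len(losses))           # empirical quantile index
    k = min(k, len(losses) - 1)
    return losses[k]

# Tiny illustrative daily return series (made up)
rets = [0.01, -0.02, 0.005, -0.035, 0.012, -0.01, 0.003, -0.05,
        0.02, -0.004, 0.015, -0.025, 0.007, -0.015, 0.01, -0.008,
        0.004, -0.012, 0.009, -0.03]
print(historical_var(rets, 0.95))  # 95% VaR of the sample
```

The paper's contribution is precisely to replace this static empirical quantile with a regime-switching, tail-aware forecast.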

  1. Evaluation of the physico-chemical, rheological and sensory characteristics of commercially available Frankfurters in Spain and consumer preferences.

    PubMed

    González-Viñas, M A; Caballero, A B; Gallego, I; García Ruiz, A

    2004-08-01

    The physico-chemical, rheological and sensory characteristics of different commercially available Frankfurters were studied. Samples presented values of A(w) and pH from 0.954 to 0.972 and 5.88 to 6.43, respectively. Greater differences were observed in parameters such as fat and salt content, with values ranging from 10.83% to 21.92% and 1.85% to 3.01%, respectively. With regard to total nitrogen, all samples presented values close to 2%. Free-choice profiling and generalised procrustes analysis of the sensory data permitted differentiation between samples and provided information about the attributes responsible for the observed differences. All the frankfurters scored in the moderate range for overall acceptability. Consumers identified reasons for purchasing frankfurters when evaluating the product's packaging. The most important criterion for consumers when purchasing frankfurters was the appetising aspect of the product in the packaging's illustration.

  2. INDUCTIVE SYSTEM HEALTH MONITORING WITH STATISTICAL METRICS

    NASA Technical Reports Server (NTRS)

    Iverson, David L.

    2005-01-01

    Model-based reasoning is a powerful method for performing system monitoring and diagnosis, but building models for model-based reasoning is often a difficult and time-consuming process. The Inductive Monitoring System (IMS) software was developed to automatically produce health monitoring knowledge bases for systems that are either difficult to model (simulate) with a computer or that require computer models too complex to use for real-time monitoring. IMS processes nominal data sets, collected either directly from the system or from simulations, to build a knowledge base that can be used to detect anomalous behavior in the system. Machine learning and data mining techniques are used to characterize typical system behavior by extracting general classes of nominal data from archived data sets. In particular, a clustering algorithm forms groups of nominal values for sets of related parameters, establishing constraints on the parameter values that should hold during nominal operation. During monitoring, IMS provides a statistically weighted measure of the deviation of current system behavior from the established normal baseline. If the deviation increases beyond the expected level, an anomaly is suspected, prompting further investigation by an operator or automated system. IMS has shown potential to be an effective, low-cost technique to produce system monitoring capability for a variety of applications. We describe the training and system health monitoring techniques of IMS and present its application to a data set from the Space Shuttle Columbia STS-107 flight, where IMS was able to detect an anomaly in the launch telemetry shortly after a foam impact damaged Columbia's thermal protection system.
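A simplified sketch of the described idea (not NASA's IMS implementation; names and tolerances are assumptions): greedily cluster nominal parameter vectors into boxes of per-parameter ranges, then score new vectors by their distance outside the nearest box:

```python
def train(nominal, tol=0.1):
    """Greedy clustering: each cluster is a per-parameter [lo, hi] box.
    A vector joins the first box it fits within `tol` of, expanding it."""
    boxes = []
    for v in nominal:
        for box in boxes:
            if all(lo - tol <= x <= hi + tol for x, (lo, hi) in zip(v, box)):
                for d, x in enumerate(v):
                    lo, hi = box[d]
                    box[d] = (min(lo, x), max(hi, x))
                break
        else:
            boxes.append([(x, x) for x in v])
    return boxes

def deviation(v, boxes):
    """Euclidean distance from v to the nearest box (0 if inside one)."""
    def dist(box):
        return sum(max(lo - x, 0.0, x - hi) ** 2
                   for x, (lo, hi) in zip(v, box)) ** 0.5
    return min(dist(b) for b in boxes)

# Hypothetical nominal telemetry for two related parameters
nominal = [(1.0, 10.0), (1.05, 10.2), (0.95, 9.9), (5.0, 2.0), (5.1, 2.1)]
boxes = train(nominal, tol=0.3)
print(deviation((1.02, 10.1), boxes))  # → 0.0 (nominal-like)
print(deviation((3.0, 6.0), boxes))    # > 0 (off-nominal)
```

During monitoring, a deviation persistently above the expected level would flag an anomaly for further investigation.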

  3. Scaled effective on-site Coulomb interaction in the DFT+U method for correlated materials

    NASA Astrophysics Data System (ADS)

    Nawa, Kenji; Akiyama, Toru; Ito, Tomonori; Nakamura, Kohji; Oguchi, Tamio; Weinert, M.

    2018-01-01

    The first-principles calculation of correlated materials within density functional theory remains challenging, but the inclusion of a Hubbard-type effective on-site Coulomb term (Ueff) often provides a computationally tractable and physically reasonable approach. However, the reported values of Ueff vary widely, even for the same ionic state and the same material. Since the final physical results can depend critically on the choice of parameter and the computational details, there is a need to have a consistent procedure to choose an appropriate one. We revisit this issue from constraint density functional theory, using the full-potential linearized augmented plane wave method. The calculated Ueff parameters for the prototypical transition-metal monoxides—MnO, FeO, CoO, and NiO—are found to depend significantly on the muffin-tin radius RMT, with variations of more than 2-3 eV as RMT changes from 2.0 to 2.7 aB. Despite this large variation in Ueff, the calculated valence bands differ only slightly. Moreover, we find an approximately linear relationship between Ueff(RMT) and the number of occupied localized electrons within the sphere, and give a simple scaling argument for Ueff; these results provide a rationalization for the large variation in reported values. Although our results imply that Ueff values are not directly transferable among different calculation methods (or even the same one with different input parameters such as RMT), use of this scaling relationship should help simplify the choice of Ueff.

  4. Predicting dredging-associated effects to coral reefs in Apra Harbor, Guam - Part 1: Sediment exposure modeling.

    PubMed

    Gailani, Joseph Z; Lackey, Tahirih C; King, David B; Bryant, Duncan; Kim, Sung-Chan; Shafer, Deborah J

    2016-03-01

    Model studies were conducted to investigate the potential coral reef sediment exposure from dredging associated with proposed development of a deepwater wharf in Apra Harbor, Guam. The Particle Tracking Model (PTM) was applied to quantify the exposure of coral reefs to material suspended by the dredging operations at two alternative sites. Key PTM features include the flexible capability of continuous multiple releases of sediment parcels, control of parcel/substrate interaction, and the ability to efficiently track vast numbers of parcels. This flexibility has facilitated simulating the combined effects of sediment released from clamshell dredging and chiseling within Apra Harbor. Because the rate of material released into the water column by some of the processes is not well understood or known a priori, the modeling approach was to bracket parameters within reasonable ranges to produce a suite of potential results from multiple model runs. Sensitivity analysis to model parameters is used to select the appropriate parameter values for bracketing. Data analysis results include mapping the time series and the maximum values of sedimentation, suspended sediment concentration, and deposition rate. Data were used to quantify various exposure processes that affect coral species in Apra Harbor. The goal of this research is to develop a robust methodology for quantifying and bracketing exposure mechanisms to coral (or other receptors) from dredging operations. These exposure values were utilized in an ecological assessment to predict effects (coral reef impacts) from various dredging scenarios. Copyright © 2015. Published by Elsevier Ltd.

  5. Augmenting epidemiological models with point-of-care diagnostics data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pullum, Laura L.; Ramanathan, Arvind; Nutaro, James J.

    Although adoption of newer Point-of-Care (POC) diagnostics is increasing, there is a significant challenge using POC diagnostics data to improve epidemiological models. In this work, we propose a method to process zip-code level POC datasets and apply these processed data to calibrate an epidemiological model. We specifically develop a calibration algorithm using simulated annealing and calibrate a parsimonious equation-based model of modified Susceptible-Infected-Recovered (SIR) dynamics. The results show that parsimonious models are remarkably effective in predicting the dynamics observed in the number of infected patients, and our calibration algorithm is sufficiently capable of predicting peak loads observed in POC diagnostics data while staying within reasonable and empirical parameter ranges reported in the literature. Additionally, we explore the future use of the calibrated values by testing the correlation between peak load and population density from Census data. Our results show that linearity assumptions for the relationships among various factors can be misleading; therefore further data sources and analysis are needed to identify relationships between additional parameters and existing calibrated ones. As a result, calibration approaches such as ours can determine the values of newly added parameters along with existing ones and enable policy-makers to make better multi-scale decisions.

  6. Augmenting epidemiological models with point-of-care diagnostics data

    DOE PAGES

    Pullum, Laura L.; Ramanathan, Arvind; Nutaro, James J.; ...

    2016-04-20

    Although adoption of newer Point-of-Care (POC) diagnostics is increasing, there is a significant challenge using POC diagnostics data to improve epidemiological models. In this work, we propose a method to process zip-code level POC datasets and apply these processed data to calibrate an epidemiological model. We specifically develop a calibration algorithm using simulated annealing and calibrate a parsimonious equation-based model of modified Susceptible-Infected-Recovered (SIR) dynamics. The results show that parsimonious models are remarkably effective in predicting the dynamics observed in the number of infected patients, and our calibration algorithm is sufficiently capable of predicting peak loads observed in POC diagnostics data while staying within reasonable and empirical parameter ranges reported in the literature. Additionally, we explore the future use of the calibrated values by testing the correlation between peak load and population density from Census data. Our results show that linearity assumptions for the relationships among various factors can be misleading; therefore further data sources and analysis are needed to identify relationships between additional parameters and existing calibrated ones. As a result, calibration approaches such as ours can determine the values of newly added parameters along with existing ones and enable policy-makers to make better multi-scale decisions.
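A toy version of the described calibration (function names, rates, and annealing schedule are assumptions for the sketch, not the authors' code): Euler-integrated SIR dynamics, with simulated annealing recovering the transmission rate from a synthetic infected-count series:

```python
import math
import random

def sir_infected(beta, gamma=0.1, s0=0.99, i0=0.01, days=60):
    """Daily infected fraction from Euler-integrated SIR dynamics."""
    s, i, out = s0, i0, []
    for _ in range(days):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s, i = s + ds, i + di
        out.append(i)
    return out

def calibrate(target, bounds=(0.05, 0.6), iters=400, seed=1):
    """Simulated annealing over beta, minimizing squared error to `target`."""
    rng = random.Random(seed)
    sse = lambda b: sum((a - t) ** 2 for a, t in zip(sir_infected(b), target))
    beta = rng.uniform(*bounds)
    e = sse(beta)
    best, best_e = beta, e
    for k in range(iters):
        temp = 0.01 * (1.0 - k / iters) + 1e-6          # cooling schedule
        cand = min(max(beta + rng.gauss(0, 0.03), bounds[0]), bounds[1])
        ce = sse(cand)
        if ce < e or rng.random() < math.exp(-(ce - e) / temp):
            beta, e = cand, ce                          # accept move
            if e < best_e:
                best, best_e = beta, e
    return best

observed = sir_infected(0.3)          # synthetic stand-in for POC data, true beta = 0.3
print(round(calibrate(observed), 2))  # recovered beta, close to 0.3
```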

  7. Explicit chiral symmetry breaking in the Nambu-Jona-Lasinio model

    NASA Astrophysics Data System (ADS)

    Schüren, C.; Arriola, E. Ruiz; Goeke, K.

    1992-09-01

    We consider a chirally symmetric bosonization of the SU(2) Nambu-Jona-Lasinio model within the Pauli-Villars regularization scheme. Special attention is paid to the way in which chiral symmetry is broken explicitly. The parameters of the model are fixed in the light of chiral perturbation theory by performing a covariant derivative expansion in the presence of external fields. As a by-product we obtain the corresponding low-energy parameters and pion radii as well as some threshold parameters for pion-pion scattering. The nucleon is obtained in terms of the solitonic solutions of the action in the sector with baryon number equal to one. It is found that for a constituent quark mass M ≈ 350 MeV most of the calculated vacuum and pion properties agree reasonably well with the experimental ones and coincide with the region where localized solitons with the right size exist. For this value, however, the scalar and vector pion radii turn out to be very small. A unique determination of the sigma term is proposed, obtaining a value of σ(0) = 41.3 MeV. The scalar nucleon form factor is evaluated in the Breit frame. The extrapolation to the Cheng-Dashen point leads to σ(2mπ²) − σ(0) = 7.4 MeV.

  8. Application of the U.S. Geological Survey's precipitation-runoff modeling system to the Prairie Dog Creek basin, southeastern Montana

    USGS Publications Warehouse

    Cary, L.E.

    1984-01-01

    The U.S. Geological Survey's precipitation-runoff modeling system was tested using 2 years' data for the daily mode and 17 storms for the storm mode from a basin in southeastern Montana. Two hydrologic response unit delineations were studied; the more complex delineation did not provide superior results. In this application, the optimum numbers of hydrologic response units were 16 and 18 for the two alternatives. The first alternative, with 16 units, was modified to facilitate interfacing with the storm mode. A parameter subset was defined for the daily mode using sensitivity analysis. Following optimization, the simulated hydrographs approximated the observed hydrograph during the first year, a year of large snowfall; more runoff was simulated than observed during the second year. There was reasonable correspondence between the observed and simulated snowpack the first season but poor correspondence the second. More soil moisture was withdrawn than was indicated by soil moisture observations. Optimization of parameters in the storm mode resulted in much larger values than originally estimated, commonly larger than published values of the Green and Ampt parameters, and following optimization, variable results were obtained. These results are probably related to inadequate representation of basin infiltration characteristics and to precipitation variability. (USGS)

  9. Detection of ¹⁴N and ³⁵Cl in cocaine base and hydrochloride using NQR, NMR, and SQUID techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yesinowski, J.P.; Buess, M.L.; Garroway, A.N.

    1995-07-01

    Results from ¹⁴N pure NQR of cocaine in the free-base form (cocaine base) yield a nuclear quadrupole coupling constant (NQCC) e²Qq/h of 5.0229 (±0.0001) MHz and an asymmetry parameter η of 0.0395 (±0.0001) at 295 K, with corresponding values of 5.0460 (±0.0013) MHz and 0.0353 (±0.0008) at 77 K. Both pure NQR (at 295-77 K) and a superconducting quantum interference device (SQUID) detector (at 4.2 K) were used to measure the very low (<1 MHz) ¹⁴N transition frequencies in cocaine hydrochloride; at 295 K the NQCC is 1.1780 (±0.0014) MHz and the asymmetry parameter is 0.2632 (±0.0034). Stepping the carrier frequency enables one to obtain a powder pattern without the severe intensity distortions that otherwise arise from finite pulse power. A powder pattern simulation using an NQCC value of 5.027 MHz and an asymmetry parameter η of 0.2 agrees reasonably well with the experimental stepped-frequency spectrum. The use of pure NQR for providing nondestructive, quantitative, and highly specific detection of crystalline compounds is discussed, as are experimental strategies. 31 refs., 8 figs., 1 tab.

  10. 29 CFR 531.33 - “Reasonable cost”; “fair value.”

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false “Reasonable cost”; “fair value.” 531.33 Section 531.33....33 “Reasonable cost”; “fair value.” (a) Section 3(m) directs the Administrator to determine “the... authorizes him to determine “the fair value” of such facilities for defined classes of employees and in...

  11. Missing-value estimation using linear and non-linear regression with Bayesian gene selection.

    PubMed

    Zhou, Xiaobo; Wang, Xiaodong; Dougherty, Edward R

    2003-11-22

    Data from microarray experiments are usually in the form of large matrices of expression levels of genes under different experimental conditions. Owing to various reasons, there are frequently missing values. Estimating these missing values is important because they affect downstream analysis, such as clustering, classification and network design. Several methods of missing-value estimation are in use. The problem has two parts: (1) selection of genes for estimation and (2) design of an estimation rule. We propose Bayesian variable selection to obtain genes to be used for estimation, and employ both linear and nonlinear regression for the estimation rule itself. Fast implementation issues for these methods are discussed, including the use of QR decomposition for parameter estimation. The proposed methods are tested on data sets arising from hereditary breast cancer and small round blue-cell tumors. The results compare very favorably with currently used methods based on the normalized root-mean-square error. The appendix is available from http://gspsnap.tamu.edu/gspweb/zxb/missing_zxb/ (user: gspweb; passwd: gsplab).
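A stripped-down version of the regression step (ordinary least squares via the normal equations, as in the paper's linear case; the gene-selection step is omitted and the data are synthetic):

```python
def solve(a, b):
    """Gaussian elimination with partial pivoting for a small system A x = b."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(n):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [x - f * y for x, y in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

def impute(x_rows, y, x_new):
    """Fit y ~ X by least squares (normal equations X'X b = X'y) and
    predict the missing value for expression profile x_new."""
    d = len(x_rows[0])
    xt_x = [[sum(r[i] * r[j] for r in x_rows) for j in range(d)] for i in range(d)]
    xt_y = [sum(r[i] * yi for r, yi in zip(x_rows, y)) for i in range(d)]
    coef = solve(xt_x, xt_y)
    return sum(c * x for c, x in zip(coef, x_new))

# Target gene as a linear function of two selected genes
# (synthetic data: y = 2*g1 + 3*g2 exactly)
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]
y = [2.0, 3.0, 5.0, 7.0]
print(round(impute(X, y, [1.5, 2.0]), 6))  # → 9.0
```

In the paper, the columns of X would be the genes chosen by Bayesian variable selection, and the fit would be computed via a QR decomposition rather than explicit normal equations.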

  12. The issue of cavitation number value in studies of water treatment by hydrodynamic cavitation.

    PubMed

    Šarc, Andrej; Stepišnik-Perdih, Tadej; Petkovšek, Martin; Dular, Matevž

    2017-01-01

    Within the last few years there has been a substantial increase in reports of the utilization of hydrodynamic cavitation in various applications. It has come to our attention that the results are often poorly repeatable, with the main reason being that researchers put the emphasis on the value of the cavitation number when describing the conditions at which their device operates. In the present paper we first point out that the cavitation number cannot be used as a single parameter defining the cavitation conditions and that large inconsistencies exist in the reports. We then show experiments in which the influences of the geometry, the flow velocity, and the medium temperature and quality on the size, dynamics, and aggressiveness of cavitation were assessed. Finally, we show that there are significant inconsistencies in the definition of the cavitation number itself. In conclusion, we propose a number of parameters that should accompany any report on the utilization of hydrodynamic cavitation, to make it repeatable and to enable faster progress of science and technology development. Copyright © 2016 Elsevier B.V. All rights reserved.
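For reference, the most commonly used form of the cavitation number (one of the several conventions whose inconsistent use the paper discusses) is:

```latex
\sigma = \frac{p_\infty - p_v}{\tfrac{1}{2}\,\rho\, v^2}
```

where p∞ is a reference static pressure, p_v the vapor pressure at the medium temperature, ρ the liquid density, and v a reference velocity; the inconsistencies the authors describe stem largely from differing choices of the reference pressure and velocity locations.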

  13. Force field-dependent structural divergence revealed during long time simulations of Calbindin d9k.

    PubMed

    Project, Elad; Nachliel, Esther; Gutman, Menachem

    2010-07-15

    The structural and dynamic features of the Calbindin (CaB) protein in its holo and apo states are compared using molecular dynamics simulations under nine different force fields (FFs) (G43a1, G53a6, Opls-AA, Amber94, Amber99, Amber99p, AmberGS, AmberGSs, and Amber99sb). The results show that most FFs reproduce reasonably well the majority of the experimentally derived features of the CaB protein. However, in several cases there are significant differences in secondary structure properties, root mean square deviations (RMSDs), root mean square fluctuations (RMSFs), and S² order parameters among the various FFs, and in certain cases these parameters differed from the experimentally derived values. Some of these deviations became noticeable only after 50 ns. A comparison with experimental data indicates that, for CaB, Amber94 shows the overall best agreement with the measured values, whereas several others seem to deviate from both crystal and nuclear magnetic resonance data. Copyright 2009 Wiley Periodicals, Inc.

  14. Regional probability distribution of the annual reference evapotranspiration and its effective parameters in Iran

    NASA Astrophysics Data System (ADS)

    Khanmohammadi, Neda; Rezaie, Hossein; Montaseri, Majid; Behmanesh, Javad

    2017-10-01

    Reference evapotranspiration (ET0) plays an important role in water management plans in arid or semi-arid countries such as Iran, so the regional analysis of this parameter is important. The ET0 process is affected by several meteorological parameters, such as wind speed, solar radiation, temperature, and relative humidity; therefore, the effect of the distribution type of the effective meteorological variables on the ET0 distribution was analyzed. For this purpose, the regional probability distributions of annual ET0 and its effective parameters were selected. The data used in this research were recorded at 30 synoptic stations in Iran during 1960-2014. Using the probability plot correlation coefficient (PPCC) test and the L-moment method, five common distributions were compared and the best distribution was selected. The results of the PPCC test and the L-moment diagram indicated that the Pearson type III distribution was the best probability distribution for fitting annual ET0 and its four effective parameters. The RMSE results showed that the PPCC test and the L-moment method performed similarly for the regional analysis of reference evapotranspiration and its effective parameters. The results also showed that the distribution type of the parameters affecting ET0 can affect the distribution of reference evapotranspiration.
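The sample L-moments underlying the L-moment method can be computed directly from order statistics via probability-weighted moments; a small sketch using the standard unbiased estimators (data illustrative):

```python
def sample_l_moments(data):
    """First three sample L-moments (l1, l2, l3) and L-skewness t3,
    via probability-weighted moments b0, b1, b2."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum((j - 1) / (n - 1) * x[j - 1] for j in range(2, n + 1)) / n
    b2 = sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x[j - 1]
             for j in range(3, n + 1)) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3, l3 / l2

# A symmetric sample should give L-skewness t3 of (about) zero
l1, l2, l3, t3 = sample_l_moments([1.0, 2.0, 3.0, 4.0, 5.0])
print(l1, l2, t3)  # l1 = 3.0, l2 = 1.0, t3 ≈ 0
```

L-moment ratio diagrams, as used in the study, plot L-skewness against L-kurtosis of each station's series to identify the best regional distribution.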

  15. The effect of random matter density perturbations on the large mixing angle solution to the solar neutrino problem

    NASA Astrophysics Data System (ADS)

    Guzzo, M. M.; Holanda, P. C.; Reggiani, N.

    2003-08-01

    The neutrino energy spectrum observed in KamLAND is compatible with the predictions based on the Large Mixing Angle realization of the MSW (Mikheyev-Smirnov-Wolfenstein) mechanism, which provides the best solution to the solar neutrino anomaly. From the agreement between solar neutrino data and KamLAND observations, we can obtain the best-fit values of the mixing angle and the mass-squared difference. When fitting the MSW predictions to the solar neutrino data, it is assumed that the solar matter does not have any kind of perturbation, that is, that the matter density decays monotonically from the center to the surface of the Sun. There are reasons to believe, nevertheless, that the solar matter density fluctuates around the equilibrium profile. In this work, we analysed the effect on the Large Mixing Angle parameters when the matter density fluctuates randomly around the equilibrium profile, solving the evolution equation for this case. We find that, in the presence of these density perturbations, the best-fit values of the mixing angle and the mass-squared difference assume smaller values compared with those obtained for the standard Large Mixing Angle solution without noise. Considering this effect of the random perturbations, the lowest island of the allowed region for the KamLAND spectral data in the parameter space must be considered; we call it the very-low region.

  16. Anaerobic Degradation of Phthalate Isomers by Methanogenic Consortia

    PubMed Central

    Kleerebezem, Robbert; Pol, Look W. Hulshoff; Lettinga, Gatze

    1999-01-01

    Three methanogenic enrichment cultures, grown on ortho-phthalate, iso-phthalate, or terephthalate, were obtained from digested sewage sludge or methanogenic granular sludge. Cultures grown on one of the phthalate isomers were not capable of degrading the other phthalate isomers. All three cultures had the ability to degrade benzoate. Maximum specific growth rates (μSmax) and biomass yields (YXtotS) of the mixed cultures were determined using both the phthalate isomers and benzoate as substrates. Comparable values for these parameters were found for all three cultures. Values for μSmax and YXtotS were higher for growth on benzoate than on the phthalate isomers. Based on measured and estimated values for the microbial yield of the methanogens in the mixed culture, specific yields for the phthalate- and benzoate-fermenting organisms were calculated. A kinetic model, involving three microbial species, was developed to predict intermediate acetate and hydrogen accumulation and the final production of methane. Values for the ratio of the concentrations of methanogenic organisms versus the phthalate-isomer- and benzoate-fermenting organisms, and apparent half-saturation constants (KS) for the methanogens, were calculated. By using this combination of measured and estimated parameter values, a reasonable description of intermediate accumulation and methane formation was obtained, with the initial concentration of phthalate-fermenting organisms being the only variable. The energetic efficiency for growth of the fermenting organisms on the phthalate isomers was calculated to be significantly smaller than for growth on benzoate. PMID:10049876
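    The kinetic model is described only qualitatively in the abstract; a minimal stand-in with two Monod-limited steps (phthalate fermentation to acetate, then aceticlastic methanogenesis) illustrates how intermediate accumulation and final methane production can be predicted. All rate constants, yields, and initial concentrations below are hypothetical, and hydrogen is omitted for brevity:

    ```python
    from scipy.integrate import solve_ivp

    # hypothetical two-step Monod sketch:
    # phthalate S -> acetate A (fermenters X1), acetate -> methane CH4 (methanogens X2)
    mu1, K1, Y1 = 0.05, 0.5, 0.02   # 1/h, mM, biomass yield (all assumed)
    mu2, K2, Y2 = 0.10, 0.2, 0.03

    def rates(t, y):
        S, A, CH4, X1, X2 = y
        r1 = mu1 * S / (K1 + S) * X1 / Y1   # phthalate fermentation rate (mM/h)
        r2 = mu2 * A / (K2 + A) * X2 / Y2   # aceticlastic methanogenesis rate (mM/h)
        return [-r1, r1 - r2, r2, Y1 * r1, Y2 * r2]

    # 5 mM substrate, small inocula of each organism; integrate 400 h
    sol = solve_ivp(rates, (0.0, 400.0), [5.0, 0.0, 0.0, 0.05, 0.05], rtol=1e-8)
    S_end, A_end, CH4_end = sol.y[0, -1], sol.y[1, -1], sol.y[2, -1]
    ```

    Because acetate is produced mole-for-mole from substrate and converted mole-for-mole to methane in this sketch, the sum S + A + CH4 is conserved, which is a useful sanity check on the integration.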

  17. Effects of must concentration techniques on wine isotopic parameters.

    PubMed

    Guyon, Francois; Douet, Christine; Colas, Sebastien; Salagoïty, Marie-Hélène; Medina, Bernard

    2006-12-27

    Despite the robustness of isotopic methods applied in the field of wine control, isotopic values can be slightly influenced by enological practices. For this reason, must concentration technique effects on wine isotopic parameters were studied. The two studied concentration techniques were reverse osmosis (RO) and high-vacuum evaporation (HVE). Samples (must and extracted water) have been collected in various French vineyards. Musts were microfermented at the laboratory, and isotope parameters were determined on the obtained wine. Deuterium and carbon-13 isotope ratios were studied on distilled ethanol by nuclear magnetic resonance (NMR) and isotope ratio mass spectrometry (IRMS), respectively. The oxygen-18 ratio was determined on extracted and wine water using IRMS apparatus. The study showed that the RO technique has a very low effect on isotopic parameters, indicating that this concentration technique does not create any isotopic fractionation, neither at sugar level nor at water level. The effect is notable for must submitted to HVE concentration: water evaporation leads to a modification of the oxygen-18 ratio of the must and, as a consequence, ethanol deuterium concentration is also modified.

  18. Climate Change Impacts on Transportation; Groundwater Elevation, Road Performance, and Robust Adaptation

    NASA Astrophysics Data System (ADS)

    Kirshen, P. H.; Knott, J. F.; Ray, P.; Elshaer, M.; Daniel, J.; Jacobs, J. M.

    2016-12-01

    Transportation climate change vulnerability and adaptation studies have primarily focused on surface-water flooding from sea-level rise (SLR); little attention has been given to the effects of climate change and SLR on groundwater and the subsequent impacts on the unbound foundation layers of coastal-road infrastructure. The magnitude of service-life reduction depends on the height of the groundwater in the unbound pavement materials, the pavement structure itself, and the loading. Using a steady-state groundwater model and a multi-layer elastic pavement evaluation model, the strain changes in the layers can be determined as a function of parameter values, and the strain changes translated into failure as measured by the number of loading cycles to failure. For a section of a major coastal road in New Hampshire, future changes in sea level, precipitation, temperature, land use, and groundwater pumping are characterized by deep uncertainty. Parameters that describe the groundwater system, such as hydraulic conductivity, can be described probabilistically, while road characteristics are assumed to be deterministic. To understand the vulnerability of this road section, a bottom-up planning approach was employed over time in which the combinations of parameter values that cause failure were determined and the plausibility of their occurrence was analyzed. To design a robust adaptation strategy that will function reasonably well in the present and the future given the large number of uncertain parameter values, the performance of adaptation options was investigated. Adaptation strategies considered include raising the road, load restrictions, increasing pavement layer thicknesses, replacing moisture-sensitive materials with materials that are not moisture sensitive, improving drainage systems, and treatment of the underlying materials.

  19. [Is there a relation between weight in rats, bone density, ash weight and histomorphometric indicators of trabecular volume and thickness in the bones of extremities?].

    PubMed

    Zák, J; Kapitola, J; Povýsil, C

    2003-01-01

    The authors address the question of whether bone histological structure (described by the histomorphometric parameters trabecular bone volume and trabecular thickness) can be inferred from bone density, ash weight, or even from the weight of the animal (rat). Both tibias from each of 30 intact male rats, 90 days old, were processed. The left tibia was used to determine histomorphometric parameters from undecalcified bone tissue specimens by automatic image analysis. The right tibia was used to determine bone density using Archimedes' principle. Values of bone density, ash weight, ash weight related to bone volume, and animal weight were correlated with the histomorphometric parameters (trabecular bone volume, trabecular thickness) using Pearson's correlation test. One could presume a relation between data describing bone mass at the histological level (trabecular bone of the tibia) and data describing the mass of the whole bone or even of the whole animal, but no statistically significant correlation was found. The reason may lie in variations of trabecular density within the tibial marrow. Because of the higher trabecular bone density in the metaphyseal and epiphyseal regions, histomorphometric analysis of trabecular bone is preferentially done in these areas. It is possible that this irregularity of trabecular tibial density is the source of deviations that influenced the correlations determined. The values of bone density, ash weight, and animal weight do not predict trabecular bone volume, and vice versa: static histomorphometric parameters of trabecular bone do not reflect bone density, ash weight, or animal weight.
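    The statistical step here is a plain Pearson correlation test between paired per-animal measurements; a small sketch with synthetic (hypothetical) data, in which the two variables are generated independently and so should usually show no significant correlation:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # hypothetical per-animal data for n = 30 rats (values are assumptions)
    density = rng.normal(1.8, 0.05, 30)   # bone density, g/cm^3
    bv_tv = rng.normal(20.0, 4.0, 30)     # trabecular bone volume, %

    # Pearson's correlation coefficient and its two-sided p-value
    r, p_value = stats.pearsonr(density, bv_tv)
    significant = p_value < 0.05
    ```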

  20. CONSTRAINTS ON THE SYNCHROTRON EMISSION MECHANISM IN GAMMA-RAY BURSTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beniamini, Paz; Piran, Tsvi, E-mail: paz.beniamini@mail.huji.ac.il, E-mail: tsvi.piran@mail.huji.ac.il

    2013-05-20

    We reexamine the general synchrotron model for gamma-ray bursts' (GRBs') prompt emission and determine the regime in the parameter phase space in which it is viable. We characterize a typical GRB pulse in terms of its peak energy, peak flux, and duration and use the latest Fermi observations to constrain the high-energy part of the spectrum. We solve for the intrinsic parameters at the emission region and find the possible parameter phase space for synchrotron emission. Our approach is general and does not depend on a specific energy dissipation mechanism. Reasonable synchrotron solutions are found with energy ratios of 10⁻⁴ < εB/εe < 10, bulk Lorentz factor values of 300 < Γ < 3000, typical electron Lorentz factor values of 3 × 10³ < γe < 10⁵, and emission radii of the order of 10¹⁵ cm < R < 10¹⁷ cm. Most remarkable among these are the rather large values of the emission radius and the electron Lorentz factor. We find that soft (peak energy less than 100 keV) but luminous (isotropic luminosity of 1.5 × 10⁵³) pulses are inefficient. This may explain the lack of strong soft bursts. In cases when most of the energy is carried by the kinetic energy of the flow, such as in internal shocks, the synchrotron solution requires that only a small fraction of the electrons be accelerated to relativistic velocities by the shocks. We show that future observations of very-high-energy photons from GRBs by CTA could possibly determine all parameters of the synchrotron model or rule it out altogether.

  1. Thermal depinning of a single superconducting vortex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sok, Junghyun

    1995-06-19

    Thermal depinning has been studied for a single vortex trapped in a superconducting thin film in order to determine the value of the superconducting order parameter and the superfluid density when the vortex depins and starts to move around the film. For the Pb film in a Pb/Al/Al₂O₃/PbBi junction having a gold line, the vortex depins from the artificial pinning site (Au line) and reproducibly moves through the same sequence of other pinning sites before it leaves the junction area of the Pb film. Values of the normalized order parameter Δ/Δ₀ vary from Δ/Δ₀ = 0.20 at the first motion of the vortex to Δ/Δ₀ = 0.16 where the vortex finally leaves the junction. Equivalently, the value of the normalized superfluid density changes from 4% to 2.5% for this sample over this same temperature interval. For the Nb film in a Nb/Al/Al₂O₃/Nb junction, thermal depinning occurs when the value of Δ/Δ₀ is approximately 0.22 and the value of ρs/ρs0 is approximately 5%. These values are about 20% larger than those of the Pb sample having a gold line, but are still very close. For the Nb sample, grain boundaries are important pinning sites, whereas, for the Pb sample with a gold line, pinning may have been dominated by an array of Pb₃Au precipitates. Because roughly the same answer was obtained for these rather different kinds of pinning sites, there is a reasonable chance that this is a general value, within a factor of 2, for a wide range of materials.

  2. The Forecast Interpretation Tool—a Monte Carlo technique for blending climatic distributions with probabilistic forecasts

    USGS Publications Warehouse

    Husak, Gregory J.; Michaelsen, Joel; Kyriakidis, P.; Verdin, James P.; Funk, Chris; Galu, Gideon

    2011-01-01

    Probabilistic forecasts are produced by a variety of outlets to help predict rainfall, and other meteorological events, for periods of 1 month or more. Such forecasts are expressed as probabilities of a rainfall event, e.g. rainfall being in the upper, middle, or lower third of the relevant distribution for the region. The impact of these forecasts on the expectation for the event is not always clear or easily conveyed. This article proposes a technique based on Monte Carlo simulation for adjusting existing climatological statistical parameters to match forecast information, resulting in new parameters defining the probability of events for the forecast interval. The resulting parameters are shown to approximate the forecasts with reasonable accuracy. To show the value of the technique as an application for seasonal rainfall, it is used with the consensus forecast developed for the Greater Horn of Africa for the 2009 March-April-May season. An alternative, analytical approach is also proposed and discussed in comparison to the simulation-based technique.
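    One way to realize the described Monte Carlo blending is to sample tercile categories with the forecast probabilities and then draw from the climatological distribution truncated to the chosen tercile, finally refitting the distribution to the blended sample. The gamma climatology and forecast weights below are hypothetical, not the Greater Horn of Africa values:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # hypothetical climatological seasonal-rainfall distribution (gamma, mm)
    clim_shape, clim_scale = 4.0, 50.0

    # consensus forecast: P(below), P(normal), P(above) instead of 1/3 each
    forecast = [0.20, 0.30, 0.50]

    # Monte Carlo: pick a tercile with the forecast weights, then draw from the
    # climatology truncated to that tercile (inverse-CDF sampling within the tercile)
    n = 100_000
    cat = rng.choice(3, size=n, p=forecast)
    u = np.array([0.0, 1 / 3, 2 / 3])[cat] + rng.random(n) / 3
    sample = stats.gamma.ppf(u, clim_shape, scale=clim_scale)

    # refit a gamma to the blended sample -> adjusted parameters for the forecast interval
    new_shape, _, new_scale = stats.gamma.fit(sample, floc=0)
    ```

    With more weight on the upper tercile, the blended mean (new_shape × new_scale) sits above the climatological mean of 200 mm.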

  3. Prediction of Building Limestone Physical and Mechanical Properties by Means of Ultrasonic P-Wave Velocity

    PubMed Central

    Concu, Giovanna; De Nicolo, Barbara; Valdes, Monica

    2014-01-01

    The aim of this study was to evaluate ultrasonic P-wave velocity as a feature for predicting some physical and mechanical properties that describe the behavior of local building limestone. To this end, both ultrasonic testing and compressive tests were carried out on several limestone specimens and statistical correlation between ultrasonic velocity and density, compressive strength, and modulus of elasticity was studied. The effectiveness of ultrasonic velocity was evaluated by regression, with the aim of observing the coefficient of determination r² between ultrasonic velocity and the aforementioned parameters, and the mathematical expressions of the correlations were found and discussed. The strong relations that were established between ultrasonic velocity and limestone properties indicate that these parameters can be reasonably estimated by means of this nondestructive parameter. This may be of great value in a preliminary phase of the diagnosis and inspection of stone masonry conditions, especially when the possibility of sampling material cores is reduced. PMID:24511286
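    The regression step is straightforward to reproduce; a sketch with hypothetical velocity-strength pairs (not the study's measurements) showing how the coefficient of determination r² is obtained:

    ```python
    import numpy as np

    # hypothetical paired measurements: P-wave velocity (m/s) vs compressive strength (MPa)
    v = np.array([3100, 3350, 3600, 3900, 4150, 4400, 4700, 4950], dtype=float)
    fc = np.array([18, 22, 27, 33, 38, 45, 52, 60], dtype=float)

    # least-squares line fc = a*v + b and coefficient of determination r^2
    a, b = np.polyfit(v, fc, 1)
    pred = a * v + b
    ss_res = np.sum((fc - pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((fc - fc.mean()) ** 2)     # total sum of squares
    r2 = 1 - ss_res / ss_tot
    ```

    The same pattern applies to density and modulus of elasticity; the paper also considers non-linear regression forms, which np.polyfit with a higher degree (or a log transform) would cover.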

  4. New tools for evaluating LQAS survey designs

    PubMed Central

    2014-01-01

    Lot Quality Assurance Sampling (LQAS) surveys have become increasingly popular in global health care applications. Incorporating Bayesian ideas into LQAS survey design, such as using reasonable prior beliefs about the distribution of an indicator, can improve the selection of design parameters and decision rules. In this paper, a joint frequentist and Bayesian framework is proposed for evaluating LQAS classification accuracy and informing survey design parameters. Simple software tools are provided for calculating the positive and negative predictive value of a design with respect to an underlying coverage distribution and the selected design parameters. These tools are illustrated using a data example from two consecutive LQAS surveys measuring Oral Rehydration Solution (ORS) preparation. Using the survey tools, the dependence of classification accuracy on benchmark selection and the width of the ‘grey region’ are clarified in the context of ORS preparation across seven supervision areas. Following the completion of an LQAS survey, estimation of the distribution of coverage across areas facilitates quantifying classification accuracy and can help guide intervention decisions. PMID:24528928
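    The positive and negative predictive values of an LQAS design can be estimated by simulation from a prior coverage distribution. The design parameters, benchmark, and Beta prior below are hypothetical illustrations, not the paper's software tools:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # hypothetical LQAS design: sample n, classify "adequate" if >= d successes
    n, d = 19, 13
    benchmark = 0.70                 # true coverage defining an "adequate" area

    # assumed prior belief about coverage across areas (Beta distribution)
    p = stats.beta.rvs(8, 4, size=200_000, random_state=rng)
    x = rng.binomial(n, p)           # simulated survey counts
    classified = x >= d
    truly_adequate = p >= benchmark

    ppv = np.mean(truly_adequate[classified])    # P(adequate | classified adequate)
    npv = np.mean(~truly_adequate[~classified])  # P(inadequate | classified inadequate)
    ```

    Both predictive values exceed the corresponding unconditional probabilities, quantifying how much the classification rule adds over the prior alone; widening the "grey region" (raising d relative to the benchmark) trades PPV against NPV.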

  5. New tools for evaluating LQAS survey designs.

    PubMed

    Hund, Lauren

    2014-02-15

    Lot Quality Assurance Sampling (LQAS) surveys have become increasingly popular in global health care applications. Incorporating Bayesian ideas into LQAS survey design, such as using reasonable prior beliefs about the distribution of an indicator, can improve the selection of design parameters and decision rules. In this paper, a joint frequentist and Bayesian framework is proposed for evaluating LQAS classification accuracy and informing survey design parameters. Simple software tools are provided for calculating the positive and negative predictive value of a design with respect to an underlying coverage distribution and the selected design parameters. These tools are illustrated using a data example from two consecutive LQAS surveys measuring Oral Rehydration Solution (ORS) preparation. Using the survey tools, the dependence of classification accuracy on benchmark selection and the width of the 'grey region' are clarified in the context of ORS preparation across seven supervision areas. Following the completion of an LQAS survey, estimation of the distribution of coverage across areas facilitates quantifying classification accuracy and can help guide intervention decisions.

  6. Prediction of building limestone physical and mechanical properties by means of ultrasonic P-wave velocity.

    PubMed

    Concu, Giovanna; De Nicolo, Barbara; Valdes, Monica

    2014-01-01

    The aim of this study was to evaluate ultrasonic P-wave velocity as a feature for predicting some physical and mechanical properties that describe the behavior of local building limestone. To this end, both ultrasonic testing and compressive tests were carried out on several limestone specimens and statistical correlation between ultrasonic velocity and density, compressive strength, and modulus of elasticity was studied. The effectiveness of ultrasonic velocity was evaluated by regression, with the aim of observing the coefficient of determination r(2) between ultrasonic velocity and the aforementioned parameters, and the mathematical expressions of the correlations were found and discussed. The strong relations that were established between ultrasonic velocity and limestone properties indicate that these parameters can be reasonably estimated by means of this nondestructive parameter. This may be of great value in a preliminary phase of the diagnosis and inspection of stone masonry conditions, especially when the possibility of sampling material cores is reduced.

  7. Multiple ³H-oxytocin binding sites in rat myometrial plasma membranes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crankshaw, D.; Gaspar, V.; Pliska, V.

    1990-01-01

    The affinity spectrum method has been used to analyse binding isotherms for ³H-oxytocin to rat myometrial plasma membranes. Three populations of binding sites with dissociation constants (Kd) of 0.6-1.5 × 10⁻⁹, 0.4-1.0 × 10⁻⁷, and 7 × 10⁻⁶ mol/l were identified, and their existence was verified by cluster analysis based on similarities between Kd, binding capacity, and Hill coefficient. When experimental values were compared to theoretical curves constructed using the estimated binding parameters, good fits were obtained. Binding parameters obtained by this method were not influenced by the presence of GTPγS (guanosine-5'-O-3-thiotriphosphate) in the incubation medium. The binding parameters agree reasonably well with those found in uterine cells; they support the existence of a medium-affinity site and may allow for an explanation of some of the discrepancies between binding and response in this system.

  8. Students' Reasoning about p-Values

    ERIC Educational Resources Information Center

    Aquilonius, Birgit C.; Brenner, Mary E.

    2015-01-01

    Results from a study of 16 community college students are presented. The research question concerned how students reasoned about p-values. Students' approach to p-values in hypothesis testing was procedural. Students viewed p-values as something that one compares to alpha values in order to arrive at an answer and did not attach much meaning to…

  9. Analysis of the statistical thermodynamic model for nonlinear binary protein adsorption equilibria.

    PubMed

    Zhou, Xiao-Peng; Su, Xue-Li; Sun, Yan

    2007-01-01

    The statistical thermodynamic (ST) model was used to study nonlinear binary protein adsorption equilibria on an anion exchanger. Single-component and binary protein adsorption isotherms of bovine hemoglobin (Hb) and bovine serum albumin (BSA) on DEAE Spherodex M were determined by batch adsorption experiments in 10 mM Tris-HCl buffer containing a specific NaCl concentration (0.05, 0.10, and 0.15 M) at pH 7.40. The ST model was found to depict the effect of ionic strength on the single-component equilibria well, with model parameters depending on ionic strength. Moreover, the ST model gave acceptable fitting to the binary adsorption data with the fitted single-component model parameters, leading to the estimation of the binary ST model parameter. The effects of ionic strength on the model parameters are reasonably interpreted by the electrostatic and thermodynamic theories. The effective charge of protein in adsorption phase can be separately calculated from the two categories of the model parameters, and the values obtained from the two methods are consistent. The results demonstrate the utility of the ST model for describing nonlinear binary protein adsorption equilibria.
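    The statistical thermodynamic (ST) model itself is not reproduced here; a simpler competitive (multicomponent) Langmuir isotherm, plainly a stand-in, conveys the same qualitative point that adsorption of one protein is suppressed by the competitor. All capacities and affinities below are hypothetical:

    ```python
    import numpy as np

    # hypothetical parameters for Hb (index 0) and BSA (index 1)
    qmax = np.array([120.0, 100.0])   # adsorption capacity, mg/mL resin
    K = np.array([0.8, 0.5])          # affinity, mL/mg

    def binary_langmuir(c):
        """Adsorbed amounts q_i (mg/mL) for liquid concentrations c = [c_Hb, c_BSA]."""
        c = np.asarray(c, dtype=float)
        denom = 1.0 + np.sum(K * c)   # shared denominator couples the two proteins
        return qmax * K * c / denom

    q_single = binary_langmuir([2.0, 0.0])[0]   # Hb alone
    q_binary = binary_langmuir([2.0, 2.0])[0]   # Hb with BSA competing
    ```

    In the ST model the ionic-strength dependence enters through the model parameters, analogous to making qmax and K functions of NaCl concentration here.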

  10. Application and optimization of input parameter spaces in mass flow modelling: a case study with r.randomwalk and r.ranger

    NASA Astrophysics Data System (ADS)

    Krenn, Julia; Zangerl, Christian; Mergili, Martin

    2017-04-01

    r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces represent a possibility to withdraw from discrete input values which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III) which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space in terms of bringing the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, the necessity for a new and innovative technique arises. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automatize the work flow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC). 
This strategy is best demonstrated for two input parameters, but can be extended arbitrarily. We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented with r.randomwalk. Thereby we repeat the optimization procedure with conservative and non-conservative assumptions of a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by the performance achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target; and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist in (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk; and (ii) in applying the same strategy to the more complex, dynamic model r.avaflow.
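    The performance indicator used in the optimization, the area under the ROC curve, can be computed directly from an impact indicator index (III) map and an observed impact map via the rank-sum (Mann-Whitney) identity. The synthetic maps below are illustrative stand-ins, not r.randomwalk output:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # toy stand-in: per-pixel observed impact and an III score in [0, 1]
    n_pix = 2000
    observed = rng.random(n_pix) < 0.3                       # hypothetical impact map
    # give impacted pixels systematically higher III so the "model" has skill
    iii = np.clip(rng.normal(0.3 + 0.4 * observed, 0.2), 0.0, 1.0)

    def auroc(score, label):
        """AUROC via the rank-sum identity (assumes few ties in the scores)."""
        order = np.argsort(score)
        ranks = np.empty(score.size)
        ranks[order] = np.arange(1, score.size + 1)
        pos = label.sum()
        return (ranks[label].sum() - pos * (pos + 1) / 2) / (pos * (score.size - pos))

    skill = auroc(iii, observed)
    ```

    In the nested r.ranger search, this value would be computed for every subrange combination and weighed against the factor of conservativeness before a guiding parameter space is selected.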

  11. Determining the Influence of Granule Size on Simulation Parameters and Residual Shear Stress Distribution in Tablets by Combining the Finite Element Method into the Design of Experiments.

    PubMed

    Hayashi, Yoshihiro; Kosugi, Atsushi; Miura, Takahiro; Takayama, Kozo; Onuki, Yoshinori

    2018-01-01

    The influence of granule size on simulation parameters and residual shear stress in tablets was determined by combining the finite element method (FEM) with the design of experiments (DoE). Lactose granules were prepared using a wet granulation method with a high-shear mixer and sorted into small and large granules using sieves. To simulate the tableting process using the FEM, parameters simulating each granule were optimized using a DoE and a response surface method (RSM). The compaction behavior of each granule simulated by FEM was in reasonable agreement with the experimental findings. Higher coefficients of friction between powder and die/punch (μ) and lower internal friction angles (αy) were obtained for the small granules. RSM revealed that the die wall force was affected by αy, whereas the pressure transmissibility rate of the punches was affected not only by αy but also by μ. The FEM revealed that the residual shear stress was greater for small granules than for large granules. These results suggest that the inner structure of a tablet comprising small granules was less homogeneous than that comprising large granules. To evaluate the contribution of the simulation parameters to residual stress, these parameters were assigned to a fractional factorial design and an ANOVA was applied. The result indicated that μ was the critical factor influencing residual shear stress. This study demonstrates the importance of combining simulation and statistical analysis to gain a deeper understanding of the tableting process.
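    The ANOVA-on-a-factorial-design step can be sketched as follows: for a balanced two-level design, the main-effect sum of squares of each factor follows from the difference of the response means at its high and low levels. The factors and response coefficients below are hypothetical, chosen so that μ dominates as in the paper's conclusion:

    ```python
    import numpy as np
    from itertools import product

    # hypothetical 2-level full factorial in three simulation parameters
    # (mu: powder-die friction, alpha_y: internal friction angle, E: elastic modulus)
    design = np.array(list(product([-1.0, 1.0], repeat=3)))
    mu, alpha_y, E = design.T

    # synthetic residual-shear-stress response (assumed linear main effects, no noise)
    stress = 10.0 + 4.0 * mu + 1.0 * alpha_y + 0.3 * E

    def ss_main(factor, y):
        """Main-effect sum of squares in a balanced two-level design."""
        diff = y[factor > 0].mean() - y[factor < 0].mean()
        return y.size / 4 * diff ** 2

    ss = {name: ss_main(f, stress)
          for name, f in [("mu", mu), ("alpha_y", alpha_y), ("E", E)]}
    ```

    Ranking the sums of squares (here mu > alpha_y > E) identifies the critical factor; with replicated runs, each would additionally be tested against the error mean square.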

  12. Electronic properties of 3R-CuAlO2 under pressure: Three theoretical approaches

    NASA Astrophysics Data System (ADS)

    Christensen, N. E.; Svane, A.; Laskowski, R.; Palanivel, B.; Modak, P.; Chantis, A. N.; van Schilfgaarde, M.; Kotani, T.

    2010-01-01

    The pressure variation in the structural parameters, u and c/a, of the delafossite CuAlO2 is calculated within the local-density approximation (LDA). Further, the electronic structures obtained by different approximations are compared: LDA, LDA+U, and the recently developed “quasiparticle self-consistent GW” (QSGW) approximation. The structural parameters obtained by the LDA agree very well with experiments but, as expected, gaps in the formal band structure are underestimated compared to optical experiments. The Cu 3d states, which lie too high in the LDA, can be shifted down by LDA+U. The magnitude of the electric field gradient (EFG) as obtained within the LDA is far too small. It can be “fitted” to experiments in LDA+U, but a simultaneous adjustment of the EFG and the gap cannot be obtained with a single U value. QSGW yields reasonable values for both quantities. LDA and QSGW yield significantly different values for some of the band-gap deformation potentials, but calculations within both approximations predict that 3R-CuAlO2 remains an indirect-gap semiconductor at all pressures in its stability range 0-36 GPa, although the smallest direct gap has a negative pressure coefficient.

  13. Optical spectroscopy of lanthanide ions in ZnO-TeO2 glasses.

    PubMed

    Rolli, R; Wachtler, K; Wachtler, M; Bettinelli, M; Speghini, A; Ajò, D

    2001-09-01

    Zinc tellurite glasses of composition 19ZnO-80TeO2-1Ln2O3 with Ln = Eu, Er, Nd, and Tm were prepared by melt quenching. The absorption spectra were measured, and from the experimental oscillator strengths of the f→f transitions the Judd-Ofelt parameters Ωλ were obtained. The values of the Ωλ parameters are in the range usually observed for oxide glasses. For Nd3+ and Er3+, luminescence spectra in the near infrared were measured and the stimulated emission cross sections σp were evaluated for some laser transitions. The high values of σp, especially for Nd3+, make these glasses possible candidates for optical applications. Fluorescence line narrowing (FLN) spectra of the Eu3+-doped glass were measured at 20 K, and the energies of the Stark components of the 7F1 and 7F2 states were obtained. A crystal field analysis was carried out assuming C2v site symmetry. The behaviour of the crystal field ratios B22/B20 and B44/B40 agrees reasonably well with the values calculated using the geometric model proposed by Brecher and Riseberg. The crystal field strength at the Eu3+ sites appears to be very low compared to other oxide glasses.

  14. Evaluating the Community Land Model (CLM4.5) at a coniferous forest site in northwestern United States using flux and carbon-isotope measurements

    DOE PAGES

    Duarte, Henrique F.; Raczka, Brett M.; Ricciuto, Daniel M.; ...

    2017-09-28

    Droughts in the western United States are expected to intensify with climate change. Thus, an adequate representation of ecosystem response to water stress in land models is critical for predicting carbon dynamics. The goal of this study was to evaluate the performance of the Community Land Model (CLM) version 4.5 against observations at an old-growth coniferous forest site in the Pacific Northwest region of the United States (Wind River AmeriFlux site), characterized by a Mediterranean climate that subjects trees to water stress each summer. CLM was driven by site-observed meteorology and calibrated primarily using parameter values observed at the site ormore » at similar stands in the region. Key model adjustments included parameters controlling specific leaf area and stomatal conductance. Default values of these parameters led to significant underestimation of gross primary production, overestimation of evapotranspiration, and consequently overestimation of photosynthetic 13C discrimination, reflected in reduced 13C: 12C ratios of carbon fluxes and pools. Adjustments in soil hydraulic parameters within CLM were also critical, preventing significant underestimation of soil water content and unrealistic soil moisture stress during summer. After calibration, CLM was able to simulate energy and carbon fluxes, leaf area index, biomass stocks, and carbon isotope ratios of carbon fluxes and pools in reasonable agreement with site observations. Overall, the calibrated CLM was able to simulate the observed response of canopy conductance to atmospheric vapor pressure deficit (VPD) and soil water content, reasonably capturing the impact of water stress on ecosystem functioning. Both simulations and observations indicate that stomatal response from water stress at Wind River was primarily driven by VPD and not soil moisture. 
    The calibration of the Ball–Berry stomatal conductance slope (mbb) at Wind River aligned with findings from recent CLM experiments at sites characterized by the same plant functional type (needleleaf evergreen temperate forest), despite significant differences in stand composition, age, and climatology, suggesting that CLM could benefit from a revised mbb value of 6, rather than the default value of 9, for this plant functional type. Conversely, Wind River required a unique calibration of the hydrology submodel to simulate soil moisture, suggesting that the default hydrology has a more limited applicability. This study demonstrates that carbon isotope data can be used to constrain stomatal conductance and intrinsic water use efficiency in CLM, as an alternative to eddy covariance flux measurements. It also demonstrates that carbon isotopes can expose structural weaknesses in the model and provide a key constraint that may guide future model development.

  15. Evaluating the Community Land Model (CLM4.5) at a coniferous forest site in northwestern United States using flux and carbon-isotope measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duarte, Henrique F.; Raczka, Brett M.; Ricciuto, Daniel M.

    Droughts in the western United States are expected to intensify with climate change. Thus, an adequate representation of ecosystem response to water stress in land models is critical for predicting carbon dynamics. The goal of this study was to evaluate the performance of the Community Land Model (CLM) version 4.5 against observations at an old-growth coniferous forest site in the Pacific Northwest region of the United States (Wind River AmeriFlux site), characterized by a Mediterranean climate that subjects trees to water stress each summer. CLM was driven by site-observed meteorology and calibrated primarily using parameter values observed at the site or at similar stands in the region. Key model adjustments included parameters controlling specific leaf area and stomatal conductance. Default values of these parameters led to significant underestimation of gross primary production, overestimation of evapotranspiration, and consequently overestimation of photosynthetic 13C discrimination, reflected in reduced 13C:12C ratios of carbon fluxes and pools. Adjustments in soil hydraulic parameters within CLM were also critical, preventing significant underestimation of soil water content and unrealistic soil moisture stress during summer. After calibration, CLM was able to simulate energy and carbon fluxes, leaf area index, biomass stocks, and carbon isotope ratios of carbon fluxes and pools in reasonable agreement with site observations. Overall, the calibrated CLM was able to simulate the observed response of canopy conductance to atmospheric vapor pressure deficit (VPD) and soil water content, reasonably capturing the impact of water stress on ecosystem functioning. Both simulations and observations indicate that stomatal response from water stress at Wind River was primarily driven by VPD and not soil moisture.
    The calibration of the Ball–Berry stomatal conductance slope (mbb) at Wind River aligned with findings from recent CLM experiments at sites characterized by the same plant functional type (needleleaf evergreen temperate forest), despite significant differences in stand composition, age, and climatology, suggesting that CLM could benefit from a revised mbb value of 6, rather than the default value of 9, for this plant functional type. Conversely, Wind River required a unique calibration of the hydrology submodel to simulate soil moisture, suggesting that the default hydrology has a more limited applicability. This study demonstrates that carbon isotope data can be used to constrain stomatal conductance and intrinsic water use efficiency in CLM, as an alternative to eddy covariance flux measurements. It also demonstrates that carbon isotopes can expose structural weaknesses in the model and provide a key constraint that may guide future model development.

  16. Evaluating the Community Land Model (CLM4.5) at a coniferous forest site in northwestern United States using flux and carbon-isotope measurements

    NASA Astrophysics Data System (ADS)

    Duarte, Henrique F.; Raczka, Brett M.; Ricciuto, Daniel M.; Lin, John C.; Koven, Charles D.; Thornton, Peter E.; Bowling, David R.; Lai, Chun-Ta; Bible, Kenneth J.; Ehleringer, James R.

    2017-09-01

    Droughts in the western United States are expected to intensify with climate change. Thus, an adequate representation of ecosystem response to water stress in land models is critical for predicting carbon dynamics. The goal of this study was to evaluate the performance of the Community Land Model (CLM) version 4.5 against observations at an old-growth coniferous forest site in the Pacific Northwest region of the United States (Wind River AmeriFlux site), characterized by a Mediterranean climate that subjects trees to water stress each summer. CLM was driven by site-observed meteorology and calibrated primarily using parameter values observed at the site or at similar stands in the region. Key model adjustments included parameters controlling specific leaf area and stomatal conductance. Default values of these parameters led to significant underestimation of gross primary production, overestimation of evapotranspiration, and consequently overestimation of photosynthetic 13C discrimination, reflected in reduced 13C : 12C ratios of carbon fluxes and pools. Adjustments in soil hydraulic parameters within CLM were also critical, preventing significant underestimation of soil water content and unrealistic soil moisture stress during summer. After calibration, CLM was able to simulate energy and carbon fluxes, leaf area index, biomass stocks, and carbon isotope ratios of carbon fluxes and pools in reasonable agreement with site observations. Overall, the calibrated CLM was able to simulate the observed response of canopy conductance to atmospheric vapor pressure deficit (VPD) and soil water content, reasonably capturing the impact of water stress on ecosystem functioning. Both simulations and observations indicate that stomatal response from water stress at Wind River was primarily driven by VPD and not soil moisture. 
The calibration of the Ball-Berry stomatal conductance slope (mbb) at Wind River aligned with findings from recent CLM experiments at sites characterized by the same plant functional type (needleleaf evergreen temperate forest), despite significant differences in stand composition and age and climatology, suggesting that CLM could benefit from a revised mbb value of 6, rather than the default value of 9, for this plant functional type. Conversely, Wind River required a unique calibration of the hydrology submodel to simulate soil moisture, suggesting that the default hydrology has a more limited applicability. This study demonstrates that carbon isotope data can be used to constrain stomatal conductance and intrinsic water use efficiency in CLM, as an alternative to eddy covariance flux measurements. It also demonstrates that carbon isotopes can expose structural weaknesses in the model and provide a key constraint that may guide future model development.
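
    As context for the mbb calibration discussed above, the Ball-Berry relation ties stomatal conductance to net assimilation, humidity, and CO2 at the leaf surface. The sketch below is a generic illustration of the slope's role; the numerical inputs and the residual-conductance value are illustrative assumptions, not CLM's implementation.

```python
def ball_berry_conductance(a_net, rh, co2_leaf, m_bb=9.0, g0=0.01):
    """Ball-Berry stomatal conductance (mol m^-2 s^-1).

    a_net: net photosynthesis (umol m^-2 s^-1); rh: relative humidity at the
    leaf surface (0-1); co2_leaf: CO2 at the leaf surface (umol mol^-1).
    g0 is the residual (minimum) conductance; all values here are illustrative.
    """
    return g0 + m_bb * a_net * rh / co2_leaf

# Lowering the slope from the CLM default (9) to the calibrated value (6)
# reduces modeled conductance proportionally at the same assimilation rate.
g_default = ball_berry_conductance(10.0, 0.7, 400.0, m_bb=9.0)
g_calibrated = ball_berry_conductance(10.0, 0.7, 400.0, m_bb=6.0)
```

    Because transpiration scales with conductance, a smaller slope directly reduces modeled water loss per unit carbon gained, which is why the calibration also corrected the evapotranspiration bias noted above.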

  17. The effect of the hot oxygen corona on the interaction of the solar wind with Venus

    NASA Technical Reports Server (NTRS)

    Belotserkovskii, O. M.; Mitnitskii, V. Ya.; Breus, T. K.; Krymskii, A. M.; Nagy, A. F.

    1987-01-01

    A numerical gasdynamic model, which includes the effects of mass loading of the shocked solar wind, was used to calculate the density and magnetic field variations in the magnetosheath of Venus. These calculations were carried out for conditions corresponding to a specific orbit of the Pioneer Venus Orbiter (PVO orbit 582). A comparison of the model predictions and the measured shock position, density and magnetic field values showed a reasonable agreement, indicating that a gasdynamic model that includes the effects of mass loading can be used to predict these parameters.

  18. Slow Crack Growth of Germanium

    NASA Technical Reports Server (NTRS)

    Salem, Jon

    2016-01-01

    The fracture toughness and slow crack growth parameters of germanium supplied as single-crystal beams and coarse-grain disks were measured. Although germanium is anisotropic (A = 1.7), it is not as anisotropic as SiC, NiAl, or Cu, as evidenced by consistent fracture toughness on the {100}, {110}, and {111} planes. Germanium does not exhibit significant slow crack growth in distilled water (n = 100). Practical values for engineering design are a fracture toughness of 0.7 MPa√m and a Weibull modulus of m = 6 ± 2. For well-ground and reasonably handled coupons, fracture strength should be greater than 30 MPa.

  19. COBE DMR-normalized open inflation cold dark matter cosmogony

    NASA Technical Reports Server (NTRS)

    Gorski, Krzysztof M.; Ratra, Bharat; Sugiyama, Naoshi; Banday, Anthony J.

    1995-01-01

    A cut-sky orthogonal mode analysis of the two-year COBE DMR 53 and 90 GHz sky maps (in Galactic coordinates) is used to determine the normalization of an open inflation model based on the cold dark matter (CDM) scenario. The normalized model is compared to measures of large-scale structure in the universe. Although the DMR data alone do not provide sufficient discriminative power to prefer a particular value of the mass density parameter, the open model appears to be reasonably consistent with observations when Omega(sub 0) is approximately 0.3-0.4 and merits further study.

  20. Study of TLIPSS formation on different metals and alloys and their selective etching

    NASA Astrophysics Data System (ADS)

    Dostovalov, Alexandr V.; Korolkov, Victor P.; Terentiev, Vadim S.; Okotrub, Konstantin A.; Dultsev, Fedor N.; Nemykin, Anton; Babin, Sergey A.

    2017-02-01

    An experimental investigation of the formation of thermochemical laser-induced periodic surface structures (TLIPSS) on metal films (Ti, Cr, Ni, NiCr) under different processing conditions is presented. The hypothesis that TLIPSS formation depends significantly on the parabolic rate constant for oxide thin-film growth is discussed. Evidently, the low value of this parameter for Ni is the reason for the absence of TLIPSS on Ni and on NiCr films with low Cr content. The effect of simultaneous ablative (with period ≍λ) and thermochemical (with period ≍λ) LIPSS formation was observed. The formation of structures after selective etching of TLIPSS was demonstrated.

  1. The effect of the hot oxygen corona on the interaction of the solar wind with Venus

    NASA Astrophysics Data System (ADS)

    Belotserkovskii, O. M.; Breus, T. K.; Krymskii, A. M.; Mitnitskii, V. Ya.; Nagy, A. F.; Gombosi, T. I.

    1987-05-01

    A numerical gas dynamic model, which includes the effects of mass loading of the shocked solar wind, was used to calculate the density and magnetic field variations in the magnetosheath of Venus. These calculations were carried out for conditions corresponding to a specific orbit of the Pioneer Venus Orbiter (PVO orbit 582). A comparison of the model predictions and the measured shock position, density and magnetic field values showed a reasonable agreement, indicating that a gas dynamic model that includes the effects of mass loading can be used to predict these parameters.

  2. The clockwork supergravity

    NASA Astrophysics Data System (ADS)

    Kehagias, Alex; Riotto, Antonio

    2018-02-01

    We show that the minimal D = 5, N = 2 gauged supergravity set-up may naturally encode the recently proposed clockwork mechanism. The minimal embedding requires one vector multiplet in addition to the supergravity multiplet, and the clockwork scalar is identified with the scalar in the vector multiplet. The scalar has a two-parameter potential and can accommodate the clockwork, the Randall-Sundrum and a no-scale model with a flat potential, depending on the values of the parameters. The continuous clockwork background breaks half of the original supersymmetries, leaving a D = 4, N = 1 theory on the boundaries. We also show that the hierarchy generated by the clockwork is not exponential but rather power-law. The reason is that the four-dimensional Planck scale has a power-law dependence on the compactification radius, whereas the corresponding KK spectrum depends on the logarithm of the latter.

  3. Two Universal Equations of State for Solids

    NASA Astrophysics Data System (ADS)

    Sun, Jiu-Xun; Wu, Qiang; Guo, Yang; Cai, Ling-Cang

    2010-01-01

    In this paper, two two-parameter equations of state (EOSs) (Sun Jiu-Xun-Morse with parameters n = 3 and 4, designated SMS3 and SMS4) are proposed that satisfy four merits proposed previously and give improved results for the cohesive energy. By applying ten typical EOSs to fit experimental compression data for 50 materials, it is shown that the SMS4 EOS gives the best results; the Baonza and Morse EOSs give the second-best results; and the SMS3 and modified generalized Lennard-Jones (mGLJ) EOSs give the third-best results. However, the Baonza and mGLJ EOSs cannot give physically reasonable values of cohesive energy and P-V curves in the expansion region; the SMS3 and SMS4 EOSs give fairly good results, and have some advantages over the Baonza and mGLJ EOSs in practical applications.

  4. Dilution in single pass arc welds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DuPont, J.N.; Marder, A.R.

    1996-06-01

    A study was conducted on dilution of single pass arc welds of type 308 stainless steel filler metal deposited onto A36 carbon steel by the plasma arc welding (PAW), gas tungsten arc welding (GTAW), gas metal arc welding (GMAW), and submerged arc welding (SAW) processes. Knowledge of the arc and melting efficiency was used in a simple energy balance to develop an expression for dilution as a function of welding variables and thermophysical properties of the filler metal and substrate. Comparison of calculated and experimentally determined dilution values shows the approach provides reasonable predictions of dilution when the melting efficiency can be accurately predicted. The conditions under which such accuracy is obtained are discussed. A diagram is developed from the dilution equation which readily reveals the effect of processing parameters on dilution, to aid in parameter optimization.
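
    The energy-balance reasoning in this record can be sketched generically: net arc energy melts both filler and substrate, and dilution is the melted-substrate fraction of the total weld metal. The function below is a hypothetical simplification under stated assumptions, not the DuPont-Marder expression; all numbers are illustrative.

```python
def dilution_estimate(net_arc_energy, eta_melt, vol_filler, e_filler, e_substrate):
    """Dilution from a simple per-unit-length energy balance (illustrative).

    net_arc_energy: arc energy reaching the work per unit length (J/m),
    i.e. gross arc energy times arc efficiency.
    eta_melt: melting efficiency (fraction of net energy spent melting metal).
    vol_filler: filler metal deposited per unit length (m^3/m).
    e_filler, e_substrate: enthalpy to melt a unit volume of each metal (J/m^3).
    """
    melt_energy = eta_melt * net_arc_energy
    # Energy left after melting the filler melts substrate instead.
    vol_substrate = max(melt_energy - vol_filler * e_filler, 0.0) / e_substrate
    return vol_substrate / (vol_substrate + vol_filler)

# Illustrative inputs: 200 kJ/m net energy, 40% melting efficiency,
# 5 mm^2 of filler per meter, ~10 J/mm^3 melting enthalpy for both metals.
d = dilution_estimate(2.0e5, 0.4, 5.0e-6, 1.0e10, 1.0e10)
```

    A sketch like this makes the qualitative point of the record: for fixed deposited filler, predicted dilution is only as good as the melting-efficiency estimate.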

  5. Microstructural and electrical properties of PVA/PVP polymer blend films doped with cupric sulphate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemalatha, K.; Gowtham, G. K.; Somashekarappa, H., E-mail: drhssappa@gmail.com

    2016-05-23

    A series of polyvinyl alcohol (PVA)/polyvinyl pyrrolidone (PVP) polymer blends with different concentrations of cupric sulphate (CuSO{sub 4}) were prepared by the solution casting method and subjected to X-ray diffraction (XRD) and AC conductance measurements. An attempt has been made to study the changes in crystal imperfection parameters in PVA/PVP blend films with increasing concentration of CuSO{sub 4}. Results show that the decrease in microcrystalline parameter values is accompanied by an increase in the amorphous content of the film, which is the reason the film has more flexibility, biodegradability and good ionic conductivity. AC conductance measurements on these films show that the conductivity increases as the concentration of CuSO{sub 4} increases. These films are suitable for electrochemical applications.

  6. Magnetic properties comparison of mass standards among seventeen national metrology institutes

    NASA Astrophysics Data System (ADS)

    Becerra, L. O.; Berry, J.; Chang, C. S.; Chapman, G. D.; Chung, J. W.; Davis, R. S.; Field, I.; Fuchs, P.; Jacobsson, U.; Lee, S. M.; Loayza, V. M.; Madec, T.; Matilla, C.; Ooiwa, A.; Scholz, F.; Sutton, C.; van Andel, I.

    2006-10-01

    The ubiquitous technology of magnetic force compensation of gravitational forces acting on artifacts on the pans of modern balances and comparators has brought with it the problem of magnetic leakage from the compensation coils. Leaking magnetic fields, as well as those due to the surroundings of the balance, can interact with the artifact whose mass is to be determined, causing erroneous values to be observed. For this reason, and to comply with normative standards, it has become important for mass metrologists to evaluate the magnetic susceptibility and any remanent magnetization that mass standards may possess. This paper describes a comparison of measurements of these parameters among seventeen national metrology institutes. The measurements are made on three transfer standards whose magnetic parameters span the range that might be encountered in stainless steel mass standards.

  7. Continuous depth profile of mechanical properties in the Nankai accretionary prism based on drilling performance parameters

    NASA Astrophysics Data System (ADS)

    Hamada, Y.; Kitamura, M.; Yamada, Y.; Sanada, Y.; Moe, K.; Hirose, T.

    2016-12-01

    In-situ rock properties in and around the seismogenic zone of an accretionary prism are key parameters for understanding the development mechanisms of the prism, the spatio-temporal variation of the stress state, and so on. To acquire a continuous depth profile of in-situ formation strength in an accretionary prism, we propose a new method to evaluate in-situ rock strength from drilling performance parameters. Drilling parameters are inevitably obtained by any drilling operation, even in non-coring intervals or in challenging environments where core recovery may be poor. A relationship between rock properties and drilling parameters was proposed in previous research [e.g. Teale, 1964]. We adopted the relationship of Teale [1964] and developed a conversion method to estimate in-situ rock strength without depending on uncertain parameters such as weight on bit (WOB). Specifically, we first calculated the equivalent specific toughness (EST), which represents the gradient of the relationship between torque energy and volume of penetration over an arbitrary interval (in this study, five meters). The EST values were then converted into strength using the drilling-parameter-rock-strength correlation obtained by Karasawa et al. [2002]. This method was applied to eight drilling holes at Site C0002 of IODP NanTroSEIZE in order to evaluate in-situ rock strength from the shallow to the deep accretionary prism. In the shallower part (0-300 mbsf), the calculated strength shows a sharp increase up to 20 MPa. The strength then remains approximately constant down to 1500 mbsf, without significant change even at the unconformity around 1000 mbsf (the boundary between the forearc basin and the accretionary prism). Below that depth, the strength gradually increases with depth, up to 60 MPa at 3000 mbsf, with variation between 10 and 80 MPa. Because the calculated strength spans approximately the same lithology, the increasing trend can be attributed to increasing rock strength.
    This strength-depth curve corresponds reasonably well with the strength data of core and cutting samples collected from holes C0002N and C0002P [Kitamura et al., 2016 AGU]. These results show the validity of the method for evaluating in-situ strength from drilling parameters.
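
    The EST step described in this record (the gradient of torque energy versus penetrated volume over a fixed depth interval) can be sketched as a least-squares fit. The function name and the synthetic data below are illustrative assumptions, not values from Site C0002.

```python
import numpy as np

def equivalent_specific_toughness(torque_energy, penetration_volume):
    """Slope of cumulative torque energy vs. penetrated rock volume (J/m^3),
    fitted by least squares over one depth interval (e.g. 5 m)."""
    A = np.vstack([penetration_volume, np.ones_like(penetration_volume)]).T
    slope, _intercept = np.linalg.lstsq(A, torque_energy, rcond=None)[0]
    return slope

# Synthetic interval: energy grows linearly with volume at 40 MJ/m^3.
vol = np.linspace(0.0, 0.5, 20)      # m^3 of rock penetrated
energy = 40e6 * vol + 1.0e3          # J, with a small constant offset
est = equivalent_specific_toughness(energy, vol)
```

    In the study's workflow the fitted slope would then be mapped to strength through an empirical correlation such as that of Karasawa et al. [2002]; the fit itself needs no WOB input, which is the point of the method.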

  8. Modeling and sensitivity analysis of mass transfer in active multilayer polymeric film for food applications

    NASA Astrophysics Data System (ADS)

    Bedane, T.; Di Maio, L.; Scarfato, P.; Incarnato, L.; Marra, F.

    2015-12-01

    The barrier performance of multilayer polymeric films for food applications has been significantly improved by incorporating oxygen scavenging materials. The scavenging activity depends on parameters such as the diffusion coefficient, solubility, concentration of scavenger loaded, and the number of available reactive sites. These parameters influence the barrier performance of the film in different ways. Virtualization of the process is useful to characterize, design and optimize the barrier performance based on the physical configuration of the films. Knowledge of the parameter values is also important for predicting performance. Inverse modeling and sensitivity analysis are the sole way to find reasonable values of poorly defined, unmeasured parameters and to identify the most influential parameters. Thus, the objective of this work was to develop a model to predict the barrier properties of a multilayer film incorporating reactive layers and to analyze and characterize its performance. A polymeric film based on three layers of polyethylene terephthalate (PET), with a core reactive layer, at different thickness configurations was considered in the model. A one-dimensional diffusion equation with reaction was solved numerically to predict the concentration of oxygen diffused into the polymer, taking into account the reactive ability of the core layer. The model was solved using commercial software for different film layer configurations, and sensitivity analysis based on inverse modeling was carried out to understand the effect of the physical parameters. The results show that sensitivity analysis can provide physical understanding of the parameters that most strongly affect gas permeation into the film. Solubility and the number of available reactive sites were the factors mainly influencing the barrier performance of the three-layered polymeric film.
    Multilayer films slightly modified the steady transport properties in comparison to neat PET, giving a small reduction in the permeability and oxygen transfer rate values. The scavenging capacity of the multilayer film increased linearly with the thickness of the reactive layer, and the oxygen absorption reaction at short times decreased proportionally with the thickness of the external PET layer.
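
    A minimal numerical sketch of the one-dimensional diffusion equation with reaction described here: explicit finite differences with a first-order scavenging term active only in the core layer. All coefficients, layer positions, and boundary values are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

def oxygen_profile(n=60, steps=4000, D=1e-12, k=1e-3, L=60e-6,
                   c_out=8.0, core=(20, 40)):
    """Explicit FTCS solution of dc/dt = D d2c/dx2 - k*c, with the reaction
    rate k active only in the core scavenging layer (grid indices `core`).
    Boundaries are held at the outside oxygen concentration c_out."""
    dx = L / (n - 1)
    dt = 0.4 * dx**2 / D          # satisfies the FTCS stability limit dx^2/(2D)
    c = np.zeros(n)               # film initially oxygen-free
    react = np.zeros(n)
    react[core[0]:core[1]] = k    # scavenging only in the core layer
    for _ in range(steps):
        lap = np.zeros(n)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        c = c + dt * (D * lap - react * c)
        c[0] = c[-1] = c_out      # outer surfaces at ambient concentration
    return c
```

    The resulting profile dips inside the reactive core, which is the qualitative behavior the sensitivity analysis probes: a larger k or more reactive sites deepens the dip and lowers the oxygen flux reaching the inner surface.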

  9. Modeling and sensitivity analysis of mass transfer in active multilayer polymeric film for food applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bedane, T.; Di Maio, L.; Scarfato, P.

    The barrier performance of multilayer polymeric films for food applications has been significantly improved by incorporating oxygen scavenging materials. The scavenging activity depends on parameters such as the diffusion coefficient, solubility, concentration of scavenger loaded, and the number of available reactive sites. These parameters influence the barrier performance of the film in different ways. Virtualization of the process is useful to characterize, design and optimize the barrier performance based on the physical configuration of the films. Knowledge of the parameter values is also important for predicting performance. Inverse modeling and sensitivity analysis are the sole way to find reasonable values of poorly defined, unmeasured parameters and to identify the most influential parameters. Thus, the objective of this work was to develop a model to predict the barrier properties of a multilayer film incorporating reactive layers and to analyze and characterize its performance. A polymeric film based on three layers of polyethylene terephthalate (PET), with a core reactive layer, at different thickness configurations was considered in the model. A one-dimensional diffusion equation with reaction was solved numerically to predict the concentration of oxygen diffused into the polymer, taking into account the reactive ability of the core layer. The model was solved using commercial software for different film layer configurations, and sensitivity analysis based on inverse modeling was carried out to understand the effect of the physical parameters. The results show that sensitivity analysis can provide physical understanding of the parameters that most strongly affect gas permeation into the film. Solubility and the number of available reactive sites were the factors mainly influencing the barrier performance of the three-layered polymeric film.
    Multilayer films slightly modified the steady transport properties in comparison to neat PET, giving a small reduction in the permeability and oxygen transfer rate values. The scavenging capacity of the multilayer film increased linearly with the thickness of the reactive layer, and the oxygen absorption reaction at short times decreased proportionally with the thickness of the external PET layer.

  10. Communicating moral reasoning in medicine as an expression of respect for patients and integrity among professionals.

    PubMed

    Kaldjian, Lauris Christopher

    2013-01-01

    The communication of moral reasoning in medicine can be understood as a means of showing respect for patients and colleagues through the giving of moral reasons for actions. This communication is especially important when disagreements arise. While moral reasoning should strive for impartiality, it also needs to acknowledge the individual moral beliefs and values that distinguish each person (moral particularity) and give rise to the challenge of contrasting moral frameworks (moral pluralism). Efforts to communicate moral reasoning should move beyond common approaches to principles-based reasoning in medical ethics by addressing the underlying beliefs and values that define our moral frameworks and guide our interpretations and applications of principles. Communicating about underlying beliefs and values requires a willingness to grapple with challenges of accessibility (the degree to which particular beliefs and values are intelligible between persons) and translatability (the degree to which particular beliefs and values can be transposed from one moral framework to another) as words and concepts are used to communicate beliefs and values. Moral dialogues between professionals and patients and among professionals themselves need to be handled carefully, and sometimes these dialogues invite reference to underlying beliefs and values. When professionals choose to articulate such beliefs and values, they can do so as an expression of respectful patient care and collaboration and as a means of promoting their own moral integrity by signaling the need for consistency between their own beliefs, words and actions.

  11. Could CT screening for lung cancer ever be cost effective in the United Kingdom?

    PubMed Central

    Whynes, David K

    2008-01-01

    Background: The absence of trial evidence makes it impossible to determine whether or not mass screening for lung cancer would be cost effective and, indeed, whether a clinical trial to investigate the problem would be justified. Attempts have been made to resolve this issue by modelling, although the complex models developed to date have required more real-world data than are currently available. Being founded on unsubstantiated assumptions, they have produced estimates with wide confidence intervals and of uncertain relevance to the United Kingdom. Method: I develop a simple, deterministic model of a screening regimen potentially applicable to the UK. The model includes only a limited number of parameters, for the majority of which values have already been established in non-trial settings. The component costs of screening are derived from government guidance and from published audits, whilst the values for test parameters are derived from clinical studies. The expected health gains as a result of screening are calculated by combining published survival data for screened and unscreened cohorts with data from Life Tables. When a degree of uncertainty over a parameter value exists, I use a conservative estimate, i.e. one likely to make screening appear less, rather than more, cost effective. Results: The incremental cost effectiveness ratio of a single screen amongst a high-risk male population is calculated to be around £14,000 per quality-adjusted life year gained. The average cost of this screening regimen per person screened is around £200. It is possible that, when obtained experimentally in any future trial, parameter values will be found to differ from those previously obtained in non-trial settings.
    On the basis both of differing assumptions about evaluation conventions and of reasoned speculations as to how test parameters and costs might behave under screening, the model generates cost effectiveness ratios as high as around £20,000 and as low as around £7,000. Conclusion: It is evident that eventually being able to identify a cost effective regimen of CT screening for lung cancer in the UK is by no means an unreasonable expectation. PMID:18302756
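
    The cost-effectiveness arithmetic behind the figures in this record is a simple ratio of incremental cost to incremental QALYs. The per-person QALY gain below is back-calculated purely for illustration and is not a figure from the paper.

```python
def icer(cost_intervention, cost_comparator, qaly_intervention, qaly_comparator):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_intervention - cost_comparator) / (qaly_intervention - qaly_comparator)

# Illustrative back-calculation: a ~£200 per-person screening cost and a
# gain of ~0.0143 QALYs per person screened reproduce an ICER near the
# ~£14,000/QALY reported for a single screen in a high-risk male cohort.
value = icer(200.0, 0.0, 0.0143, 0.0)
```

    Sensitivity of the ICER to such small per-person QALY gains is exactly why the abstract's conservative-estimate policy matters: modest shifts in either numerator or denominator move the ratio across the £7,000-£20,000 range quoted.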

  12. Bio-mathematical analysis for the peristaltic flow of single wall carbon nanotubes under the impact of variable viscosity and wall properties.

    PubMed

    Shahzadi, Iqra; Sadaf, Hina; Nadeem, Sohail; Saleem, Anber

    2017-02-01

    The main objective of this paper is a bio-mathematical analysis of the peristaltic flow of single-wall carbon nanotubes under the impact of variable viscosity and wall properties. The right and left walls of the curved channel possess a sinusoidal wave travelling along the outer boundary. The features of the peristaltic motion are determined using the long-wavelength and low-Reynolds-number approximation. Exact solutions are determined for the axial velocity and for the temperature profile. Graphical results are presented for the velocity profile, temperature and stream function for various physical parameters of interest. The symmetry of the curved channel is disturbed for smaller values of the curvature parameter. It is found that the magnitude of the velocity profile increases for larger values of the variable viscosity parameter in both cases (pure blood as well as single-wall carbon nanotubes). The velocity profile also increases with increasing values of the rigidity parameter, because an increase in the rigidity parameter decreases tension in the walls of the blood vessels, which speeds up the blood flow for pure blood as well as for single-wall carbon nanotubes. An increase in the Grashof number decreases the fluid velocity, because viscous forces then play the prominent role. It is also found that the temperature drops for increasing values of the nanoparticle volume fraction: the higher thermal conductivity of the nanoparticles enables quick heat dissipation, which justifies the use of single-wall carbon nanotubes as a coolant in different situations. The symmetry of the curved channel is destroyed by the curvedness for the velocity, temperature and contour plots. The addition of single-wall carbon nanotubes decreases the fluid temperature.
    The trapping phenomena show that the size of the trapped bolus is smaller for the pure blood case than for single-wall carbon nanotubes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. The comparison of automated clustering algorithms for resampling representative conformer ensembles with RMSD matrix.

    PubMed

    Kim, Hyoungrae; Jang, Cheongyun; Yadav, Dharmendra K; Kim, Mi-Hyun

    2017-03-23

    The accuracy of 3D-QSAR, pharmacophore, and 3D-similarity-based chemometric target-fishing models depends strongly on a reasonable sample of active conformations. Many diverse conformational sampling algorithms exist that exhaustively generate large conformer sets, yet model-building methods rely on an explicit number of common conformers. In this work, we developed clustering algorithms that automatically find a reasonable number of representative conformer ensembles from an asymmetric dissimilarity matrix generated with the OpenEye toolkit. RMSD was the key descriptor (variable): each column of the N × N matrix was treated as one of N variables describing the relationship (network) between a conformer (in a row) and the other N conformers. This approach was used to evaluate well-known clustering algorithms by comparing their ability to generate representative conformer ensembles, and to test them over different matrix transformation functions with respect to stability. In the network, the representative conformer group could be resampled by four algorithms with implicit parameters; the directed dissimilarity matrix is the only input to the clustering algorithms. The Dunn index, Davies-Bouldin index, eta-squared, and omega-squared values were used to evaluate the clustering algorithms with respect to compactness and explanatory power. The evaluation also covered the reduction (abstraction) rate of the data, the correlation between the sizes of the population and the samples, the computational complexity, and the memory usage. Every algorithm found representative conformers automatically without any user intervention, and they reduced the data to 14-19% of the original size within at most 1.13 s per sample. The clustering methods are simple and practical, as they are fast and require no explicit parameters. 
    RCDTC presented the maximum Dunn and omega-squared values of the four algorithms, in addition to a consistent reduction rate between the population size and the sample size. The performance of the clustering algorithms was consistent over different transformation functions. Moreover, the clustering method can also be applied to molecular dynamics sampling simulation results.
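
    A minimal sketch of the workflow described above, with synthetic data: a pure-Python stand-in for the OpenEye-generated RMSD dissimilarity matrix, an exhaustive 2-medoid clustering in place of the paper's four algorithms, and a Dunn index computed on the result. Everything here is illustrative, not the paper's code.

```python
import math, random

random.seed(0)

# Toy stand-in for an N x N conformer dissimilarity matrix: pairwise
# RMSD-like distances between two well-separated groups of "conformers"
# in a 3-D feature space (the paper builds this matrix with the OpenEye
# toolkit; here it is synthetic).
pts = [[random.gauss(0.0, 0.1) for _ in range(3)] for _ in range(10)] + \
      [[random.gauss(1.0, 0.1) for _ in range(3)] for _ in range(10)]
n = len(pts)
D = [[math.dist(p, q) for q in pts] for p in pts]

# Exhaustive 2-medoid clustering on the precomputed dissimilarities:
# pick the pair of representative conformers that minimizes the summed
# distance of every conformer to its nearest medoid.
best = min(((i, j) for i in range(n) for j in range(i + 1, n)),
           key=lambda m: sum(min(D[r][m[0]], D[r][m[1]]) for r in range(n)))
labels = [0 if D[r][best[0]] <= D[r][best[1]] else 1 for r in range(n)]

# Dunn index: smallest inter-cluster distance over largest intra-cluster
# diameter; larger values mean compact, well-separated clusters.
inter = min(D[i][j] for i in range(n) for j in range(n)
            if labels[i] != labels[j])
intra = max(D[i][j] for i in range(n) for j in range(n)
            if labels[i] == labels[j])
dunn = inter / intra
```

    The medoid pair plays the role of the "representative conformer ensemble"; in the paper this selection is automatic and parameter-free, whereas this sketch fixes the cluster count at two for brevity.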

  14. SiC JFET Transistor Circuit Model for Extreme Temperature Range

    NASA Technical Reports Server (NTRS)

    Neudeck, Philip G.

    2008-01-01

    A technique for simulating extreme-temperature operation of integrated circuits that incorporate silicon carbide (SiC) junction field-effect transistors (JFETs) has been developed. The technique involves modification of NGSPICE, which is an open-source version of the popular Simulation Program with Integrated Circuit Emphasis (SPICE) general-purpose analog-integrated-circuit-simulating software. NGSPICE in its unmodified form is used for simulating and designing circuits made from silicon-based transistors that operate at or near room temperature. Two rapid modifications of NGSPICE source code enable SiC JFETs to be simulated to 500 C using the well-known Level 1 model for silicon metal oxide semiconductor field-effect transistors (MOSFETs). First, the default value of the MOSFET surface potential must be changed. In the unmodified source code, this parameter has a value of 0.6, which corresponds to slightly more than half the bandgap of silicon. In NGSPICE modified to simulate SiC JFETs, this parameter is changed to a value of 1.6, corresponding to slightly more than half the bandgap of SiC. The second modification consists of changing the temperature dependence of MOSFET transconductance and saturation parameters. The unmodified NGSPICE source code implements a T(sup -1.5) temperature dependence for these parameters. In order to mimic the temperature behavior of experimental SiC JFETs, a T(sup -1.3) temperature dependence must be implemented in the NGSPICE source code. Following these two simple modifications, the Level 1 MOSFET model of the NGSPICE circuit simulation program reasonably approximates the measured high-temperature behavior of experimental SiC JFETs properly operated with zero or reverse bias applied to the gate terminal. Modification of additional silicon parameters in the NGSPICE source code was not necessary to model experimental SiC JFET current-voltage performance across the entire temperature range from 25 to 500 C.
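
    The second modification described above (the temperature exponent) can be illustrated with a short sketch. `kp_at_temperature` is a hypothetical helper for this record only, not actual NGSPICE source code; it shows how a transconductance-type parameter scales from its nominal value under the two power laws.

```python
# SPICE-style temperature scaling: a transconductance-type parameter is
# scaled from its nominal value at Tnom by (T/Tnom)^p, with p = -1.5 in
# stock NGSPICE (silicon) and p = -1.3 for the SiC JFET fit.
# Temperatures are given in degrees Celsius and converted to kelvin.
def kp_at_temperature(kp_nom, t_c, t_nom_c=27.0, exponent=-1.5):
    return kp_nom * ((t_c + 273.15) / (t_nom_c + 273.15)) ** exponent

kp_si_law = kp_at_temperature(1.0, 500.0, exponent=-1.5)   # stock law
kp_sic_law = kp_at_temperature(1.0, 500.0, exponent=-1.3)  # modified law
```

    At 500 C the -1.3 law predicts noticeably less roll-off than the stock -1.5 law, which is the experimentally observed SiC JFET behavior the modification captures.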

  15. Correcting the spectroscopic surface gravity using transits and asteroseismology. No significant effect on temperatures or metallicities with ARES and MOOG in local thermodynamic equilibrium

    NASA Astrophysics Data System (ADS)

    Mortier, A.; Sousa, S. G.; Adibekyan, V. Zh.; Brandão, I. M.; Santos, N. C.

    2014-12-01

    Context. Precise stellar parameters (effective temperature, surface gravity, metallicity, stellar mass, and radius) are crucial for several reasons, amongst which are the precise characterization of orbiting exoplanets and the correct determination of galactic chemical evolution. The atmospheric parameters are extremely important because all the other stellar parameters depend on them. Using our standard equivalent-width method on high-resolution spectroscopy, good precision can be obtained for the derived effective temperature and metallicity. The surface gravity, however, is usually not well constrained with spectroscopy. Aims: We use two different samples of FGK dwarfs to study the effect of the stellar surface gravity on the precise spectroscopic determination of the other atmospheric parameters. Furthermore, we present a straightforward formula for correcting the spectroscopic surface gravities derived by our method and with our linelists. Methods: Our spectroscopic analysis is based on Kurucz models in local thermodynamic equilibrium, performed with the MOOG code to derive the atmospheric parameters. The surface gravity was either left free or fixed to a predetermined value, the latter obtained either from a photometric transit light curve or from asteroseismology. Results: We find, first, that despite some minor trends, the effective temperatures and metallicities of FGK dwarfs derived with the described method and linelists are in most cases affected only within the error bars by using different values for the surface gravity, even for very large differences in surface gravity, so they can be trusted. The temperatures derived with a fixed surface gravity remain compatible within 1 sigma with the accurate results of the infrared flux method (IRFM), as is the case for the unconstrained temperatures. 
    Secondly, we find that the spectroscopic surface gravity can easily be corrected to a more accurate value using a linear function of the effective temperature. Tables 1 and 2 are available in electronic form at http://www.aanda.org
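
    The correction described in the last sentence can be sketched as follows. The coefficients a and b here are invented placeholders purely for illustration; the calibrated values are given in the paper's own tables.

```python
# Illustrative only: the paper's correction is a linear function of the
# effective temperature applied to the spectroscopic log g. The default
# coefficients below are made-up placeholders, not the published fit.
def corrected_logg(logg_spec, teff, a=-3.9e-4, b=2.1):
    # The correction term grows linearly with Teff (in kelvin).
    return logg_spec + a * teff + b

# A Sun-like star: the spectroscopic log g is nudged to a slightly
# different, nominally more accurate, value.
logg_fixed = corrected_logg(4.44, 5777.0)
```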

  16. A Linguistic Truth-Valued Temporal Reasoning Formalism and Its Implementation

    NASA Astrophysics Data System (ADS)

    Lu, Zhirui; Liu, Jun; Augusto, Juan C.; Wang, Hui

    Temporality and uncertainty are important features of many real-world systems. Solving problems in such systems requires formal mechanisms such as logic systems, statistical methods, or other reasoning and decision-making methods. In this paper, we propose a linguistic truth-valued temporal reasoning formalism that enables both features to be managed concurrently, using a linguistic truth-valued logic together with a temporal logic. We also provide a backward reasoning algorithm which allows user queries to be answered. A simple but realistic scenario in a smart home application is used to illustrate our work.
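
    A minimal backward-reasoning (backward-chaining) sketch, without the linguistic truth values and temporal operators that are the paper's actual contribution. The rule and fact names are invented to give a smart-home flavour.

```python
# Each rule maps a goal to alternative bodies (lists of subgoals); a
# goal is proved if it is a known fact or if every subgoal of some
# body can itself be proved.
rules = {"alarm": [["smoke", "night"]],          # alarm <- smoke AND night
         "notify_carer": [["alarm"], ["fall"]]}  # two alternative rules
facts = {"smoke", "night"}

def prove(goal):
    if goal in facts:
        return True
    return any(all(prove(g) for g in body) for body in rules.get(goal, []))

result = prove("notify_carer")  # answered by backward chaining
```

    The paper's algorithm additionally propagates linguistic truth values and time intervals through this kind of goal decomposition, rather than plain booleans.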

  17. Safety assessment of a shallow foundation using the random finite element method

    NASA Astrophysics Data System (ADS)

    Zaskórski, Łukasz; Puła, Wojciech

    2015-04-01

    The complex structure of soil and its random character make soil modeling a cumbersome task. The heterogeneity of soil has to be considered even within a homogeneous soil layer, so estimating the shear strength parameters of soil for a geotechnical analysis causes many problems. The applicable standard (Eurocode 7) presents no explicit method for evaluating characteristic values of soil parameters, only general guidelines on how these values should be estimated. Hence, many approaches to assessing characteristic values of soil parameters are presented in the literature and can be applied in practice. In this paper, the reliability assessment of a shallow strip footing was conducted using a reliability index β, and several approaches to estimating characteristic values of soil properties were compared by evaluating the reliability index β achievable with each of them: the method of Orr and Breysse, Duncan's method, Schneider's method, Schneider's method accounting for fluctuation scales, and the method included in Eurocode 7. Design values of the bearing capacity based on these approaches were compared with the stochastic bearing capacity estimated by the random finite element method (RFEM), for various widths and depths of the foundation and in conjunction with the design approaches (DA) defined in Eurocode. RFEM, introduced by Griffiths and Fenton (1993), combines the deterministic finite element method, random field theory, and Monte Carlo simulation. Random field theory allows the random character of soil parameters to be considered within a homogeneous soil layer: a soil property is treated as a separate random variable in every element of the finite element mesh, with a proper correlation structure between points of the given area. 
    RFEM was applied to estimate which theoretical probability distribution fits the empirical probability distribution of bearing capacity, based on 3000 realizations. The assessed distribution was then used to compute design values of the bearing capacity and the related reliability indices β. The analysis was carried out for a cohesive soil, so the friction angle and the cohesion were defined as random parameters characterized by two-dimensional random fields: the friction angle by a bounded distribution, as it varies within a limited range, and the cohesion by a lognormal distribution. The other properties (Young's modulus, Poisson's ratio, and unit weight) were assumed deterministic because they have negligible influence on the stochastic bearing capacity. Griffiths, D. V., & Fenton, G. A. (1993). Seepage beneath water retaining structures founded on spatially random soil. Géotechnique, 43(6), 577-587.
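
    A heavily simplified Monte Carlo sketch of the reliability calculation described above. It assumes a Terzaghi-type bearing-capacity formula with Vesić factors, treats each soil property as a single random variable per realization (no spatial random field, unlike RFEM), and uses invented soil statistics and design load.

```python
import math, random

random.seed(1)

def bearing_capacity(c, phi_deg, gamma=18.0, width=1.0, depth=1.0):
    # Terzaghi-type q_ult = c*Nc + q*Nq + 0.5*gamma*B*Ngamma,
    # with Vesic-style bearing capacity factors (units: kPa, m, kN/m^3).
    phi = math.radians(phi_deg)
    nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.pi / 4 + phi / 2) ** 2
    nc = (nq - 1.0) / math.tan(phi)
    ng = 2.0 * (nq + 1.0) * math.tan(phi)
    return c * nc + gamma * depth * nq + 0.5 * gamma * width * ng

samples = []
for _ in range(3000):  # same realization count as the study
    # Bounded friction angle via a scaled beta draw; lognormal cohesion.
    phi_deg = 20.0 + 15.0 * random.betavariate(4.0, 4.0)   # 20..35 degrees
    c = random.lognormvariate(math.log(10.0), 0.3)         # kPa
    samples.append(bearing_capacity(c, phi_deg))

mean = sum(samples) / len(samples)
std = (sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)) ** 0.5
q_design = 300.0                    # kPa, assumed design pressure
beta = (mean - q_design) / std      # Cornell-type reliability index
```

    RFEM proper would replace the two scalar draws with correlated two-dimensional random fields and a finite element solve per realization; the reliability index bookkeeping, however, has this same shape.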

  18. Utility usage forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hosking, Jonathan R. M.; Natarajan, Ramesh

    The computer creates a utility demand forecast model over weather parameters by: receiving a plurality of utility parameter values, each corresponding to a weather parameter value; determining that a range of weather parameter values lacks a sufficient number of corresponding received utility parameter values; determining one or more utility parameter values that correspond to that range of weather parameter values; and creating a model that correlates the received and determined utility parameter values with the corresponding weather parameter values.

  19. Assessing the accuracy of subject-specific, muscle-model parameters determined by optimizing to match isometric strength.

    PubMed

    DeSmitt, Holly J; Domire, Zachary J

    2016-12-01

    Biomechanical models are sensitive to the choice of model parameters. Therefore, determination of accurate subject specific model parameters is important. One approach to generate these parameters is to optimize the values such that the model output will match experimentally measured strength curves. This approach is attractive as it is inexpensive and should provide an excellent match to experimentally measured strength. However, given the problem of muscle redundancy, it is not clear that this approach generates accurate individual muscle forces. The purpose of this investigation is to evaluate this approach using simulated data to enable a direct comparison. It is hypothesized that the optimization approach will be able to recreate accurate muscle model parameters when information from measurable parameters is given. A model of isometric knee extension was developed to simulate a strength curve across a range of knee angles. In order to realistically recreate experimentally measured strength, random noise was added to the modeled strength. Parameters were solved for using a genetic search algorithm. When noise was added to the measurements the strength curve was reasonably recreated. However, the individual muscle model parameters and force curves were far less accurate. Based upon this examination, it is clear that very different sets of model parameters can recreate similar strength curves. Therefore, experimental variation in strength measurements has a significant influence on the results. Given the difficulty in accurately recreating individual muscle parameters, it may be more appropriate to perform simulations with lumped actuators representing similar muscles.
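
    A toy version of the optimization approach examined above, with a crude random search standing in for the genetic algorithm. The two-muscle geometry, moment arms, and parameter values are all invented for illustration.

```python
import math, random

random.seed(0)

# Invented two-muscle knee-extension model: total isometric torque is
# the sum of two muscles with similar (redundant) angle-dependent
# moment arms; the unknown parameters are the maximal forces f1, f2.
angles = [math.radians(a) for a in range(30, 121, 10)]

def torque(f1, f2, th):
    return f1 * 0.05 * math.sin(th) + f2 * 0.045 * math.sin(th + 0.1)

true_params = (3000.0, 2000.0)
# Simulated strength curve with measurement noise, as in the paper.
measured = [torque(*true_params, th) + random.gauss(0.0, 5.0)
            for th in angles]

def cost(params):
    return sum((torque(*params, th) - m) ** 2
               for th, m in zip(angles, measured))

# Crude random-search stand-in for the paper's genetic algorithm.
best = min(((random.uniform(0.0, 5000.0), random.uniform(0.0, 5000.0))
            for _ in range(20000)), key=cost)
```

    Because the two moment-arm profiles are nearly parallel, many (f1, f2) pairs reproduce the strength curve almost equally well: the fitted curve matches, but the individual forces need not, which is exactly the redundancy problem the abstract describes.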

  20. [Parameter sensitivity of simulating net primary productivity of Larix olgensis forest based on BIOME-BGC model].

    PubMed

    He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong

    2016-02-01

    Models based on vegetation ecophysiological processes contain many parameters, and reasonable parameter values greatly improve simulation ability. Sensitivity analysis, an important method for screening out sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study simulating the net primary productivity (NPP) of a Larix olgensis forest in Wangqing, Jilin Province. First, by comparing field measurements with the simulation results, we tested the BIOME-BGC model's capability to simulate the NPP of the L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the sensitive parameters with a strong influence on NPP. On this basis, we quantitatively estimated the sensitivity of the screened parameters, calculating the global, first-order, and second-order sensitivity indices. The results showed that the BIOME-BGC model could simulate the NPP of the L. olgensis forest in the sample plot well. The Morris method provided a reliable parameter sensitivity ranking with a relatively small sample size, while the EFAST method could quantitatively measure both the individual impact of a single parameter on the simulation result and the interactions between parameters in the BIOME-BGC model. The most influential parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation ratio and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than that of any other parameter interaction.
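
    A bare-bones illustration of Morris-style one-at-a-time screening: the mean absolute elementary effect of each parameter ranks its influence. The three-parameter model and its parameter names are invented stand-ins, not BIOME-BGC.

```python
import random

random.seed(0)

# Toy stand-in for a BIOME-BGC-like model: "NPP" as a nonlinear
# function of three parameters (names are illustrative only).
def npp(alloc_ratio, cn_leaf, other):
    return 100.0 * alloc_ratio / cn_leaf + 0.1 * other

# Morris one-at-a-time screening: the elementary effect of parameter i
# is the model change when only x_i is perturbed by delta; averaging
# absolute effects over random base points gives the mu* ranking.
def elementary_effects(model, n_params, n_trajectories=50, delta=0.1):
    mu_star = [0.0] * n_params
    for _ in range(n_trajectories):
        x = [random.uniform(0.1, 0.9) for _ in range(n_params)]
        base = model(*x)
        for i in range(n_params):
            xp = list(x)
            xp[i] += delta
            mu_star[i] += abs(model(*xp) - base) / delta
    return [m / n_trajectories for m in mu_star]

effects = elementary_effects(npp, 3)  # third parameter should rank last
```

    EFAST would go further, decomposing the output variance into first-order and interaction terms; Morris screening as above only ranks parameters, which is why the paper uses it for the cheap first pass.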

  1. Geomagnetic paleointensities by the Thelliers' method from submarine pillow basalts: Effects of seafloor weathering

    USGS Publications Warehouse

    Gromme, Sherman; Mankinen, Edward A.; Marshall, Monte; Coe, Robert S.

    1979-01-01

    Measurements of geomagnetic paleointensity using the Thelliers' double-heating method in vacuum have been made on 10 specimens of submarine pillow basalt obtained from 7 fragments dredged from localities 700,000 years old or younger. In the magnetic minerals, the titanium/iron ratio parameter x and the cation deficiency (oxidation) parameter z were determined by X-ray diffraction and Curie temperature measurement. Fresh material (z ≅ 0) provided excellent results: most of the natural remanent magnetization (NRM) could be thermally demagnetized before the magnetic minerals became altered, the NRM-TRM lines were straight and well constrained, and geologically reasonable paleointensities were obtained. Somewhat oxidized material (z ≅ 0.2) also provided apparently valid paleointensities: values were similar to those from fresh specimens cut from the same fragments, although only half or less of the NRM could be thermally demagnetized before alteration of the magnetic minerals. More highly oxidized material (z ≅ 0.6) gave a result seriously in error: the paleointensity value is much too low, because of continuous disproportionation of titanomaghemite during the heating experiments and because seafloor weathering had decreased the NRM intensity. From limited published data, the extent of oxidation of titanomagnetite to cation-deficient titanomaghemite in pillow basalt exposed on the seafloor appears to be approximately z = 0.3 at 0.2–0.5 m.y., z = 0.6 at 1 m.y., and z = 0.8–1.0 at 10–100 m.y. This implies that valid paleointensities can be obtained from exposed submarine basalt, but only if the basalt is younger than a few hundred thousand years. Equally good paleointensities were obtained from strongly magnetized (L-type) basalt and moderately magnetized (L-type) basalt. 
The degree of low‐temperature oxidation of cubic iron‐titanium oxides in submarine basalts correlates very well with the diminution of amplitude of linear magnetic anomalies when both are compared as a function of crustal age. Systematic radial variation of Curie temperature is a primary feature of submarine basalt pillows, so that estimation of the oxidation parameter z from the Curie temperature alone by assuming a value for x can be in error. Reasonably precise and self‐consistent values of both x and z can be obtained if both the cubic cell dimension and the Curie temperature of the cubic oxide are measured.

  2. Modal Damping Ratio and Optimal Elastic Moduli of Human Body Segments for Anthropometric Vibratory Model of Standing Subjects.

    PubMed

    Gupta, Manoj; Gupta, T C

    2017-10-01

    The present study aims to accurately estimate the inertial, physical, and dynamic parameters of a human body vibratory model that is consistent with the physical structure of the human body and also replicates its dynamic response. A 13 degree-of-freedom (DOF) lumped-parameter model for a standing person subjected to support excitation is established. Model parameters are determined from anthropometric measurements, uniform mass density, the elastic moduli of individual body segments, and modal damping ratios. The elastic moduli of the ellipsoidal body segments are initially estimated by comparing the stiffness of the spring elements, calculated from a detailed scheme, with values available in the literature. These values are further optimized by minimizing the difference between the theoretically calculated platform-to-head transmissibility ratio (TR) and experimental measurements. Modal damping ratios are estimated from the experimental transmissibility response using the two dominant peaks in the frequency range of 0-25 Hz. From the comparison between the dynamic response determined from modal analysis and experimental results, a set of elastic moduli for different segments of the human body and a novel scheme to determine modal damping ratios from TR plots are established. The acceptable match between transmissibility values calculated from the vibratory model and experimental measurements for a 50th-percentile U.S. male, except at very low frequencies, validates the human body model developed. Also, the reasonable agreement obtained between the theoretical response curve and the experimental response envelope for an average Indian male affirms the technique used for constructing the vibratory model of a standing person. The present work thus offers an effective technique for constructing a subject-specific damped vibratory model from physical measurements.
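
    The transmissibility ratio (TR) at the heart of the parameter fitting can be illustrated with the classical single-DOF base-excitation formula; the paper's 13-DOF model is far richer than this sketch, but the role of the damping ratio in setting peak height is the same.

```python
import math

# Single-DOF base-excitation transmissibility:
#   TR(r) = sqrt((1 + (2*zeta*r)^2) / ((1 - r^2)^2 + (2*zeta*r)^2)),
# with frequency ratio r = f / f_n and damping ratio zeta. Illustrative
# values: one resonance at 5 Hz scanned over the paper's 0-25 Hz range.
def transmissibility(f, f_n, zeta):
    r = f / f_n
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r * r) ** 2 + (2.0 * zeta * r) ** 2
    return math.sqrt(num / den)

peak = max(transmissibility(f / 10.0, 5.0, 0.3) for f in range(1, 251))
```

    Fitting damping ratios from TR plots exploits exactly this relationship: lower damping gives a taller, narrower resonant peak, so the measured peak heights constrain the modal damping ratios.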

  3. Use of DandD for dose assessment under NRC's radiological criteria for license termination rule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallegos, D.P.; Brown, T.J.; Davis, P.A.

    The Decontamination and Decommissioning (DandD) software package has been developed by Sandia National Laboratories for the Nuclear Regulatory Commission (NRC), specifically to provide a user-friendly analytical tool to address the dose criteria contained in NRC's Radiological Criteria for License Termination rule (10 CFR Part 20 Subpart E; NRC, 1997). Specifically, DandD embodies the NRC's screening methodology for converting residual radioactivity contamination levels at a licensee's site to annual dose, in a manner consistent with both 10 CFR Part 20 and the corresponding implementation guidance developed by the NRC. The screening methodology employs reasonably conservative scenarios, fate and transport models, and default parameter values that have been developed to allow the NRC to quantitatively estimate the risk of releasing a site given only information about the level of contamination. A licensee therefore has the option of specifying only the level of contamination and running the code with the default parameter values or, where site-specific information is available, of altering the appropriate parameter values and then calculating dose. DandD can evaluate dose for four different scenarios: residential, building occupancy, building renovation, or drinking water. The screening methodology and DandD are part of a larger decision framework that allows and encourages licensees to optimize decisions on the choice of alternative actions at their site, including the collection of additional data and information. This decision framework is integrated into and documented in the NRC's technical guidance for decommissioning.

  4. Flight test evaluation of predicted light aircraft drag, performance, and stability

    NASA Technical Reports Server (NTRS)

    Smetana, F. O.; Fox, S. R.

    1979-01-01

    A technique was developed which permits simultaneous extraction of complete lift, drag, and thrust power curves from time histories of a single aircraft maneuver, such as a pullup (from V sub max to V sub stall) and pushover (to V sub max for level flight). The technique is an extension, to non-linear equations of motion, of the parameter identification methods of Iliff and Taylor, and includes provisions for internal data compatibility improvement as well. The technique was shown to be capable of correcting random errors in the most sensitive data channel and yielding highly accurate results. It was applied to flight data taken on the ATLIT aircraft. The drag and power values obtained from the initial least-squares estimate are about 15% less than the 'true' values; if one takes into account the rather dirty wing and fuselage existing at the time of the tests, however, the predictions are reasonably accurate. The steady-state lift measurements agree well with the extracted values only for small values of alpha. The predicted value of the lift at alpha = 0 is about 33% below that found in steady-state tests, while the predicted lift slope is 13% below the steady-state value.

  5. Managed care's Achilles heel: ethical immaturity.

    PubMed

    Thompson, R E

    2000-01-01

    How can physician executives determine the prevailing values in the managed care arena? What are the consequences when values statements are ignored during decision-making? These questions can be answered using a process called ethical reasoning, which is different and more productive than making moral judgments, such as "is managed care good or bad?" Failing to include ethical reasoning in executive offices and boardrooms is a form of ethical immaturity. It fuels public suspicion that managed care's goal may be maximizing profit at all costs, as opposed to seeking reasonable profit through provision of dependable and accessible health care services. One outcome of ethical reasoning is rediscovering the basic truth that running one's business on competitive rather than altruistic principles is ethical whenever greater efficiencies and economic growth enlarge the size of the pie for everyone. Reasonable self-interest is a perfectly acceptable reason to act ethically. The time has come for physician executives to develop a basic understanding of pragmatic ethics, and to appreciate the value of adding ethical reasoning to the decision-making process.

  6. Empirical Bayes estimation of proportions with application to cowbird parasitism rates

    USGS Publications Warehouse

    Link, W.A.; Hahn, D.C.

    1996-01-01

    Bayesian models provide a structure for studying collections of parameters such as are considered in the investigation of communities, ecosystems, and landscapes. This structure allows for improved estimation of individual parameters, by considering them in the context of a group of related parameters. Individual estimates are differentially adjusted toward an overall mean, with the magnitude of their adjustment based on their precision. Consequently, Bayesian estimation allows for a more credible identification of extreme values in a collection of estimates. Bayesian models regard individual parameters as values sampled from a specified probability distribution, called a prior. The requirement that the prior be known is often regarded as an unattractive feature of Bayesian analysis and may be the reason why Bayesian analyses are not frequently applied in ecological studies. Empirical Bayes methods provide an alternative approach that incorporates the structural advantages of Bayesian models while requiring a less stringent specification of prior knowledge. Rather than requiring that the prior distribution be known, empirical Bayes methods require only that it be in a certain family of distributions, indexed by hyperparameters that can be estimated from the available data. This structure is of interest per se, in addition to its value in allowing for improved estimation of individual parameters; for example, hypotheses regarding the existence of distinct subgroups in a collection of parameters can be considered under the empirical Bayes framework by allowing the hyperparameters to vary among subgroups. Though empirical Bayes methods have been applied in a variety of contexts, they have received little attention in the ecological literature. We describe the empirical Bayes approach in application to estimation of proportions, using data obtained in a community-wide study of cowbird parasitism rates for illustration. 
Since observed proportions based on small sample sizes are heavily adjusted toward the mean, extreme values among empirical Bayes estimates identify those species for which there is the greatest evidence of extreme parasitism rates. Applying a subgroup analysis to our data on cowbird parasitism rates, we conclude that parasitism rates for Neotropical Migrants as a group are no greater than those of Resident/Short-distance Migrant species in this forest community. Our data and analyses demonstrate that the parasitism rates for certain Neotropical Migrant species are remarkably low (Wood Thrush and Rose-breasted Grosbeak) while those for others are remarkably high (Ovenbird and Red-eyed Vireo).
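
    The shrinkage described above can be sketched with a beta-binomial empirical Bayes estimator whose hyperparameters are fit by the method of moments. The parasitism counts below are invented illustrations, not the study's data.

```python
# Empirical Bayes shrinkage for proportions under a beta-binomial
# model. Hyperparameters (a, b) of the beta prior are estimated from
# the data by the method of moments; each species' rate is then the
# posterior mean (y_i + a) / (n_i + a + b), which pulls small-sample
# estimates toward the overall mean.
def eb_estimates(counts, trials):
    p = [y / n for y, n in zip(counts, trials)]
    mean = sum(p) / len(p)
    var = sum((x - mean) ** 2 for x in p) / (len(p) - 1)
    # Method-of-moments beta fit (requires var < mean * (1 - mean)).
    common = mean * (1.0 - mean) / var - 1.0
    a, b = mean * common, (1.0 - mean) * common
    return [(y + a) / (n + a + b) for y, n in zip(counts, trials)]

# Nests parasitized / nests observed for five hypothetical species.
counts = [1, 9, 4, 0, 12]
trials = [2, 30, 10, 6, 25]
est = eb_estimates(counts, trials)
```

    Note how the species observed in only 2 nests is pulled strongly toward the community mean, while the species with 25 nests barely moves: precisely the differential adjustment the abstract describes, and the reason extreme empirical Bayes estimates are more credible flags of genuinely extreme parasitism rates.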

  7. Effect of damping on the laser induced ultrafast switching in rare earth-transition metal alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oniciuc, Eugen; Stoleriu, Laurentiu; Cimpoesu, Dorin

    2014-06-02

    In this paper, we present simulations of thermally induced magnetic switching in ferrimagnetic systems performed with a Landau-Lifshitz-Bloch (LLB) equation for damping constants in a wide range of values. We have systematically studied the GdFeCo ferrimagnet with various concentrations of Gd and, for some parameter values, compared the LLB results with atomistic simulations. The agreement is remarkably good, which shows that the dynamics described by the ferrimagnetic LLB equation is a reasonable approximation of this complex physical phenomenon. Importantly, we show that the LLB equation is also able to describe the intermediate formation of a ferromagnetic state, which seems to be essential for understanding laser-induced ultrafast switching. The study reveals the fundamental role of damping during the switching process.

  8. Preparation and Characterization of Sulfonic Acid Functionalized Silica and Its Application for the Esterification of Ethanol and Maleic Acid

    NASA Astrophysics Data System (ADS)

    Sirsam, Rajkumar; Usmani, Ghayas

    2016-04-01

    The surface of commercially available silica gel (60-200 mesh) was modified with sulfonic acid through surface activation, grafting of 3-mercaptopropyltrimethoxysilane, and oxidation and acidification of the resulting 3-mercaptopropylsilica. Sulfonic acid functionalization of silica (SAFS) was confirmed by Fourier transform infrared (FTIR) spectroscopy and thermogravimetric analysis, and acid-base titration was used to estimate the cation exchange capacity of the SAFS. The catalytic activity of SAFS was assessed for the esterification of ethanol with maleic acid. The effects of different process parameters, viz. molar ratio, catalyst loading, speed of agitation, and temperature, were studied and optimized by a Box-Behnken design (BBD) of response surface methodology (RSM). The quadratic model developed by BBD-RSM showed reasonable agreement between experimental and predicted values, with a correlation coefficient R2 = 0.9504.

  9. Numerical development of a new correlation between biaxial fracture strain and material fracture toughness for small punch test

    NASA Astrophysics Data System (ADS)

    Kumar, Pradeep; Dutta, B. K.; Chattopadhyay, J.

    2017-04-01

    Miniaturized specimens are used to determine mechanical properties of materials, such as yield stress, ultimate stress, and fracture toughness. Such specimens are essential whenever only a limited quantity of material is available for testing, as with aged or irradiated materials. The miniaturized small punch test (SPT) is a widely used technique for determining changes in the mechanical properties of materials. Various empirical correlations have been proposed in the literature to determine the fracture toughness (JIC) with this technique: the biaxial fracture strain is determined from SPT tests, and this parameter is then used to determine JIC via the available empirical correlations. These correlations between JIC and biaxial fracture strain are based on experimental data acquired for a large number of materials, and the many published correlations are generally not in agreement with each other. In the present work, an attempt has been made to determine the correlation between biaxial fracture strain (εqf) and crack-initiation toughness (Ji) numerically. About one hundred materials were digitally generated by varying the yield stress, ultimate stress, hardening coefficient, and Gurson parameters. Each such material was then used to analyze an SPT specimen and a standard TPB specimen; analysis of the SPT specimen yielded the biaxial fracture strain (εqf), and analysis of the TPB specimen yielded the value of Ji. A graph of these two parameters was then plotted for all the digitally generated materials, and the best-fit straight line determines the correlation. It was also observed that Ji can vary within a limit for the same value of εqf, and this variation in Ji was likewise ascertained from the graph. Experimental SPT data acquired earlier for three materials were then used to obtain Ji from the newly developed correlation. 
    A reasonable comparison of the calculated Ji with the values quoted in the literature confirmed the usefulness of the correlation.
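
    The correlation-fitting step can be sketched as an ordinary least-squares fit on synthetic (εqf, Ji) pairs; the numbers below are invented and do not reproduce the paper's correlation, only the procedure of fitting a best-fit straight line through a scattered cloud of digitally generated materials.

```python
import random

random.seed(3)

# Synthetic stand-ins for the ~100 digitally generated materials:
# biaxial fracture strain and a noisy linear crack-initiation toughness.
eps_qf = [0.2 + 0.006 * i for i in range(100)]
j_i = [50.0 + 400.0 * e + random.gauss(0.0, 20.0) for e in eps_qf]  # kJ/m^2

# Ordinary least squares for the best-fit line Ji = slope*eps_qf + intercept.
n = len(eps_qf)
mx = sum(eps_qf) / n
my = sum(j_i) / n
slope = sum((x - mx) * (y - my) for x, y in zip(eps_qf, j_i)) \
        / sum((x - mx) ** 2 for x in eps_qf)
intercept = my - slope * mx
```

    The residual scatter about this line is the numerical analogue of the paper's observed band of Ji values at a fixed εqf.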

  10. Determining H0 with Bayesian hyper-parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardona, Wilmar; Kunz, Martin; Pettorino, Valeria, E-mail: wilmar.cardona@unige.ch, E-mail: Martin.Kunz@unige.ch, E-mail: valeria.pettorino@thphys.uni-heidelberg.de

    We re-analyse recent Cepheid data to estimate the Hubble parameter H0 by using Bayesian hyper-parameters (HPs). We consider the two data sets from Riess et al. 2011 and 2016 (labelled R11 and R16, with R11 containing less than half the data of R16) and include the available anchor distances (megamaser system NGC4258, detached eclipsing binary distances to LMC and M31, and MW Cepheids with parallaxes), use a weak metallicity prior, and apply no period cut for Cepheids. We find that part of the R11 data is down-weighted by the HPs, but that R16 is mostly consistent with expectations for a Gaussian distribution, meaning that there is no need to down-weight the R16 data set. For R16, we find a value of H0 = 73.75 ± 2.11 km s^-1 Mpc^-1 if we use HPs for all data points (including Cepheid stars, supernovae type Ia, and the available anchor distances), which is about 2.6 σ larger than the Planck 2015 value of H0 = 67.81 ± 0.92 km s^-1 Mpc^-1 and about 3.1 σ larger than the updated Planck 2016 value of 66.93 ± 0.62 km s^-1 Mpc^-1. If we perform a standard χ² analysis as in R16, we find H0 = 73.46 ± 1.40 (stat) km s^-1 Mpc^-1. We test the effect of different assumptions and find that the choice of anchor distances affects the final value significantly: if we exclude the Milky Way from the anchors, the value of H0 decreases. We find, however, no evident reason to exclude the MW data. The HP method used here avoids subjective rejection criteria for outliers and offers a way to test data sets for unknown systematics.

  11. Evaluation of the Orogenic Belt Hypothesis for the Formation of Thaumasia, Mars

    NASA Astrophysics Data System (ADS)

    Nahm, A. L.; Schultz, R. A.

    2008-12-01

The Thaumasia Highlands (TH) and Solis Planum are two of the best-known examples of compressional tectonics on Mars. The TH is a region of high topography located in the southern portion of the Tharsis Province, Mars. Solis Planum is located in eastern Thaumasia. Two hypotheses for the formation of this region have been suggested: sliding on a weak horizon or thrusting analogous to orogenic wedges on Earth. Both hypotheses require a shallowly dipping to sub-horizontal weak horizon below Thaumasia. Wrinkle ridges in Solis Planum are also inferred to sole into a décollement. If Thaumasia formed by thrusting related to sliding on a décollement, then certain conditions must be met as in critical taper wedge mechanics (CTWM) theory. If the angle between the surface slope and the basal décollement is less than predicted by the critical taper equation, the 'subcritical' wedge will deform internally until critical taper is achieved. Once the critical taper has been achieved, internal deformation ceases and the wedge will slide along its base. Formation of orogenic belts on Earth (such as the Central Mountains in Taiwan) can be described using CTWM. This method is applied here to the Thaumasia region on Mars. The surface slope (alpha) was measured in three locations: Syria Planum-Thaumasia margin, Solis Planum, and the TH. Topographic slopes were compared to the results from the critical taper equation. Because the dip of the basal décollement (beta) cannot be measured directly as on Earth, the dip angle was varied from 0 to 10 degrees; these values span the range of likely values based on terrestrial wedges. Pore fluid pressure (lambda) was varied between 0 (dry) and 0.9 (overpressured); these values span the full range of this important unknown parameter. Material properties, such as the coefficients of internal friction and of the basal décollement, were varied using reasonable values.
Preliminary results show that for both reasonable (such as lambda = 0, mu b = 0.85, beta = 0 deg) and extreme (such as lambda = 0.9, mu b = 0.1, beta greater than 0 deg) values of the parameters for Mars, the predicted critical taper angle was typically lower than the measured slope, rendering the orogenic belt hypothesis for the formation of the TH invalid. Comparable analysis of Solis Planum shows it also lacks a décollement.

  12. Mapping Surface Cover Parameters Using Aggregation Rules and Remotely Sensed Cover Classes. Version 1.9

    NASA Technical Reports Server (NTRS)

    Arain, Altaf M.; Shuttleworth, W. James; Yang, Z-Liang; Michaud, Jene; Dolman, Johannes

    1997-01-01

A coupled model, which combines the Biosphere-Atmosphere Transfer Scheme (BATS) with an advanced atmospheric boundary-layer model, was used to validate hypothetical aggregation rules for BATS-specific surface cover parameters. The model was initialized and tested with observations from the Anglo-Brazilian Amazonian Climate Observational Study and used to simulate surface fluxes for rain forest and pasture mixes at a site near Manaus in Brazil. The aggregation rules are shown to estimate parameters which give area-average surface fluxes similar to those calculated with explicit representation of forest and pasture patches for a range of meteorological and surface conditions relevant to this site, but the agreement deteriorates somewhat when there are large patch-to-patch differences in soil moisture. The aggregation rules, validated as above, were then applied to a remotely sensed 1-km land cover data set to obtain grid-average values of BATS vegetation parameters for 2.8 deg x 2.8 deg and 1 deg x 1 deg grids within the conterminous United States. There are significant differences in key vegetation parameters (aerodynamic roughness length, albedo, leaf area index, and stomatal resistance) when aggregate parameters are compared to parameters for the single, dominant cover within the grid. However, the surface energy fluxes calculated by stand-alone BATS with the 2-year forcing data from the International Satellite Land Surface Climatology Project (ISLSCP) CD-ROM were reasonably similar using aggregate-vegetation parameters and dominant-cover parameters, but there were some significant differences, particularly in the western USA.
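The aggregation rules themselves are not reproduced in the abstract; one common convention for this kind of patch-to-grid aggregation, shown purely as a hypothetical sketch, is a linear area-weighted mean for albedo and a logarithmic mean for roughness length:

```python
import math

def aggregate(patches):
    """Area-weighted aggregation of surface-cover parameters.
    `patches` is a list of (fraction, albedo, z0) tuples whose fractions sum to 1.
    Albedo averages linearly; roughness length z0 is averaged in log space.
    This is one plausible convention; the paper's actual rules may differ."""
    albedo = sum(f * a for f, a, _ in patches)
    z0 = math.exp(sum(f * math.log(z) for f, _, z in patches))
    return albedo, z0

# Hypothetical grid cell: 70% forest (albedo 0.12, z0 2.0 m),
# 30% pasture (albedo 0.20, z0 0.05 m)
alb, z0 = aggregate([(0.7, 0.12, 2.0), (0.3, 0.20, 0.05)])
```

Note that the log-space mean pulls the aggregate roughness well below the forest value, which is the qualitative behaviour such rules are designed to capture.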

  13. Kalman filter estimation of human pilot-model parameters

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Roland, V. R.

    1975-01-01

    The parameters of a human pilot-model transfer function are estimated by applying the extended Kalman filter to the corresponding retarded differential-difference equations in the time domain. Use of computer-generated data indicates that most of the parameters, including the implicit time delay, may be reasonably estimated in this way. When applied to two sets of experimental data obtained from a closed-loop tracking task performed by a human, the Kalman filter generated diverging residuals for one of the measurement types, apparently because of model assumption errors. Application of a modified adaptive technique was found to overcome the divergence and to produce reasonable estimates of most of the parameters.
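The core idea, estimating an unknown model parameter by treating it as a state of a Kalman filter, can be illustrated on a toy scalar problem (the pilot model itself is a retarded differential-difference system far beyond this sketch; all values below are made up):

```python
import math, random

random.seed(0)
k_true = 2.5           # unknown gain to be recovered
k_est, P = 0.0, 10.0   # initial parameter estimate and its variance
R = 0.04               # measurement-noise variance

for t in range(200):
    u = math.sin(0.1 * t)                    # known input signal
    y = k_true * u + random.gauss(0, 0.2)    # noisy measurement y = k*u + v
    # Kalman update treating k as a constant state (H = u, no process noise)
    S = u * u * P + R                        # innovation variance
    K = P * u / S                            # Kalman gain
    k_est += K * (y - k_est * u)             # state (parameter) update
    P -= K * u * P                           # covariance update
```

After 200 samples the estimate converges close to the true gain, with the posterior variance P indicating the remaining uncertainty.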

  14. Design and construction of miniature artificial ecosystem based on dynamic response optimization

    NASA Astrophysics Data System (ADS)

    Hu, Dawei; Liu, Hong; Tong, Ling; Li, Ming; Hu, Enzhu

The miniature artificial ecosystem (MAES) is a combination of man, silkworm, salad and microalgae to partially regenerate O2, sanitary water and food, and simultaneously dispose of CO2 and wastes; it therefore has a fundamental life support function. In order to enhance the safety and reliability of MAES and eliminate the influences of internal variations and external disturbances, it was necessary to configure MAES as a closed-loop control system, and it could be considered as a prototype for future bioregenerative life support systems. However, MAES is a complex system possessing large numbers of parameters, intricate nonlinearities, time-varying factors as well as uncertainties, hence it is difficult to perfectly design and construct a prototype merely by conducting experiments by trial and error. Our research presents an effective way to resolve the preceding problem by use of dynamic response optimization. Firstly, the mathematical model of MAES, in the form of first-order nonlinear ordinary differential equations with parameters, was developed based on relevant mechanisms and experimental data; secondly, a simulation model of MAES was derived on the platform of MATLAB/Simulink to perform model validation and further digital simulations; thirdly, reference trajectories of the desired dynamic response of system outputs were specified according to prescribed requirements; and finally, optimization of initial values, tuned parameters and independent parameters was carried out using the genetic algorithm and the advanced direct search method, along with parallel computing, through computer simulations. The result showed that all parameters and configurations of MAES were determined after a series of computer experiments, and its transient response performance and steady-state characteristics closely matched the reference curves.
Since the prototype is a physical system that represents the mathematical model with reasonable accuracy, the process of designing and constructing a prototype of MAES is the reverse of mathematical modeling and must be guided by these computer simulation results.

  15. Regionalizing nonparametric models of precipitation amounts on different temporal scales

    NASA Astrophysics Data System (ADS)

    Mosthaf, Tobias; Bárdossy, András

    2017-05-01

Parametric distribution functions are commonly used to model precipitation amounts corresponding to different durations. The precipitation amounts themselves are crucial for stochastic rainfall generators and weather generators. Nonparametric kernel density estimates (KDEs) offer a more flexible way to model precipitation amounts. As their name indicates, however, these models have no parameters that can easily be regionalized to run rainfall generators at ungauged as well as gauged locations. To overcome this deficiency, we present a new interpolation scheme for nonparametric models and evaluate it for different temporal resolutions ranging from hourly to monthly. During the evaluation, the nonparametric methods are compared to commonly used parametric models like the two-parameter gamma and the mixed-exponential distribution. As water volume is considered to be an essential parameter for applications like flood modeling, a Lorenz-curve-based criterion is also introduced. To add value to the estimation of data at sub-daily resolutions, we incorporated the plentiful daily measurements in the interpolation scheme, and this idea was evaluated. The study region is the federal state of Baden-Württemberg in the southwest of Germany with more than 500 rain gauges. The validation results show that the newly proposed nonparametric interpolation scheme provides reasonable results and that the incorporation of daily values in the regionalization of sub-daily models is very beneficial.
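A minimal sketch of the nonparametric side: a Gaussian kernel density estimate of precipitation amounts with a rule-of-thumb bandwidth (synthetic data; the paper's kernels and bandwidth selection may well differ):

```python
import math, random

random.seed(2)
# Synthetic wet-day precipitation amounts (mm); gamma-distributed stand-in data
amounts = [random.gammavariate(0.8, 6.0) for _ in range(500)]

def kde_pdf(x, data, bw=None):
    """Gaussian kernel density estimate; bandwidth defaults to the
    1.06 * sd * n^(-1/5) rule of thumb."""
    n = len(data)
    if bw is None:
        mean = sum(data) / n
        sd = (sum((d - mean) ** 2 for d in data) / (n - 1)) ** 0.5
        bw = 1.06 * sd * n ** (-1 / 5)
    return sum(math.exp(-0.5 * ((x - d) / bw) ** 2)
               for d in data) / (n * bw * math.sqrt(2 * math.pi))

density = kde_pdf(5.0, amounts)   # estimated pdf at 5 mm
```

It is this estimated density, rather than a small set of fitted parameters, that must be carried to ungauged locations, which is exactly the regionalization difficulty the abstract describes.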

  16. Determination of ionospheric electron density profiles from satellite UV (Ultraviolet) emission measurements, fiscal year 1984

    NASA Astrophysics Data System (ADS)

    Daniell, R. E.; Strickland, D. J.; Decker, D. T.; Jasperse, J. R.; Carlson, H. C., Jr.

    1985-04-01

    The possible use of satellite ultraviolet measurements to deduce the ionospheric electron density profile (EDP) on a global basis is discussed. During 1984 comparisons were continued between the hybrid daytime ionospheric model and the experimental observations. These comparison studies indicate that: (1) the essential features of the EDP and certain UV emissions can be modelled; (2) the models are sufficiently sensitive to input parameters to yield poor agreement with observations when typical input values are used; (3) reasonable adjustments of the parameters can produce excellent agreement between theory and data for either EDP or airglow but not both; and (4) the qualitative understanding of the relationship between two input parameters (solar flux and neutral densities) and the model EDP and airglow features has been verified. The development of a hybrid dynamic model for the nighttime midlatitude ionosphere has been initiated. This model is similar to the daytime hybrid model, but uses the sunset EDP as an initial value and calculates the EDP as a function of time through the night. In addition, a semiempirical model has been developed, based on the assumption that the nighttime EDP is always well described by a modified Chapman function. This model has great simplicity and allows the EDP to be inferred in a straightforward manner from optical observations. Comparisons with data are difficult, however, because of the low intensity of the nightglow.
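The abstract does not specify the modification; the classic Chapman-alpha layer that such a semiempirical model starts from can be sketched as:

```python
import math

def chapman(z, nm, hm, H):
    """Electron density of a classic Chapman-alpha layer:
    peak density nm at height hm, scale height H (heights in km)."""
    u = (z - hm) / H
    return nm * math.exp(0.5 * (1.0 - u - math.exp(-u)))

# Hypothetical nighttime F-layer: peak 3e11 m^-3 at 300 km, 50 km scale height
ne_peak = chapman(300.0, 3e11, 300.0, 50.0)   # equals nm at the peak
```

With the profile shape fixed, only the peak density, peak height, and scale height remain to be inferred from the optical observations, which is what makes the approach straightforward.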

  17. Environmental quality of the operating theaters in Campania Region: long lasting monitoring results.

    PubMed

    Triassi, M; Novi, C; Nardone, A; Russo, I; Montuori, P

    2015-01-01

The health risk level in the operating theaters is directly correlated to the safety level offered by the healthcare facilities. This is the reason why the national Authorities released several regulations in order to better monitor the environmental conditions of the operating theaters, to prevent occupational injuries and disease and to optimize working conditions. For the monitoring of environmental quality of the operating theaters, the following parameters are considered: quantity of supplied gases, anesthetics concentration, operating theatre volume measurement, air change rate, air conditioning system and air filtration. The objective is to minimize the risks in the operating theaters and to provide optimal environmental working conditions. This paper reports the environmental conditions of operating rooms monitored over several years in the public hospitals of the Campania Region. An investigation of the environmental conditions of 162 operating theaters in the Campania Region from January 2012 till July 2014 was conducted. Monitoring and analysis of physical and chemical parameters was done. The analysis of the results has been made considering specific standards suggested by national and international regulations. The study showed that 75% of the operating theaters presented normal values for microclimatic monitoring, while 25% of the operating theaters had at least one parameter outside the limits. The monitoring of the anesthetic gases showed that 9% of nitrous oxide measurements and 4% of halogenated gas measurements were not within the normal values.

  18. [Neurotic disorders: clinical and biochemical comparison].

    PubMed

    Riazantseva, N V; Novitskiĭ, V V

    2003-05-01

Parameters of lipid peroxidation (LP) and of the lipid composition of erythrocyte membranes were examined in 22 patients with neurotic disorders (adaptation disorders and neurasthenia); 45 healthy donors formed the control group. Deposition of cholesterol and of lysophosphatidylcholine, as well as a reduced mean level of phosphatidylethanolamine, were observed in the erythrocyte membranes of patients with neuroses. An increased mean content of malonic dialdehyde and of diene conjugates in the erythrocyte membrane and reduced mean values of the activity of the antioxidant enzyme catalase were detected. However, cluster analysis made it possible to establish an intensification of LP processes only in patients with a disease duration below three months. It cannot be ruled out that the reason for the detected heterogeneity of the changed parameters characterizing the structural-and-metabolic status of erythrocytes in patients with neurotic disorders is related to the differing nature of the adaptation abilities of the body under the influence of various stress factors.

  19. Interfacial Force Field Characterization in a Constrained Vapor Bubble Thermosyphon

    NASA Technical Reports Server (NTRS)

    DasGupta, Sunando; Plawsky, Joel L.; Wayner, Peter C., Jr.

    1995-01-01

Isothermal profiles of the extended meniscus in a quartz cuvette were measured in the earth's gravitational field using an image-analyzing interferometer based on computer-enhanced video microscopy of the naturally occurring interference fringes. These profiles are a function of the stress field. Experimentally, the augmented Young-Laplace equation is an excellent model for the force field at the solid-liquid-vapor interfaces for heptane and pentane menisci on quartz and tetradecane on SFL6. The effects of the refractive indices of the solid and liquid on the measurement techniques were demonstrated. Experimentally obtained values of the disjoining pressure and dispersion constants were compared to those predicted from the Dzyaloshinskii-Lifshitz-Pitaevskii theory for an ideal surface, and reasonable agreement was obtained. A newly introduced parameter gives a quantitative measurement of the closeness of the system to equilibrium. The nonequilibrium behavior of this parameter is also presented.

  20. The Pioneer 10 plasma analyzer results at Jupiter

    NASA Technical Reports Server (NTRS)

    Wolfe, J. H.

    1975-01-01

    Results are reported for the Pioneer 10 plasma-analyzer experiment at Jupiter. The analyzer system consisted of dual 90-deg quadrispherical electrostatic analyzers, multiple charged-particle detectors, and attendant electronics; it was capable of determining the incident plasma-distribution parameters over the energy range from 100 to 18,000 eV for protons and from approximately 1 to 500 eV for electrons. Data are presented on the interaction between the solar wind and the Jovian magnetosphere, the interplanetary ion flux, observations of the magnetosheath plasma, and traversals of the bow shock and magnetopause. Values are estimated for the proton isotropic temperature, number density, and bulk velocity within the magnetosheath flow field as well as for the beta parameter, ion number density, and magnetic-energy density of the magnetospheric plasma. It is argued that Jupiter has a reasonably thick magnetosphere somewhat similar to earth's except for the vastly different scale sizes involved.

1. Technical Review of SRS Dose Reconstruction Methods Used By CDC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Simpkins, Ali A.

    2005-07-20

At the request of the Centers for Disease Control and Prevention (CDC), a subcontractor, Advanced Technologies and Laboratories International, Inc. (ATL), issued a draft report estimating offsite dose as a result of Savannah River Site (SRS) operations for the period 1954-1992 in support of Phase III of the SRS Dose Reconstruction Project. The doses reported by ATL differed from those previously estimated by SRS dose modelers for a variety of reasons, but primarily because (1) ATL used different source terms, (2) ATL considered trespasser/poacher scenarios and (3) ATL did not consistently use site-specific parameters or correct usage parameters. The receptors with the highest dose from atmospheric and liquid pathways were within about a factor of four greater than dose values previously reported by SRS. A complete set of technical comments has also been included.

  2. On two diffusion neuronal models with multiplicative noise: The mean first-passage time properties

    NASA Astrophysics Data System (ADS)

    D'Onofrio, G.; Lansky, P.; Pirozzi, E.

    2018-04-01

    Two diffusion processes with multiplicative noise, able to model the changes in the neuronal membrane depolarization between two consecutive spikes of a single neuron, are considered and compared. The processes have the same deterministic part but different stochastic components. The differences in the state-dependent variabilities, their asymptotic distributions, and the properties of the first-passage time across a constant threshold are investigated. Closed form expressions for the mean of the first-passage time of both processes are derived and applied to determine the role played by the parameters involved in the model. It is shown that for some values of the input parameters, the higher variability, given by the second moment, does not imply shorter mean first-passage time. The reason for that can be found in the complete shape of the stationary distribution of the two processes. Applications outside neuroscience are also mentioned.
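As a sketch of the quantity studied (the paper derives closed-form means; here, a generic Feller-type multiplicative-noise diffusion is simply simulated instead, with arbitrary parameter values):

```python
import math, random

random.seed(3)

def first_passage_time(tau=1.0, mu=1.5, sigma=0.3, s=1.0, dt=0.005, t_max=50.0):
    """One Euler-Maruyama path of dX = (-X/tau + mu) dt + sigma*sqrt(X) dW,
    a Feller-type multiplicative-noise neuronal model; returns the time at
    which X first crosses the firing threshold s."""
    x, t = 0.0, 0.0
    while t < t_max:
        x += (-x / tau + mu) * dt \
             + sigma * math.sqrt(max(x, 0.0) * dt) * random.gauss(0, 1)
        t += dt
        if x >= s:
            return t
    return t_max  # censored path

mfpt = sum(first_passage_time() for _ in range(300)) / 300
```

Varying `sigma` in such a simulation is one way to probe the abstract's observation that higher state-dependent variability need not shorten the mean first-passage time.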

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volkov, M. S.; Gusev, Yu. P., E-mail: GusevYP@mpei.ru; Monakov, Yu. V.

The insertion of current-limiting reactors into electrical equipment operating at a voltage of 110 and 220 kV produces a change in the parameters of the transient recovery voltages at the contacts of the circuit breakers for disconnecting short circuits, which could be the reason for an increase in the duration of the short circuit, damage to the electrical equipment and losses in the power system. The results of mathematical modeling of the transients caused by tripping of a short circuit in a reactive electric power transmission line are presented, and data are given on the negative effect of a current-limiting reactor on the rate of increase and peak value of the transient recovery voltages. Methods of ensuring the standard requirements imposed on the parameters of the transient recovery voltages when using current-limiting reactors in the high-voltage electrical equipment of power plants and substations are proposed and analyzed.

  4. Assessment of hemoglobin responsiveness to epoetin alfa in patients on hemodialysis using a population pharmacokinetic pharmacodynamic model.

    PubMed

    Wu, Liviawati; Mould, Diane R; Perez Ruixo, Juan Jose; Doshi, Sameer

    2015-10-01

    A population pharmacokinetic pharmacodynamic (PK/PD) model describing the effect of epoetin alfa on hemoglobin (Hb) response in hemodialysis patients was developed. Epoetin alfa pharmacokinetics was described using a linear 2-compartment model. PK parameter estimates were similar to previously reported values. A maturation-structured cytokinetic model consisting of 5 compartments linked in a catenary fashion by first-order cell transfer rates following a zero-order input process described the Hb time course. The PD model described 2 subpopulations, one whose Hb response reflected epoetin alfa dosing and a second whose response was unrelated to epoetin alfa dosing. Parameter estimates from the PK/PD model were physiologically reasonable and consistent with published reports. Numerical and visual predictive checks using data from 2 studies were performed. The PK and PD of epoetin alfa were well described by the model. © 2015, The American College of Clinical Pharmacology.

  5. Reasoning in people with obsessive-compulsive disorder.

    PubMed

Simpson, Jane; Cove, Jennifer; Fineberg, Naomi; Msetfi, Rachel M; Ball, Linden J

    2007-11-01

    The aim of this study was to investigate the inductive and deductive reasoning abilities of people with obsessive-compulsive disorder (OCD). Following previous research, it was predicted that people with OCD would show different abilities on inductive reasoning tasks but similar abilities to controls on deductive reasoning tasks. A two-group comparison was used with both groups matched on a range of demographic variables. Where appropriate, unmatched variables were entered into the analyses as covariates. Twenty-three people with OCD and 25 control participants were assessed on two tasks: an inductive reasoning task (the 20-questions task) and a deductive reasoning task (a syllogistic reasoning task with a content-neutral and content-emotional manipulation). While no group differences emerged on several of the parameters of the inductive reasoning task, the OCD group did differ on one, and arguably the most important, parameter by asking fewer correct direct-hypothesis questions. The syllogistic reasoning task results were analysed using both correct response and conclusion acceptance data. While no main effects of group were evident, significant interactions indicated important differences in the way the OCD group reasoned with content neutral and emotional syllogisms. It was argued that the OCD group's patterns of response on both tasks were characterized by the need for more information, states of uncertainty, and doubt and postponement of a final decision.

  6. Moisture parameters and fungal communities associated with gypsum drywall in buildings.

    PubMed

    Dedesko, Sandra; Siegel, Jeffrey A

    2015-12-08

    Uncontrolled excess moisture in buildings is a common problem that can lead to changes in fungal communities. In buildings, moisture parameters can be classified by location and include assessments of moisture in the air, at a surface, or within a material. These parameters are not equivalent in dynamic indoor environments, which makes moisture-induced fungal growth in buildings a complex occurrence. In order to determine the circumstances that lead to such growth, it is essential to have a thorough understanding of in situ moisture measurement, the influence of building factors on moisture parameters, and the levels of these moisture parameters that lead to indoor fungal growth. Currently, there are disagreements in the literature on this topic. A literature review was conducted specifically on moisture-induced fungal growth on gypsum drywall. This review revealed that there is no consistent measurement approach used to characterize moisture in laboratory and field studies, with relative humidity measurements being most common. Additionally, many studies identify a critical moisture value, below which fungal growth will not occur. The values defined by relative humidity encompassed the largest range, while those defined by moisture content exhibited the highest variation. Critical values defined by equilibrium relative humidity were most consistent, and this is likely due to equilibrium relative humidity being the most relevant moisture parameter to microbial growth, since it is a reasonable measure of moisture available at surfaces, where fungi often proliferate. Several sources concur that surface moisture, particularly liquid water, is the prominent factor influencing microbial changes and that moisture in the air and within a material are of lesser importance. 
However, even if surface moisture is assessed, a single critical moisture level to prevent fungal growth cannot be defined, due to a number of factors, including variations in fungal genera and/or species, temperature, and nutrient availability. Despite these complexities, meaningful measurements can still be made to inform assessments of fungal growth by making localised, long-term, and continuous measurements of surface moisture. Such an approach will capture variations in a material's surface moisture, which could provide insight into a number of conditions that could lead to fungal proliferation.

  7. Low-frequency fluctuations in vertical cavity lasers: Experiments versus Lang-Kobayashi dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torcini, Alessandro; Istituto Nazionale di Fisica Nucleare, Sezione di Firenze, via Sansone 1, 50019 Sesto Fiorentino; Barland, Stephane

    2006-12-15

The limits of applicability of the Lang-Kobayashi (LK) model for a semiconductor laser with optical feedback are analyzed. The model equations, equipped with realistic values of the parameters, are investigated below the solitary laser threshold where low-frequency fluctuations (LFF's) are usually observed. The numerical findings are compared with experimental data obtained for the selected polarization mode from a vertical cavity surface emitting laser (VCSEL) subject to polarization-selective external feedback. The comparison reveals the bounds within which the dynamics of the LK model can be considered realistic. In particular, it clearly demonstrates that the deterministic LK model, for realistic values of the linewidth enhancement factor {alpha}, reproduces the LFF's only as a transient dynamics towards one of the stationary modes with maximal gain. A reasonable reproduction of real data from VCSEL's can be obtained only by considering the noisy LK model or, alternatively, the deterministic LK model for extremely high {alpha} values.

  8. Decreasing Kd uncertainties through the application of thermodynamic sorption models.

    PubMed

    Domènech, Cristina; García, David; Pękala, Marek

    2015-09-15

    Radionuclide retardation processes during transport are expected to play an important role in the safety assessment of subsurface disposal facilities for radioactive waste. The linear distribution coefficient (Kd) is often used to represent radionuclide retention, because analytical solutions to the classic advection-diffusion-retardation equation under simple boundary conditions are readily obtainable, and because numerical implementation of this approach is relatively straightforward. For these reasons, the Kd approach lends itself to probabilistic calculations required by Performance Assessment (PA) calculations. However, it is widely recognised that Kd values derived from laboratory experiments generally have a narrow field of validity, and that the uncertainty of the Kd outside this field increases significantly. Mechanistic multicomponent geochemical simulators can be used to calculate Kd values under a wide range of conditions. This approach is powerful and flexible, but requires expert knowledge on the part of the user. The work presented in this paper aims to develop a simplified approach of estimating Kd values whose level of accuracy would be comparable with those obtained by fully-fledged geochemical simulators. The proposed approach consists of deriving simplified algebraic expressions by combining relevant mass action equations. This approach was applied to three distinct geochemical systems involving surface complexation and ion-exchange processes. Within bounds imposed by model simplifications, the presented approach allows radionuclide Kd values to be estimated as a function of key system-controlling parameters, such as the pH and mineralogy. This approach could be used by PA professionals to assess the impact of key geochemical parameters on the variability of radionuclide Kd values. 
Moreover, the presented approach could be relatively easily implemented in existing codes to represent the influence of temporal and spatial changes in geochemistry on Kd values. Copyright © 2015 Elsevier B.V. All rights reserved.
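As an illustration of the kind of simplified algebraic expression meant here (not one of the paper's actual derivations), a trace-cation Kd under a single homovalent ion-exchange equilibrium reduces to a one-line formula; all numbers below are hypothetical:

```python
def kd_ion_exchange(k_ex, cec, c_na):
    """Trace-cation distribution coefficient from a single ion-exchange
    mass-action law: Kd ~ K_ex * CEC / [Na+], valid for homovalent exchange
    at trace loading. Units: CEC in eq/kg, [Na+] in eq/L -> Kd in L/kg."""
    return k_ex * cec / c_na

# Hypothetical inputs: selectivity 20, CEC 0.1 eq/kg, 0.01 M Na+ porewater
kd = kd_ion_exchange(20.0, 0.1, 0.01)   # ~200 L/kg
```

The key system-controlling parameter here is the competing-cation concentration: doubling [Na+] halves the Kd, exactly the kind of sensitivity a PA analyst would want to expose.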

  9. Body Mass Normalization for Ultrasound Measurements of Adolescent Lateral Abdominal Muscle Thickness.

    PubMed

    Linek, Pawel; Saulicz, Edward; Wolny, Tomasz; Myśliwiec, Andrzej

    2017-04-01

    The purpose of this study was to determine the value of the allometric parameter for ultrasound measurements of the thickness of the oblique external (OE), internal (OI), and transversus abdominis (TrA) muscles in the adolescent population. The allometric parameter is the slope of the linear regression line between the log transformed body mass and log transformed muscle size measurement. The study included 321 adolescents between the ages of 10 and 17, consisting of 160 boys and 161 girls. The participants were recruited from local schools and attended regular school classes at normal grade levels. All individuals with no signs of scoliosis (screening with use of a scoliometer), and no surgical procedures performed on the trunk area were included. A real-time ultrasound B-scanner with a linear array transducer was used to obtain images of the lateral abdominal muscles from both sides of the body. The correlation between body mass and the OE muscle was r = 0.69; the OI muscle r = 0.68; and the TrA muscle r = 0.53 (in all cases, P < .0001). The allometric parameter for the OE was 0.88296; the OI 0.718756; and the TrA 0.60986. Using these parameters, no significant correlations were found between body mass and the allometric-scaled thickness of the lateral abdominal muscles. Significant positive correlations exist between body mass and lateral abdominal muscle thickness assessed by ultrasound imaging. Therefore, it is reasonable to advise that the values of the allometric parameters for OE, OI, and TrA obtained in this study should be used in other studies performed on adolescents. © 2016 by the American Institute of Ultrasound in Medicine.
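The normalization described, with the allometric parameter taken as the slope of the log-log regression of muscle thickness on body mass, can be sketched on synthetic data (the exponent 0.61 is chosen only to be near the TrA value reported above):

```python
import math, random

random.seed(4)
# Synthetic adolescent data: thickness ~ c * mass^0.61 with lognormal noise
masses = [random.uniform(30, 80) for _ in range(200)]
thick = [1.2 * m ** 0.61 * math.exp(random.gauss(0, 0.05)) for m in masses]

# Allometric parameter b = slope of log(thickness) regressed on log(mass)
lx = [math.log(m) for m in masses]
ly = [math.log(t) for t in thick]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) \
    / sum((x - mx) ** 2 for x in lx)

# Allometric-scaled thickness: dividing by mass^b removes the body-mass effect
scaled = [t / m ** b for t, m in zip(thick, masses)]
```

By construction, the scaled values are uncorrelated with body mass, mirroring the study's finding that the fitted exponents remove the mass dependence.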

  10. Preliminary Estimation of Kappa Parameter in Croatia

    NASA Astrophysics Data System (ADS)

Stanko, Davor; Markušić, Snježana; Ivančić, Ines; Gazdek, Mario; Gülerce, Zeynep

    2017-12-01

Spectral parameter kappa (κ) is used to describe the decay (“crash syndrome”) of spectral amplitude at high frequencies. The purpose of this research is to estimate the spectral parameter kappa for the first time in Croatia based on small and moderate earthquakes. Recordings of local earthquakes with magnitudes higher than 3, epicentre distances less than 150 km, and focal depths less than 30 km from seismological stations in Croatia are used. The value of kappa was estimated from the acceleration amplitude spectrum of shear waves, using the slope of the high-frequency part where the spectrum starts to decay rapidly towards the noise floor. Kappa models as a function of site and distance were derived from a standard linear regression of the kappa-distance dependence. Site kappa was determined by extrapolating the regression line to zero distance. The preliminary results of site kappa across Croatia are promising. In this research, these results are compared with local site condition parameters for each station, e.g. shear wave velocity in the upper 30 m from geophysical measurements, and with existing global shear wave velocity-site kappa relations. The spatial distribution of the individual kappas is compared with the azimuthal distribution of earthquake epicentres. These results are significant for two reasons: they extend the knowledge of the attenuation of near-surface crustal layers of the Dinarides, and they provide additional information on local earthquake parameters for updating seismic hazard maps of the studied area. Site kappa can be used in the re-creation and re-calibration of attenuation relations for peak horizontal and/or vertical acceleration in the Dinarides area, since information on the local site conditions was not included in previous studies.
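The estimation step described, kappa from the high-frequency slope of the log amplitude spectrum (the standard Anderson-Hough convention A(f) = A0·exp(−π·κ·f)), can be sketched on synthetic data:

```python
import math, random

random.seed(5)
kappa_true = 0.04   # s; a typical site-kappa magnitude
freqs = [10 + 0.5 * i for i in range(40)]   # 10-29.5 Hz, above the corner
amps = [100.0 * math.exp(-math.pi * kappa_true * f)
        * math.exp(random.gauss(0, 0.05)) for f in freqs]

# ln A(f) = ln A0 - pi*kappa*f, so kappa = -slope / pi
ln_a = [math.log(a) for a in amps]
mf = sum(freqs) / len(freqs)
ma = sum(ln_a) / len(ln_a)
slope = sum((f - mf) * (a - ma) for f, a in zip(freqs, ln_a)) \
        / sum((f - mf) ** 2 for f in freqs)
kappa = -slope / math.pi
```

In practice the frequency band must start above the source corner frequency and end below the noise floor, which is why the abstract emphasises the choice of the rapidly decaying high-frequency part of the spectrum.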

  11. Accounting for uncertainty in pedotransfer functions in vulnerability assessments of pesticide leaching to groundwater.

    PubMed

    Stenemo, Fredrik; Jarvis, Nicholas

    2007-09-01

    A simulation tool for site-specific vulnerability assessments of pesticide leaching to groundwater was developed, based on the pesticide fate and transport model MACRO, parameterized using pedotransfer functions and reasonable worst-case parameter values. The effects of uncertainty in the pedotransfer functions on simulation results were examined for 48 combinations of soils, pesticides and application timings, by sampling pedotransfer function regression errors and propagating them through the simulation model in a Monte Carlo analysis. An uncertainty factor, f(u), was derived, defined as the ratio between the concentration simulated with no errors, c(sim), and the 80th percentile concentration for the scenario. The pedotransfer function errors caused a large variation in simulation results, with f(u) ranging from 1.14 to 1440, with a median of 2.8. A non-linear relationship was found between f(u) and c(sim), which can be used to account for parameter uncertainty by correcting the simulated concentration, c(sim), to an estimated 80th percentile value. For fine-textured soils, the predictions were most sensitive to errors in the pedotransfer functions for two parameters regulating macropore flow (the saturated matrix hydraulic conductivity, K(b), and the effective diffusion pathlength, d) and two water retention function parameters (van Genuchten's N and alpha parameters). For coarse-textured soils, the model was also sensitive to errors in the exponent in the degradation water response function and the dispersivity, in addition to K(b), but showed little sensitivity to d. To reduce uncertainty in model predictions, improved pedotransfer functions for K(b), d, N and alpha would therefore be most useful. 2007 Society of Chemical Industry
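    A minimal Monte Carlo sketch of the uncertainty factor described above, using a hypothetical one-line surrogate in place of MACRO (the function `leached_conc`, the error magnitudes, and the parameter stand-ins are invented; f_u is computed here as the 80th-percentile concentration over c_sim, consistent with the reported values above 1):

```python
import numpy as np

# Hypothetical surrogate for the leaching model (stands in for MACRO); the
# two arguments play the roles of pedotransfer-estimated parameters.
def leached_conc(kb, d):
    return 0.5 * np.exp(-0.1 * kb) * d

kb0, d0 = 10.0, 2.0                        # pedotransfer point estimates
c_sim = leached_conc(kb0, d0)              # concentration simulated with no errors

# Sample pedotransfer regression errors and propagate them (Monte Carlo)
rng = np.random.default_rng(1)
kb = kb0 * rng.lognormal(0.0, 0.3, 10_000)
d = d0 * rng.lognormal(0.0, 0.2, 10_000)
c = leached_conc(kb, d)

c80 = np.percentile(c, 80)                 # 80th-percentile concentration
f_u = c80 / c_sim                          # uncertainty factor
c_corrected = c_sim * f_u                  # c_sim corrected to the 80th percentile
```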

  12. Image Processing for Binarization Enhancement via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A. (Inventor)

    2009-01-01

    A technique for enhancing a gray-scale image to improve conversions of the image to binary employs fuzzy reasoning. In the technique, each pixel in the image is analyzed by comparing its gray-scale value, which is indicative of its relative brightness, to the values of the pixels immediately surrounding it. The degree to which each pixel differs in value from its surrounding pixels is employed as the variable in a fuzzy reasoning-based analysis that determines an appropriate amount by which the pixel's value should be adjusted to reduce vagueness and ambiguity in the image and improve retention of information during binarization of the enhanced gray-scale image.
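    The pixel-versus-neighbourhood comparison can be sketched as follows. This is a toy illustration of the general idea only, not the patented algorithm: the tanh membership function and the gain are invented stand-ins for the fuzzy rules.

```python
import numpy as np

def fuzzy_enhance(img, gain=40.0):
    """Toy sketch of fuzzy-reasoning contrast enhancement before binarization.
    The difference between each pixel and the mean of its 8 neighbours drives
    a fuzzy 'brighter/darker than surroundings' membership in [-1, 1], which
    pushes the pixel toward white or black accordingly."""
    img = img.astype(float)
    padded = np.pad(img, 1, mode="edge")
    # Mean of the 3x3 neighbourhood excluding the centre pixel
    neigh = (sum(padded[di:di + img.shape[0], dj:dj + img.shape[1]]
                 for di in range(3) for dj in range(3)) - img) / 8.0
    diff = img - neigh
    membership = np.tanh(diff / 32.0)      # invented sigmoid membership
    out = np.clip(img + gain * membership, 0, 255)
    return out.astype(np.uint8)

# A faint bright dot on a uniform background becomes easier to threshold
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 140
enhanced = fuzzy_enhance(img)
```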

  13. 77 FR 41742 - In the Matter of: Humane Restraint, Inc., 912 Bethel Circle, Waunakee, WI 53597, Respondent...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-16

    ... under Export Control Classification Number (``ECCN'') 0A982, controlled for Crime Control reasons, and..., classified under ECCN 0A982, controlled for Crime Control reasons, and valued at approximately $112, from the... kit, items classified under ECCN 0A982, controlled for Crime Control reasons, and valued at...

  14. Indirect estimation of emission factors for phosphate surface mining using air dispersion modeling.

    PubMed

    Tartakovsky, Dmitry; Stern, Eli; Broday, David M

    2016-06-15

    To date, phosphate surface mining suffers from a lack of reliable emission factors. Given the complete absence of data from which to derive emission factors, we developed a methodology for estimating them indirectly by studying a range of possible emission factors for surface phosphate mining operations and comparing AERMOD-calculated concentrations to concentrations measured around the mine. We applied this approach to the Khneifiss phosphate mine, Syria, and the Al-Hassa and Al-Abyad phosphate mines, Jordan. The work accounts for numerous model unknowns and parameter uncertainties by applying prudent assumptions concerning the parameter values. Our results suggest that the net mining operations (bulldozing, grading and dragline) contribute rather little to ambient TSP concentrations in comparison to phosphate processing and transport. Based on our results, the common practice of deriving emission rates for phosphate mining operations from the US EPA emission factors for surface coal mining, or from the default emission factor of the EEA, seems reasonable. Yet, since multiple factors affect dispersion from surface phosphate mines, a range of emission factors, rather than a single value, was found to satisfy the model performance criteria. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Model calibration and issues related to validation, sensitivity analysis, post-audit, uncertainty evaluation and assessment of prediction data needs

    USGS Publications Warehouse

    Tiedeman, Claire; Hill, Mary C.

    2007-01-01

    When simulating natural and engineered groundwater flow and transport systems, one objective is to produce a model that accurately represents important aspects of the true system. However, using direct measurements of system characteristics, such as hydraulic conductivity, to construct a model often produces simulated values that poorly match observations of the system state, such as hydraulic heads, flows and concentrations (for example, Barth et al., 2001). This occurs because of inaccuracies in the direct measurements and because the measurements commonly characterize system properties at scales different from those of the model aspects to which they are applied. In these circumstances, the conservation of mass equations represented by flow and transport models can be used to test the applicability of the direct measurements, such as by comparing model simulated values to the system state observations. This comparison leads to calibrating the model, by adjusting the model construction and the system properties as represented by model parameter values, so that the model produces simulated values that reasonably match the observations.

  16. Modeling parameters that characterize pacing of elite female 800-m freestyle swimmers.

    PubMed

    Lipińska, Patrycja; Allen, Sian V; Hopkins, Will G

    2016-01-01

    Pacing offers a potential avenue for enhancement of endurance performance. We report here a novel method for characterizing pacing in 800-m freestyle swimming. Websites provided 50-m lap and race times for 192 swims of 20 elite female swimmers between 2000 and 2013. Pacing for each swim was characterized with five parameters derived from a linear model: linear and quadratic coefficients for effect of lap number, reductions from predicted time for first and last laps, and lap-time variability (standard error of the estimate). Race-to-race consistency of the parameters was expressed as intraclass correlation coefficients (ICCs). The average swim was a shallow negative quadratic with slowest time in the eleventh lap. First and last laps were faster by 6.4% and 3.6%, and lap-time variability was ±0.64%. Consistency between swimmers ranged from low-moderate for the linear and quadratic parameters (ICC = 0.29 and 0.36) to high for the last-lap parameter (ICC = 0.62), while consistency for race time was very high (ICC = 0.80). Only ~15% of swimmers had enough swims (~15 or more) to provide reasonable evidence of optimum parameter values in plots of race time vs. each parameter. The modest consistency of most of the pacing parameters and lack of relationships between parameters and performance suggest that swimmers usually compensated for changes in one parameter with changes in another. In conclusion, pacing in 800-m elite female swimmers can be characterized with five parameters, but identifying an optimal pacing profile is generally impractical.
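    The five-parameter characterization above can be sketched as an ordinary least-squares fit with lap-number terms plus first- and last-lap dummy variables. The sketch below uses synthetic lap times, and the first/last-lap deviations are expressed in seconds rather than the percentages reported in the abstract:

```python
import numpy as np

def pacing_parameters(lap_times):
    """Five pacing parameters (sketch of the approach in the abstract):
    linear and quadratic trends over lap number, deviations of the first
    and last laps from the trend, and residual lap-time variability."""
    n = len(lap_times)
    lap = np.arange(1, n + 1, dtype=float)
    first = (lap == 1).astype(float)       # dummy for the fast first lap
    last = (lap == n).astype(float)        # dummy for the fast last lap
    X = np.column_stack([np.ones(n), lap, lap ** 2, first, last])
    coef, *_ = np.linalg.lstsq(X, np.asarray(lap_times, float), rcond=None)
    resid = lap_times - X @ coef
    see = np.sqrt(resid[1:-1] @ resid[1:-1] / (n - 5))  # lap-time variability
    return {"linear": coef[1], "quadratic": coef[2],
            "first_lap": -coef[3], "last_lap": -coef[4], "see": see}

# Synthetic 16-lap swim: shallow negative quadratic, fast first and last laps
lap = np.arange(1, 17, dtype=float)
times = 32.0 + 0.10 * lap - 0.006 * lap ** 2
times[0] -= 2.0                            # first lap 2 s faster than trend
times[-1] -= 1.0                           # last lap 1 s faster than trend
p = pacing_parameters(times)
```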

  17. The Value of Certainty (Invited)

    NASA Astrophysics Data System (ADS)

    Barkstrom, B. R.

    2009-12-01

    It is clear that Earth science data are valued, in part, for their ability to provide some certainty about the past state of the Earth and about its probable future states. We can sharpen this notion by using seven categories of value:
    ● Warning Service, requiring latency of three hours or less, as well as uninterrupted service
    ● Information Service, requiring latency less than about two weeks, as well as uninterrupted service
    ● Process Information, requiring the ability to distinguish between alternative processes
    ● Short-term Statistics, requiring the ability to construct a reliable record of the statistics of a parameter for an interval of five years or less, e.g. crop insurance
    ● Mid-term Statistics, requiring the ability to construct a reliable record of the statistics of a parameter for an interval of twenty-five years or less, e.g. power plant siting
    ● Long-term Statistics, requiring the ability to construct a reliable record of the statistics of a parameter for an interval of a century or less, e.g. one-hundred-year flood planning
    ● Doomsday Statistics, requiring the ability to construct a reliable statistical record that is useful for reducing the impact of `doomsday' scenarios
    While the first two of these categories place high value on having an uninterrupted flow of information, and the third places value on contributing to our understanding of physical processes, it is notable that the last four may be placed on a common footing by considering the ability of observations to reduce uncertainty. Quantitatively, we can often identify metrics for parameters of interest that are fairly simple. For example:
    ● Detection of a change in the average value of a single parameter, such as global temperature
    ● Detection of a trend, whether linear or nonlinear, such as the trend in cloud forcing known as cloud feedback
    ● Detection of a change in extreme-value statistics, such as flood frequency or drought severity
    For such quantities, we can quantify uncertainty in terms of entropy, which is calculated by creating a set of discrete bins for the value and then using error estimates to assign a probability p_i to each bin. The entropy H is simply H = Σ_i p_i log2(1/p_i). The value of a new set of observations is the information gain I = H_prior − H_posterior. The probability distributions that appear in this calculation depend on rigorous evaluation of errors in the observations. While direct estimates of the monetary value of data that could be used in budget prioritizations may not capture the value of the data to the scientific community, it appears that the information gain may be a useful start in providing a `common currency' for evaluating projects that serve very different communities. In addition, from the standpoint of governmental accounting, it appears reasonable to assume that much of the expense for scientific data becomes a sunk cost shortly after operations begin, and that the real, long-term value is created by the effort scientists expend in creating the software that interprets the data and in calibration and validation. These efforts are the ones that directly contribute to the information gain that provides the value of these data.
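    The entropy and information-gain calculation described above can be written out directly (the bin counts and probabilities below are invented for illustration):

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy H = sum_i p_i * log2(1/p_i), ignoring empty bins."""
    p = np.asarray(p, float)
    p = p[p > 0]
    return float(np.sum(p * np.log2(1.0 / p)))

# Prior: parameter equally likely to lie in any of 8 bins -> 3 bits
h_prior = entropy_bits(np.full(8, 1 / 8))
# Posterior after new observations: mass concentrated in 2 bins -> 1 bit
h_post = entropy_bits([0.5, 0.5])
info_gain = h_prior - h_post               # I = H_prior - H_posterior
```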

  18. Deviations from Vegard's law in semiconductor thin films measured with X-ray diffraction and Rutherford backscattering: The Ge1-ySny and Ge1-xSix cases

    NASA Astrophysics Data System (ADS)

    Xu, Chi; Senaratne, Charutha L.; Culbertson, Robert J.; Kouvetakis, John; Menéndez, José

    2017-09-01

    The compositional dependence of the lattice parameter in Ge1-ySny alloys has been determined from combined X-ray diffraction and Rutherford backscattering (RBS) measurements of a large set of epitaxial films with compositions in the 0 < y < 0.14 range. In view of contradictory prior results, a critical analysis of this method has been carried out, with emphasis on nonlinear elasticity corrections and systematic errors in popular RBS simulation codes. The approach followed is validated by showing that measurements of Ge1-xSix films yield a bowing parameter θGeSi = -0.0253(30) Å, in excellent agreement with the classic work by Dismukes. When the same methodology is applied to Ge1-ySny alloy films, it is found that the bowing parameter θGeSn is zero within experimental error, so that the system follows Vegard's law. This is in qualitative agreement with ab initio theory, but the value of the experimental bowing parameter is significantly smaller than the theoretical prediction. Possible reasons for this discrepancy are discussed in detail.
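    The bowing-parameter description can be made explicit; Vegard's law is the θ = 0 case. A short sketch, assuming the common convention a(x) = (1−x)·a_A + x·a_B + θ·x·(1−x), with the widely quoted Ge and Si lattice constants as illustrative inputs:

```python
# Lattice parameter of an A(1-x)B(x) alloy with a quadratic bowing term.
# theta = 0 recovers Vegard's law (linear interpolation); the sign
# convention here is an assumption for illustration.
def alloy_lattice_parameter(x, a_A, a_B, theta=0.0):
    return (1.0 - x) * a_A + x * a_B + theta * x * (1.0 - x)

# Commonly quoted bulk lattice constants (angstroms)
a_ge, a_si = 5.6579, 5.4310
# theta_GeSi = -0.0253 A as reported in the abstract
a_vegard = alloy_lattice_parameter(0.5, a_ge, a_si)            # linear
a_bowed = alloy_lattice_parameter(0.5, a_ge, a_si, -0.0253)    # with bowing
```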

  19. What are the Starting Points? Evaluating Base-Year Assumptions in the Asian Modeling Exercise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaturvedi, Vaibhav; Waldhoff, Stephanie; Clarke, Leon E.

    2012-12-01

    A common feature of model inter-comparison efforts is that the base year numbers for important parameters such as population and GDP can differ substantially across models. This paper explores the sources and implications of this variation in Asian countries across the models participating in the Asian Modeling Exercise (AME). Because the models do not all have a common base year, each team was required to provide data for 2005 for comparison purposes. This paper compares the year 2005 information for different models, noting the degree of variation in important parameters, including population, GDP, primary energy, electricity, and CO2 emissions. It then explores the difference in these key parameters across different sources of base-year information. The analysis confirms that the sources provide different values for many key parameters. This variation across data sources and additional reasons why models might provide different base-year numbers, including differences in regional definitions, differences in model base year, and differences in GDP transformation methodologies, are then discussed in the context of the AME scenarios. Finally, the paper explores the implications of base-year variation on long-term model results.

  20. Normalized inverse characterization of sound absorbing rigid porous media.

    PubMed

    Zieliński, Tomasz G

    2015-06-01

    This paper presents a methodology for the inverse characterization of sound absorbing rigid porous media, based on standard measurements of the surface acoustic impedance of a porous sample. The model parameters need to be normalized to obtain a robust identification procedure that fits the model-predicted impedance curves to the measured ones. Such a normalization provides a substitute set of dimensionless (normalized) parameters unambiguously related to the original model parameters. Moreover, two scaling frequencies are introduced; however, they are not additional parameters, and for different yet reasonable assumptions of their values the identification procedure should eventually lead to the same solution. The proposed identification technique uses measured and computed impedance curves for a porous sample not only in the standard configuration, that is, set against the rigid termination piston in an impedance tube, but also with air gaps of known thicknesses between the sample and the piston. Therefore, all necessary analytical formulas for sound propagation in double-layered media are provided. The methodology is illustrated by one numerical test and by two examples based on experimental measurements of the acoustic impedance and absorption of porous ceramic samples of different thicknesses and a sample of polyurethane foam.

  1. Investigations of the optical and EPR data and local structure for the trigonal tetrahedral Co2+ centers in LiGa5O8: Co2+ crystal

    NASA Astrophysics Data System (ADS)

    He, Jian; Liao, Bi-Tao; Mei, Yang; Liu, Hong-Gang; Zheng, Wen-Chen

    2018-01-01

    In this paper, we uniformly calculate the optical and EPR data for the Co2+ ion at the trigonal tetrahedral Ga3+ site in LiGa5O8 crystal using the complete diagonalization (of energy matrix) method based on the two-spin-orbit-parameter model, in which the contributions to the spectroscopic data from both the spin-orbit parameter of the dn ion (as in classical crystal field theory) and that of the ligand ions are included. The ten calculated spectroscopic data (seven optical bands and three spin-Hamiltonian parameters g//, g⊥ and D), obtained with only four adjustable parameters, are in good agreement with the available observed values. Compared with the host (GaO4)5- cluster, the large angular distortion, and hence the large trigonal distortion, of the (CoO4)6- impurity center obtained from the calculations is attributed to the large charge and size mismatch of the substitution. This reasonably explains the observed large g-anisotropy Δg (= g// - g⊥) and zero-field splitting D for the (CoO4)6- cluster in LiGa5O8: Co2+ crystal.

  2. Artificial neuron-glia networks learning approach based on cooperative coevolution.

    PubMed

    Mesejo, Pablo; Ibáñez, Oscar; Fernández-Blanco, Enrique; Cedrón, Francisco; Pazos, Alejandro; Porto-Pazos, Ana B

    2015-06-01

    Artificial Neuron-Glia Networks (ANGNs) are a novel bio-inspired machine learning approach. They extend classical Artificial Neural Networks (ANNs) by incorporating recent findings and suppositions about the way information is processed by neural and astrocytic networks in the most evolved living organisms. Although ANGNs are not yet a consolidated method, their advantage over the traditional approach, i.e. without artificial astrocytes, has already been demonstrated on classification problems. However, the corresponding learning algorithms developed so far depend strongly on a set of glial parameters which are manually tuned for each specific problem. As a consequence, preliminary experimental tests have to be run in order to determine an adequate set of values, making such manual parameter configuration time-consuming, error-prone, biased and problem dependent. Thus, in this paper, we propose a novel learning approach for ANGNs that fully automates the learning process and makes it possible to test any kind of reasonable parameter configuration for each specific problem. This new learning algorithm, based on coevolutionary genetic algorithms, is able to properly learn all the ANGN parameters. Its performance is tested on five classification problems, achieving significantly better results than ANGN and competitive results with ANN approaches.

  3. Assessment of dynamic closure for premixed combustion large eddy simulation

    NASA Astrophysics Data System (ADS)

    Langella, Ivan; Swaminathan, Nedunchezhian; Gao, Yuan; Chakraborty, Nilanjan

    2015-09-01

    Turbulent piloted Bunsen flames of stoichiometric methane-air mixtures are computed using the large eddy simulation (LES) paradigm involving an algebraic closure for the filtered reaction rate. This closure involves the filtered scalar dissipation rate of a reaction progress variable. The model for this dissipation rate involves a parameter βc representing the flame front curvature effects induced by turbulence, chemical reactions, molecular dissipation, and their interactions at the sub-grid level, suggesting that this parameter may vary with filter width, i.e. be scale-dependent. Thus, it would be ideal to evaluate this parameter dynamically within the LES. A procedure for this evaluation is discussed and assessed using direct numerical simulation (DNS) data and LES calculations. The probability density functions of βc obtained from the DNS and LES calculations are very similar when the turbulent Reynolds number is sufficiently large and when the filter width normalised by the laminar flame thermal thickness is larger than unity. Results obtained using a constant (static) value for this parameter are also used for comparative evaluation. The detailed discussion presented in this paper suggests that the dynamic procedure works well, and physical insight and reasoning are provided to explain the observed behaviour.

  4. Noisy Preferences in Risky Choice: A Cautionary Note

    PubMed Central

    2017-01-01

    We examine the effects of multiple sources of noise in risky decision making. Noise in the parameters that characterize an individual’s preferences can combine with noise in the response process to distort observed choice proportions. Thus, underlying preferences that conform to expected value maximization can appear to show systematic risk aversion or risk seeking. Similarly, core preferences that are consistent with expected utility theory, when perturbed by such noise, can appear to display nonlinear probability weighting. For this reason, modal choices cannot be used simplistically to infer underlying preferences. Quantitative model fits that do not allow for both sorts of noise can lead to wrong conclusions. PMID:28569526

  5. Intrinsic measurement errors for the speed of light in vacuum

    NASA Astrophysics Data System (ADS)

    Braun, Daniel; Schneiter, Fabienne; Fischer, Uwe R.

    2017-09-01

    The speed of light in vacuum, one of the most important and precisely measured natural constants, is fixed by convention to c=299 792 458 m s-1 . Advanced theories predict possible deviations from this universal value, or even quantum fluctuations of c. Combining arguments from quantum parameter estimation theory and classical general relativity, we here establish rigorously the existence of lower bounds on the uncertainty to which the speed of light in vacuum can be determined in a given region of space-time, subject to several reasonable restrictions. They provide a novel perspective on the experimental falsifiability of predictions for the quantum fluctuations of space-time.

  6. Fractal analysis of phasic laser images of the myocardium for the purpose of diagnostics of acute coronary insufficiency

    NASA Astrophysics Data System (ADS)

    Wanchuliak, O. Y.; Bachinskyi, V. T.

    2011-09-01

    In this work, based on the Mueller-matrix description of optical anisotropy, the possibility of monitoring temporal changes in the birefringence of myocardium tissue has been considered. An optical model of the polycrystalline networks of the myocardium is suggested. The results of investigating the interrelation between the correlation parameters (correlation area, asymmetry coefficient and excess of the autocorrelation function) and the fractal parameters (dispersion of the logarithmic dependencies of the power spectra) are presented. These parameters characterize the distributions of Mueller-matrix elements at the points of laser images of myocardium histological sections. Criteria for differentiating the causes of death are determined.

  7. Study of X-ray photoionized Fe plasma and comparisons with astrophysical modeling codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foord, M E; Heeter, R F; Chung, H

    The charge state distributions of Fe, Na and F are determined in a photoionized laboratory plasma using high resolution x-ray spectroscopy. Independent measurements of the density and radiation flux indicate the ionization parameter ζ in the plasma reaches values ζ = 20-25 erg cm s⁻¹ under near steady-state conditions. A curve-of-growth analysis, which includes the effects of velocity gradients in a one-dimensional expanding plasma, fits the observed line opacities. Absorption lines are tabulated in the wavelength region 8-17 Å. Initial comparisons with a number of astrophysical x-ray photoionization models show reasonable agreement.

  8. Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons.

    PubMed

    Nicola, Wilten; Campbell, Sue Ann

    2013-01-01

    We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons.

  9. Uncertainty quantification and propagation of errors of the Lennard-Jones 12-6 parameters for n-alkanes

    PubMed Central

    Knotts, Thomas A.

    2017-01-01

    Molecular simulation has the ability to predict various physical properties that are difficult to obtain experimentally. For example, we implement molecular simulation to predict the critical constants (i.e., critical temperature, critical density, critical pressure, and critical compressibility factor) for large n-alkanes that thermally decompose experimentally (as large as C48). Historically, molecular simulation has been viewed as a tool that is limited to providing qualitative insight. One key reason for this perceived weakness in molecular simulation is the difficulty to quantify the uncertainty in the results. This is because molecular simulations have many sources of uncertainty that propagate and are difficult to quantify. We investigate one of the most important sources of uncertainty, namely, the intermolecular force field parameters. Specifically, we quantify the uncertainty in the Lennard-Jones (LJ) 12-6 parameters for the CH4, CH3, and CH2 united-atom interaction sites. We then demonstrate how the uncertainties in the parameters lead to uncertainties in the saturated liquid density and critical constant values obtained from Gibbs Ensemble Monte Carlo simulation. Our results suggest that the uncertainties attributed to the LJ 12-6 parameters are small enough that quantitatively useful estimates of the saturated liquid density and the critical constants can be obtained from molecular simulation. PMID:28527455
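    A minimal sketch of propagating LJ-parameter uncertainty to a derived property. In place of the full Gibbs Ensemble Monte Carlo simulation used in the paper, it applies the well-known corresponding-states estimate Tc ≈ 1.31·(ε/kB) for the LJ fluid; the parameter mean and uncertainty below are invented, CH4-like values, not the paper's results:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative, CH4-like LJ well depth expressed as epsilon/kB in kelvin,
# with an invented 1-sigma uncertainty standing in for the quantified
# parameter uncertainty.
eps_over_kb_mean, eps_over_kb_sd = 148.0, 3.0

# Propagate samples through the corresponding-states estimate for the
# LJ-fluid critical temperature (reduced Tc* ~ 1.31).
eps_samples = rng.normal(eps_over_kb_mean, eps_over_kb_sd, 50_000)
tc_samples = 1.31 * eps_samples                        # critical temperature, K

tc_mean = tc_samples.mean()
tc_lo, tc_hi = np.percentile(tc_samples, [2.5, 97.5])  # 95% interval
```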

  11. A decision method based on uncertainty reasoning of linguistic truth-valued concept lattice

    NASA Astrophysics Data System (ADS)

    Yang, Li; Xu, Yang

    2010-04-01

    Decision making with linguistic information is currently a research hotspot. This paper begins by establishing the theoretical basis for linguistic information processing, constructs the linguistic truth-valued concept lattice for a decision information system, and then utilises uncertainty reasoning to make the decision. That is, we first utilise the linguistic truth-valued lattice implication algebra to unify the different kinds of linguistic expressions; second, we construct the linguistic truth-valued concept lattice and the decision concept lattice according to the concrete decision information system; and third, we establish the internal and external uncertainty reasoning methods and discuss their rationality. We apply these uncertainty reasoning methods to decision making and present some generation methods for decision rules. In the end, we give an application of this decision method by means of an example.

  12. Extended Higgs-portal dark matter and the Fermi-LAT Galactic Center Excess

    NASA Astrophysics Data System (ADS)

    Casas, J. A.; Gómez Vargas, G. A.; Moreno, J. M.; Quilis, J.; Ruiz de Austri, R.

    2018-06-01

    In the present work, we show that the Galactic Center Excess (GCE) emission, as recently updated by the Fermi-LAT Collaboration, could be explained by a mixture of Fermi-bubbles-like emission plus dark matter (DM) annihilation, in the context of a scalar-singlet Higgs portal scenario (SHP). In fact, the standard SHP, where the DM particle, S, only has renormalizable interactions with the Higgs, is non-operational due to strong constraints, especially from DM direct detection limits. Thus we consider the most economical extension, called ESHP (for extended SHP), which consists solely of the addition of a second (more massive) scalar singlet in the dark sector. The second scalar can be integrated out, leaving a standard SHP plus a dimension-6 operator. This model has essentially only two relevant parameters (the DM mass and the coupling of the dim-6 operator). DM annihilation occurs mainly into two Higgs bosons, SS → hh. We demonstrate that, despite its economy, the ESHP model provides an excellent fit to the GCE (with p-value ~ 0.6-0.7) for very reasonable values of the parameters, in particular mS ≈ 130 GeV. This agreement of the DM candidate with the GCE properties does not clash with other observables and keeps the S-particle relic density at the accepted value for the DM content in the universe.

  13. Inverse modeling with RZWQM2 to predict water quality

    USGS Publications Warehouse

    Nolan, Bernard T.; Malone, Robert W.; Ma, Liwang; Green, Christopher T.; Fienen, Michael N.; Jaynes, Dan B.

    2011-01-01

    This chapter presents guidelines for autocalibration of the Root Zone Water Quality Model (RZWQM2) by inverse modeling using PEST parameter estimation software (Doherty, 2010). Two sites with diverse climate and management were considered for simulation of N losses by leaching and in drain flow: an almond [Prunus dulcis (Mill.) D.A. Webb] orchard in the San Joaquin Valley, California and the Walnut Creek watershed in central Iowa, which is predominantly in corn (Zea mays L.)–soybean [Glycine max (L.) Merr.] rotation. Inverse modeling provides an objective statistical basis for calibration that involves simultaneous adjustment of model parameters and yields parameter confidence intervals and sensitivities. We describe operation of PEST in both parameter estimation and predictive analysis modes. The goal of parameter estimation is to identify a unique set of parameters that minimize a weighted least squares objective function, and the goal of predictive analysis is to construct a nonlinear confidence interval for a prediction of interest by finding a set of parameters that maximizes or minimizes the prediction while maintaining the model in a calibrated state. We also describe PEST utilities (PAR2PAR, TSPROC) for maintaining ordered relations among model parameters (e.g., soil root growth factor) and for post-processing of RZWQM2 outputs representing different cropping practices at the Iowa site. Inverse modeling provided reasonable fits to observed water and N fluxes and directly benefitted the modeling through: (i) simultaneous adjustment of multiple parameters versus one-at-a-time adjustment in manual approaches; (ii) clear indication by convergence criteria of when calibration is complete; (iii) straightforward detection of nonunique and insensitive parameters, which can affect the stability of PEST and RZWQM2; and (iv) generation of confidence intervals for uncertainty analysis of parameters and model predictions. 
Composite scaled sensitivities, which reflect the total information provided by the observations for a parameter, indicated that most of the RZWQM2 parameters at the California study site (CA) and Iowa study site (IA) could be reliably estimated by regression. Correlations obtained in the CA case indicated that all model parameters could be uniquely estimated by inverse modeling. Although water content at field capacity was highly correlated with bulk density (−0.94), the correlation is less than the threshold for nonuniqueness (0.95, absolute value basis). Additionally, we used truncated singular value decomposition (SVD) at CA to mitigate potential problems with highly correlated and insensitive parameters. Singular value decomposition estimates linear combinations (eigenvectors) of the original process-model parameters. Parameter confidence intervals (CIs) at CA indicated that parameters were reliably estimated with the possible exception of an organic pool transfer coefficient (R45), which had a comparatively wide CI. However, the 95% confidence interval for R45 (0.03–0.35) is mostly within the range of values reported for this parameter. Predictive analysis at CA generated confidence intervals that were compared with independently measured annual water flux (groundwater recharge) and median nitrate concentration in a collocated monitoring well as part of model evaluation. Both the observed recharge (42.3 cm yr−1) and nitrate concentration (24.3 mg L−1) were within their respective 90% confidence intervals, indicating that overall model error was within acceptable limits.
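    The weighted least-squares objective that PEST minimizes, and the composite scaled sensitivities used to judge estimability, can be sketched on a toy linear model. Everything below (the Jacobian, weights, noise level) is a hypothetical stand-in, not an RZWQM2 quantity, and the sensitivity formula is one common definition rather than PEST's exact implementation.

```python
import numpy as np

# Hypothetical linear "model" sim(p) = J @ p standing in for RZWQM2;
# the data, weights, and parameter values below are illustrative only.
rng = np.random.default_rng(0)
J = rng.normal(size=(20, 3))              # sensitivity (Jacobian) matrix
p_true = np.array([1.0, -2.0, 0.5])
obs = J @ p_true + 0.01 * rng.normal(size=20)
w = np.full(20, 2.0)                      # observation weights

def objective(p):
    # PEST-style weighted least-squares objective: sum of squared weighted residuals.
    r = w * (obs - J @ p)
    return float(r @ r)

# For a linear model the minimizer has the closed weighted-least-squares form.
W = np.diag(w ** 2)
p_hat = np.linalg.solve(J.T @ W @ J, J.T @ W @ obs)

# One common form of composite scaled sensitivity (per parameter).
css = np.sqrt(np.mean((J * p_hat * w[:, None]) ** 2, axis=0))
print(p_hat, objective(p_hat), css)
```

For a nonlinear model such as RZWQM2 the minimizer has no closed form, which is why PEST iterates with Jacobian updates instead.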

  14. Validating predictions of evolving porosity and permeability in carbonate reservoir rocks exposed to CO2-brine

    NASA Astrophysics Data System (ADS)

    Smith, M. M.; Hao, Y.; Carroll, S.

    2017-12-01

    Improving our ability to forecast the extent and impact of changes in porosity and permeability due to CO2-brine-carbonate reservoir interactions should lower uncertainty in long-term geologic CO2 storage capacity estimates. We have developed a continuum-scale reactive transport model that simulates spatial and temporal changes to porosity, permeability, mineralogy, and fluid composition within carbonate rocks exposed to CO2 and brine at storage reservoir conditions. The model relies on two primary parameters to simulate brine-CO2-carbonate mineral reaction: kinetic rate constant(s), kmineral, for carbonate dissolution; and an exponential parameter, n, relating porosity change to the resulting permeability. Experimental data collected from fifteen core-flooding experiments conducted on samples from the Weyburn (Saskatchewan, Canada) and Arbuckle (Kansas, USA) carbonate reservoirs were used to calibrate the reactive-transport model and constrain the useful range of k and n values. Here we present the results of our current efforts to validate this model and the use of these parameter values by comparing predictions of the extent and location of dissolution and the evolution of fluid permeability against our results from new core-flood experiments conducted on samples from the Duperow Formation (Montana, USA). Agreement between model predictions and experimental data increases our confidence that these parameter ranges need not be considered site-specific but may be applied (within reason) at various locations and reservoirs. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
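    The coupling between porosity change and permeability via the exponent n can be sketched with a power-law form. The abstract states only that n relates porosity change to permeability; the specific functional form below and all numbers in it are illustrative assumptions.

```python
def permeability_update(k0, phi0, phi, n):
    """Power-law porosity-permeability relation k = k0 * (phi/phi0)**n.
    The power-law form and the values below are illustrative assumptions,
    not the calibrated relation from the paper."""
    return k0 * (phi / phi0) ** n

# Carbonate dissolution raises porosity from 0.10 to 0.12; n = 3 is a made-up value.
k_new = permeability_update(k0=1e-15, phi0=0.10, phi=0.12, n=3.0)
print(k_new / 1e-15)  # permeability grows by a factor (1.2)**3 = 1.728
```

The strong sensitivity of k to small porosity changes for n > 1 is why calibrating n against core-flood data matters.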

  15. Intraocular straylight screening in medical testing centres for driver licence holders in Spain

    PubMed Central

    Michael, Ralph; Barraquer, Rafael I.; Rodríguez, Judith; Tuñi i Picado, Josep; Jubal, Joan Serra; González Luque, Juan Carlos; van den Berg, Tom

    2010-01-01

    Purpose To test the performance of the C-quant straylight meter during daily routine work in medical testing centres for driver licence applicants and driver licence holders in Spain. Methods Altogether 914 subjects (376 younger than 35 years, 428 between 35 and 60 years, and 110 over 60 years) were measured with the C-quant in three medical testing centres (Barcelona, Zaragoza and Palma de Mallorca) in 2006. Technicians were instructed once and the measurements were done during the daily routine work. We recorded: age, BCVA, and self-reported subjective blinding at night; and from the C-quant: the straylight parameter (log s), measurement quality parameters (ESD, Q) and test duration. Results Total C-quant test duration increases slightly with age, from a mean of 7 min (< 35 years) to a mean of 9 min (> 60). At the first attempt, 82% of all subjects produced reliable results (ESD < 0.12). The straylight parameter for this group was independent of ESD, and ESD was independent of total test duration. The known age dependence of the straylight parameter and its weak correlation with BCVA were confirmed. The distribution of subjective blinding at night was very different between test centres. Subjects with “very strong” subjective blinding had significantly higher straylight values than subjects with “no” subjective blinding. Subjects avoiding night driving had significantly higher straylight values than subjects driving at night. Conclusion The C-quant measurement is reasonably fast. Good subject instruction is important to obtain reliable results at the first attempt. Self-reported subjective blinding results depend strongly on the interviewer.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthias C. M. Troffaes; Gero Walter; Dana Kelly

    In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors, but which was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting epistemic uncertainty of the analyst on all levels of the common-cause failure model.
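    The imprecise-Dirichlet update behind the lower and upper expectations can be sketched as follows. The update E = (s·t + n_k)/(s + N) is Walley's standard IDM posterior expectation; the counts and prior bounds below are illustrative, not taken from the paper.

```python
def idm_posterior_bounds(n_k, n_total, s, t_lower, t_upper):
    """Posterior lower/upper expectations of an alpha-factor under Walley's
    imprecise Dirichlet model: E = (s*t + n_k) / (s + N), with the prior
    expectation t ranging over [t_lower, t_upper] and learning parameter s."""
    lo = (s * t_lower + n_k) / (s + n_total)
    up = (s * t_upper + n_k) / (s + n_total)
    return lo, up

# Illustrative numbers (not from the paper): 2 two-component failures among
# 50 events, with s spanning the 1-10 range the authors found reasonable.
for s in (1.0, 10.0):
    print(s, idm_posterior_bounds(2, 50, s, t_lower=0.01, t_upper=0.20))
```

A larger learning parameter s keeps the posterior interval wider for the same data, which is exactly why its elicitation matters.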

  17. Quantitative estimation of climatic parameters from vegetation data in North America by the mutual climatic range technique

    USGS Publications Warehouse

    Anderson, Katherine H.; Bartlein, Patrick J.; Strickland, Laura E.; Pelltier, Richard T.; Thompson, Robert S.; Shafer, Sarah L.

    2012-01-01

    The mutual climatic range (MCR) technique is perhaps the most widely used method for estimating past climatic parameters from fossil assemblages, largely because it can be conducted on a simple list of the taxa present in an assemblage. When applied to plant macrofossil data, this unweighted approach (MCRun) will frequently identify a large range for a given climatic parameter where the species in an assemblage can theoretically live together. To narrow this range, we devised a new weighted approach (MCRwt) that employs information from the modern relations between climatic parameters and plant distributions to lessen the influence of the "tails" of the distributions of the climatic data associated with the taxa in an assemblage. To assess the performance of the MCR approaches, we applied them to a set of modern climatic data and plant distributions on a 25-km grid for North America, and compared observed and estimated climatic values for each grid point. In general, MCRwt was superior to MCRun in providing smaller anomalies, less bias, and better correlations between observed and estimated values. However, by the same measures, the results of Modern Analog Technique (MAT) approaches were superior to MCRwt. Although this might be a reason to favor MAT approaches, they are based on assumptions that may not be valid for paleoclimatic reconstructions, including that: 1) the absence of a taxon from a fossil sample is meaningful, 2) plant associations were largely unaffected by past changes in either levels of atmospheric carbon dioxide or in the seasonal distributions of solar radiation, and 3) plant associations of the past are adequately represented on the modern landscape. To illustrate the application of these MCR and MAT approaches to paleoclimatic reconstructions, we applied them to a Pleistocene paleobotanical assemblage from the western United States.
From our examinations of the estimates of modern and past climates from vegetation assemblages, we conclude that the MCRun technique provides reliable and unbiased estimates of the ranges of possible climatic conditions that can reasonably be associated with these assemblages. The application of MCRwt and MAT approaches can further constrain these estimates and may provide a systematic way to assess uncertainty. The data sets required for MCR analyses in North America are provided in a parallel publication.
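    The core of the unweighted MCR idea is range intersection: the estimate for a climatic parameter is the interval where all taxa in the assemblage can co-occur. A minimal sketch on hypothetical tolerance ranges (the weighting scheme of MCRwt is not reproduced here):

```python
def mutual_climatic_range(taxon_ranges):
    """Unweighted MCR (MCRun): intersect the (min, max) tolerance ranges of all
    taxa present in an assemblage for one climatic parameter.
    Returns None if the ranges share no overlap."""
    lo = max(r[0] for r in taxon_ranges)
    hi = min(r[1] for r in taxon_ranges)
    return (lo, hi) if lo <= hi else None

# Hypothetical July-temperature tolerances (deg C) for three co-occurring taxa.
assemblage = [(8.0, 24.0), (12.0, 28.0), (10.0, 22.0)]
print(mutual_climatic_range(assemblage))
```

Because the intersection is bounded by the extreme "tails" of each taxon's range, a single broadly tolerant taxon can leave the interval wide, which motivates the weighted variant.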

  18. Inter-facility transfer of surgical emergencies in a developing country: effects on management and surgical outcomes.

    PubMed

    Khan, Salma; Zafar, Hasnain; Zafar, Syed Nabeel; Haroon, Naveed

    2014-02-01

    Outcomes of surgical emergencies are associated with the promptness of the appropriate surgical intervention. However, delayed presentation of surgical patients is common in most developing countries. Delays commonly occur due to the transfer of patients between facilities. The aim of the present study was to assess the effect of delays in treatment caused by inter-facility transfers of patients presenting with surgical emergencies, as measured by objective and subjective parameters. We prospectively collected data on all patients presenting with an acute surgical emergency at Aga Khan University Hospital (AKUH). Information regarding demographics, social class, reason and number of transfers, and distance traveled was collected. Patients were categorized into two groups: those transferred to AKUH from another facility (transferred) and direct arrivals (non-transfers). Differences between presenting physiological parameters, vital statistics, and management were tested between the two groups by the chi-square and t tests. Ninety-nine patients were included, 49 (49.5%) of whom had been transferred from another facility. The most common reason for transfer was "lack of satisfactory surgical care." There were significant differences in presenting pulse, oxygen saturation, respiratory rate, fluid for resuscitation, Glasgow Coma Scale, and Revised Trauma Score (all p values <0.001) between transferred and non-transferred patients. In 56 patients there was a further delay in admission, the most common reason being lack of bed availability, followed by financial constraints. Three patients were shifted out of the hospital due to the lack of a ventilator, and 14 patients left against medical advice due to financial limitations. One patient died. Inter-facility transfer of patients with surgical emergencies is common. These patients arrive with deranged physiology which requires complex and prolonged hospital care. 
Patients who cannot afford treatment are most vulnerable to transfers and delays.

  19. Spatial variation of statistical properties of extreme water levels along the eastern Baltic Sea

    NASA Astrophysics Data System (ADS)

    Pindsoo, Katri; Soomere, Tarmo; Rocha, Eugénio

    2016-04-01

    Most existing projections of future extreme water levels rely on the use of classic generalised extreme value distributions. The choice of a particular distribution is often made based on the absolute value of the shape parameter of the Generalised Extreme Value distribution. If this parameter is small, the Gumbel distribution is most appropriate, while in the opposite case the Weibull or Frechet distribution could be used. We demonstrate that the alongshore variation in the statistical properties of numerically simulated high water levels along the eastern coast of the Baltic Sea is so large that the use of a single distribution for projections of extreme water levels is highly questionable. The analysis is based on two simulated data sets produced at the Swedish Meteorological and Hydrological Institute. The output of the Rossby Centre Ocean model is sampled with a resolution of 6 h and the output of the circulation model NEMO with a resolution of 1 h. As the maxima of water levels of subsequent years may be correlated in the Baltic Sea, we also employ maxima for stormy seasons. We provide a detailed analysis of the spatial variation of the parameters of the family of extreme value distributions along an approximately 600 km long coastal section from the north-western shore of Latvia in the Baltic Proper to the eastern Gulf of Finland. The parameters are evaluated using the maximum likelihood method and the method of moments. The analysis also covers the entire Gulf of Riga. The core parameter of this family of distributions, the shape parameter of the Generalised Extreme Value distribution, exhibits extensive variation in the study area. Its values, evaluated using the Hydrognomon software and the maximum likelihood method, vary from about -0.1 near the north-western coast of Latvia in the Baltic Proper up to about 0.05 in the eastern Gulf of Finland. This parameter is very close to zero near Tallinn in the western Gulf of Finland. 
Thus, it is natural that the Gumbel distribution gives adequate projections of extreme water levels for the vicinity of Tallinn. More importantly, this feature indicates that the use of a single distribution for the projections of extreme water levels and their return periods for the entire Baltic Sea coast is inappropriate. The physical reason is the interplay of the complex shape of large subbasins (such as the Gulf of Riga and Gulf of Finland) of the sea and highly anisotropic wind regime. The 'impact' of this anisotropy on the statistics of water level is amplified by the overall anisotropy of the distributions of the frequency of occurrence of high and low water levels. The most important conjecture is that long-term behaviour of water level extremes in different coastal sections of the Baltic Sea may be fundamentally different.
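    Fitting a Generalised Extreme Value distribution to annual (or stormy-season) maxima and deriving a return level can be sketched with SciPy on synthetic data. Note that SciPy's shape parameter c equals -ξ in the convention where positive ξ is the Frechet case, so c = 0 recovers the Gumbel limit; all numbers below are made up.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic annual water-level maxima (cm); shape/loc/scale are made-up values.
rng = np.random.default_rng(1)
annual_maxima = genextreme.rvs(c=0.1, loc=100.0, scale=15.0,
                               size=2000, random_state=rng)

# Maximum-likelihood fit of the GEV shape, location, and scale.
c_hat, loc_hat, scale_hat = genextreme.fit(annual_maxima)

# 100-year return level: exceeded with probability 1/100 in any one year.
rl_100 = genextreme.ppf(1 - 1 / 100, c_hat, loc=loc_hat, scale=scale_hat)
print(c_hat, rl_100)
```

Because return levels are strongly sensitive to the fitted shape parameter, the alongshore variation in shape reported above translates directly into different appropriate distributions per coastal section.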

  20. Default values for assessment of potential dermal exposure of the hands to industrial chemicals in the scope of regulatory risk assessments.

    PubMed

    Marquart, Hans; Warren, Nicholas D; Laitinen, Juha; van Hemmen, Joop J

    2006-07-01

    Dermal exposure needs to be addressed in regulatory risk assessment of chemicals. The models used so far are based on very limited data. The EU project RISKOFDERM has gathered a large number of new measurements on dermal exposure to industrial chemicals in various work situations, together with information on possible determinants of exposure. These data and information, together with some non-RISKOFDERM data were used to derive default values for potential dermal exposure of the hands for so-called 'TGD exposure scenarios'. TGD exposure scenarios have similar values for some very important determinant(s) of dermal exposure, such as amount of substance used. They form narrower bands within the so-called 'RISKOFDERM scenarios', which cluster exposure situations according to the same purpose of use of the products. The RISKOFDERM scenarios in turn are narrower bands within the so-called Dermal Exposure Operation units (DEO units) that were defined in the RISKOFDERM project to cluster situations with similar exposure processes and exposure routes. Default values for both reasonable worst case situations and typical situations were derived, both for single datasets and, where possible, for combined datasets that fit the same TGD exposure scenario. 
    The following reasonable worst case potential hand exposures were derived from combined datasets: (i) loading and filling of large containers (or mixers) with large amounts (many litres) of liquids: 11,500 mg per scenario (14 mg cm(-2) per scenario with the surface of the hands assumed to be 820 cm(2)); (ii) careful mixing of small quantities (tens of grams in <1 l): 4.1 mg per scenario (0.005 mg cm(-2) per scenario); (iii) spreading of (viscous) liquids with a comb on a large surface area: 130 mg per scenario (0.16 mg cm(-2) per scenario); (iv) brushing and rolling of (relatively viscous) liquid products on surfaces: 6500 mg per scenario (8 mg cm(-2) per scenario) and (v) spraying large amounts of liquids (paints, cleaning products) on large areas: 12,000 mg per scenario (14 mg cm(-2) per scenario). These default values are considered useful for estimating exposure for similar substances in similar situations with low uncertainty. Several other default values based on single datasets can also be used, but lead to estimates with a higher uncertainty, due to their more limited basis. Sufficient analogy in all described parameters of the scenario, including duration, is needed to enable proper use of the default values. The default values lead to similar estimates to those of the RISKOFDERM dermal exposure model that was based on the same datasets but uses very different parameters. Both approaches are preferred over older general models, such as EASE, that are not based on data from actual dermal exposure situations.
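    The per-scenario defaults and the quoted surface loadings are linked by the stated 820 cm2 hand-surface assumption; a quick consistency check (the spray scenario is omitted because its quoted figure is rounded more coarsely):

```python
# Reasonable worst-case hand loadings (mg per scenario) from the combined
# datasets, converted to surface loadings with the stated 820 cm2 hand area.
HAND_AREA_CM2 = 820.0
scenarios = {
    "loading/filling large containers": 11500.0,
    "careful mixing of small quantities": 4.1,
    "spreading with a comb": 130.0,
    "brushing and rolling": 6500.0,
}
for name, mg in scenarios.items():
    print(f"{name}: {mg / HAND_AREA_CM2:.3f} mg/cm2")
```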

  1. [Quality assurance from the viewpoint of the x-ray film industry].

    PubMed

    von Volkmann, T

    1992-08-01

    The parameters of a film-screen combination are listed in the directive to section 16 of the German X-ray Regulation. These parameters are determined by methods described in DIN standards and published by the manufacturer. Comparable but less precise parameters are determined in the Acceptance Test. For physical reasons it is not possible to determine the speed of an X-ray film or the intensification factor of a screen separately. The films, screens and processing chemicals delivered by the members of the manufacturer association ZVEI are kept below a deviation (expressed as relative contribution to the system speed S) of +/- 10% for the majority of products; the upper limit is +/- 15%. Poor storage and transport conditions may adversely affect the quality of X-ray films. A special labeling of the film box shall serve to guarantee safe distribution channels. The processing conditions are adjusted at the Acceptance Test according to the manufacturer's recommendations. The Constancy Test of film processing serves to maintain these correct conditions. Methods deviating from the DIN method are of limited (Bayerische method) or no value (Stuttgart method).

  2. Dosimetric characterization of a bi-directional micromultileaf collimator for stereotactic applications.

    PubMed

    Bucciolini, M; Russo, S; Banci Buonamici, F; Pini, S; Silli, P

    2002-07-01

    A 6 MV photon beam from a Linac SL75-5 has been collimated with a new micromultileaf device that is able to shape the field in the two orthogonal directions with four banks of leaves. This is the first clinical installation of the collimator, and in this paper the dosimetric characterization of the system is reported. The dosimetric parameters required by the treatment planning system used for the dose calculation in the patient are: tissue maximum ratios, output factors, transmission and leakage of the leaves, and penumbra values. Ionization chambers, a silicon diode, radiographic films, and LiF thermoluminescent dosimeters have been employed for measurements of absolute dose and beam dosimetric data. Measurements with different dosimeters yield results in reasonable agreement with one another and consistent with data available in the literature for other models of micromultileaf collimator; this permits the use of the measured parameters for clinical applications. The discrepancies between results obtained with the different detectors (around 2%) for the analyzed parameters can be considered an indication of the accuracy that can be reached by current stereotactic dosimetry.

  3. DYNAMICS OF SELF-GRAVITY WAKES IN DENSE PLANETARY RINGS. I. PITCH ANGLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michikoshi, Shugo; Kokubo, Eiichiro; Fujii, Akihiko

    2015-10-20

    We investigate the dynamics of self-gravity wakes in dense planetary rings. In particular, we examine how the pitch angles of self-gravity wakes depend on ring parameters using N-body simulations. We calculate the pitch angles using the two-dimensional autocorrelation function of the ring surface density. We obtain the pitch angles for the inner and outer parts of the autocorrelation function separately. We confirm that the pitch angles are 15°–30° for reasonable ring parameters, which are consistent with previous studies. We find that the inner pitch angle increases with the Saturnicentric distance, while it barely depends on the optical depth and the restitution coefficient of ring particles. The increase of the inner pitch angle with the Saturnicentric distance is consistent with the observations of the A ring. The outer pitch angle does not have a clear dependence on any ring parameters and is about 10°–15°. This value is consistent with the pitch angle of spiral arms in collisionless systems.

  4. Atomic layer deposition for fabrication of HfO2/Al2O3 thin films with high laser-induced damage thresholds.

    PubMed

    Wei, Yaowei; Pan, Feng; Zhang, Qinghua; Ma, Ping

    2015-01-01

    Previous research on the laser damage resistance of thin films deposited by atomic layer deposition (ALD) is rare. In this work, the ALD process for thin film generation was investigated using different process parameters such as various precursor types and pulse durations. The laser-induced damage threshold (LIDT) was measured as a key property for thin films used as laser system components. Reasons for film damage were also investigated. The LIDTs for thin films deposited with improved process parameters reached a higher level than previously measured. Specifically, the LIDT of the Al2O3 thin film reached 40 J/cm(2). The LIDT of the HfO2/Al2O3 anti-reflection film reached 18 J/cm(2), the highest value reported for ALD single-layer and anti-reflection films. In addition, it was shown that the LIDT could be improved by further altering the process parameters. All results show that ALD is an effective film deposition technique for fabrication of thin film components for high-power laser systems.

  5. Bflinks: Reliable Bugfix Links via Bidirectional References and Tuned Heuristics

    PubMed Central

    2014-01-01

    Background. Data from software version archives and defect databases can be used for defect insertion circumstance analysis and defect prediction. The first step in such analyses is identifying defect-correcting changes in the version archive (bugfix commits) and enriching them with additional metadata by establishing bugfix links to corresponding entries in the defect database. Candidate bugfix commits are typically identified via heuristic string matching on the commit message. Research Questions. Which filters could be used to obtain a set of bugfix links? How to tune their parameters? What accuracy is achieved? Method. We analyze a modular set of seven independent filters, including new ones that make use of reverse links, and evaluate visual heuristics for setting cutoff parameters. For a commercial repository, a product expert manually verifies over 2500 links to validate the results with unprecedented accuracy. Results. The heuristics pick a very good parameter value for five filters and a reasonably good one for the sixth. The combined filtering, called bflinks, provides 93% precision and only 7% results loss. Conclusion. Bflinks can provide high-quality results and adapts to repositories with different properties. PMID:27433506
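    The heuristic string-matching step for identifying candidate bugfix commits can be sketched with a simple regular-expression filter. The pattern below is an illustrative assumption, not the paper's actual filter set, and the commit messages are invented.

```python
import re

# Illustrative heuristic filter for candidate bugfix commits: keywords such as
# "fix"/"bug"/"defect", or a #-prefixed issue number in the commit message.
BUG_PATTERN = re.compile(r"(?:fix(?:es|ed)?|bug|defect)\b|#\d+", re.IGNORECASE)

def candidate_bugfix(commit_message):
    return bool(BUG_PATTERN.search(commit_message))

commits = [
    "Fix null pointer in parser (#1042)",
    "Add dark-mode theme",
    "bugfix: off-by-one in pagination",
    "Refactor build scripts",
]
flagged = [m for m in commits if candidate_bugfix(m)]
print(flagged)
```

Such a keyword filter is only the candidate-generation step; the paper's contribution is the additional filters (including reverse links from the defect database) that raise precision on top of it.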

  6. The primacy of thinking about possibilities in the development of reasoning.

    PubMed

    Gauffroy, Caroline; Barrouillet, Pierre

    2011-07-01

    One of the main tenets of the mental model theory is that when individuals reason, they think about possibilities. According to this theory, reasoning on what is possible from the truth of a sentence would be psychologically basic, whereas reasoning the other way round, on the truth or falsity of a sentence from a given state of affairs, would require some meta-ability. The present study tested the developmental corollary of this theory, which is that reasoning about possibilities should develop first, whereas the development of reasoning about truth-value should be delayed. For this purpose, 3rd, 6th, and 9th graders as well as adults were presented with tasks requiring them to evaluate either the possibilities compatible with conditional sentences or the truth-value of these sentences from these same possibilities. The results revealed 2 phenomena. First, the same developmental trend was observed in both tasks with 3 successive interpretational levels: conjunctive, biconditional, and then conditional. Second, there was a developmental lag between the 2 forms of reasoning--with developmental transitions from one level to the next occurring about 3 years later when reasoning about truth-value. The implications of these results for theories of cognitive development and of reasoning are discussed. PsycINFO Database Record (c) 2011 APA, all rights reserved

  7. Modeling ferroelectric film properties and size effects from tetragonal interlayer in Hf1-xZrxO2 grains

    NASA Astrophysics Data System (ADS)

    Künneth, Christopher; Materlik, Robin; Kersch, Alfred

    2017-05-01

    Size effects from surface or interface energy play a pivotal role in stabilizing the ferroelectric phase in recently discovered thin-film zirconia-hafnia. However, sufficient quantitative understanding has been lacking due to the interference with the stabilizing effect from dopants. For the important class of undoped Hf1-xZrxO2, a phase stability model based on free energy from density functional theory (DFT) and surface energy values adapted to the sparse experimental and theoretical data has been successful in describing key properties of the available thin film data. Since surfaces and interfaces are prone to interference, the predictive capability of the model is surprising and points to a hitherto undetected underlying reason. New experimental data hint at the existence of an interlayer on the grain surface fixed in the tetragonal phase, possibly shielding it from external influence. To explore the consequences of such a mechanism, we develop an interface free energy model to include the fixed interlayer, generalize the grain model to include a grain radius distribution, calculate average polarization and permittivity, and compare the model with available experimental data. Since values for interface energies are sparse or uncertain, we obtain them by minimizing the least-square difference between predicted key parameters and experimental data in a global optimization. Since the detailed values of the DFT energies depend on the chosen method, we repeat the search for different computed data sets and arrive at quantitatively different but qualitatively consistent values for the interface energies. The resulting values are physically very reasonable and the model is able to give qualitative predictions. On the other hand, the optimization reveals that the model is not able to fully capture the experimental data. We discuss possible physical effects and directions of research to possibly close this gap.

  8. Estimation of k-ε parameters using surrogate models and jet-in-crossflow data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lefantzi, Sophia; Ray, Jaideep; Arunajatesan, Srinivasan

    2014-11-01

    We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds-Averaged Navier-Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDF), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently a quick-running surrogate is used in place of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameter being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well-behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating three k-ε parameters (Cμ, Cε2, Cε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that calibration yields turbulence model parameters which predict the flowfield far better than when the nominal values of the parameters are used. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data. 
    Thus the primary reason for the poor predictive skill of RANS, when using nominal values of the turbulence model parameters, was parametric uncertainty, which was rectified by calibration. Post-calibration, the dominant contribution to model inaccuracies is due to the structural errors in RANS.
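    The surrogate-plus-MCMC workflow can be sketched in one dimension. Everything here (the stand-in "expensive" model, the cubic surrogate, the synthetic data, the flat prior) is hypothetical, not the actual RANS setup or the treed-linear-model classifier.

```python
import numpy as np

rng = np.random.default_rng(2)

def expensive_model(theta):
    # Stand-in for a RANS run: observable as a function of one turbulence parameter.
    return np.sin(theta) + 0.5 * theta

# Fit a cheap polynomial surrogate to a handful of "expensive" training runs.
train_theta = np.linspace(0.0, 2.0, 8)
coeffs = np.polyfit(train_theta, expensive_model(train_theta), 3)

def surrogate(theta):
    return np.polyval(coeffs, theta)

# Synthetic "experimental" data generated at theta = 1.2 with small noise.
data = expensive_model(1.2) + rng.normal(0.0, 0.01, size=20)
sigma = 0.01

def log_post(theta):
    # Flat prior on [0, 2]; Gaussian likelihood evaluated on the surrogate only.
    if not 0.0 <= theta <= 2.0:
        return -np.inf
    return -0.5 * np.sum((data - surrogate(theta)) ** 2) / sigma ** 2

# Random-walk Metropolis; each step costs one surrogate call, not a RANS run.
theta, lp = 0.5, log_post(0.5)
samples = []
for _ in range(5000):
    prop = theta + 0.1 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[1000:])
print(post.mean(), post.std())
```

The posterior concentrates near the data-generating parameter; any surrogate bias propagates into the posterior, which is why restricting sampling to the well-behaved region matters in the real problem.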

  9. On the Enthalpy and Entropy of Point Defect Formation in Crystals

    NASA Astrophysics Data System (ADS)

    Kobelev, N. P.; Khonik, V. A.

    2018-03-01

    A standard way to determine the enthalpy H and entropy S of point defect formation in crystals consists in the application of the Arrhenius equation for the defect concentration. In this work, we show that a formal use of this method actually gives the effective (apparent) values of these quantities, which appear to be significantly overestimated. The underlying physical reason lies in the temperature-dependent formation enthalpy of the defects, which is controlled by the temperature dependence of the elastic moduli. We present an evaluation of the "true" H- and S-values for aluminum, derived on the basis of experimental data by taking into account the temperature dependence of the formation enthalpy related to the temperature dependence of the elastic moduli. Knowledge of the "true" activation parameters is needed for a correct calculation of the defect concentration and thus constitutes an issue of major importance for fundamental and applied questions of condensed matter physics and chemistry.
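    The effect can be illustrated numerically: if the formation enthalpy decreases linearly with temperature (a made-up parameterization, not the paper's aluminum data), the Arrhenius slope of ln c versus 1/T returns an apparent enthalpy well above the true H(T) in the measurement window.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

# Made-up temperature-dependent formation enthalpy: softening elastic moduli
# lower H as T rises (illustrative numbers, not the paper's aluminum values).
H0, dHdT = 0.70, -2.0e-4      # eV and eV/K

def H(T):
    return H0 + dHdT * (T - 300.0)

S_over_kB = 1.5               # formation entropy in units of k_B

def ln_c(T):
    # ln of defect concentration: c = exp(S/k_B) * exp(-H(T) / (k_B * T))
    return S_over_kB - H(T) / (K_B * T)

# Apparent (Arrhenius) enthalpy from the slope of ln c versus 1/T.
T1, T2 = 700.0, 900.0
H_apparent = -K_B * (ln_c(T2) - ln_c(T1)) / (1 / T2 - 1 / T1)
print(H_apparent, H(800.0))   # the apparent value exceeds the true H near 800 K
```

With H linear in T, the linear-in-T part of the enthalpy is absorbed into the apparent entropy, so the Arrhenius fit recovers the T = 0 intercept (0.76 eV here) rather than the true enthalpy at the measurement temperatures.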

  10. Porosity and hydraulic conductivity estimation of the basaltic aquifer in Southern Syria by using nuclear and electrical well logging techniques

    NASA Astrophysics Data System (ADS)

    Asfahani, Jamal

    2017-08-01

    An alternative approach using nuclear neutron-porosity and electrical resistivity well logging with long (64 inch) and short (16 inch) normal techniques is proposed to estimate the porosity and the hydraulic conductivity (K) of the basaltic aquifers in Southern Syria. The method is applied to the available logs of the Kodana well in Southern Syria. The K value obtained by this technique is reasonable and comparable with the hydraulic conductivity value of 3.09 m/day obtained by the pumping test carried out at the Kodana well. The proposed well logging methodology therefore seems promising and could be applied in basaltic environments to estimate hydraulic conductivity. However, more detailed research is still required to make the technique fully reliable in such environments.

  11. Enhanced magnetocaloric effect tuning efficiency in Ni-Mn-Sn alloy ribbons

    NASA Astrophysics Data System (ADS)

    Quintana-Nedelcos, A.; Sánchez Llamazares, J. L.; Daniel-Perez, G.

    2017-11-01

    The present work was undertaken to investigate the effect of microstructure on the magnetic entropy change of Ni50Mn37Sn13 ribbon alloys. Unchanged sample composition and cell parameter of austenite allowed us to strictly study the correlation between the average grain size and the total magnetic field induced entropy change (ΔST). We found that a size-dependent martensitic transformation tuning results in a wide temperature range tailoring (>40 K) of the magnetic entropy change with a reasonably small variation in the peak value of the total field-induced entropy change. The peak values varied from 6.0 J kg-1 K-1 to 7.7 J kg-1 K-1 for applied fields up to 2 T. Different tuning efficiencies obtained by diverse MCE tailoring approaches are compared to highlight the advantages of the herein proposed mechanism.

  12. Minimally inconsistent reasoning in Semantic Web.

    PubMed

    Zhang, Xiaowang

    2017-01-01

    Reasoning with inconsistencies is an important issue for the Semantic Web, as imperfect information is unavoidable in real applications. To this end, different paraconsistent approaches, which can draw nontrivial conclusions while tolerating inconsistencies, have been proposed to reason with inconsistent description logic knowledge bases. However, existing paraconsistent approaches are often criticized for being too skeptical. This paper therefore presents a non-monotonic paraconsistent version of description logic reasoning, called minimally inconsistent reasoning, in which the inconsistencies tolerated in the reasoning are minimized so that more reasonable conclusions can be inferred. Several desirable properties are studied, showing that the new semantics inherits the advantages of both non-monotonic reasoning and paraconsistent reasoning. A sound and complete tableau-based algorithm, called multi-valued tableaux, is developed to capture minimally inconsistent reasoning. The tableaux algorithm is designed as a framework for multi-valued DL, allowing for different underlying paraconsistent semantics that differ only in their clash conditions. Finally, the complexity of minimally inconsistent description logic reasoning is shown to be on the same level as (classical) description logic reasoning.

  13. Minimally inconsistent reasoning in Semantic Web

    PubMed Central

    Zhang, Xiaowang

    2017-01-01

    Reasoning with inconsistencies is an important issue for the Semantic Web, as imperfect information is unavoidable in real applications. To this end, different paraconsistent approaches, which can draw nontrivial conclusions while tolerating inconsistencies, have been proposed to reason with inconsistent description logic knowledge bases. However, existing paraconsistent approaches are often criticized for being too skeptical. This paper therefore presents a non-monotonic paraconsistent version of description logic reasoning, called minimally inconsistent reasoning, in which the inconsistencies tolerated in the reasoning are minimized so that more reasonable conclusions can be inferred. Several desirable properties are studied, showing that the new semantics inherits the advantages of both non-monotonic reasoning and paraconsistent reasoning. A sound and complete tableau-based algorithm, called multi-valued tableaux, is developed to capture minimally inconsistent reasoning. The tableaux algorithm is designed as a framework for multi-valued DL, allowing for different underlying paraconsistent semantics that differ only in their clash conditions. Finally, the complexity of minimally inconsistent description logic reasoning is shown to be on the same level as (classical) description logic reasoning. PMID:28750030

  14. Describing the geographic spread of dengue disease by traveling waves.

    PubMed

    Maidana, Norberto Aníbal; Yang, Hyun Mo

    2008-09-01

    Dengue is a human disease transmitted by the mosquito Aedes aegypti. For this reason geographical regions infested by this mosquito species are under the risk of dengue outbreaks. In this work, we propose a mathematical model to study the spatial dissemination of dengue using a system of partial differential reaction-diffusion equations. With respect to the human and mosquito populations, we take into account their respective subclasses of infected and uninfected individuals. The dynamics of the mosquito population considers only two subpopulations: the winged form (mature female mosquitoes), and an aquatic population (comprising eggs, larvae and pupae). We disregard the long-distance movement by transportation facilities, for which reason the diffusion is considered restricted only to the winged form. The human population is considered homogeneously distributed in space, in order to describe localized dengue dissemination during a short period of epidemics. The cross-infection is modeled by the law of mass action. A threshold value as a function of the model's parameters is obtained, which determines the rate of dengue dissemination and the risk of dengue outbreaks. Assuming that an area was previously colonized by the mosquitoes, the rate of disease dissemination is determined as a function of the model's parameters. This rate of dissemination of dengue disease is determined by applying the traveling wave solutions to the corresponding system of partial differential equations.
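    The wave-speed calculation can be illustrated with the scalar Fisher-KPP equation u_t = D u_xx + r u(1 - u), a much-simplified stand-in for the full dengue system (whose speed comes from linearizing the coupled equations at the invasion front). The parameter values below are assumed for illustration; the measured front speed is compared with the analytic minimal speed c* = 2*sqrt(D*r):

```python
import numpy as np

# Travelling front in u_t = D u_xx + r u(1 - u), a simplified analogue of the
# winged-mosquito invasion front (hypothetical parameter values).
D, r = 0.01, 0.1                 # km^2/day, 1/day (assumed)
dx, dt = 0.1, 0.1                # grid spacing and time step (D*dt/dx^2 = 0.1, stable)
x = np.arange(0.0, 100.0, dx)
u = np.where(x < 5.0, 1.0, 0.0)  # infestation starts at the left edge

def front_position(u):
    return x[np.argmax(u < 0.5)]   # first point where u drops below 0.5

pos0 = None
for step in range(int(400.0 / dt)):
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    lap[0] = lap[-1] = 0.0       # crude no-flux ends
    u = u + dt * (D * lap + r * u * (1.0 - u))
    if step == int(200.0 / dt):
        pos0 = front_position(u)

speed = (front_position(u) - pos0) / 200.0   # km/day over the last 200 days
c_star = 2.0 * np.sqrt(D * r)                # analytic minimal speed
print(speed, c_star)
```

    Step initial data converge to the minimal-speed front, so the simulated speed lands close to c*.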

  15. Study of tissue oxygen supply rate in a macroscopic photodynamic therapy singlet oxygen model

    NASA Astrophysics Data System (ADS)

    Zhu, Timothy C.; Liu, Baochang; Penjweini, Rozhin

    2015-03-01

    An appropriate expression for the oxygen supply rate (Γs) is required for the macroscopic modeling of the complex mechanisms of photodynamic therapy (PDT). It is unrealistic to model the actual heterogeneous tumor microvascular networks coupled with the PDT processes because of the large computational requirement. In this study, a theoretical microscopic model based on uniformly distributed Krogh cylinders is used to calculate Γs = g(1 - [3O2]/[3O2]0), which can replace the complex modeling of blood vasculature while maintaining a reasonable resemblance to reality; g is the maximum oxygen supply rate and [3O2]/[3O2]0 is the volume-average tissue oxygen concentration normalized to its value prior to PDT. The model incorporates kinetic equations of oxygen diffusion and convection within capillaries and oxygen saturation from oxyhemoglobin. Oxygen supply to the tissue is via diffusion from the uniformly distributed blood vessels. Oxygen can also diffuse along the radius and the longitudinal axis of the cylinder within tissue. The relation of Γs to [3O2]/[3O2]0 is examined for a biologically reasonable range of the physiological parameters for the microvasculature and several light fluence rates (ϕ). The results show a linear relationship between Γs and [3O2]/[3O2]0, independent of ϕ and photochemical parameters; the obtained g ranges from 0.4 to 1390 μM/s.
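    The supply term itself is a one-liner; the sketch below uses a g within the reported 0.4-1390 μM/s range and an assumed pre-PDT oxygen concentration:

```python
# Macroscopic PDT oxygen-supply term, Gamma_s = g * (1 - [3O2]/[3O2]_0),
# with g the maximum supply rate.
def oxygen_supply_rate(o2, o2_initial, g):
    """Volume-averaged supply rate in uM/s; linear in the normalized O2 level."""
    return g * (1.0 - o2 / o2_initial)

g = 50.0          # uM/s, within the reported 0.4-1390 uM/s range
o2_0 = 83.0       # pre-PDT tissue oxygen concentration, uM (assumed)

print(oxygen_supply_rate(83.0, o2_0, g))   # 0.0 before PDT starts
print(oxygen_supply_rate(41.5, o2_0, g))   # 25.0 when O2 has dropped by half
```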

  16. Developmental relations between sympathy, moral emotion attributions, moral reasoning, and social justice values from childhood to early adolescence.

    PubMed

    Daniel, Ella; Dys, Sebastian P; Buchmann, Marlis; Malti, Tina

    2014-10-01

    This study examined the development of sympathy, moral emotion attributions (MEA), moral reasoning, and social justice values in a representative sample of Swiss children (N = 1273) at 6 years of age (Time 1), 9 years of age (Time 2), and 12 years of age (Time 3). Cross-lagged panel analyses revealed that sympathy predicted subsequent increases in MEA and moral reasoning, but not vice versa. In addition, sympathy and moral reasoning at 6 and 9 years of age were associated with social justice values at 12 years of age. The results point to increased integration of affect and cognition in children's morality from middle childhood to early adolescence, as well as to the role of moral development in the emergence of social justice values. Copyright © 2014 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  17. Evaluations of Conflicts Between Latino Values and Autonomy Desires Among Puerto Rican Adolescents.

    PubMed

    Villalobos Solís, Myriam; Smetana, Judith G; Tasopoulos-Chan, Marina

    2017-09-01

    Puerto Rican adolescents (N = 105; M age = 15.97 years, SD = 1.40) evaluated hypothetical situations describing conflicts between Latino values (family obligations and respeto) and autonomy desires regarding personal, friendship, and dating activities. Adolescents judged that peers should prioritize Latino values over autonomy, which led to greater feelings of pride than happiness. However, they believed that teens would prioritize autonomy over Latino values, which led to greater feelings of happiness than pride. Adolescents reasoned about autonomy desires as personal issues, whereas reasoning about Latino values was multifaceted, including references to conventions and concerns for others. Furthermore, judgments and reasoning depended on the type of autonomy desire and Latino value and, sometimes, on participants' age and sex. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.

  18. Relative importance of first and second derivatives of nuclear magnetic resonance chemical shifts and spin-spin coupling constants for vibrational averaging.

    PubMed

    Dracínský, Martin; Kaminský, Jakub; Bour, Petr

    2009-03-07

    Relative importance of anharmonic corrections to molecular vibrational energies, nuclear magnetic resonance (NMR) chemical shifts, and J-coupling constants was assessed for a model set of methane derivatives, differently charged alanine forms, and sugar models. Molecular quartic force fields and NMR parameter derivatives were obtained quantum mechanically by a numerical differentiation. In most cases the harmonic vibrational function combined with the property second derivatives provided the largest correction to the equilibrium values, while anharmonic corrections (third and fourth energy derivatives) were found less important. The most computationally expensive off-diagonal quartic energy derivatives involving four different coordinates provided a negligible contribution. The vibrational corrections of NMR shifts were small and yielded a convincing improvement only for very accurate wave function calculations. For the indirect spin-spin coupling constants the averaging significantly improved already the equilibrium values obtained at the density functional theory level. Both first and complete second shielding derivatives were found important for the shift corrections, while for the J-coupling constants the vibrational parts were dominated by the diagonal second derivatives. The vibrational corrections were also applied to some isotopic effects, where the corrected values reasonably well reproduced the experiment, but only if a full second-order expansion of the NMR parameters was included. Contributions of individual vibrational modes to the averaging are discussed. Similar behavior was found for the methane derivatives, and for the larger and polar molecules. The vibrational averaging thus facilitates interpretation of previous experimental results and suggests that it can make future molecular structural studies more reliable. Because of the lengthy numerical differentiation required to compute the NMR parameter derivatives, their analytical implementation in future quantum chemistry packages is desirable.
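    The numerical differentiation step described above can be sketched with central differences along a single coordinate; the property curve P(q) below is synthetic, not a real shielding surface:

```python
# Central-difference first and second derivatives of an NMR property P along a
# normal coordinate q, mirroring the numerical differentiation described above.
def derivatives(P, q0, h=0.05):
    p_m, p_0, p_p = P(q0 - h), P(q0), P(q0 + h)
    dP = (p_p - p_m) / (2 * h)            # first derivative
    d2P = (p_p - 2 * p_0 + p_m) / h**2    # second (diagonal) derivative
    return dP, d2P

P = lambda q: 30.0 - 4.0 * q + 1.5 * q**2   # synthetic property curve (ppm)
dP, d2P = derivatives(P, 0.0)
print(dP, d2P)   # close to the exact -4.0 and 3.0
```

    In practice three property evaluations per mode give the diagonal second derivatives that dominate the J-coupling corrections; mixed derivatives need additional displaced points.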

  19. Temporal variation in methane emissions in a shallow lake at a southern mid latitude during high and low rainfall periods.

    PubMed

    Fusé, Victoria S; Priano, M Eugenia; Williams, Karen E; Gere, José I; Guzmán, Sergio A; Gratton, Roberto; Juliarena, M Paula

    2016-10-01

    The global methane (CH4) emission of lakes is estimated at between 6 and 16% of total natural CH4 emissions. However, these values have a high uncertainty due to the wide variety of lakes with important differences in their morphological, biological, and physicochemical parameters and the relatively scarce data from southern mid-latitude lakes. For these reasons, we studied CH4 fluxes and CH4 dissolved in water in a typical shallow lake in the Pampean Wetland, Argentina, during four periods of consecutive years (April 2011-March 2015) preceded by different rainfall conditions. Other water physicochemical parameters were measured and meteorological data were reported. We identified three different states of the lake throughout the study as the result of the irregular alternation between high and low rainfall periods, with similar water temperature values but with important variations in dissolved oxygen, chemical oxygen demand, water turbidity, electric conductivity, and water level. As a consequence, marked seasonal and interannual variations occurred in CH4 dissolved in water and CH4 fluxes from the lake. These temporal variations were best reflected by water temperature and depth of the Secchi disk, as a water turbidity estimation, which had a significant double correlation with CH4 dissolved in water. The mean CH4 flux values were 0.22 and 4.09 mg/m2/h for periods with low and high water turbidity, respectively. This work suggests that water temperature and turbidity measurements could serve as indicator parameters of the state of the lake and, therefore, of its behavior as either a CH4 source or sink.

  20. Truth-Valued-Flow Inference (TVFI) and its applications in approximate reasoning

    NASA Technical Reports Server (NTRS)

    Wang, Pei-Zhuang; Zhang, Hongmin; Xu, Wei

    1993-01-01

    The framework of the theory of Truth-Valued-Flow Inference (TVFI) is introduced. Even though dozens of papers have been presented on fuzzy reasoning, we think a rather unified fuzzy reasoning theory is still needed, one with the following two features: (1) it is simple enough to be executed feasibly and easily; and (2) it is well structured and consistent enough to be built into a strict mathematical theory, consistent with the theory proposed by L.A. Zadeh. TVFI is one of the fuzzy reasoning theories that satisfies the above two features. It presents inference in the form of networks, and naturally views inference as a process of truth values flowing among propositions.

  1. Acid-base and copper-binding properties of three organic matter fractions isolated from a forest floor soil solution

    NASA Astrophysics Data System (ADS)

    van Schaik, Joris W. J.; Kleja, Dan B.; Gustafsson, Jon Petter

    2010-02-01

    Vast amounts of knowledge about the proton- and metal-binding properties of dissolved organic matter (DOM) in natural waters have been obtained in studies on isolated humic and fulvic (hydrophobic) acids. Although macromolecular hydrophilic acids normally make up about one-third of DOM, their proton- and metal-binding properties are poorly known. Here, we investigated the acid-base and Cu-binding properties of the hydrophobic (fulvic) acid fraction and two hydrophilic fractions isolated from a soil solution. Proton titrations revealed a higher total charge for the hydrophilic acid fractions than for the hydrophobic acid fraction. The most hydrophilic fraction appeared to be dominated by weak acid sites, as evidenced by an increased slope of the curve of surface charge versus pH at pH values above 6. The titration curves were poorly predicted by both Stockholm Humic Model (SHM) and NICA-Donnan model calculations using generic parameter values, but could be modelled accurately after optimisation of the proton-binding parameters (pH ⩽ 9). Cu-binding isotherms for the three fractions were determined at pH values of 4, 6 and 9. With the optimised proton-binding parameters, the SHM model predictions for Cu binding improved, whereas the NICA-Donnan predictions deteriorated. After optimisation of Cu-binding parameters, both models described the experimental data satisfactorily. Iron(III) and aluminium competed strongly with Cu for binding sites at both pH 4 and pH 6. The SHM model predicted this competition reasonably well, but the NICA-Donnan model underestimated the effects significantly at pH 6. Overall, the Cu-binding behaviour of the two hydrophilic acid fractions was very similar to that of the hydrophobic acid fraction, despite the differences observed in proton-binding characteristics. These results show that for modelling purposes, it is essential to include the hydrophilic acid fraction in the pool of 'active' humic substances.

  2. Optical pulse characteristics of sonoluminescence at low acoustic drive levels.

    PubMed

    Arakeri, V H; Giri, A

    2001-06-01

    From a nonaqueous alkali-metal salt solution, it is possible to observe sonoluminescence (SL) at low acoustic drive levels with the ratio of the acoustic pressure amplitude to the ambient pressure being about 1. In this case, the emission has a narrowband spectral content and consists of a few flashes of light from a levitated gas bubble going through an unstable motion. A systematic statistical study of the optical pulse characteristics of this form of SL is reported here. The results support our earlier findings [Phys. Rev. E 58, R2713 (1998)], but in addition we have clearly established a variation in the optical pulse duration with certain physical parameters such as the gas thermal conductivity. Quantitatively, the SL optical pulse width is observed to vary from 10 ns to 165 ns with the most probable value being 82 ns, for experiments with krypton-saturated sodium salt ethylene glycol solution. With argon, the variation is similar to that of krypton but the most probable value is reduced to 62 ns. The range is significantly smaller with helium, being from 22 ns to 65 ns with the most probable value also being reduced to 42 ns. The observed large variation, for example with krypton, under otherwise fixed controllable experimental parameters indicates that it is an inherent property of the observed SL process, which is transient in nature. It is this feature that necessitated our statistical study. Numerical simulations of the SL process using the bubble dynamics approach of Kamath, Prosperetti, and Egolfopoulos [J. Acoust. Soc. Am. 94, 248 (1993)] suggest that a key uncontrolled parameter, namely the initial bubble radius, may be responsible for the observations. In spite of the fact that certain parameters in the numerical computations have to be fixed from a best fit to one set of experimental data, the observed overall experimental trends of optical pulse characteristics are predicted reasonably well.

  3. Optical pulse characteristics of sonoluminescence at low acoustic drive levels

    NASA Astrophysics Data System (ADS)

    Arakeri, Vijay H.; Giri, Asis

    2001-06-01

    From a nonaqueous alkali-metal salt solution, it is possible to observe sonoluminescence (SL) at low acoustic drive levels with the ratio of the acoustic pressure amplitude to the ambient pressure being about 1. In this case, the emission has a narrowband spectral content and consists of a few flashes of light from a levitated gas bubble going through an unstable motion. A systematic statistical study of the optical pulse characteristics of this form of SL is reported here. The results support our earlier findings [Phys. Rev. E 58, R2713 (1998)], but in addition we have clearly established a variation in the optical pulse duration with certain physical parameters such as the gas thermal conductivity. Quantitatively, the SL optical pulse width is observed to vary from 10 ns to 165 ns with the most probable value being 82 ns, for experiments with krypton-saturated sodium salt ethylene glycol solution. With argon, the variation is similar to that of krypton but the most probable value is reduced to 62 ns. The range is significantly smaller with helium, being from 22 ns to 65 ns with the most probable value also being reduced to 42 ns. The observed large variation, for example with krypton, under otherwise fixed controllable experimental parameters indicates that it is an inherent property of the observed SL process, which is transient in nature. It is this feature that necessitated our statistical study. Numerical simulations of the SL process using the bubble dynamics approach of Kamath, Prosperetti, and Egolfopoulos [J. Acoust. Soc. Am. 94, 248 (1993)] suggest that a key uncontrolled parameter, namely the initial bubble radius, may be responsible for the observations. In spite of the fact that certain parameters in the numerical computations have to be fixed from a best fit to one set of experimental data, the observed overall experimental trends of optical pulse characteristics are predicted reasonably well.

  4. Heuristic lipophilicity potential for computer-aided rational drug design: optimizations of screening functions and parameters.

    PubMed

    Du, Q; Mezey, P G

    1998-09-01

    In this research we test and compare three possible atom-based screening functions used in the heuristic molecular lipophilicity potential (HMLP). Screening function 1 is a power distance-dependent function, bi/|Ri - r|^gamma; screening function 2 is an exponential distance-dependent function, bi exp(-|Ri - r|/d0); and screening function 3 is a weighted distance-dependent function, sign(bi) exp[-xi|Ri - r|/|bi|]. For every screening function, the parameters (gamma, d0, and xi) are optimized using 41 common organic molecules of 4 types of compounds: aliphatic alcohols, aliphatic carboxylic acids, aliphatic amines, and aliphatic alkanes. The results of the calculations show that screening function 3 cannot give chemically reasonable results; however, both the power screening function and the exponential screening function give chemically satisfactory results. There are two notable differences between screening functions 1 and 2. First, the exponential screening function has larger values at short distances than the power screening function, so more influence from the nearest neighbors is included in screening function 2 than in screening function 1. Second, the power screening function has larger values at long distances than the exponential screening function, so screening function 1 is affected by atoms at long distance more than screening function 2. For screening function 1, the suitable range of parameter gamma is 1.0 < gamma < 3.0; gamma = 2.3 is recommended, and gamma = 2.0 is the nearest integral value. For screening function 2, the suitable range of parameter d0 is 1.5 < d0 < 3.0, and d0 = 2.0 is recommended. The HMLP developed in this research provides a potential tool for computer-aided three-dimensional drug design.
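    The three screening functions, as reconstructed from the text with the recommended parameter values (gamma = 2.3, d0 = 2.0; xi = 1.0 is an assumed placeholder, since no recommended value is given), can be written directly:

```python
import numpy as np

# HMLP screening-function candidates; b_i is the atomic lipophilicity index
# and dist = |R_i - r| the distance from atom i to the field point.
def screen1(b_i, dist, gamma=2.3):
    return b_i / dist**gamma                      # power distance dependence

def screen2(b_i, dist, d0=2.0):
    return b_i * np.exp(-dist / d0)               # exponential distance dependence

def screen3(b_i, dist, xi=1.0):
    # weighted form, rejected in the study as chemically unreasonable
    return np.sign(b_i) * np.exp(-xi * dist / abs(b_i))

for f in (screen1, screen2, screen3):
    print(f(0.5, 1.0), f(0.5, 4.0))   # short- vs long-range behaviour
```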

  5. The use of laboratory-determined ion exchange parameters in the predictive modelling of field-scale major cation migration in groundwater over a 40-year period.

    PubMed

    Carlyle, Harriet F; Tellam, John H; Parker, Karen E

    2004-01-01

    An attempt has been made to estimate quantitatively cation concentration changes as estuary water invades a Triassic Sandstone aquifer in northwest England. Cation exchange capacities and selectivity coefficients for Na(+), K(+), Ca(2+), and Mg(2+) were measured in the laboratory using standard techniques. Selectivity coefficients were also determined using a method involving optimized back-calculation from flushing experiments, thus permitting better representation of field conditions; in all cases, the Gaines-Thomas/constant cation exchange capacity (CEC) model was found to be a reasonable, though not perfect, first description. The exchange parameters interpreted from the laboratory experiments were used in a one-dimensional reactive transport mixing cell model, and predictions compared with field pumping well data (Cl and hardness spanning a period of around 40 years, and full major ion analyses in approximately 1980). The concentration patterns predicted using Gaines-Thomas exchange with calcite equilibrium were similar to the observed patterns, but the concentrations of the divalent ions were significantly overestimated, as were 1980 sulphate concentrations, and 1980 alkalinity concentrations were underestimated. Including representation of sulphate reduction in the estuarine alluvium failed to replicate 1980 HCO(3) and pH values. However, by including partial CO(2) degassing following sulphate reduction, a process for which there is 34S and 18O evidence from a previous study, a good match for SO(4), HCO(3), and pH was attained. Using this modified estuary water and averaged values from the laboratory ion exchange parameter determinations, good predictions for the field cation data were obtained. 
It is concluded that the Gaines-Thomas/constant exchange capacity model with averaged parameter values can be used successfully in ion exchange predictions in this aquifer at a regional scale and over extended time scales, despite the numerous assumptions inherent in the approach; this has also been found to be the case in the few other published studies of regional ion exchanging flow.
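    For a single binary exchange, the Gaines-Thomas selectivity coefficient can be sketched as follows; the reaction, equivalent fractions, and solution activities below are illustrative and do not reproduce the study's fitted coefficients:

```python
# Gaines-Thomas selectivity for the exchange 2 NaX + Ca2+ <-> CaX2 + 2 Na+,
# using equivalent fractions E on the exchanger (hedged sketch; illustrative
# values only, not the study's fitted parameters or CEC).
def gaines_thomas_K(E_Ca, E_Na, aNa, aCa):
    """K_GT = (E_Ca * aNa^2) / (E_Na^2 * aCa), solution activities in mol/L."""
    return (E_Ca * aNa**2) / (E_Na**2 * aCa)

# Example: exchanger 60% Ca / 40% Na (in equivalents), estuary-like solution
K = gaines_thomas_K(E_Ca=0.6, E_Na=0.4, aNa=0.45, aCa=0.01)
print(K)   # selectivity coefficient in the Gaines-Thomas convention
```

    A transport code applies this relation in every mixing cell, together with a constant CEC constraint (E_Ca + E_Na = 1), to update exchanger and solution compositions.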

  6. The use of laboratory-determined ion exchange parameters in the predictive modelling of field-scale major cation migration in groundwater over a 40-year period

    NASA Astrophysics Data System (ADS)

    Carlyle, Harriet F.; Tellam, John H.; Parker, Karen E.

    2004-01-01

    An attempt has been made to estimate quantitatively cation concentration changes as estuary water invades a Triassic Sandstone aquifer in northwest England. Cation exchange capacities and selectivity coefficients for Na +, K +, Ca 2+, and Mg 2+ were measured in the laboratory using standard techniques. Selectivity coefficients were also determined using a method involving optimized back-calculation from flushing experiments, thus permitting better representation of field conditions; in all cases, the Gaines-Thomas/constant cation exchange capacity (CEC) model was found to be a reasonable, though not perfect, first description. The exchange parameters interpreted from the laboratory experiments were used in a one-dimensional reactive transport mixing cell model, and predictions compared with field pumping well data (Cl and hardness spanning a period of around 40 years, and full major ion analyses in ˜1980). The concentration patterns predicted using Gaines-Thomas exchange with calcite equilibrium were similar to the observed patterns, but the concentrations of the divalent ions were significantly overestimated, as were 1980 sulphate concentrations, and 1980 alkalinity concentrations were underestimated. Including representation of sulphate reduction in the estuarine alluvium failed to replicate 1980 HCO 3 and pH values. However, by including partial CO 2 degassing following sulphate reduction, a process for which there is 34S and 18O evidence from a previous study, a good match for SO 4, HCO 3, and pH was attained. Using this modified estuary water and averaged values from the laboratory ion exchange parameter determinations, good predictions for the field cation data were obtained. 
It is concluded that the Gaines-Thomas/constant exchange capacity model with averaged parameter values can be used successfully in ion exchange predictions in this aquifer at a regional scale and over extended time scales, despite the numerous assumptions inherent in the approach; this has also been found to be the case in the few other published studies of regional ion exchanging flow.

  7. Sediment residence times constrained by uranium-series isotopes: A critical appraisal of the comminution approach

    NASA Astrophysics Data System (ADS)

    Handley, Heather K.; Turner, Simon; Afonso, Juan C.; Dosseto, Anthony; Cohen, Tim

    2013-02-01

    Quantifying the rates of landscape evolution in response to climate change is inhibited by the difficulty of dating the formation of continental detrital sediments. We present uranium isotope data for Cooper Creek palaeochannel sediments from the Lake Eyre Basin in semi-arid South Australia in order to attempt to determine the formation ages and hence residence times of the sediments. To calculate the amount of recoil loss of 234U, a key input parameter used in the comminution approach, we use two suggested methods (weighted geometric and surface area measurement with an incorporated fractal correction) and typical assumed input parameter values found in the literature. The calculated recoil loss factors and comminution ages are highly dependent on the method of recoil loss factor determination used and the chosen assumptions. To appraise the ramifications of the assumptions inherent in the comminution age approach and determine the individual and combined comminution age uncertainties associated with each variable, Monte Carlo simulations were conducted for a synthetic sediment sample. Using a reasonable associated uncertainty for each input factor and including variations in the source rock and measured (234U/238U) ratios, the total combined uncertainty in comminution age in our simulation (for both methods of recoil loss factor estimation) can amount to ±220-280 ka. The modelling shows that small changes in assumed input values translate into large effects on absolute comminution age. To improve the accuracy of the technique and provide meaningful absolute comminution ages, much tighter constraints are required on the assumptions for input factors such as the fraction of α-recoil lost 234Th and the initial (234U/238U) ratio of the source material.
In order to be able to directly compare calculated comminution ages produced by different research groups, the standardisation of pre-treatment procedures, recoil loss factor estimation and assumed input parameter values is required. We suggest a set of input parameter values for such a purpose. Additional considerations for calculating comminution ages of sediments deposited within large, semi-arid drainage basins are discussed.
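    The Monte Carlo exercise can be sketched with a closed-form comminution-age expression of the DePaolo type, A(t) = (1 - f) + (A0 - (1 - f))e^(-λ234·t), where f is the recoil loss fraction; the input means and uncertainties below are illustrative, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(1)
lam234 = 2.826e-6                       # 234U decay constant, 1/yr

# Closed-form comminution age from (234U/238U) activity ratios,
# A(t) = (1 - f) + (A0 - (1 - f)) * exp(-lam * t), solved for t.
def comminution_age(A_meas, A0, f):
    return -np.log((A_meas - (1.0 - f)) / (A0 - (1.0 - f))) / lam234

# Monte Carlo propagation of assumed (illustrative) input uncertainties.
n = 100_000
A_meas = rng.normal(0.95, 0.002, n)     # measured activity ratio
A0 = rng.normal(1.00, 0.005, n)         # initial ratio of the source rock
f = rng.normal(0.10, 0.02, n)           # alpha-recoil loss fraction

with np.errstate(invalid="ignore", divide="ignore"):
    ages = comminution_age(A_meas, A0, f) / 1e3   # kyr; unphysical draws -> NaN

print(np.nanmean(ages), np.nanstd(ages))  # modest input errors, large age spread
```

    Even with these modest input uncertainties the age spread is on the order of 100 kyr, dominated by the recoil loss fraction, which is the qualitative point of the sensitivity analysis above.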

  8. Feasibility of TCP-based dose painting by numbers applied to a prostate case with (18)F-choline PET imaging.

    PubMed

    Dirscherl, Thomas; Rickhey, Mark; Bogner, Ludwig

    2012-02-01

    A biologically adaptive radiation treatment method to maximize the TCP is shown. Functional imaging is used to acquire a heterogeneous dose prescription in terms of Dose Painting by Numbers and to create a patient-specific IMRT plan. Adapted from a method for selective dose escalation under the guidance of spatial biology distribution, a model was developed which translates heterogeneously distributed radiobiological parameters into voxelwise dose prescriptions. Using the example of a prostate case with (18)F-choline PET imaging, different sets of reported values for the parameters were examined concerning their resulting range of dose values. Furthermore, the influence of each parameter of the linear-quadratic model was investigated. A correlation between PET signal and proliferation as well as cell density was assumed. Using our in-house treatment planning software Direct Monte Carlo Optimization (DMCO), a treatment plan based on the obtained dose prescription was generated. Gafchromic EBT films were irradiated for evaluation. When a TCP of 95% was aimed at, the maximal dose in a voxel of the prescription exceeded 100Gy for most considered parameter sets. One of the parameter sets resulted in a dose range of 87.1Gy to 99.3Gy, yielding a TCP of 94.7%, and was investigated more closely. The TCP of the plan decreased to 73.5% after optimization based on that prescription. The dose difference histogram of optimized and prescribed dose revealed a mean of -1.64Gy and a standard deviation of 4.02Gy. Film verification showed reasonable agreement of planned and delivered dose. If the distribution of radiobiological parameters within a tumor is known, this model can be used to create a dose-painting-by-numbers plan which maximizes the TCP. It was shown that such a heterogeneous dose distribution is technically feasible. Copyright © 2012. Published by Elsevier GmbH.
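    The translation from radiobiological parameters to voxel doses can be sketched with a Poisson TCP model and linear cell kill only (the paper uses the full linear-quadratic model); the radiosensitivity alpha, the voxel count, and the PET-scaled clonogen numbers below are assumed:

```python
import numpy as np

# Voxelwise dose prescription from a Poisson TCP model with linear cell kill,
# TCP_v = exp(-N_v * exp(-alpha * D_v)); solving for D_v gives
# D_v = ln(N_v / (-ln TCP_v)) / alpha.
alpha = 0.26                       # 1/Gy, assumed radiosensitivity
tcp_target = 0.95 ** (1 / 1000.0)  # equal share of a 95% total TCP over 1000 voxels

# Hypothetical clonogen counts per voxel, scaled from a PET signal
N_v = np.array([1e5, 1e6, 1e7])

D_v = np.log(N_v / (-np.log(tcp_target))) / alpha
print(D_v)   # more clonogens -> higher prescribed voxel dose
```

    With these assumed numbers the prescribed doses fall in the high-80s to ~100 Gy, the same order as the dose range quoted in the abstract.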

  9. Reconstruction of the Mars Science Laboratory Parachute Performance and Comparison to the Descent Simulation

    NASA Technical Reports Server (NTRS)

    Cruz, Juan R.; Way, David W.; Shidner, Jeremy D.; Davis, Jody L.; Adams, Douglas S.; Kipp, Devin M.

    2013-01-01

The Mars Science Laboratory used a single mortar-deployed disk-gap-band parachute of 21.35 m nominal diameter to assist in the landing of the Curiosity rover on the surface of Mars. The parachute system's performance on Mars has been reconstructed using data from the on-board inertial measurement unit, atmospheric models, and terrestrial measurements of the parachute system. In addition, the parachute performance results were compared against the end-to-end entry, descent, and landing (EDL) simulation created to design, develop, and operate the EDL system. Mortar performance was nominal. The time from mortar fire to suspension lines stretch (deployment) was 1.135 s, and the time from suspension lines stretch to first peak force (inflation) was 0.635 s. These times were slightly shorter than those used in the simulation. The reconstructed aerodynamic portion of the first peak force was 153.8 kN; the median value for this parameter from an 8,000-trial Monte Carlo simulation was 175.4 kN, 14% higher than the reconstructed value. Aeroshell dynamics during the parachute phase of EDL were evaluated by examining the aeroshell rotation rate and rotational acceleration. The peak values of these parameters were 69.4 deg/s and 625 deg/s², respectively, which were well within the acceptable range. The EDL simulation was successful in predicting the aeroshell dynamics within reasonable bounds. The average total parachute force coefficient for Mach numbers below 0.6 was 0.624, which is close to the pre-flight model nominal drag coefficient of 0.615.

  10. Deriving and Constraining 3D CME Kinematic Parameters from Multi-Viewpoint Coronagraph Images

    NASA Astrophysics Data System (ADS)

    Thompson, B. J.; Mei, H. F.; Barnes, D.; Colaninno, R. C.; Kwon, R.; Mays, M. L.; Mierla, M.; Moestl, C.; Richardson, I. G.; Verbeke, C.

    2017-12-01

Determining the 3D properties of a coronal mass ejection using multi-viewpoint coronagraph observations can be a tremendously complicated process. There are many factors that inhibit the ability to unambiguously identify the speed, direction and shape of a CME. These factors include the need to separate the "true" CME mass from shock-associated brightenings, distinguish between non-radial or deflected trajectories, and identify asymmetric CME structures. Additionally, different measurement methods can produce different results, sometimes with great variations. Part of the reason for the wide range of values that can be reported for a single CME is the difficulty in determining the CME's longitude, since uncertainty in the angle of the CME relative to the observing image planes results in errors in the speed and topology of the CME. Often the errors quoted in an individual study are remarkably small when compared to the range of values that are reported by different authors for the same CME. For example, two authors may report speeds of 700 ± 50 km/s and 500 ± 50 km/s for the same CME. Clearly a better understanding of the accuracy of CME measurements, and an improved assessment of the limitations of the different methods, would be of benefit. We report on a survey of CME measurements, wherein we compare the values reported by different authors and catalogs. The survey will allow us to establish typical errors for the parameters that are commonly used as inputs for CME propagation models such as ENLIL and EUHFORIA. One way modelers handle inaccuracies in CME parameters is to use an ensemble of CMEs, sampled across ranges of latitude, longitude, speed and width. The CMEs are simulated to determine the probability of a "direct hit" and, for the cases with a "hit," to derive a range of possible arrival times. Our study will provide improved guidelines for generating CME ensembles that more accurately sample across the range of plausible values.
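The ensemble-generation step described above can be sketched as follows. The parameter spreads here are hypothetical stand-ins for the survey-derived typical errors, and the "direct hit" criterion is a deliberately crude proxy.

```python
import numpy as np

rng = np.random.default_rng(42)

def cme_ensemble(n, lon=0.0, lat=5.0, speed=700.0, width=45.0,
                 d_lon=10.0, d_lat=10.0, d_speed=100.0, d_width=10.0):
    """Draw n ensemble members around the measured CME parameters
    (degrees, km/s); the 1-sigma spreads d_* stand in for typical errors."""
    return {
        "lon":   rng.normal(lon, d_lon, n),
        "lat":   rng.normal(lat, d_lat, n),
        "speed": rng.normal(speed, d_speed, n),
        "width": rng.normal(width, d_width, n),
    }

members = cme_ensemble(1000)
# fraction of members directed within 30 deg of Earth longitude
# (a crude "direct hit" proxy ignoring CME width and latitude)
hit_rate = np.mean(np.abs(members["lon"]) < 30.0)
```

In practice each member would be propagated with a model such as ENLIL or EUHFORIA, and the hit fraction and the spread of arrival times over the ensemble would be reported.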

  11. Maternal health status correlates with nest success of leatherback sea turtles (Dermochelys coriacea) from Florida.

    PubMed

    Perrault, Justin R; Miller, Debra L; Eads, Erica; Johnson, Chris; Merrill, Anita; Thompson, Larry J; Wyneken, Jeanette

    2012-01-01

    Of the seven sea turtle species, the critically endangered leatherback sea turtle (Dermochelys coriacea) exhibits the lowest and most variable nest success (i.e., hatching success and emergence success) for reasons that remain largely unknown. In an attempt to identify or rule out causes of low reproductive success in this species, we established the largest sample size (n = 60-70 for most values) of baseline blood parameters (protein electrophoresis, hematology, plasma biochemistry) for this species to date. Hematologic, protein electrophoretic and biochemical values are important tools that can provide information regarding the physiological condition of an individual and population health as a whole. It has been proposed that the health of nesting individuals affects their reproductive output. In order to establish correlations with low reproductive success in leatherback sea turtles from Florida, we compared maternal health indices to hatching success and emergence success of their nests. As expected, hatching success (median = 57.4%) and emergence success (median = 49.1%) in Floridian leatherbacks were low during the study period (2007-2008 nesting seasons), a trend common in most nesting leatherback populations (average global hatching success = ∼50%). One protein electrophoretic value (gamma globulin protein) and one hematologic value (red blood cell count) significantly correlated with hatching success and emergence success. Several maternal biochemical parameters correlated with hatching success and/or emergence success including alkaline phosphatase activity, blood urea nitrogen, calcium, calcium:phosphorus ratio, carbon dioxide, cholesterol, creatinine, and phosphorus. Our results suggest that in leatherbacks, physiological parameters correlate with hatching success and emergence success of their nests. 
We conclude that long-term and comparative studies are needed to determine if certain individuals produce nests with lower hatching success and emergence success than others, and if those individuals with evidence of chronic suboptimal health have lower reproductive success.

  12. Maternal Health Status Correlates with Nest Success of Leatherback Sea Turtles (Dermochelys coriacea) from Florida

    PubMed Central

    Perrault, Justin R.; Miller, Debra L.; Eads, Erica; Johnson, Chris; Merrill, Anita; Thompson, Larry J.; Wyneken, Jeanette

    2012-01-01

    Of the seven sea turtle species, the critically endangered leatherback sea turtle (Dermochelys coriacea) exhibits the lowest and most variable nest success (i.e., hatching success and emergence success) for reasons that remain largely unknown. In an attempt to identify or rule out causes of low reproductive success in this species, we established the largest sample size (n = 60–70 for most values) of baseline blood parameters (protein electrophoresis, hematology, plasma biochemistry) for this species to date. Hematologic, protein electrophoretic and biochemical values are important tools that can provide information regarding the physiological condition of an individual and population health as a whole. It has been proposed that the health of nesting individuals affects their reproductive output. In order to establish correlations with low reproductive success in leatherback sea turtles from Florida, we compared maternal health indices to hatching success and emergence success of their nests. As expected, hatching success (median = 57.4%) and emergence success (median = 49.1%) in Floridian leatherbacks were low during the study period (2007–2008 nesting seasons), a trend common in most nesting leatherback populations (average global hatching success = ∼50%). One protein electrophoretic value (gamma globulin protein) and one hematologic value (red blood cell count) significantly correlated with hatching success and emergence success. Several maternal biochemical parameters correlated with hatching success and/or emergence success including alkaline phosphatase activity, blood urea nitrogen, calcium, calcium∶phosphorus ratio, carbon dioxide, cholesterol, creatinine, and phosphorus. Our results suggest that in leatherbacks, physiological parameters correlate with hatching success and emergence success of their nests. 
We conclude that long-term and comparative studies are needed to determine if certain individuals produce nests with lower hatching success and emergence success than others, and if those individuals with evidence of chronic suboptimal health have lower reproductive success. PMID:22359635

  13. Analysis of statistical misconception in terms of statistical reasoning

    NASA Astrophysics Data System (ADS)

    Maryati, I.; Priatna, N.

    2018-05-01

Reasoning skill is needed by everyone in the globalization era, because every person has to manage and use information from all over the world, which can be obtained easily. Statistical reasoning skill is the ability to collect, group, process, interpret, and draw conclusions from information. Developing this skill can be done through various levels of education. However, the skill is often low because many people, students included, assume that statistics is merely counting and applying formulas. Students still have a negative attitude toward courses related to research. The purpose of this research is to analyze students' misconceptions in a descriptive statistics course in relation to statistical reasoning skill. The observation was done by analyzing the results of a misconception test and a statistical reasoning skill test, and observing the effect of students' misconceptions on statistical reasoning skill. The sample of this research was 32 students of a mathematics education department who had taken the descriptive statistics course. The mean value of the misconception test was 49.7 with a standard deviation of 10.6, whereas the mean value of the statistical reasoning skill test was 51.8 with a standard deviation of 8.5. If 65 is taken as the minimal value for achieving the course competence standard, the students' mean values are below that standard. The results of the misconception study emphasized which sub-topics should be considered. Based on the assessment results, students' misconceptions occurred in: 1) writing mathematical sentences and symbols correctly, 2) understanding basic definitions, 3) determining which concept to use in solving a problem. For statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.

  14. High-resolution rovibrational study of the Coriolis-coupled v 12 and v 15 modes of [1.1.1]propellane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirkpatrick, Robynne W; Masiello, Tony; Jariyasopit, Narumol

Infrared spectra of the small strained cage molecule [1.1.1]propellane have been obtained at high resolution (0.0015 cm-1) and the J and K,l rovibrational structure has been resolved for the first time. We recently used combination differences to obtain ground state parameters for propellane; over 4,100 differences from five fundamental and four combination bands were used in this process. The combination-difference approach eliminated errors due to localized perturbations in the upper state levels of the transitions and gave well-determined ground state parameters. In the current work, these ground state parameters were used in a determination of the upper state parameters for the v12(e') perpendicular and v15(a2") parallel bands. Over 4,000 infrared transitions were fitted for each band, with J, K values ranging up to 71, 51 and 92, 90, respectively. While the transition frequencies for both bands can be fit nicely using separate analyses for each band, the strong intensity perturbations observed in the weaker v12 band indicated that Coriolis coupling between the two modes was significant and should be included. Due to correlations with other parameters, the Coriolis coupling parameter ζ(y)15,12a for the v15 and v12 interaction is poorly determined by a transition frequency fit alone. However, by combining the frequency fit with a fit of experimental intensities, a value of -0.42 was obtained, quite close to that predicted from the ab initio calculation (-0.44). This intensity fit also yielded a (∂μz/∂Q15)/(∂μx/∂Q12a) dipole derivative ratio of 36.5, in reasonable agreement with a value of 29.2 predicted by Gaussian ab initio density functional calculations using a cc-pVTZ basis. This ratio is unusually high due to large charge movement as the novel central Caxial-Caxial bond is displaced along the symmetry axis of the molecule for the v15 mode.

  15. Gap-filling methods to impute eddy covariance flux data by preserving variance.

    NASA Astrophysics Data System (ADS)

    Kunwor, S.; Staudhammer, C. L.; Starr, G.; Loescher, H. W.

    2015-12-01

To represent carbon dynamics, in terms of the exchange of CO2 between the terrestrial ecosystem and the atmosphere, eddy covariance (EC) data have been collected using eddy flux towers at various sites across the globe for more than two decades. However, EC measurements are missing for various reasons: precipitation, routine maintenance, or lack of vertical turbulence. In order to obtain estimates of net ecosystem exchange of carbon dioxide (NEE) with high precision and accuracy, robust gap-filling methods to impute missing data are required. While the methods used so far have provided robust estimates of the mean value of NEE, little attention has been paid to preserving the variance structures embodied by the flux data. Preserving the variance of these data will provide unbiased and precise estimates of NEE over time that mimic natural fluctuations. We used a non-linear regression approach with moving windows of different lengths (15, 30, and 60 days) to estimate non-linear regression parameters for one year of flux data from a longleaf pine site at the Joseph Jones Ecological Research Center. We used as our base the Michaelis-Menten and Van't Hoff functions. We assessed the potential physiological drivers of these parameters with linear models using micrometeorological predictors. We then used a parameter prediction approach to refine the non-linear gap-filling equations based on micrometeorological conditions. This provides an opportunity to incorporate additional variables, such as vapor pressure deficit (VPD) and volumetric water content (VWC), into the equations. Our preliminary results indicate that improvements in gap-filling can be gained with a 30-day moving window with additional micrometeorological predictors (as indicated by lower root mean square error (RMSE) of the predicted values of NEE).
Our next steps are to use these parameter predictions from moving windows to gap-fill the data with and without incorporating potential driver variables of the traditionally used parameters. Comparisons of the predicted values from these methods and 'traditional' gap-filling methods (using 12 fixed monthly windows) will then be made to assess the extent to which variance is preserved. Further, this method will be applied to impute artificially created gaps to analyze whether variance is preserved.
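A minimal version of the windowed Michaelis-Menten (light-response) fit might look like the sketch below. This is not the authors' code: the initial guesses, window logic, and minimum-observation threshold are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(par, a, b):
    """Light response of daytime NEE: rectangular hyperbola in PAR.
    a is the asymptotic uptake (negative = uptake), b the half-saturation PAR."""
    return (a * par) / (b + par)

def fit_windows(par, nee, day, window=30):
    """Fit the light-response curve in non-overlapping `window`-day windows;
    return one (a, b) pair per window, NaNs where data are too sparse."""
    params = []
    for start in range(int(day.min()), int(day.max()) + 1, window):
        mask = (day >= start) & (day < start + window) & ~np.isnan(nee)
        if mask.sum() < 10:                # too few observations to fit
            params.append((np.nan, np.nan))
            continue
        (a, b), _ = curve_fit(michaelis_menten, par[mask], nee[mask],
                              p0=(-20.0, 500.0), maxfev=5000)
        params.append((a, b))
    return params
```

In the parameter-prediction step described above, the per-window (a, b) estimates would then be regressed on micrometeorological predictors such as VPD and VWC.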

  16. Electronic structure of ruthenium-doped iron chalcogenides

    NASA Astrophysics Data System (ADS)

    Winiarski, M. J.; Samsel-Czekała, M.; Ciechan, A.

    2014-12-01

The structural and electronic properties of hypothetical RuxFe1-xSe and RuxFe1-xTe systems have been investigated from first principles within density functional theory (DFT). Reasonable values of lattice parameters and chalcogen atomic positions in the tetragonal unit cell of iron chalcogenides have been obtained with the use of norm-conserving pseudopotentials. The well-known discrepancies between experimental data and DFT-calculated structural parameters of iron chalcogenides are related to the semicore atomic states, which were frozen in the approach used here. Such an approach nevertheless yields valid results for the electronic structures of the investigated compounds. The Ru-based chalcogenides exhibit the same topology of the Fermi surface (FS) as that of FeSe, differing only in subtle FS nesting features. Our calculations predict that the ground states of RuSe and RuTe are nonmagnetic, whereas those of the solid solutions RuxFe1-xSe and RuxFe1-xTe become single- and double-stripe antiferromagnetic, respectively. However, the calculated stabilization energy values are comparable for each system. The phase transitions between these magnetic arrangements may be induced by slight changes of the chalcogen atom positions and the lattice parameters a in the unit cell of iron selenides and tellurides. Since the superconductivity in iron chalcogenides is believed to be mediated by spin fluctuations in the single-stripe magnetic phase, the RuxFe1-xSe and RuxFe1-xTe systems are good candidates for new superconducting iron-based materials.

  17. A physically-based method for predicting peak discharge of floods caused by failure of natural and constructed earthen dams

    USGS Publications Warehouse

    Walder, J.S.; O'Connor, J. E.; Costa, J.E.; ,

    1997-01-01

We analyze a simple, physically-based model of breach formation in natural and constructed earthen dams to elucidate the principal factors controlling the flood hydrograph at the breach. Formation of the breach, which is assumed trapezoidal in cross-section, is parameterized by the mean rate of downcutting, k, the value of which is constrained by observations. A dimensionless formulation of the model leads to the prediction that the breach hydrograph depends upon lake shape, the ratio r of breach width to depth, the side slope θ of the breach, and the parameter η = (V/D³)(k/√(gD)), where V = lake volume, D = lake depth, and g is the acceleration due to gravity. Calculations show that peak discharge Qp depends weakly on lake shape, r, and θ, but strongly on η, which is the product of a dimensionless lake volume and a dimensionless erosion rate. Qp(η) takes asymptotically distinct forms depending on whether η ≪ 1 or η ≫ 1. Theoretical predictions agree well with data from dam failures for which k could be reasonably estimated. The analysis provides a rapid and in many cases graphical way to estimate plausible values of Qp at the breach.
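The controlling dimensionless parameter is easy to evaluate for a candidate dam; the lake geometry and downcutting rate below are hypothetical, chosen only to show which asymptotic regime applies.

```python
import math

def breach_eta(V, D, k, g=9.81):
    """Dimensionless breach parameter: (V / D^3) * (k / sqrt(g * D)),
    i.e. dimensionless lake volume times dimensionless erosion rate.
    V: lake volume (m^3), D: lake depth (m), k: downcutting rate (m/s)."""
    return (V / D**3) * (k / math.sqrt(g * D))

# hypothetical landslide-dammed lake: V = 1e6 m^3, D = 10 m, k = 10 m/h
eta = breach_eta(1e6, 10.0, 10.0 / 3600.0)
```

Here η ≈ 0.28 ≪ 1, so the small-η asymptotic form of Qp(η) (breach erosion slow relative to lake drawdown) would apply.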

  18. Current Awareness and Use of the Strain Echocardiography in Routine Clinical Practices: Result of a Nationwide Survey in Korea.

    PubMed

    Lee, Ju-Hee; Park, Jae-Hyeong; Park, Seung Woo; Kim, Woo-Shik; Sohn, Il Suk; Chin, Jung Yeon; Cho, Jung Sun; Youn, Ho-Joong; Jung, Hae Ok; Lee, Sun Hwa; Kim, Seong-Hwan; Chung, Wook-Jin; Shim, Chi Young; Jeong, Jin-Won; Choi, Eui-Young; Rim, Se-Joong; Kim, Jang-Young; Kim, Kye Hun; Shin, Joon-Han; Kim, Dae-Hee; Jeon, Ung; Choi, Jung Hyun; Kim, Yong-Jin; Joo, Seung Jae; Kim, Ki-Hong; Cho, Kyoung Im; Cho, Goo-Yeong

    2017-09-01

Because conventional echocardiographic parameters have several limitations, strain echocardiography has often been introduced in clinical practice. However, there are also obstacles to using it in clinical practice. Therefore, we wanted to determine the current status of awareness and use of strain echocardiography in Korea. We conducted a nationwide survey to evaluate current use and awareness of strain echocardiography among the members of the Korean Society of Echocardiography. We gathered a total of 321 questionnaires from 25 cardiology centers in Korea. All participants were able to perform or interpret echocardiographic examinations. All participating institutions performed strain echocardiography. Most of our study participants (97%) were aware of speckle tracking echocardiography and 185 (58%) performed it for clinical and research purposes. Two-dimensional strain echocardiography was the most commonly used modality and the left ventricle (LV) was the most commonly assessed cardiac chamber (99%) for clinical purposes. Most of the participants (89%) did not think LV strain can replace LV ejection fraction (LVEF) in their clinical practice. The common reasons for not using strain echocardiography routinely were the diversity of strain measurements and the lack of normal reference values. Many participants had a favorable view of the future of strain echocardiography. Most of our study participants were aware of strain echocardiography, and all institutions performed strain echocardiography for clinical and research purposes. However, they did not think the LV strain values could replace LVEF. The diversity of strain measurements and lack of normal reference values were common reasons for not using strain echocardiography in clinical practice.

  19. Takahasi Nearest-Neighbour Gas Revisited II: Morse Gases

    NASA Astrophysics Data System (ADS)

    Matsumoto, Akira

    2011-12-01

Some thermodynamic quantities for the Morse potential are analytically evaluated for an isobaric process. The parameters of Morse gases for 21 substances are obtained from second virial coefficient data and the spectroscopic data of diatomic molecules. Some thermodynamic quantities for water are also calculated numerically and drawn graphically. The inflexion point of the length L, which depends on temperature T and pressure P, corresponds physically to a boiling point. L indicates the liquid phase from lower temperature to the inflexion point and the gaseous phase from the inflexion point to higher temperature. The calculated boiling temperatures are reasonable compared with experimental data. The behaviour of L suggests the possibility of a first-order phase transition in one dimension.
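The link between Morse parameters and second virial coefficient data can be sketched with a classical-statistical numerical integration (the paper's analytic isobaric treatment is more involved; the well depth, range parameter, and equilibrium separation below are illustrative, not values for any of the 21 substances).

```python
import numpy as np

def morse(r, D_e, a, r_e):
    """Intermolecular Morse potential: D_e*(1 - exp(-a*(r - r_e)))^2 - D_e,
    so the well has depth D_e at r = r_e and V -> 0 as r -> infinity."""
    return D_e * (1.0 - np.exp(-a * (r - r_e)))**2 - D_e

def second_virial(T, D_e, a, r_e, k_B=1.380649e-23, n=20000, r_max=5e-9):
    """Classical second virial coefficient per molecule (m^3):
    B2(T) = -2*pi * integral of (exp(-V/kT) - 1) * r^2 dr."""
    r = np.linspace(1e-12, r_max, n)
    integrand = (np.exp(-morse(r, D_e, a, r_e) / (k_B * T)) - 1.0) * r**2
    dr = r[1] - r[0]
    return -2.0 * np.pi * np.sum(integrand) * dr
```

Fitting (D_e, a, r_e) to tabulated B2(T) data would recover Morse-gas parameters of the kind used in the paper; the sketch reproduces the expected sign change from attraction-dominated (B2 < 0) at low T toward repulsion-dominated at high T.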

  20. Duplex Tear Film Evaporation Analysis.

    PubMed

    Stapf, M R; Braun, R J; King-Smith, P E

    2017-12-01

    Tear film thinning, hyperosmolarity, and breakup can cause irritation and damage to the human eye, and these form an area of active investigation for dry eye syndrome research. Recent research demonstrates that deficiencies in the lipid layer may cause locally increased evaporation, inducing conditions for breakup. In this paper, we explore the conditions for tear film breakup by considering a model for tear film dynamics with two mobile fluid layers, the aqueous and lipid layers. In addition, we include the effects of osmosis, evaporation as modified by the lipid, and the polar portion of the lipid layer. We solve the system numerically for reasonable parameter values and initial conditions and analyze how shifts in these cause changes to the system's dynamics.
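The competition between evaporation and osmotic influx that drives the dynamics can be illustrated in a spatially uniform toy reduction of such a model (this is not the authors' two-layer formulation, and all rates below are placeholder values).

```python
def thin_film(h0=3.5e-6, c0=1.0, E=1.0e-8, Pc=1.0e-8, dt=0.01, t_end=60.0):
    """Spatially uniform tear-film thinning: thickness h (m) decreases by
    evaporation E (m/s) and is replenished by osmotic influx Pc*(c - 1),
    where c is osmolarity relative to isotonic. Solute mass h*c is
    conserved, so c = h0*c0/h. Forward-Euler integration."""
    h, t = h0, 0.0
    while t < t_end:
        c = h0 * c0 / h
        h += (Pc * (c - 1.0) - E) * dt
        t += dt
    return h, h0 * c0 / h
```

With these rates the film thins monotonically toward the equilibrium where osmotic influx balances evaporation (here c = 1 + E/Pc = 2, i.e. h = h0/2), mirroring the hyperosmolarity build-up discussed in the abstract.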

  1. Stochastic optimization for the detection of changes in maternal heart rate kinetics during pregnancy

    NASA Astrophysics Data System (ADS)

    Zakynthinaki, M. S.; Barakat, R. O.; Cordente Martínez, C. A.; Sampedro Molinuevo, J.

    2011-03-01

The stochastic optimization method ALOPEX IV has been successfully applied to the problem of detecting possible changes in maternal heart rate kinetics during pregnancy. To this end, maternal heart rate data were recorded before, during and after gestation, during sessions of exercise of constant, mild intensity; ALOPEX IV stochastic optimization was used to calculate the parameter values that optimally fit a dynamical systems model to the experimental data. The results not only demonstrate the effectiveness of ALOPEX IV stochastic optimization, but also have important implications in the area of exercise physiology, as they reveal important changes in maternal cardiovascular dynamics as a result of pregnancy.

  2. Surface tension of flowing soap films

    NASA Astrophysics Data System (ADS)

    Sane, Aakash; Mandre, Shreyas; Kim, Ildoo

    2018-04-01

The surface tension of flowing soap films is measured with respect to the film thickness and the concentration of the soap solution. We do so by measuring the curvature of the nylon wires that bound the soap-film channel, which parametrizes the relation between the surface tension of the film and the tension of the wire. We find the surface tension of our soap films increases when the film is relatively thin or made of soap solution of low concentration; otherwise it approaches an asymptotic value of 30 mN/m. A simple adsorption model with only two parameters describes our observations reasonably well. With our measurements, we are also able to determine the Gibbs elasticity of our soap films.
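A two-parameter adsorption model of the kind invoked above can be written as a Langmuir-von Szyszkowski isotherm. The abstract does not state the authors' exact model, so this form and its parameter values are assumptions for illustration.

```python
import numpy as np

R, T = 8.314, 298.0          # gas constant (J/(mol K)), temperature (K)
SIGMA_WATER = 72.0e-3        # surface tension of pure water (N/m)

def szyszkowski(c, gamma_inf, a):
    """Langmuir-von Szyszkowski isotherm: surface tension vs bulk
    surfactant concentration c (same units as a). Two free parameters:
    gamma_inf, the saturation surface excess (mol/m^2), and a, the
    adsorption (half-coverage) constant."""
    return SIGMA_WATER - gamma_inf * R * T * np.log1p(c / a)
```

The model recovers the pure-water value at c = 0 and decreases monotonically with concentration, consistent with the observed trend of lower surface tension at higher soap concentration.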

  3. Noisy preferences in risky choice: A cautionary note.

    PubMed

    Bhatia, Sudeep; Loomes, Graham

    2017-10-01

    We examine the effects of multiple sources of noise in risky decision making. Noise in the parameters that characterize an individual's preferences can combine with noise in the response process to distort observed choice proportions. Thus, underlying preferences that conform to expected value maximization can appear to show systematic risk aversion or risk seeking. Similarly, core preferences that are consistent with expected utility theory, when perturbed by such noise, can appear to display nonlinear probability weighting. For this reason, modal choices cannot be used simplistically to infer underlying preferences. Quantitative model fits that do not allow for both sorts of noise can lead to wrong conclusions. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. Reasons to value the health care intangible asset valuation.

    PubMed

    Reilly, Robert F

    2012-01-01

    There are numerous individual reasons to conduct a health care intangible asset valuation. This discussion summarized many of these reasons and considered the common categories of these individual reasons. Understanding the reason for the intangible asset analysis is an important prerequisite to conducting the valuation, both for the analyst and the health care owner/operator. This is because an intangible asset valuation may not be the type of analysis that the owner/operator really needs. Rather, the owner/operator may really need an economic damages measurement, a license royalty rate analysis, an intercompany transfer price study, a commercialization potential evaluation, or some other type of intangible asset analysis. In addition, a clear definition of the reason for the valuation will allow the analyst to understand if (1) any specific analytical guidelines, procedures, or regulations apply and (2) any specific reporting requirement applies. For example, intangible asset valuations prepared for fair value accounting purposes should meet specific ASC 820 fair value accounting guidance. Intangible asset valuations performed for intercompany transfer price tax purposes should comply with the guidance provided in the Section 482 regulations. Likewise, intangible asset valuations prepared for Section 170 charitable contribution purposes should comply with specific reporting requirements. The individual reasons for the health care intangible asset valuation may influence the standard of value applied, the valuation date selected, the valuation approaches and methods applied, the form and format of valuation report prepared, and even the type of professional employed to perform the valuation.

  5. System and method for motor parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luhrs, Bin; Yan, Ting

    2014-03-18

A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters of the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.
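The patent-style description above amounts to looking up an unknown parameter against a reference-motor database. A minimal sketch might do this with a nearest-neighbour average; the normalized distance metric and the choice of k are assumptions, not details from the patent.

```python
import numpy as np

def estimate_unknown(known, ref_known, ref_unknown, k=3):
    """Estimate an unknown motor parameter: find the k reference motors
    whose known parameters are closest (Euclidean distance, each column
    normalized by its spread) and average their unknown-parameter values.
    known: (m,) parameters of the target motor;
    ref_known: (n, m) known parameters of n reference motors;
    ref_unknown: (n,) corresponding unknown-parameter values."""
    scale = ref_known.std(axis=0) + 1e-12   # avoid division by zero
    d = np.linalg.norm((ref_known - known) / scale, axis=1)
    nearest = np.argsort(d)[:k]
    return ref_unknown[nearest].mean()
```

Given, say, rated power and rated current as the known parameters, the estimated unknown value (e.g. rotor inertia) would then feed the motor management strategy.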

  6. Precise Orbital and Geodetic Parameter Estimation using SLR Observations for ILRS AAC

    NASA Astrophysics Data System (ADS)

    Kim, Young-Rok; Park, Eunseo; Oh, Hyungjik Jay; Park, Sang-Young; Lim, Hyung-Chul; Park, Chandeok

    2013-12-01

In this study, we present results of precise orbital and geodetic parameter estimation using satellite laser ranging (SLR) observations for the International Laser Ranging Service (ILRS) associate analysis center (AAC). Using normal point observations of LAGEOS-1, LAGEOS-2, ETALON-1, and ETALON-2 in SLR consolidated laser ranging data format, the NASA/GSFC GEODYN II and SOLVE software programs were utilized for precise orbit determination (POD) and for finding solutions of a terrestrial reference frame (TRF) and Earth orientation parameters (EOPs). For POD, a weekly-based orbit determination strategy was employed to process SLR observations taken over 20 weeks in 2013. For solutions of TRF and EOPs, a loosely constrained scheme was used to integrate POD results of the four geodetic SLR satellites. The coordinates of 11 ILRS core sites were determined, and daily polar motion and polar motion rates were estimated. The root mean square (RMS) value of post-fit residuals was used for orbit quality assessment, and both the stability of the TRF and the precision of the EOPs by external comparison were analyzed for verification of our solutions. Results of post-fit residuals show that the RMS of the orbits of LAGEOS-1 and LAGEOS-2 are 1.20 and 1.12 cm, and those of ETALON-1 and ETALON-2 are 1.02 and 1.11 cm, respectively. The stability analysis of the TRF shows that the mean value of 3D stability of the coordinates of the 11 ILRS core sites is 7.0 mm. An external comparison, with respect to International Earth Rotation and Reference Systems Service (IERS) 08 C04 results, shows that standard deviations of polar motion XP and YP are 0.754 milliarcseconds (mas) and 0.576 mas, respectively. Our results of precise orbital and geodetic parameter estimation are reasonable and help advance research at the ILRS AAC.

  7. Volumetric breast density measurement: sensitivity analysis of a relative physics approach

    PubMed Central

    Lau, Susie; Abdul Aziz, Yang Faridah

    2016-01-01

    Objective: To investigate the sensitivity and robustness of a volumetric breast density (VBD) measurement system to errors in the imaging physics parameters including compressed breast thickness (CBT), tube voltage (kVp), filter thickness, tube current-exposure time product (mAs), detector gain, detector offset and image noise. Methods: 3317 raw digital mammograms were processed with Volpara® (Matakina Technology Ltd, Wellington, New Zealand) to obtain fibroglandular tissue volume (FGV), breast volume (BV) and VBD. Errors in parameters including CBT, kVp, filter thickness and mAs were simulated by varying them in the Digital Imaging and Communications in Medicine (DICOM) tags of the images up to ±10% of the original values. Errors in detector gain and offset were simulated by varying them in the Volpara configuration file up to ±10% from their default values. For image noise, Gaussian noise was generated and introduced into the original images. Results: Errors in filter thickness, mAs, detector gain and offset had limited effects on FGV, BV and VBD. Significant effects in VBD were observed when CBT, kVp, detector offset and image noise were varied (p < 0.0001). Maximum shifts in the mean (1.2%) and median (1.1%) VBD of the study population occurred when CBT was varied. Conclusion: Volpara was robust to expected clinical variations, with errors in most investigated parameters giving limited changes in results, although extreme variations in CBT and kVp could lead to greater errors. Advances in knowledge: Despite Volpara's robustness, rigorous quality control is essential to keep the parameter errors within reasonable bounds. Volpara appears robust within those bounds, although for more advanced applications, such as tracking density change over time, it remains to be seen how accurate the measures need to be. PMID:27452264
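The perturb-one-parameter-at-a-time analysis described above can be sketched generically. The density model below is a toy stand-in (VBD = FGV/BV, with BV proportional to breast area times CBT), not Volpara's proprietary algorithm; all parameter names and values are illustrative.

```python
def sensitivity_sweep(model, params, name, rel_errors=(-0.10, -0.05, 0.05, 0.10)):
    """Relative change in a model output when one input parameter is
    perturbed by a given relative error (all others held at nominal)."""
    baseline = model(**params)
    results = {}
    for err in rel_errors:
        perturbed = dict(params)
        perturbed[name] *= 1.0 + err
        results[err] = (model(**perturbed) - baseline) / baseline
    return results

# Toy stand-in for a volumetric density estimate (NOT Volpara's algorithm):
toy_vbd = lambda fgv, area, cbt: fgv / (area * cbt)
shifts = sensitivity_sweep(toy_vbd, {"fgv": 100.0, "area": 100.0, "cbt": 5.0}, "cbt")
```

Because VBD here scales as 1/CBT, a +10% CBT error shifts the toy VBD by 1/1.1 - 1, about -9%; the same harness could sweep any other tag-derived parameter.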

  8. Multivariate models for prediction of rheological characteristics of filamentous fermentation broth from the size distribution.

    PubMed

    Petersen, Nanna; Stocks, Stuart; Gernaey, Krist V

    2008-05-01

    The main purpose of this article is to demonstrate that principal component analysis (PCA) and partial least squares regression (PLSR) can be used to extract information from particle size distribution data and predict rheological properties. Samples from commercially relevant Aspergillus oryzae fermentations conducted in 550 L pilot scale tanks were characterized with respect to particle size distribution, biomass concentration, and rheological properties. The rheological properties were described using the Herschel-Bulkley model. Estimation of all three parameters in the Herschel-Bulkley model (yield stress (tau(y)), consistency index (K), and flow behavior index (n)) resulted in a large standard deviation of the parameter estimates. The flow behavior index was not found to be correlated with any of the other measured variables, and previous studies have suggested a constant value of the flow behavior index in filamentous fermentations. This parameter was therefore fixed at its average value, which decreased the standard deviation of the estimates of the remaining rheological parameters significantly. Using a PLSR model, a reasonable prediction of apparent viscosity (mu(app)), yield stress (tau(y)), and consistency index (K) could be made from the size distributions, biomass concentration, and process information. This provides a method with high predictive power for the rheology of fermentation broth, with the advantage over previous models that tau(y) and K can be predicted as well as mu(app). Validation on an independent test set yielded a root mean square error of 1.21 Pa for tau(y), 0.209 Pa s(n) for K, and 0.0288 Pa s for mu(app), corresponding to R(2) = 0.95, R(2) = 0.94, and R(2) = 0.95, respectively. Copyright 2007 Wiley Periodicals, Inc.
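Fixing the flow behavior index n, as described above, reduces the Herschel-Bulkley fit (tau = tau_y + K * gamma_dot**n) to linear least squares in gamma_dot**n. A minimal sketch, with an illustrative n = 0.45 and synthetic data rather than the study's measurements:

```python
import numpy as np

def fit_herschel_bulkley(shear_rate, shear_stress, n=0.45):
    """Fit tau = tau_y + K * gamma_dot**n with the flow behavior index n
    fixed; the model is then linear in x = gamma_dot**n, so ordinary
    least squares gives K (slope) and tau_y (intercept)."""
    x = shear_rate ** n
    K, tau_y = np.polyfit(x, shear_stress, 1)
    return tau_y, K

# Synthetic check: stress generated from known tau_y = 3.0 Pa, K = 0.8 Pa s^n.
gamma_dot = np.linspace(1.0, 100.0, 50)
tau = 3.0 + 0.8 * gamma_dot ** 0.45
tau_y, K = fit_herschel_bulkley(gamma_dot, tau)
```

With noise-free synthetic data the known parameters are recovered exactly, which is a useful sanity check before applying the fit to rheometer data.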

  9. Equivalent circuit parameters of nickel/metal hydride batteries from sparse impedance measurements

    NASA Astrophysics Data System (ADS)

    Nelatury, Sudarshan Rao; Singh, Pritpal

    In a recent communication, a method was proposed for extracting the equivalent circuit parameters of a lead-acid battery from sparse (only three) impedance spectroscopy observations at three different frequencies. It was based on an equivalent circuit consisting of a bulk resistance, a reaction resistance and a constant phase element (CPE). Such a circuit is a very appropriate model of a lead-acid cell at high state of charge (SOC). This paper is a sequel that applies the method to nickel/metal hydride (Ni/MH) batteries, which at high SOC are represented by the same circuit configuration. But when the SOC of a Ni/MH battery under interrogation goes low, the EIS curve has a positive slope at the low-frequency end and our technique yields complex values for the otherwise real circuit parameters, suggesting the need for additional elements in the equivalent circuit and a definite relationship between parameter consistency and SOC. To improve the previous algorithm so that it works reasonably well at both high and low SOCs, we propose three more measurements—two at very low frequencies to include the Warburg response and one at a high frequency to model the series inductance, in addition to the three in the mid-frequency band—six measurements in total. In most of today's instrumentation, it is the user who must choose the circuit configuration and the number of frequencies at which impedance should be measured, and the accompanying software performs data fitting by complex nonlinear least squares. The proposed method has built into it an SOC-based decision-making capability—both to choose the circuit configuration and to estimate the values of the circuit elements.
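The three-element circuit underlying the method (a bulk resistance in series with a reaction resistance shunted by a CPE) has a closed-form impedance, sketched below. The component values are illustrative, not fitted to any particular battery.

```python
import numpy as np

def circuit_impedance(freq_hz, r_bulk, r_ct, q_cpe, alpha):
    """Complex impedance of R_bulk in series with (R_ct parallel CPE),
    where the CPE impedance is 1 / (Q * (j*omega)**alpha)."""
    w = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    z_cpe = 1.0 / (q_cpe * (1j * w) ** alpha)
    return r_bulk + (r_ct * z_cpe) / (r_ct + z_cpe)

# Three "sparse" mid-band measurements (illustrative component values, in ohms).
z = circuit_impedance([0.1, 1.0, 10.0], r_bulk=0.02, r_ct=0.05, q_cpe=5.0, alpha=0.8)
```

The limiting behavior is a quick sanity check: at very high frequency the CPE shorts out R_ct and Z approaches r_bulk, while at very low frequency Z approaches r_bulk + r_ct.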

  10. The Predictive Power of Electronic Polarizability for Tailoring the Refractivity of High Index Glasses Optical Basicity Versus the Single Oscillator Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCloy, John S.; Riley, Brian J.; Johnson, Bradley R.

    Four compositions of high density (~8 g/cm3) heavy metal oxide glasses composed of PbO, Bi2O3, and Ga2O3 were produced and refractivity parameters (refractive index and density) were computed and measured. Optical basicity was computed using three different models – average electronegativity, ionic-covalent parameter, and energy gap – and the basicity results were used to compute oxygen polarizability and subsequently refractive index. Refractive indices were measured in the visible and infrared at 0.633 μm, 1.55 μm, 3.39 μm, 5.35 μm, 9.29 μm, and 10.59 μm using a unique prism coupler setup, and data were fitted to the Sellmeier expression to obtain an equation of the dispersion of refractive index with wavelength. Using this dispersion relation, single oscillator energy, dispersion energy, and lattice energy were determined. Oscillator parameters were also calculated for the various glasses from their oxide values as an additional means of predicting index. Calculated dispersion parameters from oxides underestimate the index by 3 to 4%. Predicted glass index from optical basicity, based on component oxide energy gaps, underpredicts the index at 0.633 μm by only 2%, while other basicity scales are less accurate. The predicted energy gap of the glasses based on this optical basicity overpredicts the Tauc optical gap as determined by transmission measurements by 6 to 10%. These results show that for this system, density, refractive index in the visible, and energy gap can be reasonably predicted using only composition, optical basicity values for the constituent oxides, and partial molar volume coefficients. Calculations such as these are useful for a priori prediction of optical properties of glasses.
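The single-oscillator (Wemple-DiDomenico) parameters mentioned above can be extracted from index dispersion by a simple linearization of n² - 1 = E0·Ed/(E0² - E²). A sketch with synthetic data generated from assumed values E0 = 5 eV and Ed = 25 eV (illustrative, not the parameters of these glasses):

```python
import numpy as np

def wemple_didomenico(photon_energy_ev, n):
    """Estimate single-oscillator energy E0 and dispersion energy Ed from
    refractive-index dispersion, using the linearized single-oscillator
    relation 1/(n^2 - 1) = E0/Ed - E^2/(E0*Ed)."""
    x = np.asarray(photon_energy_ev, dtype=float) ** 2
    y = 1.0 / (np.asarray(n, dtype=float) ** 2 - 1.0)
    slope, intercept = np.polyfit(x, y, 1)   # slope = -1/(E0*Ed), intercept = E0/Ed
    e0 = np.sqrt(-intercept / slope)
    ed = e0 / intercept
    return e0, ed

# Synthetic dispersion from assumed E0 = 5.0 eV, Ed = 25.0 eV (sub-gap energies).
energy = np.array([0.5, 0.8, 1.2, 1.6, 2.0])
index = np.sqrt(1.0 + 5.0 * 25.0 / (5.0 ** 2 - energy ** 2))
e0, ed = wemple_didomenico(energy, index)
```

In practice one would feed in the measured indices at the six wavelengths above (converted to photon energies) rather than synthetic points.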

  11. Development and Training of a Neural Controller for Hind Leg Walking in a Dog Robot

    PubMed Central

    Hunt, Alexander; Szczecinski, Nicholas; Quinn, Roger

    2017-01-01

    Animals dynamically adapt to varying terrain and small perturbations with remarkable ease. These adaptations arise from complex interactions between the environment and biomechanical and neural components of the animal's body and nervous system. Research into mammalian locomotion has resulted in several neural and neuro-mechanical models, some of which have been tested in simulation, but few “synthetic nervous systems” have been implemented in physical hardware models of animal systems. One reason is that the implementation into a physical system is not straightforward. For example, it is difficult to make robotic actuators and sensors that model those in the animal. Therefore, even if the sensorimotor circuits were known in great detail, those parameters would not be applicable and new parameter values must be found for the network in the robotic model of the animal. This manuscript demonstrates an automatic method for setting parameter values in a synthetic nervous system composed of non-spiking leaky integrator neuron models. This method works by first using a model of the system to determine required motor neuron activations to produce stable walking. Parameters in the neural system are then tuned systematically such that it produces similar activations to the desired pattern determined using expected sensory feedback. We demonstrate that the developed method successfully produces adaptive locomotion in the rear legs of a dog-like robot actuated by artificial muscles. Furthermore, the results support the validity of current models of mammalian locomotion. This research will serve as a basis for testing more complex locomotion controllers and for testing specific sensory pathways and biomechanical designs. Additionally, the developed method can be used to automatically adapt the neural controller for different mechanical designs such that it could be used to control different robotic systems. PMID:28420977
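A minimal sketch of the non-spiking leaky integrator neuron model named above, integrated with forward Euler. The time constant and step input are illustrative choices, not the tuned parameters of the robot's network.

```python
import numpy as np

def simulate_leaky_integrator(stimulus, tau=0.05, dt=0.001, u0=0.0):
    """Forward-Euler simulation of a non-spiking leaky integrator neuron:
    tau * dU/dt = -U + I(t). The activation U relaxes exponentially
    toward the instantaneous input with time constant tau."""
    u = np.empty(len(stimulus))
    state = u0
    for i, current in enumerate(stimulus):
        state += dt / tau * (-state + current)
        u[i] = state
    return u

# Step input of amplitude 1.0 for 1 s: the activation rises toward 1.0.
response = simulate_leaky_integrator(np.full(1000, 1.0))
```

Parameter tuning of the kind described in the abstract amounts to adjusting quantities like tau and the synaptic weighting of the input so that the simulated activations match the motor-neuron pattern required for stable walking.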

  12. Volumetric breast density measurement: sensitivity analysis of a relative physics approach.

    PubMed

    Lau, Susie; Ng, Kwan Hoong; Abdul Aziz, Yang Faridah

    2016-10-01

    To investigate the sensitivity and robustness of a volumetric breast density (VBD) measurement system to errors in the imaging physics parameters including compressed breast thickness (CBT), tube voltage (kVp), filter thickness, tube current-exposure time product (mAs), detector gain, detector offset and image noise. 3317 raw digital mammograms were processed with Volpara(®) (Matakina Technology Ltd, Wellington, New Zealand) to obtain fibroglandular tissue volume (FGV), breast volume (BV) and VBD. Errors in parameters including CBT, kVp, filter thickness and mAs were simulated by varying them in the Digital Imaging and Communications in Medicine (DICOM) tags of the images up to ±10% of the original values. Errors in detector gain and offset were simulated by varying them in the Volpara configuration file up to ±10% from their default values. For image noise, Gaussian noise was generated and introduced into the original images. Errors in filter thickness, mAs, detector gain and offset had limited effects on FGV, BV and VBD. Significant effects in VBD were observed when CBT, kVp, detector offset and image noise were varied (p < 0.0001). Maximum shifts in the mean (1.2%) and median (1.1%) VBD of the study population occurred when CBT was varied. Volpara was robust to expected clinical variations, with errors in most investigated parameters giving limited changes in results, although extreme variations in CBT and kVp could lead to greater errors. Despite Volpara's robustness, rigorous quality control is essential to keep the parameter errors within reasonable bounds. Volpara appears robust within those bounds, although for more advanced applications, such as tracking density change over time, it remains to be seen how accurate the measures need to be.

  13. Gender differences associated with rearfoot, midfoot, and forefoot kinematics during running.

    PubMed

    Takabayashi, Tomoya; Edama, Mutsuaki; Nakamura, Masatoshi; Nakamura, Emi; Inai, Takuma; Kubo, Masayoshi

    2017-11-01

    Females, as compared with males, have a higher proportion of injuries in the foot region. However, the reason for this gender difference regarding foot injuries remains unclear. This study aimed to investigate gender differences associated with rearfoot, midfoot, and forefoot kinematics during running. Twelve healthy males and 12 females ran on a treadmill. The running speed was set to the speed at which gait changes from walking to running. Three-dimensional kinematics of the rearfoot, midfoot, and forefoot were collected and compared between males and females. Furthermore, spatiotemporal parameters (speed, cadence, and step length) were measured. In the rearfoot angle, females showed a significantly greater peak value of plantarflexion and range of motion in the sagittal plane as compared with males (effect size (ES) = 1.55 and ES = 1.12, respectively). In the midfoot angle, females showed a significantly greater peak value of dorsiflexion and range of motion in the sagittal plane as compared with males (ES = 1.49 and ES = 1.71, respectively). The forefoot peak angles and ranges of motion were not significantly different between the genders in all three planes. A gender-related difference in excessive motion of the lower extremities during running has previously been suggested as a contributing factor to running injuries. Therefore, the present investigation may provide insight into the reason for the high incidence of foot injuries in females.

  14. Combining 3D Hydraulic Tomography with Tracer Tests for Improved Transport Characterization.

    PubMed

    Sanchez-León, E; Leven, C; Haslauer, C P; Cirpka, O A

    2016-07-01

    Hydraulic tomography (HT) is a method for resolving the spatial distribution of hydraulic parameters to some extent, but many details important for solute transport usually remain unresolved. We present a methodology to improve solute transport predictions by combining data from HT with the breakthrough curve (BTC) of a single forced-gradient tracer test. We estimated the three-dimensional (3D) hydraulic-conductivity field in an alluvial aquifer by inverting tomographic pumping tests performed at the Hydrogeological Research Site Lauswiesen close to Tübingen, Germany, using a regularized pilot-point method. We compared the estimated parameter field to available profiles of hydraulic-conductivity variations from direct-push injection logging (DPIL), and validated the hydraulic-conductivity field with hydraulic-head measurements of tests not used in the inversion. After validation, spatially uniform parameters for dual-domain transport were estimated by fitting tracer data collected during a forced-gradient tracer test. The dual-domain assumption was used to parameterize effects of the unresolved heterogeneity of the aquifer and was deemed necessary to fit the shape of the BTC using reasonable parameter values. The estimated hydraulic-conductivity field and transport parameters were subsequently used to successfully predict a second independent tracer test. Our work provides an efficient and practical approach to predicting solute transport in heterogeneous aquifers without performing elaborate field tracer tests with a tomographic layout. © 2015, National Ground Water Association.

  15. Sensor Fusion Based on an Integrated Neural Network and Probability Density Function (PDF) Dual Kalman Filter for On-Line Estimation of Vehicle Parameters and States.

    PubMed

    Vargas-Melendez, Leandro; Boada, Beatriz L; Boada, Maria Jesus L; Gauchia, Antonio; Diaz, Vicente

    2017-04-29

    Vehicles with a high center of gravity (COG), such as light trucks and heavy vehicles, are prone to rollover. This kind of accident causes nearly 33% of all deaths from passenger vehicle crashes. Nowadays, these vehicles are incorporating roll stability control (RSC) systems to improve their safety. Most RSC systems require the vehicle roll angle as a known input variable to predict the lateral load transfer. The vehicle roll angle can be directly measured by a dual-antenna global positioning system (GPS), but this is expensive. For this reason, it is important to estimate the vehicle roll angle from sensors already installed onboard current vehicles. On the other hand, knowledge of the vehicle's parameter values is essential to obtain an accurate vehicle response. Some vehicle parameters cannot be easily obtained, and they can vary over time. In this paper, an algorithm for the simultaneous on-line estimation of the vehicle's roll angle and parameters is proposed. This algorithm uses a probability density function (PDF)-based truncation method in combination with a dual Kalman filter (DKF) to guarantee that both the vehicle's states and parameters are within bounds that have a physical meaning, using the information obtained from sensors mounted on vehicles. Experimental results show the effectiveness of the proposed algorithm.
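The PDF-based truncation step can be illustrated for a single scalar state: after each Kalman update, the Gaussian estimate is replaced by the mean and variance of the corresponding normal distribution truncated to the physical bounds. This is a one-dimensional simplification with illustrative roll-angle numbers, not the paper's full multivariate method.

```python
import math

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncate_estimate(mean, var, lower, upper):
    """Project a scalar Gaussian estimate N(mean, var) onto [lower, upper]
    by taking the mean and variance of the truncated normal."""
    sd = math.sqrt(var)
    a, b = (lower - mean) / sd, (upper - mean) / sd
    z = norm_cdf(b) - norm_cdf(a)
    shift = (norm_pdf(a) - norm_pdf(b)) / z
    t_mean = mean + sd * shift
    t_var = var * (1.0 + (a * norm_pdf(a) - b * norm_pdf(b)) / z - shift ** 2)
    return t_mean, t_var

# A roll-angle estimate of 12 deg (variance 9) constrained to [-10, 10] deg.
m, v = truncate_estimate(12.0, 9.0, -10.0, 10.0)
```

The truncated estimate is pulled back inside the physically meaningful range and its variance shrinks, which is exactly the effect the DKF exploits to keep states and parameters plausible.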

  16. 1-D DC Resistivity Modeling and Interpretation in Anisotropic Media Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Pekşen, Ertan; Yas, Türker; Kıyak, Alper

    2014-09-01

    We examine the one-dimensional direct current method in anisotropic earth formations. We derive an analytic expression for a simple, two-layered anisotropic earth model. Further, we also consider a horizontally layered anisotropic earth response with respect to the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the model parameters of a layered anisotropic earth model, such as horizontal and vertical resistivities and thickness. Particle swarm optimization is a naturally inspired meta-heuristic algorithm. The proposed method finds model parameters quite successfully based on synthetic and field data. However, adding 5% Gaussian noise to the synthetic data increases the ambiguity of the model parameter values. For this reason, the results should be checked with a number of statistical tests. In this study, we use the probability density function within a 95% confidence interval, the parameter variation at each iteration, and the frequency distribution of the model parameters to reduce the ambiguity. The results are promising, and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.
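A minimal particle swarm optimizer in the spirit described above. The inertia and acceleration coefficients, the bounds, and the toy quadratic misfit standing in for the forward-modeling residual are all illustrative choices, not the paper's configuration.

```python
import random

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm minimizer over a bounded parameter vector,
    e.g. [horizontal resistivity, vertical resistivity, thickness]."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                lo, hi = bounds[d]
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)   # clamp to bounds
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < objective(gbest):
                    gbest = pos[i][:]
    return gbest

# Toy misfit with a known minimum at (100 ohm-m, 400 ohm-m, 10 m).
target = (100.0, 400.0, 10.0)
misfit = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
best = pso(misfit, bounds=[(1.0, 1000.0), (1.0, 1000.0), (1.0, 50.0)])
```

In the real inversion the misfit would compare measured apparent resistivities against the quasi-analytic forward response, and repeated runs would feed the statistical checks the abstract describes.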

  17. Sensor Fusion Based on an Integrated Neural Network and Probability Density Function (PDF) Dual Kalman Filter for On-Line Estimation of Vehicle Parameters and States

    PubMed Central

    Vargas-Melendez, Leandro; Boada, Beatriz L.; Boada, Maria Jesus L.; Gauchia, Antonio; Diaz, Vicente

    2017-01-01

    Vehicles with a high center of gravity (COG), such as light trucks and heavy vehicles, are prone to rollover. This kind of accident causes nearly 33% of all deaths from passenger vehicle crashes. Nowadays, these vehicles are incorporating roll stability control (RSC) systems to improve their safety. Most RSC systems require the vehicle roll angle as a known input variable to predict the lateral load transfer. The vehicle roll angle can be directly measured by a dual-antenna global positioning system (GPS), but this is expensive. For this reason, it is important to estimate the vehicle roll angle from sensors already installed onboard current vehicles. On the other hand, knowledge of the vehicle’s parameter values is essential to obtain an accurate vehicle response. Some vehicle parameters cannot be easily obtained, and they can vary over time. In this paper, an algorithm for the simultaneous on-line estimation of the vehicle’s roll angle and parameters is proposed. This algorithm uses a probability density function (PDF)-based truncation method in combination with a dual Kalman filter (DKF) to guarantee that both the vehicle’s states and parameters are within bounds that have a physical meaning, using the information obtained from sensors mounted on vehicles. Experimental results show the effectiveness of the proposed algorithm. PMID:28468252

  18. Weibull analysis of fracture test data on bovine cortical bone: influence of orientation.

    PubMed

    Khandaker, Morshed; Ekwaro-Osire, Stephen

    2013-01-01

    The fracture toughness, KIC, of cortical bone has been experimentally determined by several researchers. The variation in KIC values arises from variation in specimen orientation, shape, and size during the experiment. The fracture toughness of cortical bone is governed by the severest flaw and, hence, may be analyzed using Weibull statistics. To the best of the authors' knowledge, however, no studies of this aspect have been published. The motivation of the study is the evaluation of Weibull parameters in the circumferential-longitudinal (CL) and longitudinal-circumferential (LC) directions. We hypothesized that the Weibull parameters vary depending on the bone microstructure. In the present work, a two-parameter Weibull statistical model was applied to calculate the plane-strain fracture toughness of bovine femoral cortical bone using specimens extracted in the CL and LC directions of the bone. It was found that the Weibull modulus of fracture toughness was larger for CL specimens than for LC specimens, but the opposite trend was seen for the characteristic fracture toughness. The reason for these trends lies in the microstructural and extrinsic toughening mechanism differences between the CL and LC directions of the bone. The Weibull parameters found in this study can be applied to develop a damage-mechanics model for bone.
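A two-parameter Weibull fit (modulus m and characteristic value eta) can be obtained by median-rank regression on the linearized CDF; the abstract does not state which estimator the authors used, so treat this as one common approach. The toughness values in the example are hypothetical.

```python
import math

def fit_weibull(values):
    """Two-parameter Weibull fit by median-rank regression on the
    linearized CDF: ln(-ln(1 - F)) = m*ln(x) - m*ln(eta), with the
    Bernard median-rank approximation F_i = (i - 0.3) / (n + 0.4)."""
    xs = sorted(values)
    n = len(xs)
    pts = [(math.log(x), math.log(-math.log(1.0 - (i + 0.7) / (n + 0.4))))
           for i, x in enumerate(xs)]
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts)
    m = sxy / sxx                       # Weibull modulus (slope)
    eta = math.exp(mx - my / m)         # characteristic value (from intercept)
    return m, eta

# Hypothetical K_IC values (MPa*sqrt(m)) for one specimen orientation.
tough = [4.2, 4.8, 5.1, 5.5, 5.9, 6.3, 6.8]
m, eta = fit_weibull(tough)
```

A larger m means less scatter in toughness; eta is the value at which the failure probability reaches about 63.2%, matching the "characteristic fracture toughness" discussed above.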

  19. A semi-mechanistic model of CP-690,550-induced reduction in neutrophil counts in patients with rheumatoid arthritis.

    PubMed

    Gupta, Pankaj; Friberg, Lena E; Karlsson, Mats O; Krishnaswami, Sriram; French, Jonathan

    2010-06-01

    CP-690,550, a selective inhibitor of the Janus kinase family, is being developed as an oral disease-modifying antirheumatic drug for the treatment of rheumatoid arthritis (RA). A semi-mechanistic model was developed to characterize the time course of drug-induced absolute neutrophil count (ANC) reduction in a phase 2a study. Data from 264 RA patients receiving 6-week treatment (placebo, 5, 15, 30 mg bid) followed by a 6-week off-treatment period were analyzed. The model included a progenitor cell pool, a maturation chain comprising transit compartments, a circulation pool, and a feedback mechanism. The model was adequately described by system parameters (BASE(h), ktr(h), gamma, and k(circ)), disease effect parameters (DIS), and drug effect parameters (k(off) and k(D)). The disease manifested as an increase in baseline ANC and reduced maturation time due to increased demand from the inflammation site. The drug restored the perturbed system parameters to their normal values via an indirect mechanism. ANC reduction due to a direct myelosuppressive drug effect was not supported. The final model successfully described the dose- and time-dependent changes in ANC and predicted the incidence of neutropenia at different doses reasonably well.
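The compartment structure described above (progenitor pool, transit chain, circulating pool, feedback) resembles the well-known Friberg-type myelosuppression model. A forward-Euler sketch with illustrative parameter values, not the study's fitted estimates, and a constant fractional drug effect standing in for the paper's binding-kinetics (k_off, k_D) drug model:

```python
def simulate_anc(e_drug, circ0=5.0, ktr=0.5, kcirc=0.3, gamma=0.16, dt=0.01, days=84):
    """Forward-Euler integration of a Friberg-type myelosuppression model:
    progenitor pool, three transit compartments, circulating pool, and
    feedback (circ0/circ)**gamma. e_drug(t) in [0, 1) scales down
    proliferation; all parameter values are illustrative."""
    prol = t1 = t2 = t3 = kcirc * circ0 / ktr   # steady-state initial values
    circ = circ0
    anc = []
    for k in range(int(days / dt)):
        t = k * dt
        dp = ktr * prol * (1.0 - e_drug(t)) * (circ0 / circ) ** gamma - ktr * prol
        d1 = ktr * (prol - t1)
        d2 = ktr * (t1 - t2)
        d3 = ktr * (t2 - t3)
        dc = ktr * t3 - kcirc * circ
        prol += dt * dp
        t1 += dt * d1
        t2 += dt * d2
        t3 += dt * d3
        circ += dt * dc
        anc.append(circ)
    return anc

# 6 weeks of a constant fractional drug effect, then 6 weeks off-treatment.
anc = simulate_anc(lambda t: 0.2 if t < 42.0 else 0.0)
```

With the drug effect switched on, circulating counts fall with a delay set by the transit chain and then recover after treatment stops, mirroring the dose- and time-dependent ANC changes the model was built to describe.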

  20. Interpreting the 750 GeV diphoton excess by the singlet extension of the Manohar-Wise model

    NASA Astrophysics Data System (ADS)

    Cao, Junjie; Han, Chengcheng; Shang, Liangliang; Su, Wei; Yang, Jin Min; Zhang, Yang

    2016-04-01

    The evidence of a new scalar particle X from the 750 GeV diphoton excess, and the absence of any other signal of new physics at the LHC so far, suggest the existence of new colored scalars, which may be moderately light and can thus induce sizable Xgg and Xγγ couplings without resorting to very strong interactions. Motivated by this speculation, we extend the Manohar-Wise model by adding one gauge singlet scalar field. The resulting theory then predicts one singlet-dominated scalar ϕ as well as three kinds of color-octet scalars, which can mediate through loops the ϕgg and ϕγγ interactions. After fitting the model to the diphoton data at the LHC, we find that in reasonable parameter regions the excess can be explained at the 1σ level by the process gg → ϕ → γγ, and the best points predict the central value of the excess rate with χ²min = 2.32, which corresponds to a p-value of 0.68. We also consider the constraints from various LHC Run I signals, and we conclude that, although these constraints are powerful in excluding part of the parameter space of the model, the best points are still experimentally allowed.

  1. Sensor data monitoring and decision level fusion scheme for early fire detection

    NASA Astrophysics Data System (ADS)

    Rizogiannis, Constantinos; Thanos, Konstantinos Georgios; Astyakopoulos, Alkiviadis; Kyriazanos, Dimitris M.; Thomopoulos, Stelios C. A.

    2017-05-01

    The aim of this paper is to present the sensor monitoring and decision-level fusion scheme for early fire detection developed in the context of the AF3 Advanced Forest Fire Fighting European FP7 research project, adopted specifically in the OCULUS-Fire control and command system and tested during a firefighting field test in Greece with a prescribed real fire, generating early-warning detection alerts and notifications. For this purpose, and in order to improve the reliability of the fire detection system, a two-level fusion scheme is developed exploiting a variety of observation solutions from the air (e.g., UAV infrared cameras), the ground (e.g., meteorological and atmospheric sensors) and ancillary sources (e.g., public information channels, citizens' smartphone applications and social media). In the first level, a change-point detection technique is applied to detect changes in the mean value of each parameter measured by the ground sensors, such as temperature, humidity and CO2, and then the Rate-of-Rise of each changed parameter is calculated. In the second level, the fire event Basic Probability Assignment (BPA) function is determined for each ground sensor using Fuzzy-logic theory, and then the corresponding mass values are combined in a decision-level fusion process using Evidential Reasoning theory to estimate the final fire event probability.
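The second-level combination of per-sensor mass values can be sketched with Dempster's rule, a standard combination rule in evidential reasoning. The BPAs below are illustrative, not the system's actual fuzzy-derived values.

```python
from itertools import product

def combine_dempster(m1, m2):
    """Dempster's rule of combination for two mass functions over a frame
    of discernment; hypotheses are represented as frozensets, and mass on
    conflicting (disjoint) pairs is redistributed by normalization."""
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb
    scale = 1.0 - conflict
    return {s: v / scale for s, v in combined.items()}

FIRE, NOFIRE = frozenset({"fire"}), frozenset({"no_fire"})
THETA = FIRE | NOFIRE   # total ignorance
# Illustrative BPAs from two ground sensors (e.g., temperature and CO2).
m_temp = {FIRE: 0.6, NOFIRE: 0.1, THETA: 0.3}
m_co2 = {FIRE: 0.5, NOFIRE: 0.2, THETA: 0.3}
fused = combine_dempster(m_temp, m_co2)
```

Two sensors that each lean toward "fire" reinforce one another: the fused mass on FIRE exceeds either input mass, which is the behavior an early-warning fusion stage relies on.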

  2. Estimation of Scale Deposition in the Water Walls of an Operating Indian Coal Fired Boiler: Predictive Modeling Approach Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Kumari, Amrita; Das, Suchandan Kumar; Srivastava, Prem Kumar

    2016-04-01

    The application of computational intelligence to predicting industrial processes is in extensive use in various industrial sectors, including the power sector. An ANN model using the multi-layer perceptron approach is proposed in this paper to predict the deposition behavior of oxide scale on the waterwall tubes of a coal-fired boiler. The input parameters comprise boiler water chemistry and associated operating parameters, such as pH, alkalinity, total dissolved solids, specific conductivity, iron and dissolved oxygen concentration of the feed water, and local heat flux on the boiler tube. An efficient gradient-based network optimization algorithm has been employed to minimize neural prediction errors. The effects of heat flux, iron content, pH, the concentration of total dissolved solids in the feed water, and other operating variables on the scale deposition behavior have been studied. It has been observed that heat flux, iron content and pH of the feed water have the strongest influence on the rate of oxide scale deposition in the water walls of an Indian boiler. Reasonably good agreement between the ANN model predictions and the measured values of the oxide scale deposition rate has been observed, corroborated by the regression fit between these values.

  3. Estimating the spin diffusion length and the spin Hall angle from spin pumping induced inverse spin Hall voltages

    NASA Astrophysics Data System (ADS)

    Roy, Kuntal

    2017-11-01

    There exists considerable confusion in estimating the spin diffusion length of materials with high spin-orbit coupling from spin pumping experiments. For designing functional devices, it is important to determine the spin diffusion length with sufficient accuracy from experimental results. An inaccurate estimation of the spin diffusion length also affects the estimation of other parameters (e.g., spin mixing conductance, spin Hall angle) concomitantly. The spin diffusion length for platinum (Pt) has been reported in the literature over a wide range of 0.5-14 nm, usually as a constant value independent of the Pt thickness. Here, the key reasons behind such a wide range of reported values of the spin diffusion length are identified comprehensively. In particular, it is shown that a thickness-dependent conductivity and spin diffusion length are necessary to simultaneously match the experimental results for the effective spin mixing conductance and the inverse spin Hall voltage due to spin pumping. Such a thickness-dependent spin diffusion length is tantamount to the Elliott-Yafet spin relaxation mechanism, as expected for transition metals. This conclusion is not altered even when there is significant interfacial spin memory loss. Furthermore, the variations in the estimated parameters are also studied, which is important for technological applications.

  4. Extended tests of an SU(3) partial dynamical symmetry

    DOE PAGES

    Couture, Aaron Joseph; Casten, Richard F.; Cakirli, R. B.

    2015-01-16

    Background: A recent survey of well-deformed rare earth nuclei showed that B(E2) values from the γ band to the ground band could be explained rather well by a parameter-free description in terms of a partial dynamical symmetry (PDS). Purpose: Our purpose in this paper is to extend this study to deformed and transitional nuclei in the actinide and A ~ 100 regions to determine if the success of the PDS description is general in medium- and heavy-mass nuclei and to investigate further where it breaks down. Method: As with the previous study, we study the empirical relative B(E2: γ to ground) values in comparison to a pure rotor (Alaga) model and to the SU(3) PDS. Results: The data for the actinides, albeit sparser than in the rare-earth region, are reasonably well accounted for by the PDS but with systematic discrepancies. For the Mo isotopes, the PDS improves on the Alaga rules but largely fails to account for the data. Conclusions: As in the rare earths, the parameter-free PDS gives improved predictions compared to the Alaga rules for the actinides. The differences between the PDS predictions and the data are shown to point directly to specific mixing effects. Finally, in the Mo isotopes, their transitional character is directly seen in the large deviations of the B(E2) values from the PDS in the direction of the selection rules of the vibrator.

  5. Thermophysical Parameters of Organic PCM Coconut Oil from T-History Method and Its Potential as Thermal Energy Storage in Indonesia

    NASA Astrophysics Data System (ADS)

    Silalahi, Alfriska O.; Sukmawati, Nissa; Sutjahja, I. M.; Kurnia, D.; Wonorahardjo, S.

    2017-07-01

    The thermophysical parameters of the organic phase change material (PCM) coconut oil (co_oil) have been studied by analyzing temperature-versus-time data during the liquid-solid phase transition (solidification) using the T-history method, adopting both the original version and a modified form to extract the mean specific heats of solid and liquid co_oil and the heat of fusion associated with the phase transition. We found that the liquid-solid phase transition occurs rather gradually, which might be because co_oil consists of many kinds of fatty acids, the largest fraction being lauric acid (about 50%), with a relatively small degree of supercooling. For this reason, the end of the phase-transition region becomes smeared out, although the inflection point in the temperature derivative is clearly observed, signifying the drastic temperature variation between the phase-transition and solid-phase periods. The data yield mean specific heats of solid and liquid co_oil comparable to those of pure lauric acid, while the heat of fusion resembles the DSC result, both taken from reference data. The advantage of co_oil as a potential sensible and latent TES medium for room-temperature conditioning applications in Indonesia is discussed in terms of its rather broad working temperature range, a consequence of its mixture composition.

  6. Impact of removable partial denture prosthesis on chewing efficiency

    PubMed Central

    BESSADET, Marion; NICOLAS, Emmanuel; SOCHAT, Marine; HENNEQUIN, Martine; VEYRUNE, Jean-Luc

    2013-01-01

Removable partial denture prostheses are still being used for anatomic, medical and economic reasons. However, their impact on chewing parameters is poorly described. Objectives: The objective of this study was to estimate the impact of removable partial denture prostheses on masticatory parameters. Material and Methods: Nineteen removable partial denture prosthesis (RPDP) wearers participated in the study. Among them, 10 subjects were Kennedy Class III partially edentulous and 9 had posterior edentulism (Class I). All presented a complete, fully dentate opposing arch. The subjects chewed samples of carrots and peanuts with and without their prosthesis. The granulometry of the expectorated carrot and peanut boluses was characterized by the median particle size (D50), determined at the natural point of swallowing. The number of chewing cycles (CC), chewing time (CT) and chewing frequency (CF=CC/CT) were video recorded. Results: With RPDP, the mean D50 values for carrot and peanuts were lower [Repeated Model Procedures (RMP), F=15, p<0.001] regardless of the Kennedy Class. For each food, the mean CC, CT and CF values decreased (RMP, F=18, F=9, and F=20 respectively, p<0.01). With or without RPDP, the bolus granulometry values were above the masticatory normative index (MNI) of 4,000 µm. Conclusion: RPDP rehabilitation improves the ability to reduce bolus particle size, but does not fully reestablish masticatory function. Clinical relevance: This study encourages the clinical improvement of oral rehabilitation procedures. PMID:24212983
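The D50 statistic used above is simply the particle diameter at which the cumulative mass distribution of the bolus reaches 50%. A minimal sketch of that computation (the function name, sieve sizes and mass fractions in the example are illustrative, not data from this study):

```python
import numpy as np

def d50(sizes_um, mass_fractions):
    """Median particle size: diameter at which cumulative mass reaches 50%."""
    order = np.argsort(sizes_um)
    s = np.asarray(sizes_um, dtype=float)[order]
    cum = np.cumsum(np.asarray(mass_fractions, dtype=float)[order])
    cum = cum / cum[-1]                 # normalize to a fraction of total mass
    return float(np.interp(0.5, cum, s))

# example: most mass retained on the 4000 um sieve
print(d50([1000, 2000, 4000, 8000], [0.1, 0.2, 0.4, 0.3]))   # 3000.0
```

Values above the 4,000 µm normative index, as reported for both conditions in the study, indicate a coarser-than-normal bolus at the point of swallowing.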

  7. Lattice-dynamical model for the filled skutterudite LaFe4Sb12: Harmonic and anharmonic couplings

    NASA Astrophysics Data System (ADS)

    Feldman, J. L.; Singh, D. J.; Bernstein, N.

    2014-06-01

    The filled skutterudite LaFe4Sb12 shows greatly reduced thermal conductivity compared to that of the related unfilled compound CoSb3, although the microscopic reasons for this are unclear. We calculate harmonic and anharmonic force constants for the interaction of the La filler atom with the framework atoms. We find that force constants show a general trend of decaying rapidly with distance and are very small for the interaction of the La with its next-nearest-neighbor Sb and nearest-neighbor La. However, a few rather long-range interactions, such as with the next-nearest-neighbor La and with the third neighbor Sb, are surprisingly strong, although still small. We test the central-force approximation and find significant deviations from it. Using our force constants we calculate a bare La mode Gruneisen parameter and find a value of 3-4, substantially higher than values associated with cage atom anharmonicity, i.e., a value of about 1 for CoSb3 but much smaller than a previous estimate [Bernstein et al., Phys. Rev. B 81, 134301 (2010), 10.1103/PhysRevB.81.134301]. This latter difference is primarily due to the previously used overestimate of the La-Fe cubic force constants. We also find a substantial negative contribution to this bare La Gruneisen parameter from the aforementioned third-neighbor La-Sb interaction. Our results underscore the need for rather long-range interactions in describing the role of anharmonicity on the dynamics in this material.

  8. Predicting the cosmological constant with the scale-factor cutoff measure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Simone, Andrea; Guth, Alan H.; Salem, Michael P.

    2008-09-15

It is well known that anthropic selection from a landscape with a flat prior distribution of the cosmological constant Λ gives a reasonable fit to observation. However, a realistic model of the multiverse has a physical volume that diverges with time, and the predicted distribution of Λ depends on how the spacetime volume is regulated. A very promising method of regulation uses a scale-factor cutoff, which avoids a number of serious problems that arise in other approaches. In particular, the scale-factor cutoff avoids the 'youngness problem' (high probability of living in a much younger universe) and the 'Q and G catastrophes' (high probability for the primordial density contrast Q and gravitational constant G to have extremely large or small values). We apply the scale-factor cutoff measure to the probability distribution of Λ, considering both positive and negative values. The results are in good agreement with observation. In particular, the scale-factor cutoff strongly suppresses the probability for values of Λ that are more than about 10 times the observed value. We also discuss qualitatively the prediction for the density parameter Ω, indicating that with this measure there is a possibility of detectable negative curvature.

  9. Chemical composition, nutritive value, and toxicological evaluation of Bauhinia cheilantha seeds: a legume from semiarid regions widely used in folk medicine.

    PubMed

    Teixeira, Daniel Câmara; Farias, Davi Felipe; Carvalho, Ana Fontenele Urano; Arantes, Mariana Reis; Oliveira, José Tadeu Abreu; Sousa, Daniele Oliveira Bezerra; Pereira, Mirella Leite; Oliveira, Hermogenes David; Andrade-Neto, Manoel; Vasconcelos, Ilka Maria

    2013-01-01

Among the Bauhinia species, B. cheilantha stands out for its seed protein content. However, there is no record of its nutritional value, and the species is used in a nonsustainable way in folk medicine and for large-scale extraction of timber. The aim of this study was to investigate the food potential of B. cheilantha seeds, with emphasis on their protein quality, to support flora conservation and use as a raw material or as a prototype for the development of bioproducts with high added socioeconomic value. B. cheilantha seeds show a high protein content (35.9%), a reasonable essential amino acid profile, low levels of antinutritional compounds, and nutritional parameters comparable to those of widely used legumes such as soybean and cowpea. Heat treatment of the seeds, as well as the protein extraction process (to obtain the protein concentrate), increased the acceptance of diets by about 100% compared to the raw Bc diet. These wild legume seeds can be a promising alternative food source to overcome the malnutrition faced by low-income people, adding socioeconomic value to the species.

  10. Chemical Composition, Nutritive Value, and Toxicological Evaluation of Bauhinia cheilantha Seeds: A Legume from Semiarid Regions Widely Used in Folk Medicine

    PubMed Central

    Teixeira, Daniel Câmara; Farias, Davi Felipe; Carvalho, Ana Fontenele Urano; Arantes, Mariana Reis; Oliveira, José Tadeu Abreu; Sousa, Daniele Oliveira Bezerra; Pereira, Mirella Leite; Oliveira, Hermogenes David; Andrade-Neto, Manoel; Vasconcelos, Ilka Maria

    2013-01-01

Among the Bauhinia species, B. cheilantha stands out for its seed protein content. However, there is no record of its nutritional value, and the species is used in a nonsustainable way in folk medicine and for large-scale extraction of timber. The aim of this study was to investigate the food potential of B. cheilantha seeds, with emphasis on their protein quality, to support flora conservation and use as a raw material or as a prototype for the development of bioproducts with high added socioeconomic value. B. cheilantha seeds show a high protein content (35.9%), a reasonable essential amino acid profile, low levels of antinutritional compounds, and nutritional parameters comparable to those of widely used legumes such as soybean and cowpea. Heat treatment of the seeds, as well as the protein extraction process (to obtain the protein concentrate), increased the acceptance of diets by about 100% compared to the raw Bc diet. These wild legume seeds can be a promising alternative food source to overcome the malnutrition faced by low-income people, adding socioeconomic value to the species. PMID:23691507

  11. On the possibility of an alpha-sq omega-type dynamo in a thin layer inside the sun

    NASA Technical Reports Server (NTRS)

    Choudhuri, Arnab Rai

    1990-01-01

If the solar dynamo operates in a thin layer of 10,000-km thickness at the interface between the convection zone and the radiative core, then using the facts that the dynamo should have a period of 22 years and a half-wavelength of 40 deg in the theta-direction, it is possible to impose restrictions on the values which various dynamo parameters are allowed to have. It is pointed out that the dynamo should be of alpha-sq omega nature, and kinematical calculations are presented for free dynamo waves and for dynamos in thin rectangular slabs with appropriate boundary conditions. An alpha-sq omega dynamo is expected to produce a significant poloidal field which does not leak to the solar surface. It is found that the turbulent diffusivity eta and the alpha-coefficient are restricted to values within about a factor of 10, the median values being eta of about 10 to the 10th sq cm/sec and alpha of about 10 cm/sec. On the basis of mixing length theory, it is pointed out that such values imply a reasonable turbulent velocity of the order of 30 m/s, but rather small turbulent length scales of about 300 km.

  12. Perturbations and gradients as fundamental tests for modeling the soil carbon cycle

    NASA Astrophysics Data System (ADS)

    Bond-Lamberty, B. P.; Bailey, V. L.; Becker, K.; Fansler, S.; Hinkle, C.; Liu, C.

    2013-12-01

An important step in matching process-level knowledge to larger-scale measurements and model results is to challenge those models with site-specific perturbations and/or changing environmental conditions. Here we subject modified versions of an ecosystem process model to two stringent tests: replicating a long-term dryland climate-change experiment (Rattlesnake Mountain) and partitioning the carbon fluxes of a soil drainage gradient in the northern Everglades (Disney Wilderness Preserve, DWP). For both sites, on-site measurements were supplemented by laboratory incubations of soil columns. We used a parameter-space search algorithm to optimize, within observational limits, the model's influential inputs, so that the spun-up carbon stocks and fluxes matched observed values. Modeled carbon fluxes (net primary production and net ecosystem exchange) agreed with measured values within observational error limits, but the model's partitioning of soil fluxes (autotrophic versus heterotrophic) did not match laboratory measurements from either site. Accounting for site heterogeneity at DWP, modeled carbon exchange was reasonably consistent with values from eddy covariance. We discuss the implications of this work for ecosystem- to global-scale modeling of ecosystems in a changing climate.

  13. Slope Estimation in Noisy Piecewise Linear Functions

    PubMed Central

    Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy

    2014-01-01

    This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure. PMID:25419020
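The MAP estimation over discretized slopes described above can be illustrated as a Viterbi-style dynamic program: hidden states are quantized slope levels, transitions penalize slope changes, and each sample increment is a Gaussian observation of the current slope. This is a generic sketch of that idea, not the authors' MAPSlope implementation; the function name, the stay-probability prior and the parameter values below are assumptions:

```python
import numpy as np

def map_slopes(y, dx, slope_grid, sigma, p_stay=0.95):
    """Viterbi-style MAP estimate of a piecewise-constant slope sequence.

    y: noisy samples of a piecewise linear function on a uniform grid.
    slope_grid: discretized candidate slope values (known bounds).
    sigma: Gaussian noise standard deviation of the increments.
    p_stay: prior probability that the slope does not change between samples.
    """
    d = np.diff(y)                       # increments carry the slope information
    K, N = len(slope_grid), len(d)
    # log transition matrix: favor keeping the current slope
    logA = np.full((K, K), np.log((1 - p_stay) / (K - 1)))
    np.fill_diagonal(logA, np.log(p_stay))
    # log emission: increment ~ Normal(slope * dx, sigma^2)
    logB = -0.5 * ((d[None, :] - slope_grid[:, None] * dx) / sigma) ** 2
    # dynamic program over slope states
    V = logB[:, 0].copy()
    back = np.zeros((N, K), dtype=int)
    for t in range(1, N):
        scores = V[:, None] + logA          # scores[i, j]: come from i, go to j
        back[t] = np.argmax(scores, axis=0)
        V = scores[back[t], np.arange(K)] + logB[:, t]
    # backtrack the best path
    path = np.empty(N, dtype=int)
    path[-1] = int(np.argmax(V))
    for t in range(N - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return slope_grid[path]
```

The transition prior plays the role of the breakpoint model: a small change probability suppresses spurious slope switches driven by noise, while the per-sample Gaussian term locks segments onto the correct quantized level.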

  14. Slope Estimation in Noisy Piecewise Linear Functions.

    PubMed

    Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy

    2015-03-01

    This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure.

  15. Electrostatics of cysteine residues in proteins: Parameterization and validation of a simple model

    PubMed Central

    Salsbury, Freddie R.; Poole, Leslie B.; Fetrow, Jacquelyn S.

    2013-01-01

    One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. PMID:22777874
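Why capturing shifted pKas matters functionally can be seen from the Henderson-Hasselbalch relation: the fraction of reactive thiolate at physiological pH changes by orders of magnitude with the pKa. A small illustration (the pKa values are typical literature figures, not results calculated in this paper):

```python
import math

def thiolate_fraction(pKa, pH):
    """Fraction of a cysteine thiol deprotonated (Henderson-Hasselbalch)."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

# a typical cysteine thiol (pKa near 8.5) vs a perturbed, reactive one (pKa near 5)
print(thiolate_fraction(8.5, 7.0))   # ~ 0.03
print(thiolate_fraction(5.0, 7.0))   # ~ 0.99
```

A pKa shift of a few units thus converts a mostly protonated, unreactive thiol into an almost fully deprotonated, reactive thiolate, which is why electrostatic pKa predictions for perturbed cysteines are worth validating carefully.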

  16. Interaction model between capsule robot and intestine based on nonlinear viscoelasticity.

    PubMed

    Zhang, Cheng; Liu, Hao; Tan, Renjia; Li, Hongyi

    2014-03-01

The active capsule endoscope, which could also be called a capsule robot, has developed from laboratory research to clinical application. However, the system still has defects, such as poor controllability and failure to perform automatic examinations. The imperfection of the interaction model between the capsule robot and the intestine is one of the dominant reasons for these problems. This article aims to establish a model to support the control method of the capsule robot. The model is based on nonlinear viscoelasticity; its interaction force consists of environmental resistance, viscous resistance and Coulomb friction. The parameters of the model are identified by experimental investigation, with different methods used to obtain values of the same parameter at different velocities. The model is shown to be valid by experimental verification. The contribution of this article is an attempted refinement of the interaction model, which it is hoped can optimize the control method of the capsule robot in the future.

  17. Air condition sensor on KNX network

    NASA Astrophysics Data System (ADS)

    Gecova, Katerina; Vala, David; Slanina, Zdenek; Walendziuk, Wojciech

    2017-08-01

Besides managing the indoor environment, one of the main goals of modern buildings is to save energy. This raises demands for preventing energy loss, whether from inefficient use of a building's available functions or from heat leakage. Reducing heat loss through perfectly tight doors and windows, however, restricts natural ventilation, which leads to a gradual deterioration of indoor environmental quality. This in turn has a very significant impact on human health: in a closed, poorly ventilated space, occupants experience increasing carbon dioxide concentration, temperature and humidity, which burden the human thermoregulation system, increase fatigue and cause restlessness. It is therefore necessary to monitor these parameters and then control them so as to ensure stable values optimal for humans. The aim of this work is the design and implementation of a sensor module able to measure these parameters, allowing subsequent regulation of indoor environmental quality.

  18. Earth Tide Analysis Specifics in Case of Unstable Aquifer Regime

    NASA Astrophysics Data System (ADS)

    Vinogradov, Evgeny; Gorbunova, Ella; Besedina, Alina; Kabychenko, Nikolay

    2017-06-01

We consider the main factors that affect underground water flow, including aquifer supply, collector state, and the passage of seismic waves from distant earthquakes. In geodynamically stable conditions, changes in underground inflow can significantly distort the hydrogeological response to Earth tides, leading to incorrect estimation of the phase shift between the tidal harmonics of ground displacement and of water-level variations in a wellbore. Besides an original approach to phase-shift estimation that yields one value per day for the semidiurnal M2 wave, we offer an empirical method for excluding periods that are strongly affected by high inflow. In spite of rather strong ground motion during the passage of earthquake waves, we did not observe a corresponding phase-shift change against the background of significant recurrent variations due to fluctuating inflow. Though inflow variation is not the only parameter that must be taken into consideration in phase-shift analysis, permeability estimation is not adequate without correction for background alterations of aquifer parameters due to natural and anthropogenic causes.

  19. Application of multivariable search techniques to the optimization of airfoils in a low speed nonlinear inviscid flow field

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Merz, A. W.

    1975-01-01

Multivariable search techniques are applied to a particular class of airfoil optimization problems: the maximization of lift and the minimization of disturbance pressure magnitude in an inviscid nonlinear flow field. A variety of multivariable search techniques contained in an existing nonlinear optimization code, AESOP, are applied to this design problem. These techniques include elementary single-parameter perturbation methods, organized searches such as steepest-descent, quadratic, and Davidon methods, randomized procedures, and a generalized search acceleration technique. The airfoil design variables are seven in number and define perturbations to the profile of an existing NACA airfoil. The relative efficiencies of the techniques are compared. It is shown that elementary one-parameter-at-a-time and random techniques compare favorably with organized searches in the class of problems considered. It is also shown that significant reductions in disturbance pressure magnitude can be made while retaining reasonable lift coefficient values at low free-stream Mach numbers.

  20. Cryptographic robustness of practical quantum cryptography: BB84 key distribution protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molotkov, S. N.

    2008-07-15

In real fiber-optic quantum cryptography systems, the avalanche photodiodes are not perfect, the source of quantum states is not a single-photon one, and the communication channel is lossy. For these reasons, key distribution is impossible under certain conditions for the system parameters. A simple analysis is performed to find relations between the parameters of real cryptography systems and the length of the quantum channel that guarantee secure quantum key distribution when the eavesdropper's capabilities are limited only by fundamental laws of quantum mechanics while the devices employed by the legitimate users are based on current technologies. Critical values are determined for the rate of secure real-time key generation that can be reached under the current technology level. Calculations show that the upper bound on channel length can be as high as 300 km for imperfect photodetectors (avalanche photodiodes) with present-day quantum efficiency (η ≈ 20%) and dark count probability (p_dark ~ 10^-7).
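The quoted channel-length bound can be reproduced to order of magnitude with a back-of-envelope loss budget: secure operation requires signal detections to outpace dark counts. A sketch under assumed values for the source intensity and fiber loss (only the quantum efficiency and dark-count probability come from the abstract; the rest are typical figures, and the paper's full analysis also accounts for eavesdropping strategies):

```python
import math

# Back-of-envelope channel-length bound: raw signal detections must stay
# above the dark-count level.
eta = 0.2        # detector quantum efficiency (from the abstract)
p_dark = 1e-7    # dark-count probability per gate (from the abstract)
mu = 0.1         # mean photon number of a weak coherent pulse (assumed)
alpha = 0.2      # fiber attenuation in dB/km at 1550 nm (typical value)

def p_signal(L_km):
    """Signal detection probability per pulse after L_km of fiber."""
    return mu * eta * 10 ** (-alpha * L_km / 10)

# length at which signal clicks fall to the dark-count level
L_max = (10 / alpha) * math.log10(mu * eta / p_dark)
print(round(L_max))   # 265 km, the same order as the 300 km bound above
```

Beyond roughly this distance, dark counts dominate the detections and key distillation fails, consistent with the bound quoted in the abstract.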

  1. Robust automatic measurement of 3D scanned models for the human body fat estimation.

    PubMed

    Giachetti, Andrea; Lovato, Christian; Piscitelli, Francesco; Milanese, Chiara; Zancanaro, Carlo

    2015-03-01

In this paper, we present an automatic tool for estimating geometrical parameters from 3-D human scans, independent of pose and robust against topological noise. It is based on an automatic segmentation of body parts exploiting curve-skeleton processing and ad hoc heuristics able to remove problems due to different acquisition poses and body types. The software is able to locate the body trunk and limbs, detect their directions, and compute parameters such as volumes, areas, girths, and lengths. Experimental results demonstrate that measurements provided by our system on 3-D body scans of normal and overweight subjects acquired in different poses are highly correlated with the body fat estimates obtained on the same subjects with dual-energy X-ray absorptiometry (DXA) scanning. In particular, maximal lengths and girths, which do not require precise localization of anatomical landmarks, demonstrate a good correlation (up to 96%) with body fat and trunk fat. Regression models based on our automatic measurements can be used to predict body fat values reasonably well.

  2. Statistical distributions of extreme dry spell in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Zin, Wan Zawiah Wan; Jemain, Abdul Aziz

    2010-11-01

    Statistical distributions of annual extreme (AE) series and partial duration (PD) series for dry-spell event are analyzed for a database of daily rainfall records of 50 rain-gauge stations in Peninsular Malaysia, with recording period extending from 1975 to 2004. The three-parameter generalized extreme value (GEV) and generalized Pareto (GP) distributions are considered to model both series. In both cases, the parameters of these two distributions are fitted by means of the L-moments method, which provides a robust estimation of them. The goodness-of-fit (GOF) between empirical data and theoretical distributions are then evaluated by means of the L-moment ratio diagram and several goodness-of-fit tests for each of the 50 stations. It is found that for the majority of stations, the AE and PD series are well fitted by the GEV and GP models, respectively. Based on the models that have been identified, we can reasonably predict the risks associated with extreme dry spells for various return periods.
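The L-moments fit used above can be sketched for the GEV case: sample L-moments are computed from probability-weighted moments, and the shape parameter is obtained from the L-skewness via Hosking's rational approximation. A minimal numpy sketch (function names are illustrative, and since the station records are not reproduced here, the usage runs on synthetic values):

```python
import math
import numpy as np

def sample_l_moments(x):
    """First three sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    return b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0   # l1, l2, l3

def gev_fit_lmom(x):
    """GEV (loc, scale, shape k) from L-moments, Hosking's approximation."""
    l1, l2, l3 = sample_l_moments(x)
    t3 = l3 / l2                                   # L-skewness
    c = 2.0 / (3.0 + t3) - math.log(2) / math.log(3)
    k = 7.8590 * c + 2.9554 * c * c
    g = math.gamma(1 + k)
    scale = l2 * k / ((1 - 2 ** (-k)) * g)
    loc = l1 - scale * (1 - g) / k
    return loc, scale, k

def gev_quantile(F, loc, scale, k):
    """Dry-spell length with non-exceedance probability F (return period 1/(1-F))."""
    return loc + scale / k * (1 - (-math.log(F)) ** k)
```

Once the three parameters are fitted, dry-spell risks follow directly: the T-year event is the quantile at non-exceedance probability 1 - 1/T.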

  3. Earth Tide Analysis Specifics in Case of Unstable Aquifer Regime

    NASA Astrophysics Data System (ADS)

    Vinogradov, Evgeny; Gorbunova, Ella; Besedina, Alina; Kabychenko, Nikolay

    2018-05-01

We consider the main factors that affect underground water flow, including aquifer supply, collector state, and the passage of seismic waves from distant earthquakes. In geodynamically stable conditions, changes in underground inflow can significantly distort the hydrogeological response to Earth tides, leading to incorrect estimation of the phase shift between the tidal harmonics of ground displacement and of water-level variations in a wellbore. Besides an original approach to phase-shift estimation that yields one value per day for the semidiurnal M2 wave, we offer an empirical method for excluding periods that are strongly affected by high inflow. In spite of rather strong ground motion during the passage of earthquake waves, we did not observe a corresponding phase-shift change against the background of significant recurrent variations due to fluctuating inflow. Though inflow variation is not the only parameter that must be taken into consideration in phase-shift analysis, permeability estimation is not adequate without correction for background alterations of aquifer parameters due to natural and anthropogenic causes.

  4. Cryptographic robustness of practical quantum cryptography: BB84 key distribution protocol

    NASA Astrophysics Data System (ADS)

    Molotkov, S. N.

    2008-07-01

In real fiber-optic quantum cryptography systems, the avalanche photodiodes are not perfect, the source of quantum states is not a single-photon one, and the communication channel is lossy. For these reasons, key distribution is impossible under certain conditions for the system parameters. A simple analysis is performed to find relations between the parameters of real cryptography systems and the length of the quantum channel that guarantee secure quantum key distribution when the eavesdropper's capabilities are limited only by fundamental laws of quantum mechanics while the devices employed by the legitimate users are based on current technologies. Critical values are determined for the rate of secure real-time key generation that can be reached under the current technology level. Calculations show that the upper bound on channel length can be as high as 300 km for imperfect photodetectors (avalanche photodiodes) with present-day quantum efficiency (η ≈ 20%) and dark count probability (p_dark ~ 10^-7).

  5. A unifying strain criterion for fracture of fibrous composite laminates

    NASA Technical Reports Server (NTRS)

    Poe, C. C., Jr.

    1983-01-01

Fibrous composite materials, such as graphite/epoxy, are light, stiff, and strong, and have great potential for reducing weight in aircraft structures. To realize this potential, however, designers must know the fracture toughness of composite laminates in order to design damage-tolerant structures. In connection with the development of an economical testing procedure, there is a great need for a single fracture toughness parameter which can be used to predict the stress-intensity factor K(Q) for all laminates of interest to the designer. Poe and Sova (1980) have derived a general fracture toughness parameter (Qc), which is a material constant; it defines the critical level of strains in the principal load-carrying plies. The present investigation is concerned with the calculation of values for the ratio of Qc to the ultimate tensile strain of the fibers. The obtained data indicate that this ratio is reasonably constant for layups which fail largely by self-similar crack extension.

  6. Modelling the transitional boundary layer

    NASA Technical Reports Server (NTRS)

    Narasimha, R.

    1990-01-01

    Recent developments in the modelling of the transition zone in the boundary layer are reviewed (the zone being defined as extending from the station where intermittency begins to depart from zero to that where it is nearly unity). The value of using a new non-dimensional spot formation rate parameter, and the importance of allowing for so-called subtransitions within the transition zone, are both stressed. Models do reasonably well in constant pressure 2-dimensional flows, but in the presence of strong pressure gradients further improvements are needed. The linear combination approach works surprisingly well in most cases, but would not be so successful in situations where a purely laminar boundary layer would separate but a transitional one would not. Intermittency-weighted eddy viscosity methods do not predict peak surface parameters well without the introduction of an overshooting transition function whose connection with the spot theory of transition is obscure. Suggestions are made for further work that now appears necessary for developing improved models of the transition zone.

  7. Parameter estimation for lithium ion batteries

    NASA Astrophysics Data System (ADS)

    Santhanagopalan, Shriram

    With an increase in the demand for lithium based batteries at the rate of about 7% per year, the amount of effort put into improving the performance of these batteries from both experimental and theoretical perspectives is increasing. There exist a number of mathematical models ranging from simple empirical models to complicated physics-based models to describe the processes leading to failure of these cells. The literature is also rife with experimental studies that characterize the various properties of the system in an attempt to improve the performance of lithium ion cells. However, very little has been done to quantify the experimental observations and relate these results to the existing mathematical models. In fact, the best of the physics based models in the literature show as much as 20% discrepancy when compared to experimental data. The reasons for such a big difference include, but are not limited to, numerical complexities involved in extracting parameters from experimental data and inconsistencies in interpreting directly measured values for the parameters. In this work, an attempt has been made to implement simplified models to extract parameter values that accurately characterize the performance of lithium ion cells. The validity of these models under a variety of experimental conditions is verified using a model discrimination procedure. Transport and kinetic properties are estimated using a non-linear estimation procedure. The initial state of charge inside each electrode is also maintained as an unknown parameter, since this value plays a significant role in accurately matching experimental charge/discharge curves with model predictions and is not readily known from experimental data. The second part of the dissertation focuses on parameters that change rapidly with time. 
For example, in the case of lithium ion batteries used in Hybrid Electric Vehicle (HEV) applications, the prediction of the State of Charge (SOC) of the cell under a variety of road conditions is important. An algorithm to predict the SOC in time intervals as small as 5 ms is of critical demand. In such cases, the conventional non-linear estimation procedure is not time-effective. There exist methodologies in the literature, such as those based on fuzzy logic; however, these techniques require a lot of computational storage space. Consequently, it is not possible to implement such techniques on a micro-chip for integration as a part of a real-time device. The Extended Kalman Filter (EKF) based approach presented in this work is a first step towards developing an efficient method to predict online, the State of Charge of a lithium ion cell based on an electrochemical model. The final part of the dissertation focuses on incorporating uncertainty in parameter values into electrochemical models using the polynomial chaos theory (PCT).
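The EKF-based SOC tracker can be sketched with a scalar state (SOC) propagated by coulomb counting and corrected by the terminal-voltage measurement. This is a minimal illustration with a toy linear open-circuit-voltage curve, not the electrochemical model of the dissertation; all parameter values and function names below are assumptions:

```python
import numpy as np

# Toy equivalent-circuit cell; every number here is illustrative.
Q_As = 2.0 * 3600      # cell capacity: 2 Ah expressed in ampere-seconds
R = 0.05               # ohmic resistance (ohm), assumed
dt = 1.0               # sampling interval (s)

def ocv(soc):
    """Toy linear open-circuit-voltage curve (V), assumed."""
    return 3.0 + 1.2 * soc

def docv_dsoc(soc):
    """Derivative of the OCV curve: the EKF measurement Jacobian."""
    return 1.2

def ekf_soc(currents, voltages, soc0=0.5, P=0.1, q=1e-7, r=1e-3):
    """Track SOC from current/terminal-voltage pairs; returns SOC estimates."""
    soc, out = soc0, []
    for I, v in zip(currents, voltages):
        # predict: coulomb counting (state transition Jacobian is 1)
        soc -= I * dt / Q_As
        P += q
        # update with the terminal-voltage measurement v = ocv(soc) - R * I
        H = docv_dsoc(soc)
        K = P * H / (H * P * H + r)
        soc += K * (v - (ocv(soc) - R * I))
        P *= (1 - K * H)
        out.append(soc)
    return np.array(out)
```

Starting from a deliberately wrong initial SOC, the voltage updates pull the estimate onto the true trajectory within a few samples, and each update costs only a handful of multiplications, which is what makes millisecond update intervals feasible.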

  8. Environmental values, ethics, and depreciative behavior in wildland settings

    Treesearch

    Dorceta E. Taylor; Patricia L. Winter

    1995-01-01

    Preliminary results were examined from a self-administered questionnaire regarding the relationships between personal values, individual characteristics, and depreciative behaviors. Respondents were queried about socio-demographics, reasons for visiting forest recreation areas, reasons for liking and disliking the forest, activities witnessed while visiting the forest...

  9. Expertise and reasoning with possibility: An explanation of modal logic and expert systems

    NASA Technical Reports Server (NTRS)

    Rochowiak, Daniel

    1988-01-01

    Recently systems of modal reasoning have been brought to the foreground of artificial intelligence studies. The intuitive idea of research efforts in this area is that in addition to the actual world in which sentences have certain truth values there are other worlds in which those sentences have different truth values. Such alternative worlds can be considered as possible worlds, and an agent may or may not have access to some or all of them. This approach to reasoning can be valuable in extending the expert system paradigm. Using the scheme of reasoning proposed by Toulmin, Reike and Janick and the modal system T, a scheme is proposed for expert reasoning that mitigates some of the criticisms raised by Schank and Nickerson.

  10. CREST v2.1 Refined by a Distributed Linear Reservoir Routing Scheme

    NASA Astrophysics Data System (ADS)

    Shen, X.; Hong, Y.; Zhang, K.; Hao, Z.; Wang, D.

    2014-12-01

    Hydrologic modeling is important in water resources management and in flood disaster warning and management, and the routing scheme is among the most important components of a hydrologic model. In this study, we replace the lumped LRR (linear reservoir routing) scheme used in previous versions of the distributed hydrological model CREST (coupled routing and excess storage) with a newly proposed distributed LRR method, which is theoretically more suitable for distributed hydrological models. Consequently, we have effectively solved the problems of: 1) low channel flow values in daily simulation, 2) discontinuous flow values along the river network during flood events, and 3) unrealistic model parameters. The CREST model equipped with each of the two routing schemes has been tested in the Gan River basin. The distributed LRR scheme has been confirmed to outperform its lumped counterpart by two comparisons, hydrograph validation and visual inspection of the continuity of stream flow along the river: 1) CREST v2.1 (version 2.1), with the distributed LRR implemented, achieved an excellent result of [NSCE (Nash coefficient), CC (correlation coefficient), bias] = [0.897, 0.947, -1.57%], while the original CREST v2.0 produced only a negative NSCE, a CC close to zero, and a large bias. 2) CREST v2.1 produced a more naturally smooth flow pattern along the river network, while v2.0 simulated bumpy and discontinuous discharge along the mainstream. Moreover, we further observe that with the distributed LRR method, 1) all model parameters fell within their reasonable regions after automatic optimization, and 2) CREST forced by satellite-based precipitation and PET products produced a reasonably good result, i.e., (NSCE, CC, bias) = (0.756, 0.871, -0.669%) in the case study, although there is still room for improvement given the low spatial resolution of the satellite products and their underestimation of heavy rainfall events.
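    The linear reservoir underlying both routing schemes can be sketched briefly. Assuming the standard formulation dS/dt = I - S/k with outflow O = S/k (the function and variable names are illustrative, and the single-reservoir scope is a simplification: a distributed scheme of the kind described above applies such a reservoir per grid cell and passes each cell's outflow downstream as its neighbor's inflow):

```python
import math

def linear_reservoir_route(inflow, k, dt, s0=0.0):
    """Route an inflow series through one linear reservoir (dS/dt = I - S/k).

    Uses the exact solution over each step assuming constant inflow within
    the step. Returns the outflow series O_t = S_t / k.
    """
    decay = math.exp(-dt / k)      # storage carried over from one step
    s, out = s0, []
    for i in inflow:
        s = s * decay + i * k * (1.0 - decay)
        out.append(s / k)
    return out
```

    Under constant inflow the outflow relaxes toward the inflow with time constant k, which is why a per-cell (distributed) formulation yields continuous discharge along the network while a single lumped reservoir cannot.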

  11. Reaction-Diffusion-Delay Model for EPO/TNF-α Interaction in articular cartilage lesion abatement

    PubMed Central

    2012-01-01

    Background: Injuries to articular cartilage result in the development of lesions that form on the surface of the cartilage. Such lesions are associated with articular cartilage degeneration and osteoarthritis. The typical injury response often causes collateral damage, primarily an effect of inflammation, which results in the spread of lesions beyond the region where the initial injury occurs. Results and discussion: We present a minimal mathematical model based on known mechanisms to investigate the spread and abatement of such lesions. In particular we represent the "balancing act" between pro-inflammatory and anti-inflammatory cytokines that is hypothesized to be a principal mechanism in the expansion properties of cartilage damage during the typical injury response. We present preliminary results of in vitro studies that confirm the anti-inflammatory activities of the cytokine erythropoietin (EPO). We assume that the diffusion of cytokines determines the spatial behavior of the injury response and lesion expansion, so that a reaction-diffusion system involving chemical species and chondrocyte cell-state population densities is a natural way to represent cartilage injury response. We present computational results using the mathematical model showing that our representation is successful in capturing much of the interesting spatial behavior of injury-associated lesion development and abatement in articular cartilage; two cases are simulated, the first with the parameter values listed in Table 1 and the second with those in Table 2. Further, we discuss the use of this model to study the possibility of using EPO as a therapy for reducing the amount of inflammation-induced collateral damage to cartilage during the typical injury response. 
    Table 1. Model parameter values corresponding to the simulations in Figure 5 (case with no anti-inflammatory response).

        Parameter | Value   | Units                        | Reason
        D_R       | 0.1     | cm^2/day                     | Determined from [13]
        D_M       | 0.05    | cm^2/day                     | Determined from [13]
        D_F       | 0.05    | cm^2/day                     | Determined from [13]
        D_P       | 0.005   | cm^2/day                     | Determined from [13]
        δ_R       | 0.01    | 1/day                        | Approximated
        δ_M       | 0.6     | 1/day                        | Approximated
        δ_F       | 0.6     | 1/day                        | Approximated
        δ_P       | 0.0087  | 1/day                        | Approximated
        δ_U       | 0.0001  | 1/day                        | Approximated
        σ_R       | 0.0001  | micromolar·cm^2/(day·cells)  | Approximated
        σ_M       | 0.00001 | micromolar·cm^2/(day·cells)  | Approximated
        σ_F       | 0.0001  | micromolar·cm^2/(day·cells)  | Approximated
        σ_P       | 0       | micromolar·cm^2/(day·cells)  | Case with no anti-inflammatory response
        Λ         | 10      | micromolar                   | Approximated
        λ_R       | 10      | micromolar                   | Approximated
        λ_M       | 10      | micromolar                   | Approximated
        λ_F       | 10      | micromolar                   | Approximated
        λ_P       | 10      | micromolar                   | Approximated
        α         | 0       | 1/day                        | Case with no anti-inflammatory response
        β_1       | 100     | 1/day                        | Approximated
        β_2       | 50      | 1/day                        | Approximated
        γ         | 10      | 1/day                        | Approximated
        ν         | 0.5     | 1/day                        | Approximated
        μ_SA      | 1       | 1/day                        | Approximated
        μ_DN      | 0.5     | 1/day                        | Approximated
        τ_1       | 0.5     | days                         | Taken from [5]
        τ_2       | 1       | days                         | Taken from [5]

    Table 2. Model parameter values corresponding to the simulations in Figure 6. Identical to Table 1 except that the anti-inflammatory response is active: σ_P = 0.001 micromolar·cm^2/(day·cells) (Approximated) and α = 10 1/day (Approximated).

    Conclusions: The mathematical model presented herein suggests that anti-inflammatory cytokines such as EPO are not only necessary to prevent chondrocytes signaled by pro-inflammatory cytokines from entering apoptosis; they may also influence how chondrocytes respond to signaling by pro-inflammatory cytokines. Reviewers: This paper has been reviewed by Yang Kuang, James Faeder and Anna Marciniak-Czochra. PMID:22353555
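    A reaction-diffusion system of the general kind described above can be sketched with an explicit finite-difference scheme. The sketch below is not the paper's model: it uses only two illustrative fields (a pro-inflammatory species M and an anti-inflammatory species P), borrows rough magnitudes from Table 1 for the diffusion and decay coefficients, and invents the source and coupling terms.

```python
import numpy as np

def simulate_cytokines(n=100, steps=500, dx=0.1, dt=0.01,
                       D_M=0.05, D_P=0.005, delta_M=0.6, delta_P=0.0087,
                       sigma_M=1.0, alpha=10.0):
    """Explicit finite-difference sketch of two diffusing cytokine fields:
    a pro-inflammatory species M produced at an injury site and degraded,
    and an anti-inflammatory species P produced in response to M, which in
    turn suppresses M (the alpha*M*P term).  Boundary updates are frozen
    as a crude zero-flux condition; source and coupling are illustrative."""
    M = np.zeros(n)
    P = np.zeros(n)
    src = np.zeros(n)
    src[n // 2] = sigma_M                       # injury at the domain center
    for _ in range(steps):
        lapM = (np.roll(M, 1) + np.roll(M, -1) - 2 * M) / dx**2
        lapP = (np.roll(P, 1) + np.roll(P, -1) - 2 * P) / dx**2
        lapM[0] = lapM[-1] = 0.0
        lapP[0] = lapP[-1] = 0.0
        M = M + dt * (D_M * lapM + src - delta_M * M - alpha * M * P)
        P = P + dt * (D_P * lapP + 0.1 * M - delta_P * P)
    return M, P
```

    The step sizes satisfy the explicit-scheme stability bound dt*D/dx^2 < 1/2 for the Table 1 magnitudes; the paper's full model additionally carries delays (τ_1, τ_2) and chondrocyte cell-state densities, which this sketch omits.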

  12. Oppositional Defiance, Moral Reasoning and Moral Value Evaluation as Predictors of Self-Reported Juvenile Delinquency

    ERIC Educational Resources Information Center

    Beerthuizen, Marinus G. C. J.; Brugman, Daniel; Basinger, Karen S.

    2013-01-01

    This study investigated the relationships among oppositional defiant attitudes, moral reasoning, moral value evaluation and self-reported delinquent behaviour in adolescents ("N" = 351, "M"[subscript AGE] = 13.8 years, "SD"[subscript AGE] = 1.1). Of particular interest were the moderating effects of age, educational…

  13. Statistical Reasoning Ability, Self-Efficacy, and Value Beliefs in a University Statistics Course

    ERIC Educational Resources Information Center

    Olani, A.; Hoekstra, R.; Harskamp, E.; van der Werf, G.

    2011-01-01

    Introduction: The study investigated the degree to which students' statistical reasoning abilities, statistics self-efficacy, and perceived value of statistics improved during a reform based introductory statistics course. The study also examined whether the changes in these learning outcomes differed with respect to the students' mathematical…

  14. Ethics Assessment in a General Education Programme

    ERIC Educational Resources Information Center

    Quesenberry, Le Gene; Phillips, Jamie; Woodburne, Paul; Yang, Chin

    2012-01-01

    The purpose of this study was to assess whether flagged "values intensive" courses within a public university's general education curriculum impacted on students' abilities to reason ethically. The major research question to be explored was, "what effect does taking a values intensive course have on students' ethical reasoning ability, when…

  15. Computational solution verification and validation applied to a thermal model of a ruggedized instrumentation package

    DOE PAGES

    Scott, Sarah Nicole; Templeton, Jeremy Alan; Hough, Patricia Diane; ...

    2014-01-01

    This study details a methodology for quantification of errors and uncertainties of a finite element heat transfer model applied to a Ruggedized Instrumentation Package (RIP). The proposed verification and validation (V&V) process includes solution verification to examine errors associated with the code's solution techniques, and model validation to assess the model's predictive capability for quantities of interest. The model was subjected to mesh resolution and numerical parameter sensitivity studies to determine reasonable parameter values and to understand how they change the overall model response and performance criteria. To facilitate quantification of the uncertainty associated with the mesh, automatic meshing and mesh refining/coarsening algorithms were created and implemented on the complex geometry of the RIP. Automated software to vary model inputs was also developed to determine the solution's sensitivity to numerical and physical parameters. The model was compared with an experiment to demonstrate its accuracy and determine the importance of both modelled and unmodelled physics in quantifying the results' uncertainty. An emphasis is placed on automating the V&V process to enable uncertainty quantification within tight development schedules.
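    The mesh-resolution part of solution verification is commonly done with three-mesh Richardson extrapolation. The function below is a generic Roache-style sketch of that procedure, not the study's implementation: given a quantity of interest computed on three systematically refined meshes, it estimates the observed order of accuracy and a grid convergence index (GCI) for the fine-grid result.

```python
import math

def grid_convergence(f_fine, f_med, f_coarse, r=2.0, fs=1.25):
    """Richardson-extrapolation-based solution verification from a quantity
    of interest on three meshes with constant refinement ratio r.

    Returns (p, gci): the observed order of accuracy and the fine-grid
    grid convergence index (fs is the usual safety factor of 1.25 for
    three-mesh studies)."""
    # Observed order from the ratio of successive solution differences
    p = math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)
    # Relative fine/medium difference scaled into an uncertainty band
    rel_err = abs((f_med - f_fine) / f_fine)
    gci = fs * rel_err / (r**p - 1.0)
    return p, gci
```

    For a second-order-accurate discretization the recovered p should sit near 2; a p far from the formal order is itself a diagnostic that the meshes are outside the asymptotic range.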

  16. The framed Standard Model (II) — A first test against experiment

    NASA Astrophysics Data System (ADS)

    Chan, Hong-Mo; Tsou, Sheung Tsun

    2015-10-01

    Apart from the qualitative features described in Paper I (Ref. 1), the renormalization group equation derived for the rotation of the fermion mass matrices is amenable to quantitative study. The equation depends on a coupling and a fudge factor and, on integration, on 3 integration constants. Its application to data analysis, however, requires as experimental input the heaviest-generation masses mt, mb, mτ, mν3, all of which are known except for mν3. Together with the theta-angle in the QCD action, there are in all 7 real unknown parameters. Determining these 7 parameters by fitting to the experimental values of the masses mc, mμ, me, the CKM elements |Vus|, |Vub|, and the neutrino oscillation angle sin²θ13, one can then calculate and compare with experiment the following 12 other quantities: ms, mu/md, |Vud|, |Vcs|, |Vtb|, |Vcd|, |Vcb|, |Vts|, |Vtd|, J, sin²2θ12, sin²2θ23. The results all agree reasonably well with data, often to within the stringent experimental error now achieved. Counting the predictions not yet measured by experiment, this means that 17 independent parameters of the standard model are now replaced by 7 in the FSM.

  17. Io - A volcanic flow model for the hot spot emission spectrum and a thermostatic mechanism

    NASA Technical Reports Server (NTRS)

    Sinton, V. M.

    1982-01-01

    The hot spots of Io are modeled as a steady state comprising active areas at 600 K, the continuing creation of new lava flows and calderas, the cooling of recent flows and calderas, and the cessation of radiation from old flows and calderas due to the accumulation of insulation added by resurfacing. There are three adjustable parameters in this model: the area of active sources at 600 K, the rate of production of new area that is cooling, and the temperature at which emission ceases as a result of resurfacing. The resurfacing rate sets constraints on this last parameter. The emission spectrum computed with reasonable values for these parameters is an excellent match to the spectrum from recent observations. A thermostatic mechanism is described whereby volcanic activity is turned on for a long period of time and then turned off for a nearly equal period. As a result, the presently observed internal heat flow of approximately 1.5 W/sq m may be as much as twice the rate of production of internal heat. Thus the restrictions placed on theories of tidal dissipation by the observed heat flow may be partially relieved.

  18. Comparison of maximum runup through analytical and numerical approaches for different fault parameters estimates

    NASA Astrophysics Data System (ADS)

    Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.

    2017-12-01

    The one-dimensional analytical runup theory, in combination with near-shore synthetic waveforms, is a promising tool for tsunami rapid early warning systems. Its application in realistic cases with complex bathymetry and an initial wave condition from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplified bathymetric domains that resemble realistic near-shore features, and investigate the sensitivity of the analytical runup formulae to variation of the fault source parameters and near-shore bathymetric features. To do this we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model on a coupled system of four nested grids and compare the results to the analytical estimates. Variation of the dip angle of the fault plane showed that the analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates constitutes a very promising approach in a simple bathymetric domain and might be implemented in hazard mapping and early warning.

  19. 40 CFR 60.58c - Reporting and recordkeeping requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ....57c(d), the owner or operator shall maintain all operating parameter data collected; (xvii) For...) Identification of calendar days for which data on emission rates or operating parameters specified under... operating parameters not measured, reasons for not obtaining the data, and a description of corrective...

  20. Estimation of primordial spectrum with post-WMAP 3-year data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shafieloo, Arman; Souradeep, Tarun

    2008-07-15

    In this paper we implement an improved (error-sensitive) Richardson-Lucy deconvolution algorithm on the angular power spectrum measured from the Wilkinson Microwave Anisotropy Probe (WMAP) 3-year data to determine the primordial power spectrum, assuming different points in the cosmological parameter space for a flat ΛCDM cosmological model. We also present preliminary results of cosmological parameter estimation assuming a free form of the primordial spectrum, for a reasonably large volume of the parameter space. The recovered spectrum for a considerably large number of points in the cosmological parameter space has a likelihood far better than a 'best fit' power-law spectrum, up to Δχ²_eff ≈ -30. We use the discrete wavelet transform (DWT) to smooth the raw recovered spectrum from the binned data. The results obtained here reconfirm and sharpen the conclusions drawn from our previous analysis of the WMAP 1st-year data. A sharp cutoff around the horizon scale and a bump after the horizon scale seem to be a common feature of all of these reconstructed primordial spectra. We have shown that although the WMAP 3-year data prefer a lower value of the matter density for a power-law form of the primordial spectrum, for a free form of the spectrum we can obtain a very good likelihood for higher values of the matter density. We have also shown that even a flat cold dark matter model, allowing a free form of the primordial spectrum, can give a very high-likelihood fit to the data. Theoretical interpretation of the results is open to the cosmology community. However, this work provides strong evidence that the data retain discriminatory power in the cosmological parameter space even when there is full freedom in choosing the primordial spectrum.
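    The basic Richardson-Lucy iteration (without the paper's error-sensitive weighting, which is omitted here) can be sketched for a 1-D deconvolution problem: the estimate is repeatedly multiplied by the back-projected ratio of the data to the current forward model.

```python
import numpy as np

def richardson_lucy(data, kernel, iterations=50):
    """Basic Richardson-Lucy deconvolution of a 1-D signal blurred by a
    known kernel:  est <- est * K^T( data / K(est) ).
    Assumes nonnegative data and a normalized kernel."""
    est = np.full_like(data, data.mean(), dtype=float)   # flat initial guess
    k_flip = kernel[::-1]                                # adjoint of the blur
    for _ in range(iterations):
        conv = np.convolve(est, kernel, mode='same')     # forward model
        ratio = data / np.maximum(conv, 1e-12)           # avoid divide-by-zero
        est = est * np.convolve(ratio, k_flip, mode='same')
    return est
```

    Because the update is multiplicative, nonnegativity of the estimate is preserved automatically, which is one reason the method suits power-spectrum recovery; the error-sensitive variant additionally weights the ratio by the per-bin measurement errors.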

  1. Examining the Influence of Context and Professional Culture on Clinical Reasoning Through Rhetorical-Narrative Analysis.

    PubMed

    Peters, Amanda; Vanstone, Meredith; Monteiro, Sandra; Norman, Geoff; Sherbino, Jonathan; Sibbald, Matthew

    2017-05-01

    According to the dual process model of reasoning, physicians make diagnostic decisions using two mental systems: System 1, which is rapid, unconscious, and intuitive, and System 2, which is slow, rational, and analytical. Currently, little is known about physicians' use of System 1 or intuitive reasoning in practice. In a qualitative study of clinical reasoning, physicians were asked to tell stories about times when they used intuitive reasoning while working up an acutely unwell patient, and we combine socio-narratology and rhetorical theory to analyze physicians' stories. Our analysis reveals that in describing their work, physicians draw on two competing narrative structures: one that is aligned with an evidence-based medicine approach valuing System 2 and one that is aligned with cooperative decision making involving others in the clinical environment valuing System 1. Our findings support an understanding of clinical reasoning as distributed, contextual, and influenced by professional culture.

  2. On the accuracy of estimation of basic pharmacokinetic parameters by the traditional noncompartmental equations and the prediction of the steady-state volume of distribution in obese patients based upon data derived from normal subjects.

    PubMed

    Berezhkovskiy, Leonid M

    2011-06-01

    The steady-state and terminal volumes of distribution, as well as the mean residence time of drug in the body (V(ss), V(β), and MRT), are common pharmacokinetic parameters calculated from the drug plasma concentration-time profile C(p)(t) following intravenous (i.v. bolus or constant-rate infusion) drug administration. These calculations are valid for a linear pharmacokinetic system with central elimination (i.e., an elimination rate proportional to the drug concentration in plasma). Formally, the assumption of central elimination is not normally met, because the rate of drug elimination is proportional to the unbound drug concentration at the elimination site; however, equilibration between the systemic circulation and the site of clearance is fast for the majority of small-molecule drugs, so the assumption is practically quite adequate. It nevertheless appears reasonable to estimate the extent of possible errors in the determination of these pharmacokinetic parameters due to the absence of central elimination. A comparison of V(ss), V(β), and MRT calculated by exact equations and by the commonly used ones was made using a simplified physiologically based pharmacokinetic model. It was found that if the drug plasma concentration profile is measured accurately, determination of drug distribution volumes and MRT using the traditional noncompartmental calculations from C(p)(t) yields values very close to those obtained from the exact equations. In practice, though, accurate measurement of C(p)(t), especially its terminal phase, may not always be possible. This is particularly applicable to obtaining the distribution volumes of lipophilic compounds in obese subjects, when a late terminal phase at low drug concentration is quite likely, specifically for compounds with high clearance. 
An accurate determination of V(ss) is much needed in clinical practice because it is critical for the proper selection of drug treatment regimen. For that reason, we developed a convenient method for calculation of V(ss) in obese (or underweight) subjects. It is based on using the V(ss) values obtained from pharmacokinetic studies in normal subjects and the physicochemical properties of drug molecule. A simple criterion that determines either the increase or decrease of V(ss) (per unit body weight) due to obesity is obtained. The accurate determination of adipose tissue-plasma partition coefficient is crucial for the practical application of suggested method. Copyright © 2011 Wiley-Liss, Inc.
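    The traditional noncompartmental calculations referred to above follow from the moments of C(p)(t) after an i.v. bolus: AUC = ∫C dt, AUMC = ∫t·C dt, CL = Dose/AUC, MRT = AUMC/AUC, and V(ss) = CL·MRT. A minimal sketch using trapezoidal integration over the sampled profile only, with no terminal-phase extrapolation (which is exactly where the accuracy issues discussed above arise):

```python
import numpy as np

def noncompartmental_params(t, cp, dose):
    """Traditional noncompartmental estimates after an i.v. bolus:
    CL = Dose/AUC, MRT = AUMC/AUC, Vss = CL * MRT.
    Trapezoidal AUC/AUMC over the sampled points only (no tail
    extrapolation), so t must extend far enough that cp ~ 0."""
    dt = np.diff(t)
    auc = float(((cp[1:] + cp[:-1]) / 2.0 * dt).sum())
    tc = t * cp
    aumc = float(((tc[1:] + tc[:-1]) / 2.0 * dt).sum())
    mrt = aumc / auc
    cl = dose / auc
    vss = cl * mrt
    return cl, mrt, vss
```

    For a one-compartment profile C(p)(t) = (Dose/V)·exp(-k·t), the exact values are CL = k·V, MRT = 1/k, and V(ss) = V, which makes a convenient check; note that AUMC weights late time points heavily, so truncating the terminal phase biases MRT and V(ss) more than it biases CL.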

  3. Mechanical Characteristics Analysis of Surrounding Rock on Anchor Bar Reinforcement

    NASA Astrophysics Data System (ADS)

    Gu, Shuan-cheng; Zhou, Pan; Huang, Rong-bin

    2018-03-01

    Through the homogenization method, the composite of rock and anchor bar is treated as an equivalent material that is continuous, homogeneous, isotropic, and strength-enhanced, defined here as the reinforcement body. On the basis of elasticity theory, the composite and the reinforcement body are analyzed, and, using the strengthening theory of surrounding rock together with displacement-equivalence conditions, expressions for the strength and mechanical parameters of the reinforcement body are derived. An example calculation shows that the theoretical results are close to those of Jia-mei Gao [9] and closer still to the results of FLAC3D numerical simulation, which indicates that the model and the surrounding-rock reinforcement-body theory are reasonable. The model is easy to analyze and calculate, provides a new way to determine reasonable bolt support parameters, and can also serve as a reference for the stability analysis of bolting support in underground caverns.

  4. Validating models of target acquisition performance in the dismounted soldier context

    NASA Astrophysics Data System (ADS)

    Glaholt, Mackenzie G.; Wong, Rachel K.; Hollands, Justin G.

    2018-04-01

    The problem of predicting real-world operator performance with digital imaging devices is of great interest within the military and commercial domains. There are several approaches to this problem, including: field trials with imaging devices, laboratory experiments using imagery captured from these devices, and models that predict human performance based on imaging device parameters. The modeling approach is desirable, as both field trials and laboratory experiments are costly and time-consuming. However, the data from these experiments is required for model validation. Here we considered this problem in the context of dismounted soldiering, for which detection and identification of human targets are essential tasks. Human performance data were obtained for two-alternative detection and identification decisions in a laboratory experiment in which photographs of human targets were presented on a computer monitor and the images were digitally magnified to simulate range-to-target. We then compared the predictions of different performance models within the NV-IPM software package: Targeting Task Performance (TTP) metric model and the Johnson model. We also introduced a modification to the TTP metric computation that incorporates an additional correction for target angular size. We examined model predictions using NV-IPM default values for a critical model constant, V50, and we also considered predictions when this value was optimized to fit the behavioral data. When using default values, certain model versions produced a reasonably close fit to the human performance data in the detection task, while for the identification task all models substantially overestimated performance. When using fitted V50 values the models produced improved predictions, though the slopes of the performance functions were still shallow compared to the behavioral data. 
These findings are discussed in relation to the models' designs and parameters, and the characteristics of the behavioral paradigm.
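    The V50 constant enters through the target transfer probability function that converts a TTP metric value into a probability of task completion. The sketch below uses the commonly cited NVESD form; the coefficients 1.51 and 0.24, and the exact form used inside NV-IPM, should be treated as assumptions here rather than as the package's definitive implementation.

```python
def ttpf(V, V50):
    """Target transfer probability function (commonly cited form):
    maps the TTP value V achieved on a target to a probability of task
    completion, where V50 is the value giving P = 0.5.

        P(V) = (V/V50)^E / (1 + (V/V50)^E),  E = 1.51 + 0.24*(V/V50)
    """
    ratio = V / V50
    E = 1.51 + 0.24 * ratio
    return ratio**E / (1.0 + ratio**E)
```

    Fitting V50 to behavioral data, as described above, shifts this curve left or right along the V axis; it cannot by itself change the slope, which is consistent with the finding that fitted models still produced shallower performance functions than the data.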

  5. Metabolic Tumor Volume and Total Lesion Glycolysis in Oropharyngeal Cancer Treated With Definitive Radiotherapy: Which Threshold Is the Best Predictor of Local Control?

    PubMed

    Castelli, Joël; Depeursinge, Adrien; de Bari, Berardino; Devillers, Anne; de Crevoisier, Renaud; Bourhis, Jean; Prior, John O

    2017-06-01

    In the context of oropharyngeal cancer treated with definitive radiotherapy, the aim of this retrospective study was to identify the best threshold value to compute metabolic tumor volume (MTV) and/or total lesion glycolysis to predict local-regional control (LRC) and disease-free survival. One hundred twenty patients with a locally advanced oropharyngeal cancer from 2 different institutions treated with definitive radiotherapy underwent FDG PET/CT before treatment. Various MTVs and total lesion glycolysis were defined based on 2 segmentation methods: (i) an absolute threshold of SUV (0-20 g/mL) or (ii) a relative threshold for SUVmax (0%-100%). The parameters' predictive capabilities for disease-free survival and LRC were assessed using the Harrell C-index and Cox regression model. Relative thresholds between 40% and 68% and absolute threshold between 5.5 and 7 had a similar predictive value for LRC (C-index = 0.65 and 0.64, respectively). Metabolic tumor volume had a higher predictive value than gross tumor volume (C-index = 0.61) and SUVmax (C-index = 0.54). Metabolic tumor volume computed with a relative threshold of 51% of SUVmax was the best predictor of disease-free survival (hazard ratio, 1.23 [per 10 mL], P = 0.009) and LRC (hazard ratio: 1.22 [per 10 mL], P = 0.02). The use of different thresholds within a reasonable range (between 5.5 and 7 for an absolute threshold and between 40% and 68% for a relative threshold) seems to have no major impact on the predictive value of MTV. This parameter may be used to identify patient with a high risk of recurrence and who may benefit from treatment intensification.
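    Both segmentation methods compared above reduce to thresholding the SUV array, either at an absolute SUV cutoff or at a fraction of SUVmax, with TLG computed as MTV times the mean SUV inside the segmented volume. A minimal sketch (the function name and interface are illustrative):

```python
import numpy as np

def mtv_tlg(suv, voxel_volume_ml, abs_thresh=None, rel_thresh=None):
    """Metabolic tumor volume (mL) and total lesion glycolysis from a SUV
    array, using either an absolute SUV cutoff (abs_thresh, in g/mL) or a
    relative fraction of SUVmax (rel_thresh); exactly one must be given."""
    if (abs_thresh is None) == (rel_thresh is None):
        raise ValueError("give exactly one of abs_thresh / rel_thresh")
    cutoff = abs_thresh if abs_thresh is not None else rel_thresh * suv.max()
    mask = suv >= cutoff                       # voxels inside the metabolic volume
    mtv = float(mask.sum()) * voxel_volume_ml
    tlg = float(suv[mask].mean()) * mtv if mask.any() else 0.0
    return mtv, tlg
```

    The study's observation that thresholds between 5.5 and 7 (absolute) or 40% and 68% (relative) perform similarly corresponds to the two cutoff branches above selecting nearly the same voxel set on typical lesions.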

  6. Finding Top-kappa Unexplained Activities in Video

    DTIC Science & Technology

    2012-03-09

    We examined how the parameters that define a UAP instance affect the running time of Top-k TUA by varying the values of each parameter while keeping the others fixed to a default value. Table 1 reports the values we considered for each parameter along with the corresponding default value.

        Table 1: Parameter values used in the experiments
        Parameter   Values                    Default value
        k           1, 2, 5, All              All
        τ           0.4, 0.6, 0.8             0.6
        L           160, 200, 240, 280        200
        # worlds    7E+04, 4E+05, 2E+07       2E+07

  7. Faculty Grading of Quantitative Problems: A Mismatch between Values and Practice

    NASA Astrophysics Data System (ADS)

    Petcovic, Heather L.; Fynewever, Herb; Henderson, Charles; Mutambuki, Jacinta M.; Barney, Jeffrey A.

    2013-04-01

    Grading practices can send a powerful message to students about course expectations. A study by Henderson et al. (American Journal of Physics 72:164-169, 2004) in physics education has identified a misalignment between what college instructors say they value and their actual scoring of quantitative student solutions. This work identified three values that guide grading decisions: (1) a desire to see students' reasoning, (2) a readiness to deduct points from solutions with obvious errors and a reluctance to deduct points from solutions that might be correct, and (3) a tendency to assume correct reasoning when solutions are ambiguous. These authors propose that when values are in conflict, the conflict is resolved by placing the burden of proof on either the instructor or the student. Here, we extend the results of the physics study to earth science (n = 7) and chemistry (n = 10) instructors in a think-aloud interview study. Our results suggest that both the previously identified three values and the misalignment between values and grading practices exist among science faculty more generally. Furthermore, we identified a fourth value not previously recognized. Although all of the faculty across both studies stated that they valued seeing student reasoning, the combined effect suggests that only 49% of faculty across the three disciplines graded work in such a way that would actually encourage students to show their reasoning, and 34% of instructors could be viewed as penalizing students for showing their work. This research may contribute toward a better alignment between values and practice in faculty development.

  8. Determination of representative dimension parameter values of Korean knee joints for knee joint implant design.

    PubMed

    Kwak, Dai Soon; Tao, Quang Bang; Todo, Mitsugu; Jeon, Insu

    2012-05-01

    Knee joint implants developed by western companies have been imported to Korea and used for Korean patients. However, many clinical problems occur in knee joints of Korean patients after total knee joint replacement owing to the geometric mismatch between the western implants and Korean knee joint structures. To solve these problems, a method to determine the representative dimension parameter values of Korean knee joints is introduced to aid in the design of knee joint implants appropriate for Korean patients. Measurements of the dimension parameters of 88 male Korean knee joint subjects were carried out. The distribution of the subjects versus each measured parameter value was investigated. The measured dimension parameter values of each parameter were grouped by suitable intervals called the "size group," and average values of the size groups were calculated. The knee joint subjects were grouped as the "patient group" based on "size group numbers" of each parameter. From the iterative calculations to decrease the errors between the average dimension parameter values of each "patient group" and the dimension parameter values of the subjects, the average dimension parameter values that give less than the error criterion were determined to be the representative dimension parameter values for designing knee joint implants for Korean patients.
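    The size-group averaging step described above can be sketched as simple fixed-width binning of the measured dimension values; the paper's iterative error-reduction loop against an error criterion is omitted, and the names below are illustrative.

```python
import numpy as np

def size_group_averages(values, interval):
    """Group measured dimension-parameter values into fixed-width size
    groups and return each group's average value, keyed by group index.
    This is the averaging step only; iterating the interval choice to
    reduce per-group error is left out of this sketch."""
    values = np.asarray(values, dtype=float)
    # Assign each measurement to a size group of width `interval`
    bins = np.floor((values - values.min()) / interval).astype(int)
    return {int(b): float(values[bins == b].mean()) for b in np.unique(bins)}
```

    The group averages that survive the error criterion then serve as the representative dimension values for implant sizing; narrower intervals mean more implant sizes but smaller per-patient mismatch.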

  9. 10 CFR 63.304 - Reasonable expectation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... REPOSITORY AT YUCCA MOUNTAIN, NEVADA Postclosure Public Health and Environmental Standards § 63.304... uncertainties in making long-term projections of the performance of the Yucca Mountain disposal system; (3) Does... the full range of defensible and reasonable parameter distributions rather than only upon extreme...

  10. 10 CFR 63.304 - Reasonable expectation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... REPOSITORY AT YUCCA MOUNTAIN, NEVADA Postclosure Public Health and Environmental Standards § 63.304... uncertainties in making long-term projections of the performance of the Yucca Mountain disposal system; (3) Does... the full range of defensible and reasonable parameter distributions rather than only upon extreme...

  11. 10 CFR 63.304 - Reasonable expectation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... REPOSITORY AT YUCCA MOUNTAIN, NEVADA Postclosure Public Health and Environmental Standards § 63.304... uncertainties in making long-term projections of the performance of the Yucca Mountain disposal system; (3) Does... the full range of defensible and reasonable parameter distributions rather than only upon extreme...

  12. 10 CFR 63.304 - Reasonable expectation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... REPOSITORY AT YUCCA MOUNTAIN, NEVADA Postclosure Public Health and Environmental Standards § 63.304... uncertainties in making long-term projections of the performance of the Yucca Mountain disposal system; (3) Does... the full range of defensible and reasonable parameter distributions rather than only upon extreme...

  13. 10 CFR 63.304 - Reasonable expectation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... REPOSITORY AT YUCCA MOUNTAIN, NEVADA Postclosure Public Health and Environmental Standards § 63.304... uncertainties in making long-term projections of the performance of the Yucca Mountain disposal system; (3) Does... the full range of defensible and reasonable parameter distributions rather than only upon extreme...

  14. Identifying and clarifying values and reason statements that promote effective food parenting practices, using intensive interviews

    USDA-ARS?s Scientific Manuscript database

    The objective was to generate and test parents' understanding of values and associated reason statements to encourage effective food parenting practices. This study was cross-sectional. Sixteen parents from different ethnic groups (African American, white, and Hispanic) living with their 3- to 5-yea...

  15. Attitudes, Values and Moral Reasoning as Predictors of Delinquency

    ERIC Educational Resources Information Center

    Tarry, Hammond; Emler, Nicholas

    2007-01-01

    Attitudes to institutional authority, strength of support for moral values and maturity of socio-moral reasoning have all been identified as potential predictors of adolescent delinquency. In a sample of 12-15-year-old boys (N = 789), after checking for effects of age, IQ, social background and ethnicity, self-reported delinquency was…

  16. Children's, Adolescents', and Adults' Judgments and Reasoning about Different Methods of Teaching Values

    ERIC Educational Resources Information Center

    Helwig, Charles C.; Ryerson, Rachel; Prencipe, Angela

    2008-01-01

    This study investigated children's, adolescents', and young adults' judgments and reasoning about teaching two values (racial equality and patriotism) using methods that varied in provision for children's rational autonomy, active involvement, and choice. Ninety-six participants (7-8-, 10-11-, and 13-14-year-olds, and college students) evaluated…

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsalafoutas, Ioannis A.; Varsamidis, Athanasios; Thalassinou, Stella

Purpose: To investigate the utility of the nested polymethyl methacrylate (PMMA) phantom (available in many CT facilities for CTDI measurements) as a tool for presenting and comparing how two different CT automatic exposure control (AEC) systems respond to a phantom when various scan parameters and AEC protocols are modified. Methods: By offsetting the phantom's two components (the head phantom and the body ring) halfway along their longitudinal axis, a phantom with three sections of different x-ray attenuation was created. Scan projection radiographs (SPRs) and helical scans of the three-section phantom were performed on a Toshiba Aquilion 64 and a Philips Brilliance 64 CT scanner, with different scan parameter selections [scan direction, pitch factor, slice thickness and reconstruction interval (ST/RI), AEC protocol, and tube potential used for the SPRs]. The dose-length product (DLP) values of each scan were recorded, and the tube current (mA) values of the reconstructed CT images were plotted against the respective z-axis positions on the phantom. Furthermore, noise levels at the center of each phantom section were measured to assess the impact of mA modulation on image quality. Results: The mA modulation patterns of the two CT scanners were very dissimilar. The mA variations were more pronounced for the Aquilion 64, where changes in any of the aforementioned scan parameters affected both the mA modulation curves and the DLP values. However, the noise levels were affected only by changes in pitch, ST/RI, and AEC protocol selections. For the Brilliance 64, changes in pitch affected the mA modulation curves but not the DLP values, whereas only AEC protocol and SPR tube potential selections affected both the mA modulation curves and DLP values. 
The noise levels increased for smaller ST/RI, larger weight-category AEC protocol, and larger SPR tube potential selections. Conclusions: The nested PMMA dosimetry phantom can be effectively utilized to understand CT AEC system performance and the way different scan conditions affect the mA modulation patterns, DLP values, and image noise. However, an in-depth analysis of why these two systems exhibited such different behaviors in response to the same phantom requires further investigation, which is beyond the scope of this study.
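The dose-length product recorded above has a simple definition: the volume CT dose index integrated over the irradiated scan length. A minimal sketch, with made-up numbers:

```python
# DLP = CTDIvol × scan length, the dose metric recorded per scan above.
# The CTDIvol and length values below are hypothetical.
def dlp(ctdi_vol_mgy, scan_length_cm):
    """Dose-length product in mGy·cm."""
    return ctdi_vol_mgy * scan_length_cm

# Comparing two hypothetical AEC protocol selections over the same range:
print(dlp(12.0, 30.0))  # 360.0
print(dlp(9.5, 30.0))   # 285.0
```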

  18. 29 CFR 531.4 - Making determinations of “reasonable cost.”

    Code of Federal Regulations, 2010 CFR

    2010-07-01

... REGULATIONS WAGE PAYMENTS UNDER THE FAIR LABOR STANDARDS ACT OF 1938 Determinations of "Reasonable Cost" and "Fair Value"; Effects of Collective Bargaining Agreements § 531.4 Making determinations of "reasonable...

  19. Stability of haematological parameters and its relevance on the athlete's biological passport model.

    PubMed

    Lombardi, Giovanni; Lanteri, Patrizia; Colombini, Alessandra; Lippi, Giuseppe; Banfi, Giuseppe

    2011-12-01

The stability of haematological parameters is crucial to guarantee accurate and reliable data for implementing and interpreting the athlete's biological passport (ABP). In this model, the values of haemoglobin, reticulocytes and the out-of-doping period (OFF) score (Hb − 60√Ret) are used to monitor the possible variations of those parameters, and also to compare the thresholds developed by the statistical model for the single athlete on the basis of personal values and the variance of parameters in the modal group. Nevertheless, a critical review of the current scientific literature dealing with the stability of the haematological parameters included in the ABP programme, which are used for evaluating the probability of anomalies in the athlete's profile, is currently lacking. We therefore collected information from published studies in order to supply a useful, practical and updated review to sports physicians and haematologists. Some parameters are highly stable, such as haemoglobin and erythrocytes (red blood cells [RBCs]), whereas others (e.g. reticulocytes, mean RBC volume and haematocrit) appear less stable. Regardless of the methodology, the stability of haematological parameters is improved by sample refrigeration. The stability of all parameters is strongly affected by high storage temperatures, whereas the stability of RBCs and haematocrit is affected by initial freezing followed by refrigeration. Transport and rotation of tubes do not substantially influence any haematological parameter except reticulocytes. In all the studies we reviewed that used Sysmex instrumentation, which is recommended for ABP measurements, stability was shown for 72 hours at 4 °C for haemoglobin, RBCs and mean corpuscular haemoglobin concentration (MCHC); up to 48 hours for reticulocytes; and up to 24 hours for haematocrit. In one study, Sysmex instrumentation showed stability extended up to 72 hours at 4 °C for all the parameters. 
There are significant differences among methods and instruments: Siemens Advia shows lower stability than Sysmex with regard to reticulocytes. However, the limit of 36 hours from blood collection to analysis recommended by ABP scientists is reasonable to guarantee analytical quality, when samples are transported at 4 °C and are accompanied by certification that this temperature was maintained. The stability of haematological parameters may be improved, independently of the analytical methodology, by refrigeration of the specimens.
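The OFF-score quoted in the abstract (Hb − 60√Ret) can be computed directly; the input values below are illustrative only.

```python
# OFF-score as defined in the ABP model: haemoglobin [g/L] minus
# 60 times the square root of the reticulocyte percentage.
import math

def off_score(hb_g_per_l, ret_percent):
    return hb_g_per_l - 60.0 * math.sqrt(ret_percent)

# Illustrative values: Hb = 150 g/L, reticulocytes = 1.0 %.
print(off_score(150.0, 1.0))  # 90.0
```

A suppressed reticulocyte count with unchanged haemoglobin raises the score, which is what the passport model flags as a possible post-doping ("OFF") signature.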

  20. Analyzing reflective narratives to assess the ethical reasoning of pediatric residents.

    PubMed

    Moon, Margaret; Taylor, Holly A; McDonald, Erin L; Hughes, Mark T; Beach, Mary Catherine; Carrese, Joseph A

    2013-01-01

A limiting factor in ethics education in medical training has been the difficulty of assessing competence in ethics. This study was conducted to test the concept that content analysis of pediatric residents' personal reflections about ethics experiences can identify changes in ethical sensitivity and reasoning over time. Analysis of written narratives focused on two of our ethics curriculum's goals: 1) to raise sensitivity to ethical issues in everyday clinical practice, and 2) to enhance critical reflection on personal and professional values as they affect patient care. Content analysis of written reflections was guided by a tool developed to identify and assess the level of ethical reasoning in eight domains determined to be important aspects of ethical competence. Based on the assessment of narratives written at two times (12 to 16 months apart) during their training, residents showed significant progress in two specific domains: use of professional values and use of personal values. Residents did not show decline in ethical reasoning in any domain. This study demonstrates that content analysis of personal narratives may provide a useful method for assessing developing ethical sensitivity and reasoning.

  1. The effect of vibrational autoionization on the H2+ X 2Σg+ state rotationally resolved photoionization dynamics

    NASA Astrophysics Data System (ADS)

    Holland, D. M. P.; Shaw, D. A.

    2014-01-01

    The effect of vibrational autoionization on the H2+ X 2Σg+ v+ = 3, N+ state rotationally resolved photoelectron angular distributions and branching ratios has been investigated with a velocity map imaging spectrometer and synchrotron radiation. In photon excitation regions free from the influence of autoionizing Rydberg states, where direct ionization dominates, the photoelectron anisotropy parameter associated with the X 1Σg+ v″ = 0, N″ = 1 → X 2Σg+ v+ = 3, N+ = 1 transition has a value close to the theoretical maximum. However, in the vicinity of a Rydberg state, vibrational autoionization leads to a substantial reduction in anisotropy. The value of the anisotropy parameter associated with the S-branch of the photoelectron spectrum is found to be considerably higher than that predicted under the assumption that the outgoing electron can be represented solely as a p-wave. This suggests that the f-wave contribution must be taken into account to obtain a proper description of the photoionization dynamics. The observed variations in the rotationally resolved branching ratios, in the vicinity of an autoionizing resonance, depend upon the rotational level of the Rydberg state. The rotationally averaged photoelectron anisotropy parameters have been compared with the corresponding, previously calculated, theoretical results and reasonable agreement has been found. The influence of vibrational autoionization on the H2+ X 2Σg+ v+ = 0, 1, 2, 3 vibrational branching ratios has also been investigated, and the experimental results show that, in energy regions encompassing Rydberg states, these ratios deviate strongly from the Franck-Condon factors for direct ionization.
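The anisotropy parameter discussed above enters the standard one-photon photoelectron angular distribution, I(θ) ∝ 1 + β·P₂(cos θ), where β = 2 is the theoretical maximum the abstract refers to. A minimal numerical illustration:

```python
# Photoelectron angular distribution I(θ) ∝ 1 + β·P2(cos θ),
# with P2 the second Legendre polynomial; β ranges from -1 to 2.
import math

def angular_distribution(theta, beta):
    p2 = 0.5 * (3.0 * math.cos(theta) ** 2 - 1.0)
    return 1.0 + beta * p2

# At the maximum β = 2 the distribution is a pure cos² shape:
print(angular_distribution(0.0, 2.0))  # 3.0
```

At β = 2 the intensity vanishes at θ = 90°, so any autoionization-induced reduction of β (as observed near the Rydberg resonances) fills in the perpendicular direction.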

  2. Combined ellipsometry and refractometry technique for characterisation of liquid crystal based nanocomposites.

    PubMed

    Warenghem, Marc; Henninot, Jean François; Blach, Jean François; Buchnev, Oleksandr; Kaczmarek, Malgosia; Stchakovsky, Michel

    2012-03-01

Spectroscopic ellipsometry is a technique especially well suited to measuring the effective optical properties of a composite material. However, when the sample is optically thick and anisotropic, the technique loses accuracy for two reasons: anisotropy means that two parameters have to be determined (the ordinary and extraordinary indices), and optical thickness means a large order of interference. In that case, several dielectric functions can emerge from the fitting procedure with similar mean square errors and no criterion to discriminate the right solution. In this paper, we develop a methodology to overcome that drawback by combining ellipsometry with refractometry. The same sample is used in a total internal reflection (TIR) setup and in a spectroscopic ellipsometer. The number of parameters to be determined by the fitting procedure is reduced by analysing the two spectra together; the correct final solution is found by using the TIR results both as initial values for the parameters and as a check on the final dielectric function. A prefitting routine is developed to enter the right initial values into the fitting procedure and thus approach the right solution. As an example, this methodology is used to analyse the optical properties of BaTiO(3) nanoparticles embedded in a nematic liquid crystal. Such a methodology can also be used to test the validity of mixing laws experimentally, since ellipsometry gives the effective dielectric function, which can then be compared with the dielectric functions of the components of the mixture, as shown for the BaTiO(3)/nematic composite.
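The "reduce the parameters, then cross-check against the TIR reading" idea can be sketched with a simple Cauchy dispersion model, n(λ) = A + B/λ². The model and all numbers below are illustrative assumptions, not the paper's actual fitting procedure.

```python
# Fit a Cauchy dispersion model n(λ) = A + B/λ² (linear in 1/λ²) to a
# synthetic index spectrum, then cross-check the fit at a hypothetical
# TIR wavelength. Everything here is a toy stand-in for the real analysis.
import numpy as np

lam = np.array([0.45, 0.55, 0.65, 0.75])   # wavelengths (µm)
n_meas = 1.50 + 0.004 / lam**2             # synthetic "ellipsometric" indices

B, A = np.polyfit(1.0 / lam**2, n_meas, 1) # linear fit: n = A + B·(1/λ²)
n_at_tir = A + B / 0.589**2                # compare against a TIR reading at 589 nm
print(f"A = {A:.3f}, B = {B:.4f} µm², n(589 nm) = {n_at_tir:.4f}")
```

In the paper's scheme the TIR value would both seed the fit and validate the final result; here it appears only as the cross-check step.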

  3. An effective medium approach to predict the apparent contact angle of drops on super-hydrophobic randomly rough surfaces.

    PubMed

    Bottiglione, F; Carbone, G

    2015-01-14

    The apparent contact angle of large 2D drops with randomly rough self-affine profiles is numerically investigated. The numerical approach is based upon the assumption of large separation of length scales, i.e. it is assumed that the roughness length scales are much smaller than the drop size, thus making it possible to treat the problem through a mean-field like approach relying on the large-separation of scales. The apparent contact angle at equilibrium is calculated in all wetting regimes from full wetting (Wenzel state) to partial wetting (Cassie state). It was found that for very large values of the roughness Wenzel parameter (r(W) > -1/ cos θ(Y), where θ(Y) is the Young's contact angle), the interface approaches the perfect non-wetting condition and the apparent contact angle is almost equal to 180°. The results are compared with the case of roughness on one single scale (sinusoidal surface) and it is found that, given the same value of the Wenzel roughness parameter rW, the apparent contact angle is much larger for the case of a randomly rough surface, proving that the multi-scale character of randomly rough surfaces is a key factor to enhance superhydrophobicity. Moreover, it is shown that for millimetre-sized drops, the actual drop pressure at static equilibrium weakly affects the wetting regime, which instead seems to be dominated by the roughness parameter. For this reason a methodology to estimate the apparent contact angle is proposed, which relies only upon the micro-scale properties of the rough surface.
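The single-scale baseline the abstract compares against is the Wenzel relation, cos θ_app = r_W · cos θ_Y, and the quoted condition r_W > −1/cos θ_Y is exactly where that relation saturates at 180°. A short sketch (the Young angle and roughness values are hypothetical):

```python
# Wenzel relation cos θ_app = r_W · cos θ_Y, clamped to the physical
# range [-1, 1]; the clamp at -1 is the perfect non-wetting limit
# reached when r_W exceeds -1/cos θ_Y.
import math

def wenzel_apparent_angle(theta_y_deg, r_w):
    c = r_w * math.cos(math.radians(theta_y_deg))
    c = max(-1.0, min(1.0, c))
    return math.degrees(math.acos(c))

# Hydrophobic Young angle 110°; -1/cos(110°) ≈ 2.92, so r_W = 3.5 saturates:
print(wenzel_apparent_angle(110.0, 1.0))   # smooth surface: 110°
print(wenzel_apparent_angle(110.0, 3.5))   # rough surface: 180°
```

The paper's point is that a randomly rough multi-scale surface reaches large apparent angles at a given r_W sooner than this single-scale formula suggests.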

  4. Study of Desert Dust Events over the Southwestern Iberian Peninsula in Year 2000: Two Case Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cachorro, V. E.; Vergaz, R.; de Frutos, A. M.

    2006-03-07

Strong desert dust events occurring in 2000 over the southwestern Atlantic coast of the Iberian Peninsula are detected and evaluated by means of the TOMS Aerosol Index (A.I.) at three different sites: Funchal (Madeira Island, Portugal), Lisboa (Portugal), and El Arenosillo (Huelva, Spain). At the El Arenosillo station, measurements from an AERONET Cimel sunphotometer also allow retrieval of the spectral AOD and the derived Ångström coefficient α. After using different threshold values of these parameters, we conclude that it is difficult to establish reliable and robust criteria for an automatic estimation of the number of dust episodes and the total number of dusty days per year. As a result, additional information, such as air-mass trajectories, was used to improve the estimation, from which reasonable results were obtained (although some manual editing was still needed). A detailed characterization of two selected desert dust episodes, a strong event in winter and another of less intensity in summer, was carried out using AOD derived from Brewer spectrometer measurements. Size distribution parameters and radiative properties, such as the refractive index and the aerosol single scattering albedo derived from Cimel data, were analyzed in detail for one of these two case studies. Although specific to this dust episode, the retrieved range of values of these parameters clearly reflects the characteristics of desert aerosols. Back-trajectory analysis, synoptic weather maps and satellite images were also considered together as supporting data to assess the desert aerosol characterization in this region of study.

  5. On the role of the Kelvin wave in the westerly phase of the semiannual zonal wind oscillation

    NASA Technical Reports Server (NTRS)

    Dunkerton, T.

    1979-01-01

    The role of the Kelvin wave, discovered by Hirota (1978), in producing the westerly accelerations of the semiannual zonal wind oscillation in the tropical upper stratosphere is examined quantitatively. It is shown that, for reasonable values of the wave parameters, this Kelvin wave could indeed give rise to the observed accelerations. For the thermal damping rates of Dickinson (1973), the most likely range of phase speeds for a wavenumber 1 disturbance is from 45 to 60 m/sec. For 'photochemically accelerated' damping rates (Blake and Lindzen, 1973), a phase speed in excess of 70 m/sec would be required. The possibility of a significant modulation of the semiannual westerlies by the quasi-biennial oscillation is also suggested.

  6. The normalization heuristic: an untested hypothesis that may misguide medical decisions.

    PubMed

    Aberegg, Scott K; O'Brien, James M

    2009-06-01

    Medical practice is increasingly informed by the evidence from randomized controlled trials. When such evidence is not available, clinical hypotheses based on pathophysiological reasoning and common sense guide clinical decision making. One commonly utilized general clinical hypothesis is the assumption that normalizing abnormal laboratory values and physiological parameters will lead to improved patient outcomes. We refer to the general use of this clinical hypothesis to guide medical therapeutics as the "normalization heuristic". In this paper, we operationally define this heuristic and discuss its limitations as a rule of thumb for clinical decision making. We review historical and contemporaneous examples of normalization practices as empirical evidence for the normalization heuristic and to highlight its frailty as a guide for clinical decision making.

  7. Proceedings of the Symposium on Fluid-Solid Surface Intractions (2nd) Held at the Naval Ship Research and Development Center, Bethesda, Maryland, June 5-7, 1974,

    DTIC Science & Technology

    1974-11-29

It is reasonable to define 0.01 < C/δ as a boundary to the slip flow regime… The parameter δ/Z is the same as that given in section 4; again, values below… Selective adsorption data for ⁴He on clean NaF (after Meyers et al., reference 30).

  8. Oil and gas reserves estimates

    USGS Publications Warehouse

    Harrell, R.; Gajdica, R.; Elliot, D.; Ahlbrandt, T.S.; Khurana, S.

    2005-01-01

    This article is a summary of a panel session at the 2005 Offshore Technology Conference. Oil and gas reserves estimates are further complicated with the expanding importance of the worldwide deepwater arena. These deepwater reserves can be analyzed, interpreted, and conveyed in a consistent, reliable way to investors and other stakeholders. Continually improving technologies can lead to improved estimates of production and reserves, but the estimates are not necessarily recognized by regulatory authorities as an indicator of "reasonable certainty," a term used since 1964 to describe proved reserves in several venues. Solutions are being debated in the industry to arrive at a reporting mechanism that generates consistency and at the same time leads to useful parameters in assessing a company's value without compromising confidentiality. Copyright 2005 Offshore Technology Conference.

  9. Wavelet analysis of polarization azimuths maps for laser images of myocardial tissue for the purpose of diagnosing acute coronary insufficiency

    NASA Astrophysics Data System (ADS)

    Wanchuliak, O. Ya.; Peresunko, A. P.; Bakko, Bouzan Adel; Kushnerick, L. Ya.

    2011-09-01

This paper presents the foundations of a large-scale localized wavelet-polarization analysis of inhomogeneous laser images of histological sections of myocardial tissue. Relations between the structures of wavelet coefficients and causes of death were identified. An optical model of the polycrystalline networks of myocardium protein fibrils is presented. A technique for determining the coordinate distribution of the polarization azimuth across the points of laser images of myocardium histological sections is suggested. The results of investigating the interrelation between the values of statistical parameters (statistical moments of the 1st-4th order), which characterize the distributions of wavelet coefficients of polarization maps of myocardium layers, and the causes of death are presented.

  10. An Evaluation of the Bouwer and Rice Method of Slug Test Analysis

    NASA Astrophysics Data System (ADS)

    Brown, David L.; Narasimhan, T. N.; Demir, Z.

    1995-05-01

    The method of Bouwer and Rice (1976) for analyzing slug test data is widely used to estimate hydraulic conductivity (K). Based on steady state flow assumptions, this method is specifically intended to be applicable to unconfined aquifers. Therefore it is of practical value to investigate the limits of accuracy of the K estimates obtained with this method. Accordingly, using a numerical model for transient flow, we evaluate the method from two perspectives. First, we apply the method to synthetic slug test data and study the error in estimated values of K. Second, we analyze the logical basis of the method. Parametric studies helped assess the role of the effective radius parameter, specific storage, screen length, and well radius on the estimated values of K. The difference between unconfined and confined systems was studied via conditions on the upper boundary of the flow domain. For the cases studied, the Bouwer and Rice analysis was found to give good estimates of K, with errors ranging from 10% to 100%. We found that the estimates of K were consistently superior to those obtained with Hvorslev's (1951) basic time lag method. In general, the Bouwer and Rice method tends to underestimate K, the greatest errors occurring in the presence of a damaged zone around the well or when the top of the screen is close to the water table. When the top of the screen is far removed from the upper boundary of the system, no difference is manifest between confined and unconfined conditions. It is reasonable to infer from the simulated results that when the screen is close to the upper boundary, the results of the Bouwer and Rice method agree more closely with a "confined" idealization than an "unconfined" idealization. In effect, this method treats the aquifer system as an equivalent radial flow permeameter with an effective radius, Re, which is a function of the flow geometry. Our transient simulations suggest that Re varies with time and specific storage. 
Thus the effective radius may be reasonably viewed as a time-averaged mean value. The fact that the method provides reasonable estimates of hydraulic conductivity suggests that the empirical, electric analog experiments of Bouwer and Rice have yielded shape factors that are better than the shape factors implicit in the Hvorslev method.
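The estimate evaluated above follows the standard Bouwer and Rice relation, K = [r_c² ln(R_e/r_w) / (2 L_e)] · (1/t) ln(y_0/y_t). In the sketch below R_e is simply taken as given, whereas the method itself obtains ln(R_e/r_w) from empirical shape factors; all well dimensions and heads are hypothetical.

```python
# Bouwer and Rice slug-test estimate of hydraulic conductivity K.
# rc: casing radius, rw: well radius, Re: effective radius, Le: screen
# length, t: elapsed time, y0/yt: initial and current head displacement.
import math

def bouwer_rice_K(rc, rw, Re, Le, t, y0, yt):
    return (rc**2 * math.log(Re / rw)) / (2.0 * Le) * (1.0 / t) * math.log(y0 / yt)

# Hypothetical well: rc = rw = 0.05 m, Re = 10 m, Le = 2 m,
# head recovering from 1.0 m to 0.3 m displacement over 600 s.
K = bouwer_rice_K(0.05, 0.05, 10.0, 2.0, 600.0, 1.0, 0.3)
print(f"K = {K:.2e} m/s")
```

The paper's finding that R_e effectively varies with time and specific storage means the ln(R_e/r_w) term above is the main source of the 10-100% errors reported.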

  11. Students' flexible use of ontologies and the value of tentative reasoning: Examples of conceptual understanding in three canonical topics of quantum mechanics

    NASA Astrophysics Data System (ADS)

    Hoehn, Jessica R.; Finkelstein, Noah D.

    2018-06-01

    As part of a research study on student reasoning in quantum mechanics, we examine students' use of ontologies, or the way students' categorically organize entities they are reasoning about. In analyzing three episodes of focus group discussions with modern physics students, we present evidence of the dynamic nature of ontologies, and refine prior theoretical frameworks for thinking about dynamic ontologies. We find that in a given reasoning episode ontologies can be dynamic in construction (referring to when the reasoner constructs the ontologies) or application (referring to which ontologies are applied in a given reasoning episode). In our data, we see instances of students flexibly switching back and forth between parallel stable structures as well as constructing and negotiating new ontologies in the moment. Methodologically, we use a collective conceptual blending framework as an analytic tool for capturing student reasoning in groups. In this research, we value the messiness of student reasoning and argue that reasoning in a tentative manner can be productive for students learning quantum mechanics. As such, we shift away from a binary view of student learning which sees students as either having the correct answer or not.

  12. The importance of proper crystal-chemical and geometrical reasoning demonstrated using layered single and double hydroxides

    PubMed Central

    Richardson, Ian G.

    2013-01-01

    Atomistic modelling techniques and Rietveld refinement of X-ray powder diffraction data are widely used but often result in crystal structures that are not realistic, presumably because the authors neglect to check the crystal-chemical plausibility of their structure. The purpose of this paper is to reinforce the importance and utility of proper crystal-chemical and geometrical reasoning in structural studies. It is achieved by using such reasoning to generate new yet fundamental information about layered double hydroxides (LDH), a large, much-studied family of compounds. LDH phases are derived from layered single hydroxides by the substitution of a fraction (x) of the divalent cations by trivalent. Equations are derived that enable calculation of x from the a parameter of the unit cell and vice versa, which can be expected to be of widespread utility as a sanity test for extant and future structure determinations and computer simulation studies. The phase at x = 0 is shown to be an α form of divalent metal hydroxide rather than the β polymorph. Crystal-chemically sensible model structures are provided for β-Zn(OH)2 and Ni- and Mg-based carbonate LDH phases that have any trivalent cation and any value of x, including x = 0 [i.e. for α-M(OH)2·mH2O phases]. PMID:23719702

  13. Perioperative Characteristics of Siblings Undergoing Liver or Kidney Transplant.

    PubMed

    Ersoy, Zeynep; Ozdemirkan, Aycan; Pirat, Arash; Torgay, Adnan; Arslan, Gulnaz; Haberal, Mehmet

    2015-11-01

Reasons for chronic liver and kidney failure vary, and sometimes more than one family member is affected and requires a transplant. The aim of this study was to examine the similarities and differences between the perioperative characteristics of siblings undergoing liver or kidney transplant. The medical records of 6 pairs of siblings who underwent liver transplant and 4 pairs of siblings who underwent kidney transplant at Baskent University Hospital between 1989 and 2014 were retrospectively analyzed. Collected data included demographic features; comorbidities; reasons for liver and kidney failure; perioperative laboratory values; intraoperative hemodynamic parameters; use and volume of crystalloids, colloids, blood products, cell saver system, and albumin; duration of anesthesia; urine output; and postoperative follow-up data. The mean age of the 6 sibling pairs who underwent liver transplant was 16.3 ± 12.2 years. All 12 patients had Child-Pugh grade B cirrhosis, with a mean disease duration of 7.8 ± 3.9 years. There were no significant differences between siblings with respect to intraoperative blood product transfusion, crystalloid and colloid fluid replacements, hypotension frequency, blood gas analyses, urinary output, duration of the anhepatic phase, inotropic agent administration, postoperative laboratory values, need for mechanical ventilation and vasopressors, occurrence of acute renal failure and infections, and duration of intensive care unit stay (P > .05). The mean age of the 4 sibling pairs who underwent kidney transplant was 21.3 ± 6.4 years, with a mean duration of renal insufficiency of 2.2 ± 1.6 years. There were no significant differences between siblings with respect to intraoperative crystalloid and colloid fluid administration, duration of anesthesia, intraoperative mannitol and furosemide administration, and postoperative laboratory values (P > .05). 
In conclusion, the 6 sibling pairs who underwent liver transplant and 4 sibling pairs who underwent kidney transplant in our cohort had similar perioperative characteristics.

  14. Effects and mechanistic aspects of absorbing organic compounds by coking coal.

    PubMed

    Ning, Kejia; Wang, Junfeng; Xu, Hongxiang; Sun, Xianfeng; Huang, Gen; Liu, Guowei; Zhou, Lingmei

    2017-11-01

Coal is a porous medium and a natural absorbent. It can still be used for its original purpose after adsorbing organic compounds, so its value is not reduced while the pollutants are recycled, and zero discharge of coking wastewater can then be achieved through system recirculation. Thus, a novel method of industrial organic wastewater treatment using adsorption on coal is introduced. Coking coal was used as an adsorbent in batch adsorption experiments. The quinoline, indole, pyridine and phenol removal efficiencies of coal adsorption were investigated. In addition, several operating parameters that impact removal efficiency, such as coking coal consumption, oscillation contact time, initial concentration and pH value, were also investigated. The coking coal exhibited properties well suited for the adsorption of organics. The experimental data were fitted to Langmuir and Freundlich isotherms as well as the Temkin and Redlich-Peterson (R-P) models. The Freundlich isotherm model provided a reasonable description of the adsorption process. Furthermore, the purification mechanism of the adsorption of organic compounds on coking coal was analysed.
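The Freundlich isotherm named above, q = K_F · C^(1/n), is usually fitted in its linearized form ln q = ln K_F + (1/n) ln C. The data below are synthetic, not the paper's measurements:

```python
# Fit the Freundlich isotherm q = K_F · C^(1/n) via its log-linear form.
# The concentration/uptake data are generated, with K_F = 2 and 1/n = 0.5.
import numpy as np

C = np.array([5.0, 10.0, 20.0, 40.0, 80.0])  # equilibrium concentration (mg/L)
q = 2.0 * C ** 0.5                           # synthetic adsorbed amount (mg/g)

inv_n, ln_KF = np.polyfit(np.log(C), np.log(q), 1)
K_F = np.exp(ln_KF)
print(f"K_F = {K_F:.2f}, 1/n = {inv_n:.2f}")  # K_F = 2.00, 1/n = 0.50
```

A value of 1/n between 0 and 1 indicates favourable adsorption, which is the usual basis for judging the Freundlich fit "reasonable".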

  15. Quasinormal Modes of Charged Dilaton Black Holes and Their Entropy Spectra

    NASA Astrophysics Data System (ADS)

    Sakalli, I.

    2013-08-01

    In this study, we employ the scalar perturbations of the charged dilaton black hole (CDBH) found by Chan, Horne and Mann (CHM), described by an action which emerges in the low-energy limit of string theory. A CDBH is neither an asymptotically flat (AF) nor a non-asymptotically flat (NAF) spacetime. Depending on the value of its dilaton parameter a, it has both Schwarzschild and linear dilaton black hole (LDBH) limits. We compute the complex frequencies of the quasinormal modes (QNMs) of the CDBH by considering small perturbations around its horizon. By using the highly damped QNMs in the process prescribed by Maggiore, we obtain the quantum entropy and area spectra of these black holes (BHs). Although the QNM frequencies are tuned by a, we show that the quantum spectra do not depend on a, and they are equally spaced. On the other hand, the obtained value of the undetermined dimensionless constant ɛ is double Bekenstein's result. The possible reason for this discrepancy is also discussed.

  16. Technology and characterization of Thin-Film Transistors (TFTs) with a-IGZO semiconductor and high-k dielectric layer

    NASA Astrophysics Data System (ADS)

    Mroczyński, R.; Wachnicki, Ł.; Gierałtowska, S.

    2016-12-01

    In this work, we present the design of the technology and fabrication of TFTs with an amorphous IGZO semiconductor and a high-k gate dielectric layer in the form of hafnium oxide (HfOx). In the course of this work, the IGZO fabrication was optimized by means of the Taguchi orthogonal tables approach in order to obtain an active semiconductor with a reasonably high concentration of charge carriers, low roughness and relatively high mobility. The obtained thin-film transistors are characterized by very good electrical parameters, i.e., an effective mobility (μeff ≍ 12.8 cm2V-1s-1) significantly higher than that of a-Si TFTs (μeff ≍ 1 cm2V-1s-1). However, the value of the sub-threshold swing (i.e., 640 mV/dec) indicates that the IGZO/HfOx interface is characterized by a high interface state density (Dit), which in turn demands further optimization for future applications of the demonstrated TFT structures.

  17. Crystal-Site-Selective Spectrum of Fe3BO6 by Synchrotron Mössbauer Diffraction with Pure Nuclear Bragg Scattering

    NASA Astrophysics Data System (ADS)

    Nakamura, Shin; Mitsui, Takaya; Fujiwara, Kosuke; Ikeda, Naoshi; Kurokuzu, Masayuki; Shimomura, Susumu

    2017-08-01

    We have succeeded in obtaining the crystal-site-selective spectra of the collinear antiferromagnet Fe3BO6 using a synchrotron Mössbauer diffractometer with pure nuclear Bragg scattering at SPring-8 BL11XU. Well-resolved 300, 500, and 700 reflection spectra, having asymmetric line shapes owing to the higher-order interference effect between the nuclear energy levels, were quantitatively analyzed using a formula based on the dynamical theory of diffraction. Reasonable hyperfine parameters were obtained. The intensity ratio of Fe1 to Fe2 subspectra is in accordance with the nuclear structure factor. However, when the spectrum is measured at the peak position of the rocking curve (very near the Bragg position), the value of the center shift deviates from its intrinsic value. This is also due to the dynamical effect of γ-ray diffraction. To avoid this problem, it is necessary to use diffraction angles near the foot of the rocking curve, approximately 0.02° apart from the peak position.

  18. Monthly streamflow forecasting based on hidden Markov model and Gaussian Mixture Regression

    NASA Astrophysics Data System (ADS)

    Liu, Yongqi; Ye, Lei; Qin, Hui; Hong, Xiaofeng; Ye, Jiajun; Yin, Xingli

    2018-06-01

    Reliable streamflow forecasts can be highly valuable for water resources planning and management. In this study, we combined a hidden Markov model (HMM) and Gaussian Mixture Regression (GMR) for probabilistic monthly streamflow forecasting. The HMM is initialized using a kernelized K-medoids clustering method, and the Baum-Welch algorithm is then executed to learn the model parameters. GMR derives a conditional probability distribution for the predictand given covariate information, including the antecedent flow at a local station and two surrounding stations. The performance of HMM-GMR was verified based on the mean square error and continuous ranked probability score skill scores. The reliability of the forecasts was assessed by examining the uniformity of the probability integral transform values. The results show that HMM-GMR obtained reasonably high skill scores and the uncertainty spread was appropriate. Different HMM states were assumed to be different climate conditions, which would lead to different types of observed values. We demonstrated that the HMM-GMR approach can handle multimodal and heteroscedastic data.
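    As a sketch of the GMR step, the conditional mean E[y | x] under a joint Gaussian mixture can be computed from the component responsibilities; the two-component 1-D mixture parameters below are hand-picked illustrations, not values learned by the paper's HMM:

    ```python
    import numpy as np

    weights = np.array([0.5, 0.5])                 # mixture weights (hypothetical)
    mu = np.array([[0.0, 0.0], [4.0, 8.0]])        # component means over (x, y)
    cov = np.array([[[1.0, 0.8], [0.8, 1.0]],      # component covariances
                    [[1.0, 0.5], [0.5, 1.0]]])

    def gmr_predict(x):
        """Conditional mean E[y | x] under the joint Gaussian mixture."""
        # responsibilities: marginal density of x under each component
        px = weights * np.exp(-0.5 * (x - mu[:, 0])**2 / cov[:, 0, 0]) \
             / np.sqrt(2 * np.pi * cov[:, 0, 0])
        w = px / px.sum()
        # per-component conditional means: mu_y + (S_yx / S_xx) * (x - mu_x)
        cond = mu[:, 1] + cov[:, 1, 0] / cov[:, 0, 0] * (x - mu[:, 0])
        return float(w @ cond)

    print(round(gmr_predict(0.0), 3))  # → 0.002
    ```

    In the paper's setting, x would collect the antecedent flows at the local and surrounding stations and the mixture would be derived from the HMM states.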

  19. Detection and Modeling of High-Dimensional Thresholds for Fault Detection and Diagnosis

    NASA Technical Reports Server (NTRS)

    He, Yuning

    2015-01-01

    Many Fault Detection and Diagnosis (FDD) systems use discrete models for detection and reasoning. To obtain categorical values like "oil pressure too high", analog sensor values need to be discretized using a suitable threshold. Time series of analog and discrete sensor readings are processed and discretized as they come in. This task is usually performed by the "wrapper code" of the FDD system, together with signal preprocessing and filtering. In practice, selecting the right threshold is very difficult, because it heavily influences the quality of diagnosis. If a threshold causes the alarm to trigger even in nominal situations, false alarms will be the consequence. On the other hand, if the threshold setting does not trigger in an off-nominal condition, important alarms might be missed, potentially causing hazardous situations. In this paper, we describe in detail the underlying statistical modeling techniques and algorithm, as well as the Bayesian method for selecting the most likely shape and its parameters. Our approach is illustrated by several examples from the aerospace domain.
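    The threshold-discretization step performed by the FDD wrapper code can be sketched as follows; the threshold, hysteresis band, and labels are hypothetical, not values from the paper:

    ```python
    def discretize(readings, threshold=80.0, hysteresis=2.0):
        """Map analog oil-pressure readings to 'nominal'/'too_high' labels.

        A hysteresis band around the threshold suppresses alarm chatter
        when the signal oscillates near the boundary.
        """
        state, labels = "nominal", []
        for r in readings:
            if state == "nominal" and r > threshold + hysteresis:
                state = "too_high"
            elif state == "too_high" and r < threshold - hysteresis:
                state = "nominal"
            labels.append(state)
        return labels

    print(discretize([75, 81, 83, 81, 77, 85]))
    ```

    The example shows why threshold choice matters: the reading 81 is ambiguous and is classified differently depending on the current state, which is exactly the kind of behavior the paper's Bayesian shape-selection method is meant to model.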

  20. Evidential reasoning research on intrusion detection

    NASA Astrophysics Data System (ADS)

    Wang, Xianpei; Xu, Hua; Zheng, Sheng; Cheng, Anyu

    2003-09-01

    This paper addresses two fields: the Dempster-Shafer (D-S) theory of evidence and network intrusion detection. It discusses how to apply this probabilistic reasoning, as an AI technique, to an Intrusion Detection System (IDS). The paper establishes the application model, describes the new mechanism of reasoning and decision-making, and analyses how to implement the model based on the detection of synscan activities on the network. The results suggest that, provided reasonable probability values are assigned at the beginning, the engine can, according to the rules of evidence combination and hierarchical reasoning, compute the values of belief and finally inform the administrators of the qualities of the traced activities -- intrusions, normal activities or abnormal activities.
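    The evidence-combination step of D-S theory can be sketched with Dempster's rule; the hypotheses and mass assignments below are hypothetical, not taken from the paper:

    ```python
    from itertools import product

    def combine(m1, m2):
        """Combine two mass functions (dicts over frozenset hypotheses)
        with Dempster's rule, renormalising away the conflicting mass."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb          # mass assigned to the empty set
        return {h: w / (1.0 - conflict) for h, w in combined.items()}

    I, N = frozenset({"intrusion"}), frozenset({"normal"})
    m1 = {I: 0.6, N: 0.1, I | N: 0.3}        # evidence from one detector
    m2 = {I: 0.5, N: 0.2, I | N: 0.3}        # evidence from another detector
    belief = combine(m1, m2)
    print(round(belief[I], 3))               # → 0.759
    ```

    Combining two moderately confident pieces of evidence yields a stronger belief in "intrusion" than either source alone, which is the effect the detection engine exploits.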

  1. Concurrently adjusting interrelated control parameters to achieve optimal engine performance

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-12-01

    Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.

  2. Structure of the Large Magellanic Cloud from near infrared magnitudes of red clump stars

    NASA Astrophysics Data System (ADS)

    Subramanian, S.; Subramaniam, A.

    2013-04-01

    Context. The structural parameters of the disk of the Large Magellanic Cloud (LMC) are estimated. Aims: We used the JH photometric data of red clump (RC) stars from the Magellanic Cloud Point Source Catalog (MCPSC) obtained from the InfraRed Survey Facility (IRSF) to estimate the structural parameters of the LMC disk, such as the inclination, i, and the position angle of the line of nodes (PAlon), φ. Methods: The observed LMC region is divided into several sub-regions, and stars in each region are cross-identified with the optically identified RC stars to obtain the near infrared magnitudes. The peak values of H magnitude and (J - H) colour of the observed RC distribution are obtained by fitting a profile to the distributions and by taking the average value of magnitude and colour of the RC stars in the bin with the largest number of stars. Then the dereddened peak H0 magnitude of the RC stars in each sub-region is obtained from the peak values of H magnitude and (J - H) colour of the observed RC distribution. The right ascension (RA), declination (Dec), and relative distance from the centre of each sub-region are converted into x, y, and z Cartesian coordinates. A weighted least-squares plane-fitting method is applied to these x, y, z data to estimate the structural parameters of the LMC disk. Results: An intrinsic (J - H)0 colour of 0.40 ± 0.03 mag in the Simultaneous three-colour InfraRed Imager for Unbiased Survey (SIRIUS) IRSF filter system is estimated for the RC stars in the LMC and a reddening map based on (J - H) colour of the RC stars is presented. When the peaks of the RC distribution were identified by averaging, an inclination of 25°.7 ± 1°.6 and a PAlon = 141°.5 ± 4°.5 were obtained. We estimate a distance modulus, μ = 18.47 ± 0.1 mag to the LMC. Extra-planar features which are both in front of and behind the fitted plane are identified. They match with the optically identified extra-planar features. The bar of the LMC is found to be part of the disk within 500 pc. 
Conclusions: The estimates of the structural parameters are found to be independent of the photometric bands used for the analysis. The radial variation of the structural parameters is also studied. We find that the inner disk, within ~3°.0, is less inclined and has a larger value of PAlon when compared to the outer disk. Our estimates are compared with the literature values, and the possible reasons for the small discrepancies found are discussed.
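    The weighted least-squares plane fit described in the Methods can be sketched as follows; the points, weights, and plane coefficients are synthetic, and only the inclination of the fitted plane to the reference plane is derived (the paper's PAlon convention is not reproduced here):

    ```python
    import numpy as np

    # Synthetic sub-region coordinates on a known plane z = 0.3x - 0.2y + 0.1
    rng = np.random.default_rng(0)
    x, y = rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50)
    z = 0.3 * x - 0.2 * y + 0.1
    w = rng.uniform(0.5, 1.5, 50)                # per-point weights (illustrative)

    # Weighted least squares: scale both sides by sqrt(weight)
    A = np.column_stack([x, y, np.ones_like(x)])
    W = np.sqrt(w)[:, None]
    (a, b, c), *_ = np.linalg.lstsq(W * A, np.sqrt(w) * z, rcond=None)

    # Inclination of the fitted plane to the x-y reference plane
    inc = np.degrees(np.arctan(np.hypot(a, b)))
    print(round(inc, 1))                         # → 19.8
    ```

    With noiseless synthetic data the fit recovers the input plane exactly; with real RC-star data the weights would reflect the per-region distance uncertainties.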

  3. Safe space. How you can define fair market value for medical-office building lease agreements with hospitals.

    PubMed

    Murray, Chuck

    2007-04-01

    When entering into office-space lease agreements with hospitals, physician practice administrators need to pay close attention to the federal anti-kickback statute and the Stark law. Compliance with these regulations calls for adherence to fair market value and commercial reasonableness--blurry terms open to interpretation. This article provides you with a framework for defining fair market value and commercial reasonableness in regard to real-estate transactions with hospitals.

  4. Organ doses for reference adult male and female undergoing computed tomography estimated by Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Choonsik; Kim, Kwang Pyo; Long, Daniel

    2011-03-15

    Purpose: To develop a computed tomography (CT) organ dose estimation method designed to readily provide organ doses in a reference adult male and female for different scan ranges, and to investigate the degree to which existing commercial programs can reasonably match organ doses defined in these more anatomically realistic adult hybrid phantoms. Methods: The x-ray fan beam in the SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code MCNPX2.6. The simulated CT scanner model was validated through comparison with experimentally measured lateral free-in-air dose profiles and computed tomography dose index (CTDI) values. The reference adult male and female hybrid phantoms were coupled with the established CT scanner model following arm removal to simulate clinical head and other body region scans. A set of organ dose matrices were calculated for a series of consecutive axial scans ranging from the top of the head to the bottom of the phantoms with a beam thickness of 10 mm and tube potentials of 80, 100, and 120 kVp. The organ doses for head, chest, and abdomen/pelvis examinations were calculated based on the organ dose matrices and compared to those obtained from two commercial programs, CT-EXPO and CTDOSIMETRY. Organ dose calculations were repeated for an adult stylized phantom by using the same simulation method used for the adult hybrid phantom. Results: Comparisons of both lateral free-in-air dose profiles and CTDI values through experimental measurement with the Monte Carlo simulations showed good agreement to within 9%. Organ doses for head, chest, and abdomen/pelvis scans reported in the commercial programs exceeded those from the Monte Carlo calculations in both the hybrid and stylized phantoms in this study, sometimes by orders of magnitude. 
Conclusions: The organ dose estimation method and dose matrices established in this study readily provide organ doses for a reference adult male and female for different CT scan ranges and technical parameters. Organ doses from existing commercial programs do not reasonably match organ doses calculated for the hybrid phantoms due to differences in phantom anatomy, as well as differences in organ dose scaling parameters. The organ dose matrices developed in this study will be extended to cover different technical parameters, CT scanner models, and various age groups.

  5. Management, Skills and Creativity: The Purpose and Value of Instrumental Reasoning in Education Discourse

    ERIC Educational Resources Information Center

    Gibson, Howard

    2011-01-01

    Reason is a heterogeneous word with many meanings and functions. Instrumental reasoning is the "useful but blind" variant that, for Horkheimer, presupposes "the adequacy of procedures for purposes more or less taken for granted and supposedly self-explanatory". The paper argues that the root of instrumental reasoning is to be…

  6. Ocean acidification affects parameters of immune response and extracellular pH in tropical sea urchins Lytechinus variegatus and Echinometra lucunter.

    PubMed

    Leite Figueiredo, Débora Alvares; Branco, Paola Cristina; Dos Santos, Douglas Amaral; Emerenciano, Andrews Krupinski; Iunes, Renata Stecca; Shimada Borges, João Carlos; Machado Cunha da Silva, José Roberto

    2016-11-01

    The rising concentration of atmospheric CO2 from anthropogenic activities is changing the chemistry of the oceans, resulting in a decreased pH. Several studies have shown that the decrease in pH can affect calcification rates and reproduction of marine invertebrates, but little attention has been drawn to their immune response. Thus this study evaluated, in two adult tropical sea urchin species, Lytechinus variegatus and Echinometra lucunter, the effects of ocean acidification over periods of 24 h and 5 days on parameters of the immune response and the extracellular acid-base balance, and the ability to recover these parameters. To this end, the phagocytic capacity (PC), the phagocytic index (PI), the capacity of cell adhesion, cell spreading, the cell spreading area of phagocytic amebocytes in vitro, and the coelomic fluid pH were analyzed in animals exposed to a pH of 8.0 (control group), 7.6 and 7.3, the experimental pH values being those predicted by the IPCC for the future. Furthermore, a recovery test was conducted to verify whether the animals are able to restore these physiological parameters after being re-exposed to control conditions. Both species presented a significant decrease in PC, in the pH of the coelomic fluid and in the cell spreading area. In addition, Echinometra lucunter showed a significant decrease in cell spreading and significant differences in coelomocyte proportions. The recovery test showed that the PC of both species increased, although it remained below control values; even so, it was still significantly higher than in animals exposed to acidified seawater, indicating that with the re-establishment of the pH value the phagocytic capacity of cells tends to return to control conditions. These results demonstrate that the immune system and the coelomic fluid pH of these animals can be affected by ocean acidification. However, the effects of a short-term exposure can be reversible if the natural values are re-established. 
Thus, the effects of ocean acidification could lead to consequences for pathogen resistance and survival of these sea urchin species. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Statistical analysis of modal parameters of a suspension bridge based on Bayesian spectral density approach and SHM data

    NASA Astrophysics Data System (ADS)

    Li, Zhijun; Feng, Maria Q.; Luo, Longxi; Feng, Dongming; Xu, Xiuli

    2018-01-01

    Uncertainty in modal parameter estimation appears to a significant extent in the structural health monitoring (SHM) practice of civil engineering, due to environmental influences and modeling errors. Reasonable methodologies are needed for processing this uncertainty. Bayesian inference can provide a promising and feasible identification solution for the purpose of SHM. However, there has been relatively little research on the application of the Bayesian spectral method to modal identification using SHM data sets. To extract modal parameters from the large data sets collected by an SHM system, the Bayesian spectral density algorithm was applied to address the uncertainty of mode extraction from the output-only response of a long-span suspension bridge. The posterior most probable values of the modal parameters and their uncertainties were estimated through Bayesian inference. A long-term variation and statistical analysis was performed using the sensor data sets collected from the SHM system of the suspension bridge over a one-year period. The t location-scale distribution was shown to be a better candidate function for the frequencies of the lower modes. On the other hand, the Burr distribution provided the best fit to the higher modes, which are sensitive to temperature. In addition, wind-induced variation of the modal parameters was also investigated. It was observed that both the damping ratios and modal forces increased during periods of typhoon excitation. Meanwhile, the modal damping ratios exhibit significant correlation with the spectral intensities of the corresponding modal forces.

  8. Gaussian process model for extrapolation of scattering observables for complex molecules: From benzene to benzonitrile

    NASA Astrophysics Data System (ADS)

    Cui, Jie; Li, Zhiying; Krems, Roman V.

    2015-10-01

    We consider the problem of extrapolating the collision properties of a large polyatomic molecule A-H to make predictions of the dynamical properties for another molecule related to A-H by the substitution of the H atom with a small molecular group X, without explicitly computing the potential energy surface for A-X. We assume that the effect of the -H → -X substitution is embodied in a multidimensional function with unknown parameters characterizing the change of the potential energy surface. We propose to apply the Gaussian Process model to determine the dependence of the dynamical observables on the unknown parameters. This can be used to produce an interval of the observable values which corresponds to physical variations of the potential parameters. We show that the Gaussian Process model combined with classical trajectory calculations can be used to obtain the dependence of the cross sections for collisions of C6H5CN with He on the unknown parameters describing the interaction of the He atom with the CN fragment of the molecule. The unknown parameters are then varied within physically reasonable ranges to produce a prediction uncertainty of the cross sections. The results are normalized to the cross sections for He — C6H6 collisions obtained from quantum scattering calculations in order to provide a prediction interval of the thermally averaged cross sections for collisions of C6H5CN with He.
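    A minimal Gaussian Process regression in the spirit of the paper's model can be sketched as follows; the RBF kernel, training points, and hyperparameters are illustrative stand-ins, not the cross-section data or kernel actually used by the authors:

    ```python
    import numpy as np

    def rbf(a, b, length=1.0):
        """Squared-exponential (RBF) kernel between two 1-D point sets."""
        return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

    # Toy training data: the GP interpolates a smooth function from few samples
    X = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.sin(X)
    noise = 1e-6                                 # jitter for numerical stability

    K = rbf(X, X) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)                # precomputed weights K^{-1} y

    def predict(xs):
        """Posterior mean of the GP at the query points xs."""
        return rbf(np.asarray(xs, dtype=float), X) @ alpha

    print(np.round(predict([1.5]), 2))
    ```

    In the paper's setting, the inputs would be the unknown potential parameters and the outputs the computed cross sections; sampling the parameters over physically reasonable ranges then yields a prediction interval.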

  9. Supplemental optical specifications for imaging systems: parameters of phase gradient

    NASA Astrophysics Data System (ADS)

    Xuan, Bin; Li, Jun-Feng; Wang, Peng; Chen, Xiao-Ping; Song, Shu-Mei; Xie, Jing-Jiang

    2009-12-01

    Specifications of phase error, peak to valley (PV) and root mean square (rms), are unable to represent the properties of a wavefront reasonably because they do not account for spatial frequencies. Power spectral density is a parameter that is especially effective at indicating the frequency regime; however, it is not convenient for opticians to implement. The parameters of phase gradient, PV gradient and rms gradient are the most correlated with the point-spread function of an imaging system, and they can provide clear guidance for manufacture. The algorithms of the gradient parameters have been modified in order to represent the image quality better. In order to demonstrate the analyses, an experimental spherical mirror has been worked out. It is clear that imaging performance can be maintained while manufacturing difficulty is decreased when a reasonable trade-off between specifications of phase error and phase gradient is made.

  10. Identifying and Clarifying Values and Reason Statements that Promote Effective Food Parenting Practices, Using Intensive Interviews

    ERIC Educational Resources Information Center

    Beltran, Alicia; Hingle, Melanie D.; Knesek, Jessica; O'Connor, Teresia; Baranowski, Janice; Thompson, Debbe; Baranowski, Tom

    2011-01-01

    Objective: Generate and test parents' understanding of values and associated reason statements to encourage effective food parenting practices. Methods: This study was cross-sectional. Sixteen parents from different ethnic groups (African American, white, and Hispanic) living with their 3- to 5-year-old child were recruited. Interested parents…

  11. The Relationship between Human Values and Moral Reasoning as Components of Moral Behavior.

    ERIC Educational Resources Information Center

    Frost, Lynn Weaver; Michael, William B.; Guarino, Anthony J.

    The relationship of scores on a measure of moral reasoning to the perceived relative importance of statements of value that M. Rokeach (1973) conceptualized as being instrumental (states of being) or terminal (end states of existence) was studied with 66 undergraduate students. The pattern of relationships between the scores and the relative…

  12. Landslide susceptibility estimations in the Gerecse hills (Hungary).

    NASA Astrophysics Data System (ADS)

    Gerzsenyi, Dávid; Gáspár, Albert

    2017-04-01

    Surface movement processes constantly pose a threat to property in populated and agricultural areas of the Gerecse hills (Hungary). The affected geological formations are mainly unconsolidated sediments. Pleistocene loess and alluvial terrace sediments are overwhelmingly present, but fluvio-lacustrine sediments of the latest Miocene, and consolidated Eocene and Mesozoic limestones and marls, can also be found in the area. Landslides and other surface movement processes have long been studied in the area, but a comprehensive GIS-based geostatistical analysis has not yet been made for the whole area. This was the reason for choosing the Gerecse as the focus area of the study. However, the base data of our study are freely accessible from online servers, so the method used can be applied to other regions in Hungary. Qualitative data were acquired from the landslide-inventory map of the Hungarian Surface Movement Survey and from the Geological Map of Hungary (1 : 100 000). Morphometric parameters derived from the SRTM-1 DEM were used as quantitative variables. Using these parameters, the distributions of elevation, slope gradient, aspect and categorized geological features were computed, both for areas affected and not affected by slope movements. Likelihood values were then computed for each parameter by comparing their distributions in the two areas. By combining the likelihood values of the four parameters, relative hazard values were computed for each cell. This method is known as "empirical probability estimation", originally published by Chung (2005). The map created this way shows each cell's place in the ranking based on the relative hazard values, as a percentage for the whole study area (787 km2). These values provide information about how similar a certain area is to the areas already affected by landslides, based on the four predictor variables. 
This map can also serve as a base for more complex landslide vulnerability studies involving economic factors. The landslide-inventory database used in the research provides information regarding the state of activity of past surface movements; however, the activity of many sites is stated as unknown. A complementary field survey has been carried out aiming to categorize these areas - near the villages of Dunaszentmiklós and Neszmély - in one of the most landslide-affected parts of the Gerecse. Reference: Chung, C. (2005). Using likelihood ratio functions for modeling the conditional probability of occurrence of future landslides for risk assessment. Computers & Geosciences, 32, pp. 1052-1068.
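    The likelihood-ratio idea behind Chung's (2005) empirical probability estimation can be sketched for a single predictor layer; the slope classes and class frequencies below are hypothetical, not the survey data:

    ```python
    # Compare a predictor's class frequencies inside landslide-affected cells
    # with its frequencies elsewhere; a ratio > 1 marks landslide-prone classes.
    slope_classes = ["gentle", "moderate", "steep"]

    # Illustrative fractions of cells per class (not measured values)
    freq_in_slides = {"gentle": 0.1, "moderate": 0.3, "steep": 0.6}
    freq_elsewhere = {"gentle": 0.5, "moderate": 0.3, "steep": 0.2}

    likelihood_ratio = {c: freq_in_slides[c] / freq_elsewhere[c]
                        for c in slope_classes}

    # For a cell, the ratios of its classes in all predictor layers would be
    # combined; with a single layer, the ratio itself ranks the cell.
    print(round(likelihood_ratio["steep"], 2))   # → 3.0
    ```

    Ranking every cell by its combined ratio and expressing the rank as a percentage of the study area reproduces the kind of relative-hazard map described above.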

  13. Should Social Value Obligations be Local or Global?

    PubMed

    Nayak, Rahul; Shah, Seema K

    2017-02-01

    According to prominent bioethics scholars and international guidelines, researchers and sponsors have obligations to ensure that the products of their research are reasonably available to research participants and their communities. In other words, the claim is that research is unethical unless it has local social value. In this article, we argue that the existing conception of reasonable availability should be replaced with a social value obligation that extends to the global poor (and not just research participants and host communities). To the extent the social value requirement has been understood as geographically constrained to the communities that host research and the countries that can afford the products of research, it has neglected to include the global poor as members of the relevant society. We argue that a new conception of social value obligations is needed for two reasons. First, duties of global beneficence give reason for researchers, sponsors, and institutions to take steps to make their products more widely accessible. Second, public commitments made by many institutions acknowledge and engender responsibilities to make the products of research more accessible to the global poor. Future research is needed to help researchers and sponsors discharge these obligations in ways that unlock their full potential. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  14. Reflexive reasoning for distributed real-time systems

    NASA Technical Reports Server (NTRS)

    Goldstein, David

    1994-01-01

    This paper discusses the implementation and use of reflexive reasoning in real-time, distributed knowledge-based applications. Recently there has been a great deal of interest in agent-oriented systems. Implementing such systems implies a mechanism for sharing knowledge, goals and other state information among the agents. Our techniques facilitate an agent examining both state information about other agents and the parameters of the knowledge-based system shell implementing its reasoning algorithms. The shell implementing the reasoning is the Distributed Artificial Intelligence Toolkit, which is a derivative of CLIPS.

  15. Inverse gas chromatographic determination of solubility parameters of excipients.

    PubMed

    Adamska, Katarzyna; Voelkel, Adam

    2005-11-04

    The principal aim of this work was the application of inverse gas chromatography (IGC) to the estimation of the solubility parameter of pharmaceutical excipients. The retention data of a number of test solutes were used to calculate the Flory-Huggins interaction parameter (chi1,2infinity) and then the solubility parameter (delta2), the corrected solubility parameter (deltaT) and its components (deltad, deltap, deltah), using different procedures. The influence of different values of the test solutes' solubility parameter (delta1) on the calculated values was estimated. The solubility parameter values obtained for all excipients from the slope, following the procedure of Guillet and co-workers, are higher than those obtained from the components according to the Voelkel and Janas procedure. It was found that the solubility parameter value of the test solutes influences, though not significantly, the values of the solubility parameter of the excipients.

  16. Medical Rapid Response in Psychiatry: Reasons for Activation and Immediate Outcome.

    PubMed

    Manu, Peter; Loewenstein, Kristy; Girshman, Yankel J; Bhatia, Padam; Barnes, Maira; Whelan, Joseph; Solderitch, Victoria A; Rogozea, Liliana; McManus, Marybeth

    2015-12-01

    Rapid response teams are used to improve the recognition of acute deteriorations in medical and surgical settings. They are activated by abnormal physiological parameters, symptoms or clinical concern, and are believed to decrease hospital mortality rates. We evaluated the reasons for activation and the outcome of rapid response interventions in a 222-bed psychiatric hospital in New York City using data obtained at the time of all activations from January through November, 2012. The primary outcome was the admission rate to a medical or surgical unit for each of the main reasons for activation. The 169 activations were initiated by nursing staff (78.7 %) and psychiatrists (13 %) for acute changes in condition (64.5 %), abnormal physiological parameters (27.2 %) and non-specified concern (8.3 %). The most common reasons for activation were chest pain (14.2 %), fluctuating level of consciousness (9.5 %), hypertension (9.5 %), syncope or fall (8.9 %), hypotension (8.3 %), dyspnea (7.7 %) and seizures (5.9 %). The rapid response team transferred 127 (75.2 %) patients to the Emergency Department and 46 (27.2 %) were admitted to a medical or surgical unit. The admission rates were statistically similar for acute changes in condition, abnormal physiological parameters, and clinicians' concern. In conclusion, a majority of rapid response activations in a self-standing psychiatric hospital were initiated by nursing staff for changes in condition, rather than for policy-specified abnormal physiological parameters. The findings suggest that a rapid response system may empower psychiatric nurses to use their clinical skills to identify patients requiring urgent transfer to a general hospital.

  17. Electrostatics of cysteine residues in proteins: parameterization and validation of a simple model.

    PubMed

    Salsbury, Freddie R; Poole, Leslie B; Fetrow, Jacquelyn S

    2012-11-01

    One of the most popular and simple models for the calculation of pKa values from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKa values. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKa values; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKa values. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa values in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. Copyright © 2012 Wiley Periodicals, Inc.

  18. Evaluating a common semi-mechanistic mathematical model of gene-regulatory networks

    PubMed Central

    2015-01-01

    Modeling and simulation of gene-regulatory networks (GRNs) has become an important aspect of modern systems biology investigations into mechanisms underlying gene regulation. A key challenge in this area is the automated inference (reverse-engineering) of dynamic, mechanistic GRN models from gene expression time-course data. Common mathematical formalisms for representing such models capture two aspects simultaneously within a single parameter: (1) whether or not a gene is regulated, and if so, the type of regulator (activator or repressor), and (2) the strength of influence of the regulator (if any) on the target or effector gene. To accommodate both roles, "generous" boundaries or limits for possible values of this parameter are commonly allowed in the reverse-engineering process. This approach has several important drawbacks. First, in the absence of good guidelines, there is no consensus on what limits are reasonable. Second, because the limits may vary greatly among different reverse-engineering experiments, the concrete values obtained for the models may differ considerably, and thus it is difficult to compare models. Third, if high values are chosen as limits, the search space of the model inference process becomes very large, adding unnecessary computational load to the already complex reverse-engineering process. In this study, we demonstrate that restricting the limits to the [−1, +1] interval is sufficient to represent the essential features of GRN systems and offers a reduction of the search space without loss of quality in the resulting models. To show this, we carried out reverse-engineering studies on data generated from artificial GRN systems and on data experimentally determined from real GRN systems. PMID:26356485
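
The effect of restricting the parameter limits can be sketched with a toy one-regulator model. The model form and all numbers below are assumptions for illustration (not the paper's formalism): a single weight w is recovered by grid search confined to the [−1, +1] interval:

```python
import math

# Sketch (assumed toy model): infer one regulatory weight w of a target gene
# from time-course data, restricting the search to the [-1, +1] interval.

def simulate(w, y, x0=0.1, dt=0.1, steps=50):
    """Euler integration of dx/dt = tanh(w*y) - x for a constant regulator level y."""
    xs, x = [x0], x0
    for _ in range(steps):
        x += dt * (math.tanh(w * y) - x)
        xs.append(x)
    return xs

data = simulate(0.7, y=1.0)                      # "observed" time course

def fit_w(data, y=1.0, grid=2001):
    """Scan w over [-1, +1] and return the value with least squared error."""
    best_w, best_err = None, float("inf")
    for i in range(grid):
        w = -1.0 + 2.0 * i / (grid - 1)
        err = sum((a - b) ** 2 for a, b in zip(simulate(w, y), data))
        if err < best_err:
            best_w, best_err = w, err
    return best_w

print(round(fit_w(data), 3))  # -> 0.7
```

Bounding the search space this way is exactly the computational saving the abstract argues for: the grid (or any optimizer) never wastes effort on implausibly large weights.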

  19. Spatial correlation of hydrometeor occurrence, reflectivity, and rain rate from CloudSat

    NASA Astrophysics Data System (ADS)

    Marchand, Roger

    2012-03-01

    This paper examines the along-track vertical and horizontal structure of hydrometeor occurrence, reflectivity, and column rain rate derived from CloudSat. The analysis assumes that hydrometeor statistics in a given region are horizontally invariant, with the probability of hydrometeor co-occurrence obtained simply by determining the relative frequency at which hydrometeors can be found at two points (which may be at different altitudes and offset by a horizontal distance, Δx). A correlation function is introduced (gamma correlation) that normalizes hydrometeor co-occurrence values to the range of 1 to -1, with a value of 0 meaning uncorrelated in the usual sense. This correlation function is a generalization of the alpha overlap parameter that has been used in recent studies to describe the overlap between cloud (or hydrometeor) layers. Examples of joint histograms of reflectivity at two points are also examined. The analysis shows that the traditional linear (or Pearson) correlation coefficient provides a useful one-to-one measure of the strength of the relationship between hydrometeor reflectivity at two points in the horizontal (that is, two points at the same altitude). While also potentially useful in the vertical direction, the relationship between reflectivity values at different altitudes is not as well described by the linear correlation coefficient. The decrease in correlation of hydrometeor occurrence and reflectivity with horizontal distance, as well as that of precipitation occurrence and column rain rate, can be reasonably well fit with a simple two-parameter exponential model. In this paper, the North Pacific and tropical western Pacific are examined in detail, as is the zonal dependence.
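
The two-parameter exponential model mentioned at the end can be fit by simple log-linear least squares. This is a generic sketch with synthetic, noiseless data; the decorrelation length and amplitude are invented, not values from the paper:

```python
import math

# Sketch: fit the two-parameter decay model r(dx) = r0 * exp(-dx / L)
# to correlation-vs-distance pairs via least squares on log(r).

def fit_exponential(dx, r):
    """Return (r0, L) minimizing squared error of log r = log r0 - dx/L."""
    n = len(dx)
    y = [math.log(v) for v in r]
    mx, my = sum(dx) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(dx, y)) / \
            sum((a - mx) ** 2 for a in dx)
    return math.exp(my - slope * mx), -1.0 / slope

dists = [0.0, 50.0, 100.0, 200.0, 400.0]             # km, hypothetical offsets
corrs = [0.9 * math.exp(-d / 150.0) for d in dists]  # synthetic decay curve
r0, L = fit_exponential(dists, corrs)
print(round(r0, 3), round(L, 1))  # -> 0.9 150.0
```

With real (noisy) correlations, the same log-linear fit still yields the amplitude r0 and an e-folding decorrelation distance L.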

  20. Low-field magnetoresistance up to 400 K in double perovskite Sr{sub 2}FeMoO{sub 6} synthesized by a citrate route

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harnagea, L., E-mail: harnagealuminita@gmail.com; Jurca, B.; Physical Chemistry Department, University of Bucharest, 4-12 Bd. Elisabeta, 030018 Bucharest

    2014-03-15

    A wet-chemistry technique, namely the citrate route, has been used to prepare high-quality polycrystalline samples of the double perovskite Sr{sub 2}FeMoO{sub 6}. We report on the evolution of the magnetic and magnetoresistive properties of the synthesized samples as a function of three parameters: (i) the pH of the starting solution, (ii) the decomposition temperature of the citrate precursors and (iii) the sintering conditions. The low-field magnetoresistance (LFMR) value of our best samples is as high as 5% at room temperature for an applied magnetic field of 1 kOe. Additionally, the distinguishing feature of these samples is the persistence of LFMR, with a reasonably large value, up to 400 K, which is a crucial parameter for any practical application. Our study indicates that the enhancement of LFMR observed is due to a good compromise between the grain size distribution and the grains' magnetic polarization. -- Graphical abstract: The microstructure (left panel) and corresponding low-field magnetoresistance of one of the Sr{sub 2}FeMoO{sub 6} samples synthesized in the course of this work. Highlights: • Samples of Sr{sub 2}FeMoO{sub 6} are prepared using a citrate route under varying conditions. • Magnetoresistive properties are improved and optimized. • Low-field magnetoresistance values as large as 5% at 300 K/1 kOe are reported. • Persistence of low-field magnetoresistance up to 400 K.

  1. Evaluation of coffee roasting degree by using electronic nose and artificial neural network for off-line quality control.

    PubMed

    Romani, Santina; Cevoli, Chiara; Fabbri, Angelo; Alessandrini, Laura; Dalla Rosa, Marco

    2012-09-01

    An electronic nose (EN) based on an array of 10 metal oxide semiconductor sensors was used, jointly with an artificial neural network (ANN), to predict coffee roasting degree. The flavor release evolution and the main physicochemical modifications (weight loss, density, moisture content, and surface color: L*, a*), during the roasting process of coffee, were monitored at different cooking times (0, 6, 8, 10, 14, 19 min). Principal component analysis (PCA) was used to reduce the dimensionality of the sensor data set (600 values per sensor). The selected PCs were used as ANN input variables. Two types of ANN (multilayer perceptron [MLP] and general regression neural network [GRNN]) were used to estimate the EN signals. For both neural networks the input values were the scores of the sensor data set PCs, while the output values were the quality parameters at the different roasting times. Both ANNs predicted coffee roasting degree well, giving good prediction results for both roasting time and coffee quality parameters. In particular, GRNN showed the highest prediction reliability. At present, the evaluation of coffee roasting degree is mainly a manual operation, based largely on empirical observation of the final color, and for this reason it requires well-trained operators with long professional experience. The coupling of the e-nose and artificial neural networks (ANNs) may represent an effective route to roasting-process automation and a more reproducible procedure for final coffee bean quality characterization. © 2012 Institute of Food Technologists®
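
A GRNN is mathematically equivalent to Nadaraya-Watson kernel regression: the prediction is a Gaussian-weighted average of the training targets. The sketch below illustrates that mechanism on made-up one-dimensional PC scores (the paper's inputs are multi-dimensional PC score vectors):

```python
import math

# Sketch of a general regression neural network (GRNN), i.e. kernel regression:
# predict a target as the Gaussian-weighted average of training targets.
# The PC scores below are invented for illustration, not the paper's data.

def grnn_predict(x, train_x, train_y, sigma=1.0):
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# Hypothetical PC-1 scores paired with the sampled roasting times (minutes).
pc_scores   = [-2.0, -1.0, 0.0,  1.0,  2.0,  3.0]
roast_times = [ 0.0,  6.0, 8.0, 10.0, 14.0, 19.0]

pred = grnn_predict(0.5, pc_scores, roast_times, sigma=0.5)
print(round(pred, 2))  # falls between the 8- and 10-minute samples
```

The single smoothing parameter sigma is the only quantity to tune, which is one reason GRNNs are attractive for small sensor data sets like this one.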

  2. Stochastic modeling of economic injury levels with respect to yearly trends in price commodity.

    PubMed

    Damos, Petros

    2014-05-01

    The economic injury level (EIL) concept integrates economics and biology and uses chemical applications in crop protection only when economic loss by pests is anticipated. The EIL is defined by five primary variables: the cost of the management tactic per production unit, the price of the commodity, the injury units per pest, the damage per unit injury, and the proportionate reduction of injury averted by the application of a tactic. These variables are related according to the formula EIL = C/VIDK. The observable dynamic alteration of the EIL due to its different parameters is a major characteristic of the concept. In this study, the yearly effect of the economic variables is assessed, in particular the influence of the commodity-value parameter on the shape of the EIL function. In addition, to predict the effects of the economic variables on the EIL, yearly commodity values were incorporated in the EIL formula and the generated outcomes were further modelled with stochastic linear autoregressive models of different orders. According to the AR(1) model, forecasts for the five-year period of 2010-2015 ranged from 2.33 to 2.41 specimens per sampling unit. These values represent a threshold that is within reasonable limits to justify future control actions. Management actions related to productivity and commodity price significantly affect the costs of crop production and thus define the adoption of IPM and sustainable crop production systems at local and international levels. This is an open access paper. We use the Creative Commons Attribution 3.0 license that permits unrestricted use, provided that the paper is properly attributed.
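
The two computations the abstract combines, the EIL formula and an AR(1) forecast, can be sketched directly. All numeric inputs below are illustrative placeholders, not the paper's fitted values:

```python
# Sketch: the EIL formula and a multi-step AR(1) forecast of commodity value.
# Inputs are hypothetical, chosen only to show the arithmetic.

def eil(C, V, I, D, K):
    """Economic injury level: EIL = C / (V * I * D * K)."""
    return C / (V * I * D * K)

# Hypothetical: management cost C, commodity value V, injury units per pest I,
# damage per unit injury D, proportion of injury averted K.
print(round(eil(C=40.0, V=0.8, I=1.2, D=10.0, K=0.9), 3))  # -> 4.63

def ar1_forecast(last, c, phi, steps=1):
    """Iterate the AR(1) recursion x[t+1] = c + phi * x[t] 'steps' times."""
    x = last
    for _ in range(steps):
        x = c + phi * x
    return x

# Hypothetical AR(1) coefficients for yearly commodity value.
print(round(ar1_forecast(last=0.85, c=0.2, phi=0.75, steps=5), 3))  # -> 0.812
```

Feeding the forecast commodity value back into V of the EIL formula is the coupling the study describes: a higher predicted price lowers the injury level at which control is economically justified.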

  3. Combined effect of oregano essential oil and modified atmosphere packaging on shelf-life extension of fresh chicken breast meat, stored at 4 degrees C.

    PubMed

    Chouliara, E; Karatapanis, A; Savvaidis, I N; Kontominas, M G

    2007-09-01

    The combined effect of oregano essential oil (0.1% and 1% w/w) and modified atmosphere packaging (MAP) (30% CO2/70% N2 and 70% CO2/30% N2) on shelf-life extension of fresh chicken meat stored at 4 degrees C was investigated. The parameters that were monitored were: microbiological (TVC, Pseudomonas spp., lactic acid bacteria (LAB), yeasts, Brochothrix thermosphacta and Enterobacteriaceae), physico-chemical (pH, TBA, color) and sensory (odor and taste) attributes. Microbial populations were reduced by 1-5 log cfu/g for a given sampling day, with the more pronounced effect being achieved by the combination of MAP and oregano essential oil. TBA values for all treatments remained lower than 1 mg malondialdehyde (MDA) kg(-1) throughout the 25-day storage period. pH values varied between 6.4 (day 0) and 5.9 (day 25). The values of the color parameters L*, a* and b* were not considerably affected by oregano oil or by MAP. Finally, sensory analysis showed that oregano oil at a concentration of 1% imparted a very strong taste to the product for which reason these lots of samples were not scored. On the basis of sensory evaluation a shelf-life extension of breast chicken meat by ca. 3-4 days for samples containing 0.1% oregano oil, 2-3 days for samples under MAP and 5-6 days for samples under MAP containing 0.1% of oregano oil was attained. Thus oregano oil and MAP exhibited an additive preservation effect.

  4. Predicting heavy metals' adsorption edges and adsorption isotherms on MnO2 with the parameters determined from Langmuir kinetics.

    PubMed

    Hu, Qinghai; Xiao, Zhongjin; Xiong, Xinmei; Zhou, Gongming; Guan, Xiaohong

    2015-01-01

    Although surface complexation models have been widely used to describe the adsorption of heavy metals, few studies have verified the feasibility of modeling the adsorption kinetics, edge, and isotherm data with one pH-independent parameter. A close inspection of the derivation process of the Langmuir isotherm revealed that the equilibrium constant derived from the Langmuir kinetic model, KS-kinetic, is theoretically equivalent to the adsorption constant in the Langmuir isotherm, KS-Langmuir. The modified Langmuir kinetic model (MLK model) and modified Langmuir isotherm model (MLI model) incorporating a pH factor were developed. The MLK model was employed to simulate the adsorption kinetics of Cu(II), Co(II), Cd(II), Zn(II) and Ni(II) on MnO2 at pH 3.2 or 3.3 to get the values of KS-kinetic. The adsorption edges of heavy metals could be modeled with the modified metal partitioning model (MMP model), and the values of KS-Langmuir were obtained. The values of KS-kinetic and KS-Langmuir are very close to each other, validating that the constants obtained by these two methods are basically the same. The MMP model with KS-kinetic constants could predict the adsorption edges of heavy metals on MnO2 very well at different adsorbent/adsorbate concentrations. Moreover, the adsorption isotherms of heavy metals on MnO2 at various pH levels could be predicted reasonably well by the MLI model with the KS-kinetic constants. Copyright © 2014. Published by Elsevier B.V.
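
The kinetic/isotherm equivalence the abstract rests on can be demonstrated numerically with the classical (unmodified) Langmuir equations. The rate constants below are arbitrary illustrative values, not fitted to any of the metals studied:

```python
# Sketch: the equilibrium constant from Langmuir kinetics (K = ka/kd)
# reproduces the Langmuir isotherm coverage, illustrating the equivalence
# of K(kinetic) and K(isotherm) noted in the abstract.

def kinetic_coverage(C, ka, kd, dt=0.001, steps=200000):
    """Integrate d(theta)/dt = ka*C*(1 - theta) - kd*theta to steady state."""
    theta = 0.0
    for _ in range(steps):
        theta += dt * (ka * C * (1.0 - theta) - kd * theta)
    return theta

def isotherm_coverage(C, K):
    """Langmuir isotherm: theta = K*C / (1 + K*C)."""
    return K * C / (1.0 + K * C)

ka, kd, C = 2.0, 0.5, 1.5          # arbitrary rate constants and concentration
K = ka / kd                        # equilibrium constant from kinetics
print(round(kinetic_coverage(C, ka, kd), 4))  # -> 0.8571
print(round(isotherm_coverage(C, K), 4))      # -> 0.8571
```

The steady state of the rate equation and the isotherm expression agree because setting d(theta)/dt = 0 yields theta = (ka/kd)C / (1 + (ka/kd)C), i.e. the isotherm with K = ka/kd.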

  5. Determining octanol-water partition coefficients for extremely hydrophobic chemicals by combining "slow stirring" and solid-phase microextraction.

    PubMed

    Jonker, Michiel T O

    2016-06-01

    Octanol-water partition coefficients (KOW) are widely used in fate and effects modeling of chemicals. Still, high-quality experimental KOW data are scarce, in particular for very hydrophobic chemicals. This hampers reliable assessments of several fate and effect parameters and the development and validation of new models. One reason for the limited availability of experimental values may relate to the challenging nature of KOW measurements. In the present study, KOW values for 13 polycyclic aromatic hydrocarbons were determined with the gold standard "slow-stirring" method (log KOW 4.6-7.2). These values were then used as reference data for the development of an alternative method for measuring KOW. This approach combined slow stirring and equilibrium sampling of the extremely low aqueous concentrations with polydimethylsiloxane-coated solid-phase microextraction fibers, applying experimentally determined fiber-water partition coefficients. It resulted in KOW values matching the slow-stirring data very well. Therefore, the method was subsequently applied to a series of 17 moderately to extremely hydrophobic petrochemical compounds. The obtained KOW values spanned almost 6 orders of magnitude, with the highest value measuring 10^10.6. The present study demonstrates that the hydrophobicity domain within which experimental KOW measurements are possible can be extended with the help of solid-phase microextraction and that experimentally determined KOW values can exceed the proposed upper limit of 10^9. Environ Toxicol Chem 2016;35:1371-1377. © 2015 SETAC.
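
The arithmetic implied by the fiber approach can be sketched as follows: the aqueous concentration, too low to measure directly, is back-calculated from the fiber concentration via a known fiber-water partition coefficient. All concentrations and coefficients here are hypothetical illustrations, not values from the study:

```python
import math

# Sketch (hypothetical numbers): infer log KOW when C_water is immeasurably low,
# using a measured fiber concentration and a fiber-water partition coefficient.

def log_kow(c_octanol, c_fiber, log_k_fiber_water):
    c_water = c_fiber / 10.0 ** log_k_fiber_water  # back-calculated aqueous conc.
    return math.log10(c_octanol / c_water)

# e.g. octanol phase at 50 mg/L, fiber at 2 mg/L, log K(fiber-water) = 6.3
print(round(log_kow(50.0, 2.0, 6.3), 2))  # -> 7.7
```

Algebraically, log KOW = log10(C_octanol / C_fiber) + log K(fiber-water), so the fiber measurement simply shifts the measurable ratio by the fiber-water coefficient.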

  6. The MIT IGSM-CAM framework for uncertainty studies in global and regional climate change

    NASA Astrophysics Data System (ADS)

    Monier, E.; Scott, J. R.; Sokolov, A. P.; Forest, C. E.; Schlosser, C. A.

    2011-12-01

    The MIT Integrated Global System Model (IGSM) version 2.3 is an intermediate complexity fully coupled earth system model that allows simulation of critical feedbacks among its various components, including the atmosphere, ocean, land, urban processes and human activities. A fundamental feature of the IGSM2.3 is the ability to modify its climate parameters: climate sensitivity, net aerosol forcing and ocean heat uptake rate. As such, the IGSM2.3 provides an efficient tool for generating probabilistic distribution functions of climate parameters using optimal fingerprint diagnostics. A limitation of the IGSM2.3 is its zonal-mean atmosphere model that does not permit regional climate studies. For this reason, the MIT IGSM2.3 was linked to the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM) version 3, and new modules were developed and implemented in CAM in order to modify its climate sensitivity and net aerosol forcing to match those of the IGSM. The IGSM-CAM provides an efficient and innovative framework to study regional climate change where climate parameters can be modified to span the range of uncertainty and various emissions scenarios can be tested. This paper presents results from the cloud radiative adjustment method used to modify CAM's climate sensitivity. We also show results from 21st century simulations based on two emissions scenarios (a median "business as usual" scenario where no policy is implemented after 2012, and a policy scenario where greenhouse gases are stabilized at 660 ppm CO2-equivalent concentration by 2100) and three sets of climate parameters. The three values of climate sensitivity chosen are the median and the bounds of the 90% probability interval of the probability distribution obtained by comparing the observed 20th century climate change with simulations by the IGSM over a wide range of climate parameter values.
The associated aerosol forcing values were chosen to ensure a good agreement of the simulations with the observed climate change over the 20th century. Because the concentrations of sulfate aerosols significantly decrease over the 21st century in both emissions scenarios, climate changes obtained in these six simulations provide a good approximation for the median, and the 5th and 95th percentiles of the probability distribution of 21st century climate change.

  7. [Monocytic parameters in patients with rheumatologic diseases reflect intensity of depressive disorder].

    PubMed

    Buras, Aleksandra; Waszkiewicz, Napoleon; Szulc, Agata; Sierakowski, Stanisław

    2012-12-01

    Recent years have brought important information about changes in white blood cell parameters in patients with depressive disorder, as well as changes in cytokine production. The concept of bidirectional communication between the immune system and the central nervous system has been expressed in the 'macrophage theory of depression' and the 'cytokine hypothesis of depression', which describe greater expression of monocyte-associated interleukin-1β (IL-1β), interleukin-1 (IL-1), tumor necrosis factor-α (TNF-α), interleukin-6 (IL-6), interleukin-8 (IL-8), etc., in patients suffering from depression. This is supported by many findings; e.g., the administration of proinflammatory cytokines in the treatment of cancer and hepatitis C can induce depressive symptomatology. Depression generally accompanies a number of illnesses characterized by a chronic inflammatory response. The aim of this study was to investigate the association between the intensity of depression and blood cell counts in patients attending a rheumatology department. The study included 56 patients hospitalized in the Department of Rheumatology (Medical University of Bialystok) for rheumatoid arthritis (RA), systemic lupus erythematosus (SLE), ankylosing spondylitis (AS), systemic scleroderma, Churg-Strauss syndrome (CSS) and psoriatic arthritis (PA). The group comprised 46 women (mean age 51 years; range 18-73) and 10 men (mean age 50 years; range 27-78). Depressive symptoms were assessed using the Beck Depression Inventory (BDI) and the Hamilton Depression Scale (HAM-D). Peripheral blood samples were obtained from all 56 patients for standard blood cell counts. Statistical analysis was performed using Statistica 9.0 PL (StatSoft, Cracow, Poland). Spearman's rank correlation coefficient was used to estimate associations between variables. P-values < 0.05 were considered statistically significant.
The mean BDI value was found to be 12 +/- 8 and the mean HAM-D 14 +/- 9. The monocyte ratio correlated significantly with the intensity of depressive symptoms. Stress and pain, which increase with illness progression, are only part of the problem analyzed here. Although monocyte values remained within the upper limit of the normal range, their correlation with depressive symptoms suggests that an elevated monocyte level is a serious contributor to the depressive mood state. This indicates the necessity of early diagnosis and treatment of depression associated with chronic proinflammatory diseases. One may also speculate on the potential efficacy of adjunctive treatment with cytokine inhibitors and inhibitors of oxidative stress in the therapy of depressive disorders.

  8. Characterization of protein folding by a Φ-value calculation with a statistical-mechanical model.

    PubMed

    Wako, Hiroshi; Abe, Haruo

    2016-01-01

    The Φ-value analysis approach provides information about transition-state structures along the folding pathway of a protein by measuring the effects of an amino acid mutation on folding kinetics. Here we compared the theoretically calculated Φ values of 27 proteins with their experimentally observed Φ values; the theoretical values were calculated using a simple statistical-mechanical model of protein folding. The theoretically calculated Φ values reflected the corresponding experimentally observed Φ values with reasonable accuracy for many of the proteins, but not for all. The correlation between the theoretically calculated and experimentally observed Φ values strongly depends on whether the protein-folding mechanism assumed in the model holds true in real proteins. In other words, the correlation coefficient can be expected to illuminate the folding mechanisms of proteins, providing the answer to the question of which model more accurately describes protein folding: the framework model or the nucleation-condensation model. In addition, we tried to characterize protein folding with respect to various properties of each protein apart from the size and fold class, such as the free-energy profile, contact-order profile, and sensitivity to the parameters used in the Φ-value calculation. The results showed that any one of these properties alone was not enough to explain protein folding, although each one played a significant role in it. We have confirmed the importance of characterizing protein folding from various perspectives. Our findings have also highlighted that protein folding is highly variable and unique across different proteins, and this should be considered while pursuing a unified theory of protein folding.
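
The experimental Φ value referred to here is conventionally defined as the ratio of the mutation-induced change in transition-state stability to the change in equilibrium stability, both obtained from kinetics. A minimal sketch of that definition, with invented rates and equilibrium constants:

```python
import math

# Sketch of the standard experimental Phi-value definition:
# Phi = ddG(transition state) / ddG(equilibrium), where both free-energy
# changes come from the ratio of mutant to wild-type quantities.
# The rates and equilibrium constants below are invented for illustration.

R = 8.314e-3  # gas constant, kJ/(mol*K)

def phi_value(kf_wt, kf_mut, Keq_wt, Keq_mut, T=298.0):
    ddG_ts = -R * T * math.log(kf_mut / kf_wt)     # from folding rates
    ddG_eq = -R * T * math.log(Keq_mut / Keq_wt)   # from folding equilibria
    return ddG_ts / ddG_eq

# A mutation that slows folding 10-fold and destabilizes the fold 100-fold:
# the transition state "feels" half of the equilibrium destabilization.
print(round(phi_value(kf_wt=100.0, kf_mut=10.0, Keq_wt=1e4, Keq_mut=1e2), 2))  # -> 0.5
```

Φ near 1 indicates a native-like environment for the mutated residue in the transition state, Φ near 0 an unfolded-like one, which is what makes the comparison of calculated and observed Φ values diagnostic of the folding mechanism.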

  9. Characterization of protein folding by a Φ-value calculation with a statistical-mechanical model

    PubMed Central

    Wako, Hiroshi; Abe, Haruo

    2016-01-01

    The Φ-value analysis approach provides information about transition-state structures along the folding pathway of a protein by measuring the effects of an amino acid mutation on folding kinetics. Here we compared the theoretically calculated Φ values of 27 proteins with their experimentally observed Φ values; the theoretical values were calculated using a simple statistical-mechanical model of protein folding. The theoretically calculated Φ values reflected the corresponding experimentally observed Φ values with reasonable accuracy for many of the proteins, but not for all. The correlation between the theoretically calculated and experimentally observed Φ values strongly depends on whether the protein-folding mechanism assumed in the model holds true in real proteins. In other words, the correlation coefficient can be expected to illuminate the folding mechanisms of proteins, providing the answer to the question of which model more accurately describes protein folding: the framework model or the nucleation-condensation model. In addition, we tried to characterize protein folding with respect to various properties of each protein apart from the size and fold class, such as the free-energy profile, contact-order profile, and sensitivity to the parameters used in the Φ-value calculation. The results showed that any one of these properties alone was not enough to explain protein folding, although each one played a significant role in it. We have confirmed the importance of characterizing protein folding from various perspectives. Our findings have also highlighted that protein folding is highly variable and unique across different proteins, and this should be considered while pursuing a unified theory of protein folding. PMID:28409079

  10. Determination of sustainable values for the parameters of the construction of residential buildings

    NASA Astrophysics Data System (ADS)

    Grigoreva, Larisa; Grigoryev, Vladimir

    2018-03-01

    For the formation of housing-construction programs and the planning of capital investments, and for strategic planning by construction companies, norms or calculated indicators of the duration of construction of high-rise residential buildings and multifunctional complexes are mandatory. Determining stable values of these parameters for the construction of high-rise residential buildings makes it possible to establish a reasonable construction duration at the planning and design stages of residential complexes, taking into account the influence of market factors. The concept of forming enlarged models for the construction of high-rise residential buildings is based on a realistic mapping, in time and space, of the most significant work stages and their organizational and technological interconnection: the preparatory period, the underground part, the above-ground part, external engineering networks, and landscaping. The total duration of the construction of a residential building, which depends on the duration of each stage and the degree to which the stages overlap, can be determined by one of the four proposed options. At the same time, a unified approach to determining the overall duration of construction was developed on the basis of the principles of flow-line construction organization, with the results tested on the example of high-rise residential buildings of the typical I-155B series, and the coefficients for overlapping the work and the main stages of the building were determined.

  11. Symmetry-plane model of 3D Euler flows: Mapping to regular systems and numerical solutions of blowup

    NASA Astrophysics Data System (ADS)

    Mulungye, Rachel M.; Lucas, Dan; Bustamante, Miguel D.

    2014-11-01

    We introduce a family of 2D models describing the dynamics on the so-called symmetry plane of the full 3D Euler fluid equations. These models depend on a free real parameter and can be solved analytically. For selected representative values of the free parameter, we apply the method introduced in [M.D. Bustamante, Physica D: Nonlinear Phenom. 240, 1092 (2011)] to map the fluid equations bijectively to globally regular systems. By comparing the analytical solutions with the results of numerical simulations, we establish that the numerical simulations of the mapped regular systems are far more accurate than the numerical simulations of the original systems, at the same spatial resolution and CPU time. In particular, the numerical integrations of the mapped regular systems produce robust estimates for the growth exponent and singularity time of the main blowup quantity (vorticity stretching rate), converging well to the analytically-predicted values even beyond the time at which the flow becomes under-resolved (i.e. the reliability time). In contrast, direct numerical integrations of the original systems develop unstable oscillations near the reliability time. We discuss the reasons for this improvement in accuracy, and explain how to extend the analysis to the full 3D case. Supported under the programme for Research in Third Level Institutions (PRTLI) Cycle 5 and co-funded by the European Regional Development Fund.

  12. 8760-Based Method for Representing Variable Generation Capacity Value in Capacity Expansion Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany A

    Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve load over many years or decades. CEMs can be computationally complex and are often forced to estimate key parameters using simplified methods to achieve acceptable solve times or for other reasons. In this paper, we discuss one of these parameters -- capacity value (CV). We first provide a high-level motivation for and overview of CV. We next describe existing modeling simplifications and an alternate approach for estimating CV that utilizes hourly '8760' data of load and VG resources. We then apply this 8760 method to an established CEM, the National Renewable Energy Laboratory's (NREL's) Regional Energy Deployment System (ReEDS) model (Eurek et al. 2016). While this alternative approach for CV is not itself novel, it contributes to the broader CEM community by (1) demonstrating how a simplified 8760 hourly method, which can be easily implemented in other power sector models when data are available, more accurately captures CV trends than a statistical method within the ReEDS CEM, and (2) providing a flexible modeling framework from which other 8760-based system elements (e.g., demand response, storage, and transmission) can be added to further capture important dynamic interactions, such as curtailment.
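
One common way to operationalize an 8760-based CV estimate, which may or may not match the report's exact formulation, is to average a variable generator's output over the year's highest net-load hours. A toy 10-hour series stands in for the 8760 hours, and all numbers are hypothetical:

```python
# Sketch of an hourly "8760"-style capacity value: the generator's average
# output (per unit of capacity) during the top net-load hours of the series.
# The 10-hour load and solar profiles below are invented for illustration.

def capacity_value(load, vg_output, vg_capacity, top_n):
    """CV = mean VG output over the top-N net-load hours, per unit of capacity."""
    net_load = [l - g for l, g in zip(load, vg_output)]
    top_hours = sorted(range(len(load)), key=lambda h: net_load[h], reverse=True)[:top_n]
    return sum(vg_output[h] for h in top_hours) / (top_n * vg_capacity)

load      = [70, 80, 95, 100, 90, 85, 75, 60, 55, 65]  # MW, hypothetical
solar_out = [ 0, 10, 30,  25, 15,  5,  0,  0,  0,  0]  # MW from 40 MW of solar

print(round(capacity_value(load, solar_out, vg_capacity=40.0, top_n=3), 3))  # -> 0.375
```

Ranking by net load (load minus VG output) rather than load alone is what lets the method capture how added VG shifts the critical hours, the dynamic that simple statistical CV approximations miss.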

  13. Preharvest methyl jasmonate and postharvest UVC treatments: increasing stilbenes in wine.

    PubMed

    Fernández-Marín, María Isabel; Puertas, Belén; Guerrero, Raúl F; García-Parrilla, María Carmen; Cantos-Villar, Emma

    2014-03-01

    Stilbene-enriched wine is considered to be an interesting new food product with added value due to its potential health-promoting properties. Stilbene concentration in grape is highly variable and rather scarce. However, it can be increased by stress treatments. For this reason, numerous pre- and postharvest grape treatments, and some combinations of them, have been tested to maximize stilbene content in grapes. In the present manuscript, Syrah grapes were treated with (i) methyl jasmonate (MEJA), (ii) ultraviolet light (UVC), and (iii) methyl jasmonate and ultraviolet light (MEJA-UVC) and compared with untreated grapes. Afterward, winemaking was developed. Wine achieved by combination of both treatments (MEJA-UVC) contained significantly higher stilbene concentration (trans-resveratrol and piceatannol) than its respective control (2.5-fold). Wine quality was improved in color-related parameters (color intensity, L*, a*, b*, ΔE*, anthocyanins, and tannin). Moreover, MEJA-UVC wines obtained the highest score in sensorial analysis. To the best of our knowledge, this is the first time that pre- and postharvest treatments are combined to increase stilbenes in wine. The effect of treatment combination (methyl jasmonate and UVC light) on grape and wine was evaluated. Our results highlight the positive effect of the treatments in stilbene content, color parameters, and sensorial analysis. Moreover, added-value by-products were achieved. © 2014 Institute of Food Technologists®

  14. Analysis of benchmark critical experiments with ENDF/B-VI data sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardy, J. Jr.; Kahler, A.C.

    1991-12-31

    Several clean critical experiments were analyzed with ENDF/B-VI data to assess the adequacy of the data for U{sup 235}, U{sup 238} and oxygen. These experiments were (1) a set of homogeneous U{sup 235}-H{sub 2}O assemblies spanning a wide range of hydrogen/uranium ratio, and (2) TRX-1, a simple, H{sub 2}O-moderated Bettis lattice of slightly-enriched uranium metal rods. The analyses used the Monte Carlo program RCP01, with explicit three-dimensional geometry and detailed representation of cross sections. For the homogeneous criticals, calculated k{sub crit} values for large, thermal assemblies show good agreement with experiment. This supports the evaluated thermal criticality parameters for U{sup 235}. However, for assemblies with smaller H/U ratios, k{sub crit} values increase significantly with increasing leakage and flux-spectrum hardness. These trends suggest that leakage is underpredicted and that the resonance eta of the ENDF/B-VI U{sup 235} is too large. For TRX-1, reasonably good agreement is found with measured lattice parameters (reaction-rate ratios). Of primary interest is rho28, the ratio of above-thermal to thermal U{sup 238} capture. Calculated rho28 is 2.3 ({+-} 1.7) % above measurement, suggesting that U{sup 238} resonance capture remains slightly overpredicted with ENDF/B-VI. However, agreement is better than observed with earlier versions of ENDF/B.

  15. Discrete breathers for a discrete nonlinear Schrödinger ring coupled to a central site.

    PubMed

    Jason, Peter; Johansson, Magnus

    2016-01-01

    We examine the existence and properties of certain discrete breathers for a discrete nonlinear Schrödinger model where all but one site are placed in a ring and coupled to the additional central site. The discrete breathers we focus on are stationary solutions mainly localized on one or a few of the ring sites and possibly also the central site. By numerical methods, we trace out and study the continuous families the discrete breathers belong to. Our main result is the discovery of a split bifurcation at a critical value of the coupling between neighboring ring sites. Below this critical value, families form closed loops in a certain parameter space, implying that discrete breathers with and without central-site occupation belong to the same family. Above the split bifurcation the families split up into several separate ones, which bifurcate with solutions with constant ring amplitudes. For symmetry reasons, the families have different properties below the split bifurcation for even and odd numbers of sites. It is also determined under which conditions the discrete breathers are linearly stable. The dynamics of some simpler initial conditions that approximate the discrete breathers are also studied and the parameter regimes where the dynamics remain localized close to the initially excited ring site are related to the linear stability of the exact discrete breathers.
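    The model described above can be sketched as a set of coupled equations of motion. Below is a minimal illustration of the standard DNLS form for a ring of N sites coupled to one central site; the coupling constants (eps for ring neighbors, delta for ring-to-center) and the cubic on-site nonlinearity are assumptions based on the usual DNLS convention, not parameters taken from the paper.

```python
def dnls_rhs(psi_ring, psi_c, eps, delta):
    """Time derivatives of (ring amplitudes, central amplitude) for
    i dpsi_n/dt = -eps (psi_{n+1} + psi_{n-1}) - delta psi_c - |psi_n|^2 psi_n
    i dpsi_c/dt = -delta sum_n psi_n - |psi_c|^2 psi_c
    """
    N = len(psi_ring)
    dring = []
    for n in range(N):
        coup = eps * (psi_ring[(n + 1) % N] + psi_ring[(n - 1) % N])
        rhs = -coup - delta * psi_c - abs(psi_ring[n]) ** 2 * psi_ring[n]
        dring.append(rhs / 1j)        # divide by i to isolate dpsi/dt
    dc = (-delta * sum(psi_ring) - abs(psi_c) ** 2 * psi_c) / 1j
    return dring, dc
```

    Because the linear part of this model is Hermitian and the nonlinearity is on-site, the total norm (sum of |psi|² over all sites, including the center) is conserved, which is a quick sanity check for any such sketch.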

  16. Predicting Trihalomethanes (THMs) in the New York City Water Supply

    NASA Astrophysics Data System (ADS)

    Mukundan, R.; Van Dreason, R.

    2013-12-01

    Chlorine, a commonly used disinfectant in most water supply systems, can combine with organic carbon to form disinfection byproducts including carcinogenic trihalomethanes (THMs). We used water quality data from 24 monitoring sites within the New York City (NYC) water supply distribution system, measured between January 2009 and April 2012, to develop site-specific empirical models for predicting total trihalomethane (TTHM) levels. Terms in the models included various combinations of the following water quality parameters: total organic carbon, pH, specific conductivity, and water temperature. Reasonable estimates of TTHM levels were achieved, with an overall R² of about 0.87 and predicted values within 5 μg/L of measured values. The relative importance of factors affecting TTHM formation was estimated by ranking the model regression coefficients. The site-specific models showed improved performance statistics compared with a single model for the entire system, most likely because the single model did not account for locational differences in the water treatment process. Although the water supply was never out of compliance in 2011, TTHM levels increased following tropical storms Irene and Lee, with 45% of samples exceeding the 80 μg/L Maximum Contaminant Level (MCL) in October and November. This increase was explained by changes in water quality parameters, particularly the increase in total organic carbon concentration and pH during this period.
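    A site-specific empirical model of the kind described above can be sketched as a linear regression on the four water quality parameters. The coefficients below are hypothetical placeholders for illustration only, not the fitted values from the study; only the 80 μg/L MCL is taken from the abstract.

```python
MCL_UG_PER_L = 80.0  # regulatory Maximum Contaminant Level for TTHMs

def predict_tthm(toc_mg_l, ph, cond_us_cm, temp_c,
                 coefs=(-120.0, 12.0, 14.0, 0.05, 0.8)):
    """Predict TTHM (ug/L) as b0 + b1*TOC + b2*pH + b3*conductivity + b4*temperature.
    The default coefficients are illustrative, not fitted to real data."""
    b0, b1, b2, b3, b4 = coefs
    return b0 + b1 * toc_mg_l + b2 * ph + b3 * cond_us_cm + b4 * temp_c

def exceeds_mcl(tthm_ug_l):
    return tthm_ug_l > MCL_UG_PER_L
```

    With such a model, the post-storm TTHM increase follows directly from the higher TOC and pH inputs: raising TOC and pH pushes the prediction across the MCL even when conductivity and temperature are unchanged.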

  17. Water Quality Criteria for Copper Based on the BLM Approach in the Freshwater in China

    PubMed Central

    Zhang, Yahui; Zang, Wenchao; Qin, Lumei; Zheng, Lei; Cao, Ying; Yan, Zhenguang; Yi, Xianliang; Zeng, Honghu; Liu, Zhengtao

    2017-01-01

    The bioavailability and toxicity of metals to aquatic organisms are highly dependent on water quality parameters in freshwaters. The biotic ligand model (BLM) for copper is an approach to generating water quality criteria (WQC) from the water chemistry of the ambient environment. However, few studies have derived WQCs for copper based on the BLM approach in China. In the present study, toxicity tests of copper on native Chinese aquatic organisms were conducted, and published toxicity data with water quality parameters for Chinese aquatic species were collected, to derive WQCs for copper by the BLM approach. The BLM-based WQCs (the criterion maximum concentration (CMC) and the criterion continuous concentration (CCC)) for copper were obtained for the nation's freshwater and for the Taihu Lake. The CMC and CCC values for copper in China were derived to be 1.391 μg/L and 0.495 μg/L, respectively, while the CMC and CCC in the Taihu Lake were 32.194 μg/L and 9.697 μg/L. The high concentration of dissolved organic carbon may be the main reason for the higher WQC values in the Taihu Lake. The WQC of copper in freshwater would provide a scientific foundation for water quality standards and environmental risk assessment in China. PMID:28166229
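    Criteria such as the CCC are commonly derived from a species sensitivity distribution (SSD) fitted to toxicity data. The following is a minimal sketch of the standard log-normal HC5 (5th-percentile hazardous concentration) calculation; the EC50 values are hypothetical illustrations, not the study's data, and this generic SSD step stands in for the full BLM workflow.

```python
import math

def hc5_lognormal(toxicity_values_ug_l):
    """5th-percentile hazardous concentration (HC5) from a log-normal fit
    to species toxicity values, a common basis for water quality criteria."""
    logs = [math.log(v) for v in toxicity_values_ug_l]
    n = len(logs)
    mu = sum(logs) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in logs) / (n - 1))
    return math.exp(mu - 1.645 * sd)  # z = 1.645 for the 5th percentile

# Illustrative (hypothetical) acute copper EC50s for several species, in ug/L:
ec50s = [12.0, 25.0, 40.0, 65.0, 110.0, 180.0, 300.0]
hc5 = hc5_lognormal(ec50s)
```

    The HC5 lands below the most sensitive tested species, which is the intended protective behavior; BLM-based approaches then normalize the toxicity values to site water chemistry (e.g., dissolved organic carbon) before this step, which is why criteria differ between the national dataset and the Taihu Lake.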

  18. Adaptive Sampling-Based Information Collection for Wireless Body Area Networks.

    PubMed

    Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui

    2016-08-31

    To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of the wireless link, the upload of sensed data is bounded by a maximum upload frequency. To stay below this bound, most existing WBAN data collection approaches collect data within a tolerable error. These approaches can guarantee the precision of the collected data, but they cannot ensure that the upload frequency stays within its upper bound. Some traditional sampling-based approaches can control the upload frequency directly; however, they usually lose a large amount of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect the most informative data under the upload-frequency limitation. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. We then propose Adaptive Sampling-based Information Collection (ASIC), which consists of two algorithms: an adaptive sampling probability algorithm that computes sampling probabilities for different sensed values, and a multiple uniform sampling algorithm that provides uniform sampling for values in different intervals. Experiments based on a real dataset show that the proposed approach performs better in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings, and the discussion explains the underlying reason for the high performance of the proposed approach.
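    The core idea — giving rarer sensed values a higher sampling probability so the uploaded set is closer to uniform while respecting the upload budget — can be sketched as follows. The inverse-frequency weighting and budget scaling here are an illustrative assumption, not the paper's exact algorithm.

```python
def sampling_probabilities(value_counts, budget):
    """Assign each sensed-value class a sampling probability inversely
    proportional to its observed frequency, scaled so the expected number
    of uploads matches the budget (the maximum upload frequency).

    value_counts: dict mapping a sensed-value interval to its observed count.
    budget: maximum number of samples that may be uploaded.
    """
    k = len(value_counts)   # number of value classes
    scale = budget / k      # target expected uploads per class
    # Probability scale/c means each class contributes ~scale expected
    # uploads regardless of how common its values are (capped at 1.0).
    return {v: min(1.0, scale / c) for v, c in value_counts.items()}
```

    With counts {'low': 50, 'mid': 30, 'high': 20} and a budget of 30, each class contributes an expected 10 uploads: rare values are oversampled relative to common ones, which is exactly the uniform-coverage behavior the approach targets.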

  19. Experimental determination of the carboxylate oxygen electric-field-gradient and chemical shielding tensors in L-alanine and L-phenylalanine

    NASA Astrophysics Data System (ADS)

    Yamada, Kazuhiko; Asanuma, Miwako; Honda, Hisashi; Nemoto, Takahiro; Yamazaki, Toshio; Hirota, Hiroshi

    2007-10-01

    We report a solid-state 17O NMR study of the 17O electric-field-gradient (EFG) and chemical shielding (CS) tensors for each carboxylate group in polycrystalline L-alanine and L-phenylalanine. The magic-angle-spinning (MAS) and stationary 17O NMR spectra of these compounds were obtained at 9.4, 14.1, and 16.4 T. Analyses of these spectra yielded reliable experimental NMR parameters, including the 17O CS tensor components, the 17O quadrupole coupling parameters, and the relative orientations between the 17O CS and EFG tensors. Extensive quantum chemical calculations at both the restricted Hartree-Fock and density-functional levels of theory were carried out with various basis sets to evaluate the quality of quantum chemical calculations for the 17O NMR tensors in L-alanine. For the 17O CS tensors, calculations at the B3LYP/D95** level could reasonably reproduce the experimental values, but they still showed discrepancies of approximately 36 ppm in the δ11 components. For the 17O EFG calculations, it was advantageous to use a calibrated Q value to obtain acceptable CQ values. The calculated results also demonstrated that not only the complete intermolecular hydrogen-bonding network around the target oxygen in L-alanine, but also the intermolecular interactions around the NH3+ group, were significant for reproducing the 17O NMR tensors.
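    The quadrupole coupling parameters mentioned above follow from the EFG principal components in a standard way. Below is a minimal sketch; the conversion factor 234.9647 MHz per (a.u. of EFG × barn) is the commonly used one, the 17O quadrupole moment of -0.02558 barn is an assumed literature-style value (the paper calibrates its own Q), and the tensor components are illustrative.

```python
def quadrupole_parameters(vxx, vyy, vzz, q_barn):
    """Compute C_Q (MHz) and the asymmetry parameter eta from EFG principal
    components (atomic units), using the conventions
    |Vzz| >= |Vyy| >= |Vxx|,  C_Q = e*Q*Vzz/h,  eta = (Vxx - Vyy)/Vzz."""
    vxx, vyy, vzz = sorted([vxx, vyy, vzz], key=abs)  # |vxx| <= |vyy| <= |vzz|
    assert abs(vxx + vyy + vzz) < 1e-6                # EFG tensor is traceless
    cq = 234.9647 * q_barn * vzz   # commonly used a.u.*barn -> MHz conversion
    eta = (vxx - vyy) / vzz
    return cq, eta
```

    For an illustrative traceless tensor (-0.4, -0.8, 1.2 a.u.) and Q = -0.02558 barn, this yields |C_Q| of roughly 7.2 MHz with eta = 1/3, a magnitude in the range typical of carboxylate oxygens.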

  20. Adaptive Sampling-Based Information Collection for Wireless Body Area Networks

    PubMed Central

    Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui

    2016-01-01

    To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of the wireless link, the upload of sensed data is bounded by a maximum upload frequency. To stay below this bound, most existing WBAN data collection approaches collect data within a tolerable error. These approaches can guarantee the precision of the collected data, but they cannot ensure that the upload frequency stays within its upper bound. Some traditional sampling-based approaches can control the upload frequency directly; however, they usually lose a large amount of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect the most informative data under the upload-frequency limitation. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. We then propose Adaptive Sampling-based Information Collection (ASIC), which consists of two algorithms: an adaptive sampling probability algorithm that computes sampling probabilities for different sensed values, and a multiple uniform sampling algorithm that provides uniform sampling for values in different intervals. Experiments based on a real dataset show that the proposed approach performs better in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings, and the discussion explains the underlying reason for the high performance of the proposed approach. PMID:27589758
