Physics-based statistical model and simulation method of RF propagation in urban environments
Pao, Hsueh-Yuan; Dvorak, Steven L.
2010-09-14
A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment, extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides that can be analyzed individually; calculates spectral coefficients of the modal fields excited in the waveguides, using a database of statistical impedance boundary conditions that incorporates the complexity of building walls into the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model, based on those statistical parameters, from which predictions of communications capability may be made.
A two-component rain model for the prediction of attenuation statistics
NASA Technical Reports Server (NTRS)
Crane, R. K.
1982-01-01
A two-component rain model has been developed for calculating attenuation statistics. In contrast to most other attenuation prediction models, the two-component model calculates the occurrence probability for volume cells or debris attenuation events. The model performed significantly better than the International Radio Consultative Committee model when used for predictions on earth-satellite paths. It is expected that the model will have applications in modeling the joint statistics required for space diversity system design, the statistics of interference due to rain scatter at attenuating frequencies, and the duration statistics for attenuation events.
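Crane's published model specifies particular probability distributions and empirically fitted coefficients for the volume-cell and debris components; the Python sketch below only illustrates the two-component idea of summing the exceedance contributions of the two event types, with assumed exponential tails and made-up parameters.

```python
# Minimal sketch of a two-component exceedance calculation (illustrative only).
# The functional forms and parameter values below are assumptions for
# demonstration; they are not Crane's published model coefficients.
import numpy as np

def exceedance_probability(a_dB, p_cell, a0_cell, p_debris, a0_debris):
    """P(attenuation >= a_dB) as the sum of a volume-cell term and a debris term,
    each modeled here with an assumed exponential tail."""
    cell_term = p_cell * np.exp(-a_dB / a0_cell)
    debris_term = p_debris * np.exp(-a_dB / a0_debris)
    return cell_term + debris_term

# Example: probability of exceeding 10 dB for assumed component parameters.
print(exceedance_probability(10.0, p_cell=0.02, a0_cell=4.0,
                             p_debris=0.1, a0_debris=1.5))
```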
A statistical model of operational impacts on the framework of the bridge crane
NASA Astrophysics Data System (ADS)
Antsev, V. Yu; Tolokonnikov, A. S.; Gorynin, A. D.; Reutov, A. A.
2017-02-01
The technical regulations of the Customs Union demand a risk analysis of bridge crane operation at the design stage. A statistical model has been developed for performing randomized risk calculations, allowing possible operational influences on the bridge crane metal structure to be modeled in their various combinations. The statistical model is implemented in practice in a software product for automated calculation of the risk of bridge crane failure.
Tonkin, Matthew J.; Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.
2007-01-01
The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one or more parameters is added.
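As a rough illustration of the OPR idea only (not the OPR-PPR file formats or the MODFLOW-2000/UCODE_2005 interfaces), the sketch below builds the linear-theory prediction standard deviation from an observation sensitivity matrix and weights and reports the percent increase when a single observation is omitted; all matrices and numbers are invented placeholders.

```python
# Hedged sketch of the OPR statistic using the linear expression for prediction
# standard deviation; names and toy numbers are illustrative assumptions.
import numpy as np

def prediction_std(X, w, z):
    """s_pred = sqrt(z^T (X^T W X)^(-1) z) for sensitivity matrix X (obs x params),
    observation weights w, and prediction sensitivity vector z (variance scale omitted,
    since it cancels in the percent change)."""
    V = np.linalg.inv(X.T @ np.diag(w) @ X)
    return float(np.sqrt(z @ V @ z))

def opr_percent_increase(X, w, z, omit):
    """Percent increase in prediction standard deviation when observation `omit` is removed."""
    base = prediction_std(X, w, z)
    keep = [i for i in range(X.shape[0]) if i != omit]
    return 100.0 * (prediction_std(X[keep], w[keep], z) - base) / base

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))        # sensitivities of 8 observations to 3 parameters
w = np.ones(8)                     # observation weights
z = np.array([1.0, 0.5, -0.2])     # sensitivity of the prediction to the parameters
print([round(opr_percent_increase(X, w, z, i), 2) for i in range(8)])
```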
A first principles calculation and statistical mechanics modeling of defects in Al-H system
NASA Astrophysics Data System (ADS)
Ji, Min; Wang, Cai-Zhuang; Ho, Kai-Ming
2007-03-01
The behavior of defects and hydrogen in Al was investigated by first-principles calculations and statistical mechanics modeling. The formation energies of different defects in the Al+H system, such as the Al vacancy, interstitial H, and multiple H atoms in an Al vacancy, were calculated by the first-principles method. The defect concentration in thermodynamic equilibrium was studied by total free-energy calculations, including configurational entropy and defect-defect interactions, from the low-concentration limit to the hydride limit. In our grand canonical ensemble model, the hydrogen chemical potential under different environments plays an important role in determining the defect concentrations and properties in the Al-H system.
An Examination of Statistical Power in Multigroup Dynamic Structural Equation Models
ERIC Educational Resources Information Center
Prindle, John J.; McArdle, John J.
2012-01-01
This study used statistical simulation to calculate differential statistical power in dynamic structural equation models with groups (as in McArdle & Prindle, 2008). Patterns of between-group differences were simulated to provide insight into how model parameters influence power approximations. Chi-square and root mean square error of…
Heads Up! a Calculation- & Jargon-Free Approach to Statistics
ERIC Educational Resources Information Center
Giese, Alan R.
2012-01-01
Evaluating the strength of evidence in noisy data is a critical step in scientific thinking that typically relies on statistics. Students without statistical training will benefit from heuristic models that highlight the logic of statistical analysis. The likelihood associated with various coin-tossing outcomes gives students such a model. There…
Murchie, Brent; Tandon, Kanwarpreet; Hakim, Seifeldin; Shah, Kinchit; O'Rourke, Colin; Castro, Fernando J
2017-04-01
Colorectal cancer (CRC) screening guidelines likely over-generalize CRC risk, 35% of Americans are not up to date with screening, and the incidence of CRC in younger patients is growing. We developed a practical prediction model for high-risk colon adenomas in an average-risk population, including an expanded definition of high-risk polyps (≥3 nonadvanced adenomas), to identify patients at higher-than-average risk. We also compared results with previously created calculators. Patients aged 40 to 59 years undergoing first-time average-risk screening or diagnostic colonoscopies were evaluated. Risk calculators for advanced adenomas and high-risk adenomas were created based on age, body mass index, sex, race, and smoking history. Previously established calculators with similar risk factors were selected for comparison of the concordance statistic (c-statistic) and external validation. A total of 5063 patients were included. Advanced adenomas and high-risk adenomas were seen in 5.7% and 7.4% of the patient population, respectively. The c-statistic for our calculator was 0.639 for the prediction of advanced adenomas and 0.650 for high-risk adenomas. When applied to our population, all previous models had lower c-statistic results, although one performed similarly. Our model compares favorably to previously established prediction models. Age and body mass index were used as continuous variables, likely improving the c-statistic. It also reports absolute predictive probabilities of advanced and high-risk polyps, allowing for more individualized risk assessment of CRC.
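For readers unfamiliar with the concordance statistic, the short sketch below fits a logistic model to simulated data with the same kinds of predictors (age and BMI continuous, sex and smoking binary) and computes the c-statistic as the area under the ROC curve; the risk relationship and all numbers are made up and are not the authors' model.

```python
# Minimal sketch of computing a c-statistic (AUC) for a risk model on
# simulated data; predictor names and coefficients are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000
age = rng.uniform(40, 59, n)                 # continuous, as in the abstract
bmi = rng.normal(28, 5, n)                   # continuous
male = rng.integers(0, 2, n)
smoker = rng.integers(0, 2, n)
# Simulated outcome with an assumed (made-up) risk relationship.
logit = -6.0 + 0.05 * age + 0.04 * bmi + 0.4 * male + 0.5 * smoker
y = rng.random(n) < 1 / (1 + np.exp(-logit))

Xmat = np.column_stack([age, bmi, male, smoker])
model = LogisticRegression(max_iter=1000).fit(Xmat, y)
probs = model.predict_proba(Xmat)[:, 1]
print("c-statistic:", round(roc_auc_score(y, probs), 3))
```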
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: Dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data; Sparse tensorization methods [2] utilizing node-nested hierarchies; and Sampling methods [4] for high-dimensional random variable spaces.
Effective field theory of statistical anisotropies for primordial bispectrum and gravitational waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rostami, Tahereh; Karami, Asieh; Firouzjahi, Hassan, E-mail: t.rostami@ipm.ir, E-mail: karami@ipm.ir, E-mail: firouz@ipm.ir
2017-06-01
We present an effective field theory study of primordial statistical anisotropies in models of anisotropic inflation. The general action in unitary gauge is presented and used to calculate the leading interactions between the gauge field fluctuations, the curvature perturbations, and the tensor perturbations. The anisotropies in the scalar power spectrum and bispectrum are calculated, and the dependence of these anisotropies on the EFT couplings is presented. In addition, we calculate the statistical anisotropy in the tensor power spectrum and the scalar-tensor cross correlation. Our EFT approach incorporates anisotropies generated in models with a non-trivial speed for the gauge field fluctuations and a sound speed for scalar perturbations, such as in DBI inflation.
Calculation of precise firing statistics in a neural network model
NASA Astrophysics Data System (ADS)
Cho, Myoung Won
2017-08-01
A precise prediction of neural firing dynamics is required to understand the function of, and the learning process in, a biological neural network, which operates depending on exact spike timings. Fundamentally, the prediction of firing statistics is a delicate many-body problem because the firing probability of a neuron at a given time is determined by a summation over all effects from past firing states. A neural network model based on the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.
Gorobets, Yu I; Gorobets, O Yu
2015-01-01
A statistical model is proposed in this paper for describing the orientation of trajectories of unicellular diamagnetic organisms in a magnetic field. A statistical parameter, the effective energy, is calculated on the basis of this model. The resulting effective energy is a statistical characteristic of the trajectories of diamagnetic microorganisms in a magnetic field connected with their metabolism. The statistical model is applicable to the case when the energy of the thermal motion of bacteria is negligible in comparison with their energy in a magnetic field and the bacteria manifest significant "active random movement", i.e., there is randomizing motion of the bacteria of a nonthermal nature, for example, movement of bacteria by means of flagella. The energy of the randomizing active self-motion of bacteria is characterized by a new statistical parameter for biological objects. This parameter replaces the energy of the randomizing thermal motion in the calculation of the statistical distribution.
Variability aware compact model characterization for statistical circuit design optimization
NASA Astrophysics Data System (ADS)
Qiao, Ying; Qian, Kun; Spanos, Costas J.
2012-03-01
Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose an efficient variability-aware compact model characterization methodology based on the linear propagation of variance. Hierarchical spatial variability patterns of selected compact model parameters are directly calculated from transistor array test structures. This methodology has been implemented and tested using transistor I-V measurements and the EKV-EPFL compact model. The calculated results compare well to full-wafer direct model parameter extractions. Further studies address the proper selection of both the compact model parameters and the electrical measurement metrics used in the method.
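A minimal sketch of linear (first-order) propagation of variance, the core operation of the methodology described above; the Jacobian, parameter covariance, and metric names are illustrative assumptions, not values from an EKV-EPFL extraction.

```python
# Hedged sketch of first-order propagation of variance for compact-model
# parameters; all numerical values are illustrative.
import numpy as np

def propagate_variance(J, cov_p):
    """First-order propagation: Cov(y) ~= J Cov(p) J^T,
    where J[i, j] = d y_i / d p_j evaluated at the nominal parameter point."""
    return J @ cov_p @ J.T

# Example: two electrical metrics depending on three compact-model parameters.
J = np.array([[0.8, -0.1, 0.3],
              [0.2,  0.5, -0.4]])
cov_p = np.diag([0.01, 0.04, 0.02])     # assumed parameter variances
cov_y = propagate_variance(J, cov_p)
print("metric standard deviations:", np.sqrt(np.diag(cov_y)))
```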
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
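As a minimal illustration of a LOD score maximized over one genetic parameter (the recombination fraction), the sketch below evaluates LOD(θ) on a grid for fully informative meioses; the linkage analyses in the paper additionally vary parameters such as penetrance and phenocopy rate, which are omitted here.

```python
# Hedged sketch: maximum LOD score over a grid of recombination fractions,
# given counts of recombinant and non-recombinant meioses (toy numbers).
import numpy as np

def lod(theta, n_rec, n_nonrec):
    """LOD(theta) = log10 L(theta) - log10 L(theta = 0.5)."""
    return (n_rec * np.log10(theta) + n_nonrec * np.log10(1 - theta)
            - (n_rec + n_nonrec) * np.log10(0.5))

thetas = np.linspace(0.001, 0.5, 500)
scores = lod(thetas, n_rec=3, n_nonrec=17)
i = np.argmax(scores)
print(f"max LOD = {scores[i]:.2f} at theta = {thetas[i]:.3f}")
```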
New statistical scission-point model to predict fission fragment observables
NASA Astrophysics Data System (ADS)
Lemaître, Jean-François; Panebianco, Stefano; Sida, Jean-Luc; Hilaire, Stéphane; Heinrich, Sophie
2015-09-01
The development of high performance computing facilities makes possible a massive production of nuclear data in a full microscopic framework. Taking advantage of the individual potential calculations of more than 7000 nuclei, a new statistical scission-point model, called SPY, has been developed. It gives access to the absolute available energy at the scission point, which allows the use of a parameter-free microcanonical statistical description to calculate the distributions and the mean values of all fission observables. SPY uses the richness of microscopy in a rather simple theoretical framework, without any parameter except the scission-point definition, to draw clear answers based on perfect knowledge of the ingredients involved in the model, with very limited computing cost.
NASA Astrophysics Data System (ADS)
Skrzypek, Grzegorz; Sadler, Rohan; Wiśniewski, Andrzej
2017-04-01
The stable oxygen isotope composition of phosphates (δ18O) extracted from mammalian bone and tooth material is commonly used as a proxy for paleotemperature. Historically, several different analytical and statistical procedures for determining air paleotemperatures from the measured δ18O of phosphates have been applied. This inconsistency in both stable isotope data processing and the application of statistical procedures has led to large and unwanted differences between calculated results. This study presents the uncertainty associated with two of the most commonly used regression methods: the least squares inverted fit and the transposed fit. We assessed the performance of these methods by designing and applying calculation experiments to multiple real-life data sets, calculating temperatures in reverse, and comparing them with the true recorded values. Our calculations clearly show that the mean absolute errors are always substantially higher for the inverted fit (a causal model), with the transposed fit (a predictive model) returning mean values closer to the measured values (Skrzypek et al. 2015). The predictive models always performed better than the causal models, with 12-65% lower mean absolute errors. Moreover, the least-squares regression (LSM) model is more appropriate than Reduced Major Axis (RMA) regression for calculating the environmental water stable oxygen isotope composition from phosphate signatures, as well as for calculating air temperature from the δ18O value of environmental water. The transposed fit introduces a lower overall error than the inverted fit for both the δ18O of environmental water and the Tair calculations; therefore, the predictive models are more statistically efficient than the causal models in this instance. The direct comparison of paleotemperature results from different laboratories and studies may only be achieved if a single method of calculation is applied. Reference: Skrzypek G., Sadler R., Wiśniewski A., 2016. Reassessment of recommendations for processing mammal phosphate δ18O data for paleotemperature reconstruction. Palaeogeography, Palaeoclimatology, Palaeoecology 446, 162-167.
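The contrast between the inverted (causal) and transposed (predictive) fits can be reproduced with a few lines of simulation; the sketch below uses a made-up linear proxy-temperature relationship and noise level, and simply compares the mean absolute errors of the two calibration directions.

```python
# Hedged sketch comparing an inverted (causal) fit with a transposed
# (predictive) fit for recovering temperature from an isotope proxy; the
# synthetic data and calibration slope/intercept are assumptions.
import numpy as np

rng = np.random.default_rng(2)
T_true = rng.uniform(0, 25, 200)                        # "measured" temperatures
d18O = 0.5 * T_true - 10.0 + rng.normal(0, 0.8, 200)    # proxy with noise

# Inverted (causal) fit: regress proxy on temperature, then solve for temperature.
b1, a1 = np.polyfit(T_true, d18O, 1)
T_inverted = (d18O - a1) / b1

# Transposed (predictive) fit: regress temperature on proxy directly.
b2, a2 = np.polyfit(d18O, T_true, 1)
T_transposed = b2 * d18O + a2

print("MAE inverted fit:  ", round(np.mean(np.abs(T_inverted - T_true)), 3))
print("MAE transposed fit:", round(np.mean(np.abs(T_transposed - T_true)), 3))
```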
Interpretation of the results of statistical measurements. [search for basic probability model
NASA Technical Reports Server (NTRS)
Olshevskiy, V. V.
1973-01-01
For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional, which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters of a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.
Filter Tuning Using the Chi-Squared Statistic
NASA Technical Reports Server (NTRS)
Lilly-Salkowski, Tyler B.
2017-01-01
This paper examines the use of the Chi-square statistic as a means of evaluating filter performance. The goal of the process is to characterize the filter performance in the metric of covariance realism. The Chi-squared statistic is the value calculated to determine the realism of a covariance based on the prediction accuracy and the covariance values at a given point in time. Once calculated, it is the distribution of this statistic that provides insight on the accuracy of the covariance. The process of tuning an Extended Kalman Filter (EKF) for Aqua and Aura support is described, including examination of the measurement errors of available observation types, and methods of dealing with potentially volatile atmospheric drag modeling. Predictive accuracy and the distribution of the Chi-squared statistic, calculated from EKF solutions, are assessed.
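A minimal sketch of the covariance-realism check described above: the squared Mahalanobis distance of each prediction error with respect to the predicted covariance should follow a chi-squared distribution with degrees of freedom equal to the state dimension. The covariance, dimension, and simulated errors below are illustrative stand-ins for filter output.

```python
# Hedged sketch of a chi-squared covariance-realism statistic; data are
# simulated, whereas in practice the errors and covariances come from the EKF.
import numpy as np
from scipy import stats

def chi_square_statistic(error, covariance):
    """epsilon = e^T P^{-1} e for prediction error e and predicted covariance P."""
    return float(error @ np.linalg.solve(covariance, error))

rng = np.random.default_rng(3)
dof = 3                                  # e.g., a position-only comparison
P = np.diag([4.0, 9.0, 1.0])             # assumed predicted covariance
errors = rng.multivariate_normal(np.zeros(dof), P, size=500)
eps = np.array([chi_square_statistic(e, P) for e in errors])

# If the covariance is realistic, eps ~ chi2(dof); a one-sample KS test is one check.
print("mean (expect ~dof):", round(eps.mean(), 2))
print("KS p-value vs chi2:", round(stats.kstest(eps, "chi2", args=(dof,)).pvalue, 3))
```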
Information retrieval from wide-band meteorological data - An example
NASA Technical Reports Server (NTRS)
Adelfang, S. I.; Smith, O. E.
1983-01-01
The methods proposed by Smith and Adelfang (1981) and Smith et al. (1982) are used to calculate probabilities over rectangles and sectors of the gust magnitude-gust length plane; probabilities over the same regions are also calculated from the observed distributions and a comparison is also presented to demonstrate the accuracy of the statistical model. These and other statistical results are calculated from samples of Jimsphere wind profiles at Cape Canaveral. The results are presented for a variety of wavelength bands, altitudes, and seasons. It is shown that wind perturbations observed in Jimsphere wind profiles in various wavelength bands can be analyzed by using digital filters. The relationship between gust magnitude and gust length is modeled with the bivariate gamma distribution. It is pointed out that application of the model to calculate probabilities over specific areas of the gust magnitude-gust length plane can be useful in aerospace design.
A statistical method to estimate low-energy hadronic cross sections
NASA Astrophysics Data System (ADS)
Balassa, Gábor; Kovács, Péter; Wolf, György
2018-02-01
In this article we propose a model based on the Statistical Bootstrap approach to estimate the cross sections of different hadronic reactions up to a few GeV in c.m.s. energy. The method is based on the idea that when two particles collide, a so-called fireball is formed, which after a short time period decays statistically into a specific final state. To calculate the probabilities we use a phase space description extended with quark combinatorial factors and the possibility of more than one fireball forming. In a few simple cases the probability of a specific final state can be calculated analytically, and we show that the model is able to reproduce the ratios of the considered cross sections. We also show that the model is able to describe proton-antiproton annihilation at rest. In the latter case we used a numerical method to calculate the more complicated final-state probabilities. Additionally, we examined the formation of strange and charmed mesons, where we used existing data to fit the relevant model parameters.
BIG BANG NUCLEOSYNTHESIS WITH A NON-MAXWELLIAN DISTRIBUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertulani, C. A.; Fuqua, J.; Hussein, M. S.
The abundances of light elements based on the big bang nucleosynthesis model are calculated using the Tsallis non-extensive statistics. The impact of the variation of the non-extensive parameter q from unity is compared to observations and to the abundance yields from the standard big bang model. We find large differences between the reaction rates and the abundances of light elements calculated with the extensive and the non-extensive statistics. We find that the observations are consistent with a non-extensive parameter q = 1 (+0.05, -0.12), indicating that a large deviation from the Boltzmann-Gibbs statistics (q = 1) is highly unlikely.
ERIC Educational Resources Information Center
Weerasinghe, Dash; Orsak, Timothy; Mendro, Robert
In an age of student accountability, public school systems must find procedures for identifying effective schools, classrooms, and teachers that help students continue to learn academically. As a result, researchers have been modeling schools and classrooms to calculate productivity indicators that will withstand not only statistical review but…
Statistical properties of several models of fractional random point processes
NASA Astrophysics Data System (ADS)
Bendjaballah, C.
2011-08-01
Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.
Sub-poissonian photon statistics in the coherent state Jaynes-Cummings model in non-resonance
NASA Astrophysics Data System (ADS)
Zhang, Jia-tai; Fan, An-fu
1992-03-01
We study a model with a two-level atom (TLA) interacting non-resonantly with a single-mode quantized cavity field (QCF). The photon number probability function, the mean photon number, and Mandel's fluctuation parameter are calculated. Sub-Poissonian photon statistics are obtained in the non-resonant interaction. These statistical properties depend strongly on the detuning parameters.
Reflexion on linear regression trip production modelling method for ensuring good model quality
NASA Astrophysics Data System (ADS)
Suprayitno, Hitapriya; Ratnasari, Vita
2017-11-01
Transport modelling is important. For certain cases the conventional model still has to be used, in which having a good trip production model is essential. A good model can only be obtained from a good sample. Two basic principles of good sampling are that the sample be capable of representing the population characteristics and capable of producing an acceptable error at a certain confidence level. It seems that these principles are not yet well understood or applied in trip production modelling. It is therefore necessary to investigate trip production modelling practice in Indonesia and to formulate a better modelling method that ensures model quality. The results of this research are presented as follows. Statistics provides a method to calculate the span of a predicted value at a certain confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample is already capable of giving an excellent R2 value and that the sample composition can significantly change the model. Hence, a good R2 value does not always mean good model quality. These observations lead to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. The quality measure is defined as having both a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate the statistical calculation method and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
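The confidence interval of a predicted value for simple linear regression, used above as a quality measure, can be computed directly; the sketch below uses simulated trip-production-style data (household size versus trips, both invented) and reports the interval for the mean response at one point. A prediction interval for an individual observation would add 1 under the square root.

```python
# Hedged sketch: confidence interval of the predicted (mean) value for simple
# linear regression; the data are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.uniform(1, 6, 40)                          # e.g., household size
y = 0.9 * x + 1.0 + rng.normal(0, 0.5, 40)         # e.g., trips produced

n = len(x)
b, a = np.polyfit(x, y, 1)
resid = y - (a + b * x)
s = np.sqrt(np.sum(resid**2) / (n - 2))            # residual standard error
x0 = 4.0                                           # point at which to predict
se_mean = s * np.sqrt(1 / n + (x0 - x.mean())**2 / np.sum((x - x.mean())**2))
t = stats.t.ppf(0.975, n - 2)                      # 95% confidence level
y0 = a + b * x0
print(f"predicted mean at x0={x0}: {y0:.2f}   95% CI: [{y0 - t*se_mean:.2f}, {y0 + t*se_mean:.2f}]")
```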
Ensuring Positiveness of the Scaled Difference Chi-Square Test Statistic
ERIC Educational Resources Information Center
Satorra, Albert; Bentler, Peter M.
2010-01-01
A scaled difference test statistic T̃d that can be computed by hand calculation from standard software for structural equation models (SEM) was proposed in Satorra and Bentler (Psychometrika 66:507-514, 2001). The statistic T̃d is asymptotically equivalent to the scaled difference test statistic T̄…
Evolution of cosmic string networks
NASA Technical Reports Server (NTRS)
Albrecht, Andreas; Turok, Neil
1989-01-01
A discussion of the evolution and observable consequences of a network of cosmic strings is given. A simple model for the evolution of the string network is presented, and related to the statistical mechanics of string networks. The model predicts the long string density throughout the history of the universe from a single parameter, which researchers calculate in radiation era simulations. The statistical mechanics arguments indicate a particular thermal form for the spectrum of loops chopped off the network. Detailed numerical simulations of string networks in expanding backgrounds are performed to test the model. Consequences for large scale structure, the microwave and gravity wave backgrounds, nucleosynthesis and gravitational lensing are calculated.
NASA Astrophysics Data System (ADS)
Butlitsky, M. A.; Zelener, B. B.; Zelener, B. V.
2015-11-01
Earlier, a two-component pseudopotential plasma model, which we call the "shelf Coulomb" model, was developed. A Monte Carlo study of the canonical NVT ensemble with periodic boundary conditions was undertaken to calculate equations of state, pair distribution functions, internal energies, and other thermodynamic properties of the model. In the present work, an attempt is made to apply the so-called hybrid Gibbs statistical ensemble Monte Carlo technique to this model. First simulation results show qualitatively similar behavior in the critical point region for both methods. The Gibbs ensemble technique allows us to estimate the position of the melting curve and the triple point of the model (in reduced temperature and specific volume coordinates): T* ≈ 0.0476, v* ≈ 6 × 10^-4.
Nuclear level densities of 64,66Zn from neutron evaporation
Ramirez, A. P. D.; Voinov, A. V.; Grimes, S. M.; ...
2013-12-26
Double differential cross sections of neutrons from d + 63,65Cu reactions have been measured at deuteron energies of 6 and 7.5 MeV. The cross sections measured at backward angles have been compared to theoretical calculations in the framework of the statistical Hauser-Feshbach model. Three different level density models were tested: the Fermi-gas model, the Gilbert-Cameron model, and the microscopic approach through the Hartree-Fock-Bogoliubov method (HFBM). The calculations using the Gilbert-Cameron model are in best agreement with our experimental data. Level densities of the residual nuclei 64Zn and 66Zn have been obtained from statistical neutron evaporation spectra. In conclusion, the angle-integrated cross sections have been analyzed with the exciton model of nuclear reactions.
Statistical dielectronic recombination rates for multielectron ions in plasma
NASA Astrophysics Data System (ADS)
Demura, A. V.; Leont'iev, D. S.; Lisitsa, V. S.; Shurygin, V. A.
2017-10-01
We describe the general analytic derivation of the dielectronic recombination (DR) rate coefficient for multielectron ions in a plasma based on the statistical theory of an atom in terms of the spatial distribution of the atomic electron density. The dielectronic recombination rates for complex multielectron tungsten ions are calculated numerically in a wide range of variation of the plasma temperature, which is important for modern nuclear fusion studies. The results of statistical theory are compared with the data obtained using level-by-level codes ADPAK, FAC, HULLAC, and experimental results. We consider different statistical DR models based on the Thomas-Fermi distribution, viz., integral and differential with respect to the orbital angular momenta of the ion core and the trapped electron, as well as the Rost model, which is an analog of the Frank-Condon model as applied to atomic structures. In view of its universality and relative simplicity, the statistical approach can be used for obtaining express estimates of the dielectronic recombination rate coefficients in complex calculations of the parameters of the thermonuclear plasmas. The application of statistical methods also provides information for the dielectronic recombination rates with much smaller computer time expenditures as compared to available level-by-level codes.
BCM: toolkit for Bayesian analysis of Computational Models using samplers.
Thijssen, Bram; Dijkstra, Tjeerd M H; Heskes, Tom; Wessels, Lodewyk F A
2016-10-21
Computational models in biology are characterized by a large degree of uncertainty. This uncertainty can be analyzed with Bayesian statistics; however, the sampling algorithms frequently used for calculating Bayesian statistical estimates are computationally demanding, and each algorithm has unique advantages and disadvantages. It is typically unclear, before starting an analysis, which algorithm will perform well on a given computational model. We present BCM, a toolkit for the Bayesian analysis of Computational Models using samplers. It provides efficient, multithreaded implementations of eleven algorithms for sampling from posterior probability distributions and for calculating marginal likelihoods. BCM includes tools to simplify the process of model specification and scripts for visualizing the results. The flexible architecture allows it to be used on diverse types of biological computational models. In an example inference task using a model of the cell cycle based on ordinary differential equations, BCM is significantly more efficient than existing software packages, allowing more challenging inference problems to be solved. BCM represents an efficient one-stop shop for computational modelers wishing to use sampler-based Bayesian statistics.
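BCM itself is a C++ toolkit, so the sketch below does not use its API; it is only a generic, minimal random-walk Metropolis sampler in Python illustrating the kind of posterior sampling such toolkits provide, applied to a toy one-parameter model with simulated data.

```python
# Hedged sketch of sampler-based Bayesian inference: random-walk Metropolis on
# a toy model (normal likelihood, broad normal prior); not BCM's interface.
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(loc=2.0, scale=1.0, size=50)       # observations from a toy model

def log_posterior(theta):
    log_prior = -0.5 * (theta / 10.0) ** 2            # broad normal prior
    log_lik = -0.5 * np.sum((data - theta) ** 2)      # unit-variance normal likelihood
    return log_prior + log_lik

theta, logp = 0.0, log_posterior(0.0)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.3)                 # random-walk proposal
    logp_prop = log_posterior(prop)
    if np.log(rng.random()) < logp_prop - logp:       # Metropolis acceptance
        theta, logp = prop, logp_prop
    samples.append(theta)

burned = np.array(samples[5000:])                     # discard burn-in
print("posterior mean:", round(burned.mean(), 3), "  posterior sd:", round(burned.std(), 3))
```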
Statistical and dynamical modeling of heavy-ion fusion-fission reactions
NASA Astrophysics Data System (ADS)
Eslamizadeh, H.; Razazzadeh, H.
2018-02-01
A modified statistical model and a four-dimensional dynamical model based on Langevin equations have been used to simulate the fission process of the excited compound nuclei 207At and 216Ra produced in the 19F + 188Os and 19F + 197Au fusion reactions. The evaporation residue cross section, the fission cross section, the pre-scission neutron, proton, and alpha multiplicities, and the anisotropy of the fission fragment angular distribution have been calculated for the excited compound nuclei 207At and 216Ra. In the modified statistical model, the effects of the spin K about the symmetry axis and of temperature have been considered in the calculations of the fission widths and the potential energy surfaces. It was shown that the modified statistical model can reproduce the above-mentioned experimental data using values of the temperature coefficient of the effective potential equal to λ = 0.0180 ± 0.0055 and 0.0080 ± 0.0030 MeV^-2 and values of the scaling factor of the fission barrier height equal to rs = 1.0015 ± 0.0025 and 1.0040 ± 0.0020 for the compound nuclei 207At and 216Ra, respectively. Three collective shape coordinates plus the projection of the total spin of the compound nucleus on the symmetry axis, K, were considered in the four-dimensional dynamical model. In the dynamical calculations, dissipation was generated through the chaos-weighted wall and window friction formula. Comparison of the theoretical results with the experimental data showed that the two models make it possible to reproduce the above-mentioned experimental data satisfactorily for the excited compound nuclei 207At and 216Ra.
Probability of Detection (POD) as a statistical model for the validation of qualitative methods.
Wehling, Paul; LaBudde, Robert A; Brunelle, Sharon L; Nelson, Maria T
2011-01-01
A statistical model is presented for use in validation of qualitative methods. This model, termed Probability of Detection (POD), harmonizes the statistical concepts and parameters between quantitative and qualitative method validation. POD characterizes method response with respect to concentration as a continuous variable. The POD model provides a tool for graphical representation of response curves for qualitative methods. In addition, the model allows comparisons between candidate and reference methods, and provides calculations of repeatability, reproducibility, and laboratory effects from collaborative study data. Single laboratory study and collaborative study examples are given.
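One common way to realize a POD curve is a logistic model of detection probability versus concentration; the sketch below fits such a curve to simulated detect/non-detect outcomes. The logistic form and all numbers are illustrative assumptions, not a specific parameterization mandated by the POD framework.

```python
# Hedged sketch of a POD response curve: logistic fit of qualitative
# (detect / non-detect) outcomes against concentration, on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
conc = rng.uniform(0.0, 2.0, 300)                         # spiking concentration
true_pod = 1 / (1 + np.exp(-(conc - 0.8) / 0.15))         # assumed true response curve
detected = (rng.random(300) < true_pod).astype(int)       # qualitative outcomes

fit = LogisticRegression().fit(conc.reshape(-1, 1), detected)
grid = np.linspace(0, 2, 9).reshape(-1, 1)
for c, p in zip(grid.ravel(), fit.predict_proba(grid)[:, 1]):
    print(f"concentration {c:.2f}   estimated POD {p:.2f}")
```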
Progress in the improved lattice calculation of direct CP-violation in the Standard Model
NASA Astrophysics Data System (ADS)
Kelly, Christopher
2018-03-01
We discuss the ongoing effort by the RBC & UKQCD collaborations to improve our lattice calculation of the measure of Standard Model direct CP violation, ɛ', with physical kinematics. We present our progress in decreasing the (dominant) statistical error and discuss other related activities aimed at reducing the systematic errors.
Pseudo-Boltzmann model for modeling the junctionless transistors
NASA Astrophysics Data System (ADS)
Avila-Herrera, F.; Cerdeira, A.; Roldan, J. B.; Sánchez-Moreno, P.; Tienda-Luna, I. M.; Iñiguez, B.
2014-05-01
Calculation of the carrier concentrations in semiconductors using the Fermi-Dirac integral requires complex numerical calculations; in this context, practically all analytical device models are based on Boltzmann statistics, even though it is known that this leads to an over-estimation of carrier densities at high doping concentrations. In this paper, a new approximation to the Fermi-Dirac integral, called the Pseudo-Boltzmann model, is presented for modeling junctionless transistors with high doping concentrations.
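The over-estimation by Boltzmann statistics at high doping can be seen by comparing the Fermi-Dirac integral of order 1/2 with its non-degenerate limit; the sketch below does this numerically. Carrier density is proportional to the integral (normalization constants omitted), and this is a generic comparison, not the Pseudo-Boltzmann approximation itself.

```python
# Hedged sketch: Fermi-Dirac integral of order 1/2 versus its Boltzmann limit,
# as a function of the reduced Fermi level eta = (E_F - E_C)/kT.
import numpy as np
from scipy.integrate import quad
from scipy.special import expit

def fermi_dirac_half(eta):
    """F_{1/2}(eta) = integral_0^inf sqrt(x) / (1 + exp(x - eta)) dx."""
    val, _ = quad(lambda x: np.sqrt(x) * expit(eta - x), 0, np.inf)
    return val

def boltzmann_half(eta):
    """Non-degenerate (Boltzmann) limit of the same integral: (sqrt(pi)/2) exp(eta)."""
    return 0.5 * np.sqrt(np.pi) * np.exp(eta)

for eta in [-4.0, 0.0, 2.0, 5.0]:
    fd, mb = fermi_dirac_half(eta), boltzmann_half(eta)
    print(f"eta = {eta:+.1f}   F_1/2 = {fd:9.4f}   Boltzmann = {mb:9.4f}   ratio = {mb / fd:5.2f}")
```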
NASA Astrophysics Data System (ADS)
Goldsworthy, M. J.
2012-10-01
One of the most useful tools for modelling rarefied hypersonic flows is the Direct Simulation Monte Carlo (DSMC) method. Simulator particle movement and collision calculations are combined with statistical procedures to model thermal non-equilibrium flow-fields described by the Boltzmann equation. The Macroscopic Chemistry Method for DSMC simulations was developed to simplify the inclusion of complex thermal non-equilibrium chemistry. The macroscopic approach uses statistical information which is calculated during the DSMC solution process in the modelling procedures. Here it is shown how inclusion of macroscopic information in models of chemical kinetics, electronic excitation, ionization, and radiation can enhance the capabilities of DSMC to model flow-fields where a range of physical processes occur. The approach is applied to the modelling of a 6.4 km/s nitrogen shock wave and results are compared with those from existing shock-tube experiments and continuum calculations. Reasonable agreement between the methods is obtained. The quality of the comparison is highly dependent on the set of vibrational relaxation and chemical kinetic parameters employed.
Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.
2000-01-01
This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; statistics produced by post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity, horizontal anisotropy, vertical hydraulic conductivity or vertical anisotropy, specific storage, and specific yield; and, for implicitly represented layers, vertical hydraulic conductivity. In addition, parameters can be defined to calculate the hydraulic conductance of the River, General-Head Boundary, and Drain Packages; areal recharge rates of the Recharge Package; maximum evapotranspiration of the Evapotranspiration Package; pumpage or the rate of flow at defined-flux boundaries of the Well Package; and the hydraulic head at constant-head boundaries. The spatial variation of model inputs produced using defined parameters is very flexible, including interpolated distributions that require the summation of contributions from different parameters. Observations can include measured hydraulic heads or temporal changes in hydraulic heads, measured gains and losses along head-dependent boundaries (such as streams), flows through constant-head boundaries, and advective transport through the system, which generally would be inferred from measured concentrations. MODFLOW-2000 is intended for use on any computer operating system. The program consists of algorithms programmed in Fortran 90, which efficiently performs numerical calculations and is fully compatible with the newer Fortran 95. The code is easily modified to be compatible with FORTRAN 77. Coordination for multiple processors is accommodated using Message Passing Interface (MPI) commands. 
The program is designed in a modular fashion that is intended to support inclusion of new capabilities.
NASA Technical Reports Server (NTRS)
Manning, Robert M.
1986-01-01
A rain attenuation prediction model is described for use in calculating satellite communication link availability for any specific location in the world that is characterized by an extended record of rainfall. Such a formalism is necessary for the accurate assessment of such availability predictions in the case of the small user-terminal concept of the Advanced Communication Technology Satellite (ACTS) Project. The model employs the theory of extreme value statistics to generate the necessary statistical rainrate parameters from rain data in the form compiled by the National Weather Service. These location dependent rain statistics are then applied to a rain attenuation model to obtain a yearly prediction of the occurrence of attenuation on any satellite link at that location. The predictions of this model are compared to those of the Crane Two-Component Rain Model and some empirical data and found to be very good. The model is then used to calculate rain attenuation statistics at 59 locations in the United States (including Alaska and Hawaii) for the 20 GHz downlinks and 30 GHz uplinks of the proposed ACTS system. The flexibility of this modeling formalism is such that it allows a complete and unified treatment of the temporal aspects of rain attenuation that leads to the design of an optimum stochastic power control algorithm, the purpose of which is to efficiently counter such rain fades on a satellite link.
Conservative Tests under Satisficing Models of Publication Bias.
McCrary, Justin; Christensen, Garret; Fanelli, Daniele
2016-01-01
Publication bias leads consumers of research to observe a selected sample of statistical estimates calculated by producers of research. We calculate critical values for statistical significance that could help to adjust after the fact for the distortions created by this selection effect, assuming that the only source of publication bias is file drawer bias. These adjusted critical values are easy to calculate and differ from unadjusted critical values by approximately 50%: rather than rejecting a null hypothesis when the t-ratio exceeds 2, the analysis suggests rejecting a null hypothesis when the t-ratio exceeds 3. Samples of published social science research indicate that on average, across research fields, approximately 30% of published t-statistics fall between the standard and adjusted cutoffs.
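One way to arrive at an adjusted cutoff of roughly 3 under a pure file-drawer model is to require that the null probability of exceeding the cutoff, conditional on having passed the usual 1.96 publication threshold, remain 5%; the sketch below implements that reconstruction, which is an assumption about the calculation rather than necessarily the authors' exact procedure.

```python
# Hedged sketch of an adjusted critical value under a pure file-drawer model:
# only |t| > 1.96 results are published, so choose c such that
# P(|T| > c | |T| > 1.96) = 0.05 under a standard normal null.
from scipy import stats

alpha = 0.05
publish_cutoff = stats.norm.ppf(1 - alpha / 2)                   # ~1.96
target_tail = alpha * 2 * (1 - stats.norm.cdf(publish_cutoff))   # P(|T| > c) required
c_adjusted = stats.norm.ppf(1 - target_tail / 2)
print("unadjusted cutoff:", round(publish_cutoff, 2), "  adjusted cutoff:", round(c_adjusted, 2))
```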
Nonelastic nuclear reactions and accompanying gamma radiation
NASA Technical Reports Server (NTRS)
Snow, R.; Rosner, H. R.; George, M. C.; Hayes, J. D.
1971-01-01
Several aspects of nonelastic nuclear reactions that proceed through the formation of a compound nucleus are dealt with. The full statistical model and the partial statistical model are described, and computer programs based on these models are presented along with operating instructions and input and output for sample problems. A theoretical development of the expression for the reaction cross section in the hybrid case, which combines the continuum aspects of the full statistical model with the discrete-level aspects of the partial statistical model, is presented. Cross sections for level excitation and gamma production by neutron inelastic scattering from the nuclei Al-27, Fe-56, Si-28, and Pb-208 are calculated and compared with available experimental data.
Mechanics and statistics of the worm-like chain
NASA Astrophysics Data System (ADS)
Marantan, Andrew; Mahadevan, L.
2018-02-01
The worm-like chain model is a simple continuum model for the statistical mechanics of a flexible polymer subject to an external force. We offer a tutorial introduction to it using three approaches. First, we use a mesoscopic view, treating a long polymer (in two dimensions) as though it were made of many groups of correlated links or "clinks," allowing us to calculate its average extension as a function of the external force via scaling arguments. We then provide a standard statistical mechanics approach, obtaining the average extension by two different means: the equipartition theorem and the partition function. Finally, we work in a probabilistic framework, taking advantage of the Gaussian properties of the chain in the large-force limit to improve upon the previous calculations of the average extension.
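A crude numerical counterpart of the arguments above is a Metropolis Monte Carlo estimate of the force-extension behavior of a discretized two-dimensional worm-like chain; the bending stiffness, force, chain length, and sweep counts below are arbitrary illustrative choices.

```python
# Hedged sketch: Metropolis Monte Carlo for a discretized 2D worm-like chain
# under an external force; parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
N, a = 100, 1.0            # number of links and link length (arbitrary units)
kappa = 5.0                # bending stiffness in units of kT (assumed)
f = 0.5                    # pulling force in units of kT per link length (assumed)

def local_energy(theta, i):
    """Energy terms involving link angle theta[i]: bending with neighbors plus force coupling."""
    e = -f * a * np.cos(theta[i])
    if i > 0:
        e -= kappa * np.cos(theta[i] - theta[i - 1])
    if i < N - 1:
        e -= kappa * np.cos(theta[i + 1] - theta[i])
    return e

theta = np.zeros(N)        # start fully aligned with the force
extension = []
for sweep in range(3000):
    for _ in range(N):
        i = rng.integers(N)
        old = theta[i]
        e_old = local_energy(theta, i)
        theta[i] = old + rng.normal(0.0, 0.3)           # trial rotation of one link
        if np.log(rng.random()) >= -(local_energy(theta, i) - e_old):
            theta[i] = old                              # Metropolis rejection
    if sweep >= 1000:                                   # discard equilibration sweeps
        extension.append(a * np.sum(np.cos(theta)))

print("average extension / contour length:", round(np.mean(extension) / (N * a), 3))
```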
Testing the Predictive Power of Coulomb Stress on Aftershock Sequences
NASA Astrophysics Data System (ADS)
Woessner, J.; Lombardi, A.; Werner, M. J.; Marzocchi, W.
2009-12-01
Empirical and statistical models of clustered seismicity are usually strongly stochastic and perceived to be uninformative in their forecasts, since only marginal distributions are used, such as the Omori-Utsu and Gutenberg-Richter laws. In contrast, so-called physics-based aftershock models, based on seismic rate changes calculated from Coulomb stress changes and rate-and-state friction, make more specific predictions: anisotropic stress shadows and multiplicative rate changes. We test the predictive power of models based on Coulomb stress changes against statistical models, including the popular Short Term Earthquake Probabilities and Epidemic-Type Aftershock Sequences models: We score and compare retrospective forecasts on the aftershock sequences of the 1992 Landers, USA, the 1997 Colfiorito, Italy, and the 2008 Selfoss, Iceland, earthquakes. To quantify predictability, we use likelihood-based metrics that test the consistency of the forecasts with the data, including modified and existing tests used in prospective forecast experiments within the Collaboratory for the Study of Earthquake Predictability (CSEP). Our results indicate that a statistical model performs best. Moreover, two Coulomb model classes seem unable to compete: Models based on deterministic Coulomb stress changes calculated from a given fault-slip model, and those based on fixed receiver faults. One model of Coulomb stress changes does perform well and sometimes outperforms the statistical models, but its predictive information is diluted, because of uncertainties included in the fault-slip model. Our results suggest that models based on Coulomb stress changes need to incorporate stochastic features that represent model and data uncertainty.
NASA Astrophysics Data System (ADS)
Sergeenko, N. P.
2017-11-01
An adequate statistical method should be developed in order to predict probabilistically the range of ionospheric parameters; this problem is solved in this paper. Time series of the critical frequency of the F2 layer, foF2(t), were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order are calculated. The analysis shows that the distributions differ from the Gaussian law during disturbances. At sufficiently small probability levels, the distributions show arbitrarily large deviations from the model of a normal process. Therefore, an attempt is made to describe the statistical samples {δfoF2} on the basis of a Poisson model. For the studied samples, an exponential characteristic function is selected under the assumption that the time series are a superposition of deterministic and random processes. Using the Fourier transform, the characteristic function is transformed into a non-holomorphic, excessive-asymmetric probability density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to Kolmogorov's criterion, the probabilities of coincidence of the a posteriori distributions with the theoretical ones are P ≈ 0.7-0.9. The analysis makes it possible to draw a conclusion about the applicability of a model based on a Poisson random process for the statistical description and probabilistic estimation of the variations {δfoF2} during heliogeophysical disturbances.
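The final comparison step, testing an empirical sample of foF2 deviations against a candidate distribution with Kolmogorov's criterion, can be sketched as below; the sample is simulated, and a normal distribution is used only as a placeholder for the paper's Poisson-based asymmetric model.

```python
# Hedged sketch: Kolmogorov-Smirnov comparison of an empirical sample of
# critical-frequency deviations with a candidate model distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
delta_foF2 = rng.normal(0.0, 0.15, 500)      # placeholder sample of relative foF2 deviations

# Fitting the parameters from the same sample makes the p-value approximate;
# the paper instead compares against its Poisson-based model distribution.
mu, sigma = delta_foF2.mean(), delta_foF2.std(ddof=1)
result = stats.kstest(delta_foF2, "norm", args=(mu, sigma))
print("KS statistic:", round(result.statistic, 3), "  p-value:", round(result.pvalue, 3))
```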
A Statistical Approach For Modeling Tropical Cyclones. Synthetic Hurricanes Generator Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pasqualini, Donatella
This manuscript briefly describes a statistical approach to generate synthetic tropical cyclone tracks to be used in risk evaluations. The Synthetic Hurricane Generator (SynHurG) model allows modeling hurricane risk in the United States, supporting decision makers and implementations of adaptation strategies to extreme weather. In the literature there are mainly two approaches to model hurricane hazard for risk prediction: deterministic-statistical approaches, where the storm key physical parameters are calculated using complex physical climate models and the tracks are usually determined statistically from historical data; and statistical approaches, where both variables and tracks are estimated stochastically using historical records. SynHurG falls in the second category, adopting a pure stochastic approach.
NASA Astrophysics Data System (ADS)
Kumar, Jagadish; Ananthakrishna, G.
2018-01-01
Scale-invariant power-law distributions for acoustic emission signals are ubiquitous in several plastically deforming materials. However, power-law distributions for acoustic emission energies are reported in distinctly different plastically deforming situations such as hcp and fcc single and polycrystalline samples exhibiting smooth stress-strain curves and in dilute metallic alloys exhibiting discontinuous flow. This is surprising since the underlying dislocation mechanisms in these two types of deformations are very different. So far, there have been no models that predict the power-law statistics for discontinuous flow. Furthermore, the statistics of the acoustic emission signals in jerky flow is even more complex, requiring multifractal measures for a proper characterization. There has been no model that explains the complex statistics either. Here we address the problem of statistical characterization of the acoustic emission signals associated with the three types of the Portevin-Le Chatelier bands. Following our recently proposed general framework for calculating acoustic emission, we set up a wave equation for the elastic degrees of freedom with a plastic strain rate as a source term. The energy dissipated during acoustic emission is represented by the Rayleigh-dissipation function. Using the plastic strain rate obtained from the Ananthakrishna model for the Portevin-Le Chatelier effect, we compute the acoustic emission signals associated with the three Portevin-Le Chatelier bands and the Lüders-like band. The so-calculated acoustic emission signals are used for further statistical characterization. Our results show that the model predicts power-law statistics for all the acoustic emission signals associated with the three types of Portevin-Le Chatelier bands with the exponent values increasing with increasing strain rate. The calculated multifractal spectra corresponding to the acoustic emission signals associated with the three band types have a maximum spread for the type C bands and decreasing with types B and A. We further show that the acoustic emission signals associated with Lüders-like band also exhibit a power-law distribution and multifractality.
TSP Symposium 2012 Proceedings
2012-11-01
[Excerpt fragments only: chapter section headings (a section ending "and Statistical Model"; 7.3 Analysis and Results; 7.4 Threats to Validity and Limitations; 7.5 Conclusions; 7.6 Acknowledgments), Table 12 (Overall Statistics of the Experiment), Table 13 (Results of Pairwise ANOVA Analysis, Highlighting Statistically Significant Differences), and a note that the percentage of defects injected was calculated, with distribution statistics (mean and lower/upper confidence intervals) shown in Table 2.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jasper, Ahren
2015-04-14
The appropriateness of treating crossing seams of electronic states of different spins as nonadiabatic transition states in statistical calculations of spin-forbidden reaction rates is considered. We show that the spin-forbidden reaction coordinate, the nuclear coordinate perpendicular to the crossing seam, is coupled to the remaining nuclear degrees of freedom. We found that this coupling gives rise to multidimensional effects that are not typically included in statistical treatments of spin-forbidden kinetics. Three qualitative categories of multidimensional effects may be identified: static multidimensional effects due to the geometry-dependence of the local shape of the crossing seam and of the spin–orbit coupling, dynamical multidimensional effects due to energy exchange with the reaction coordinate during the seam crossing, and nonlocal (history-dependent) multidimensional effects due to interference of the electronic variables at second, third, and later seam crossings. Nonlocal multidimensional effects are intimately related to electronic decoherence, where electronic dephasing acts to erase the history of the system. A semiclassical model based on short-time full-dimensional trajectories that includes all three multidimensional effects as well as a model for electronic decoherence is presented. The results of this multidimensional nonadiabatic statistical theory (MNST) for the 3O + CO → CO2 reaction are compared with the results of statistical theories employing one-dimensional (Landau–Zener and weak coupling) models for the transition probability and with those calculated previously using multistate trajectories. The MNST method is shown to accurately reproduce the multistate decay-of-mixing trajectory results, so long as consistent thresholds are used. Furthermore, the MNST approach has several advantages over multistate trajectory approaches and is more suitable in chemical kinetics calculations at low temperatures and for complex systems. The error in statistical calculations that neglect multidimensional effects is shown to be as large as a factor of 2 for this system, with static multidimensional effects identified as the largest source of error.
NASA Technical Reports Server (NTRS)
Butler, C. M.; Hogge, J. E.
1978-01-01
Air quality sampling was conducted. Data for air quality parameters, recorded on written forms, punched cards or magnetic tape, are available for 1972 through 1975. Computer software was developed to (1) calculate several daily statistical measures of location, (2) plot time histories of data or the calculated daily statistics, (3) calculate simple correlation coefficients, and (4) plot scatter diagrams. Computer software was developed for processing air quality data to include time series analysis and goodness of fit tests. Computer software was developed to (1) calculate a larger number of daily statistical measures of location, and a number of daily monthly and yearly measures of location, dispersion, skewness and kurtosis, (2) decompose the extended time series model and (3) perform some goodness of fit tests. The computer program is described, documented and illustrated by examples. Recommendations are made for continuation of the development of research on processing air quality data.
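A minimal sketch of the first and third software tasks described above (daily statistical measures of location and simple correlation coefficients), assuming hourly readings held in a pandas DataFrame; the column names and synthetic values are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical hourly air-quality readings for two pollutants
rng = np.random.default_rng(1)
idx = pd.date_range("1974-01-01", periods=24 * 90, freq="h")
df = pd.DataFrame({"ozone": rng.gamma(2.0, 15.0, idx.size),
                   "no2": rng.gamma(2.5, 10.0, idx.size)}, index=idx)

# (1) daily statistical measures of location
daily = df.resample("D").agg(["mean", "median", "max"])

# (3) simple correlation coefficient between the two pollutants
r = df["ozone"].corr(df["no2"])
print(daily.head())
print(f"Pearson r = {r:.3f}")
```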
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halligan, Matthew
Radiated power calculation approaches for practical scenarios of incomplete high-density interface characterization information and incomplete incident power information are presented. The suggested approaches build upon a method that characterizes power losses through the definition of power loss constant matrices. Potential radiated power estimates include using total power loss information, partial radiated power loss information, worst case analysis, and statistical bounding analysis. A method is also proposed to calculate radiated power when incident power information is not fully known for non-periodic signals at the interface. Incident data signals are modeled from a two-state Markov chain where bit state probabilities are derived. The total spectrum for windowed signals is postulated as the superposition of spectra from individual pulses in a data sequence. Statistical bounding methods are proposed as a basis for the radiated power calculation due to the statistical calculation complexity to find a radiated power probability density function.
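The incident-power model above rests on bit-state probabilities derived from a two-state Markov chain. The sketch below, with assumed transition probabilities, shows how the stationary bit probabilities follow from the chain and how a non-periodic data sequence can be simulated; it illustrates the general idea only, not the paper's implementation.

```python
import numpy as np

# Assumed transition probabilities for the two-state (0/1) bit chain
p01, p10 = 0.3, 0.4          # P(next=1 | current=0), P(next=0 | current=1)

# Stationary bit-state probabilities of the chain
p1 = p01 / (p01 + p10)
p0 = 1.0 - p1
print(f"P(bit=0) = {p0:.3f}, P(bit=1) = {p1:.3f}")

# Simulate a non-periodic data sequence from the chain
rng = np.random.default_rng(2)
bits = np.empty(10_000, dtype=int)
bits[0] = 0
for k in range(1, bits.size):
    p_one = p01 if bits[k - 1] == 0 else 1.0 - p10
    bits[k] = rng.random() < p_one
print("empirical P(bit=1):", bits.mean())
```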
α-induced reactions on 115In: Cross section measurements and statistical model analysis
NASA Astrophysics Data System (ADS)
Kiss, G. G.; Szücs, T.; Mohr, P.; Török, Zs.; Huszánk, R.; Gyürky, Gy.; Fülöp, Zs.
2018-05-01
Background: α-nucleus optical potentials are basic ingredients of statistical model calculations used in nucleosynthesis simulations. While the nucleon+nucleus optical potential is fairly well known, for the α+nucleus optical potential several different parameter sets exist and large deviations, reaching sometimes even an order of magnitude, are found between the cross section predictions calculated using different parameter sets. Purpose: A measurement of the radiative α-capture and the α-induced reaction cross sections on the nucleus 115In at low energies allows a stringent test of statistical model predictions. Since experimental data are scarce in this mass region, this measurement can be an important input to test the global applicability of α+nucleus optical model potentials and further ingredients of the statistical model. Methods: The reaction cross sections were measured by means of the activation method. The produced activities were determined by off-line detection of the γ rays and characteristic x rays emitted during the electron capture decay of the produced Sb isotopes. The 115In(α,γ)119Sb and 115In(α,n)118mSb reaction cross sections were measured between Ec.m. = 8.83 and 15.58 MeV, and the 115In(α,n)118gSb reaction was studied between Ec.m. = 11.10 and 15.58 MeV. The theoretical analysis was performed within the statistical model. Results: The simultaneous measurement of the (α,γ) and (α,n) cross sections allowed us to determine a best-fit combination of all parameters for the statistical model. The α+nucleus optical potential is identified as the most important input for the statistical model. The best fit is obtained for the new Atomki-V1 potential, and good reproduction of the experimental data is also achieved for the first version of the Demetriou potentials and the simple McFadden-Satchler potential. The nucleon optical potential, the γ-ray strength function, and the level density parametrization are also constrained by the data although there is no unique best-fit combination. Conclusions: The best-fit calculations allow us to extrapolate the low-energy (α,γ) cross section of 115In to the astrophysical Gamow window with reasonable uncertainties. However, still further improvements of the α-nucleus potential are required for a global description of elastic (α,α) scattering and α-induced reactions in a wide range of masses and energies.
Efficient calculation of atomic rate coefficients in dense plasmas
NASA Astrophysics Data System (ADS)
Aslanyan, Valentin; Tallents, Greg J.
2017-03-01
Modelling electron statistics in a cold, dense plasma by the Fermi-Dirac distribution leads to complications in the calculations of atomic rate coefficients. The Pauli exclusion principle slows down the rate of collisions as electrons must find unoccupied quantum states and adds a further computational cost. Methods to calculate these coefficients by direct numerical integration with a high degree of parallelism are presented. This degree of optimization allows the effects of degeneracy to be incorporated into a time-dependent collisional-radiative model. Example results from such a model are presented.
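The following is a toy sketch of the kind of integral involved: a collisional-excitation rate coefficient evaluated over a Fermi-Dirac distribution with a Pauli-blocking factor on the final state. The constant cross-section, energy units, and dropped physical prefactors are assumptions for illustration; the paper's parallel quadrature scheme is not reproduced here.

```python
import numpy as np

def trap(y, x):
    """Simple trapezoidal rule."""
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

def fermi_dirac(E, mu, kT):
    return 1.0 / (np.exp((E - mu) / kT) + 1.0)

def excitation_rate(mu, kT, dE, sigma=1.0, n=20_000):
    """Toy collisional-excitation rate coefficient (arbitrary units).

    Integrand: sqrt(E) density of states x electron speed x cross-section x
    occupation of the incoming electron x vacancy at the final energy E - dE
    (Pauli blocking). Normalised by a crude free-electron density integral.
    """
    E = np.linspace(dE, dE + 40.0 * kT + abs(mu), n)     # energies in eV
    integrand = (np.sqrt(E) * np.sqrt(E) * sigma
                 * fermi_dirac(E, mu, kT)
                 * (1.0 - fermi_dirac(E - dE, mu, kT)))
    Eg = np.linspace(1e-6, 40.0 * kT + abs(mu), n)
    n_e = trap(np.sqrt(Eg) * fermi_dirac(Eg, mu, kT), Eg)
    return trap(integrand, E) / n_e

# Compare a degenerate (cold, dense) case with a non-degenerate one, 5 eV threshold
print(excitation_rate(mu=10.0, kT=1.0, dE=5.0))    # degenerate electrons
print(excitation_rate(mu=-10.0, kT=1.0, dE=5.0))   # classical-like limit
```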
NASA Astrophysics Data System (ADS)
Jasper, Ahren W.; Dawes, Richard
2013-10-01
The lowest-energy singlet (1 1A') and two lowest-energy triplet (1 3A' and 1 3A″) electronic states of CO2 are characterized using dynamically weighted multireference configuration interaction (dw-MRCI+Q) electronic structure theory calculations extrapolated to the complete basis set (CBS) limit. Global analytic representations of the dw-MRCI+Q/CBS singlet and triplet surfaces and of their CASSCF/aug-cc-pVQZ spin-orbit coupling surfaces are obtained via the interpolated moving least squares (IMLS) semiautomated surface fitting method. The spin-forbidden kinetics of the title reaction is calculated using the coupled IMLS surfaces and coherent switches with decay of mixing non-Born-Oppenheimer molecular dynamics. The calculated spin-forbidden association rate coefficient (corresponding to the high pressure limit of the rate coefficient) is 7-35 times larger at 1000-5000 K than the rate coefficient used in many detailed chemical models of combustion. A dynamical analysis of the multistate trajectories is presented. The trajectory calculations reveal direct (nonstatistical) and indirect (statistical) spin-forbidden reaction mechanisms and may be used to test the suitability of transition-state-theory-like statistical methods for spin-forbidden kinetics. Specifically, we consider the appropriateness of the "double passage" approximation, of assuming statistical distributions of seam crossings, and of applications of the unified statistical model for spin-forbidden reactions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rawnsley, K.; Swaby, P.
1996-08-01
It is increasingly acknowledged that in order to understand and forecast the behavior of fracture influenced reservoirs we must attempt to reproduce the fracture system geometry and use this as a basis for fluid flow calculation. This article aims to present a recently developed fracture modelling prototype designed specifically for use in hydrocarbon reservoir environments. The prototype "FRAME" (FRActure Modelling Environment) aims to provide a tool which will allow the generation of realistic 3D fracture systems within a reservoir model, constrained to the known geology of the reservoir by both mechanical and statistical considerations, and which can be used as a basis for fluid flow calculation. Two newly developed modelling techniques are used. The first is an interactive tool which allows complex fault surfaces and their associated deformations to be reproduced. The second is a "genetic" model which grows fracture patterns from seeds using conceptual models of fracture development. The user defines the mechanical input and can retrieve all the statistics of the growing fractures to allow comparison to assumed statistical distributions for the reservoir fractures. Input parameters include growth rate, fracture interaction characteristics, orientation maps and density maps. More traditional statistical stochastic fracture models are also incorporated. FRAME is designed to allow the geologist to input hard or soft data including seismically defined surfaces, well fractures, outcrop models, analogue or numerical mechanical models or geological "feeling". The geologist is not restricted to "a priori" models of fracture patterns that may not correspond to the data.
Garcia, F; Arruda-Neto, J D; Manso, M V; Helene, O M; Vanin, V R; Rodriguez, O; Mesa, J; Likhachev, V P; Filho, J W; Deppman, A; Perez, G; Guzman, F; de Camargo, S P
1999-10-01
A new and simple statistical procedure (STATFLUX) for the calculation of transfer coefficients of radionuclide transport to animals and plants is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. By using experimentally available curves of radionuclide concentrations versus time, for each animal compartment (organs), flow parameters were estimated by employing a least-squares procedure, whose consistency is tested. Some numerical results are presented in order to compare the STATFLUX transfer coefficients with those from other works and experimental data.
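The flavor of such a procedure (least-squares estimation of transfer coefficients from measured concentration-versus-time curves) can be illustrated with a two-compartment toy problem; the curves, rate constants, and finite-difference scheme below are assumptions for illustration, not the published STATFLUX model.

```python
import numpy as np

# Hypothetical measured concentration curves for two compartments (e.g. blood, organ)
t = np.linspace(0.0, 10.0, 101)
k12_true, kout_true = 0.8, 0.3
c1 = np.exp(-0.5 * t)                                   # source compartment (given)
# Organ compartment generated from dC2/dt = k12*C1 - kout*C2 for illustration
c2 = k12_true / (kout_true - 0.5) * (np.exp(-0.5 * t) - np.exp(-kout_true * t))

# Least-squares estimate of the transfer coefficients from the curves:
# approximate dC2/dt by finite differences and solve dC2/dt = k12*C1 - kout*C2
dc2 = np.gradient(c2, t)
design = np.column_stack([c1, -c2])
coef, *_ = np.linalg.lstsq(design, dc2, rcond=None)
print("estimated k12, kout:", coef)      # close to (0.8, 0.3)
```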
Surman, Rebecca; Mumpower, Matthew; McLaughlin, Gail
2017-02-27
Unknown nuclear masses are a major source of nuclear physics uncertainty for r-process nucleosynthesis calculations. Here we examine the systematic and statistical uncertainties that arise in r-process abundance predictions due to uncertainties in the masses of nuclear species on the neutron-rich side of stability. There is a long history of examining systematic uncertainties by the application of a variety of different mass models to r-process calculations. Here we expand upon such efforts by examining six DFT mass models, where we capture the full impact of each mass model by updating the other nuclear properties — including neutron capture rates, β-decay lifetimes, and β-delayed neutron emission probabilities — that depend on the masses. Unlike systematic effects, statistical uncertainties in the r-process pattern have just begun to be explored. Here we apply a global Monte Carlo approach, starting from the latest FRDM masses and considering random mass variations within the FRDM rms error. Here, we find in each approach that uncertain nuclear masses produce dramatic uncertainties in calculated r-process yields, which can be reduced in upcoming experimental campaigns.
The Use of Programmable Calculators in the Teaching of Economics, Part II
ERIC Educational Resources Information Center
Addis, G. H.
1978-01-01
Describes the use of programmable calculators to perform classroom controlled experiments on economic models. The complete program for exploring the dynamics of the Harrod-Domar equation is given. Some difficulties encountered and statistical uses are mentioned. (BC)
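A minimal sketch of the sort of classroom experiment the article describes, iterating the Harrod-Domar growth relation (warranted growth g = s/v) step by step the way a programmable calculator would; the parameter values are illustrative only.

```python
# Iterate the Harrod-Domar growth equation with illustrative parameters
s = 0.20          # savings rate
v = 4.0           # capital-output ratio
Y = 100.0         # initial national income

for year in range(1, 11):
    investment = s * Y
    Y += investment / v          # warranted growth: dY = s*Y / v
    print(f"year {year:2d}: income = {Y:7.2f}")

print(f"implied growth rate g = s/v = {s / v:.2%} per year")
```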
Humeniuk, Stephan; Büchler, Hans Peter
2017-12-08
We present a method for computing the full probability distribution function of quadratic observables such as particle number or magnetization for the Fermi-Hubbard model within the framework of determinantal quantum Monte Carlo calculations. Especially in cold atom experiments with single-site resolution, such a full counting statistics can be obtained from repeated projective measurements. We demonstrate that the full counting statistics can provide important information on the size of preformed pairs. Furthermore, we compute the full counting statistics of the staggered magnetization in the repulsive Hubbard model at half filling and find excellent agreement with recent experimental results. We show that current experiments are capable of probing the difference between the Hubbard model and the limiting Heisenberg model.
NASA Technical Reports Server (NTRS)
Garneau, S.; Plaut, J. J.
2000-01-01
The surface roughness of the Vastitas Borealis Formation on Mars was analyzed with fractal statistics. Root mean square slopes and fractal dimensions were calculated for 74 topographic profiles. Results have implications for radar scattering models.
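A minimal sketch of the two quantities reported (root mean square slope and fractal dimension), computed here for a synthetic one-dimensional profile via a structure-function fit; the lag range, sampling interval, and synthetic profile are illustrative assumptions, not the Mars profiles used in the study.

```python
import numpy as np

def profile_roughness_stats(z, dx):
    """RMS slope and fractal dimension of a 1-D topographic profile.

    The fractal dimension is estimated from the slope of the structure
    function <|z(x+L) - z(x)|^2> ~ L^(2H), with D = 2 - H for a profile.
    """
    slopes = np.diff(z) / dx
    rms_slope = np.sqrt(np.mean(slopes ** 2))

    lags = np.arange(1, 64)
    sf = [np.mean((z[l:] - z[:-l]) ** 2) for l in lags]
    H = 0.5 * np.polyfit(np.log(lags * dx), np.log(sf), 1)[0]
    return rms_slope, 2.0 - H

# Illustrative profile: Brownian-like elevations built from random steps (H ~ 0.5, D ~ 1.5)
rng = np.random.default_rng(3)
z = np.cumsum(rng.normal(0.0, 0.1, 4096))
print(profile_roughness_stats(z, dx=1.0))
```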
NASA Astrophysics Data System (ADS)
Lin, Shu; Wang, Rui; Xia, Ning; Li, Yongdong; Liu, Chunliang
2018-01-01
Statistical multipactor theories are critical prediction approaches for multipactor breakdown determination. However, these approaches still require a trade-off between calculation efficiency and accuracy. This paper presents an improved stationary statistical theory for efficient threshold analysis of two-surface multipactor. A general integral equation over the distribution function of the electron emission phase, with both single-sided and double-sided impacts considered, is formulated. The modeling results indicate that the improved stationary statistical theory not only attains the same accuracy of multipactor threshold calculation as the nonstationary statistical theory, but also achieves high calculation efficiency. By using this improved stationary statistical theory, the total time consumed in calculating the full multipactor susceptibility zones of parallel plates can be decreased by as much as a factor of four relative to the nonstationary statistical theory. The results also show that the effect of single-sided impacts is indispensable for accurate multipactor prediction in coaxial lines, and is more significant for high-order multipactor. Finally, the influence of secondary emission yield (SEY) properties on the multipactor threshold is further investigated. It is observed that the first cross energy and the energy range between the first cross and the SEY maximum both play a significant role in determining the multipactor threshold, which agrees with the numerical simulation results in the literature.
Connell, J.F.; Bailey, Z.C.
1989-01-01
A total of 338 single-well aquifer tests from Bear Creek and Melton Valley, Tennessee, were statistically grouped to estimate hydraulic conductivities for the geologic formations in the valleys. A cross-sectional simulation model linked to a regression model was used to further refine the statistical estimates for each of the formations and to improve understanding of ground-water flow in Bear Creek Valley. Median hydraulic-conductivity values were used as initial values in the model. Model-calculated estimates of hydraulic conductivity were generally lower than the statistical estimates. Simulations indicate that (1) the Pumpkin Valley Shale controls groundwater flow between Pine Ridge and Bear Creek; (2) all the recharge on Chestnut Ridge discharges to the Maynardville Limestone; (3) the formations having smaller hydraulic gradients may have a greater tendency for flow along strike; (4) local hydraulic conditions in the Maynardville Limestone cause inaccurate model-calculated estimates of hydraulic conductivity; and (5) the conductivity of deep bedrock neither affects the results of the model nor adds information on the flow system. Improved model performance would require (1) more water-level data for the Copper Ridge Dolomite, (2) improved estimates of hydraulic conductivity in the Copper Ridge Dolomite and Maynardville Limestone, and (3) more water-level data and aquifer tests in deep bedrock. (USGS)
The statistical multifragmentation model: Origins and recent advances
NASA Astrophysics Data System (ADS)
Donangelo, R.; Souza, S. R.
2016-07-01
We review the Statistical Multifragmentation Model (SMM), which considers a generalization of the liquid-drop model for hot nuclei and allows one to calculate thermodynamic quantities characterizing the nuclear ensemble at the disassembly stage. We show how to determine probabilities of definite partitions of finite nuclei and how to determine, through Monte Carlo calculations, observables such as the caloric curve, multiplicity distributions, and heat capacity, among others. Some experimental measurements of the caloric curve confirmed the SMM predictions made over 10 years earlier, leading to a surge in interest in the model. However, the experimental determination of the fragmentation temperatures relies on the yields of different isotopic species, which were not correctly calculated in the schematic liquid-drop picture employed in the SMM. This led to a series of improvements in the SMM, in particular to a more careful choice of nuclear masses and energy densities, especially for the lighter nuclei. With these improvements the SMM is able to make quantitative determinations of isotope production. We show the application of the SMM to the production of exotic nuclei through multifragmentation. These preliminary calculations demonstrate the need for a careful choice of the system size and excitation energy to attain maximum yields.
The potential of composite cognitive scores for tracking progression in Huntington's disease.
Jones, Rebecca; Stout, Julie C; Labuschagne, Izelle; Say, Miranda; Justo, Damian; Coleman, Allison; Dumas, Eve M; Hart, Ellen; Owen, Gail; Durr, Alexandra; Leavitt, Blair R; Roos, Raymund; O'Regan, Alison; Langbehn, Doug; Tabrizi, Sarah J; Frost, Chris
2014-01-01
Composite scores derived from joint statistical modelling of individual risk factors are widely used to identify individuals who are at increased risk of developing disease or of faster disease progression. We investigated the ability of composite measures developed using statistical models to differentiate progressive cognitive deterioration in Huntington's disease (HD) from natural decline in healthy controls. Using longitudinal data from TRACK-HD, the optimal combinations of quantitative cognitive measures to differentiate premanifest and early stage HD individuals respectively from controls were determined using logistic regression. Composite scores were calculated from the parameters of each statistical model. Linear regression models were used to calculate effect sizes (ES) quantifying the difference in longitudinal change over 24 months between the premanifest and early stage HD groups respectively and controls. ES for the composites were compared with ES for individual cognitive outcomes and other measures used in HD research. The 0.632 bootstrap was used to eliminate biases which result from developing and testing models in the same sample. In early HD, the composite score from the HD change prediction model produced an ES for the difference in rate of 24-month change relative to controls of 1.14 (95% CI: 0.90 to 1.39), larger than the ES for any individual cognitive outcome and for UHDRS Total Motor Score and Total Functional Capacity. In addition, this composite gave a statistically significant difference in rate of change in premanifest HD compared to controls over 24 months (ES: 0.24; 95% CI: 0.04 to 0.44), even though none of the individual cognitive outcomes produced statistically significant ES over this period. Composite scores developed using appropriate statistical modelling techniques have the potential to materially reduce required sample sizes for randomised controlled trials.
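The general recipe, stripped of the TRACK-HD specifics, is sketched below: fit a logistic regression separating patients from controls on several cognitive measures, take the linear predictor as the composite score, and express the group difference as a standardized effect size. The group sizes, measures, cross-sectional (rather than longitudinal) effect size, and use of scikit-learn are simplifying assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 200
group = rng.integers(0, 2, n)                      # 0 = control, 1 = patient (hypothetical)
# Three hypothetical cognitive measures, slightly shifted in the patient group
X = rng.normal(0.0, 1.0, (n, 3)) - 0.6 * group[:, None]

# Composite score = linear predictor of the fitted logistic model
clf = LogisticRegression().fit(X, group)
composite = X @ clf.coef_.ravel() + clf.intercept_[0]

# Effect size for the group difference in the composite (Cohen's d style)
d = (composite[group == 1].mean() - composite[group == 0].mean()) / composite[group == 0].std(ddof=1)
print(f"effect size = {d:.2f}")
```

In the actual study a bias-corrected resampling scheme (the 0.632 bootstrap) is needed because the composite is developed and evaluated on the same sample.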
Characterization of protein folding by a Φ-value calculation with a statistical-mechanical model.
Wako, Hiroshi; Abe, Haruo
2016-01-01
The Φ-value analysis approach provides information about transition-state structures along the folding pathway of a protein by measuring the effects of an amino acid mutation on folding kinetics. Here we compared the theoretically calculated Φ values of 27 proteins with their experimentally observed Φ values; the theoretical values were calculated using a simple statistical-mechanical model of protein folding. The theoretically calculated Φ values reflected the corresponding experimentally observed Φ values with reasonable accuracy for many of the proteins, but not for all. The correlation between the theoretically calculated and experimentally observed Φ values strongly depends on whether the protein-folding mechanism assumed in the model holds true in real proteins. In other words, the correlation coefficient can be expected to illuminate the folding mechanisms of proteins, providing the answer to the question of which model more accurately describes protein folding: the framework model or the nucleation-condensation model. In addition, we tried to characterize protein folding with respect to various properties of each protein apart from the size and fold class, such as the free-energy profile, contact-order profile, and sensitivity to the parameters used in the Φ-value calculation. The results showed that any one of these properties alone was not enough to explain protein folding, although each one played a significant role in it. We have confirmed the importance of characterizing protein folding from various perspectives. Our findings have also highlighted that protein folding is highly variable and unique across different proteins, and this should be considered while pursuing a unified theory of protein folding.
A simple rain attenuation model for earth-space radio links operating at 10-35 GHz
NASA Technical Reports Server (NTRS)
Stutzman, W. L.; Yon, K. M.
1986-01-01
The simple attenuation model has been improved from an earlier version and now includes the effect of wave polarization. The model is for the prediction of rain attenuation statistics on earth-space communication links operating in the 10-35 GHz band. Simple calculations produce attenuation values as a function of average rain rate. These together with rain rate statistics (either measured or predicted) can be used to predict annual rain attenuation statistics. In this paper model predictions are compared to measured data from a data base of 62 experiments performed in the U.S., Europe, and Japan. Comparisons are also made to predictions from other models.
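The core arithmetic of such prediction models is a power-law specific attenuation scaled by an effective path length, as in the sketch below; the coefficients, path length, and reduction factor shown are assumed illustrative values, not those of the model described here.

```python
def rain_attenuation_db(rain_rate_mm_per_h, k, alpha, path_km, reduction=1.0):
    """Path attenuation from the usual power-law specific attenuation,
    gamma = k * R**alpha (dB/km), times an effective path length."""
    gamma = k * rain_rate_mm_per_h ** alpha
    return gamma * path_km * reduction

# Assumed illustrative coefficients (roughly representative of ~20 GHz,
# not taken from the paper) and a 5 km slant path through rain
for R in (5, 25, 50, 100):            # rain rates in mm/h
    A = rain_attenuation_db(R, k=0.075, alpha=1.10, path_km=5.0, reduction=0.8)
    print(f"{R:3d} mm/h -> {A:5.1f} dB")
```

Combining such attenuation values with measured or predicted rain rate exceedance statistics yields the annual attenuation statistics the model is designed to predict.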
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duggar, William Neil, E-mail: wduggar@umc.edu; Nguyen, Alex; Stanford, Jason
This study demonstrates the importance of, and a method for, properly modeling the treatment couch in dose calculations for patient treatment using arc therapy. The two treatment couch tops of Elekta LINACs, the Aktina AK550 and the Elekta iBEAM evo, were scanned using a Philips Brilliance Big Bore CT Simulator. Various parts of the couch tops were contoured, and their densities were measured and recorded on the Pinnacle treatment planning system (TPS) using the established computed tomography density table. These contours were saved as organ models to be placed beneath the patient during planning. Relative attenuation measurements were performed following procedures outlined by TG-176, as well as absolute dose comparisons of static 10 × 10 cm² fields delivered through the couch tops against those calculated in the TPS with the couch models. A total of 10 random arc therapy treatment plans (5 volumetric-modulated arc therapy [VMAT] and 5 stereotactic body radiation therapy [SBRT]), using 24 beams, were selected for this study. All selected plans were calculated with and without couch modeling. Each beam was evaluated using the Delta4 dosimetry system (Delta4). The Student t-test was used to determine statistical significance. Independent reviews were exploited as per the Imaging and Radiation Oncology Core head and neck credentialing phantom. The selected plans were calculated on the actual patient anatomies with and without couch modeling to determine potential clinical effects. Large relative beam attenuations were noted, depending on which part of the couch top the beams were passing through. Substantial improvements were also noted for static fields, both calculated with the TPS and delivered physically, when the couch models were included in the calculation. A statistically significant increase in agreement was noted for dose difference, distance to agreement, and γ-analysis with the Delta4 on VMAT and SBRT plans. A credentialing review showed improvement in treatment delivery after couch modeling with both thermoluminescent dosimeter doses and film analysis. Furthermore, analysis of treatment plans with and without the couch model showed a statistically significant reduction in planning target volume coverage and increase in skin dose. In conclusion, ignoring the treatment couch, a common practice when generating a patient treatment plan, can overestimate the dose delivered, especially for arc therapy. This work shows that explicitly modeling the couch during planning can meaningfully improve the agreement between calculated and measured dose distributions. As a result of this project, we have implemented the couch models clinically across all treatment plans.
Calculating solar photovoltaic potential on residential rooftops in Kailua Kona, Hawaii
NASA Astrophysics Data System (ADS)
Carl, Caroline
As carbon-based fossil fuels become increasingly scarce, renewable energy sources are coming to the forefront of policy discussions around the globe. As a result, the State of Hawaii has implemented aggressive goals to achieve energy independence by 2030. Renewable electricity generation using solar photovoltaic technologies plays an important role in these efforts. This study utilizes geographic information systems (GIS) and Light Detection and Ranging (LiDAR) data with statistical analysis to identify how much solar photovoltaic potential exists for residential rooftops in the town of Kailua Kona on Hawaii Island. This study helps to quantify the magnitude of possible solar photovoltaic (PV) potential for Solar World SW260 monocrystalline panels on residential rooftops within the study area. Three main areas were addressed in the execution of this research: (1) modeling solar radiation, (2) estimating available rooftop area, and (3) calculating PV potential from incoming solar radiation. High-resolution LiDAR data and Esri's solar modeling tools were utilized to calculate incoming solar radiation on a sample set of digitized rooftops. Photovoltaic potential for the sample set was then calculated with the equations developed by Suri et al. (2005). Sample set rooftops were analyzed using a statistical model to identify the correlation between rooftop area and lot size. Least squares multiple linear regression analysis was performed to identify the influence of slope, elevation, rooftop area, and lot size on the modeled PV potential values. The equations built from these statistical analyses of the sample set were applied to the entire study region to calculate total rooftop area and PV potential. The total study area statistical analysis estimates the photovoltaic electric energy generation potential for rooftops at approximately 190,000,000 kWh annually. This is approximately 17 percent of the total electricity the utility provided to the entire island in 2012. Based on these findings, full rooftop PV installations on the 4,460 study area homes could provide enough energy to power over 31,000 homes annually. The methods developed here suggest a means to calculate rooftop area and PV potential in a region with limited available data. The use of LiDAR point data offers a major opportunity for future research in both automating rooftop inventories and calculating incoming solar radiation and PV potential for homeowners.
NASA Technical Reports Server (NTRS)
Stutzman, Warren L.
1989-01-01
This paper reviews the effects of precipitation on earth-space communication links operating in the 10 to 35 GHz frequency range. Emphasis is on the quantitative prediction of rain attenuation and depolarization. Discussions center on the models developed at Virginia Tech. Comments on other models are included, as well as literature references to key works. Also included is the system-level modeling for dual-polarized communication systems, with techniques for calculating antenna and propagation medium effects. Simple models for the calculation of average annual attenuation and cross-polarization discrimination (XPD) are presented. Calculations of worst-month statistics are also presented.
Dose-Response Calculator for ArcGIS
Hanser, Steven E.; Aldridge, Cameron L.; Leu, Matthias; Nielsen, Scott E.
2011-01-01
The Dose-Response Calculator for ArcGIS is a tool that extends the Environmental Systems Research Institute (ESRI) ArcGIS 10 Desktop application to aid with the visualization of relationships between two raster GIS datasets. A dose-response curve is a line graph commonly used in medical research to examine the effects of different dosage rates of a drug or chemical (for example, carcinogen) on an outcome of interest (for example, cell mutations) (Russell and others, 1982). Dose-response curves have recently been used in ecological studies to examine the influence of an explanatory dose variable (for example, percentage of habitat cover, distance to disturbance) on a predicted response (for example, survival, probability of occurrence, abundance) (Aldridge and others, 2008). These dose curves have been created by calculating the predicted response value from a statistical model at different levels of the explanatory dose variable while holding values of other explanatory variables constant. Curves (plots) developed using the Dose-Response Calculator overcome the need to hold variables constant by using values extracted from the predicted response surface of a spatially explicit statistical model fit in a GIS, which include the variation of all explanatory variables, to visualize the univariate response to the dose variable. Application of the Dose-Response Calculator can be extended beyond the assessment of statistical model predictions and may be used to visualize the relationship between any two raster GIS datasets (see example in tool instructions). This tool generates tabular data for use in further exploration of dose-response relationships and a graph of the dose-response curve.
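A minimal sketch of the underlying idea, outside ArcGIS: bin one raster (the dose) and average the other (the predicted response) within each bin to obtain the curve; the raster contents, bin count, and NaN handling are illustrative assumptions.

```python
import numpy as np

def dose_response_curve(dose_raster, response_raster, n_bins=20):
    """Mean predicted response in equal-width bins of the dose raster,
    ignoring no-data cells (NaNs)."""
    dose = dose_raster.ravel()
    resp = response_raster.ravel()
    ok = ~np.isnan(dose) & ~np.isnan(resp)
    edges = np.linspace(np.nanmin(dose), np.nanmax(dose), n_bins + 1)
    idx = np.clip(np.digitize(dose[ok], edges) - 1, 0, n_bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    means = np.array([resp[ok][idx == b].mean() if np.any(idx == b) else np.nan
                      for b in range(n_bins)])
    return centers, means

# Illustrative rasters: habitat cover (dose) and a model's predicted occurrence (response)
rng = np.random.default_rng(5)
cover = rng.uniform(0.0, 100.0, (200, 200))
prob = 1.0 / (1.0 + np.exp(-(cover - 50.0) / 10.0)) + rng.normal(0.0, 0.05, cover.shape)
x, y = dose_response_curve(cover, prob)
print(np.round(y, 2))        # mean predicted response rising with habitat cover
```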
NASA Astrophysics Data System (ADS)
Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C.; Joyce, Kevin P.; Kovalenko, Andriy
2016-11-01
Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_a correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.
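The simple pK_a correction mentioned above converts a partition coefficient of the neutral species into a distribution coefficient by discounting the ionized fraction. A minimal sketch for a monoprotic acid or base is given below, with a hypothetical solute; it is not claimed to be the exact correction used in the study.

```python
import math

def log_d(log_p, pka, ph=7.4, acid=True):
    """Ionization correction converting a partition coefficient (logP, neutral
    species only) into a distribution coefficient (logD) for a monoprotic solute."""
    if acid:
        return log_p - math.log10(1.0 + 10.0 ** (ph - pka))
    return log_p - math.log10(1.0 + 10.0 ** (pka - ph))

# Hypothetical solute: logP = 2.5, acidic pKa = 4.5, evaluated at pH 7.4
print(round(log_d(2.5, 4.5, acid=True), 2))    # strongly reduced by ionization
```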
NASA Technical Reports Server (NTRS)
Bremner, Paul G.; Vazquez, Gabriel; Christiano, Daniel J.; Trout, Dawn H.
2016-01-01
Prediction of the maximum expected electromagnetic pick-up of conductors inside a realistic shielding enclosure is an important canonical problem for system-level EMC design of spacecraft, launch vehicles, aircraft, and automobiles. This paper introduces a simple statistical power balance model for prediction of the maximum expected current in a wire conductor inside an aperture enclosure. It calculates both the statistical mean and variance of the immission from the physical design parameters of the problem. Familiar probability density functions can then be used to predict the maximum expected immission for design purposes. The statistical power balance model requires minimal EMC design information and solves orders of magnitude faster than existing numerical models, making it ultimately viable for scaled-up, full system-level modeling. Both experimental test results and full wave simulation results are used to validate the foundational model.
Model Error Estimation for the CPTEC Eta Model
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; daSilva, Arlindo
1999-01-01
Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.
Dėdelė, Audrius; Miškinytė, Auksė
2015-09-01
In many countries, road traffic is one of the main sources of air pollution associated with adverse effects on human health and the environment. Nitrogen dioxide (NO2) is considered to be a measure of traffic-related air pollution, with concentrations tending to be higher near highways, along busy roads, and in city centers, and the exceedances are mainly observed at measurement stations located close to traffic. In order to assess the air quality in the city and the impact of air pollution on public health, air quality models are used. However, before a model can be used for these purposes, it is important to evaluate the accuracy of dispersion modelling as one of the most widely used methods. Monitoring and dispersion modelling are two components of an air quality monitoring system (AQMS), and a statistical comparison between them was made in this research. The evaluation of the Atmospheric Dispersion Modelling System (ADMS-Urban) was made by comparing monthly modelled NO2 concentrations with the data of continuous air quality monitoring stations in Kaunas city. The statistical measures of model performance were calculated for annual and monthly concentrations of NO2 for each monitoring station site. The spatial analysis was made using geographic information systems (GIS). The calculation of statistical parameters indicated a good ADMS-Urban model performance for the prediction of NO2. The results of this study showed that the agreement of modelled values and observations was better for traffic monitoring stations compared to the background and residential stations.
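A minimal sketch of the kind of model-performance statistics such an evaluation computes (bias, RMSE, correlation, fraction within a factor of two), with hypothetical monthly NO2 values; the exact statistics and data used in the study are not reproduced here.

```python
import numpy as np

def model_performance(observed, modelled):
    """Common air-quality model evaluation statistics (typical choices,
    not necessarily the set used in the study)."""
    o, m = np.asarray(observed, float), np.asarray(modelled, float)
    bias = np.mean(m - o)
    rmse = np.sqrt(np.mean((m - o) ** 2))
    r = np.corrcoef(o, m)[0, 1]
    fac2 = np.mean((m / o > 0.5) & (m / o < 2.0))   # fraction within a factor of two
    return {"bias": bias, "rmse": rmse, "r": r, "fac2": fac2}

# Hypothetical monthly NO2 concentrations (ug/m3) at one monitoring station
obs = np.array([38, 42, 35, 30, 27, 24, 22, 25, 29, 34, 40, 44], float)
mod = np.array([35, 40, 37, 28, 25, 26, 20, 24, 31, 30, 38, 41], float)
print(model_performance(obs, mod))
```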
Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.
Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J
2016-10-03
Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank (up to 26 mm); this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
[Micro-simulation of firms' heterogeneity on pollution intensity and regional characteristics].
Zhao, Nan; Liu, Yi; Chen, Ji-Ning
2009-11-01
Within the same industrial sector, firms are heterogeneous in their pollution intensity. Errors arise if the sector's average pollution intensity, calculated from the limited number of firms in the environmental statistics database, is used to represent the sector's regional economic-environmental status. Based on a production function that includes environmental depletion as an input, a micro-simulation model of firms' operational decision making is proposed, which describes the heterogeneity of firms' pollution intensity mechanistically. Taking the mechanical manufacturing sector of Deyang city in 2005 as the case, the model's parameters were estimated, and the simulation properly matches the actual COD emission intensities of the firms in the environmental statistics database. The model's results also show that the regional average COD emission intensity calculated from the environmental statistics firms (0.0026 t per 10,000 yuan of fixed assets, 0.0015 t per 10,000 yuan of production value) is lower than the regional average intensity calculated from all firms in the region (0.0030 t per 10,000 yuan of fixed assets, 0.0023 t per 10,000 yuan of production value). The differences among the average intensities of the six counties are significant as well. These regional characteristics of pollution intensity are attributable to the sector's inner structure (the distributions of firm scale and technology) and its spatial variation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bochicchio, Davide; Panizon, Emanuele; Ferrando, Riccardo
2015-10-14
We compare the performance of two well-established computational algorithms for the calculation of free-energy landscapes of biomolecular systems, umbrella sampling and metadynamics. We look at benchmark systems composed of polyethylene and polypropylene oligomers interacting with lipid (phosphatidylcholine) membranes, aiming at the calculation of the oligomer water-membrane free energy of transfer. We model our test systems at two different levels of description, united-atom and coarse-grained. We provide optimized parameters for the two methods at both resolutions. We devote special attention to the analysis of statistical errors in the two different methods and propose a general procedure for the error estimation in metadynamics simulations. Metadynamics and umbrella sampling yield the same estimates for the water-membrane free energy profile, but metadynamics can be more efficient, providing lower statistical uncertainties within the same simulation time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tratnyek, Paul G.; Bylaska, Eric J.; Weber, Eric J.
2017-01-01
Quantitative structure–activity relationships (QSARs) have long been used in the environmental sciences. More recently, molecular modeling and chemoinformatic methods have become widespread. These methods have the potential to expand and accelerate advances in environmental chemistry because they complement observational and experimental data with "in silico" results and analysis. The opportunities and challenges that arise at the intersection between statistical and theoretical in silico methods are most apparent in the context of properties that determine the environmental fate and effects of chemical contaminants (degradation rate constants, partition coefficients, toxicities, etc.). The main example of this is the calibration of QSARs using descriptor variable data calculated from molecular modeling, which can make QSARs more useful for predicting property data that are unavailable, but also can make them more powerful tools for diagnosis of fate determining pathways and mechanisms. Emerging opportunities for "in silico environmental chemical science" are to move beyond the calculation of specific chemical properties using statistical models and toward more fully in silico models, prediction of transformation pathways and products, incorporation of environmental factors into model predictions, integration of databases and predictive models into more comprehensive and efficient tools for exposure assessment, and extending the applicability of all the above from chemicals to biologicals and materials.
USDA-ARS?s Scientific Manuscript database
Watershed models are calibrated to simulate stream discharge as accurately as possible. Modelers will often calculate model validation statistics on aggregate (often monthly) time periods, rather than the daily step at which models typically operate. This is because daily hydrologic data exhibit lar...
Nomogram for sample size calculation on a straightforward basis for the kappa statistic.
Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo
2014-09-01
Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation that the level of agreement under a certain marginal prevalence is considered in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, together with nomograms to eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
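The kappa paradox that motivates the proposal is easy to demonstrate: for binary ratings with equal marginal prevalence for both raters, the same proportion of agreement maps to very different kappa values as prevalence becomes extreme. The sketch below shows this relation; it is not the sample-size formula itself.

```python
def kappa_from_agreement(p0, prevalence):
    """Cohen's kappa for two raters with equal marginal prevalence and an
    overall proportion of agreement p0 (binary ratings)."""
    pe = prevalence ** 2 + (1.0 - prevalence) ** 2   # chance agreement
    return (p0 - pe) / (1.0 - pe)

# The kappa "paradox": the same 90% agreement gives very different kappas
for prev in (0.5, 0.9, 0.95):
    print(f"prevalence={prev:.2f}  agreement=0.90  kappa={kappa_from_agreement(0.90, prev):.2f}")
```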
Light clusters and pasta phases in warm and dense nuclear matter
NASA Astrophysics Data System (ADS)
Avancini, Sidney S.; Ferreira, Márcio; Pais, Helena; Providência, Constança; Röpke, Gerd
2017-04-01
The pasta phases are calculated for warm stellar matter in a framework of relativistic mean-field models, including the possibility of light cluster formation. Results from three different semiclassical approaches are compared with a quantum statistical calculation. Light clusters are considered as point-like particles, and their abundances are determined from the minimization of the free energy. The couplings of the light clusters to mesons are determined from experimental chemical equilibrium constants and many-body quantum statistical calculations. The effect of these light clusters on the chemical potentials is also discussed. It is shown that, by including heavy clusters, light clusters are present up to larger nucleonic densities, although with smaller mass fractions.
1981-06-01
[Excerpt fragments only: the use of a TI-59 programmable calculator to aid training; the Texas Instruments TI-59 Programmable Calculator has only ten lettered registers that would be simple for clerical personnel to use; SASSY Management Units; Appendix C is a set of user instructions written for the Texas Instruments TI-59 Programmable Calculator.]
Using Bayes' theorem for free energy calculations
NASA Astrophysics Data System (ADS)
Rogers, David M.
Statistical mechanics is fundamentally based on calculating the probabilities of molecular-scale events. Although Bayes' theorem has generally been recognized as providing key guiding principles for the setup and analysis of statistical experiments [83], classical frequentist models still predominate in the world of computational experimentation. As a starting point for widespread application of Bayesian methods in statistical mechanics, we investigate the central quantity of free energies from this perspective. This dissertation thus reviews the basics of Bayes' view of probability theory and the maximum entropy formulation of statistical mechanics before providing examples of its application to several advanced research areas. We first apply Bayes' theorem to a multinomial counting problem in order to determine inner shell and hard sphere solvation free energy components of Quasi-Chemical Theory [140]. We proceed to consider the general problem of free energy calculations from samples of interaction energy distributions. From there, we turn to spline-based estimation of the potential of mean force [142], and empirical modeling of observed dynamics using integrator matching. The results of this research are expected to advance the state of the art in coarse-graining methods, as they allow a systematic connection from high-resolution (atomic) to low-resolution (coarse) structure and dynamics. In total, our work on these problems constitutes a critical starting point for further application of Bayes' theorem in all areas of statistical mechanics. It is hoped that the understanding so gained will allow for improvements in comparisons between theory and experiment.
Ionospheric scintillation studies
NASA Technical Reports Server (NTRS)
Rino, C. L.; Freemouw, E. J.
1973-01-01
The diffracted field of a monochromatic plane wave was characterized by two complex correlation functions. For a Gaussian complex field, these quantities suffice to completely define the statistics of the field. Thus, one can in principle calculate the statistics of any measurable quantity in terms of the model parameters. The best data fits were achieved for intensity statistics derived under the Gaussian statistics hypothesis. The signal structure that achieved the best fit was nearly invariant with scintillation level and irregularity source (ionosphere or solar wind). It was characterized by the fact that more than 80% of the scattered signal power is in phase quadrature with the undeviated or coherent signal component. Thus, the Gaussian-statistics hypothesis is both convenient and accurate for channel modeling work.
Application of linear regression analysis in accuracy assessment of rolling force calculations
NASA Astrophysics Data System (ADS)
Poliak, E. I.; Shim, M. K.; Kim, G. S.; Choo, W. Y.
1998-10-01
Efficient operation of the computational models employed in process control systems requires periodic assessment of the accuracy of their predictions. Linear regression is proposed as a tool which allows one to separate systematic and random prediction errors from those related to measurements. A quantitative characteristic of the model's predictive ability is introduced in addition to standard statistical tests for model adequacy. Rolling force calculations are considered as an example application. However, the outlined approach can be used to assess the performance of any computational model.
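As a hedged illustration of the approach described in this abstract (the paired force values below are invented, not taken from the paper), a regression of measured on calculated rolling force separates systematic from random prediction error:

```python
import numpy as np
from scipy import stats

# Hypothetical paired data: model-calculated vs. measured rolling force (kN).
calculated = np.array([1020., 1135., 980., 1210., 1090., 1300., 1155., 1045.])
measured   = np.array([1005., 1150., 1002., 1188., 1110., 1275., 1170., 1060.])

# Regress measured force on calculated force. A slope near 1 and an intercept
# near 0 indicate no systematic prediction error; the residual scatter
# reflects random prediction and measurement errors.
res = stats.linregress(calculated, measured)
print(f"slope     = {res.slope:.3f}  (gain-type systematic error if far from 1)")
print(f"intercept = {res.intercept:.1f} kN (offset-type systematic error if far from 0)")
print(f"r^2       = {res.rvalue**2:.3f} (simple index of predictive ability)")
```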
Computational Analysis for Rocket-Based Combined-Cycle Systems During Rocket-Only Operation
NASA Technical Reports Server (NTRS)
Steffen, C. J., Jr.; Smith, T. D.; Yungster, S.; Keller, D. J.
2000-01-01
A series of Reynolds-averaged Navier-Stokes calculations was employed to study the performance of rocket-based combined-cycle systems operating in an all-rocket mode. This parametric series of calculations was executed within a statistical framework, commonly known as design of experiments. The parametric design space included four geometric and two flowfield variables set at three levels each, for a total of 729 possible combinations. A D-optimal design strategy was selected. It required that only 36 separate computational fluid dynamics (CFD) solutions be performed to develop a full response surface model, which quantified the linear, bilinear, and curvilinear effects of the six experimental variables. The axisymmetric, Reynolds-averaged Navier-Stokes simulations were executed with the NPARC v3.0 code. The response used in the statistical analysis was created from Isp efficiency data integrated from the 36 CFD simulations. The influence of turbulence modeling was analyzed by using both one- and two-equation models. Careful attention was also given to quantifying the influence of mesh dependence, iterative convergence, and artificial viscosity upon the resulting statistical model. Thirteen statistically significant effects were observed to have an influence on rocket-based combined-cycle nozzle performance. It was apparent that the free-expansion process, directly downstream of the rocket nozzle, can influence the Isp efficiency. Numerical schlieren images and particle traces have been used to further understand the physical phenomena behind several of the statistically significant results.
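For readers unfamiliar with the design-of-experiments vocabulary used here, the sketch below fits a full quadratic response-surface model (linear, bilinear, and curvilinear terms) by least squares; the two coded factors and the noisy response are invented stand-ins, not the study's CFD data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two coded factors at three levels (-1, 0, +1), standing in for the six
# geometric/flowfield variables of the study.
levels = np.array([-1.0, 0.0, 1.0])
x1, x2 = np.meshgrid(levels, levels)
x1, x2 = x1.ravel(), x2.ravel()

# Hypothetical Isp-efficiency response with noise (illustration only).
y = (0.92 + 0.015 * x1 - 0.010 * x2 + 0.008 * x1 * x2 - 0.006 * x1**2
     + rng.normal(scale=0.002, size=x1.size))

# Full quadratic response-surface model: intercept, linear, bilinear,
# and curvilinear effects, fitted by ordinary least squares.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, c in zip(["const", "x1", "x2", "x1*x2", "x1^2", "x2^2"], coef):
    print(f"{name:6s} {c:+.4f}")
```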
Local sensitivity analysis for inverse problems solved by singular value decomposition
Hill, M.C.; Nolan, B.T.
2010-01-01
Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA’s Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) use of CSS/PCC can be more awkward because sensitivity and interdependence are considered separately and (2) identifiability requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-Chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.
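A compact sketch of the two process-model parameter statistics named above, computed from a Jacobian of sensitivities; the Jacobian, parameter values, and weights are invented for illustration, and the expressions follow the usual composite-scaled-sensitivity and parameter-correlation formulations rather than the paper's specific model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sensitivity (Jacobian) matrix: 20 observations x 3 parameters,
# parameter values b, and observation weights w (all invented).
J = rng.normal(size=(20, 3))
J[:, 2] = 0.95 * J[:, 1] + 0.05 * rng.normal(size=20)  # induce interdependence
b = np.array([2.0, 0.5, 1.5])
w = np.ones(20)

# Dimensionless scaled sensitivities and composite scaled sensitivities (CSS).
dss = J * np.abs(b) * np.sqrt(w)[:, None]
css = np.sqrt((dss**2).mean(axis=0))

# Parameter correlation coefficients (PCC) from the weighted normal equations.
cov = np.linalg.inv(J.T @ (w[:, None] * J))
sd = np.sqrt(np.diag(cov))
pcc = cov / np.outer(sd, sd)

print("CSS:", np.round(css, 3))
print("PCC:\n", np.round(pcc, 3))  # |PCC| near 1 flags interdependent pairs
```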
Evaporation residue cross-section measurements for 48Ti-induced reactions
NASA Astrophysics Data System (ADS)
Sharma, Priya; Behera, B. R.; Mahajan, Ruchi; Thakur, Meenu; Kaur, Gurpreet; Kapoor, Kushal; Rani, Kavita; Madhavan, N.; Nath, S.; Gehlot, J.; Dubey, R.; Mazumdar, I.; Patel, S. M.; Dhibar, M.; Hosamani, M. M.; Khushboo, Kumar, Neeraj; Shamlath, A.; Mohanto, G.; Pal, Santanu
2017-09-01
Background: A significant research effort is currently aimed at understanding the synthesis of heavy elements. For this purpose, heavy ion induced fusion reactions are used and various experimental observations have indicated the influence of shell and deformation effects in the compound nucleus (CN) formation. There is a need to understand these two effects. Purpose: To investigate the effect of proton shell closure and deformation through the comparison of evaporation residue (ER) cross sections for the systems involving heavy compound nuclei around the Z_CN = 82 region. Methods: A systematic study of ER cross-section measurements was carried out for the 48Ti+142,150Nd, 144Sm systems in the energy range of 140-205 MeV. The measurements were performed using the gas-filled mode of the hybrid recoil mass analyzer at the Inter University Accelerator Centre (IUAC), New Delhi. Theoretical calculations based on a statistical model were carried out incorporating an adjustable barrier scaling factor to fit the experimental ER cross section. Coupled-channel calculations were also performed using the ccfull code to obtain the spin distribution of the CN, which was used as an input in the calculations. Results: Experimental ER cross sections for 48Ti+142,150Nd were found to be considerably smaller than the statistical model predictions, whereas experimental and statistical model predictions for 48Ti+144Sm were of comparable magnitudes. Conclusion: Though comparison of experimental ER cross sections with statistical model predictions indicates considerable non-compound-nuclear processes for the 48Ti+142,150Nd reactions, no such evidence is found for the 48Ti+144Sm system. Further investigations are required to understand the difference in fusion probabilities of the 48Ti+142Nd and 48Ti+144Sm systems.
REVIEW OF THE ATTRIBUTES AND PERFORMANCE OF SIX URBAN DIFFUSION MODELS
The American Meteorological Society conducted a scientific review of a set of six urban diffusion models. TRC Environmental Consultants, Inc. calculated and tabulated a uniform set of statistics for all the models. The report consists of a summary and copies of the three independ...
USDA-ARS?s Scientific Manuscript database
The importance of measurement uncertainty in terms of calculation of model evaluation error statistics has been recently stated in the literature. The impact of measurement uncertainty on calibration results indicates the potential vague zone in the field of watershed modeling where the assumption ...
Path statistics, memory, and coarse-graining of continuous-time random walks on networks
Kion-Crosby, Willow; Morozov, Alexandre V.
2015-01-01
Continuous-time random walks (CTRWs) on discrete state spaces, ranging from regular lattices to complex networks, are ubiquitous across physics, chemistry, and biology. Models with coarse-grained states (for example, those employed in studies of molecular kinetics) or spatial disorder can give rise to memory and non-exponential distributions of waiting times and first-passage statistics. However, existing methods for analyzing CTRWs on complex energy landscapes do not address these effects. Here we use statistical mechanics of the nonequilibrium path ensemble to characterize first-passage CTRWs on networks with arbitrary connectivity, energy landscape, and waiting time distributions. Our approach can be applied to calculating higher moments (beyond the mean) of path length, time, and action, as well as statistics of any conservative or non-conservative force along a path. For homogeneous networks, we derive exact relations between length and time moments, quantifying the validity of approximating a continuous-time process with its discrete-time projection. For more general models, we obtain recursion relations, reminiscent of transfer matrix and exact enumeration techniques, to efficiently calculate path statistics numerically. We have implemented our algorithm in PathMAN (Path Matrix Algorithm for Networks), a Python script that users can apply to their model of choice. We demonstrate the algorithm on a few representative examples which underscore the importance of non-exponential distributions, memory, and coarse-graining in CTRWs. PMID:26646868
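PathMAN itself is not reproduced here; as a hedged, minimal illustration of first-passage statistics for a CTRW, the sketch below computes mean first-passage times on a small absorbing chain from the jump probabilities and the mean waiting time of each state (the network and numbers are invented):

```python
import numpy as np

# Four-state chain 0-1-2-3 with absorption at state 3.
# P[i, j] = jump probability i -> j; tau[i] = mean waiting time in state i
# (any waiting-time distribution with these means gives the same MFPT).
P = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
tau = np.array([1.0, 0.5, 2.0, 0.0])

transient = [0, 1, 2]                        # non-absorbing states
Q = P[np.ix_(transient, transient)]          # sub-stochastic jump matrix
# MFPT_i = tau_i + sum_j Q_ij * MFPT_j  =>  (I - Q) m = tau
m = np.linalg.solve(np.eye(len(transient)) - Q, tau[transient])
print("Mean first-passage times to state 3:", np.round(m, 3))
```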
Gibson, Eli; Fenster, Aaron; Ward, Aaron D
2013-10-01
Novel imaging modalities are pushing the boundaries of what is possible in medical imaging, but their signal properties are not always well understood. The evaluation of these novel imaging modalities is critical to achieving their research and clinical potential. Image registration of novel modalities to accepted reference standard modalities is an important part of characterizing the modalities and elucidating the effect of underlying focal disease on the imaging signal. The strengths of the conclusions drawn from these analyses are limited by statistical power. Based on the observation that in this context, statistical power depends in part on uncertainty arising from registration error, we derive a power calculation formula relating registration error, number of subjects, and the minimum detectable difference between normal and pathologic regions on imaging, for an imaging validation study design that accommodates signal correlations within image regions. Monte Carlo simulations were used to evaluate the derived models and test the strength of their assumptions, showing that the model yielded predictions of the power, the number of subjects, and the minimum detectable difference of simulated experiments accurate to within a maximum error of 1% when the assumptions of the derivation were met, and characterizing sensitivities of the model to violations of the assumptions. The use of these formulae is illustrated through a calculation of the number of subjects required for a case study, modeled closely after a prostate cancer imaging validation study currently taking place at our institution. The power calculation formulae address three central questions in the design of imaging validation studies: (1) What is the maximum acceptable registration error? (2) How many subjects are needed? (3) What is the minimum detectable difference between normal and pathologic image regions? Copyright © 2013 Elsevier B.V. All rights reserved.
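The paper's formula (which folds in registration error and within-region signal correlation) is not reproduced here; the sketch below is the generic textbook two-group sample-size calculation, shown only to illustrate how the minimum detectable difference, the effective standard deviation, and the number of subjects trade off:

```python
from math import ceil
from scipy.stats import norm

def subjects_needed(min_detectable_diff, sigma, alpha=0.05, power=0.80):
    """Generic two-sided, two-group formula:
    n per group = 2 * ((z_{1-alpha/2} + z_power) * sigma / delta)^2.
    Not the registration-error-adjusted formula derived in the paper."""
    z_a = norm.ppf(1.0 - alpha / 2.0)
    z_b = norm.ppf(power)
    return ceil(2.0 * ((z_a + z_b) * sigma / min_detectable_diff) ** 2)

# Hypothetical numbers: detect a 0.10 difference between normal and pathologic
# regions when the effective standard deviation (inflated by registration
# error) is 0.25.
print(subjects_needed(min_detectable_diff=0.10, sigma=0.25))
```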
A model for multiple-drop-impact erosion of brittle solids
NASA Technical Reports Server (NTRS)
Engel, O. G.
1971-01-01
A statistical model for the multiple-drop-impact erosion of brittle solids was developed. An equation for calculating the rate of erosion is given. The development is not complete since two quantities that are needed to calculate the rate of erosion with use of the equation must be assessed from experimental data. A partial test of the equation shows that it gives results that are in good agreement with experimental observation.
Vriesendorp, Pieter A; Schinkel, Arend F L; Liebregts, Max; Theuns, Dominic A M J; van Cleemput, Johan; Ten Cate, Folkert J; Willems, Rik; Michels, Michelle
2015-08-01
The recently released 2014 European Society of Cardiology guidelines of hypertrophic cardiomyopathy (HCM) use a new clinical risk prediction model for sudden cardiac death (SCD), based on the HCM Risk-SCD study. Our study is the first external and independent validation of this new risk prediction model. The study population consisted of a consecutive cohort of 706 patients with HCM without prior SCD event, from 2 tertiary referral centers. The primary end point was a composite of SCD and appropriate implantable cardioverter-defibrillator therapy, identical to the HCM Risk-SCD end point. The 5-year SCD risk was calculated using the HCM Risk-SCD formula. Receiver operating characteristic curves and C-statistics were calculated for the 2014 European Society of Cardiology guidelines, and risk stratification methods of the 2003 American College of Cardiology/European Society of Cardiology guidelines and 2011 American College of Cardiology Foundation/American Heart Association guidelines. During follow-up of 7.7±5.3 years, SCD occurred in 42 (5.9%) of 706 patients (ages 49±16 years; 34% women). The C-statistic of the new model was 0.69 (95% CI, 0.57-0.82; P=0.008), which performed significantly better than the conventional risk factor models based on the 2003 guidelines (C-statistic of 0.55: 95% CI, 0.47-0.63; P=0.3), and 2011 guidelines (C-statistic of 0.60: 95% CI, 0.50-0.70; P=0.07). The HCM Risk-SCD model improves the risk stratification of patients with HCM for primary prevention of SCD, and calculating an individual risk estimate contributes to the clinical decision-making process. Improved risk stratification is important for the decision making before implantable cardioverter-defibrillator implantation for the primary prevention of SCD. © 2015 American Heart Association, Inc.
Statistical power calculations for mixed pharmacokinetic study designs using a population approach.
Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel
2014-09-01
Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
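The NONMEM-based Monte Carlo Mapped Power workflow is not reproduced here; as a hedged, generic illustration of simulation-based power for a covariate effect via a likelihood ratio test, the sketch below uses a simple normal model with a binary covariate (all numbers invented):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(42)

def lrt_power(n_subjects, effect=0.3, sigma=1.0, n_sim=500, alpha=0.05):
    """Fraction of simulated studies in which a likelihood-ratio test detects
    a binary covariate effect on a normally distributed response."""
    crit = chi2.ppf(1.0 - alpha, df=1)
    rejections = 0
    for _ in range(n_sim):
        group = rng.integers(0, 2, size=n_subjects)          # 0/1 covariate
        y = 1.0 + effect * group + rng.normal(0.0, sigma, n_subjects)
        mask = group == 1
        m1 = y[mask].mean() if mask.any() else y.mean()
        m0 = y[~mask].mean() if (~mask).any() else y.mean()
        rss_full = np.sum((y - np.where(mask, m1, m0)) ** 2)
        rss_reduced = np.sum((y - y.mean()) ** 2)
        # For Gaussian errors the LRT statistic is n * log(RSS_reduced/RSS_full).
        rejections += n_subjects * np.log(rss_reduced / rss_full) > crit
    return rejections / n_sim

for n in (20, 50, 100, 200):
    print(n, round(lrt_power(n), 2))
```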
Spatial Accessibility and Availability Measures and Statistical Properties in the Food Environment
Van Meter, E.; Lawson, A.B.; Colabianchi, N.; Nichols, M.; Hibbert, J.; Porter, D.; Liese, A.D.
2010-01-01
Spatial accessibility is of increasing interest in the health sciences. This paper addresses the statistical use of spatial accessibility and availability indices. These measures are evaluated via an extensive simulation based on cluster models for local food outlet density. We derived Monte Carlo critical values for several statistical tests based on the indices. In particular we are interested in the ability to make inferential comparisons between different study areas where indices of accessibility and availability are to be calculated. We derive tests of mean difference as well as tests for differences in Moran's I for spatial correlation for each of the accessibility and availability indices. We also apply these new statistical tests to a data example based on two counties in South Carolina for various accessibility and availability measures calculated for food outlets, stores, and restaurants. PMID:21499528
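As a hedged illustration of one statistic mentioned above, the sketch below computes Moran's I for an invented grid of accessibility index values and derives a Monte Carlo critical value by permutation (the grid, weights, and values are placeholders, not the South Carolina data):

```python
import numpy as np

rng = np.random.default_rng(0)

def morans_i(values, W):
    """Moran's I for values observed on sites with spatial weight matrix W."""
    z = values - values.mean()
    return values.size * np.sum(W * np.outer(z, z)) / (W.sum() * np.sum(z**2))

# Hypothetical 5x5 grid of accessibility index values with rook contiguity.
side = 5
vals = rng.normal(size=side * side)
W = np.zeros((side * side, side * side))
for i in range(side):
    for j in range(side):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < side and 0 <= jj < side:
                W[i * side + j, ii * side + jj] = 1.0

observed = morans_i(vals, W)
# Monte Carlo critical value under the null of spatial randomness.
null = np.array([morans_i(rng.permutation(vals), W) for _ in range(999)])
print("Moran's I:", round(observed, 3),
      "| 95th percentile of null:", round(np.percentile(null, 95), 3))
```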
Chaikh, Abdulhamid; Balosso, Jacques
2016-12-01
To apply statistical bootstrap analysis and dosimetric criteria to assess the change of prescribed dose (PD) for lung cancer needed to maintain the same clinical results when using new generations of dose calculation algorithms. Nine lung cancer cases were studied. For each patient, three treatment plans were generated using exactly the same beam arrangements. In plan 1, the dose was calculated using the pencil beam convolution (PBC) algorithm with heterogeneity correction turned on using the modified Batho method (PBC-MB). In plan 2, the dose was calculated using the anisotropic analytical algorithm (AAA) and the same PD as plan 1. In plan 3, the dose was calculated using AAA with monitor units (MUs) obtained from PBC-MB as input. The dosimetric criteria include MUs, delivered dose at the isocentre (Diso), and calculated dose to 95% of the target volume (D95). The bootstrap method was used to assess the significance of the dose differences and to accurately estimate the 95% confidence interval (95% CI). Wilcoxon and Spearman's rank tests were used to calculate P values and the correlation coefficient (ρ). A statistically significant dose difference was found using the point kernel model. A good correlation was observed between both algorithm types, with ρ>0.9. Using AAA instead of PBC-MB, an adjustment of the PD at the isocentre is suggested. For a given set of patients, we assessed the need to readjust the PD for lung cancer using dosimetric indices and the bootstrap statistical method. Thus, if the goal is to keep the same clinical results, the PD for lung tumors has to be adjusted with AAA. According to our simulation, we suggest readjusting the PD by 5% and optimizing the beam arrangements to better protect the organs at risk (OARs).
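As a minimal sketch of the bootstrap step described above (the per-patient dose differences below are invented, not the study's data), a nonparametric bootstrap of the mean difference and its 95% CI looks like this:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-patient differences in calculated isocentre dose (Gy)
# between the two algorithms (AAA minus PBC-MB), one value per patient.
diff = np.array([-2.1, -3.4, -1.8, -2.9, -4.0, -2.5, -3.1, -1.6, -2.7])

# Nonparametric bootstrap of the mean difference.
boot = np.array([rng.choice(diff, size=diff.size, replace=True).mean()
                 for _ in range(10_000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean difference = {diff.mean():.2f} Gy, 95% CI [{lo:.2f}, {hi:.2f}]")
```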
The USGS's SPARROW Model is a statistical model with mechanistic features that has been used to calculate annual nutrient fluxes in nontidal streams nationally on the basis of nitrogen sources, landscape characteristics, and stream properties. This model has been useful for asses...
Popov, I; Valašková, J; Štefaničková, J; Krásnik, V
2017-01-01
A substantial part of the population suffers from some kind of refractive error. It is envisaged that their prevalence may change with the development of society. The aim of this study is to determine the prevalence of refractive errors using calculations based on the Gullstrand schematic eye model. We used the Gullstrand schematic eye model to calculate refraction retrospectively. Refraction was presented as the need for glasses correction at a vertex distance of 12 mm. The necessary data was obtained using the optical biometer Lenstar LS900. Data which could not be obtained due to the limitations of the device was substituted by theoretical data from the Gullstrand schematic eye model. Only analyses from the right eyes were presented. The data was interpreted using descriptive statistics, Pearson correlation and t-test. The statistical tests were conducted at a level of significance of 5%. Our sample included 1663 patients (665 male, 998 female) within the age range of 19 to 96 years. Average age was 70.8 ± 9.53 years. Average refraction of the eye was 2.73 ± 2.13D (males 2.49 ± 2.34, females 2.90 ± 2.76). The mean absolute error from emmetropia was 3.01 ± 1.58 (males 2.83 ± 2.95, females 3.25 ± 3.35). 89.06% of the sample was hyperopic, 6.61% was myopic and 4.33% emmetropic. We did not find any correlation between refraction and age. Females were more hyperopic than males. We did not find any statistically significant hypermetropic shift of refraction with age. According to our estimation, the calculations of refractive errors using the Gullstrand schematic eye model showed a significant hypermetropic shift of more than +2D. Our results could be used in the future for comparing the prevalence of refractive errors using the same methods we used. Key words: refractive errors, refraction, Gullstrand schematic eye model, population, emmetropia.
Shah, Neomi; Hanna, David B; Teng, Yanping; Sotres-Alvarez, Daniela; Hall, Martica; Loredo, Jose S; Zee, Phyllis; Kim, Mimi; Yaggi, H Klar; Redline, Susan; Kaplan, Robert C
2016-06-01
We developed and validated the first-ever sleep apnea (SA) risk calculator in a large population-based cohort of Hispanic/Latino subjects. Cross-sectional data on adults from the Hispanic Community Health Study/Study of Latinos (2008-2011) were analyzed. Subjective and objective sleep measurements were obtained. Clinically significant SA was defined as an apnea-hypopnea index ≥ 15 events per hour. Using logistic regression, four prediction models were created: three sex-specific models (female-only, male-only, and a sex × covariate interaction model to allow differential predictor effects), and one overall model with sex included as a main effect only. Models underwent 10-fold cross-validation and were assessed by using the C statistic. SA and its predictive variables; a total of 17 variables were considered. A total of 12,158 participants had complete sleep data available; 7,363 (61%) were women. The population-weighted prevalence of SA (apnea-hypopnea index ≥ 15 events per hour) was 6.1% in female subjects and 13.5% in male subjects. Male-only (C statistic, 0.808) and female-only (C statistic, 0.836) prediction models had the same predictor variables (ie, age, BMI, self-reported snoring). The sex-interaction model (C statistic, 0.836) contained sex, age, age × sex, BMI, BMI × sex, and self-reported snoring. The final overall model (C statistic, 0.832) contained age, BMI, snoring, and sex. We developed two websites for our SA risk calculator: one in English (https://www.montefiore.org/sleepapneariskcalc.html) and another in Spanish (http://www.montefiore.org/sleepapneariskcalc-es.html). We created an internally validated, highly discriminating, well-calibrated, and parsimonious prediction model for SA. Contrary to the study hypothesis, the variables did not have different predictive magnitudes in male and female subjects. Copyright © 2016 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
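The published risk calculator is available at the URLs above; the sketch below is only a generic illustration, on invented data, of fitting a logistic prediction model with the same kind of predictors and assessing it with the C statistic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Hypothetical cohort: age (years), BMI (kg/m^2), self-reported snoring (0/1),
# sex (0 = female, 1 = male); outcome = clinically significant sleep apnea.
n = 2000
age = rng.normal(45, 12, n)
bmi = rng.normal(29, 5, n)
snore = rng.integers(0, 2, n)
sex = rng.integers(0, 2, n)
logit = -9.0 + 0.04 * age + 0.15 * bmi + 0.8 * snore + 0.7 * sex
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([age, bmi, snore, sex])
model = LogisticRegression(max_iter=1000).fit(X, y)

# C statistic (area under the ROC curve) of the fitted risk model.
print(f"C statistic = {roc_auc_score(y, model.predict_proba(X)[:, 1]):.3f}")
```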
Simulation of laser beam reflection at the sea surface
NASA Astrophysics Data System (ADS)
Schwenger, Frédéric; Repasi, Endre
2011-05-01
A 3D simulation of the reflection of a Gaussian-shaped laser beam on the dynamic sea surface is presented. The simulation is suitable both for the calculation of images of a SWIR (short-wave infrared) imaging sensor and for the determination of the total detected power of reflected laser light for a bistatic configuration of laser source and receiver under different atmospheric conditions. Our computer simulation comprises the 3D simulation of a maritime scene (open sea/clear sky) and the simulation of laser light reflected at the sea surface. The basic sea surface geometry is modeled by a composition of smooth wind-driven gravity waves. A propagation model for water waves is applied for sea surface animation. To predict the view of a camera in the SWIR spectral band, the sea surface radiance must be calculated. This is done by considering the emitted sea surface radiance and the reflected sky radiance, calculated by MODTRAN. Additionally, the radiances of laser light specularly reflected at the wind-roughened sea surface are modeled in the SWIR band considering an analytical statistical sea surface BRDF (bidirectional reflectance distribution function). This BRDF model considers the slope statistics of waves and accounts for slope-shadowing of waves, which especially occurs at flat incident angles of the laser beam and near-horizontal detection angles of reflected irradiance at rough seas. Simulation results are presented showing the variation of the detected laser power depending on the geometric configuration of laser and sensor and on wind characteristics.
ERIC Educational Resources Information Center
Jackson, Paul R.
1972-01-01
The probabilities of certain English football teams winning different playoffs are determined. In each case, a mathematical model is fitted to the observed data, assumptions are verified, and the calculations performed. (LS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-09-14
This package contains statistical routines for extracting features from multivariate time-series data which can then be used for subsequent multivariate statistical analysis to identify patterns and anomalous behavior. It calculates local linear or quadratic regression model fits to moving windows for each series and then summarizes the model coefficients across user-defined time intervals for each series. These methods are domain-agnostic, but they have been successfully applied to a variety of domains, including commercial aviation and electric power grid data.
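The package's actual interface is not shown in this record; as a hedged sketch of the idea it describes, the code below fits local quadratic models over moving windows of one series and summarizes the coefficients over fixed time intervals (window length, block size, and the series itself are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical univariate series (one channel of the multivariate data).
t = np.arange(600)
series = np.sin(2 * np.pi * t / 150) + 0.1 * rng.normal(size=t.size)

window, degree = 30, 2    # moving-window width and local model order
coeffs = np.array([np.polyfit(np.arange(window), series[s:s + window], degree)
                   for s in range(t.size - window + 1)])   # (n_windows, 3)

# Summarize fitted coefficients over user-defined intervals (blocks of 100
# window positions) to form features for later multivariate analysis.
block = 100
n_blocks = coeffs.shape[0] // block
features = coeffs[:n_blocks * block].reshape(n_blocks, block, 3).mean(axis=1)
print(features.round(4))
```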
The statistical average of optical properties for alumina particle cluster in aircraft plume
NASA Astrophysics Data System (ADS)
Li, Jingying; Bai, Lu; Wu, Zhensen; Guo, Lixin
2018-04-01
We establish a model for the lognormal distribution of the monomer radius and number of alumina particle clusters in a plume. According to the Multi-Sphere T-Matrix (MSTM) theory, we provide a method for finding the statistical average of the optical properties of alumina particle clusters in a plume, analyze the effect of different distributions and different detection wavelengths on the statistical average of the optical properties of an alumina particle cluster, and compare the statistical average optical properties under the alumina particle cluster model established in this study with those under three simplified alumina particle models. The calculation results show that the monomer number of an alumina particle cluster and its size distribution have a considerable effect on its statistical average optical properties. The statistical averages of the optical properties of alumina particle clusters at common detection wavelengths exhibit clear differences, which strongly affect the modeling of the IR and UV radiation properties of the plume. Compared with the three simplified models, the alumina particle cluster model presented herein features both higher extinction and scattering efficiencies. Therefore, an accurate description of the scattering properties of alumina particles in an aircraft plume is of great significance for the study of plume radiation properties.
Cross-Section Measurements via the Activation Technique at the Cologne Clover Counting Setup
NASA Astrophysics Data System (ADS)
Heim, Felix; Mayer, Jan; Netterdon, Lars; Scholz, Philipp; Zilges, Andreas
The activation technique is a widely used method for the determination of cross-section values for charged-particle induced reactions at astrophysically relevant energies. Since network calculations of nucleosynthesis processes often depend on reaction rates calculated within the scope of the Hauser-Feshbach statistical model, these cross sections can be used to improve nuclear-physics input parameters such as optical-model potentials (OMP), γ-ray strength functions, and nuclear level densities. In order to extend the available experimental database, the 108Cd(α, n)111Sn reaction cross section was investigated at ten energies between 10.2 and 13.5 MeV. As this reaction at these energies is sensitive almost exclusively to the α-decay width, the results were compared to statistical model calculations using different models for the α-OMP. The irradiation as well as the subsequent γ-ray counting were performed at the Institute for Nuclear Physics of the University of Cologne using the 10 MV FN-Tandem accelerator and the Cologne Clover Counting Setup. This setup consists of two clover-type high-purity germanium (HPGe) detectors in a close face-to-face geometry, covering a solid angle of almost 4π.
Nonlinear Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Everhart, Joel L.; Badavi, Forooz F.
1989-01-01
Nonlinear optimization algorithm helps in finding best-fit curve. Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve-fitting routine based on a quadratic expansion of the χ² statistic. Utilizes nonlinear optimization algorithm calculating best statistically weighted values of the parameters of the fitting function such that χ² is minimized. Provides user with such statistical information as goodness of fit and estimated values of parameters producing highest degree of correlation between experimental data and mathematical model. Written in FORTRAN 77.
Capillary fluctuations of surface steps: An atomistic simulation study for the model Cu(111) system
NASA Astrophysics Data System (ADS)
Freitas, Rodrigo; Frolov, Timofey; Asta, Mark
2017-10-01
Molecular dynamics (MD) simulations are employed to investigate the capillary fluctuations of steps on the surface of a model metal system. The fluctuation spectrum, characterized by the wave number (k) dependence of the mean squared capillary-wave amplitudes and associated relaxation times, is calculated for ⟨110⟩ and ⟨112⟩ steps on the {111} surface of elemental copper near the melting temperature of the classical potential model considered. Step stiffnesses are derived from the MD results, yielding values from the largest system sizes of (37 ± 1) meV/Å for the different line orientations, implying that the stiffness is isotropic within the statistical precision of the calculations. The fluctuation lifetimes are found to vary by approximately four orders of magnitude over the range of wave numbers investigated, displaying a k dependence consistent with kinetics governed by step-edge mediated diffusion. The values for step stiffness derived from these simulations are compared to step free energies for the same system and temperature obtained in a recent MD-based thermodynamic-integration (TI) study [Freitas, Frolov, and Asta, Phys. Rev. B 95, 155444 (2017), 10.1103/PhysRevB.95.155444]. Results from the capillary-fluctuation analysis and TI calculations yield statistically significant differences that are discussed within the framework of statistical-mechanical theories for configurational contributions to step free energies.
NASA Astrophysics Data System (ADS)
Kumar, Rohit; Puri, Rajeev K.
2018-03-01
Employing the quantum molecular dynamics (QMD) approach for nucleus-nucleus collisions, we test the predictive power of the energy-based clusterization algorithm, i.e., the simulated annealing clusterization algorithm (SACA), to describe the experimental data of charge distribution and various event-by-event correlations among fragments. The calculations are constrained to the Fermi-energy domain and/or mildly excited nuclear matter. Our detailed study spans different system masses and system-mass asymmetries of the colliding partners, and shows the importance of the energy-based clusterization algorithm for understanding multifragmentation. The present calculations are also compared with other available calculations, which use one-body models, statistical models, and/or hybrid models.
Comparison of thermal signatures of a mine buried in mineral and organic soils
NASA Astrophysics Data System (ADS)
Lamorski, K.; Pregowski, Piotr; Swiderski, Waldemar; Usowicz, B.; Walczak, R. T.
2001-10-01
Values of the thermal signature of a mine buried in soils which have different properties were compared using mathematical-statistical modeling. A model of transport phenomena in the soil was applied, which takes into consideration water and energy transfer. The energy transport is described using Fourier's equation. Liquid-phase transport of water is calculated using Richards' model of water flow in a porous medium. For the comparison, two soils were selected, mineral and organic, which differ significantly in thermal and hydrological properties. The heat capacity of the soil was estimated using the de Vries model. The thermal conductivity was calculated using a statistical model which incorporates fundamental soil physical properties. The model of soil thermal conductivity was built on the basis of heat resistance, the two Kirchhoff laws, and a polynomial distribution. The impact of the thermal properties of the medium in which a mine had been placed on its thermal signature under conditions of heat input is presented. A dependence between the observed thermal signature of a mine and the thermal parameters of the medium was established.
Assessment of NDE reliability data
NASA Technical Reports Server (NTRS)
Yee, B. G. W.; Couchman, J. C.; Chang, F. H.; Packman, D. F.
1975-01-01
Twenty sets of relevant nondestructive test (NDT) reliability data were identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations was formulated, and a model to grade the quality and validity of the data sets was developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, were formulated for each NDE method. A comprehensive computer program was written and debugged to calculate the probability of flaw detection at several confidence limits by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. An example of the calculated reliability of crack detection in bolt holes by an automatic eddy current method is presented.
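As a hedged sketch of the binomial probability-of-detection calculation described above (the counts are invented, and the Clopper-Pearson bound is one standard choice rather than necessarily the report's exact procedure):

```python
from scipy.stats import beta

def pod_lower_bound(detections, trials, confidence=0.95):
    """One-sided lower confidence bound on the probability of flaw detection
    from a binomial experiment (Clopper-Pearson)."""
    if detections == 0:
        return 0.0
    return beta.ppf(1.0 - confidence, detections, trials - detections + 1)

# Hypothetical example: 28 of 29 cracks detected by eddy current inspection.
print(f"POD >= {pod_lower_bound(28, 29):.3f} at 95% confidence")
```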
Effect of clustering on the emission of light charged particles
NASA Astrophysics Data System (ADS)
Kundu, Samir; Bhattacharya, C.; Rana, T. K.; Bhattacharya, S.; Pandey, R.; Banerjee, K.; Roy, Pratap; Meena, J. K.; Mukherjee, G.; Ghosh, T. K.; Mukhopadhyay, S.; Saha, A. K.; Sahoo, J. K.; Mandal Saha, R.; Srivastava, V.; Sinha, M.; Asgar, Md. A.
2018-04-01
Energy spectra of light charged particles emitted in the reaction p + 27Al → 28Si* have been studied and compared with statistical model calculations. Unlike 16O + 12C, where a large deformation was observed in 28Si*, the energy spectra of α-particles were well explained by the statistical model calculation with standard "deformability parameters" obtained using the rotating liquid drop model. It seems that the α-clustering in the entrance channel causes extra deformation in 28Si* in the case of 16O + 12C, but the reanalysis of other published data shows that there are several cases where extra deformation was observed for composites produced via non-α-cluster entrance channels as well. An empirical relation was found between the mass asymmetry in the entrance channel and the deformation, which indicates that, along with α-clustering, mass asymmetry may also affect the emission of light charged particles.
Quantifying falsifiability of scientific theories
NASA Astrophysics Data System (ADS)
Nemenman, Ilya
I argue that the notion of falsifiability, a key concept in defining a valid scientific theory, can be quantified using Bayesian Model Selection, which is a standard tool in modern statistics. This relates falsifiability to the quantitative version of the statistical Occam's razor, and allows transforming some long-running arguments about validity of scientific theories from philosophical discussions to rigorous mathematical calculations.
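A toy example (not from the abstract) of how Bayesian model selection penalizes an unfalsifiable, maximally flexible hypothesis: compare a sharp model for coin flips (p = 0.5) against one that can accommodate any outcome (p uniform on [0, 1]).

```python
from math import comb

def bayes_factor(successes, trials):
    """Bayes factor of the sharp model (p = 0.5) over the flexible model
    (p uniform on [0, 1]); the flexible model pays an automatic Occam penalty."""
    evidence_sharp = comb(trials, successes) * 0.5 ** trials
    evidence_flexible = 1.0 / (trials + 1)  # uniform prior => uniform marginal
    return evidence_sharp / evidence_flexible

print(bayes_factor(52, 100))  # near-even data favor the sharp (falsifiable) model
print(bayes_factor(80, 100))  # lopsided data favor the flexible model
```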
The calculation of neutron capture gamma-ray yields for space shielding applications
NASA Technical Reports Server (NTRS)
Yost, K. J.
1972-01-01
The application of nuclear models to the calculation of neutron capture and inelastic scattering gamma yields is discussed. The gamma ray cascade model describes the cascade process in terms of parameters which either: (1) embody statistical assumptions regarding electric and magnetic multipole transition strengths, level densities, and spin and parity distributions or (2) are fixed by experiment such as measured energies, spin and parity values, and transition probabilities for low lying states.
NASA Astrophysics Data System (ADS)
Zaichik, Leonid I.; Alipchenkov, Vladimir M.
2009-10-01
The purpose of this paper is twofold: (i) to advance and extend the statistical two-point models of pair dispersion and particle clustering in isotropic turbulence that were previously proposed by Zaichik and Alipchenkov (2003 Phys. Fluids 15 1776-87; 2007 Phys. Fluids 19 113308) and (ii) to present some applications of these models. The models developed are based on a kinetic equation for the two-point probability density function of the relative velocity distribution of two particles. These models predict the pair relative velocity statistics and the preferential accumulation of heavy particles in stationary and decaying homogeneous isotropic turbulent flows. Moreover, the models are applied to predict the effect of particle clustering on turbulent collisions, sedimentation and intensity of microwave radiation as well as to calculate the mean filtered subgrid stress of the particulate phase. Model predictions are compared with direct numerical simulations and experimental measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
More, R.M.
A new statistical model (the quantum-statistical model (QSM)) was recently introduced by Kalitkin and Kuzmina for the calculation of thermodynamic properties of compressed matter. This paper examines the QSM and gives (i) a numerical QSM calculation of pressure and energy for aluminum and comparison to existing augmented-plane-wave data; (ii) display of separate kinetic, exchange, and quantum pressure terms; (iii) a study of electron density at the nucleus; (iv) a study of the effects of the Kirzhnitz-Weizsacker parameter controlling the gradient terms; (v) an analytic expansion for very high densities; and (vi) rigorous pressure theorems including a general version of the virial theorem which applies to an arbitrary microscopic volume. It is concluded that the QSM represents the most accurate and consistent theory of the Thomas-Fermi type.
Quantitative assessment model for gastric cancer screening
Chen, Kun; Yu, Wei-Ping; Song, Liang; Zhu, Yi-Min
2005-01-01
AIM: To set up a mathematical model for gastric cancer screening and to evaluate its function in mass screening for gastric cancer. METHODS: A case-control study was carried out on 66 patients and 198 normal people, and the risk and protective factors of gastric cancer were determined, including heavy manual work, foods such as small yellow-fin tuna, dried small shrimps, squills, and crabs, mothers suffering from gastric diseases, spouse alive, use of refrigerators and hot food, etc. According to principles and methods of probability and fuzzy mathematics, a quantitative assessment model was established as follows: first, we selected factors significant in statistics and calculated a weight coefficient for each one by two different methods; second, the population space was divided into a gastric cancer fuzzy subset and a non-gastric-cancer fuzzy subset, a mathematical model for each subset was established, and we obtained a mathematical expression for the attribute degree (AD). RESULTS: Based on the data of 63 patients and 693 normal people, the AD of each subject was calculated. Considering the sensitivity and specificity, the thresholds of the calculated AD values were set to 0.20 and 0.17, respectively. According to these thresholds, the sensitivity and specificity of the quantitative model were about 69% and 63%. Moreover, a statistical test showed that the identification outcomes of the two different calculation methods were identical (P>0.05). CONCLUSION: The validity of this method is satisfactory. It is convenient, feasible, and economic, and can be used to determine individual and population risks of gastric cancer. PMID:15655813
Condensate statistics and thermodynamics of weakly interacting Bose gas: Recursion relation approach
NASA Astrophysics Data System (ADS)
Dorfman, K. E.; Kim, M.; Svidzinsky, A. A.
2011-03-01
We study condensate statistics and thermodynamics of a weakly interacting Bose gas with a fixed total number N of particles in a cubic box. We find the exact recursion relation for the canonical ensemble partition function. Using this relation, we calculate the distribution function of condensate particles for N=200. We also calculate the distribution function based on a multinomial expansion of the characteristic function. Similar to the ideal gas, both approaches give exact statistical moments for all temperatures in the framework of the Bogoliubov model. We compare them with the results of the unconstrained canonical ensemble quasiparticle formalism and the hybrid master equation approach. The present recursion relation can be used for any external potential and boundary conditions. We investigate the temperature dependence of the first few statistical moments of condensate fluctuations as well as thermodynamic potentials and heat capacity analytically and numerically in the whole temperature range.
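The paper's recursion is derived for the interacting gas within the Bogoliubov model; the sketch below shows only the well-known ideal-Bose-gas canonical recursion Z_N = (1/N) * sum_{k=1..N} z1(k*beta) * Z_{N-k} for particles in a cubic box, as an illustration of the general strategy (the level truncation and temperature are arbitrary choices):

```python
import numpy as np

def z1(beta, nmax=20):
    """Single-particle partition function for a cubic box with levels
    eps = nx^2 + ny^2 + nz^2 (energies measured from the ground state)."""
    n = np.arange(1, nmax + 1)
    e = n[:, None, None]**2 + n[None, :, None]**2 + n[None, None, :]**2
    return np.exp(-beta * (e - 3)).sum()

def canonical_Z(N, beta):
    """Canonical partition functions Z_0..Z_N for N ideal bosons via the
    recursion Z_n = (1/n) * sum_{k=1}^{n} z1(k*beta) * Z_{n-k}."""
    Z = np.zeros(N + 1)
    Z[0] = 1.0
    z1k = np.array([z1(k * beta) for k in range(1, N + 1)])
    for n in range(1, N + 1):
        Z[n] = np.dot(z1k[:n], Z[n - 1::-1]) / n
    return Z

Z = canonical_Z(200, beta=0.05)
print("ln Z_200 =", np.log(Z[-1]))
```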
NASA Astrophysics Data System (ADS)
Pham, M. T.; Vanhaute, W. J.; Vandenberghe, S.; De Baets, B.; Verhoest, N. E. C.
2013-12-01
Of all natural disasters, the economic and environmental consequences of droughts are among the highest because of their longevity and widespread spatial extent. Because of their extreme behaviour, studying droughts generally requires long time series of historical climate data. Rainfall is a very important variable for calculating drought statistics, for quantifying historical droughts or for assessing the impact on other hydrological (e.g. water stage in rivers) or agricultural (e.g. irrigation requirements) variables. Unfortunately, time series of historical observations are often too short for such assessments. To circumvent this, one may rely on the synthetic rainfall time series from stochastic point process rainfall models, such as Bartlett-Lewis models. The present study investigates whether drought statistics are preserved when simulating rainfall with Bartlett-Lewis models. Therefore, a 105 yr 10 min rainfall time series obtained at Uccle, Belgium is used as a test case. First, drought events were identified on the basis of the Effective Drought Index (EDI), and each event was characterized by two variables, i.e. drought duration (D) and drought severity (S). As both parameters are interdependent, a multivariate distribution function, which makes use of a copula, was fitted. Based on the copula, four types of drought return periods are calculated for observed as well as simulated droughts and are used to evaluate the ability of the rainfall models to simulate drought events with the appropriate characteristics. Overall, all Bartlett-Lewis model types studied fail to preserve extreme drought statistics, which is attributed to the model structure and to the model stationarity caused by maintaining the same parameter set during the whole simulation period.
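As a hedged sketch of the copula-based return periods mentioned in the preceding abstract (the Gumbel copula, its parameter, the marginal percentiles, and the interarrival time below are all illustrative placeholders, not the fitted Uccle values), the joint "AND" return period can be computed as:

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel copula C(u, v) with dependence parameter theta >= 1."""
    return np.exp(-((-np.log(u))**theta + (-np.log(v))**theta)**(1.0 / theta))

def and_return_period(u, v, theta, mean_interarrival_years):
    """'AND' return period: mean interarrival time of drought events divided
    by P(D > d and S > s), with u = F_D(d) and v = F_S(s)."""
    p_exceed_both = 1.0 - u - v + gumbel_copula(u, v, theta)
    return mean_interarrival_years / p_exceed_both

# Hypothetical drought with duration and severity each at the 95th percentile
# of their marginals, theta = 2, one drought event per year on average.
print(round(and_return_period(0.95, 0.95, theta=2.0, mean_interarrival_years=1.0), 1))
```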
NASA Astrophysics Data System (ADS)
Pham, M. T.; Vanhaute, W. J.; Vandenberghe, S.; De Baets, B.; Verhoest, N. E. C.
2013-06-01
Of all natural disasters, the economic and environmental consequences of droughts are among the highest because of their longevity and widespread spatial extent. Because of their extreme behaviour, studying droughts generally requires long time series of historical climate data. Rainfall is a very important variable for calculating drought statistics, for quantifying historical droughts or for assessing the impact on other hydrological (e.g. water stage in rivers) or agricultural (e.g. irrigation requirements) variables. Unfortunately, time series of historical observations are often too short for such assessments. To circumvent this, one may rely on the synthetic rainfall time series from stochastic point process rainfall models, such as Bartlett-Lewis models. The present study investigates whether drought statistics are preserved when simulating rainfall with Bartlett-Lewis models. Therefore, a 105 yr 10 min rainfall time series obtained at Uccle, Belgium is used as a test case. First, drought events were identified on the basis of the Effective Drought Index (EDI), and each event was characterized by two variables, i.e. drought duration (D) and drought severity (S). As both parameters are interdependent, a multivariate distribution function, which makes use of a copula, was fitted. Based on the copula, four types of drought return periods are calculated for observed as well as simulated droughts and are used to evaluate the ability of the rainfall models to simulate drought events with the appropriate characteristics. Overall, all Bartlett-Lewis model types studied fail to preserve extreme drought statistics, which is attributed to the model structure and to the model stationarity caused by maintaining the same parameter set during the whole simulation period.
Hart, Carl R; Reznicek, Nathan J; Wilson, D Keith; Pettit, Chris L; Nykaza, Edward T
2016-05-01
Many outdoor sound propagation models exist, ranging from highly complex physics-based simulations to simplified engineering calculations, and more recently, highly flexible statistical learning methods. Several engineering and statistical learning models are evaluated by using a particular physics-based model, namely, a Crank-Nicholson parabolic equation (CNPE), as a benchmark. Narrowband transmission loss values predicted with the CNPE, based upon a simulated data set of meteorological, boundary, and source conditions, act as simulated observations. In the simulated data set sound propagation conditions span from downward refracting to upward refracting, for acoustically hard and soft boundaries, and low frequencies. Engineering models used in the comparisons include the ISO 9613-2 method, Harmonoise, and Nord2000 propagation models. Statistical learning methods used in the comparisons include bagged decision tree regression, random forest regression, boosting regression, and artificial neural network models. Computed skill scores are relative to sound propagation in a homogeneous atmosphere over a rigid ground. Overall skill scores for the engineering noise models are 0.6%, -7.1%, and 83.8% for the ISO 9613-2, Harmonoise, and Nord2000 models, respectively. Overall skill scores for the statistical learning models are 99.5%, 99.5%, 99.6%, and 99.6% for bagged decision tree, random forest, boosting, and artificial neural network regression models, respectively.
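None of the benchmark CNPE data are reproduced here; as a hedged illustration of the statistical-learning side of such a comparison, the sketch below trains a random forest on invented propagation conditions and reports a skill score relative to a trivial reference prediction:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)

# Hypothetical inputs: range (km), sound-speed gradient (1/s), log10 ground
# flow resistivity, frequency (Hz); output: transmission loss (dB).
n = 3000
X = np.column_stack([rng.uniform(0.1, 10, n), rng.normal(0, 0.1, n),
                     rng.uniform(4, 8, n), rng.uniform(25, 400, n)])
tl = (20 * np.log10(X[:, 0] * 1000) - 80 * X[:, 1] + 2 * X[:, 2]
      + 0.01 * X[:, 3] + rng.normal(0, 1.5, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, tl, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Skill score relative to a reference prediction (here the training mean,
# standing in for the homogeneous-atmosphere baseline).
mse_model = np.mean((model.predict(X_te) - y_te) ** 2)
mse_ref = np.mean((y_tr.mean() - y_te) ** 2)
print(f"skill score = {100 * (1 - mse_model / mse_ref):.1f}%")
```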
2009-12-01
events. Work associated with aperiodic tasks has the same statistical behavior and the same timing requirements. The timing deadlines are soft. • Sporadic...answers, but it is possible to calculate how precise the estimates are. Simulation-based performance analysis of a model includes a statistical ...to evaluate all possible states in a timely manner. This is the principal reason for resorting to simulation and statistical analysis to evaluate
Systematic Error Modeling and Bias Estimation
Zhang, Feihu; Knoll, Alois
2016-01-01
This paper analyzes the statistical properties of the systematic error in terms of range and bearing during the transformation process. Furthermore, we rely on a weighted nonlinear least squares method to calculate the biases based on the proposed models. The results show the high performance of the proposed approach for error modeling and bias estimation. PMID:27213386
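The paper's specific error model is not reproduced here; as a hedged, generic illustration of estimating constant range and bearing biases by weighted nonlinear least squares, with invented landmark geometry and noise levels:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)

# A sensor observes known landmark positions and reports range and bearing
# corrupted by noise plus constant (systematic) biases.
landmarks = rng.uniform(-50, 50, size=(30, 2))
true_bias = np.array([1.5, np.deg2rad(2.0)])          # range (m), bearing (rad)
r_true = np.hypot(landmarks[:, 0], landmarks[:, 1])
b_true = np.arctan2(landmarks[:, 1], landmarks[:, 0])
r_meas = r_true + true_bias[0] + rng.normal(0, 0.5, 30)
b_meas = b_true + true_bias[1] + rng.normal(0, np.deg2rad(0.3), 30)

# Weights are reciprocal measurement standard deviations.
w = np.concatenate([np.full(30, 1 / 0.5), np.full(30, 1 / np.deg2rad(0.3))])

def residuals(bias):
    return w * np.concatenate([r_meas - bias[0] - r_true,
                               b_meas - bias[1] - b_true])

fit = least_squares(residuals, x0=np.zeros(2))
print("estimated range bias (m):", round(fit.x[0], 2),
      "| bearing bias (deg):", round(np.rad2deg(fit.x[1]), 2))
```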
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, C.; Potts, I.; Reeks, M. W., E-mail: mike.reeks@ncl.ac.uk
We present a simple stochastic quadrant model for calculating the transport and deposition of heavy particles in a fully developed turbulent boundary layer based on the statistics of wall-normal fluid velocity fluctuations obtained from a fully developed channel flow. Individual particles are tracked through the boundary layer via their interactions with a succession of random eddies found in each of the quadrants of the fluid Reynolds shear stress domain in a homogeneous Markov chain process. In this way, we are able to account directly for the influence of ejection and sweeping events as others have done, but without resorting to the use of adjustable parameters. Deposition rate predictions for a wide range of heavy particles predicted by the model compare well with benchmark experimental measurements. In addition, deposition rates are compared with those obtained from continuous random walk models and Langevin equation based ejection and sweep models which noticeably give significantly lower deposition rates. Various statistics related to the particle near wall behavior are also presented. Finally, we consider the model limitations in using the model to calculate deposition in more complex flows where the near wall turbulence may be significantly different.
Modeling Fission Product Sorption in Graphite Structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szlufarska, Izabela; Morgan, Dane; Allen, Todd
2013-04-08
The goal of this project is to determine changes in adsorption and desorption of fission products to/from nuclear-grade graphite in response to a changing chemical environment. First, the project team will employ first-principles calculations and thermodynamic analysis to predict stability of fission products on graphite in the presence of structural defects commonly observed in very high-temperature reactor (VHTR) graphites. Desorption rates will be determined as a function of partial pressure of oxygen and iodine, relative humidity, and temperature. They will then carry out experimental characterization to determine the statistical distribution of structural features. This structural information will yield distributions of binding sites to be used as an input for a sorption model. Sorption isotherms calculated under this project will contribute to understanding of the physical bases of the source terms that are used in higher-level codes that model fission product transport and retention in graphite. The project will include the following tasks: Perform structural characterization of the VHTR graphite to determine crystallographic phases, defect structures and their distribution, volume fraction of coke, and amount of sp2 versus sp3 bonding. This information will be used as guidance for ab initio modeling and as input for sorptivity models; Perform ab initio calculations of binding energies to determine stability of fission products on the different sorption sites present in nuclear graphite microstructures. The project will use density functional theory (DFT) methods to calculate binding energies in vacuum and in oxidizing environments. The team will also calculate stability of iodine complexes with fission products on graphite sorption sites; Model graphite sorption isotherms to quantify concentration of fission products in graphite. The binding energies will be combined with a Langmuir isotherm statistical model to predict the sorbed concentration of fission products on each type of graphite site. The model will include multiple simultaneous adsorbing species, which will allow for competitive adsorption effects between different fission product species and O and OH (for modeling accident conditions).
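As a hedged sketch of the kind of Langmuir isotherm statistical model mentioned in the last task (the binding energies, partial pressures, and temperature below are illustrative placeholders, not DFT results from the project), competitive adsorption of several species on one site type can be written as:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def competitive_langmuir(partial_pressures, binding_energies_eV, T):
    """Fractional coverage of each species on a single site type:
    theta_i = K_i p_i / (1 + sum_j K_j p_j), with K_i = exp(-E_i / (k_B T)).
    Pressures are in units of the reference pressure; negative binding
    energies denote stable adsorption."""
    p = np.asarray(partial_pressures, dtype=float)
    K = np.exp(-np.asarray(binding_energies_eV) / (K_B * T))
    occupied = K * p
    return occupied / (1.0 + occupied.sum())

# Hypothetical: Cs, Sr, and I competing for the same graphite defect site at
# 1200 K (all numbers are placeholders).
theta = competitive_langmuir(partial_pressures=[1e-6, 5e-7, 1e-8],
                             binding_energies_eV=[-1.8, -1.6, -1.2], T=1200.0)
print(np.round(theta, 4))
```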
The statistics of primordial density fluctuations
NASA Astrophysics Data System (ADS)
Barrow, John D.; Coles, Peter
1990-05-01
The statistical properties of the density fluctuations produced by power-law inflation are investigated. It is found that, even if the fluctuations present in the scalar field driving the inflation are Gaussian, the resulting density perturbations need not be, due to stochastic variations in the Hubble parameter. All the moments of the density fluctuations are calculated, and it is argued that, for realistic parameter choices, the departures from Gaussian statistics are small and would have a negligible effect on the large-scale structure produced in the model. On the other hand, the model predicts a power spectrum with n not equal to 1, and this could be good news for large-scale structure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chukbar, B. K., E-mail: bchukbar@mail.ru
Two methods of modeling a double-heterogeneity fuel are studied: the deterministic positioning and the statistical method CORN of the MCU software package. The effect of distribution of microfuel in a pebble bed on the calculation results is studied. The results of verification of the statistical method CORN for the cases of the microfuel concentration up to 170 cm⁻³ in a pebble bed are presented. The admissibility of homogenization of the microfuel coating with the graphite matrix is studied. The dependence of the reactivity on the relative location of fuel and graphite spheres in a pebble bed is found.
A statistical approach to develop a detailed soot growth model using PAH characteristics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raj, Abhijeet; Celnik, Matthew; Shirley, Raphael
A detailed PAH growth model is developed, which is solved using a kinetic Monte Carlo algorithm. The model describes the structure and growth of planar PAH molecules, and is referred to as the kinetic Monte Carlo-aromatic site (KMC-ARS) model. A detailed PAH growth mechanism based on reactions at radical sites available in the literature, and additional reactions obtained from quantum chemistry calculations are used to model the PAH growth processes. New rates for the reactions involved in the cyclodehydrogenation process for the formation of 6-member rings on PAHs are calculated in this work based on density functional theory simulations. The KMC-ARS model is validated by comparing experimentally observed ensembles on PAHs with the computed ensembles for a C2H2 and a C6H6 flame at different heights above the burner. The motivation for this model is the development of a detailed soot particle population balance model which describes the evolution of an ensemble of soot particles based on their PAH structure. However, at present incorporating such a detailed model into a population balance is computationally unfeasible. Therefore, a simpler model referred to as the site-counting model has been developed, which replaces the structural information of the PAH molecules by their functional groups augmented with statistical closure expressions. This closure is obtained from the KMC-ARS model, which is used to develop correlations and statistics in different flame environments which describe such PAH structural information. These correlations and statistics are implemented in the site-counting model, and results from the site-counting model and the KMC-ARS model are in good agreement. Additionally the effect of steric hindrance in large PAH structures is investigated and correlations for sites unavailable for reaction are presented.
NASA Technical Reports Server (NTRS)
Slobin, S. D.
1982-01-01
The microwave attenuation and noise temperature effects of clouds can result in serious degradation of telecommunications link performance, especially for low-noise systems presently used in deep-space communications. Although cloud effects are generally less than rain effects, the frequent presence of clouds will cause some amount of link degradation a large portion of the time. This paper presents a general review of cloud types and their water particle densities, attenuation and noise temperature calculations, and basic link signal-to-noise ratio calculations. Tabular results of calculations for 12 different cloud models are presented for frequencies in the range 10-50 GHz. Curves of average-year attenuation and noise temperature statistics at frequencies ranging from 10 to 90 GHz, calculated from actual surface and radiosonde observations, are given for 15 climatologically distinct regions in the contiguous United States, Alaska, and Hawaii. Nonuniform sky cover is considered in these calculations.
Grosz, R; Stephanopoulos, G
1983-09-01
The need for the determination of the free energy of formation of biomass in bioreactor second law balances is well established. A statistical mechanical method for the calculation of the free energy of formation of E. coli biomass is introduced. In this method, biomass is modelled to consist of a system of biopolymer networks. The partition function of this system is proposed to consist of acoustic and optical modes of vibration. Acoustic modes are described by Tarasov's model, the parameters of which are evaluated with the aid of low-temperature calorimetric data for the crystalline protein bovine chymotrypsinogen A. The optical modes are described by considering the low-temperature thermodynamic properties of biological monomer crystals such as amino acid crystals. Upper and lower bounds are placed on the entropy to establish the maximum error associated with the statistical method. The upper bound is determined by endowing the monomers in biomass with ideal gas properties. The lower bound is obtained by limiting the monomers to complete immobility. On this basis, the free energy of formation is fixed to within 10%. Proposals are made with regard to experimental verification of the calculated value and extension of the calculation to other types of biomass.
Calculations of proton-binding thermodynamics in proteins.
Beroza, P; Case, D A
1998-01-01
Computational models of proton binding can range from the chemically complex and statistically simple (as in the quantum calculations) to the chemically simple and statistically complex. Much progress has been made in the multiple-site titration problem. Calculations have improved with the inclusion of more flexibility in regard to both the geometry of the proton binding and the larger scale protein motions associated with titration. This article concentrated on the principles of current calculations, but did not attempt to survey their quantitative performance. This is (1) because such comparisons are given in the cited papers and (2) because continued developments in understanding conformational flexibility and interaction energies will be needed to develop robust methods with strong predictive power. Nevertheless, the advances achieved over the past few years should not be underestimated: serious calculations of protonation behavior and its coupling to conformational change can now be confidently pursued against a backdrop of increasing understanding of the strengths and limitations of such models. It is hoped that such theoretical advances will also spur renewed experimental interest in measuring both overall titration curves and individual pKa values or pKa shifts. Exploration of the shapes of individual titration curves (as measured by Hill coefficients and other parameters) would also be useful in assessing the accuracy of computations and in drawing connections to functional behavior.
An emission-weighted proximity model for air pollution exposure assessment.
Zou, Bin; Wilson, J Gaines; Zhan, F Benjamin; Zeng, Yongnian
2009-08-15
Among the most common spatial models for estimating personal exposure are Traditional Proximity Models (TPMs). Though TPMs are straightforward to configure and interpret, they are prone to extensive errors in exposure estimates and do not provide prospective estimates. To resolve these inherent problems with TPMs, we introduce here a novel Emission Weighted Proximity Model (EWPM), which takes into consideration the emissions from all sources potentially influencing the receptors. EWPM performance was evaluated by comparing the normalized exposure risk values of sulfur dioxide (SO(2)) calculated by EWPM with those calculated by TPM and monitored observations over a one-year period in two large Texas counties. In order to investigate whether the limitations of TPM in potential exposure risk prediction without recorded incidence can be overcome, we also introduce a hybrid framework, a 'Geo-statistical EWPM'. Geo-statistical EWPM is a synthesis of Ordinary Kriging Geo-statistical interpolation and EWPM. The prediction results are presented as two potential exposure risk prediction maps. The performance of these two exposure maps in predicting individual SO(2) exposure risk was validated with 10 virtual cases in prospective exposure scenarios. Risk values for EWPM were clearly more agreeable with the observed concentrations than those from TPM. Over the entire study area, the mean SO(2) exposure risk from EWPM was higher relative to TPM (1.00 vs. 0.91). The mean bias of the exposure risk values of the 10 virtual cases between EWPM and 'Geo-statistical EWPM' is much smaller than that between TPM and 'Geo-statistical TPM' (5.12 vs. 24.63). EWPM appears to more accurately portray individual exposure relative to TPM. The 'Geo-statistical EWPM' effectively augments the role of the standard proximity model and makes it possible to predict individual risk in future exposure scenarios resulting in adverse health effects from environmental pollution.
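As a rough illustration of the contrast the abstract draws, the sketch below compares a traditional nearest-source proximity score with an emission-weighted one. The inverse-distance weighting, the function names, and the source data are assumptions made for illustration only; they are not the formulation used in the paper.

```python
import numpy as np

def tpm_score(receptor, sources):
    """Traditional proximity: inverse distance to the nearest source only."""
    d = np.array([np.hypot(receptor[0] - x, receptor[1] - y) for x, y, _ in sources])
    return 1.0 / d.min()

def ewpm_score(receptor, sources):
    """Emission-weighted proximity: sum of emission / distance over all sources
    (an assumed functional form; the paper's exact weighting may differ)."""
    total = 0.0
    for x, y, emission in sources:
        d = np.hypot(receptor[0] - x, receptor[1] - y)
        total += emission / max(d, 1e-6)  # guard against zero distance
    return total

# Hypothetical SO2 sources: (x_km, y_km, annual emission in tonnes)
sources = [(0.0, 0.0, 1200.0), (5.0, 2.0, 300.0), (8.0, 8.0, 900.0)]
receptor = (3.0, 3.0)
print(tpm_score(receptor, sources), ewpm_score(receptor, sources))
```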
2014-01-01
Background Meta-regression is becoming increasingly used to model study level covariate effects. However this type of statistical analysis presents many difficulties and challenges. Here two methods for calculating confidence intervals for the magnitude of the residual between-study variance in random effects meta-regression models are developed. A further suggestion for calculating credible intervals using informative prior distributions for the residual between-study variance is presented. Methods Two recently proposed and, under the assumptions of the random effects model, exact methods for constructing confidence intervals for the between-study variance in random effects meta-analyses are extended to the meta-regression setting. The use of Generalised Cochran heterogeneity statistics is extended to the meta-regression setting and a Newton-Raphson procedure is developed to implement the Q profile method for meta-analysis and meta-regression. WinBUGS is used to implement informative priors for the residual between-study variance in the context of Bayesian meta-regressions. Results Results are obtained for two contrasting examples, where the first example involves a binary covariate and the second involves a continuous covariate. Intervals for the residual between-study variance are wide for both examples. Conclusions Statistical methods, and R computer software, are available to compute exact confidence intervals for the residual between-study variance under the random effects model for meta-regression. These frequentist methods are almost as easily implemented as their established counterparts for meta-analysis. Bayesian meta-regressions are also easily performed by analysts who are comfortable using WinBUGS. Estimates of the residual between-study variance in random effects meta-regressions should be routinely reported and accompanied by some measure of their uncertainty. Confidence and/or credible intervals are well-suited to this purpose. PMID:25196829
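The Q-profile idea described above can be sketched for the simpler random-effects meta-analysis case without covariates; the toy effect sizes, the 95% level, and the search interval are illustrative, and the paper's extension to meta-regression with residual degrees of freedom is not reproduced here.

```python
import numpy as np
from scipy import stats, optimize

def generalised_q(tau2, y, s2):
    """Generalised Cochran Q evaluated at a candidate between-study variance."""
    w = 1.0 / (s2 + tau2)
    mu = np.sum(w * y) / np.sum(w)
    return np.sum(w * (y - mu) ** 2)

def q_profile_ci(y, s2, level=0.95):
    """Q-profile confidence interval for tau^2 in a random-effects meta-analysis."""
    k = len(y)
    lo_target = stats.chi2.ppf((1 + level) / 2, df=k - 1)  # Q decreases in tau2
    hi_target = stats.chi2.ppf((1 - level) / 2, df=k - 1)
    def solve(target):
        f = lambda t2: generalised_q(t2, y, s2) - target
        if f(0.0) <= 0:           # endpoint truncated at zero
            return 0.0
        return optimize.brentq(f, 0.0, 1e4)
    return solve(lo_target), solve(hi_target)

# Toy data: study effect estimates and within-study variances
y = np.array([0.10, 0.35, -0.05, 0.42, 0.20])
s2 = np.array([0.04, 0.09, 0.05, 0.12, 0.06])
print(q_profile_ci(y, s2))
```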
Climate Considerations Of The Electricity Supply Systems In Industries
NASA Astrophysics Data System (ADS)
Asset, Khabdullin; Zauresh, Khabdullina
2014-12-01
The study is focused on analysis of climate considerations of electricity supply systems in a pellet industry. The developed analysis model consists of two modules: statistical data of active power losses evaluation module and climate aspects evaluation module. The statistical data module is presented as a universal mathematical model of electrical systems and components of industrial load. It forms a basis for detailed accounting of power loss from the voltage levels. On the basis of the universal model, a set of programs is designed to perform the calculation and experimental research. It helps to obtain the statistical characteristics of the power losses and loads of the electricity supply systems and to define the nature of changes in these characteristics. Within the module, several methods and algorithms for calculating parameters of equivalent circuits of low- and high-voltage ADC and SD with a massive smooth rotor with laminated poles are developed. The climate aspects module includes an analysis of the experimental data of power supply system in pellet production. It allows identification of GHG emission reduction parameters: operation hours, type of electrical motors, values of load factor and deviation of standard value of voltage.
Jones, Hayley E; Hickman, Matthew; Kasprzyk-Hordern, Barbara; Welton, Nicky J; Baker, David R; Ades, A E
2014-07-15
Concentrations of metabolites of illicit drugs in sewage water can be measured with great accuracy and precision, thanks to the development of sensitive and robust analytical methods. Based on assumptions about factors including the excretion profile of the parent drug, routes of administration and the number of individuals using the wastewater system, the level of consumption of a drug can be estimated from such measured concentrations. When presenting results from these 'back-calculations', the multiple sources of uncertainty are often discussed, but are not usually explicitly taken into account in the estimation process. In this paper we demonstrate how these calculations can be placed in a more formal statistical framework by assuming a distribution for each parameter involved, based on a review of the evidence underpinning it. Using a Monte Carlo simulations approach, it is then straightforward to propagate uncertainty in each parameter through the back-calculations, producing a distribution for daily or average consumption instead of a single estimate. This can be summarised for example by a median and credible interval. To demonstrate this approach, we estimate cocaine consumption in a large urban UK population, using measured concentrations of two of its metabolites, benzoylecgonine and norbenzoylecgonine. We also demonstrate a more sophisticated analysis, implemented within a Bayesian statistical framework using Markov chain Monte Carlo simulation. Our model allows the two metabolites to simultaneously inform estimates of daily cocaine consumption and explicitly allows for variability between days. After accounting for this variability, the resulting credible interval for average daily consumption is appropriately wider, representing additional uncertainty. We discuss possibilities for extensions to the model, and whether analysis of wastewater samples has potential to contribute to a prevalence model for illicit drug use. Copyright © 2014. Published by Elsevier B.V.
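A minimal sketch of the Monte Carlo propagation described above might look as follows. All distributions and numbers (metabolite concentration, flow, excretion fraction, population) are illustrative assumptions, not the values used in the study; only the cocaine/benzoylecgonine molar-mass ratio is a known constant.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 100_000

# Illustrative input distributions (not the paper's values)
conc_ng_per_l = rng.normal(800, 80, n_sim)         # metabolite concentration in wastewater
flow_l_per_day = rng.normal(3.0e8, 3.0e7, n_sim)   # daily wastewater flow
excretion_frac = rng.uniform(0.25, 0.45, n_sim)    # fraction of dose excreted as this metabolite
mass_ratio = 303.35 / 289.33                       # cocaine / benzoylecgonine molar masses
population = 1.0e6

load_mg = conc_ng_per_l * flow_l_per_day * 1e-6            # ng/L * L/day -> mg/day of metabolite
consumption_mg = load_mg * mass_ratio / excretion_frac     # back-calculated parent drug, mg/day
per_1000 = consumption_mg / (population / 1000.0)          # mg/day per 1000 inhabitants

print(np.percentile(per_1000, [2.5, 50, 97.5]))  # median and 95% interval
```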
Bhatnagar, Navendu; Kamath, Ganesh; Chelst, Issac; Potoff, Jeffrey J
2012-07-07
The 1-octanol-water partition coefficient log K(ow) of a solute is a key parameter used in the prediction of a wide variety of complex phenomena such as drug availability and bioaccumulation potential of trace contaminants. In this work, adaptive biasing force molecular dynamics simulations are used to determine absolute free energies of hydration, solvation, and 1-octanol-water partition coefficients for n-alkanes from methane to octane. Two approaches are evaluated: the direct transfer of the solute from 1-octanol to water phase, and separate transfers of the solute from the water or 1-octanol phase to vacuum, with both methods yielding statistically indistinguishable results. Calculations performed with the TIP4P and SPC/E water models and the TraPPE united-atom force field for n-alkanes show that the choice of water model has a negligible effect on predicted free energies of transfer and partition coefficients for n-alkanes. A comparison of calculations using wet and dry octanol phases shows that the predictions for log K(ow) using wet octanol are 0.2-0.4 log units lower than for dry octanol, although this is within the statistical uncertainty of the calculation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tripathi, R.; Sudarshan, K.; Sharma, S. K.
2009-06-15
Fission fragment angular distributions have been measured in the reactions 16O + 188Os and 28Si + 176Yb to investigate the contribution from noncompound nucleus fission. Parameters for statistical model calculations were fixed using fission cross section data in the 16O + 188Os reaction. Experimental anisotropies were in reasonable agreement with those calculated using the statistical saddle point model for both reactions. The present results are also consistent with those of mass distribution studies in the fission of 202Po, formed in the reactions with varying entrance channel mass asymmetry. However, the present studies do not show a large fusion hindrance as reported in the pre-actinide region based on the measurement of evaporation residue cross section.
NASA Technical Reports Server (NTRS)
Forbes, G. S.; Pielke, R. A.
1985-01-01
Various empirical and statistical weather-forecasting studies which utilize stratification by weather regime are described. Objective classification was used to determine weather regime in some studies. In other cases the weather pattern was determined on the basis of a parameter representing the physical and dynamical processes relevant to the anticipated mesoscale phenomena, such as low level moisture convergence and convective precipitation, or the Froude number and the occurrence of cold-air damming. For mesoscale phenomena already in existence, new forecasting techniques were developed. The use of cloud models in operational forecasting is discussed. Models to calculate the spatial scales of forcings and resultant response for mesoscale systems are presented. The use of these models to represent the climatologically most prevalent systems, and to perform case-by-case simulations is reviewed. Operational implementation of mesoscale data into weather forecasts, using both actual simulation output and model-output statistics, is discussed.
NASA Astrophysics Data System (ADS)
Mumpower, M. R.; Kawano, T.; Ullmann, J. L.; Krtička, M.; Sprouse, T. M.
2017-08-01
Radiative neutron capture is an important nuclear reaction whose accurate description is needed for many applications ranging from nuclear technology to nuclear astrophysics. The description of such a process relies on the Hauser-Feshbach theory which requires the nuclear optical potential, level density, and γ-strength function as model inputs. It has recently been suggested that the M1 scissors mode may explain discrepancies between theoretical calculations and evaluated data. We explore statistical model calculations with the strength of the M1 scissors mode estimated to be dependent on the nuclear deformation of the compound system. We show that the form of the M1 scissors mode improves the theoretical description of evaluated data and the match to experiment in both the fission product and actinide regions. Since the scissors mode occurs in the range of a few keV to a few MeV, it may also impact the neutron capture cross sections of neutron-rich nuclei that participate in the rapid neutron capture process of nucleosynthesis. We comment on the possible impact to nucleosynthesis by evaluating neutron capture rates for neutron-rich nuclei with the M1 scissors mode active.
2015-09-30
into acoustic fluctuation calculations. In the Philippine Sea, models of eddies, internal tides, internal waves, and fine structure (spice) are ... needed, while in the shallow-water case models of the random linear internal waves and spice are lacking. APPROACH: The approach to this research is to
NASA Technical Reports Server (NTRS)
Manning, Robert M.
1987-01-01
A dynamic rain attenuation prediction model is developed for use in obtaining the temporal characteristics, on time scales of minutes or hours, of satellite communication link availability. Analogous to the associated static rain attenuation model, which yields yearly attenuation predictions, this dynamic model is applicable at any location in the world that is characterized by the static rain attenuation statistics peculiar to the geometry of the satellite link and the rain statistics of the location. Such statistics are calculated by employing the formalism of Part I of this report. In fact, the dynamic model presented here is an extension of the static model and reduces to the static model in the appropriate limit. By assuming that rain attenuation is dynamically described by a first-order stochastic differential equation in time and that this random attenuation process is a Markov process, an expression for the associated transition probability is obtained by solving the related forward Kolmogorov equation. This transition probability is then used to obtain such temporal rain attenuation statistics as attenuation durations and allowable attenuation margins versus control system delay.
Ensuring Positiveness of the Scaled Difference Chi-square Test Statistic.
Satorra, Albert; Bentler, Peter M
2010-06-01
A scaled difference test statistic [Formula: see text] that can be computed from standard software of structural equation models (SEM) by hand calculations was proposed in Satorra and Bentler (2001). The statistic [Formula: see text] is asymptotically equivalent to the scaled difference test statistic T̄(d) introduced in Satorra (2000), which requires more involved computations beyond standard output of SEM software. The test statistic [Formula: see text] has been widely used in practice, but in some applications it is negative due to negativity of its associated scaling correction. Using the implicit function theorem, this note develops an improved scaling correction leading to a new scaled difference statistic T̄(d) that avoids negative chi-square values.
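For orientation, the hand calculation from the original 2001 proposal, as it is usually described, can be sketched as below: the scaled difference is (T0 - T1)/cd with cd = (d0*c0 - d1*c1)/(d0 - d1). The numerical inputs are invented, and the improved correction proposed in this note, which requires additional model information, is not shown.

```python
def scaled_difference(T0, d0, c0, T1, d1, c1):
    """Satorra-Bentler (2001) scaled chi-square difference test.

    T0, d0, c0: ML chi-square, degrees of freedom and scaling correction of the
    restricted (nested) model; T1, d1, c1: the same quantities for the comparison
    model. The scaling corrections are the ratios T_i / T_i(scaled) reported by
    standard SEM software.
    """
    cd = (d0 * c0 - d1 * c1) / (d0 - d1)  # can be negative, the issue this note addresses
    return (T0 - T1) / cd, d0 - d1

# Illustrative values only
stat, df = scaled_difference(T0=95.2, d0=24, c0=1.30, T1=60.4, d1=20, c1=1.25)
print(stat, df)
```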
Collisional-radiative switching - A powerful technique for converging non-LTE calculations
NASA Technical Reports Server (NTRS)
Hummer, D. G.; Voels, S. A.
1988-01-01
A very simple technique has been developed to converge statistical equilibrium and model atmospheric calculations in extreme non-LTE conditions when the usual iterative methods fail to converge from an LTE starting model. The proposed technique is based on a smooth transition from a collision-dominated LTE situation to the desired non-LTE conditions in which radiation dominates, at least in the most important transitions. The proposed approach was used to successfully compute stellar models with He abundances of 0.20, 0.30, and 0.50; Teff = 30,000 K, and log g = 2.9.
An Analytical Thermal Model for Autonomous Soaring Research
NASA Technical Reports Server (NTRS)
Allen, Michael
2006-01-01
A viewgraph presentation describing an analytical thermal model used to enable research on autonomous soaring for a small UAV aircraft is given. The topics include: 1) Purpose; 2) Approach; 3) SURFRAD Data; 4) Convective Layer Thickness; 5) Surface Heat Budget; 6) Surface Virtual Potential Temperature Flux; 7) Convective Scaling Velocity; 8) Other Calculations; 9) Yearly trends; 10) Scale Factors; 11) Scale Factor Test Matrix; 12) Statistical Model; 13) Updraft Strength Calculation; 14) Updraft Diameter; 15) Updraft Shape; 16) Smoothed Updraft Shape; 17) Updraft Spacing; 18) Environment Sink; 19) Updraft Lifespan; 20) Autonomous Soaring Research; 21) Planned Flight Test; and 22) Mixing Ratio.
Environmental flow allocation and statistics calculator
Konrad, Christopher P.
2011-01-01
The Environmental Flow Allocation and Statistics Calculator (EFASC) is a computer program that calculates hydrologic statistics based on a time series of daily streamflow values. EFASC will calculate statistics for daily streamflow in an input file or will generate synthetic daily flow series from an input file based on rules for allocating and protecting streamflow and then calculate statistics for the synthetic time series. The program reads dates and daily streamflow values from input files. The program writes statistics out to a series of worksheets and text files. Multiple sites can be processed in series as one run. EFASC is written in Microsoft Visual Basic for Applications and implemented as a macro in Microsoft Office Excel 2007. EFASC is intended as a research tool for users familiar with computer programming. The code for EFASC is provided so that it can be modified for specific applications. All users should review how output statistics are calculated and recognize that the algorithms may not comply with conventions used to calculate streamflow statistics published by the U.S. Geological Survey.
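EFASC itself is a VBA macro, but the kind of daily-flow statistics it computes can be illustrated with a short sketch; the synthetic flow series and the particular statistics chosen here (water-year mean, annual 7-day minimum, 90% exceedance flow) are examples, not EFASC's actual output set.

```python
import numpy as np
import pandas as pd

# Hypothetical daily streamflow series (cfs); EFASC reads comparable date/flow pairs
dates = pd.date_range("2000-10-01", "2010-09-30", freq="D")
rng = np.random.default_rng(0)
flow = pd.Series(np.exp(rng.normal(4.0, 0.6, len(dates))), index=dates)

annual_mean = flow.groupby(flow.index.year).mean()                    # mean flow per calendar year
seven_day_min = flow.rolling(7).mean().groupby(flow.index.year).min() # annual 7-day low flow
exceedance_90 = flow.quantile(0.10)                                   # flow exceeded 90% of the time

print(annual_mean.head())
print(seven_day_min.head())
print(exceedance_90)
```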
NASA Astrophysics Data System (ADS)
Burrello, S.; Gulminelli, F.; Aymard, F.; Colonna, M.; Raduta, Ad. R.
2015-11-01
Background: Superfluidity in the crust is a key ingredient for the cooling properties of proto-neutron stars. Present theoretical calculations employ the quasiparticle mean-field Hartree-Fock-Bogoliubov theory with temperature-dependent occupation numbers for the quasiparticle states. Purpose: Finite temperature stellar matter is characterized by a whole distribution of different nuclear species. We want to assess the importance of this distribution on the calculation of heat capacity in the inner crust. Method: Following a recent work, the Wigner-Seitz cell is mapped into a model with cluster degrees of freedom. The finite temperature distribution is then given by a statistical collection of Wigner-Seitz cells. We additionally introduce pairing correlations in the local density BCS approximation both in the homogeneous unbound neutron component, and in the interface region between clusters and neutrons. Results: The heat capacity is calculated in the different baryonic density conditions corresponding to the inner crust, and in a temperature range varying from 100 keV to 2 MeV. We show that accounting for the cluster distribution has a small effect at intermediate densities, but it considerably affects the heat capacity both close to the outer crust and close to the core. We additionally show that it is very important to consider the temperature evolution of the proton fraction for a quantitatively reliable estimation of the heat capacity. Conclusions: We present the first modelization of stellar matter containing at the same time a statistical distribution of clusters at finite temperature, and pairing correlations in the unbound neutron component. The effect of the nuclear distribution on the superfluid properties can be easily added in future calculations of the neutron star cooling curves. A strong influence of resonance population on the heat capacity at high temperature is observed, which deserves to be further studied within more microscopic calculations.
A comparison of linear and nonlinear statistical techniques in performance attribution.
Chan, N H; Genovese, C R
2001-01-01
Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks using factors derived from some commonly used cross sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on standard linear multifactor model and three nonlinear techniques-model selection, additive models, and neural networks-are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.
NASA Astrophysics Data System (ADS)
Turkoglu, Danyal
Precise knowledge of prompt gamma-ray intensities following neutron capture is critical for elemental and isotopic analyses, homeland security, modeling nuclear reactors, etc. A recently-developed database of prompt gamma-ray production cross sections and nuclear structure information in the form of a decay scheme, called the Evaluated Gamma-ray Activation File (EGAF), is under revision. Statistical model calculations are useful for checking the consistency of the decay scheme, providing insight on its completeness and accuracy. Furthermore, these statistical model calculations are necessary to estimate the contribution of continuum gamma-rays, which cannot be experimentally resolved due to the high density of excited states in medium- and heavy-mass nuclei. Decay-scheme improvements in EGAF lead to improvements to other databases (Evaluated Nuclear Structure Data File, Reference Input Parameter Library) that are ultimately used in nuclear-reaction models to generate the Evaluated Nuclear Data File (ENDF). Gamma-ray transitions following neutron capture in 93Nb have been studied at the cold-neutron beam facility at the Budapest Research Reactor. Measurements have been performed using a coaxial HPGe detector with Compton suppression. Partial gamma-ray production capture cross sections at a neutron velocity of 2200 m/s have been deduced relative to that of the 255.9-keV transition after cold-neutron capture by 93Nb. With the measurement of a niobium chloride target, this partial cross section was internally standardized to the cross section for the 1951-keV transition after cold-neutron capture by 35Cl. The resulting (0.1377 +/- 0.0018) barn (b) partial cross section produced a calibration factor that was 23% lower than previously measured for the EGAF database. The thermal-neutron cross sections were deduced for the 93Nb(n,gamma)94mNb and 93Nb(n,gamma)94gNb reactions by summing the experimentally-measured partial gamma-ray production cross sections associated with the ground-state transitions below the 396-keV level and combining that summation with the contribution to the ground state from the quasi-continuum above 396 keV, determined with Monte Carlo statistical model calculations using the DICEBOX computer code. These values, sigma_m and sigma_0, were (0.83 +/- 0.05) b and (1.16 +/- 0.11) b, respectively, and found to be in agreement with literature values. Comparison of the modeled population and experimental depopulation of individual levels confirmed tentative spin assignments and suggested changes where imbalances existed.
NASA Astrophysics Data System (ADS)
Atıcı, Ramazan; Sağır, Selçuk
2017-07-01
In the present work, the relationship with the QBO of the difference (ΔfoE = foEmea - foEIRI) between critical frequency (foE) values of the ionospheric E-region, measured at the Darwin and Casos Island stations and calculated by the IRI-2012 ionospheric model, is statistically investigated. A multiple regression model is used as the statistical tool. "Dummy" variables ("DummyWest" and "DummyEast", representing westerly and easterly QBO values, respectively) are added to the model in order to see the effect of westerly and easterly QBO. The calculations show that about 50-52% of the changes in ΔfoE can be explained by the QBO at both stations. The relationship between QBO and ΔfoE is negative at both stations. A change of 1 m s-1 over the whole set of QBO values leads to a decrease in ΔfoE of 0.008 MHz at the Casos Island station and 0.017 MHz at the Darwin station. The direction of the QBO has an effect on ΔfoE at the Darwin station, but has no effect on ΔfoE at the Casos Island station. It is thought that the differences in foE arise because not all parameters affecting the critical frequency are included in the IRI model. Thus, the QBO, which is not included in the IRI model, can affect foE, and more accurate results could be obtained by the IRI model if the QBO were included in its calculations.
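A multiple regression with dummy variables of the kind described can be sketched as follows; the synthetic data and the particular dummy coding (an easterly-phase indicator plus an interaction slope) are assumptions for illustration only, not the authors' exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical monthly data: QBO zonal wind (m/s) and Delta foE = foE_measured - foE_IRI (MHz)
rng = np.random.default_rng(1)
qbo = rng.uniform(-30, 20, 120)
dfoE = -0.01 * qbo + rng.normal(0, 0.05, 120)

df = pd.DataFrame({"dfoE": dfoE, "qbo": qbo})
df["dummy_east"] = (df.qbo < 0).astype(int)  # easterly (negative) QBO phase indicator
df["qbo_east"] = df.qbo * df.dummy_east      # allows a separate slope in the easterly phase

X = sm.add_constant(df[["qbo", "dummy_east", "qbo_east"]])
model = sm.OLS(df["dfoE"], X).fit()
print(model.summary().tables[1])  # coefficients, t-statistics, p-values
```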
Probability of identity by descent in metapopulations.
Kaj, I; Lascoux, M
1999-01-01
Equilibrium probabilities of identity by descent (IBD), for pairs of genes within individuals, for genes between individuals within subpopulations, and for genes between subpopulations are calculated in metapopulation models with fixed or varying colony sizes. A continuous-time analog to the Moran model was used in either case. For fixed-colony size both propagule and migrant pool models were considered. The varying population size model is based on a birth-death-immigration (BDI) process, to which migration between colonies is added. Wright's F statistics are calculated and compared to previous results. Adding between-island migration to the BDI model can have an important effect on the equilibrium probabilities of IBD and on Wright's index. PMID:10388835
Teaching Statistics Online Using "Excel"
ERIC Educational Resources Information Center
Jerome, Lawrence
2011-01-01
As anyone who has taught or taken a statistics course knows, statistical calculations can be tedious and error-prone, with the details of a calculation sometimes distracting students from understanding the larger concepts. Traditional statistics courses typically use scientific calculators, which can relieve some of the tedium and errors but…
On Using the Weimer Statistical Model for Real-Time Ionospheric Specifications and Forecasts
NASA Astrophysics Data System (ADS)
Bekerat, H. A.; Schunk, R. W.; Scherliess, L.
2002-12-01
The Weimer statistical model (Weimer, 2001) for the high-latitude convection pattern was tested with regard to its ability to produce real-time convection patterns. This work is being conducted under the polar section of GAIM (Global Assimilation of Ionospheric Measurements). The method adopted involves the comparison of the cross-track ion drift velocities measured by DMSP satellites with those calculated from the Weimer model. Starting with a Weimer pattern obtained using real-time IMF and solar wind data at the time of a DMSP satellite pass in the high-latitude ionosphere, the cross-track ion drift velocities along the DMSP track were calculated from the Weimer convection model and compared to those measured by the DMSP satellite. Then, in order to improve the agreement between the measurement and the model, two of the input parameters to the model, the IMF clock-angle and the solar wind speed, were varied to get the pattern that gives the best agreement with the DMSP satellite measurements. Four months of data (March, July, September, and December 1998) were used to test the Weimer model. The result shows that the agreement between the measurement and the Weimer model is improved by using this procedure. The Weimer model is good in a statistical sense: it was able to produce the large-scale structure in most cases. However, it is not good enough to be used for real-time ionospheric specifications and forecasts because it failed to produce much of the mesoscale structure measured along most DMSP satellite passes. Reference: Weimer, D. R., J. Geophys. Res., 106, 407, 2001.
Non-stationary background intensity and Caribbean seismic events
NASA Astrophysics Data System (ADS)
Valmy, Larissa; Vaillant, Jean
2014-05-01
We consider seismic risk calculation based on models with non-stationary background intensity. The aim is to improve predictive strategies in the framework of seismic risk assessment, using models that best describe the seismic activity in the Caribbean arc. Appropriate statistical methods are required for analyzing the volumes of data collected. The focus is on calculating the probability of earthquake occurrence and analyzing the spatiotemporal evolution of these probabilities. The main modeling tool is point process theory, which allows the past history prior to a given date to be taken into account. Thus, the seismic event conditional intensity is expressed by means of the background intensity and the self-exciting component. This intensity can be interpreted as the expected event rate per unit time and/or surface area. The most popular intensity model in seismology is the ETAS (Epidemic Type Aftershock Sequence) model introduced and then generalized by Ogata [2, 3]. We extended this model and performed a comparison of different probability density functions for the triggered event times [4]. We illustrate our model by considering the CDSA (Centre de Données Sismiques des Antilles) catalog [1], which contains more than 7000 seismic events that occurred in the Lesser Antilles arc. Statistical tools for testing the background intensity stationarity and for dynamical segmentation are presented. [1] Bengoubou-Valérius M., Bazin S., Bertil D., Beauducel F. and Bosson A. (2008). CDSA: a new seismological data center for the French Lesser Antilles, Seismol. Res. Lett., 79 (1), 90-102. [2] Ogata Y. (1998). Space-time point-process models for earthquake occurrences, Annals of the Institute of Statistical Mathematics, 50 (2), 379-402. [3] Ogata, Y. (2011). Significant improvements of the space-time ETAS model for forecasting of accurate baseline seismicity, Earth, Planets and Space, 63 (3), 217-229. [4] Valmy L. and Vaillant J. (2013). Statistical models in seismology: Lesser Antilles arc case, Bull. Soc. géol. France, 2013, 184 (1), 61-67.
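The conditional intensity referred to above, a background rate plus a self-exciting aftershock term, can be sketched for the purely temporal ETAS case as lambda(t) = mu + sum over past events of K*exp(alpha*(m_i - m0))/(t - t_i + c)^p. The parameter values and toy catalogue below are illustrative and are not fitted to the CDSA data.

```python
import numpy as np

def etas_intensity(t, event_times, magnitudes, mu, K, alpha, c, p, m0):
    """Temporal ETAS conditional intensity: background rate plus the summed
    aftershock contributions triggered by all events occurring before time t."""
    past = event_times < t
    dt = t - event_times[past]
    triggered = K * np.exp(alpha * (magnitudes[past] - m0)) / (dt + c) ** p
    return mu + triggered.sum()

# Toy catalogue (times in days, magnitudes) and illustrative parameter values
times = np.array([1.0, 3.5, 3.6, 10.2])
mags = np.array([4.8, 5.6, 4.2, 4.5])
lam = etas_intensity(t=12.0, event_times=times, magnitudes=mags,
                     mu=0.02, K=0.05, alpha=1.2, c=0.01, p=1.1, m0=4.0)
print(lam)  # expected events per day at t = 12
```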
Numerical investigation of turbulent channel flow
NASA Technical Reports Server (NTRS)
Moin, P.; Kim, J.
1981-01-01
Fully developed turbulent channel flow was simulated numerically at Reynolds number 13800, based on centerline velocity and channel half width. The large-scale flow field was obtained by directly integrating the filtered, three dimensional, time dependent, Navier-Stokes equations. The small-scale field motions were simulated through an eddy viscosity model. The calculations were carried out on the ILLIAC IV computer with up to 516,096 grid points. The computed flow field was used to study the statistical properties of the flow as well as its time dependent features. The agreement of the computed mean velocity profile, turbulence statistics, and detailed flow structures with experimental data is good. The resolvable portion of the statistical correlations appearing in the Reynolds stress equations are calculated. Particular attention is given to the examination of the flow structure in the vicinity of the wall.
Nonlinear wave chaos: statistics of second harmonic fields.
Zhou, Min; Ott, Edward; Antonsen, Thomas M; Anlage, Steven M
2017-10-01
Concepts from the field of wave chaos have been shown to successfully predict the statistical properties of linear electromagnetic fields in electrically large enclosures. The Random Coupling Model (RCM) describes these properties by incorporating both universal features described by Random Matrix Theory and the system-specific features of particular system realizations. In an effort to extend this approach to the nonlinear domain, we add an active nonlinear frequency-doubling circuit to an otherwise linear wave chaotic system, and we measure the statistical properties of the resulting second harmonic fields. We develop an RCM-based model of this system as two linear chaotic cavities coupled by means of a nonlinear transfer function. The harmonic field strengths are predicted to be the product of two statistical quantities and the nonlinearity characteristics. Statistical results from measurement-based calculation, RCM-based simulation, and direct experimental measurements are compared and show good agreement over many decades of power.
Computational simulation of the creep-rupture process in filamentary composite materials
NASA Technical Reports Server (NTRS)
Slattery, Kerry T.; Hackett, Robert M.
1991-01-01
A computational simulation of the internal damage accumulation which causes the creep-rupture phenomenon in filamentary composite materials is developed. The creep-rupture process involves complex interactions between several damage mechanisms. A statistically-based computational simulation using a time-differencing approach is employed to model these progressive interactions. The finite element method is used to calculate the internal stresses. The fibers are modeled as a series of bar elements which are connected transversely by matrix elements. Flaws are distributed randomly throughout the elements in the model. Load is applied, and the properties of the individual elements are updated at the end of each time step as a function of the stress history. The simulation is continued until failure occurs. Several cases, with different initial flaw dispersions, are run to establish a statistical distribution of the time-to-failure. The calculations are performed on a supercomputer. The simulation results compare favorably with the results of creep-rupture experiments conducted at the Lawrence Livermore National Laboratory.
VizieR Online Data Catalog: Brussels nuclear reaction rate library (Aikawa+, 2005)
NASA Astrophysics Data System (ADS)
Aikawa, M.; Arnould, M.; Goriely, S.; Jorissen, A.; Takahashi, K.
2005-07-01
The present data is part of the Brussels nuclear reaction rate library (BRUSLIB) for astrophysics applications and concerns nuclear reaction rate predictions calculated within the statistical Hauser-Feshbach approximation and making use of global and coherent microscopic nuclear models for the quantities (nuclear masses, nuclear structure properties, nuclear level densities, gamma-ray strength functions, optical potentials) entering the rate calculations. (4 data files).
NASA Astrophysics Data System (ADS)
Smolina, Irina Yu.
2015-10-01
Mechanical properties of a cable are of great importance in the design and strength calculation of flexible cables. The problem of determining the elastic properties and rigidity characteristics of a cable modeled as an anisotropic helical elastic rod is considered. These characteristics are calculated indirectly by means of parameters obtained from statistical processing of experimental data. These parameters are treated as random quantities. Taking into account the probabilistic nature of these parameters, formulas for estimating the macroscopic elastic moduli of a cable are obtained. Expressions for calculating the macroscopic flexural rigidity, shear rigidity and torsion rigidity from the macroscopic elastic characteristics obtained before are presented. Statistical estimates of the rigidity characteristics of some cable grades are given, together with a comparison against the characteristics obtained with a deterministic approach.
NASA Astrophysics Data System (ADS)
Szücs, T.; Kiss, G. G.; Gyürky, Gy.; Halász, Z.; Fülöp, Zs.; Rauscher, T.
2018-01-01
The stellar reaction rates of radiative α-capture reactions on heavy isotopes are of crucial importance for the γ process network calculations. These rates are usually derived from statistical model calculations, which need to be validated, but the experimental database is very scarce. This paper presents the results of α-induced reaction cross section measurements on iridium isotopes carried out for the first time close to the astrophysically relevant energy region. Thick target yields of 191Ir(α,γ)195Au, 191Ir(α,n)194Au, 193Ir(α,n)196mAu, 193Ir(α,n)196Au reactions have been measured with the activation technique between Eα = 13.4 MeV and 17 MeV. For the first time the thick target yield was determined with X-ray counting. This led to unprecedented sensitivity. From the measured thick target yields, reaction cross sections are derived and compared with statistical model calculations. The recently suggested energy-dependent modification of the α + nucleus optical potential gives a good description of the experimental data.
Wang, Juan; Wang, Jian Lin; Liu, Jia Bin; Jiang, Wen; Zhao, Chang Xing
2017-06-18
The dynamic variations of evapotranspiration (ET) and weather data during the summer maize growing seasons in 2013-2015 were monitored with an eddy covariance system, and the applicability of two operational models (the FAO-PM model and the KP-PM model), both based on the Penman-Monteith model, was analyzed. Firstly, the key parameters in the two models were calibrated with the measured data in 2013 and 2014; secondly, the daily ET in 2015 calculated by the FAO-PM model and the KP-PM model was compared to the observed ET. Finally, the coefficients in the KP-PM model were further revised using coefficients calculated separately for the different growth stages, and the performance of the revised KP-PM model was also evaluated. The statistical comparisons indicated that the daily ET for 2015 calculated by the FAO-PM model was closer to the observed ET than that by the KP-PM model, while the daily ET calculated from the revised KP-PM model was more accurate than that from the FAO-PM model. It was also found that the key parameters in the two models were correlated with weather conditions, so calibration is necessary before using the models to predict ET. The above results could provide some guidelines on predicting ET with the two models.
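The FAO-PM model mentioned above builds on the FAO-56 Penman-Monteith reference evapotranspiration equation, which can be sketched as follows; the input values are illustrative, and the paper's calibrated coefficients and the KP-PM variant are not reproduced here.

```python
import math

def fao56_reference_et(T, Rn, G, u2, ea, P=101.3):
    """Daily FAO-56 Penman-Monteith reference evapotranspiration (mm/day).

    T: mean air temperature (deg C), Rn: net radiation (MJ m-2 d-1),
    G: soil heat flux (MJ m-2 d-1), u2: wind speed at 2 m (m/s),
    ea: actual vapour pressure (kPa), P: atmospheric pressure (kPa).
    """
    es = 0.6108 * math.exp(17.27 * T / (T + 237.3))  # saturation vapour pressure (kPa)
    delta = 4098.0 * es / (T + 237.3) ** 2           # slope of the vapour pressure curve
    gamma = 0.000665 * P                             # psychrometric constant
    num = 0.408 * delta * (Rn - G) + gamma * 900.0 / (T + 273.0) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

# Illustrative mid-season day
print(fao56_reference_et(T=25.0, Rn=14.0, G=0.5, u2=2.0, ea=1.8))
```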
Comparing The Effectiveness of a90/95 Calculations (Preprint)
2006-09-01
Nachtsheim, John Neter, William Li, Applied Linear Statistical Models, 5th ed., McGraw-Hill/Irwin, 2005. 5. Mood, Graybill and Boes, Introduction... curves is based on methods that are only valid for ordinary linear regression. Requirements for a valid Ordinary Least-Squares Regression Model: There... linear. For example is a linear model; is not. 2. Uniform variance (homoscedasticity
Yield modeling of acoustic charge transport transversal filters
NASA Technical Reports Server (NTRS)
Kenney, J. S.; May, G. S.; Hunt, W. D.
1995-01-01
This paper presents a yield model for acoustic charge transport transversal filters. This model differs from previous IC yield models in that it does not assume that individual failures of the nondestructive sensing taps necessarily cause a device failure. A redundancy in the number of taps included in the design is explained. Poisson statistics are used to describe the tap failures, weighted over a uniform defect density distribution. A representative design example is presented. The minimum number of taps needed to realize the filter is calculated, and tap weights for various numbers of redundant taps are calculated. The critical area for device failure is calculated for each level of redundancy. Yield is predicted for a range of defect densities and redundancies. To verify the model, a Monte Carlo simulation is performed on an equivalent circuit model of the device. The results of the yield model are then compared to the Monte Carlo simulation. Better than 95% agreement was obtained for the Poisson model with redundant taps ranging from 30% to 150% over the minimum.
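The redundancy argument can be sketched with a simple calculation, assuming each tap fails independently according to Poisson defect statistics (probability exp(-D*A) of being defect-free for defect density D and critical area A) and averaging the at-least-N_min binomial yield over a uniform defect-density distribution. The numbers below are invented for illustration and are not the paper's design values.

```python
import numpy as np
from scipy.stats import binom

def tap_yield(defect_density, area_per_tap):
    """Probability that a single sensing tap is defect-free (Poisson statistics)."""
    return np.exp(-defect_density * area_per_tap)

def device_yield(defect_density, n_taps, n_min, area_per_tap):
    """The device works if at least n_min of n_taps taps survive."""
    p = tap_yield(defect_density, area_per_tap)
    return binom.sf(n_min - 1, n_taps, p)  # P(working taps >= n_min)

def average_yield(d_max, n_taps, n_min, area_per_tap, n_grid=200):
    """Average the yield over a uniform defect-density distribution on [0, d_max]."""
    d = np.linspace(0.0, d_max, n_grid)
    return device_yield(d, n_taps, n_min, area_per_tap).mean()

# Illustrative numbers: 0.05 cm^2 critical area per tap, 60 taps required, 30% redundancy
print(average_yield(d_max=5.0, n_taps=78, n_min=60, area_per_tap=0.05))
```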
NASA Astrophysics Data System (ADS)
Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusof, Z.; Tehrany, M. S.
2014-10-01
Modeling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modeling. Bivariate statistical analysis (BSA) assists in hazard modeling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, BSM (bivariate statistical modeler), for BSA technique is proposed. Three popular BSA techniques such as frequency ratio, weights-of-evidence, and evidential belief function models are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and is created by a simple graphical user interface, which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. Area under curve is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
NASA Astrophysics Data System (ADS)
Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusoff, Z. M.; Tehrany, M. S.
2015-03-01
Modelling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modelling. Bivariate statistical analysis (BSA) assists in hazard modelling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time-consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, bivariate statistical modeler (BSM), for BSA technique is proposed. Three popular BSA techniques, such as frequency ratio, weight-of-evidence (WoE), and evidential belief function (EBF) models, are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and created by a simple graphical user interface (GUI), which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. Area under curve (AUC) is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
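Of the three techniques implemented by the tool, the frequency ratio is the simplest to sketch: for each class of a conditioning factor it is the share of hazard events in that class divided by the share of the study area in that class. The pixel counts below are hypothetical.

```python
import pandas as pd

# Hypothetical inventory: pixel counts per slope class and landslide pixels within each class
data = pd.DataFrame({
    "class": ["0-10", "10-20", "20-30", ">30"],
    "class_pixels": [50000, 30000, 15000, 5000],
    "event_pixels": [40, 120, 160, 80],
})

data["pct_area"] = data.class_pixels / data.class_pixels.sum()
data["pct_events"] = data.event_pixels / data.event_pixels.sum()
data["frequency_ratio"] = data.pct_events / data.pct_area  # FR > 1: above-average susceptibility
print(data[["class", "frequency_ratio"]])
```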
Regression modeling of ground-water flow
Cooley, R.L.; Naff, R.L.
1985-01-01
Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)
ERIC Educational Resources Information Center
1971
Computers have effected a comprehensive transformation of chemistry. Computers have greatly enhanced the chemist's ability to do model building, simulations, data refinement and reduction, analysis of data in terms of models, on-line data logging, automated control of experiments, quantum chemistry and statistical and mechanical calculations, and…
Statistical Evaluation of Utilization of the ISS
NASA Technical Reports Server (NTRS)
Andrews, Ross; Andrews, Alida
2006-01-01
PayLoad Utilization Modeler (PLUM) is a statistical-modeling computer program used to evaluate the effectiveness of utilization of the International Space Station (ISS) in terms of the number of research facilities that can be operated within a specified interval of time. PLUM is designed to balance the requirements of research facilities aboard the ISS against the resources available on the ISS. PLUM comprises three parts: an interface for the entry of data on constraints and on required and available resources, a database that stores these data as well as the program output, and a modeler. The modeler comprises two subparts: one that generates tens of thousands of random combinations of research facilities and another that calculates the usage of resources for each of those combinations. The results of these calculations are used to generate graphical and tabular reports to determine which facilities are most likely to be operable on the ISS, to identify which ISS resources are inadequate to satisfy the demands upon them, and to generate other data useful in allocation of and planning of resources.
Microscopic calculations of liquid and solid neutron star matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakravarty, Sudip; Miller, Michael D.; Chia-Wei, Woo
1974-02-01
As the first step to a microscopic determination of the solidification density of neutron star matter, variational calculations are performed for both liquid and solid phases using a very simple model potential. The potential, containing only the repulsive part of the Reid 1S0 interaction, together with Boltzmann statistics defines a "homework problem" which several groups involved in solidification calculations have agreed to solve. The results were to be compared for the purpose of checking calculational techniques. For the solid energy good agreement with Canuto and Chitre was found. Both the liquid and solid energies are much lower than those of Pandharipande. It is shown that for this oversimplified model, neutron star matter will remain solid down to ordinary nuclear matter density.
Empirical estimation of astrophysical photodisintegration rates of 106Cd
NASA Astrophysics Data System (ADS)
Belyshev, S. S.; Kuznetsov, A. A.; Stopani, K. A.
2017-09-01
It has been noted in previous experiments that the ratio between the photoneutron and photoproton disintegration channels of 106Cd might be considerably different from predictions of statistical models. The thresholds of these reactions differ by several MeV and the total astrophysical rate of photodisintegration of 106Cd, which is mostly produced in photonuclear reactions during the p-process nucleosynthesis, might be noticeably different from the calculated value. In this work the bremsstrahlung beam of a 55.6 MeV microtron and the photon activation technique is used to measure yields of photonuclear reaction products on isotopically-enriched cadmium targets. The obtained results are compared with predictions of statistical models. The experimental yields are used to estimate photodisintegration reaction rates on 106Cd, which are then used in nuclear network calculations to examine the effects of uncertainties on the produced abundances of p-nuclei.
Liu, Jian; Pedroza, Luana S; Misch, Carissa; Fernández-Serra, Maria V; Allen, Philip B
2014-07-09
We present total energy and force calculations for the (GaN)1-x(ZnO)x alloy. Site-occupancy configurations are generated from Monte Carlo (MC) simulations, on the basis of a cluster expansion model proposed in a previous study. Local atomic coordinate relaxations of surprisingly large magnitude are found via density-functional calculations using a 432-atom periodic supercell, for three representative configurations at x = 0.5. These are used to generate bond-length distributions. The configurationally averaged composition- and temperature-dependent short-range order (SRO) parameters of the alloys are discussed. The entropy is approximated in terms of pair distribution statistics and thus related to SRO parameters. This approximate entropy is compared with accurate numerical values from MC simulations. An empirical model for the dependence of the bond length on the local chemical environments is proposed.
Multi-fidelity machine learning models for accurate bandgap predictions of solids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab
2016-12-28
Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. In using a set of 600 elpasolite compounds as an example dataset and using semi-local and hybrid exchange correlation functionals within density functional theory as two levels of fidelities, we demonstrate the excellent learning performance of the method against actual high fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high throughput property predictions in a significant way.
NASA Technical Reports Server (NTRS)
Macfarlane, J. J.
1984-01-01
A model free energy is developed for hydrogen-helium mixtures based on solid-state Thomas-Fermi-Dirac calculations at pressures relevant to the interiors of giant planets. Using a model potential similar to that for a two-component plasma, effective charges for the nuclei (which are in general smaller than the actual charges because of screening effects) are parameterized, being constrained by calculations at a number of densities, compositions, and lattice structures. These model potentials are then used to compute the equilibrium properties of H-He fluids using a charged hard-sphere model. The results find critical temperatures of about 0 K, 500 K, and 1500 K, for pressures of 10, 100, and 1000 Mbar, respectively. These phase separation temperatures are considerably lower than those found from calculations using free electron perturbation theory (approximately 6,000-10,000 K), and suggest that H-He solutions should be stable against phase separation in the metallic zones of Jupiter and Saturn.
NASA Astrophysics Data System (ADS)
Beecken, B. P.; Fossum, E. R.
1996-07-01
Standard statistical theory is used to calculate how the accuracy of a conversion-gain measurement depends on the number of samples. During the development of a theoretical basis for this calculation, a model is developed that predicts how the noise levels from different elements of an ideal detector array are distributed. The model can also be used to determine what dependence the accuracy of measured noise has on the size of the sample. These features have been confirmed by experiment, thus enhancing the credibility of the method for calculating the uncertainty of a measured conversion gain. Keywords: detector-array uniformity, charge-coupled device, active pixel sensor.
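For illustration, a common mean-variance (photon transfer) estimate of conversion gain and its sample-size-driven uncertainty can be sketched as below; this is a generic textbook estimator applied to synthetic frames, not the exact statistical treatment derived in the paper.

```python
# Mean-variance (photon transfer) estimate of conversion gain from synthetic frames.
# This is a generic illustration, not the exact estimator derived in the paper.
import numpy as np

rng = np.random.default_rng(1)
true_gain_e_per_dn = 4.0
mean_electrons = 20000.0
n_pixels = 10000

def simulate_frame():
    # Shot-noise-limited pixel values, converted from electrons to digital numbers.
    electrons = rng.poisson(mean_electrons, size=n_pixels)
    return electrons / true_gain_e_per_dn

# Differencing two flat frames removes fixed-pattern nonuniformity;
# the variance of the difference is twice the temporal variance.
f1, f2 = simulate_frame(), simulate_frame()
signal_dn = 0.5 * (f1.mean() + f2.mean())
var_dn = np.var(f1 - f2, ddof=1) / 2.0
gain_estimate = signal_dn / var_dn            # e-/DN for a shot-noise-limited signal

# The relative uncertainty of a sample variance scales roughly as sqrt(2/(N-1)),
# which is how the measurement accuracy depends on the number of samples N.
rel_unc = np.sqrt(2.0 / (n_pixels - 1))
print(f"estimated gain = {gain_estimate:.2f} e-/DN (+/- ~{100*rel_unc:.1f}% statistical)")
```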
Pressure calculation in hybrid particle-field simulations
NASA Astrophysics Data System (ADS)
Milano, Giuseppe; Kawakatsu, Toshihiro
2010-12-01
In the framework of a recently developed scheme for hybrid particle-field simulation techniques, in which self-consistent field (SCF) theory and particle models (molecular dynamics) are combined [J. Chem. Phys. 130, 214106 (2009)], we developed a general formulation for the calculation of the instantaneous pressure and stress tensor. The expressions have been derived from the statistical mechanical definition of the pressure, starting from the expression for the free energy functional in the SCF theory. An implementation of the derived formulation suitable for hybrid particle-field molecular dynamics-self-consistent field simulations is described. A series of test simulations on model systems is reported, comparing the calculated pressures with those obtained from standard molecular dynamics simulations based on pair potentials.
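For context, the reference quantity in such comparisons, the instantaneous virial pressure of a standard pair-potential MD system, can be sketched as follows. This is the textbook virial expression, not the SCF-based formulation derived in the paper, and the force law and configuration below are hypothetical.

```python
# Standard virial pressure for a pair-potential MD system (the reference the hybrid
# particle-field result is compared against); not the SCF-based expression itself.
import numpy as np

def virial_pressure(positions, velocities, masses, box, pair_force):
    """P = (N*kB*T + (1/3) * sum_{i<j} r_ij . f_ij) / V, with kB*T from equipartition."""
    n, volume = len(positions), np.prod(box)
    kinetic = 0.5 * np.sum(masses[:, None] * velocities**2)
    kB_T = 2.0 * kinetic / (3.0 * n)                  # kB absorbed into the units
    virial = 0.0
    for i in range(n - 1):
        rij = positions[i + 1:] - positions[i]
        rij -= box * np.round(rij / box)              # minimum-image convention
        r = np.linalg.norm(rij, axis=1)
        fij = pair_force(r)[:, None] * rij / r[:, None]
        virial += np.sum(rij * fij)
    return (n * kB_T + virial / 3.0) / volume

# Example with a purely repulsive soft-sphere force f(r) = 24/r**13 (hypothetical units).
rng = np.random.default_rng(2)
box = np.array([10.0, 10.0, 10.0])
pos = rng.uniform(0, 10, size=(64, 3))
vel = rng.normal(0, 1, size=(64, 3))
mass = np.ones(64)
print("instantaneous pressure:", virial_pressure(pos, vel, mass, box, lambda r: 24.0 / r**13))
```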
Ignjatović, Aleksandra; Stojanović, Miodrag; Milošević, Zoran; Anđelković Apostolović, Marija
2017-12-02
The interest in developing risk models in medicine not only is appealing, but also associated with many obstacles in different aspects of predictive model development. Initially, the association of biomarkers or the association of more markers with the specific outcome was proven by statistical significance, but novel and demanding questions required the development of new and more complex statistical techniques. Progress of statistical analysis in biomedical research can be observed the best through the history of the Framingham study and development of the Framingham score. Evaluation of predictive models comes from a combination of the facts which are results of several metrics. Using logistic regression and Cox proportional hazards regression analysis, the calibration test, and the ROC curve analysis should be mandatory and eliminatory, and the central place should be taken by some new statistical techniques. In order to obtain complete information related to the new marker in the model, recently, there is a recommendation to use the reclassification tables by calculating the net reclassification index and the integrated discrimination improvement. Decision curve analysis is a novel method for evaluating the clinical usefulness of a predictive model. It may be noted that customizing and fine-tuning of the Framingham risk score initiated the development of statistical analysis. Clinically applicable predictive model should be a trade-off between all abovementioned statistical metrics, a trade-off between calibration and discrimination, accuracy and decision-making, costs and benefits, and quality and quantity of patient's life.
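The reclassification metrics recommended above can be illustrated with a short sketch of the categorical net reclassification improvement (NRI); the risk categories, cut-points, and data below are hypothetical.

```python
# Categorical net reclassification improvement (NRI) from old vs. new risk categories.
# Synthetic example; category cut-points and risks are hypothetical.
import numpy as np

def categorical_nri(risk_old, risk_new, events, cuts=(0.1, 0.2)):
    cat_old = np.digitize(risk_old, cuts)
    cat_new = np.digitize(risk_new, cuts)
    up, down = cat_new > cat_old, cat_new < cat_old
    ev = events.astype(bool)
    ne = ~ev
    nri_events = up[ev].mean() - down[ev].mean()        # events should move up
    nri_nonevents = down[ne].mean() - up[ne].mean()     # non-events should move down
    return nri_events + nri_nonevents, nri_events, nri_nonevents

rng = np.random.default_rng(3)
y = rng.binomial(1, 0.15, size=2000)
old = np.clip(0.15 + 0.10 * (y - 0.15) + rng.normal(0, 0.08, 2000), 0, 1)
new = np.clip(old + 0.05 * (y - 0.15) + rng.normal(0, 0.03, 2000), 0, 1)
total, nri_e, nri_ne = categorical_nri(old, new, y)
print(f"NRI = {total:.3f} (events {nri_e:.3f}, non-events {nri_ne:.3f})")
```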
Partial Coordination Numbers in Binary Metallic Glasses (Postprint)
2011-12-07
structural differences related to relative atom size and quench rate. The magnitude of chemical interactions between the atoms, eij, might also influence...vious calculations.[2] A statistical approach is used to develop the Zij equations from the product of four terms: (1) the number of reference sites...within experimental scatter. The development of equations for Zij from the ECP model uses a statistical view of topology, and the Zij values
Statistics of Lyapunov exponents of quasi-one-dimensional disordered systems
NASA Astrophysics Data System (ADS)
Zhang, Yan-Yang; Xiong, Shi-Jie
2005-10-01
Statistical properties of Lyapunov exponents (LE) are numerically calculated in a quasi-one-dimensional (1D) Anderson model, which is in a 2D or 3D lattice with a finite cross section. The single-parameter scaling (SPS) variable τ relating the Lyapunov exponents γ and their variances σ² by τ ≡ σ²L/⟨γ⟩ is calculated for different lateral coupling t and disorder strength W. In a wide range of t, τ is approximately independent of W, but it has different values for LEs in different channels. For small t, the distribution of the smallest LE is non-Gaussian and τ strongly depends on W, remarkably different from the 1D SPS hypothesis.
PyEvolve: a toolkit for statistical modelling of molecular evolution.
Butterfield, Andrew; Vedagiri, Vivek; Lang, Edward; Lawrence, Cath; Wakefield, Matthew J; Isaev, Alexander; Huttley, Gavin A
2004-01-05
Examining the distribution of variation has proven an extremely profitable technique in the effort to identify sequences of biological significance. Most approaches in the field, however, evaluate only the conserved portions of sequences - ignoring the biological significance of sequence differences. A suite of sophisticated likelihood based statistical models from the field of molecular evolution provides the basis for extracting the information from the full distribution of sequence variation. The number of different problems to which phylogeny-based maximum likelihood calculations can be applied is extensive. Available software packages that can perform likelihood calculations suffer from a lack of flexibility and scalability, or employ error-prone approaches to model parameterisation. Here we describe the implementation of PyEvolve, a toolkit for the application of existing, and development of new, statistical methods for molecular evolution. We present the object architecture and design schema of PyEvolve, which includes an adaptable multi-level parallelisation schema. The approach for defining new methods is illustrated by implementing a novel dinucleotide model of substitution that includes a parameter for mutation of methylated CpG's, which required 8 lines of standard Python code to define. Benchmarking was performed using either a dinucleotide or codon substitution model applied to an alignment of BRCA1 sequences from 20 mammals, or a 10 species subset. Up to five-fold parallel performance gains over serial were recorded. Compared to leading alternative software, PyEvolve exhibited significantly better real world performance for parameter rich models with a large data set, reducing the time required for optimisation from approximately 10 days to approximately 6 hours. PyEvolve provides flexible functionality that can be used either for statistical modelling of molecular evolution, or the development of new methods in the field. The toolkit can be used interactively or by writing and executing scripts. The toolkit uses efficient processes for specifying the parameterisation of statistical models, and implements numerous optimisations that make highly parameter rich likelihood functions solvable within hours on multi-cpu hardware. PyEvolve can be readily adapted in response to changing computational demands and hardware configurations to maximise performance. PyEvolve is released under the GPL and can be downloaded from http://cbis.anu.edu.au/software.
Probabilistic arithmetic automata and their applications.
Marschall, Tobias; Herms, Inke; Kaltenbach, Hans-Michael; Rahmann, Sven
2012-01-01
We present a comprehensive review on probabilistic arithmetic automata (PAAs), a general model to describe chains of operations whose operands depend on chance, along with two algorithms to numerically compute the distribution of the results of such probabilistic calculations. PAAs provide a unifying framework to approach many problems arising in computational biology and elsewhere. We present five different applications, namely 1) pattern matching statistics on random texts, including the computation of the distribution of occurrence counts, waiting times, and clump sizes under hidden Markov background models; 2) exact analysis of window-based pattern matching algorithms; 3) sensitivity of filtration seeds used to detect candidate sequence alignments; 4) length and mass statistics of peptide fragments resulting from enzymatic cleavage reactions; and 5) read length statistics of 454 and IonTorrent sequencing reads. The diversity of these applications indicates the flexibility and unifying character of the presented framework. While the construction of a PAA depends on the particular application, we single out a frequently applicable construction method: We introduce deterministic arithmetic automata (DAAs) to model deterministic calculations on sequences, and demonstrate how to construct a PAA from a given DAA and a finite-memory random text model. This procedure is used for all five discussed applications and greatly simplifies the construction of PAAs. Implementations are available as part of the MoSDi package. Its application programming interface facilitates the rapid development of new applications based on the PAA framework.
Liver segmentation from CT images using a sparse priori statistical shape model (SP-SSM).
Wang, Xuehu; Zheng, Yongchang; Gan, Lan; Wang, Xuan; Sang, Xinting; Kong, Xiangfeng; Zhao, Jie
2017-01-01
This study proposes a new liver segmentation method based on a sparse a priori statistical shape model (SP-SSM). First, mark points are selected in the liver a priori model and the original image. Then, the a priori shape and its mark points are used to obtain a dictionary for the liver boundary information. Second, the sparse coefficient is calculated based on the correspondence between mark points in the original image and those in the a priori model, and then the sparse statistical model is established by combining the sparse coefficients and the dictionary. Finally, the intensity energy and boundary energy models are built based on the intensity information and the specific boundary information of the original image. Then, the sparse matching constraint model is established based on the sparse coding theory. These models jointly drive the iterative deformation of the sparse statistical model to approximate and accurately extract the liver boundaries. This method can solve the problems of deformation model initialization and a priori method accuracy using the sparse dictionary. The SP-SSM can achieve a mean overlap error of 4.8% and a mean volume difference of 1.8%, whereas the average symmetric surface distance and the root mean square symmetric surface distance can reach 0.8 mm and 1.4 mm, respectively.
NASA Astrophysics Data System (ADS)
Antoniuk, Oleg; Sprik, Rudolf
2010-03-01
We developed a random matrix model to describe the statistics of resonances in an acoustic cavity with broken time-reversal invariance. Time-reversal invariance breaking is achieved by connecting an amplified feedback loop between two transducers on the surface of the cavity. The model is based on the approach of [1], which describes the time-reversal properties of the cavity without a feedback loop. Statistics of the eigenvalues (nearest-neighbor resonance spacing distributions and spectral rigidity) have been calculated and compared to the statistics obtained from our experimental data. Experiments have been performed on an aluminum block of chaotic shape confining ultrasound waves. [1] Carsten Draeger and Mathias Fink, One-channel time-reversal in chaotic cavities: Theoretical limits, Journal of the Acoustical Society of America, vol. 105, no. 2, pp. 611-617 (1999).
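As a generic illustration of the eigenvalue statistics compared in such studies, the sketch below samples Gaussian orthogonal ensemble (GOE) matrices and compares their nearest-neighbor spacing distribution with the Wigner surmise. It uses a crude local unfolding and is not the broken-time-reversal (feedback-loop) model itself.

```python
# Nearest-neighbor spacing distribution of GOE random matrices vs. the Wigner surmise.
# Generic random-matrix illustration of the statistics discussed above.
import numpy as np

rng = np.random.default_rng(4)

def goe_spacings(dim=200, n_matrices=50):
    spacings = []
    for _ in range(n_matrices):
        a = rng.normal(size=(dim, dim))
        h = (a + a.T) / 2.0                       # real symmetric -> GOE statistics
        ev = np.sort(np.linalg.eigvalsh(h))
        bulk = ev[int(0.4 * dim): int(0.6 * dim)]  # keep the bulk, where the density is flat
        s = np.diff(bulk)
        spacings.append(s / s.mean())             # crude local unfolding by the mean spacing
    return np.concatenate(spacings)

s = goe_spacings()
# Wigner surmise for GOE: p(s) = (pi/2) s exp(-pi s^2 / 4)
hist, edges = np.histogram(s, bins=30, range=(0, 3), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
wigner = 0.5 * np.pi * centers * np.exp(-np.pi * centers**2 / 4.0)
print("max |empirical - Wigner| over bins:", np.abs(hist - wigner).max())
```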
Moss, Travis J; Clark, Matthew T; Calland, James Forrest; Enfield, Kyle B; Voss, John D; Lake, Douglas E; Moorman, J Randall
2017-01-01
Charted vital signs and laboratory results represent intermittent samples of a patient's dynamic physiologic state and have been used to calculate early warning scores to identify patients at risk of clinical deterioration. We hypothesized that the addition of cardiorespiratory dynamics measured from continuous electrocardiography (ECG) monitoring to intermittently sampled data improves the predictive validity of models trained to detect clinical deterioration prior to intensive care unit (ICU) transfer or unanticipated death. We analyzed 63 patient-years of ECG data from 8,105 acute care patient admissions at a tertiary care academic medical center. We developed models to predict deterioration resulting in ICU transfer or unanticipated death within the next 24 hours using either vital signs, laboratory results, or cardiorespiratory dynamics from continuous ECG monitoring and also evaluated models using all available data sources. We calculated the predictive validity (C-statistic), the net reclassification improvement, and the probability of achieving the difference in likelihood ratio χ2 for the additional degrees of freedom. The primary outcome occurred 755 times in 586 admissions (7%). We analyzed 395 clinical deteriorations with continuous ECG data in the 24 hours prior to an event. Using only continuous ECG measures resulted in a C-statistic of 0.65, similar to models using only laboratory results and vital signs (0.63 and 0.69 respectively). Addition of continuous ECG measures to models using conventional measurements improved the C-statistic by 0.01 and 0.07; a model integrating all data sources had a C-statistic of 0.73 with categorical net reclassification improvement of 0.09 for a change of 1 decile in risk. The difference in likelihood ratio χ2 between integrated models with and without cardiorespiratory dynamics was 2158 (p value: <0.001). Cardiorespiratory dynamics from continuous ECG monitoring detect clinical deterioration in acute care patients and improve performance of conventional models that use only laboratory results and vital signs.
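The C-statistic comparison reported above can be sketched generically as follows: two logistic models, with and without an added predictor, are scored by ROC AUC on held-out data. The features, effect sizes, and sample sizes are synthetic stand-ins, not the study's data.

```python
# C-statistic (ROC AUC) comparison for models with and without an added predictor,
# mirroring the kind of comparison reported above. Data and features are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 5000
vitals = rng.normal(size=(n, 3))                       # stand-in for vital-sign features
ecg_dynamics = rng.normal(size=(n, 1))                 # stand-in for continuous-ECG feature
logit = 0.6 * vitals[:, 0] + 0.4 * vitals[:, 1] + 0.8 * ecg_dynamics[:, 0] - 2.5
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X_base = vitals
X_full = np.hstack([vitals, ecg_dynamics])
Xb_tr, Xb_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_base, X_full, y, test_size=0.3, random_state=0)

auc_base = roc_auc_score(y_te, LogisticRegression(max_iter=1000)
                         .fit(Xb_tr, y_tr).predict_proba(Xb_te)[:, 1])
auc_full = roc_auc_score(y_te, LogisticRegression(max_iter=1000)
                         .fit(Xf_tr, y_tr).predict_proba(Xf_te)[:, 1])
print(f"C-statistic: base = {auc_base:.3f}, with ECG dynamics = {auc_full:.3f}")
```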
Hill, Mary C.
2010-01-01
Doherty and Hunt (2009) present important ideas for first-order, second-moment sensitivity analysis, but five issues are discussed in this comment. First, considering the composite-scaled sensitivity (CSS) jointly with parameter correlation coefficients (PCC) in a CSS/PCC analysis addresses the difficulties with CSS mentioned in the introduction. Second, their new parameter identifiability statistic is actually likely to do a poor job of assessing parameter identifiability in common situations. The statistic instead performs the very useful role of showing how model parameters are included in the estimated singular value decomposition (SVD) parameters. Its close relation to CSS is shown. Third, the idea from p. 125 that a suitable truncation point for SVD parameters can be identified using the prediction variance is challenged using results from Moore and Doherty (2005). Fourth, the relative error reduction statistic of Doherty and Hunt is shown to belong to an emerging set of statistics here named perturbed calculated variance statistics. Finally, the perturbed calculated variance statistics OPR and PPR mentioned on p. 121 are shown to explicitly include the parameter null-space component of uncertainty. Indeed, OPR and PPR results that account for null-space uncertainty have appeared in the literature since 2000.
[Sem: a suitable statistical software adapted for research in oncology].
Kwiatkowski, F; Girard, M; Hacene, K; Berlie, J
2000-10-01
Many software packages have been adapted for medical use, but they rarely handle both data management and statistics conveniently. A recent cooperative effort produced a new package, Sem (Statistics Epidemiology Medicine), which supports both trial data management and statistical analysis of the data. It is convenient enough to be used by non-statisticians (biologists, physicians, researchers, data managers) because, except for multivariate models, the software itself selects and performs the most appropriate test, after which complementary tests can be requested if needed. The Sem database manager (DBM) is not compatible with standard DBMs, which constitutes a first protection against loss of privacy. Other safeguards (passwords, encryption, etc.) strengthen data security, all the more necessary today since Sem can be run on computer networks. The data organization supports multiplicity: forms can be duplicated per patient. Dates are handled in a special but transparent manner (sorting, date and delay calculations, etc.). Sem communicates with common desktop software, often through a simple copy/paste, so statistics can easily be performed on data stored in external spreadsheets, and slides can be produced by pasting graphs (survival curves, etc.) with a single mouse click. Already in daily use at over fifty sites in different hospitals, this product, combining data management and statistics, appears to be a convenient and innovative solution.
NASA Astrophysics Data System (ADS)
Ruggles, Adam J.
2015-11-01
This paper presents improved statistical insight regarding the self-similar scalar mixing process of atmospheric hydrogen jets and the downstream region of under-expanded hydrogen jets. Quantitative planar laser Rayleigh scattering imaging is used to probe both jets. The self-similarity of statistical moments up to the sixth order (beyond the literature established second order) is documented in both cases. This is achieved using a novel self-similar normalization method that facilitated a degree of statistical convergence that is typically limited to continuous, point-based measurements. This demonstrates that image-based measurements of a limited number of samples can be used for self-similar scalar mixing studies. Both jets exhibit the same radial trends of these moments demonstrating that advanced atmospheric self-similarity can be applied in the analysis of under-expanded jets. Self-similar histograms away from the centerline are shown to be the combination of two distributions. The first is attributed to turbulent mixing. The second, a symmetric Poisson-type distribution centered on zero mass fraction, progressively becomes the dominant and eventually sole distribution at the edge of the jet. This distribution is attributed to shot noise-affected pure air measurements, rather than a diffusive superlayer at the jet boundary. This conclusion is reached after a rigorous measurement uncertainty analysis and inspection of pure air data collected with each hydrogen data set. A threshold based upon the measurement noise analysis is used to separate the turbulent and pure air data, and thusly estimate intermittency. Beta-distributions (four parameters) are used to accurately represent the turbulent distribution moments. This combination of measured intermittency and four-parameter beta-distributions constitutes a new, simple approach to model scalar mixing. Comparisons between global moments from the data and moments calculated using the proposed model show excellent agreement. This was attributed to the high quality of the measurements which reduced the width of the correctly identified, noise-affected pure air distribution, with respect to the turbulent mixing distribution. The ignitability of the atmospheric jet is determined using the flammability factor calculated from both kernel density estimated (KDE) PDFs and PDFs generated using the newly proposed model. Agreement between contours from both approaches is excellent. Ignitability of the under-expanded jet is also calculated using KDE PDFs. Contours are compared with those calculated by applying the atmospheric model to the under-expanded jet. Once again, agreement is excellent. This work demonstrates that self-similar scalar mixing statistics and ignitability of atmospheric jets can be accurately described by the proposed model. This description can be applied with confidence to under-expanded jets, which are more realistic of leak and fuel injection scenarios.
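The proposed mixing model can be sketched as an intermittency-weighted mixture of a four-parameter beta distribution (turbulent fluid) and a narrow noise distribution near zero mass fraction (pure air), with the flammability factor given by the probability mass between the flammability limits. All parameter values below, including the approximate hydrogen limits expressed as mass fractions, are illustrative assumptions rather than fitted measurements.

```python
# Sketch of the proposed mixing model: the scalar PDF is an intermittency-weighted
# mixture of a four-parameter beta distribution (turbulent fluid) and a narrow
# noise distribution around zero (pure air). The flammability factor is the
# probability that the mass fraction lies between the flammability limits.
import numpy as np
from scipy import stats

intermittency = 0.7                                   # fraction of samples that are turbulent (assumed)
turb = stats.beta(a=2.0, b=5.0, loc=0.0, scale=0.3)   # four-parameter beta (a, b, loc, scale), assumed fit
noise = stats.norm(loc=0.0, scale=0.003)              # shot-noise-affected pure-air samples (assumed)

lfl, ufl = 0.0028, 0.17                               # rough hydrogen flammability limits as mass fractions
flammability_factor = (intermittency * (turb.cdf(ufl) - turb.cdf(lfl))
                       + (1.0 - intermittency) * (noise.cdf(ufl) - noise.cdf(lfl)))
print(f"flammability factor = {flammability_factor:.3f}")
```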
Filter Tuning Using the Chi-Squared Statistic
NASA Technical Reports Server (NTRS)
Lilly-Salkowski, Tyler
2017-01-01
The Goddard Space Flight Center (GSFC) Flight Dynamics Facility (FDF) performs orbit determination (OD) for the Aqua and Aura satellites. Both satellites are located in low Earth orbit (LEO) and are part of what is considered the A-Train satellite constellation. Both spacecraft are currently in the science phase of their respective missions. The FDF has recently been tasked with delivering definitive covariance for each satellite. The main source of orbit determination used for these missions is the Orbit Determination Toolkit (ODTK) developed by Analytical Graphics Inc. (AGI). This software uses an Extended Kalman Filter (EKF) to estimate the states of both spacecraft. The filter incorporates force modelling and ground station and space network measurements to determine spacecraft states. It also generates a covariance at each measurement. This covariance can be useful for evaluating the overall performance of the tracking data measurements and the filter itself. An accurate covariance is also useful for covariance propagation, which is utilized in collision avoidance operations. It is also valuable when attempting to determine whether the current orbital solution will meet mission requirements in the future. This paper examines the use of the Chi-square statistic as a means of evaluating filter performance. The Chi-square statistic is calculated to determine the realism of a covariance based on the prediction accuracy and the covariance values at a given point in time. Once calculated, it is the distribution of this statistic that provides insight on the accuracy of the covariance. For the EKF to correctly calculate the covariance, error models associated with tracking data measurements must be accurately tuned. Overestimating or underestimating these error values can have detrimental effects on the overall filter performance. The filter incorporates ground station measurements, which can be tuned based on the accuracy of the individual ground stations. It also includes measurements from the NASA space network (SN), which can be affected by the assumed accuracy of the TDRS satellite state at the time of the measurement. The force modelling in the EKF is also an important factor that affects the propagation accuracy and covariance sizing. The dominant force in the LEO orbit regime is atmospheric drag. Accurate accounting of the drag force is especially important for the accuracy of the propagated state. The implementation of a box-and-wing model to improve drag estimation accuracy, and its overall effect on the covariance, is explored. The process of tuning the EKF for Aqua and Aura support is described, including examination of the measurement errors of available observation types (Doppler and range) and methods of dealing with potentially volatile atmospheric drag modeling. Predictive accuracy and the distribution of the Chi-square statistic, calculated based on the ODTK EKF solutions, are assessed against accepted norms for the orbit regime.
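A minimal sketch of the covariance-realism idea described above: if the filter covariance P is realistic, the normalized error e^T P^-1 e follows a chi-square distribution with as many degrees of freedom as state elements. The errors, covariances, and goodness-of-fit check below are illustrative stand-ins for filter output, not part of the paper.

```python
# Covariance-realism check: the normalized state error e^T P^-1 e should follow a
# chi-square distribution with n degrees of freedom if the filter covariance P is
# realistic. Synthetic errors/covariances stand in for filter output.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_state, n_epochs = 6, 500                       # e.g., a position + velocity state

# A hypothetical "true" covariance and the covariance reported by the filter;
# scale > 1 means the filter is overconfident (reported covariance too small).
scale = 1.5
P_reported = np.diag([1.0, 1.0, 1.0, 0.01, 0.01, 0.01])
P_true = scale * P_reported

chi2_vals = []
for _ in range(n_epochs):
    e = rng.multivariate_normal(np.zeros(n_state), P_true)   # actual estimation error
    chi2_vals.append(e @ np.linalg.solve(P_reported, e))
chi2_vals = np.array(chi2_vals)

# Compare the sample mean with the theoretical mean (= n_state) and run a GoF test.
ks = stats.kstest(chi2_vals, "chi2", args=(n_state,))
print(f"mean statistic = {chi2_vals.mean():.2f} (expected {n_state}), KS p = {ks.pvalue:.3g}")
```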
Espelt, Albert; Marí-Dell'Olmo, Marc; Penelo, Eva; Bosque-Prous, Marina
2016-06-14
To examine the differences between Prevalence Ratio (PR) and Odds Ratio (OR) in a cross-sectional study and to provide tools to calculate PR using two statistical packages widely used in substance use research (STATA and R). We used cross-sectional data from 41,263 participants of 16 European countries participating in the Survey on Health, Ageing and Retirement in Europe (SHARE). The dependent variable, hazardous drinking, was calculated using the Alcohol Use Disorders Identification Test - Consumption (AUDIT-C). The main independent variable was gender. Other variables used were: age, educational level and country of residence. PR of hazardous drinking in men with relation to women was estimated using Mantel-Haenszel method, log-binomial regression models and poisson regression models with robust variance. These estimations were compared to the OR calculated using logistic regression models. Prevalence of hazardous drinkers varied among countries. Generally, men have higher prevalence of hazardous drinking than women [PR=1.43 (1.38-1.47)]. Estimated PR was identical independently of the method and the statistical package used. However, OR overestimated PR, depending on the prevalence of hazardous drinking in the country. In cross-sectional studies, where comparisons between countries with differences in the prevalence of the disease or condition are made, it is advisable to use PR instead of OR.
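A sketch of the robust-variance Poisson approach described above, written in Python with statsmodels rather than STATA or R; the data frame, variable names, and prevalences are hypothetical.

```python
# Prevalence ratio via Poisson regression with robust (sandwich) standard errors,
# the approach described above, sketched with statsmodels on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 5000
df = pd.DataFrame({
    "male": rng.binomial(1, 0.5, n),
    "age": rng.integers(50, 90, n),
})
p = 0.12 * (1.0 + 0.43 * df["male"])        # higher prevalence in men (toy values)
df["hazardous"] = rng.binomial(1, p)

model = smf.glm("hazardous ~ male + age", data=df,
                family=sm.families.Poisson()).fit(cov_type="HC0")
pr = np.exp(model.params["male"])
ci = np.exp(model.conf_int().loc["male"])
print(f"PR (men vs. women) = {pr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```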
Beck, H J; Birch, G F
2013-06-01
Stormwater contaminant loading estimates using event mean concentration (EMC), rainfall/runoff relationship calculations and computer modelling (Model of Urban Stormwater Infrastructure Conceptualisation--MUSIC) demonstrated high variability in common methods of water quality assessment. Predictions of metal, nutrient and total suspended solid loadings for three highly urbanised catchments in Sydney estuary, Australia, varied greatly within and amongst methods tested. EMC and rainfall/runoff relationship calculations produced similar estimates (within 1 SD) in a statistically significant number of trials; however, considerable variability within estimates (∼50 and ∼25 % relative standard deviation, respectively) questions the reliability of these methods. Likewise, upper and lower default inputs in a commonly used loading model (MUSIC) produced an extensive range of loading estimates (3.8-8.3 times above and 2.6-4.1 times below typical default inputs, respectively). Default and calibrated MUSIC simulations produced loading estimates that agreed with EMC and rainfall/runoff calculations in some trials (4-10 from 18); however, they were not frequent enough to statistically infer that these methods produced the same results. Great variance within and amongst mean annual loads estimated by common methods of water quality assessment has important ramifications for water quality managers requiring accurate estimates of the quantities and nature of contaminants requiring treatment.
Mumpower, Matthew Ryan; Kawano, Toshihiko; Ullmann, John Leonard; ...
2017-08-17
Radiative neutron capture is an important nuclear reaction whose accurate description is needed for many applications ranging from nuclear technology to nuclear astrophysics. The description of such a process relies on the Hauser-Feshbach theory which requires the nuclear optical potential, level density, and γ-strength function as model inputs. It has recently been suggested that the M1 scissors mode may explain discrepancies between theoretical calculations and evaluated data. We explore statistical model calculations with the strength of the M1 scissors mode estimated to be dependent on the nuclear deformation of the compound system. We show that the form of the M1 scissors mode improves the theoretical description of evaluated data and the match to experiment in both the fission product and actinide regions. Since the scissors mode occurs in the range of a few keV to a few MeV, it may also impact the neutron capture cross sections of neutron-rich nuclei that participate in the rapid neutron capture process of nucleosynthesis. As a result, we comment on the possible impact to nucleosynthesis by evaluating neutron capture rates for neutron-rich nuclei with the M1 scissors mode active.
Kinetics of First-Row Transition Metal Cations (V+, Fe+, Co+) with OCS at Thermal Energies.
Sweeny, Brendan C; Ard, Shaun G; Shuman, Nicholas S; Viggiano, Albert A
2018-05-03
The temperature-dependent kinetics for reactions of V+, Fe+, and Co+ with OCS are measured using a selected ion flow tube apparatus heated to 300-600 K. All three reactions proceed solely by C-S activation at thermal energies, resulting in metal sulfide cation formation. Previously calculated reaction pathways were employed to inform statistical modeling of these reactions for comparison to the data. As surmised previously, all three reactions at thermal energies require spin crossing, with the Fe+ reaction crossing once circumventing a prohibitive transition state, before crossing again to form ground state products. The Fe+ and Co+ reaction efficiencies increase with energy. For the Co+ reaction, and to a lesser extent the Fe+ reaction, the apparent activation energies are less than the reaction endothermicities, possibly indicating increasing diabatic behavior of the spin crossings with energy. The V+ reaction was well modeled assuming an entirely adiabatic spin crossing, such that the resultant avoided crossing behaves similarly to a tight transition state. The subsequent reaction of VS+ with OCS producing VS2+ is also investigated; the rate-limiting transition state energy derived from statistical modeling is poorly reproduced by quantum calculations using a variety of methods, highlighting the large (1-2 eV) uncertainty in calculated energetics of transition-metal-containing species.
NASA Astrophysics Data System (ADS)
Vitali, Lina; Righini, Gaia; Piersanti, Antonio; Cremona, Giuseppe; Pace, Giandomenico; Ciancarella, Luisella
2017-12-01
Air backward trajectory calculations are commonly used in a variety of atmospheric analyses, in particular for source attribution evaluation. The accuracy of backward trajectory analysis is mainly determined by the quality and the spatial and temporal resolution of the underlying meteorological data set, especially in the cases of complex terrain. This work describes a new tool for the calculation and the statistical elaboration of backward trajectories. To take advantage of the high-resolution meteorological database of the Italian national air quality model MINNI, a dedicated set of procedures was implemented under the name of M-TraCE (MINNI module for Trajectories Calculation and statistical Elaboration) to calculate and process the backward trajectories of air masses reaching a site of interest. Some outcomes from the application of the developed methodology to the Italian Network of Special Purpose Monitoring Stations are shown to assess its strengths for the meteorological characterization of air quality monitoring stations. M-TraCE has demonstrated its capabilities to provide a detailed statistical assessment of transport patterns and region of influence of the site under investigation, which is fundamental for correctly interpreting pollutants measurements and ascertaining the official classification of the monitoring site based on meta-data information. Moreover, M-TraCE has shown its usefulness in supporting other assessments, i.e., spatial representativeness of a monitoring site, focussing specifically on the analysis of the effects due to meteorological variables.
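One common statistical elaboration of back-trajectories, of the kind such a tool can provide, is a gridded trajectory-frequency (residence-time) map for the receptor site. The sketch below bins synthetic random-walk trajectories on a latitude/longitude grid; it is a generic illustration, not the M-TraCE code.

```python
# A common statistical elaboration of backward trajectories: bin all trajectory
# points on a latitude/longitude grid to obtain a residence-time (frequency) map
# for the receptor site. Trajectories here are synthetic random walks.
import numpy as np

rng = np.random.default_rng(8)
receptor = np.array([42.0, 12.5])                 # hypothetical site (lat, lon)
n_traj, n_steps = 365, 96                         # e.g., daily 96-hour back-trajectories

# Random-walk stand-ins for computed back-trajectory positions (lat, lon).
steps = rng.normal(0, 0.15, size=(n_traj, n_steps, 2)).cumsum(axis=1)
traj = receptor + steps

lat_edges = np.arange(35.0, 50.0 + 0.5, 0.5)
lon_edges = np.arange(5.0, 20.0 + 0.5, 0.5)
counts, _, _ = np.histogram2d(traj[..., 0].ravel(), traj[..., 1].ravel(),
                              bins=[lat_edges, lon_edges])
frequency = counts / counts.sum()                  # fraction of trajectory hours per cell

# Cells with the highest frequency outline the region of influence of the site.
i, j = np.unravel_index(frequency.argmax(), frequency.shape)
print(f"most-visited cell: lat {lat_edges[i]:.1f}-{lat_edges[i+1]:.1f}, "
      f"lon {lon_edges[j]:.1f}-{lon_edges[j+1]:.1f}, share {frequency[i, j]:.3f}")
```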
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, R D; Kelley, K; Dietrich, F S
2006-06-13
We have developed a set of modeled nuclear reaction cross sections for use in radiochemical diagnostics. Systematics for the input parameters required by the Hauser-Feshbach statistical model were developed and used to calculate neutron, proton, and deuteron induced nuclear reaction cross sections for targets ranging from strontium (Z = 38) to rhodium (Z = 45).
Highway runoff quality models for the protection of environmentally sensitive areas
NASA Astrophysics Data System (ADS)
Trenouth, William R.; Gharabaghi, Bahram
2016-11-01
This paper presents novel highway runoff quality models using artificial neural networks (ANN) which take into account site-specific highway traffic and seasonal storm event meteorological factors to predict the event mean concentration (EMC) statistics and mean daily unit area load (MDUAL) statistics of common highway pollutants for the design of roadside ditch treatment systems (RDTS) to protect sensitive receiving environs. A dataset of 940 monitored highway runoff events from fourteen sites located in five countries (Canada, USA, Australia, New Zealand, and China) was compiled and used to develop ANN models for the prediction of highway runoff suspended solids (TSS) seasonal EMC statistical distribution parameters, as well as the MDUAL statistics for four different heavy metal species (Cu, Zn, Cr and Pb). TSS EMCs are needed to estimate the minimum required removal efficiency of the RDTS needed in order to improve highway runoff quality to meet applicable standards and MDUALs are needed to calculate the minimum required capacity of the RDTS to ensure performance longevity.
Weighted Statistical Binning: Enabling Statistically Consistent Genome-Scale Phylogenetic Analyses
Bayzid, Md Shamsuzzoha; Mirarab, Siavash; Boussau, Bastien; Warnow, Tandy
2015-01-01
Because biological processes can result in different loci having different evolutionary histories, species tree estimation requires multiple loci from across multiple genomes. While many processes can result in discord between gene trees and species trees, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is considered to be a dominant cause for gene tree heterogeneity. Coalescent-based methods have been developed to estimate species trees, many of which operate by combining estimated gene trees, and so are called "summary methods". Because summary methods are generally fast (and much faster than more complicated coalescent-based methods that co-estimate gene trees and species trees), they have become very popular techniques for estimating species trees from multiple loci. However, recent studies have established that summary methods can have reduced accuracy in the presence of gene tree estimation error, and also that many biological datasets have substantial gene tree estimation error, so that summary methods may not be highly accurate in biologically realistic conditions. Mirarab et al. (Science 2014) presented the "statistical binning" technique to improve gene tree estimation in multi-locus analyses, and showed that it improved the accuracy of MP-EST, one of the most popular coalescent-based summary methods. Statistical binning, which uses a simple heuristic to evaluate "combinability" and then uses the larger sets of genes to re-calculate gene trees, has good empirical performance, but using statistical binning within a phylogenomic pipeline does not have the desirable property of being statistically consistent. We show that weighting the re-calculated gene trees by the bin sizes makes statistical binning statistically consistent under the multispecies coalescent, and maintains the good empirical performance. Thus, "weighted statistical binning" enables highly accurate genome-scale species tree estimation, and is also statistically consistent under the multi-species coalescent model. New data used in this study are available at DOI: http://dx.doi.org/10.6084/m9.figshare.1411146, and the software is available at https://github.com/smirarab/binning.
NASA Astrophysics Data System (ADS)
Määttä, A.; Laine, M.; Tamminen, J.; Veefkind, J. P.
2013-09-01
We study uncertainty quantification in remote sensing of aerosols in the atmosphere with top of the atmosphere reflectance measurements from the nadir-viewing Ozone Monitoring Instrument (OMI). Focus is on the uncertainty in aerosol model selection of pre-calculated aerosol models and on the statistical modelling of the model inadequacies. The aim is to apply statistical methodologies that improve the uncertainty estimates of the aerosol optical thickness (AOT) retrieval by propagating model selection and model error related uncertainties more realistically. We utilise Bayesian model selection and model averaging methods for the model selection problem and use Gaussian processes to model the smooth systematic discrepancies from the modelled to the observed reflectance. The systematic model error is learned from an ensemble of operational retrievals. The operational OMI multi-wavelength aerosol retrieval algorithm OMAERO is used for cloud-free, over-land pixels of the OMI instrument with the additional Bayesian model selection and model discrepancy techniques. The method is demonstrated with four examples with different aerosol properties: weakly absorbing aerosols, forest fires over Greece and Russia, and Saharan desert dust. The presented statistical methodology is general; it is not restricted to this particular satellite retrieval application.
Confidence Intervals from Realizations of Simulated Nuclear Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younes, W.; Ratkiewicz, A.; Ressler, J. J.
2017-09-28
Various statistical techniques are discussed that can be used to assign a level of confidence in the prediction of models that depend on input data with known uncertainties and correlations. The particular techniques reviewed in this paper are: 1) random realizations of the input data using Monte-Carlo methods, 2) the construction of confidence intervals to assess the reliability of model predictions, and 3) resampling techniques to impose statistical constraints on the input data based on additional information. These techniques are illustrated with a calculation of the keff value, based on the 235U(n, f) and 239Pu (n, f) cross sections.
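A minimal sketch of the first two techniques: draw correlated Monte-Carlo realizations of uncertain inputs and form a percentile confidence interval on the model prediction. The input means, uncertainties, correlation, and toy model below are illustrative, not evaluated nuclear data.

```python
# Monte-Carlo realizations of correlated input data propagated through a model,
# with a percentile confidence interval on the prediction. The two "cross sections"
# and their covariance are purely illustrative, not evaluated nuclear data.
import numpy as np

rng = np.random.default_rng(9)
mean = np.array([1.20, 1.75])                       # nominal input values (toy)
sigma = np.array([0.05, 0.08])                      # 1-sigma uncertainties (toy)
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])                       # known correlation between inputs
cov = np.outer(sigma, sigma) * corr

def model(x):
    # Toy model prediction depending nonlinearly on both inputs.
    return x[:, 0] * np.sqrt(x[:, 1])

samples = rng.multivariate_normal(mean, cov, size=100_000)
pred = model(samples)
lo, hi = np.percentile(pred, [2.5, 97.5])
print(f"prediction = {pred.mean():.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```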
Detailed noise statistics for an optically preamplified direct detection receiver
NASA Astrophysics Data System (ADS)
Danielsen, Soeren Lykke; Mikkelsen, Benny; Durhuus, Terji; Joergensen, Carsten; Stubkjaer, Kristian E.
We describe the exact statistics of an optically preamplified direct detection receiver by means of the moment generating function. The theory allows an arbitrarily shaped electrical filter in the receiver circuit. The moment generating function (MGF) allows for a precise calculation of the error rate by using the inverse fast Fourier transform (FFT). The exact results are compared with the usual Gaussian approximation (GA), the saddlepoint approximation (SAP) and the modified Chernoff bound (MCB). This comparison shows that the noise is not Gaussian distributed for all values of the optical amplifier gain. In the region of 20-30 dB gain, calculations show that the GA underestimates the receiver sensitivity, while the SAP is very close to the results of our exact model. Using the MGF derived in the article, we then find the optimal bandwidth of the electrical filter in the receiver circuit and calculate the sensitivity degradation due to intersymbol interference (ISI).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gevorkyan, A. S., E-mail: g-ashot@sci.am; Sahakyan, V. V.
We study classical 1D Heisenberg spin glasses in the framework of the nearest-neighbor model. Based on the Hamilton equations, we obtained a system of recurrence equations which allows node-by-node calculation of a spin chain. It is shown that calculation from the first principles of classical mechanics leads to an NP-hard problem, which, however, in the limit of statistical equilibrium can be solved by a polynomial-time (P) algorithm. For the partition function of the ensemble, a new representation is offered in the form of a one-dimensional integral over the spin-chains' energy distribution.
Gómez-Carrasco, Susana; González-Sánchez, Lola; Aguado, Alfredo; Sanz-Sanz, Cristina; Zanchet, Alexandre; Roncero, Octavio
2012-09-07
In this work we present a dynamically biased statistical model to describe the evolution of the title reaction from statistical to a more direct mechanism, using quasi-classical trajectories (QCT). The method is based on the one previously proposed by Park and Light [J. Chem. Phys. 126, 044305 (2007)]. A recent global potential energy surface is used here to calculate the capture probabilities, instead of the long-range ion-induced dipole interactions. The dynamical constraints are introduced by considering a scrambling matrix which depends on energy and determines the probability of the identity/hop/exchange mechanisms. These probabilities are calculated using QCT. It is found that the high zero-point energy of the fragments is transferred to the rest of the degrees of freedom, which shortens the lifetime of H5+ complexes and, as a consequence, the exchange mechanism occurs in lower proportion. The zero-point energy (ZPE) is not properly described in quasi-classical trajectory calculations, and an approximation is made in which the initial ZPE of the reactants is reduced in QCT calculations to obtain a new ZPE-biased scrambling matrix. This reduction of the ZPE is explained by the need to correct the pure classical level number of the H5+ complex, as done in classical simulations of unimolecular processes and to get equivalent quantum and classical rate constants using Rice-Ramsperger-Kassel-Marcus theory. This matrix allows us to obtain a ratio of hop/exchange mechanisms, α(T), in rather good agreement with recent experimental results by Crabtree et al. [J. Chem. Phys. 134, 194311 (2011)] at room temperature. At lower temperatures, however, the present simulations predict ratios that are too high because the biased scrambling matrix is not statistical enough. This demonstrates the importance of applying quantum methods to simulate this reaction at the low temperatures of astrophysical interest.
Aeronautical Engineering. A Continuing Bibliography with Indexes
1987-09-01
(Subject index excerpts: aeronautics (general); aircraft equipped with turbine engines; rate adaptive control with applications to lateral ...; statistics on aircraft gas turbine engine rotor failures; unified model for the calculation of blade ...; pumps; test data for a composite prop-fan model; gas turbine combustor and engine augmentor tube; general aviation aircraft.)
Government Expenditures on Education as the Percentage of GDP in the EU
ERIC Educational Resources Information Center
Galetic, Fran
2015-01-01
This paper analyzes the government expenditures as the percentage of gross domestic product across countries of the European Union. There is a statistical model based on Z-score, whose aim is to calculate how much each EU country deviates from the average value. The model shows that government expenditures on education vary significantly between…
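The Z-score deviation from the EU average can be sketched in a few lines; the country labels and expenditure figures below are made up for illustration.

```python
# Z-score of education expenditure (% of GDP) per country relative to the group average,
# as in the model described above. The figures below are invented for illustration.
import numpy as np

countries = ["A", "B", "C", "D", "E"]
expenditure_pct_gdp = np.array([4.1, 5.6, 6.2, 3.8, 5.0])

z = (expenditure_pct_gdp - expenditure_pct_gdp.mean()) / expenditure_pct_gdp.std(ddof=1)
for name, value in zip(countries, z):
    print(f"country {name}: z = {value:+.2f}")
```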
2006-05-31
dynamics (MD) and kinetic Monte Carlo (KMC) procedures. In 2D surface modeling our calculations project speedups of 9 orders of magnitude at 300 degrees...programming is used to perform customized statistical mechanics by bridging the different time scales of MD and KMC quickly and well. Speedups in
Conformal anomaly of some 2-d Z(n) models
NASA Astrophysics Data System (ADS)
William, Peter
1991-01-01
We describe a numerical calculation of the conformal anomaly in the case of some two-dimensional statistical models undergoing a second-order phase transition, utilizing a recently developed method to compute the partition function exactly. This computation is carried out on a massively parallel CM2 machine, using the finite size scaling behaviour of the free energy.
Upgrades to the REA method for producing probabilistic climate change projections
NASA Astrophysics Data System (ADS)
Xu, Ying; Gao, Xuejie; Giorgi, Filippo
2010-05-01
We present an augmented version of the Reliability Ensemble Averaging (REA) method designed to generate probabilistic climate change information from ensembles of climate model simulations. Compared to the original version, the augmented one includes consideration of multiple variables and statistics in the calculation of the performance-based weights. In addition, the model convergence criterion previously employed is removed. The method is applied to the calculation of changes in mean and variability for temperature and precipitation over different sub-regions of East Asia based on the recently completed CMIP3 multi-model ensemble. Comparison of the new and old REA methods, along with the simple averaging procedure, and the use of different combinations of performance metrics shows that at fine sub-regional scales the choice of weighting is relevant. This is mostly because the models show a substantial spread in performance for the simulation of precipitation statistics, a result that supports the use of model weighting as a useful option to account for wide ranges of quality of models. The REA method, and in particular the upgraded one, provides a simple and flexible framework for assessing the uncertainty related to the aggregation of results from ensembles of models in order to produce climate change information at the regional scale. Keywords: REA method, climate change, CMIP3.
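The performance-based weighting at the core of the REA approach can be illustrated with a heavily simplified sketch: each model receives a reliability factor per metric, capped at one within an assumed natural-variability range, and the projected change is the weighted ensemble mean. The code below is a toy illustration under assumed bias scales, not the full multi-variable REA algorithm of the paper.

```python
# Simplified performance-weighted ensemble average in the spirit of the REA method:
# each model's weight is the product of reliability factors for several metrics
# (here, temperature and precipitation biases), and the projected change is the
# weighted mean. Biases and changes are synthetic.
import numpy as np

rng = np.random.default_rng(10)
n_models = 12
dT = rng.normal(2.5, 0.6, n_models)                 # projected temperature change (toy)
bias_T = rng.normal(0.0, 1.0, n_models)             # model temperature bias vs. observations (toy)
bias_P = rng.normal(0.0, 25.0, n_models)            # precipitation bias in % (toy)

eps_T, eps_P = 0.8, 20.0                            # assumed natural-variability scales
# Reliability factor per metric: 1 inside the natural-variability range, decaying outside.
r_T = np.minimum(1.0, eps_T / np.abs(bias_T))
r_P = np.minimum(1.0, eps_P / np.abs(bias_P))
weights = r_T * r_P
weights /= weights.sum()

rea_change = np.sum(weights * dT)
print(f"REA-weighted change = {rea_change:.2f} K, unweighted mean = {dT.mean():.2f} K")
```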
Toropov, A A; Toropova, A P; Raska, I
2008-04-01
Simplified molecular input line entry system (SMILES) has been utilized in constructing quantitative structure-property relationships (QSPR) for octanol/water partition coefficient of vitamins and organic compounds of different classes by optimal descriptors. Statistical characteristics of the best model (vitamins) are the following: n=17, R(2)=0.9841, s=0.634, F=931 (training set); n=7, R(2)=0.9928, s=0.773, F=690 (test set). Using this approach for modeling octanol/water partition coefficient for a set of organic compounds gives a model that is statistically characterized by n=69, R(2)=0.9872, s=0.156, F=5184 (training set) and n=70, R(2)=0.9841, s=0.179, F=4195 (test set).
Geospace environment modeling 2008--2009 challenge: Dst index
Rastätter, L.; Kuznetsova, M.M.; Glocer, A.; Welling, D.; Meng, X.; Raeder, J.; Wiltberger, M.; Jordanova, V.K.; Yu, Y.; Zaharia, S.; Weigel, R.S.; Sazykin, S.; Boynton, R.; Wei, H.; Eccles, V.; Horton, W.; Mays, M.L.; Gannon, J.
2013-01-01
This paper reports the metrics-based results of the Dst index part of the 2008–2009 GEM Metrics Challenge. The 2008–2009 GEM Metrics Challenge asked modelers to submit results for four geomagnetic storm events and five different types of observations that can be modeled by statistical, climatological or physics-based models of the magnetosphere-ionosphere system. We present the results of 30 model settings that were run at the Community Coordinated Modeling Center and at the institutions of various modelers for these events. To measure the performance of each of the models against the observations, we use comparisons of 1 hour averaged model data with the Dst index issued by the World Data Center for Geomagnetism, Kyoto, Japan, and direct comparison of 1 minute model data with the 1 minute Dst index calculated by the United States Geological Survey. The latter index can be used to calculate spectral variability of model outputs in comparison to the index. We find that model rankings vary widely by skill score used. None of the models consistently perform best for all events. We find that empirical models perform well in general. Magnetohydrodynamics-based models of the global magnetosphere with inner magnetosphere physics (ring current model) included and stand-alone ring current models with properly defined boundary conditions perform well and are able to match or surpass results from empirical models. Unlike in similar studies, the statistical models used in this study found their challenge in the weakest events rather than the strongest events.
Improving an Empirical Formula for the Absorption of Sound in the Sea
2008-05-01
Underwater propagation: Advanced acoustic modelling. Authors: ir. C.A.M. van Moll, dr. M.A. Ainslie, ing. J... Programme/project numbers: V512, 032.11648. (Table-of-contents excerpts: ...versus calculated absorption; Inverse theory; Statistical theory.)
LANDSCAPE ASSESSMENT TOOLS FOR WATERSHED CHARACTERIZATION
A combination of process-based, empirical and statistical models has been developed to assist states in their efforts to assess water quality, locate impairments over large areas, and calculate TMDL allocations. By synthesizing outputs from a number of these tools, LIPS demonstr...
Nuclear deformation in the laboratory frame
NASA Astrophysics Data System (ADS)
Gilbreth, C. N.; Alhassid, Y.; Bertsch, G. F.
2018-01-01
We develop a formalism for calculating the distribution of the axial quadrupole operator in the laboratory frame within the rotationally invariant framework of the configuration-interaction shell model. The calculation is carried out using a finite-temperature auxiliary-field quantum Monte Carlo method. We apply this formalism to isotope chains of even-mass samarium and neodymium nuclei and show that the quadrupole distribution provides a model-independent signature of nuclear deformation. Two technical advances are described that greatly facilitate the calculations. The first is to exploit the rotational invariance of the underlying Hamiltonian to reduce the statistical fluctuations in the Monte Carlo calculations. The second is to determine quadruple invariants from the distribution of the axial quadrupole operator in the laboratory frame. This allows us to extract effective values of the intrinsic quadrupole shape parameters without invoking an intrinsic frame or a mean-field approximation.
An Improved Shock Model for Bare and Covered Explosives
NASA Astrophysics Data System (ADS)
Scholtes, Gert; Bouma, Richard
2017-06-01
TNO developed a toolbox to estimate the probability of a violent event on a ship or other platform when the munition bunker is hit by, e.g., a bullet or a fragment from a missile attack. To obtain the proper statistical output, several million calculations are needed for a reliable estimate. Because millions of different scenarios have to be calculated, hydrocode calculations cannot be used for this type of application; a fast and accurate engineering solution is needed instead. At this moment the Haskins and Cook model is used for this purpose. To obtain a better estimate for covered explosives and munitions, TNO has developed a new model that combines the high-pressure shock wave model described by Haskins and Cook with the expanding shock wave model of Green. This combined model gives a better fit to the experimental values for explosive response calculations, using the same critical energy fluence values for covered as well as bare explosives. In this paper the theory is explained and the results of calculations for several bare and covered explosives are presented. The results are compared with experimental values from the literature for Composition B, Composition B-3 and PBX-9404.
Millimeter wave attenuation prediction using a piecewise uniform rain rate model
NASA Technical Reports Server (NTRS)
Persinger, R. R.; Stutzman, W. L.; Bostian, C. W.; Castle, R. E., Jr.
1980-01-01
A piecewise uniform rain rate distribution model is introduced as a quasi-physical model of real rain along earth-space millimeter wave propagation paths. It permits calculation of the total attenuation from the specific attenuation in a simple fashion. The model predictions are verified by comparison with direct attenuation measurements for several frequencies, elevation angles, and locations. Also, coupled with the Rice-Holmberg rain rate model, attenuation statistics are predicted from rainfall accumulation data.
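Under the piecewise-uniform assumption, the total attenuation is simply the sum of specific attenuation times path length over the uniform segments. The sketch below assumes the usual power-law specific attenuation gamma = k * R**alpha with placeholder coefficients, not the paper's regression values.

```python
# Total path attenuation from specific attenuation with a piecewise-uniform rain
# rate along the path: A = sum_i k * R_i**alpha * L_i (dB). The regression
# coefficients k and alpha below are placeholders, not ITU or paper values.
import numpy as np

def path_attenuation_db(rain_rates_mm_per_h, segment_lengths_km, k, alpha):
    rain_rates = np.asarray(rain_rates_mm_per_h, dtype=float)
    lengths = np.asarray(segment_lengths_km, dtype=float)
    specific_attenuation = k * rain_rates**alpha      # dB/km in each uniform segment
    return float(np.sum(specific_attenuation * lengths))

# Example: a 6 km slant path split into three uniform-rain segments.
rates = [40.0, 12.0, 2.0]          # mm/h in the cell, debris, and light-rain regions (toy)
lengths = [1.0, 2.0, 3.0]          # km
print("A =", path_attenuation_db(rates, lengths, k=0.1, alpha=1.1), "dB")
```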
Unthank, Michael D.; Newson, Jeremy K.; Williamson, Tanja N.; Nelson, Hugh L.
2012-01-01
Flow- and load-duration curves were constructed from the model outputs of the U.S. Geological Survey's Water Availability Tool for Environmental Resources (WATER) application for streams in Kentucky. The WATER application was designed to access multiple geospatial datasets to generate more than 60 years of statistically based streamflow data for Kentucky. The WATER application enables a user to graphically select a site on a stream and generate an estimated hydrograph and flow-duration curve for the watershed upstream of that point. The flow-duration curves are constructed by calculating the exceedance probability of the modeled daily streamflows. User-defined water-quality criteria and (or) sampling results can be loaded into the WATER application to construct load-duration curves that are based on the modeled streamflow results. Estimates of flow and streamflow statistics were derived from TOPographically Based Hydrological MODEL (TOPMODEL) simulations in the WATER application. A modified TOPMODEL code, SDP-TOPMODEL (Sinkhole Drainage Process-TOPMODEL), was used to simulate daily mean discharges over the period of record for 5 karst and 5 non-karst watersheds in Kentucky in order to verify the calibrated model. A statistical evaluation of the model's verification simulations shows that calibration criteria, established by previous WATER application reports, were met, thus ensuring the model's ability to provide acceptably accurate estimates of discharge at gaged and ungaged sites throughout Kentucky. Flow-duration curves are constructed in the WATER application by calculating the exceedance probability of the modeled daily flow values. The flow-duration intervals are expressed as a percentage, with zero corresponding to the highest stream discharge in the streamflow record. Load-duration curves are constructed by applying the loading equation (Load = Flow*Water-quality criterion) at each flow interval.
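The construction of flow- and load-duration curves from a daily flow series can be sketched as below; the synthetic flows, the water-quality criterion, and the kg/day unit conversion are illustrative assumptions, not WATER output.

```python
# Flow-duration and load-duration curves from a daily streamflow series:
# exceedance probability of each flow, and Load = Flow * water-quality criterion.
# The flows and the criterion below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(11)
daily_flow_cfs = rng.lognormal(mean=3.0, sigma=1.0, size=60 * 365)   # ~60 years of flows

flows_sorted = np.sort(daily_flow_cfs)[::-1]                  # highest flow first
n = len(flows_sorted)
exceedance_pct = 100.0 * np.arange(1, n + 1) / (n + 1)        # Weibull plotting position

criterion_mg_per_l = 30.0                                     # hypothetical TSS criterion
# Load in kg/day = flow (cfs) * criterion (mg/L) * unit conversion (cfs*mg/L -> kg/day).
cfs_mgl_to_kg_per_day = 2.4466
allowable_load = flows_sorted * criterion_mg_per_l * cfs_mgl_to_kg_per_day

for p in (10, 50, 90):
    idx = np.searchsorted(exceedance_pct, p)
    print(f"{p}% exceedance: flow = {flows_sorted[idx]:.1f} cfs, "
          f"allowable load = {allowable_load[idx]:.0f} kg/day")
```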
Baqué, Michèle; Amendt, Jens
2013-01-01
Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). According to the Daubert standard and the need for improvements in forensic science, new statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provided regression parameters that reflect the data adequately. We focus explicitly on both the exploration of the data, to assure their quality and to show the importance of checking them carefully prior to conducting the statistical tests, and the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets by using, for the first time, generalised additive mixed models.
Continuous-time quantum Monte Carlo calculation of multiorbital vertex asymptotics
NASA Astrophysics Data System (ADS)
Kaufmann, Josef; Gunacker, Patrik; Held, Karsten
2017-07-01
We derive the equations for calculating the high-frequency asymptotics of the local two-particle vertex function for a multiorbital impurity model. These relate the asymptotics for a general local interaction to equal-time two-particle Green's functions, which we sample using continuous-time quantum Monte Carlo simulations with a worm algorithm. As specific examples we study the single-orbital Hubbard model and the three t2g orbitals of SrVO3 within dynamical mean-field theory (DMFT). We demonstrate how the knowledge of the high-frequency asymptotics reduces the statistical uncertainties of the vertex and further eliminates finite-box-size effects. The proposed method benefits the calculation of nonlocal susceptibilities in DMFT and diagrammatic extensions of DMFT.
Statistical Prediction of Sea Ice Concentration over Arctic
NASA Astrophysics Data System (ADS)
Kim, Jongho; Jeong, Jee-Hoon; Kim, Baek-Min
2017-04-01
In this study, a statistical method that predicts sea ice concentration (SIC) over the Arctic is developed. We first calculate the Season-reliant Empirical Orthogonal Functions (S-EOFs) of monthly Arctic SIC from Nimbus-7 SMMR and DMSP SSM/I-SSMIS Passive Microwave Data, which contain the seasonal cycles (12 months long) of the dominant SIC anomaly patterns. Then, the current SIC state index is determined by projecting the observed SIC anomalies for the latest 12 months onto the S-EOFs. Assuming the current SIC anomalies follow the spatio-temporal evolution in the S-EOFs, we project the future (up to 12 months) SIC anomalies by multiplying the state index by the corresponding S-EOF and summing. The predictive skill is assessed by hindcast experiments initialized at every month of 1980-2010. When the predictive skill of the statistical model is compared with that of NCEP CFS v2, the statistical model shows higher skill in predicting sea ice concentration and extent.
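A minimal numpy sketch of the projection step is given below, under the assumption that the S-EOFs are available as seasonally varying spatial patterns and that the forecast is the reconstruction from the projected state indices; array names and shapes are hypothetical.

```python
import numpy as np

def project_and_forecast(sic_anomalies_latest, s_eofs):
    """Forecast SIC anomalies from seasonal EOFs (S-EOFs).

    sic_anomalies_latest : (12, n_grid) anomalies for the latest 12 months.
    s_eofs               : (n_modes, 12, n_grid) seasonally varying EOF patterns.
    Returns a (12, n_grid) array of projected anomalies for the next cycle.
    """
    # State index: projection of the observed evolution onto each S-EOF.
    norms = np.einsum('kmg,kmg->k', s_eofs, s_eofs)
    indices = np.einsum('mg,kmg->k', sic_anomalies_latest, s_eofs) / norms
    # Assume the anomalies follow the S-EOF evolution: reconstruct the next cycle.
    return np.einsum('k,kmg->mg', indices, s_eofs)
```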
Jaikuna, Tanwiwat; Khadsiri, Phatchareewan; Chawapun, Nisa; Saekho, Suwit; Tharavichitkul, Ekkasit
2017-02-01
To develop an in-house software program that is able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in treatment planning was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the difference between the dose volume histogram from CERR and the treatment planning system. An equivalent dose in 2 Gy fractions (EQD2) was calculated using the biologically effective dose (BED) based on the LQL model. The software calculation and the manual calculation were compared for EQD2 verification with paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Different physical doses were found between CERR and the treatment planning system (TPS) in Oncentra, with 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum determined by D2cc, and less than 1% in Pinnacle. The difference in EQD2 between the software calculation and the manual calculation was not significant (0.00%, p-values of 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively). The Isobio software is a feasible tool to generate the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
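A minimal sketch of the standard LQ-based EQD2 conversion per voxel is shown below; the LQL correction above the transition dose used by Isobio is omitted, and the alpha/beta ratio in the example is a placeholder, not a value from the paper.

```python
def eqd2(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy):
    """Equivalent dose in 2 Gy fractions from the standard LQ model.

    BED = D * (1 + d / (alpha/beta));  EQD2 = BED / (1 + 2 / (alpha/beta)).
    The LQL high-dose linear correction is not included in this sketch.
    """
    bed = total_dose_gy * (1.0 + dose_per_fraction_gy / alpha_beta_gy)
    return bed / (1.0 + 2.0 / alpha_beta_gy)

# Example: 7 Gy x 4 fractions of brachytherapy, alpha/beta = 3 Gy (placeholder).
print(eqd2(28.0, 7.0, 3.0))  # ~56.0 Gy EQD2
```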
Wind turbine sound pressure level calculations at dwellings.
Keith, Stephen E; Feder, Katya; Voicescu, Sonia A; Soukhovtsev, Victor; Denning, Allison; Tsang, Jason; Broner, Norm; Leroux, Tony; Richarz, Werner; van den Berg, Frits
2016-03-01
This paper provides calculations of outdoor sound pressure levels (SPLs) at dwellings for 10 wind turbine models, to support Health Canada's Community Noise and Health Study. Manufacturer-supplied and measured wind turbine sound power levels were used to calculate outdoor SPLs at 1238 dwellings using ISO 9613-2 [ISO (1996), Acoustics] and a Swedish noise propagation method. Both methods yielded statistically equivalent results. The A- and C-weighted results were highly correlated over the 1238 dwellings (Pearson's linear correlation coefficient r > 0.8). Calculated wind turbine SPLs were compared to ambient SPLs from other sources, estimated using guidance documents from the United States and Alberta, Canada.
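A very simplified sketch in the spirit of outdoor propagation calculations such as ISO 9613-2 follows; only geometric divergence and a flat atmospheric-absorption coefficient are included, and the numbers are illustrative, not the study's inputs.

```python
import math

def spl_at_receiver(sound_power_db, distance_m, atm_absorption_db_per_km=2.0):
    """Simplified outdoor SPL estimate.

    Geometric divergence A_div = 20*log10(d / 1 m) + 11 dB plus a flat
    atmospheric-absorption term; ground, barrier and directivity corrections
    of the full standard are omitted. The absorption value is a placeholder.
    """
    a_div = 20.0 * math.log10(distance_m) + 11.0
    a_atm = atm_absorption_db_per_km * distance_m / 1000.0
    return sound_power_db - a_div - a_atm

# A 105 dB(A) sound-power turbine heard at a 600 m dwelling (illustrative numbers).
print(round(spl_at_receiver(105.0, 600.0), 1))
```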
NASA Astrophysics Data System (ADS)
Allen, David
Some informal discussions among educators regarding student motivation and academic performance have included the topic of magnet schools. The premise is that a focused theme, such as an aspect of science, positively affects student motivation and academic achievement. However, there is limited research involving magnet schools and their influence on student motivation and academic performance. This study provides empirical data for the discussion about magnet schools' influence on motivation and academic ability. This study utilized path analysis in a structural equation modeling framework to simultaneously investigate the relationships between demographic exogenous independent variables, the independent variable of attending a science or technology magnet middle school, and the dependent variables of motivation to learn science and academic achievement in science. Due to the categorical nature of the variables, Bayesian statistical analysis was used to calculate the path coefficients and the standardized effects for each relationship in the model. The coefficients of determination were calculated to determine the amount of variance each path explained. Only five of the 21 paths were statistically significant, and only one of these (Attended Magnet School to Motivation to Learn Science) explained a noteworthy amount (45.8%) of the variance.
NASA Astrophysics Data System (ADS)
Saakian, David B.
2012-03-01
We map the Markov-switching multifractal model (MSM) onto the random energy model (REM). The MSM is, like the REM, an exactly solvable model in one-dimensional space with nontrivial correlation functions. According to our results, four different statistical physics phases are possible in random walks with multifractal behavior. We also introduce the continuous branching version of the model, calculate the moments, and prove multiscaling behavior. Different phases have different multiscaling properties.
Lukas, J M; Hawkins, D M; Kinsel, M L; Reneau, J K
2005-11-01
The objective of this study was to examine the relationship between monthly Dairy Herd Improvement (DHI) subclinical mastitis and new infection rate estimates and daily bulk tank somatic cell count (SCC) summarized by statistical process control tools. Dairy Herd Improvement Association test-day subclinical mastitis and new infection rate estimates, along with daily or every-other-day bulk tank SCC data, were collected for the 12 months of 2003 from 275 Upper Midwest dairy herds. Herds were divided into 5 herd production categories. A linear score [LNS = ln(BTSCC/100,000)/0.693147 + 3] was calculated for each individual bulk tank SCC. For both the raw SCC and the transformed data, the mean and sigma were calculated using the statistical quality control individual measurement and moving range chart procedure of the Statistical Analysis System. One hundred eighty-three of the 275 herds in the study data set were then randomly selected, and the raw (method 1) and transformed (method 2) bulk tank SCC mean and sigma were used to develop models for predicting subclinical mastitis and new infection rate estimates. Herd production category was also included in all models as 5 dummy variables. Models were validated by calculating estimates of subclinical mastitis and new infection rates for the remaining 92 herds and plotting them against observed values of each of the dependent variables. Only herd production category and bulk tank SCC mean were significant and remained in the final models. High R2 values (0.83 and 0.81 for methods 1 and 2, respectively) indicated a strong correlation between the bulk tank SCC and the herd's subclinical mastitis prevalence. The standard errors of the estimate were 4.02 and 4.28% for methods 1 and 2, respectively, and decreased with increasing herd production. As a case study, Shewhart Individual Measurement Charts were plotted from the bulk tank SCC to identify shifts in mastitis incidence. Four of 5 charts examined signaled a change in bulk tank SCC before the DHI test day identified the change in subclinical mastitis prevalence. It can be concluded that applying statistical process control tools to daily bulk tank SCC can be used to estimate subclinical mastitis prevalence in the herd and to monitor changes in the subclinical mastitis status. Single DHI test-day estimates of new infection rate were insufficient to accurately describe its dynamics.
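A minimal sketch of the linear score and the individuals/moving-range chart statistics mentioned above is given below (SAS is replaced by a plain Python calculation; the constant d2 = 1.128 is the usual moving-range factor for n = 2, and the SCC values are illustrative).

```python
import numpy as np

def linear_score(bulk_tank_scc):
    """Linear score LNS = ln(SCC/100,000)/ln(2) + 3 (0.693147 = ln 2)."""
    return np.log(np.asarray(bulk_tank_scc) / 100_000.0) / np.log(2.0) + 3.0

def individuals_chart_limits(values):
    """Mean, sigma, and 3-sigma limits from an individuals / moving-range chart.

    sigma is estimated as mean moving range / 1.128, the standard constant
    for consecutive-point moving ranges.
    """
    x = np.asarray(values, dtype=float)
    moving_range = np.abs(np.diff(x))
    sigma = moving_range.mean() / 1.128
    mean = x.mean()
    return mean, sigma, mean + 3 * sigma, mean - 3 * sigma

lns = linear_score([180_000, 220_000, 450_000, 300_000, 260_000])
print(individuals_chart_limits(lns))
```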
Gordon, Derek; Londono, Douglas; Patel, Payal; Kim, Wonkuk; Finch, Stephen J; Heiman, Gary A
2016-01-01
Our motivation here is to calculate the power of 3 statistical tests used when there are genetic traits that operate under a pleiotropic mode of inheritance and when qualitative phenotypes are defined by use of thresholds for the multiple quantitative phenotypes. Specifically, we formulate a multivariate function that provides the probability that an individual has a vector of specific quantitative trait values conditional on having a risk locus genotype, and we apply thresholds to define qualitative phenotypes (affected, unaffected) and compute penetrances and conditional genotype frequencies based on the multivariate function. We extend the analytic power and minimum-sample-size-necessary (MSSN) formulas for 2 categorical data-based tests (genotype, linear trend test [LTT]) of genetic association to the pleiotropic model. We further compare the MSSN of the genotype test and the LTT with that of a multivariate ANOVA (Pillai). We approximate the MSSN for statistics by linear models using a factorial design and ANOVA. With ANOVA decomposition, we determine which factors most significantly change the power/MSSN for all statistics. Finally, we determine which test statistics have the smallest MSSN. In this work, MSSN calculations are for 2 traits (bivariate distributions) only (for illustrative purposes). We note that the calculations may be extended to address any number of traits. Our key findings are that the genotype test usually has lower MSSN requirements than the LTT. More inclusive thresholds (top/bottom 25% vs. top/bottom 10%) have higher sample size requirements. The Pillai test has a much larger MSSN than both the genotype test and the LTT, as a result of sample selection. With these formulas, researchers can specify how many subjects they must collect to localize genes for pleiotropic phenotypes. © 2017 S. Karger AG, Basel.
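A minimal sketch of one ingredient of this calculation is shown below: the probability that two correlated quantitative traits both exceed their thresholds, conditional on a genotype, which plays the role of a penetrance. The genotype effect, correlation, and thresholds are illustrative assumptions, not the paper's parameter values.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def penetrance_two_traits(mean, corr, thresholds):
    """P(T1 > t1, T2 > t2) for a bivariate normal trait vector with unit variances.

    Uses inclusion-exclusion with the joint lower CDF:
    P(T1 > t1, T2 > t2) = 1 - F1(t1) - F2(t2) + F12(t1, t2).
    """
    cov = np.array([[1.0, corr], [corr, 1.0]])
    t1, t2 = thresholds
    f1 = norm.cdf(t1, loc=mean[0])
    f2 = norm.cdf(t2, loc=mean[1])
    f12 = multivariate_normal(mean=mean, cov=cov).cdf(np.array([t1, t2]))
    return 1.0 - f1 - f2 + f12

# Placeholder genotype effect: the risk genotype shifts both trait means by +0.5 SD;
# "affected" means falling in the top 10% of each trait.
print(penetrance_two_traits([0.5, 0.5], 0.3, [norm.ppf(0.9)] * 2))
```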
Cavalcanti, Paulo Ernando Ferraz; Sá, Michel Pompeu Barros de Oliveira; Santos, Cecília Andrade dos; Esmeraldo, Isaac Melo; Chaves, Mariana Leal; Lins, Ricardo Felipe de Albuquerque; Lima, Ricardo de Carvalho
2015-01-01
To determine whether the stratification of complexity models in congenital heart surgery (RACHS-1, Aristotle basic score, and STS-EACTS mortality score) fit our center and to determine the best method for discriminating hospital mortality. Surgical procedures for congenital heart diseases in patients under 18 years of age were allocated to the categories proposed by the stratification of complexity methods currently available. The outcome hospital mortality was calculated for each category of the three models. Statistical analysis was performed to verify whether the categories presented different mortalities. The discriminatory ability of the models was determined by calculating the area under the ROC curve, and the curves of the three models were compared. 360 patients were allocated according to the three methods. There was a statistically significant difference between the mortality categories: RACHS-1 (1) 1.3%, (2) 11.4%, (3) 27.3%, (4) 50% (P<0.001); Aristotle basic score (1) 1.1%, (2) 12.2%, (3) 34%, (4) 64.7% (P<0.001); and STS-EACTS mortality score (1) 5.5%, (2) 13.6%, (3) 18.7%, (4) 35.8% (P<0.001). The three models had similar accuracy as measured by the area under the ROC curve: RACHS-1, 0.738; STS-EACTS, 0.739; Aristotle, 0.766. The three models of stratification of complexity currently available in the literature are useful, with different mortalities between the proposed categories and similar discriminatory capacity for hospital mortality.
Wu, S.-S.; Wang, L.; Qiu, X.
2008-01-01
This article presents a deterministic model for sub-block-level population estimation based on the total building volumes derived from geographic information system (GIS) building data and three census block-level housing statistics. To assess the model, we generated artificial blocks by aggregating census block areas and calculating the respective housing statistics. We then applied the model to estimate populations for sub-artificial-block areas and assessed the estimates against the census populations of those areas. Our analyses indicate that the average percent error of population estimation for sub-artificial-block areas is comparable to that for sub-census-block areas of the same size relative to their associated blocks. The smaller the sub-block-level areas, the higher the population estimation errors. For example, the average percent error for residential areas is approximately 0.11 percent for 100 percent block areas and 35 percent for 5 percent block areas.
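A minimal sketch of the general idea, allocating a block's population to sub-block areas in proportion to total building volume, is given below. The paper's actual model also uses three block-level housing statistics, which are not reproduced here; the numbers are illustrative.

```python
def subblock_population(block_population, building_volumes_by_subarea):
    """Allocate a block's population to sub-areas proportionally to building volume."""
    total_volume = sum(building_volumes_by_subarea.values())
    return {area: block_population * vol / total_volume
            for area, vol in building_volumes_by_subarea.items()}

# Illustrative: a block of 400 people split across three sub-areas by volume (m^3).
print(subblock_population(400, {"A": 50_000, "B": 30_000, "C": 20_000}))
```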
Statistical model of exotic rotational correlations in emergent space-time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Craig; Kwon, Ohkyung; Richardson, Jonathan
2017-06-06
A statistical model is formulated to compute exotic rotational correlations that arise as inertial frames and causal structure emerge on large scales from entangled Planck-scale quantum systems. Noncommutative quantum dynamics are represented by random transverse displacements that respect causal symmetry. Entanglement is represented by covariance of these displacements in Planck-scale intervals defined by future null cones of events on an observer's world line. Light that propagates in a nonradial direction inherits a projected component of the exotic rotational correlation that accumulates as a random walk in phase. A calculation of the projection and accumulation leads to exact predictions for statistical properties of exotic Planck-scale correlations in an interferometer of any configuration. The cross-covariance for two nearly co-located interferometers is shown to depart only slightly from the autocovariance. Specific examples are computed for configurations that approximate realistic experiments, and show that the model can be rigorously tested.
Puch-Solis, Roberto; Clayton, Tim
2014-07-01
The high sensitivity of the technology for producing profiles means that it has become routine to produce profiles from relatively small quantities of DNA. The profiles obtained from low template DNA (LTDNA) are affected by several phenomena which must be taken into consideration when interpreting and evaluating this evidence. Furthermore, many of the same phenomena affect profiles from higher amounts of DNA (e.g. where complex mixtures have been revealed). In this article we present a statistical model, which forms the basis of the software DNA LiRa, that is able to calculate likelihood ratios where one to four donors are postulated and for any number of replicates. The model can take into account drop-in and allelic dropout for different contributors, template degradation, and uncertain allele designations. In this statistical model, unknown parameters are treated following the empirical Bayesian paradigm. The performance of LiRa is tested using examples and the outputs are compared with those generated using two other statistical software packages, likeLTD and LRmix. The concept of ban efficiency is introduced as a measure for assessing model sensitivity. Copyright © 2014. Published by Elsevier Ireland Ltd.
Age-dependent biochemical quantities: an approach for calculating reference intervals.
Bjerner, J
2007-01-01
A parametric method is often preferred when calculating reference intervals for biochemical quantities, as non-parametric methods are less efficient and require more observations/study subjects. Parametric methods are complicated, however, by three commonly encountered features. First, biochemical quantities seldom display a Gaussian distribution, so either a transformation procedure must be applied to obtain such a distribution or a more complex distribution has to be used. Second, biochemical quantities are often dependent on a continuous covariate, exemplified by rising serum concentrations of MUC1 (episialin, CA15.3) with increasing age. Third, outliers often exert substantial influence on parametric estimations and therefore need to be excluded before calculations are made. The International Federation of Clinical Chemistry (IFCC) currently recommends that confidence intervals be calculated for the reference centiles obtained. However, common statistical packages that allow for the adjustment of a continuous covariate do not make this calculation. In the method described in the current study, Tukey's fence is used to eliminate outliers, and two-stage transformations (modulus-exponential-normal) are applied in order to render Gaussian distributions. Fractional polynomials are employed to model functions for the mean and standard deviation dependent on a covariate, and the model is selected by maximum likelihood. Confidence intervals are calculated for the fitted centiles by combining parameter estimation and sampling uncertainties. Finally, the elimination of outliers was made dependent on covariates by reiteration. Though a good knowledge of statistical theory is needed when performing the analysis, the current method is rewarding because the results are of practical use in patient care.
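A minimal sketch of the outlier-exclusion step (Tukey's fence) followed by a simple non-parametric reference interval is shown below. This stands in for, and is much simpler than, the paper's parametric modulus-exponential-normal transformation and fractional-polynomial modelling of the covariate dependence; the data are synthetic.

```python
import numpy as np

def tukey_fence_filter(values, k=1.5):
    """Remove outliers outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fence)."""
    x = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return x[(x >= q1 - k * iqr) & (x <= q3 + k * iqr)]

def reference_interval(values):
    """Central 95% reference interval (2.5th-97.5th percentiles) after outlier removal."""
    cleaned = tukey_fence_filter(values)
    return np.percentile(cleaned, [2.5, 97.5])

print(reference_interval(np.random.lognormal(mean=2.0, sigma=0.4, size=500)))
```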
Baryon-antibaryon dynamics in relativistic heavy-ion collisions
NASA Astrophysics Data System (ADS)
Seifert, E.; Cassing, W.
2018-04-01
The dynamics of baryon-antibaryon annihilation and reproduction (BB̄ ↔ 3M) is studied within the Parton-Hadron-String Dynamics (PHSD) transport approach for Pb+Pb and Au+Au collisions as a function of centrality from lower Super Proton Synchrotron (SPS) up to Large Hadron Collider (LHC) energies on the basis of the quark rearrangement model. At Relativistic Heavy-Ion Collider (RHIC) energies we find a small net reduction of baryon-antibaryon (BB̄) pairs, while for the LHC energy of √(s_NN) = 2.76 TeV a small net enhancement is found relative to calculations without annihilation (and reproduction) channels. Accordingly, the sizable difference between data and statistical calculations in Pb+Pb collisions at √(s_NN) = 2.76 TeV for proton and antiproton yields [ALICE Collaboration, B. Abelev et al., Phys. Rev. C 88, 044910 (2013), 10.1103/PhysRevC.88.044910], where a deviation of 2.7σ was claimed by the ALICE Collaboration, should not be attributed to a net antiproton annihilation. This is in line with the observation that no substantial deviation between the data and statistical hadronization model (SHM) calculations is seen for antihyperons, since according to the PHSD analysis the antihyperons should be modified by the same amount as the antiprotons. As the PHSD results for particle ratios are in line with the ALICE data (within error bars), this might point toward a deviation from statistical equilibrium in the hadronization (at least for protons and antiprotons). Furthermore, we find that the BB̄ ↔ 3M reactions are more effective at lower SPS energies, where a net suppression of antiprotons and antihyperons up to a factor of 2-2.5 can be extracted from the PHSD calculations for central Au+Au collisions.
NASA Astrophysics Data System (ADS)
Trojková, Darina; Judas, Libor; Trojek, Tomáš
2014-11-01
Minimizing the late rectal toxicity of prostate cancer patients is a very important and widely discussed topic. Normal tissue complication probability (NTCP) models can be used to evaluate competing treatment plans. In our work, the parameters of the Lyman-Kutcher-Burman (LKB), Källman, and Logit+EUD models are optimized by minimizing the Brier score for a group of 302 prostate cancer patients. The NTCP values are calculated and compared with the values obtained using previously published parameter values. χ2 statistics were calculated as a check of the goodness of the optimization.
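A minimal sketch of the Lyman-Kutcher-Burman NTCP calculation from a differential dose-volume histogram is shown below; the TD50, m, and n values and the DVH are placeholders, not the optimized parameters reported by the authors.

```python
import numpy as np
from scipy.stats import norm

def lkb_ntcp(dose_bins_gy, volume_fractions, td50_gy, m, n):
    """Lyman-Kutcher-Burman NTCP from a differential DVH.

    gEUD = (sum_i v_i * D_i^(1/n))^n,  t = (gEUD - TD50) / (m * TD50),
    NTCP = Phi(t), with Phi the standard normal CDF.
    """
    d = np.asarray(dose_bins_gy, dtype=float)
    v = np.asarray(volume_fractions, dtype=float)
    v = v / v.sum()                              # normalise volume fractions
    geud = np.sum(v * d ** (1.0 / n)) ** n
    t = (geud - td50_gy) / (m * td50_gy)
    return norm.cdf(t)

# Illustrative rectal DVH and placeholder parameters (not the fitted values).
print(lkb_ntcp([20, 40, 60, 70], [0.4, 0.3, 0.2, 0.1], td50_gy=76.9, m=0.13, n=0.09))
```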
Entanglement Entropy of the Six-Dimensional Horowitz-Strominger Black Hole
NASA Astrophysics Data System (ADS)
Li, Huai-Fan; Zhang, Sheng-Li; Wu, Yue-Qin; Ren, Zhao
By using the entanglement entropy method, the statistical entropy of the Bose and Fermi fields in a thin film is calculated and the Bekenstein-Hawking entropy of the six-dimensional Horowitz-Strominger black hole is obtained. Here, the Bose and Fermi fields are entangled with the quantum states in the six-dimensional Horowitz-Strominger black hole, and the fields are outside the horizon. The divergence of the brick-wall model is avoided without any cutoff by using the new equation of state density obtained with the generalized uncertainty principle. The calculation implies that the high-density quantum states near the event horizon are strongly correlated with the quantum states in the black hole. The black hole entropy is a quantum effect; it is an intrinsic characteristic of space-time. The ultraviolet cutoff in the brick-wall model is unreasonable. The generalized uncertainty principle should be considered for the high-energy quantum field near the event horizon. Using the quantum statistical method, we directly calculate the partition function of the Bose and Fermi fields in the background of the six-dimensional black hole. The difficulty in solving the wave equations of the various particles is overcome.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramos-Mendez, J; Faddegon, B; Paganetti, H
2015-06-15
Purpose: We used TOPAS (TOPAS wraps and extends Geant4 for medical physicists) to compare Geant4 physics models with published data for neutron shielding calculations. Subsequently, we calculated the source terms and attenuation lengths (shielding data) of the total ambient dose equivalent (TADE) in concrete for neutrons produced by protons in brass. Methods: Stage 1: The Bertini and Binary nuclear models available in Geant4 were compared with published attenuation at depth of the TADE in concrete and iron. Stage 2: Shielding data of the TADE in concrete were calculated for 50-200 MeV proton beams on brass. Stage 3: Shielding data from Stage 2 were extrapolated for 235 MeV proton beams. These data were used in a point-line-source analytical model to calculate the ambient dose per unit therapeutic dose at two locations inside one treatment room at the Francis H Burr Proton Therapy Center. Finally, we compared these results with experimental data and full TOPAS simulations. Results: At larger angles (~130°) the TADE in concrete calculated with the Bertini model was about 9 times larger than that calculated with the Binary model. The attenuation length in concrete calculated with the Binary model agreed with published data within 7%±0.4% (statistical uncertainty) for the deepest regions and 5%±0.1% for shallower regions. For iron the agreement was within 3%±0.1%. The ambient dose per therapeutic dose calculated with the Binary model, relative to the experimental data, was a ratio of 0.93±0.16 and 1.23±0.24 at the two locations. The analytical model overestimated the dose by four orders of magnitude. These differences are attributed to the complexity of the geometry. Conclusion: The Binary and Bertini models gave comparable results, with the Binary model giving the best agreement with published data at large angles. The shielding data we calculated using the Binary model are useful for fast shielding calculations with other analytical models. This work was supported by National Cancer Institute Grant R01CA140735.
NASA Astrophysics Data System (ADS)
Kez, V.; Liu, F.; Consalvi, J. L.; Ströhle, J.; Epple, B.
2016-03-01
Oxy-fuel combustion is a promising CO2 capture technology for combustion systems. This process is characterized by much higher CO2 concentrations in the combustion system compared to conventional air-fuel combustion. To accurately predict the enhanced thermal radiation in oxy-fuel combustion, it is essential to take into account the non-gray nature of gas radiation. In this study, radiation heat transfer in a 3D model gas turbine combustor for two test cases at 20 atm total pressure was calculated with various non-gray gas radiation models, including the statistical narrow-band (SNB) model, the statistical narrow-band correlated-k (SNBCK) model, the wide-band correlated-k (WBCK) model, the full-spectrum correlated-k (FSCK) model, and several weighted-sum-of-gray-gases (WSGG) models. Calculations with the SNB, SNBCK, and FSCK models were conducted using the updated EM2C SNB model parameters. The results of the SNB model are taken as the benchmark solution to evaluate the accuracy of the other models considered. Results of SNBCK and FSCK are in good agreement with the benchmark solution. The WBCK model is less accurate than SNBCK or FSCK. Of the three formulations of the WBCK model, the multiple-gases formulation is the best choice with regard to accuracy and computational cost. The WSGG model with the parameters of Bordbar et al. (2014) [20] is the most accurate of the three investigated WSGG models. Use of the gray WSGG formulation leads to significant deviations from the benchmark data and should not be applied to predict radiation heat transfer in oxy-fuel combustion systems. A best practice for incorporating the state-of-the-art gas radiation models for high accuracy of radiation heat transfer calculations at minimal increase in computational cost in CFD simulations of oxy-fuel combustion systems, for pressure path lengths up to about 10 bar m, is suggested.
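A minimal sketch of how a weighted-sum-of-gray-gases model evaluates total emissivity from gray-gas weights and absorption coefficients is given below; the coefficients shown are placeholders, not the Bordbar et al. parameters.

```python
import math

def wsgg_emissivity(weights, absorption_coeffs_per_atm_m, partial_pressure_atm, path_length_m):
    """Total emissivity from a weighted-sum-of-gray-gases model.

    eps = sum_i a_i * (1 - exp(-k_i * p * L)), where a_i are (temperature-dependent)
    weights, k_i pressure-based absorption coefficients, p the partial pressure of
    the participating gases and L the path length.
    """
    pL = partial_pressure_atm * path_length_m
    return sum(a * (1.0 - math.exp(-k * pL))
               for a, k in zip(weights, absorption_coeffs_per_atm_m))

# Placeholder 4-gray-gas coefficients (illustrative only; remainder is the clear gas).
print(wsgg_emissivity([0.3, 0.25, 0.2, 0.1], [0.2, 2.0, 12.0, 120.0], 1.0, 1.0))
```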
Regression Models of Quarterly Overhead Costs for Six Government Aerospace Contractors.
1986-03-01
References include "Testing for Serial Correlation After Least Squares Regression," Econometrica, Vol. 36, No. 1, pp. 133-150, January 1968, and Intriligator, M.D., Econometric [title truncated in source]. Two estimators were found to be superior; both are two-stage estimators calculated using Wallis's test statistic for fourth-order autocorrelation.
Application of Tube Dynamics to Non-Statistical Reaction Processes
NASA Astrophysics Data System (ADS)
Gabern, F.; Koon, W. S.; Marsden, J. E.; Ross, S. D.; Yanao, T.
2006-06-01
A technique based on dynamical systems theory is introduced for the computation of lifetime distributions and rates of chemical reactions and scattering phenomena, even in systems that exhibit non-statistical behavior. In particular, we merge invariant manifold tube dynamics with Monte Carlo volume determination for accurate rate calculations. This methodology is applied to a three-degree-of-freedom model problem and some ideas on how it might be extended to higher-degree-of-freedom systems are presented.
Rolland, Y; Bézy-Wendling, J; Duvauferrier, R; Coatrieux, J L
1999-03-01
To demonstrate the usefulness of a model of parenchymal vascularization for evaluating texture analysis methods. Slices with thickness varying from 1 to 4 mm were reformatted from a 3D vascular model corresponding to either normal tissue perfusion or local hypervascularization. Parameters of statistical methods were measured on 16 regions of interest of 128x128 pixels, and mean values and standard deviations were calculated. For each parameter, the performance (discrimination power and stability) was evaluated. Among the 11 calculated statistical parameters, three (homogeneity, entropy, mean of gradients) were found to have good discriminating power to differentiate normal perfusion from hypervascularization, but only the gradient mean was found to have good stability with respect to slice thickness. Five parameters (run percentage, run length distribution, long run emphasis, contrast, and gray level distribution) showed intermediate results. Of the remaining three, kurtosis and correlation were found to have little discrimination power, and skewness none. This 3D vascular model, which allows the generation of various examples of vascular textures, is a powerful tool for assessing the performance of texture analysis methods. It improves our knowledge of the methods and should contribute to their a priori choice when designing clinical studies.
Landstad, Bodil J; Gelin, Gunnar; Malmquist, Claes; Vinberg, Stig
2002-09-15
The study had two primary aims. The first aim was to combine a human resources costing and accounting approach (HRCA) with a quantitative statistical approach in order to get an integrated model. The second aim was to apply this integrated model in a quasi-experimental study in order to investigate whether preventive intervention affected sickness absence costs at the company level. The intervention studied contained occupational organizational measures, competence development, physical and psychosocial working environmental measures and individual and rehabilitation measures on both an individual and a group basis. The study is a quasi-experimental design with a non-randomized control group. Both groups involved cleaning jobs at predominantly female workplaces. The study plan involved carrying out before and after studies on both groups. The study included only those who were at the same workplace during the whole of the study period. In the HRCA model used here, the cost of sickness absence is the net difference between the costs, in the form of the value of the loss of production and the administrative cost, and the benefits in the form of lower labour costs. According to the HRCA model, the intervention used counteracted a rise in sickness absence costs at the company level, giving an average net effect of 266.5 Euros per person (full-time working) during an 8-month period. Using an analogous statistical analysis on the whole of the material, the contribution of the intervention counteracted a rise in sickness absence costs at the company level, giving an average net effect of 283.2 Euros. Using a statistical method it was possible to study the regression coefficients in sub-groups and calculate the p-values for these coefficients; in the younger group the intervention gave a calculated net contribution of 605.6 Euros with a p-value of 0.073, while the intervention net contribution in the older group had a very high p-value. Using the statistical model it was also possible to study contributions of other variables and interactions. This study established that the HRCA model and the integrated model produced approximately the same monetary outcomes. The integrated model, however, allowed a deeper understanding of the various possible relationships and quantified the results with confidence intervals.
Jones, Hayley E.; Hickman, Matthew; Kasprzyk-Hordern, Barbara; Welton, Nicky J.; Baker, David R.; Ades, A.E.
2014-01-01
Concentrations of metabolites of illicit drugs in sewage water can be measured with great accuracy and precision, thanks to the development of sensitive and robust analytical methods. Based on assumptions about factors including the excretion profile of the parent drug, routes of administration, and the number of individuals using the wastewater system, the level of consumption of a drug can be estimated from such measured concentrations. When presenting results from these 'back-calculations', the multiple sources of uncertainty are often discussed, but they are not usually explicitly taken into account in the estimation process. In this paper we demonstrate how these calculations can be placed in a more formal statistical framework by assuming a distribution for each parameter involved, based on a review of the evidence underpinning it. Using a Monte Carlo simulation approach, it is then straightforward to propagate uncertainty in each parameter through the back-calculations, producing a distribution, instead of a single estimate, of daily or average consumption. This can be summarised, for example, by a median and credible interval. To demonstrate this approach, we estimate cocaine consumption in a large urban UK population, using measured concentrations of two of its metabolites, benzoylecgonine and norbenzoylecgonine. We also demonstrate a more sophisticated analysis, implemented within a Bayesian statistical framework using Markov chain Monte Carlo simulation. Our model allows the two metabolites to simultaneously inform estimates of daily cocaine consumption and explicitly allows for variability between days. After accounting for this variability, the resulting credible interval for average daily consumption is appropriately wider, representing additional uncertainty. We discuss possibilities for extensions to the model, and whether analysis of wastewater samples has the potential to contribute to a prevalence model for illicit drug use. PMID:24636801
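A minimal sketch of propagating parameter uncertainty through a wastewater back-calculation by Monte Carlo sampling is shown below. All distributions and the excretion fraction are illustrative assumptions, not the evidence-based distributions reviewed in the paper; the molar-mass ratio for cocaine/benzoylecgonine is the only fixed constant.

```python
import numpy as np

rng = np.random.default_rng(1)

def cocaine_consumption_mc(n_draws=100_000):
    """Monte Carlo back-calculation of cocaine consumption (mg/day per 1000 people)."""
    conc_ng_l = rng.normal(400.0, 40.0, n_draws)       # benzoylecgonine concentration (assumed)
    flow_l_day = rng.normal(2.0e8, 2.0e7, n_draws)      # daily wastewater flow (assumed)
    population = rng.normal(9.0e5, 5.0e4, n_draws)      # population served (assumed)
    excretion_frac = rng.uniform(0.3, 0.45, n_draws)    # fraction excreted as metabolite (assumed)
    mw_ratio = 303.35 / 289.33                          # molar mass cocaine / benzoylecgonine
    load_mg_day = conc_ng_l * flow_l_day / 1e6          # ng/L * L/day -> mg/day
    consumption = load_mg_day * mw_ratio / excretion_frac / (population / 1000.0)
    return np.percentile(consumption, [2.5, 50, 97.5])

print(cocaine_consumption_mc())   # lower bound, median, upper bound of the simulated distribution
```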
The application of the pilot points in groundwater numerical inversion model
NASA Astrophysics Data System (ADS)
Hu, Bin; Teng, Yanguo; Cheng, Lirong
2015-04-01
Numerical inversion modeling has been widely applied in groundwater studies. Compared to traditional forward modeling, inverse modeling offers more room for investigation. Zonation and cell-by-cell inversion are the conventional approaches; the pilot-point method lies between them. The traditional inverse modeling approach often uses software to divide the model into several zones, so that only a few parameters need to be inverted. Such a distribution, however, is usually too simple, and the simulation results deviate from reality. Cell-by-cell inversion can, in theory, recover the most realistic parameter distribution, but it requires great computational effort and a large quantity of survey data for geostatistical simulation of the area. In contrast, the pilot-point method distributes a set of points throughout the model domains for parameter estimation; property values are assigned to model cells by Kriging, so that the heterogeneity of parameters within geological units is preserved. This reduces the geostatistical data requirements for the simulation area and bridges the gap between the two methods above. Pilot points also save computation time, improve the goodness of fit, and reduce the instability of the numerical model caused by large numbers of parameters, among other advantages. In this paper, we apply pilot points to a field whose structural formation heterogeneity and hydraulic parameters were unknown, and we compare the inversion results of the zonation and pilot-point methods. Through comparative analysis, we explore the characteristics of pilot points in groundwater inversion modeling. First, the modeler generates an initial spatially correlated field from a geostatistical model describing the case site, using the software Groundwater Vistas 6. Kriging is defined to obtain the values of the field functions over the model domain on the basis of their values at measurement and pilot-point locations (hydraulic conductivity); pilot points are then assigned to the interpolated field, which has been divided into 4 zones, and a range of disturbance values is added to the inversion targets to calculate the hydraulic conductivity. Third, after the inversion calculation (PEST), the interpolated field minimizes an objective function measuring the misfit between calculated and measured data; finding the optimum parameter values is an optimization problem. The following major conclusions can be drawn from the inversion modeling: (1) In a field with heterogeneous structural formations, the results of the pilot-point method are more realistic: better fitting of parameters and more stable numerical simulation (a stable residual distribution). Compared to zonation, it better reflects the heterogeneity of the study field. (2) The pilot-point method ensures that each parameter is sensitive and not entirely dependent on other parameters, which guarantees the relative independence and authenticity of the parameter estimation results. However, it requires more computation time than zonation. Key words: groundwater; pilot point; inverse model; heterogeneity; hydraulic conductivity
Stochastic empirical loading and dilution model (SELDM) version 1.0.0
Granato, Gregory E.
2013-01-01
The Stochastic Empirical Loading and Dilution Model (SELDM) is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such management measures for reducing these risks. The U.S. Geological Survey developed SELDM in cooperation with the Federal Highway Administration to help develop planning-level estimates of event mean concentrations, flows, and loads in stormwater from a site of interest and from an upstream basin. Planning-level estimates are defined as the results of analyses used to evaluate alternative management measures; planning-level estimates are recognized to include substantial uncertainties (commonly orders of magnitude). SELDM uses information about a highway site, the associated receiving-water basin, precipitation events, stormflow, water quality, and the performance of mitigation measures to produce a stochastic population of runoff-quality variables. SELDM provides input statistics for precipitation, prestorm flow, runoff coefficients, and concentrations of selected water-quality constituents from National datasets. Input statistics may be selected on the basis of the latitude, longitude, and physical characteristics of the site of interest and the upstream basin. The user also may derive and input statistics for each variable that are specific to a given site of interest or a given area. SELDM is a stochastic model because it uses Monte Carlo methods to produce the random combinations of input variable values needed to generate the stochastic population of values for each component variable. SELDM calculates the dilution of runoff in the receiving waters and the resulting downstream event mean concentrations and annual average lake concentrations. Results are ranked, and plotting positions are calculated, to indicate the level of risk of adverse effects caused by runoff concentrations, flows, and loads on receiving waters by storm and by year. Unlike deterministic hydrologic models, SELDM is not calibrated by changing values of input variables to match a historical record of values. Instead, input values for SELDM are based on site characteristics and representative statistics for each hydrologic variable. Thus, SELDM is an empirical model based on data and statistics rather than theoretical physiochemical equations. SELDM is a lumped parameter model because the highway site, the upstream basin, and the lake basin each are represented as a single homogeneous unit. Each of these source areas is represented by average basin properties, and results from SELDM are calculated as point estimates for the site of interest. Use of the lumped parameter approach facilitates rapid specification of model parameters to develop planning-level estimates with available data. The approach allows for parsimony in the required inputs to and outputs from the model and flexibility in the use of the model. For example, SELDM can be used to model runoff from various land covers or land uses by using the highway-site definition as long as representative water quality and impervious-fraction data are available.
SEPEM: A tool for statistical modeling the solar energetic particle environment
NASA Astrophysics Data System (ADS)
Crosby, Norma; Heynderickx, Daniel; Jiggens, Piers; Aran, Angels; Sanahuja, Blai; Truscott, Pete; Lei, Fan; Jacobs, Carla; Poedts, Stefaan; Gabriel, Stephen; Sandberg, Ingmar; Glover, Alexi; Hilgers, Alain
2015-07-01
Solar energetic particle (SEP) events are a serious radiation hazard for spacecraft as well as a severe health risk to humans traveling in space. Indeed, accurate modeling of the SEP environment constitutes a priority requirement for astrophysics and solar system missions and for human exploration in space. The European Space Agency's Solar Energetic Particle Environment Modelling (SEPEM) application server is a World Wide Web interface to a complete set of cross-calibrated data ranging from 1973 to 2013 as well as new SEP engineering models and tools. Both statistical and physical modeling techniques have been included, in order to cover the environment not only at 1 AU but also in the inner heliosphere ranging from 0.2 AU to 1.6 AU using a newly developed physics-based shock-and-particle model to simulate particle flux profiles of gradual SEP events. With SEPEM, SEP peak flux and integrated fluence statistics can be studied, as well as durations of high SEP flux periods. Furthermore, effects tools are also included to allow calculation of single event upset rate and radiation doses for a variety of engineering scenarios.
Statistical Issues for Calculating Reentry Hazards
NASA Technical Reports Server (NTRS)
Matney, Mark; Bacon, John
2016-01-01
A number of statistical tools have been developed over the years for assessing the risk that reentering objects pose to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris fragments might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine many of these theoretical assumptions, including the mathematical basis for the hazard calculations, and to outline the conditions under which the simplifying assumptions hold. This study also employs empirical and theoretical information to test these assumptions, and makes recommendations on how to improve the accuracy of these calculations in the future.
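A minimal sketch of the commonly used expected-casualty estimate, summing each surviving fragment's casualty area weighted by the population density under the ground track, is given below; the fragment list, population density, and assumed person cross-section are illustrative, not values from the paper.

```python
import math

PERSON_AREA_M2 = 0.36          # assumed cross-sectional area of a person

def casualty_area(fragment_area_m2):
    """Casualty area A_c = (sqrt(A_fragment) + sqrt(A_person))^2."""
    return (math.sqrt(fragment_area_m2) + math.sqrt(PERSON_AREA_M2)) ** 2

def expected_casualties(fragment_areas_m2, population_density_per_m2):
    """Expected casualties E_c = sum_i rho * A_c,i for a uniform density under the track."""
    return sum(population_density_per_m2 * casualty_area(a) for a in fragment_areas_m2)

# Illustrative: three surviving fragments over an average density of 15 people/km^2.
print(expected_casualties([0.5, 0.2, 0.1], 15.0 / 1.0e6))
```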
Evaluation of FSK models for radiative heat transfer under oxyfuel conditions
NASA Astrophysics Data System (ADS)
Clements, Alastair G.; Porter, Rachael; Pranzitelli, Alessandro; Pourkashanian, Mohamed
2015-01-01
Oxyfuel is a promising technology for carbon capture and storage (CCS) applied to combustion processes. It would be highly advantageous for the deployment of CCS to be able to model and optimise oxyfuel combustion; however, the increased concentrations of CO2 and H2O under oxyfuel conditions modify several fundamental processes of combustion, including radiative heat transfer. This study uses benchmark narrow-band radiation models to evaluate the influence of assumptions in global full-spectrum k-distribution (FSK) models, and whether they are suitable for modelling radiation in computational fluid dynamics (CFD) calculations of oxyfuel combustion. The statistical narrow-band (SNB) and correlated-k (CK) models are used to calculate benchmark data for the radiative source term and heat flux, which are then compared to the results calculated from the FSK models. Both the full-spectrum correlated-k (FSCK) and the full-spectrum scaled-k (FSSK) models are applied using up-to-date spectral data. The results show that the FSCK and FSSK methods achieve good agreement in the test cases. The FSCK method using a five-point Gauss quadrature scheme is recommended for CFD calculations under oxyfuel conditions; however, there are still potential inaccuracies in cases with very wide variations in the ratio between CO2 and H2O concentrations.
Earth Observing System Covariance Realism
NASA Technical Reports Server (NTRS)
Zaidi, Waqar H.; Hejduk, Matthew D.
2016-01-01
The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) goodness-of-fit (GOF) method is employed to determine whether a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
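A minimal sketch of one way to test covariance realism is shown below: squared Mahalanobis distances of the position errors are compared against a 3-degree-of-freedom chi-squared distribution. A Kolmogorov-Smirnov test is used here for simplicity; the paper's ECDF-based GOF statistic is not specified in this sketch, and the input data are synthetic.

```python
import numpy as np
from scipy import stats

def covariance_realism_gof(state_errors, covariances, alpha=0.05):
    """Test whether squared Mahalanobis distances follow a chi-squared(3) law.

    state_errors : (N, 3) definitive-minus-predicted position errors.
    covariances  : (N, 3, 3) propagated covariances at the same points.
    Returns the p-value and a pass/fail flag at the chosen significance level.
    """
    d2 = np.array([e @ np.linalg.solve(P, e) for e, P in zip(state_errors, covariances)])
    stat, p_value = stats.kstest(d2, stats.chi2(df=3).cdf)
    return p_value, p_value > alpha

# Synthetic example: unit-variance errors against identity covariances should pass.
errs = np.random.default_rng(0).normal(size=(200, 3))
covs = np.repeat(np.eye(3)[None, :, :], 200, axis=0)
print(covariance_realism_gof(errs, covs))
```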
A two-component rain model for the prediction of attenuation and diversity improvement
NASA Technical Reports Server (NTRS)
Crane, R. K.
1982-01-01
A new model was developed to predict attenuation statistics for a single Earth-satellite or terrestrial propagation path. The model was extended to provide predictions of the joint occurrences of specified or higher attenuation values on two closely spaced Earth-satellite paths. The joint statistics provide the information required to obtain diversity gain or diversity advantage estimates. The new model is meteorologically based. It was tested against available Earth-satellite beacon observations and terrestrial path measurements. The model employs the rain climate region descriptions of the Global rain model. The rms deviation between the predicted and observed attenuation values for the terrestrial path data was 35 percent, a result consistent with the expectations of the Global model when the rain rate distribution for the path is not used in the calculation. Within the United States the rms deviation between measurement and prediction was 36 percent but worldwide it was 79 percent.
NASA Technical Reports Server (NTRS)
Stewart, Richard W.; Thompson, Anne M.; Owens, Melody A.; Herwehe, Jerold A.
1989-01-01
A major tropospheric loss of soluble species such as nitric acid results from scavenging by water droplets. Several theoretical formulations have been advanced which relate an effective time-independent loss rate for soluble species to statistical properties of precipitation such as the wet fraction and length of a precipitation cycle. In this paper, various 'effective' loss rates that have been proposed are compared with the results of detailed time-dependent model calculations carried out over a seasonal time scale. The model is a stochastic precipitation model coupled to a tropospheric photochemical model. The results of numerous time-dependent seasonal model runs are used to derive numerical values for the nitric acid residence time for several assumed sets of precipitation statistics. These values are then compared with the results obtained by utilizing theoretical 'effective' loss rates in time-independent models.
Analysis/forecast experiments with a multivariate statistical analysis scheme using FGGE data
NASA Technical Reports Server (NTRS)
Baker, W. E.; Bloom, S. C.; Nestler, M. S.
1985-01-01
A three-dimensional, multivariate, statistical analysis method, optimal interpolation (OI) is described for modeling meteorological data from widely dispersed sites. The model was developed to analyze FGGE data at the NASA-Goddard Laboratory of Atmospherics. The model features a multivariate surface analysis over the oceans, including maintenance of the Ekman balance and a geographically dependent correlation function. Preliminary comparisons are made between the OI model and similar schemes employed at the European Center for Medium Range Weather Forecasts and the National Meteorological Center. The OI scheme is used to provide input to a GCM, and model error correlations are calculated for forecasts of 500 mb vertical water mixing ratios and the wind profiles. Comparisons are made between the predictions and measured data. The model is shown to be as accurate as a successive corrections model out to 4.5 days.
A statistical model of the wave field in a bounded domain
NASA Astrophysics Data System (ADS)
Hellsten, T.
2017-02-01
Numerical simulations of plasma heating with radiofrequency waves often require repetitive calculations of wave fields as the plasma evolves. To enable effective simulations, benchmarked formulas for the power deposition have been developed. Here, a statistical model applicable to waves with short wavelengths is presented, which gives the expected amplitude of the wave field as a superposition of four wave fields with weight coefficients depending on the single-pass damping, a_s. The weight coefficient for the wave field coherent with that calculated in the absence of reflection agrees with the coefficient for strong single-pass damping of an earlier developed heuristic model, for which the weight coefficients were obtained empirically using a full wave code to calculate the wave field and power deposition. Antennas launching electromagnetic waves into bounded domains are often designed to produce localised wave fields and power depositions in the limit of strong single-pass damping. The reflection of the waves changes the coupling and partly destroys the localisation of the wave field, which explains the apparent paradox arising from the earlier developed heuristic formula that only a fraction a_s^2(2 - a_s), and not a_s, of the power is absorbed with a profile corresponding to the power deposition for the first pass of the rays. A method to account for the change in the coupling spectrum caused by reflection when modelling the wave field with ray tracing in bounded media is proposed, which should be applicable to wave propagation in non-uniform media in more general geometries.
NASA Astrophysics Data System (ADS)
Gulvi, Nitin R.; Patel, Priyanka; Badani, Purav M.
2018-04-01
The dissociation pathways of multihalogenated alkyls show competition between molecular and atomic elimination products. Factors such as molecular structure, temperature, and pressure are known to influence this competition. Hence the present work explores the mechanism and kinetics of atomic (Br) and molecular (HBr and Br2) elimination upon pyrolysis of 1,1- and 1,2-ethyl dibromide (EDB). For this purpose, electronic structure calculations were performed at the DFT and CCSD(T) levels of theory. In addition to the concerted mechanism, an alternative, energetically efficient isomerisation pathway has been explored for molecular elimination. The energy calculations are further complemented by a detailed kinetic investigation over a wide range of temperature and pressure, using suitable models such as Canonical Transition State Theory, the Statistical Adiabatic Channel Model, and Troe's formalism. Our calculations suggest a high branching ratio for the dehydrohalogenation reaction from both isomers of EDB. The fall-off curve shows good agreement between the theoretically estimated and experimentally reported values.
ARM Climate Modeling Best Estimate Lamont, OK Statistical Summary (ARMBE-CLDRAD SGPC1)
McCoy, Renata; Xie, Shaocheng
2010-01-26
The monthly mean diurnal cycle is calculated from the hourly CMBE data with qcflag >= -1 (more than 30% valid data within the averaged hour). For 2-D clouds, only data from the period when both the MMCR and MPL were operating are used.
Quantum signature of chaos and thermalization in the kicked Dicke model
NASA Astrophysics Data System (ADS)
Ray, S.; Ghosh, A.; Sinha, S.
2016-09-01
We study the quantum dynamics of the kicked Dicke model (KDM) in terms of the Floquet operator, and we analyze the connection between chaos and thermalization in this context. The Hamiltonian map is constructed by suitably taking the classical limit of the Heisenberg equation of motion to study the corresponding phase-space dynamics, which shows a crossover from regular to chaotic motion by tuning the kicking strength. The fixed-point analysis and calculation of the Lyapunov exponent (LE) provide us with a complete picture of the onset of chaos in phase-space dynamics. We carry out a spectral analysis of the Floquet operator, which includes a calculation of the quasienergy spacing distribution and structural entropy to show the correspondence to the random matrix theory in the chaotic regime. Finally, we analyze the thermodynamics and statistical properties of the bosonic sector as well as the spin sector, and we discuss how such a periodically kicked system relaxes to a thermalized state in accordance with the laws of statistical mechanics.
Rainfall Threshold Assessment Corresponding to the Maximum Allowable Turbidity for Source Water.
Fan, Shu-Kai S; Kuan, Wen-Hui; Fan, Chihhao; Chen, Chiu-Yang
2016-12-01
This study aims to assess the upstream rainfall thresholds corresponding to the maximum allowable turbidity of source water, using monitoring data and artificial neural network computation. The Taipei Water Source Domain was selected as the study area, and the upstream rainfall records were collected for statistical analysis. Using analysis of variance (ANOVA), the cumulative rainfall records of one-day Ping-lin, two-day Ping-lin, two-day Tong-hou, one-day Guie-shan, and one-day Tai-ping (rainfall in the previous 24 or 48 hours at the named weather stations) were found to be the five most significant parameters for downstream turbidity development. An artificial neural network model was constructed to predict the downstream turbidity in the area investigated. The observed and model-calculated turbidity data were applied to assess the rainfall thresholds in the studied area. By setting preselected turbidity criteria, the upstream rainfall thresholds for these statistically determined rain gauge stations were calculated.
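As an illustration of the general workflow (not the authors' network), the sketch below trains a small scikit-learn neural network on synthetic rainfall-turbidity pairs and scans one gauge's cumulative rainfall to locate the amount at which a preselected turbidity limit is exceeded; all variable names and numbers are placeholders.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # X: cumulative rainfall at five gauges (mm); y: downstream turbidity (NTU).
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 300, size=(500, 5))
    y = 2.0 + 0.8 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(0, 20, 500)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                       random_state=0))
    model.fit(X, y)

    # Scan rainfall at one gauge (others fixed) until the predicted turbidity
    # first exceeds an allowable limit.
    limit_ntu = 250.0
    scan = np.linspace(0, 300, 301)
    grid = np.column_stack([scan] + [np.full_like(scan, 50.0)] * 4)
    pred = model.predict(grid)
    exceed = scan[pred > limit_ntu]
    print("rainfall threshold:", exceed[0] if exceed.size else "not reached")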
Software for Data Analysis with Graphical Models
NASA Technical Reports Server (NTRS)
Buntine, Wray L.; Roy, H. Scott
1994-01-01
Probabilistic graphical models are being used widely in artificial intelligence and statistics, for instance, in diagnosis and expert systems, as a framework for representing and reasoning with probabilities and independencies. They come with corresponding algorithms for performing statistical inference. This offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper illustrates the framework with an example and then presents some basic techniques for the task: problem decomposition and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.
Statistical hadronization with exclusive channels in e+e- annihilation
Ferroni, L.; Becattini, F.
2012-01-01
We present a systematic analysis of exclusive hadronic channels in e+e- collisions at centre-of-mass energies between 2.1 and 2.6 GeV within the statistical hadronization model. Because of the low multiplicities involved, calculations have been carried out in the full microcanonical ensemble, including conservation of energy-momentum, angular momentum, parity, isospin, and all relevant charges. We show that the data are in overall good agreement with the model for an energy density of about 0.5 GeV/fm3 and an extra strangeness suppression parameter γS ≈ 0.7, essentially the same values found with fits to inclusive multiplicities at higher energy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, S. G.; Walentosky, M. J.; Messinger, Justin
We present a new computational method for calculating the motion of stars in a dwarf spheroidal galaxy (dSph) that can use either Newtonian gravity or Modified Newtonian Dynamics (MOND). In our model, we explicitly calculate the motion of several thousand stars in a spherically symmetric gravitational potential, and we statistically obtain both the line-of-sight bulk velocity dispersion and dispersion profile. Our results for MOND-calculated bulk dispersions for Local Group dSph’s agree well with previous calculations and observations. Our MOND-calculated dispersion profiles are compared with the observations of Walker et al. for Milky Way dSph’s, and we present calculated dispersion profiles for a selection of Andromeda dSph’s.
NASA Technical Reports Server (NTRS)
Bauman, William H., III
2010-01-01
The AMU conducted an objective analysis of the MesoNAM forecasts compared to observed values from sensors at specified KSC/CCAFS wind towers by calculating the following statistics to verify the performance of the model: 1) Bias (mean difference), 2) Standard deviation of Bias, 3) Root Mean Square Error (RMSE), and 4) Hypothesis test for Bias = 0. The 45 WS LWOs use the MesoNAM to support launch weather operations. However, the actual performance of the model at KSC and CCAFS had not been measured objectively. The analysis compared the MesoNAM forecast winds, temperature, and dew point to the observed values from the sensors on wind towers. The data were stratified by tower sensor, month, and onshore/offshore wind direction based on the orientation of the coastline to each tower's location. The model's performance statistics were then calculated for each wind tower based on sensor height and model initialization time. The period of record for the data used in this task was based on the operational start of the current MesoNAM in mid-August 2006, and so the task began with the first full month of data, September 2006, through May 2010. The analysis of model performance indicated: a) The accuracy decreased as the forecast valid time from the model initialization increased, b) There was a diurnal signal in T with a cool bias during the late night and a warm bias during the afternoon, c) There was a diurnal signal in Td with a low bias during the afternoon and a high bias during the late night, and d) The model parameters at each vertical level most closely matched the observed parameters at heights closest to those vertical levels. The AMU developed a GUI that consists of a multi-level drop-down menu written in JavaScript embedded within the HTML code. This tool allows the LWO to easily and efficiently navigate among the charts and spreadsheet files containing the model performance statistics. The objective statistics give the LWOs knowledge of the model's strengths and weaknesses, and the GUI allows quick access to the data, which will result in improved forecasts for operations.
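The four verification statistics named above are standard; a minimal sketch of how they might be computed for one sensor/month stratum is given below, with hypothetical forecast and observed values.

    import numpy as np
    from scipy import stats

    def verification_stats(forecast, observed):
        """Bias, standard deviation of the bias, RMSE, and a t-test of the
        hypothesis that the mean bias is zero."""
        diff = np.asarray(forecast) - np.asarray(observed)
        bias = diff.mean()
        sd_bias = diff.std(ddof=1)
        rmse = np.sqrt((diff ** 2).mean())
        t_stat, p_value = stats.ttest_1samp(diff, 0.0)
        return bias, sd_bias, rmse, p_value

    fcst = np.array([10.2, 8.7, 12.1, 9.5])   # hypothetical forecast winds (m/s)
    obs  = np.array([9.8, 9.0, 11.5, 10.1])   # matching tower observations
    print(verification_stats(fcst, obs))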
NASA Astrophysics Data System (ADS)
Motornenko, A.; Bravina, L.; Gorenstein, M. I.; Magner, A. G.; Zabrodin, E.
2018-03-01
Properties of an equilibrated nucleon system are studied within the ultra-relativistic quantum molecular dynamics (UrQMD) transport model. The UrQMD calculations are done within a finite box with periodic boundary conditions. The system achieves thermal equilibrium through nucleon-nucleon elastic scattering. For the UrQMD equilibrium state, the nucleon energy spectra, equation of state, particle number fluctuations, and shear viscosity η are calculated. The UrQMD results are compared with both statistical mechanics and Chapman-Enskog kinetic theory for a classical system of nucleons with hard-core repulsion.
Algorithms for the Computation of Debris Risk
NASA Technical Reports Server (NTRS)
Matney, Mark J.
2017-01-01
Determining the risks from space debris involves a number of statistical calculations. These calculations inevitably involve assumptions about geometry - including the physical geometry of orbits and the geometry of satellites. A number of tools have been developed in NASA’s Orbital Debris Program Office to handle these calculations, many of which have never been published before. These include algorithms that are used in NASA’s Orbital Debris Engineering Model ORDEM 3.0, as well as other tools useful for computing orbital collision rates and ground casualty risks. This paper presents an introduction to these algorithms and the assumptions upon which they are based.
Algorithms for the Computation of Debris Risks
NASA Technical Reports Server (NTRS)
Matney, Mark
2017-01-01
Determining the risks from space debris involves a number of statistical calculations. These calculations inevitably involve assumptions about geometry - including the physical geometry of orbits and the geometry of non-spherical satellites. A number of tools have been developed in NASA's Orbital Debris Program Office to handle these calculations, many of which have never been published before. These include algorithms that are used in NASA's Orbital Debris Engineering Model ORDEM 3.0, as well as other tools useful for computing orbital collision rates and ground casualty risks. This paper will present an introduction to these algorithms and the assumptions upon which they are based.
NASA Astrophysics Data System (ADS)
Fuchs, Sven; Schütz, Felina; Förster, Andrea; Förster, Hans-Jürgen
2013-04-01
The thermal conductivity (TC) of a rock is, together with the temperature gradient, the basic parameter for determining the heat flow from the Earth's interior. Moreover, it forms the input to models aimed at temperature prognoses for geothermal reservoirs at depths not yet reached by boreholes. Rock TC is therefore paramount in geothermal exploration and site selection. Most commonly, the TC of a rock is determined in the laboratory on samples that are either dry or water-saturated. Because sample saturation is time-consuming, it is desirable, especially if large numbers of samples need to be assessed, to develop an approach that quickly and reliably converts dry-measured bulk TC into the respective saturated value without applying the saturation procedure. Different petrophysical models can be deployed to calculate the matrix TC of a rock from the bulk TC and vice versa, if the effective porosity is known (e.g., from well logging data) and the TC of the saturating fluid (e.g., gas, oil, water) is considered. We have studied, for a large suite of different sedimentary rocks, the performance of two-component (rock matrix, porosity) models that are widely used in geothermics (arithmetic mean, geometric mean, harmonic mean, Hashin-Shtrikman mean, and effective medium theory mean). The data set consisted of 1147 TC data from three different sedimentary basins (North German Basin, Molasse Basin, Mesozoic platform sediments of the northern Sinai Microplate in Israel). Four lithotypes (sandstone, mudstone, limestone, dolomite) were studied, exhibiting bulk TC in the range between 1.0 and 6.5 W/(mK). The quality of fit between measured (laboratory) and calculated bulk TC values was studied separately for the influence of lithotype, saturation fluid (water and isooctane), and rock anisotropy (parallel and perpendicular to bedding). The geometric mean model displays the best correspondence between calculated and measured bulk TC; however, the fit is still not satisfactory. To improve the fit of the models, correction equations were derived from the statistical data; their application significantly improves the accuracy of the calculated bulk TC. The "corrected" geometric mean is the only model universally applicable to different types of sedimentary rocks and is therefore recommended for the calculation of bulk TC. Finally, the statistical analysis also yielded lithotype-specific conversion equations, which permit calculation of the water-saturated bulk TC from dry-measured TC and porosity (e.g., well-log-derived porosity). This approach has the advantage that the saturated bulk TC can be calculated readily without applying any mixing model. The expected errors with this approach are in the range of 5 to 10 % (Fuchs et al., 2013).
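As a sketch of the two-component idea (assuming the geometric mean model named above), the snippet below converts a dry-measured bulk TC into a water-saturated value via the matrix TC; the sandstone numbers are illustrative placeholders, not data from the study.

    import numpy as np

    def bulk_tc_geometric(tc_matrix, tc_fluid, porosity):
        """Geometric-mean two-component model:
        lambda_bulk = lambda_matrix**(1 - phi) * lambda_fluid**phi."""
        return tc_matrix ** (1.0 - porosity) * tc_fluid ** porosity

    def matrix_tc_from_dry(tc_bulk_dry, tc_air, porosity):
        """Invert the geometric mean to recover the matrix TC from a
        dry-measured bulk value (pore fluid = air)."""
        return (tc_bulk_dry / tc_air ** porosity) ** (1.0 / (1.0 - porosity))

    # Hypothetical sandstone: dry bulk TC 3.0 W/(m K), porosity 15 %.
    phi, tc_air, tc_water = 0.15, 0.026, 0.60
    tc_matrix = matrix_tc_from_dry(3.0, tc_air, phi)
    print("water-saturated bulk TC:",
          round(bulk_tc_geometric(tc_matrix, tc_water, phi), 2), "W/(m K)")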
Ozaki, Vitor A.; Ghosh, Sujit K.; Goodwin, Barry K.; Shirota, Ricardo
2009-01-01
This article presents a statistical model of agricultural yield data based on a set of hierarchical Bayesian models that allows joint modeling of temporal and spatial autocorrelation. This method captures a comprehensive range of the various uncertainties involved in predicting crop insurance premium rates as opposed to the more traditional ad hoc, two-stage methods that are typically based on independent estimation and prediction. A panel data set of county-average yield data was analyzed for 290 counties in the State of Paraná (Brazil) for the period of 1990 through 2002. Posterior predictive criteria are used to evaluate different model specifications. This article provides substantial improvements in the statistical and actuarial methods often applied to the calculation of insurance premium rates. These improvements are especially relevant to situations where data are limited. PMID:19890450
NASA Astrophysics Data System (ADS)
Knani, S.; Aouaini, F.; Bahloul, N.; Khalfaoui, M.; Hachicha, M. A.; Ben Lamine, A.; Kechaou, N.
2014-04-01
An analytical expression for modeling the water adsorption isotherms of food and agricultural products is developed using the statistical mechanics formalism. The model is then used to fit and interpret the isotherms of four varieties of Tunisian olive leaves, called “Chemlali”, “Chemchali”, “Chetoui”, and “Zarrazi”. The parameters involved in the model, such as the number of adsorbed water molecules per site, n, the receptor-site density, NM, and the energetic parameters, a1 and a2, were determined by fitting the experimental adsorption isotherms at temperatures ranging from 303 to 323 K, and the fitted values are interpreted. The model is then applied to calculate thermodynamic functions that govern the adsorption mechanism, such as the entropy, the Gibbs free enthalpy, and the internal energy.
Biological evolution and statistical physics
NASA Astrophysics Data System (ADS)
Drossel, Barbara
2001-03-01
This review is an introduction to theoretical models and mathematical calculations for biological evolution, aimed at physicists. The methods in the field are naturally very similar to those used in statistical physics, although the majority of publications have appeared in biology journals. The review has three parts, which can be read independently. The first part deals with evolution in fitness landscapes and includes Fisher's theorem, adaptive walks, quasispecies models, effects of finite population sizes, and neutral evolution. The second part studies models of coevolution, including evolutionary game theory, kin selection, group selection, sexual selection, speciation, and coevolution of hosts and parasites. The third part discusses models for networks of interacting species and their extinction avalanches. Throughout the review, attention is paid to giving the necessary biological information, and to pointing out the assumptions underlying the models, and their limits of validity.
Wherry, Susan A.; Wood, Tamara M.
2018-04-27
A whole lake eutrophication (WLE) model approach for phosphorus and cyanobacterial biomass in Upper Klamath Lake, south-central Oregon, is presented here. The model is a successor to a previous model developed to inform a Total Maximum Daily Load (TMDL) for phosphorus in the lake, but is based on net primary production (NPP), which can be calculated from dissolved oxygen, rather than scaling up a small-scale description of cyanobacterial growth and respiration rates. This phase 3 WLE model is a refinement of the proof-of-concept developed in phase 2, which was the first attempt to use NPP to simulate cyanobacteria in the TMDL model. The calibration of the calculated NPP WLE model was successful, with performance metrics indicating a good fit to calibration data, and the calculated NPP WLE model was able to simulate mid-season bloom decreases, a feature that previous models could not reproduce. In order to use the model to simulate future scenarios based on phosphorus load reduction, a multivariate regression model was created to simulate NPP as a function of the model state variables (phosphorus and chlorophyll a) and measured meteorological and temperature model inputs. The NPP time series was split into a low- and high-frequency component using wavelet analysis, and regression models were fit to the components separately, with moderate success. The regression models for NPP were incorporated in the WLE model, referred to as the “scenario” WLE (SWLE), and the fit statistics for phosphorus during the calibration period were mostly unchanged. The fit statistics for chlorophyll a, however, were degraded. These statistics are still an improvement over prior models, and indicate that the SWLE is appropriate for long-term predictions even though it misses some of the seasonal variations in chlorophyll a. The complete whole lake SWLE model, with multivariate regression to predict NPP, was used to make long-term simulations of the response to 10-, 20-, and 40-percent reductions in tributary nutrient loads. The long-term mean water column concentration of total phosphorus was reduced by 9, 18, and 36 percent, respectively, in response to these load reductions. The long-term water column chlorophyll a concentration was reduced by 4, 13, and 44 percent, respectively. The adjustment to a new equilibrium between the water column and sediments occurred over about 30 years.
Statistical fluctuations in pedestrian evacuation times and the effect of social contagion
NASA Astrophysics Data System (ADS)
Nicolas, Alexandre; Bouzat, Sebastián; Kuperman, Marcelo N.
2016-08-01
Mathematical models of pedestrian evacuation and the associated simulation software have become essential tools for the assessment of the safety of public facilities and buildings. While a variety of models is now available, their calibration and test against empirical data are generally restricted to global averaged quantities; the statistics compiled from the time series of individual escapes ("microscopic" statistics) measured in recent experiments are thus overlooked. In the same spirit, much research has primarily focused on the average global evacuation time, whereas the whole distribution of evacuation times over some set of realizations should matter. In the present paper we propose and discuss the validity of a simple relation between this distribution and the microscopic statistics, which is theoretically valid in the absence of correlations. To this purpose, we develop a minimal cellular automaton, with features that afford a semiquantitative reproduction of the experimental microscopic statistics. We then introduce a process of social contagion of impatient behavior in the model and show that the simple relation under test may dramatically fail at high contagion strengths, the latter being responsible for the emergence of strong correlations in the system. We conclude with comments on the potential practical relevance for safety science of calculations based on microscopic statistics.
Comparison of Histograms for Use in Cloud Observation and Modeling
NASA Technical Reports Server (NTRS)
Green, Lisa; Xu, Kuan-Man
2005-01-01
Cloud observation and cloud modeling data can be presented in histograms for each characteristic to be measured. Combining information from single-cloud histograms yields a summary histogram. Summary histograms can be compared to each other to reach conclusions about the behavior of an ensemble of clouds in different places at different times or about the accuracy of a particular cloud model. As in any scientific comparison, it is necessary to decide whether any apparent differences are statistically significant. The usual methods of deciding statistical significance when comparing histograms do not apply in this case because they assume independent data. Thus, a new method is necessary. The proposed method uses the Euclidean distance metric and bootstrapping to calculate the significance level.
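The abstract does not spell out the resampling scheme, so the sketch below is only one plausible reading: it pools the per-cloud samples, draws bootstrap ensembles under the null hypothesis, and compares the Euclidean distance between summary histograms against the observed distance. All data here are synthetic.

    import numpy as np

    def summary_histogram(clouds, bins):
        """Average the normalized per-cloud histograms into one summary histogram."""
        return np.mean([np.histogram(c, bins=bins, density=True)[0] for c in clouds],
                       axis=0)

    def histogram_distance_test(clouds_a, clouds_b, bins, n_boot=2000, seed=0):
        """Euclidean distance between two summary histograms plus a bootstrap
        significance level obtained by resampling whole clouds (not individual
        data points) from the pooled set under the null hypothesis."""
        rng = np.random.default_rng(seed)
        observed = np.linalg.norm(summary_histogram(clouds_a, bins)
                                  - summary_histogram(clouds_b, bins))
        pooled = list(clouds_a) + list(clouds_b)
        hits = 0
        for _ in range(n_boot):
            res_a = [pooled[i] for i in rng.integers(0, len(pooled), len(clouds_a))]
            res_b = [pooled[i] for i in rng.integers(0, len(pooled), len(clouds_b))]
            d = np.linalg.norm(summary_histogram(res_a, bins)
                               - summary_histogram(res_b, bins))
            hits += d >= observed
        return observed, hits / n_boot

    rng = np.random.default_rng(1)
    ens_a = [rng.normal(0.0, 1.0, 200) for _ in range(30)]
    ens_b = [rng.normal(0.3, 1.0, 200) for _ in range(30)]
    print(histogram_distance_test(ens_a, ens_b, np.linspace(-4, 4, 21)))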
The Application of Probabilistic Methods to the Mistuning Problem
NASA Technical Reports Server (NTRS)
Griffin, J. H.; Rossi, M. R.; Feiner, D. M.
2004-01-01
FMM is a reduced order model for efficiently calculating the forced response of a mistuned bladed disk. FMM ID is a companion program which determines the mistuning in a particular rotor. Together, these methods provide a way to acquire data on the mistuning in a population of bladed disks, and then simulate the forced response of the fleet. This process is tested experimentally, and the simulated results are compared with laboratory measurements of a fleet of test rotors. The method is shown to work quite well. It is found that the accuracy of the results depends on two factors: the quality of the statistical model used to characterize mistuning, and how sensitive the system is to errors in the statistical modeling.
Modelling of Rail Vehicles and Track for Calculation of Ground-Vibration Transmission Into Buildings
NASA Astrophysics Data System (ADS)
Hunt, H. E. M.
1996-05-01
A methodology for the calculation of vibration transmission from railways into buildings is presented. The method permits existing models of railway vehicles and track to be incorporated, and it has application to any model of vibration transmission through the ground. Special attention is paid to the relative phasing between adjacent axle-force inputs to the rail, so that vibration transmission may be calculated as a random process. The vehicle-track model is used in conjunction with a building model of infinite length. The track and building are infinite and parallel to each other, and the applied forces are statistically stationary in space, so that vibration levels at any two points along the building are the same. The methodology is two-dimensional for the purpose of application of random process theory, but fully three-dimensional for calculation of vibration transmission from the track and through the ground into the foundations of the building. The computational efficiency of the method will interest engineers faced with the task of reducing vibration levels in buildings. It is possible to assess the relative merits of using rail pads, under-sleeper pads, ballast mats, floating-slab track or base isolation for particular applications.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 false Statistics. 1065.602 Section 1065.602... PROCEDURES Calculations and Data Requirements § 1065.602 Statistics. (a) Overview. This section contains equations and example calculations for statistics that are specified in this part. In this section we use...
Calculation of streamflow statistics for Ontario and the Great Lakes states
Piggott, Andrew R.; Neff, Brian P.
2005-01-01
Basic, flow-duration, and n-day frequency statistics were calculated for 779 current and historical streamflow gages in Ontario and 3,157 streamflow gages in the Great Lakes states with length-of-record daily mean streamflow data ending on December 31, 2000 and September 30, 2001, respectively. The statistics were determined using the U.S. Geological Survey’s SWSTAT and IOWDM, ANNIE, and LIBANNE software and Linux shell and PERL programming that enabled the mass processing of the data and calculation of the statistics. Verification exercises were performed to assess the accuracy of the processing and calculations. The statistics and descriptions, longitudes and latitudes, and drainage areas for each of the streamflow gages are summarized in ASCII text files and ESRI shapefiles.
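As a rough illustration of the kinds of statistics involved (not the SWSTAT implementation), the sketch below computes simple flow-duration percentiles and an n-day low flow from a daily streamflow series; the Weibull plotting-position convention is an assumption.

    import numpy as np

    def flow_duration(daily_q, percents=(10, 25, 50, 75, 90)):
        """Flow-duration statistics: discharge equalled or exceeded the given
        percentage of time (e.g. the 90-percent value is a low-flow statistic)."""
        q = np.sort(np.asarray(daily_q, dtype=float))[::-1]        # descending
        exceedance = np.arange(1, q.size + 1) / (q.size + 1) * 100.0
        return {p: float(np.interp(p, exceedance, q)) for p in percents}

    def n_day_low_flow(daily_q, n=7):
        """Minimum n-day mean flow of the record (basis of n-day frequency stats)."""
        q = np.asarray(daily_q, dtype=float)
        rolling = np.convolve(q, np.ones(n) / n, mode="valid")
        return float(rolling.min())

    daily_q = np.abs(np.random.default_rng(0).lognormal(2.0, 0.8, 3650))  # synthetic record
    print(flow_duration(daily_q), n_day_low_flow(daily_q, 7))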
SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.
Chu, Annie; Cui, Jenny; Dinov, Ivo D
2009-03-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test, and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model, and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least-squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most updated information and newly added models.
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
2002-01-01
The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type 1 censoring. The software was verified by reproducing results published by others.
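A minimal sketch of a two-parameter Weibull maximum-likelihood fit with type 1 censoring is given below; it is a generic illustration, not the verified software described in the report, and the fatigue lives are invented.

    import numpy as np
    from scipy.optimize import minimize

    def weibull_mle_type1(cycles, failed):
        """Maximum-likelihood fit of a two-parameter Weibull to fatigue lives
        with type 1 (time-terminated) censoring.  `failed` marks specimens that
        actually failed; the rest are suspensions at their recorded cycles."""
        t = np.asarray(cycles, dtype=float)
        d = np.asarray(failed, dtype=bool)

        def negloglik(params):
            beta, eta = np.exp(params)               # keep both parameters positive
            z = (t / eta) ** beta
            ll = np.sum(np.log(beta / eta) + (beta - 1) * np.log(t[d] / eta) - z[d])
            ll += np.sum(-z[~d])                     # suspensions: log-survival only
            return -ll

        res = minimize(negloglik, x0=[0.0, np.log(t.mean())], method="Nelder-Mead")
        beta_hat, eta_hat = np.exp(res.x)
        return beta_hat, eta_hat

    # Hypothetical gear-fatigue lives in cycles; the last two are suspensions.
    cycles = [1.2e6, 1.8e6, 2.3e6, 2.9e6, 3.5e6, 4.0e6, 4.0e6]
    failed = [True, True, True, True, True, False, False]
    print(weibull_mle_type1(cycles, failed))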
Ricker, Martin; Peña Ramírez, Víctor M.; von Rosen, Dietrich
2014-01-01
Growth curves are monotonically increasing functions that measure repeatedly the same subjects over time. The classical growth curve model in the statistical literature is the Generalized Multivariate Analysis of Variance (GMANOVA) model. In order to model the tree trunk radius (r) over time (t) of trees on different sites, GMANOVA is combined here with the adapted PL regression model Q = A·T+E, where for and for , A = initial relative growth to be estimated, , and E is an error term for each tree and time point. Furthermore, Ei[–b·r] = , , with TPR being the turning point radius in a sigmoid curve, and at is an estimated calibrating time-radius point. Advantages of the approach are that growth rates can be compared among growth curves with different turning point radiuses and different starting points, hidden outliers are easily detectable, the method is statistically robust, and heteroscedasticity of the residuals among time points is allowed. The model was implemented with dendrochronological data of 235 Pinus montezumae trees on ten Mexican volcano sites to calculate comparison intervals for the estimated initial relative growth . One site (at the Popocatépetl volcano) stood out, with being 3.9 times the value of the site with the slowest-growing trees. Calculating variance components for the initial relative growth, 34% of the growth variation was found among sites, 31% among trees, and 35% over time. Without the Popocatépetl site, the numbers changed to 7%, 42%, and 51%. Further explanation of differences in growth would need to focus on factors that vary within sites and over time. PMID:25402427
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2016-01-01
This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bounds formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate output quantity of interest probability densities as a means to assess the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.
Lods, wrods, and mods: the interpretation of lod scores calculated under different models.
Hodge, S E; Elston, R C
1994-01-01
In this paper we examine the relationships among classical lod scores, "wrod" scores (lod scores calculated under the wrong genetic model), and "mod" scores (lod scores maximized over genetic model parameters). We compare the behavior of these scores when the state of nature is linkage to their behavior when the state of nature is no linkage. We describe sufficient conditions for mod scores to be valid and discuss their use to determine the correct genetic model. We show that lod scores represent a likelihood-ratio test for independence. We explain the "ascertainment-assumption-free" aspect of using mod scores to determine mode of inheritance and we set this aspect into a well-established statistical framework. Finally, we summarize practical guidelines for the use of mod scores.
Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.
2012-01-01
The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.
In-beam fission study for Heavy Element Synthesis
NASA Astrophysics Data System (ADS)
Nishio, Katsuhisa
2013-12-01
Fission fragment mass distributions were measured in heavy-ion induced fissions using 238U target nucleus. The measured mass distributions changed drastically with incident energy. The results are explained by a change of the ratio between fusion and quasifission with nuclear orientation. A calculation based on a fluctuation dissipation model reproduced the mass distributions and their incident energy dependence. Fusion probability was determined in the analysis. Evaporation residue cross sections were calculated with a statistical model in the reactions of 30Si + 238U and 34S + 238U using the obtained fusion probability in the entrance channel. The results agree with the measured cross sections for seaborgium and hassium isotopes.
NASA Technical Reports Server (NTRS)
Sprowls, D. O.; Bucci, R. J.; Ponchel, B. M.; Brazill, R. L.; Bretz, P. E.
1984-01-01
A technique is demonstrated for accelerated stress corrosion testing of high strength aluminum alloys. The method offers better precision and shorter exposure times than traditional pass/fail procedures. The approach uses data from tension tests performed on replicate groups of smooth specimens after various lengths of exposure to static stress. The breaking strength measures degradation in the test specimen's load-carrying ability due to the environmental attack. Analysis of breaking load data by extreme value statistics enables the calculation of survival probabilities and a statistically defined threshold stress applicable to the specific test conditions. A fracture mechanics model is given which quantifies the depth of attack in the stress-corroded specimen by an effective flaw size calculated from the breaking stress and the material's strength and fracture toughness properties. Comparisons are made with experimental results from three tempers of 7075 alloy plate tested by the breaking load method and by traditional tests of statically loaded smooth tension bars and conventional precracked specimens.
Statistical Mechanics Model of Solids with Defects
NASA Astrophysics Data System (ADS)
Kaufman, M.; Walters, P. A.; Ferrante, J.
1997-03-01
Previously (M. Kaufman, J. Ferrante, NASA Tech. Memor., 1996), we examined the phase diagram for the failure of a solid under isotropic expansion and compression as a function of stress and temperature with the "springs" modelled by the universal binding energy relation (UBER) (J. H. Rose, J. R. Smith, F. Guinea, J. Ferrante, Phys. Rev. B 29, 2963 (1984)). In the previous calculation we assumed that the "springs" failed independently and that the strain is uniform. In the present work, we have extended this statistical model of mechanical failure by allowing for correlations between "springs" and for thermal fluctuations in strains. The springs are now modelled in the harmonic approximation with a failure threshold energy E0, as an intermediate step in future studies to reinclude the full non-linear dependence of the UBER for modelling the interactions. We use the Migdal-Kadanoff renormalization-group method to determine the phase diagram of the model and to compute the free energy.
Obtaining the variance of gametic diversity with genomic models
USDA-ARS's Scientific Manuscript database
It may be possible to use information about the variability among gametes (spermatozoa and ova) to select parents that are more likely than average to produce offspring with extremely high or low breeding values. In this study, statistical formulae were developed to calculate variability among gamet...
Statistical mechanics of neocortical interactions: Constraints on 40-Hz models of short-term memory
NASA Astrophysics Data System (ADS)
Ingber, Lester
1995-10-01
Calculations presented in L. Ingber and P.L. Nunez, Phys. Rev. E 51, 5074 (1995) detailed the evolution of short-term memory in the neocortex, supporting the empirical 7+/-2 rule of constraints on the capacity of neocortical processing. These results are given further support when other recent models of 40-Hz subcycles of low-frequency oscillations are considered.
Statistics of the residual refraction errors in laser ranging data
NASA Technical Reports Server (NTRS)
Gardner, C. S.
1977-01-01
A theoretical model for the range error covariance was derived by assuming that the residual refraction errors are due entirely to errors in the meteorological data which are used to calculate the atmospheric correction. The properties of the covariance function are illustrated by evaluating the theoretical model for the special case of a dense network of weather stations uniformly distributed within a circle.
Steve P. Verrill; Frank C. Owens; David E. Kretschmann; Rubin Shmulsky
2017-01-01
It is common practice to assume that a two-parameter Weibull probability distribution is suitable for modeling lumber properties. Verrill and co-workers demonstrated theoretically and empirically that the modulus of rupture (MOR) distribution of visually graded or machine stress rated (MSR) lumber is not distributed as a Weibull. Instead, the tails of the MOR...
A Multiplicative Cascade Model for High-Resolution Space-Time Downscaling of Rainfall
NASA Astrophysics Data System (ADS)
Raut, Bhupendra A.; Seed, Alan W.; Reeder, Michael J.; Jakob, Christian
2018-02-01
Distributions of rainfall with the time and space resolutions of minutes and kilometers, respectively, are often needed to drive the hydrological models used in a range of engineering, environmental, and urban design applications. The work described here is the first step in constructing a model capable of downscaling rainfall to scales of minutes and kilometers from time and space resolutions of several hours and a hundred kilometers. A multiplicative random cascade model known as the Short-Term Ensemble Prediction System is run with parameters from the radar observations at Melbourne (Australia). The orographic effects are added through a multiplicative correction factor after the model is run. In the first set of model calculations, 112 significant rain events over Melbourne are simulated 100 times. Because of the stochastic nature of the cascade model, the simulations represent 100 possible realizations of the same rain event. The cascade model produces realistic spatial and temporal patterns of rainfall at 6 min and 1 km resolution (the resolution of the radar data), the statistical properties of which are in close agreement with observations. In the second set of calculations, the cascade model is run continuously for all days from January 2008 to August 2015 and the rainfall accumulations are compared at 12 locations in the greater Melbourne area. The statistical properties of the observations lie within the envelope of the 100 ensemble members. The model successfully reproduces the frequency distribution of the 6 min rainfall intensities, storm durations, interarrival times, and autocorrelation function.
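The Short-Term Ensemble Prediction System has its own parameterization; the sketch below shows only the generic multiplicative-cascade idea it builds on, disaggregating one coarse rainfall amount with mass-preserving log-normal weights (the weight distribution and number of levels are assumptions).

    import numpy as np

    def multiplicative_cascade(coarse_rain, levels=6, sigma=0.3, seed=0):
        """Disaggregate a single coarse-grid rainfall amount onto a
        2**levels x 2**levels grid with a log-normal multiplicative cascade.
        Each refinement splits a cell into 4 children and multiplies by
        independent weights with mean 1, so mass is preserved on average."""
        rng = np.random.default_rng(seed)
        field = np.array([[coarse_rain]], dtype=float)
        for _ in range(levels):
            field = np.kron(field, np.ones((2, 2)))   # refine the grid by 2 in each direction
            w = rng.lognormal(mean=-0.5 * sigma ** 2, sigma=sigma, size=field.shape)
            field *= w
        return field

    fine = multiplicative_cascade(4.0)   # e.g. 4 mm/h spread over a 64 x 64 grid
    print(fine.shape, fine.mean())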
Statistical mechanics of simple models of protein folding and design.
Pande, V S; Grosberg, A Y; Tanaka, T
1997-01-01
It is now believed that the primary equilibrium aspects of simple models of protein folding are understood theoretically. However, current theories often resort to rather heavy mathematics to overcome some technical difficulties inherent in the problem or start from a phenomenological model. To this end, we take a new approach in this pedagogical review of the statistical mechanics of protein folding. The benefit of our approach is a drastic mathematical simplification of the theory, without resort to any new approximations or phenomenological prescriptions. Indeed, the results we obtain agree precisely with previous calculations. Because of this simplification, we are able to present here a thorough and self contained treatment of the problem. Topics discussed include the statistical mechanics of the random energy model (REM), tests of the validity of REM as a model for heteropolymer freezing, freezing transition of random sequences, phase diagram of designed ("minimally frustrated") sequences, and the degree to which errors in the interactions employed in simulations of either folding and design can still lead to correct folding behavior. PMID:9414231
Applied statistics in ecology: common pitfalls and simple solutions
E. Ashley Steel; Maureen C. Kennedy; Patrick G. Cunningham; John S. Stanovick
2013-01-01
The most common statistical pitfalls in ecological research are those associated with data exploration, the logic of sampling and design, and the interpretation of statistical results. Although one can find published errors in calculations, the majority of statistical pitfalls result from incorrect logic or interpretation despite correct numerical calculations. There...
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
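In the spirit of the student exercises mentioned at the end, the sketch below is a self-contained analog Monte Carlo routine (not MCNP and not course material) that samples exponential path lengths and isotropic scattering to estimate transmission through a one-dimensional slab; all cross sections are invented.

    import numpy as np

    def slab_transmission(sigma_total, sigma_scatter, thickness,
                          n_particles=20_000, seed=1):
        """Analog Monte Carlo estimate of the fraction of particles transmitted
        through a 1-D slab with isotropic scattering (simplified transport)."""
        rng = np.random.default_rng(seed)
        transmitted = 0
        for _ in range(n_particles):
            x, mu = 0.0, 1.0                   # start at the left face, moving right
            while True:
                x += mu * -np.log(rng.random()) / sigma_total   # sample a path length
                if x < 0.0:                    # leaked back out the left face
                    break
                if x > thickness:              # transmitted through the slab
                    transmitted += 1
                    break
                if rng.random() < sigma_scatter / sigma_total:
                    mu = 2.0 * rng.random() - 1.0   # isotropic scatter
                else:
                    break                      # absorbed
        return transmitted / n_particles

    print(slab_transmission(sigma_total=1.0, sigma_scatter=0.5, thickness=2.0))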
NASA Astrophysics Data System (ADS)
Kutiev, Ivan; Marinov, Pencho; Fidanova, Stefka; Belehaki, Anna; Tsagouri, Ioanna
2012-12-01
Validation results for the latest version of the TaD model (TaDv2) show realistic reconstruction of the electron density profiles (EDPs), with an average error of 3 TECU, similar to the error obtained from GNSS-TEC-calculated parameters. The work presented here aims to further improve the accuracy of the TaD topside reconstruction by adjusting the TEC parameter calculated from the TaD model to the TEC parameter calculated from GNSS RINEX files provided by receivers co-located with the Digisondes. The performance of the new version is tested during a storm period, demonstrating further improvements with respect to the previous version. Statistical comparison of modeled and observed TEC confirms the validity of the proposed adjustment. A significant benefit of the proposed upgrade is that it facilitates the real-time implementation of TaD. The model needs a reliable measure of the scale height at the peak height, which is supposed to be provided by Digisondes. Often, the automatic scaling software fails to correctly calculate the scale height at the peak, Hm, due to interference in the received signal, so the model-estimated topside scale height is wrongly calculated, leading to unrealistic results for the modeled EDP. The proposed TEC adjustment forces the model to correctly reproduce the topside scale height despite inaccurate values of Hm. This adjustment is very important for the application of TaD in an operational environment.
NASA Technical Reports Server (NTRS)
Rastaetter, L.; Kuznetsova, M.; Hesse, M.; Pulkkinen, A.; Glocer, A.; Yu, Y.; Meng, X.; Raeder, J.; Wiltberger, M.; Welling, D.;
2011-01-01
In this paper the metrics-based results of the Dst part of the 2008-2009 GEM Metrics Challenge are reported. The Metrics Challenge asked modelers to submit results for 4 geomagnetic storm events and 5 different types of observations that can be modeled by statistical, climatological, or physics-based (e.g., MHD) models of the magnetosphere-ionosphere system. We present the results of over 25 model settings that were run at the Community Coordinated Modeling Center (CCMC) and at the institutions of various modelers for these events. To measure the performance of each of the models against the observations, we use comparisons of one-hour averaged model data with the Dst index issued by the World Data Center for Geomagnetism, Kyoto, Japan, and direct comparison of one-minute model data with the one-minute Dst index calculated by the United States Geological Survey (USGS).
Cavalcanti, Paulo Ernando Ferraz; Sá, Michel Pompeu Barros de Oliveira; dos Santos, Cecília Andrade; Esmeraldo, Isaac Melo; Chaves, Mariana Leal; Lins, Ricardo Felipe de Albuquerque; Lima, Ricardo de Carvalho
2015-01-01
Objective To determine whether the stratification-of-complexity models in congenital heart surgery (RACHS-1, Aristotle basic score, and STS-EACTS mortality score) fit our center, and to determine the best method of discriminating hospital mortality. Methods Surgical procedures for congenital heart disease in patients under 18 years of age were allocated to the categories proposed by the currently available stratification-of-complexity methods. The outcome, hospital mortality, was calculated for each category of the three models. Statistical analysis was performed to verify whether the categories presented different mortality rates. The discriminatory ability of the models was determined by calculating the area under the ROC curve, and the curves of the three models were compared. Results 360 patients were allocated according to the three methods. There was a statistically significant difference between the mortality categories: RACHS-1 (1) - 1.3%, (2) - 11.4%, (3) - 27.3%, (4) - 50%, (P<0.001); Aristotle basic score (1) - 1.1%, (2) - 12.2%, (3) - 34%, (4) - 64.7%, (P<0.001); and STS-EACTS mortality score (1) - 5.5%, (2) - 13.6%, (3) - 18.7%, (4) - 35.8%, (P<0.001). The three models had similar accuracy as measured by the area under the ROC curve: RACHS-1: 0.738; STS-EACTS: 0.739; Aristotle: 0.766. Conclusion The three stratification-of-complexity models currently available in the literature are useful, with different mortality rates between the proposed categories and similar discriminatory capacity for hospital mortality. PMID:26107445
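The discriminatory comparison rests on the area under the ROC curve; a minimal sketch of that calculation with scikit-learn is shown below, using invented risk categories and outcomes rather than the study data.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Hypothetical data: each model assigns a risk category (1-4) to every
    # operation; `died` flags in-hospital mortality.
    died      = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0])
    rachs1    = np.array([1, 2, 3, 1, 4, 2, 1, 3, 2, 4, 1, 2])
    aristotle = np.array([1, 2, 4, 1, 3, 2, 2, 3, 1, 4, 1, 2])

    for name, score in [("RACHS-1", rachs1), ("Aristotle", aristotle)]:
        print(name, round(roc_auc_score(died, score), 3))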
From Weakly Chaotic Dynamics to Deterministic Subdiffusion via Copula Modeling
NASA Astrophysics Data System (ADS)
Nazé, Pierre
2018-03-01
Copula modeling consists in finding a probabilistic distribution, called copula, whereby its coupling with the marginal distributions of a set of random variables produces their joint distribution. The present work aims to use this technique to connect the statistical distributions of weakly chaotic dynamics and deterministic subdiffusion. More precisely, we decompose the jumps distribution of Geisel-Thomae map into a bivariate one and determine the marginal and copula distributions respectively by infinite ergodic theory and statistical inference techniques. We verify therefore that the characteristic tail distribution of subdiffusion is an extreme value copula coupling Mittag-Leffler distributions. We also present a method to calculate the exact copula and joint distributions in the case where weakly chaotic dynamics and deterministic subdiffusion statistical distributions are already known. Numerical simulations and consistency with the dynamical aspects of the map support our results.
Li, Ji; Gray, B.R.; Bates, D.M.
2008-01-01
Partitioning the variance of a response by design levels is challenging for binomial and other discrete outcomes. Goldstein (2003) proposed four definitions for variance partitioning coefficients (VPC) under a two-level logistic regression model. In this study, we explicitly derived formulae for the multi-level logistic regression model and subsequently studied the distributional properties of the calculated VPCs. Using simulations and a vegetation dataset, we demonstrated associations between different VPC definitions, the importance of methods for estimating VPCs (by comparing VPCs obtained using Laplace and penalized quasi-likelihood methods), and bivariate dependence between VPCs calculated at different levels. Such an empirical study lends immediate support to wider applications of VPC in scientific data analysis.
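Two of the VPC definitions discussed by Goldstein are easy to illustrate: the latent-variable formula sigma_u^2 / (sigma_u^2 + pi^2/3) and a simulation-based partition of the binomial variance. The sketch below implements both under assumed parameter values; it is not the authors' code.

    import numpy as np

    def vpc_latent(sigma2_u):
        """Latent-variable definition of the VPC for a two-level logistic model:
        level-2 variance over level-2 variance plus the standard-logistic
        residual variance pi^2/3."""
        return sigma2_u / (sigma2_u + np.pi ** 2 / 3.0)

    def vpc_simulation(beta0, sigma2_u, n_draws=200_000, seed=0):
        """Simulation-based definition: draw cluster effects, convert to
        probabilities, and partition Var(y) into between- and within-cluster
        components."""
        rng = np.random.default_rng(seed)
        u = rng.normal(0.0, np.sqrt(sigma2_u), n_draws)
        p = 1.0 / (1.0 + np.exp(-(beta0 + u)))
        between = p.var()
        within = np.mean(p * (1.0 - p))
        return between / (between + within)

    # Assumed intercept and level-2 variance for illustration only.
    print(vpc_latent(0.5), vpc_simulation(-1.0, 0.5))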
Topological and statistical properties of nonlinear force-free fields
NASA Astrophysics Data System (ADS)
Mangalam, A.; Prasad, A.
2018-01-01
We use our semi-analytic solution of the nonlinear force-free field equation to construct three-dimensional magnetic fields that are applicable to the solar corona and study their statistical properties for estimating the degree of braiding exhibited by these fields. We present a new formula for calculating the winding number and compare it with the formula for the crossing number. The comparison is shown for a toy model of two helices and for realistic cases of nonlinear force-free fields; conceptually the formulae are nearly the same but the resulting distributions calculated for a given topology can be different. We also calculate linkages, which are useful topological quantities that are independent measures of the contribution of magnetic braiding to the total free energy and relative helicity of the field. Finally, we derive new analytical bounds for the free energy and relative helicity for the field configurations in terms of the linking number. These bounds will be of utility in estimating the braided energy available for nano-flares or for eruptions.
Statistical hadronization and microcanonical ensemble
Becattini, F.; Ferroni, L.
2004-01-01
We present a Monte Carlo calculation of the microcanonical ensemble of the ideal hadron-resonance gas including all known states up to a mass of 1.8 GeV, taking into account quantum statistics. The computing method is a development of a previous one based on a Metropolis Monte Carlo algorithm, with the grand-canonical limit of the multi-species multiplicity distribution as the proposal matrix. The microcanonical average multiplicities of the various hadron species are found to converge to the canonical ones for moderately low values of the total energy. This algorithm opens the way for event generators based on the statistical hadronization model.
Tooth-size discrepancy: A comparison between manual and digital methods
Correia, Gabriele Dória Cabral; Habib, Fernando Antonio Lima; Vogel, Carlos Jorge
2014-01-01
Introduction Technological advances in Dentistry have emerged primarily in the area of diagnostic tools. One example is the 3D scanner, which can transform plaster models into three-dimensional digital models. Objective This study aimed to assess the reliability of tooth size-arch length discrepancy analysis measurements performed on three-dimensional digital models, and compare these measurements with those obtained from plaster models. Material and Methods To this end, plaster models of lower dental arches and their corresponding three-dimensional digital models acquired with a 3Shape R700T scanner were used. All of them had lower permanent dentition. Four different tooth size-arch length discrepancy calculations were performed on each model, two by manual methods using calipers and brass wire, and two by digital methods using linear measurements and parabolas. Results Data were statistically assessed using the Friedman test, and no statistically significant differences were found among the methods (P > 0.05); only the linear digital method showed a slight, non-significant deviation. Conclusions Based on the results, it is reasonable to assert that any of these resources used by orthodontists to clinically assess tooth size-arch length discrepancy can be considered reliable. PMID:25279529
Climate patterns as predictors of amphibians species richness and indicators of potential stress
Battaglin, W.; Hay, L.; McCabe, G.; Nanjappa, P.; Gallant, Alisa L.
2005-01-01
Amphibians occupy a range of habitats throughout the world, but species richness is greatest in regions with moist, warm climates. We modeled the statistical relations of anuran and urodele species richness with mean annual climate for the conterminous United States, and compared the strength of these relations at national and regional levels. Model variables were calculated for county and subcounty mapping units, and included 40-year (1960-1999) annual mean and mean annual climate statistics, mapping unit average elevation, mapping unit land area, and estimates of anuran and urodele species richness. Climate data were derived from more than 7,500 first-order and cooperative meteorological stations and were interpolated to the mapping units using multiple linear regression models. Anuran and urodele species richness were calculated from the United States Geological Survey's Amphibian Research and Monitoring Initiative (ARMI) National Atlas for Amphibian Distributions. The national multivariate linear regression (MLR) model of anuran species richness had an adjusted coefficient of determination (R2) value of 0.64, and the national MLR model for urodele species richness had an R2 value of 0.45. Stratifying the United States by coarse-resolution ecological regions provided models for anurans with R2 values ranging from 0.15 to 0.78. Regional models for urodeles had R2 values ranging from 0.27 to 0.74. In general, regional models for anurans were more strongly influenced by temperature variables, whereas precipitation variables had a larger influence on urodele models.
Insight into structural phase transitions from the decoupled anharmonic mode approximation
NASA Astrophysics Data System (ADS)
Adams, Donat J.; Passerone, Daniele
2016-08-01
We develop a formalism (decoupled anharmonic mode approximation, DAMA) that allows calculation of the vibrational free energy using density functional theory even for materials which exhibit negative curvature of the potential energy surface with respect to atomic displacements. We investigate vibrational modes beyond the harmonic approximation and approximate the potential energy surface with the superposition of the accurate potential along each normal mode. We show that the free energy can stabilize crystal structures at finite temperatures which appear dynamically unstable at T = 0. The DAMA formalism is computationally fast because it avoids statistical sampling through molecular dynamics calculations, and is in principle completely ab initio. It is free of statistical uncertainties and independent of model parameters, but can give insight into the mechanism of a structural phase transition. We apply the formalism to the perovskite cryolite, and investigate the temperature-driven phase transition from the P21/n to the Immm space group. We calculate a phase transition temperature between 710 and 950 K, in fair agreement with the experimental value of 885 K. This can be related to the underestimation of the interaction of the vibrational states. We also calculate the main axes of the thermal ellipsoid and can explain the experimentally observed increase of its volume for the fluorine by 200-300% throughout the phase transition. Our calculations suggest the appearance of tunneling states in the high temperature phase. The convergence of the vibrational DOS and of the critical temperature with respect of reciprocal space sampling is investigated using the polarizable-ion model.
Development of a funding, cost, and spending model for satellite projects
NASA Technical Reports Server (NTRS)
Johnson, Jesse P.
1989-01-01
The need for a predictive budget/funding model is obvious. The current models used by the Resource Analysis Office (RAO) are used to predict the total costs of satellite projects. An effort to extend the modeling capabilities from total budget analysis to total budget and budget outlays over time analysis was conducted. A statistically based, data-driven methodology was used to derive and develop the model. The budget data for the last 18 GSFC-sponsored satellite projects were analyzed and used to build a funding model which would describe the historical spending patterns. The raw data consisted of dollars spent in each specific year and their 1989 dollar equivalents. These data were converted to the standard format used by the RAO group and placed in a database. A simple statistical analysis was performed to calculate the gross statistics associated with project length and project cost and the conditional statistics on project length and project cost. The modeling approach used is derived from the theory of embedded statistics, which states that properly analyzed data will produce the underlying generating function. The process of funding large-scale projects over extended periods of time is described by Life Cycle Cost Models (LCCM). The data were analyzed to find a model in the generic form of an LCCM. The model developed is based on a Weibull function whose parameters are found by both nonlinear optimization and nonlinear regression. In order to use this model it is necessary to transform the problem from a dollar/time space to a percentage of total budget/time space. This transformation is equivalent to moving to a probability space. By using the basic rules of probability, the validity of both the optimization and the regression steps is ensured. This statistically significant model is then integrated and inverted. The resulting output represents a project schedule which relates the amount of money spent to the percentage of project completion.
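As a rough illustration of the Weibull spending curve described above, the sketch below fits a Weibull cumulative-distribution form to normalized spending data. Python and scipy are assumptions (the original work predates them), and the schedule and spending fractions are hypothetical placeholders, not GSFC project data.

    # Minimal sketch: fit a Weibull cumulative-spending curve to normalized budget data.
    # All numbers below are illustrative placeholders, not GSFC project data.
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_cdf(t, shape, scale):
        """Cumulative fraction of total budget spent by normalized time t (0..1)."""
        return 1.0 - np.exp(-(t / scale) ** shape)

    # Hypothetical fractions of project length and of total cost spent
    t_frac = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
    spent  = np.array([0.02, 0.08, 0.18, 0.32, 0.48, 0.64, 0.78, 0.89, 0.96, 1.0])

    (shape, scale), _ = curve_fit(weibull_cdf, t_frac, spent, p0=[2.0, 0.5])

    # Invert the fitted curve: at what fraction of the schedule is 50% of the budget spent?
    t_half = scale * (-np.log(1.0 - 0.5)) ** (1.0 / shape)
    print(f"shape={shape:.2f}, scale={scale:.2f}, 50% of budget spent at t={t_half:.2f} of project length")

Inverting the fitted curve, as in the last two lines, is one way to recover the schedule-versus-spending relation the abstract describes.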
Instantaneous polarization statistic property of EM waves incident on time-varying reentry plasma
NASA Astrophysics Data System (ADS)
Bai, Bowen; Liu, Yanming; Li, Xiaoping; Yao, Bo; Shi, Lei
2018-06-01
An analytical method is proposed in this paper to study the effect of a time-varying reentry plasma sheath on the instantaneous polarization statistics of electromagnetic (EM) waves. Based on the disturbance properties of the hypersonic fluid, a spatial-temporal model of the time-varying reentry plasma sheath is established. An analytical technique referred to as the transmission line analogy is developed to calculate the instantaneous transmission coefficient of EM wave propagation in time-varying plasma. Then, an instantaneous polarization statistical theory of EM wave propagation in the time-varying plasma sheath is developed. Taking the S-band telemetry right-hand circularly polarized wave as an example, the effects of incident angle and plasma parameters, including the electron density and the collision frequency, on the EM wave's polarization statistics are studied systematically. The statistical results indicate that the lower the collision frequency and the larger the electron density and incident angle, the worse the deterioration of the polarization property. Meanwhile, at critical values of electron density, collision frequency, and incident angle, the transmitted waves carry both right- and left-hand polarization modes, and the polarization mode can reverse. The calculated results could provide useful information for adaptive polarization receiving during a spacecraft's reentry communication.
COMPARISON OF CANCER SLOPE FACTORS FOR USING DIFFERENT STATISTICAL APPROACHES
In the past, the cancer slope factor has been calculated as the upper 95% confidence limit on the coefficient (q1*) of the linear term of the multistage model for the extra cancer risk over background. The U.S. EPA's draft final cancer guidelines, released in 2003, however, pres...
40 CFR Appendix IV to Part 265 - Tests for Significance
Code of Federal Regulations, 2010 CFR
2010-07-01
... introductory statistics texts. ... student's t-test involves calculation of the value of a t-statistic for each comparison of the mean... parameter with its initial background concentration or value. The calculated value of the t-statistic must...
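A minimal sketch of the comparison this appendix refers to, testing whether a monitored parameter's mean differs from its initial background value with Student's t, is given below. Python/scipy and all concentration values are assumptions for illustration; the regulation's replication and grouping rules are not reproduced.

    # Minimal sketch: one-sample Student's t-test of a monitoring parameter's mean
    # against its initial background value. Values are hypothetical.
    import numpy as np
    from scipy import stats

    background_mean = 4.2                      # hypothetical initial background concentration
    samples = np.array([4.6, 5.1, 4.9, 5.4])   # hypothetical replicate measurements

    t_stat, p_value = stats.ttest_1samp(samples, popmean=background_mean)
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")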
NASA Astrophysics Data System (ADS)
Rüstemoǧlu, Sevinç; Barutçu, Burak; Menteş, Sibel
2010-05-01
The continuous use of fossil fuels as the primary energy source is responsible for CO emissions and leaves a country's economy vulnerable to large fluctuations in the unit price of energy sources. In recent years, developments in the wind energy sector and the new renewable energy support policies of many countries have encouraged current and prospective wind farm owners to consider and invest in renewable sources. In this study, the annual production of the 1.8 kW and 30 kW turbines available to the Energy Institute of Istanbul Technical University is calculated with the Wasp and WindPro field flow models, and the wind characteristics of the area are analysed. The meteorological data used in the calculation cover the period between 02 March 2000 and 31 May 2004 and were taken from the meteorological mast ( ) in Istanbul Technical University's campus area. The measurement data were taken at 2 m and 10 m heights as hourly means. The topography, roughness classes and shelter effects are defined in the models to make accurate extrapolation to the turbine sites. The site lies only about 3.5 km from the Istanbul Bosphorus, but as the Wasp and WindPro model results show, the Bosphorus effect is interrupted by new buildings and high forestry. The shelter effect of these high buildings has a great influence on the wind flow and decreases the high wind energy potential produced by the Bosphorus effect. This study, which determines the wind characteristics and expected annual production, is important for this project site and therefore gains importance before the construction of the wind energy system. However, once the system is operating, developing energy management skills and forecasting the wind speed and direction will become important. At this point, three statistical models, namely a Kalman filter, an AR model and a neural network, are used to determine the success of each method for correct wind prediction. The statistical methods' predictions as time series are included and the similarity rates are compared for each method. The algorithms, implemented in MATLAB, gave the similarity results of each model. According to the neural network results, which are found to be the most successful of these three statistical models, the wind speed similarity rate between the original measurements and the prediction set, which covers the one-year period between 2003 and 2004, is evaluated as 94.7%. For wind direction, the similarity rate is 81.61%. A high noise margin and the ability to learn the characteristics of the signal are important advantages of neural networks for wind speed and direction predictions that are compatible with measurements.
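As a rough sketch of the simplest of the three predictors mentioned above, the fragment below fits an autoregressive model to an hourly wind-speed series by least squares and reports an in-sample similarity-style score. Python stands in for the study's MATLAB implementation, and the synthetic series, the AR order, and the scoring rule are illustrative assumptions.

    # Minimal sketch: least-squares AR model for hourly wind speed; not the authors' code.
    import numpy as np

    rng = np.random.default_rng(0)
    hours = np.arange(500)
    # Hypothetical hourly wind-speed series (m/s) with a daily cycle plus noise; not the ITU mast data.
    speeds = 6.0 + 2.0 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.5, hours.size)

    p = 3  # assumed AR order
    X = np.column_stack([speeds[i:speeds.size - p + i] for i in range(p)])
    y = speeds[p:]
    design = np.column_stack([np.ones(y.size), X])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    pred = design @ coef

    # One crude "similarity rate": 100% minus the mean absolute percentage error.
    similarity = 100.0 * (1.0 - np.mean(np.abs(pred - y) / y))
    print(f"AR({p}) in-sample similarity: {similarity:.1f}%")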
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, D; O’Connell, D; Lamb, J
Purpose: To demonstrate real-time dose calculation of free-breathing MRI guided Co-60 treatments, using a motion model and Monte-Carlo dose calculation to accurately account for the interplay between irregular breathing motion and an IMRT delivery. Methods: ViewRay Co-60 dose distributions were optimized on ITVs contoured from free-breathing CT images of lung cancer patients. Each treatment plan was separated into 0.25s segments, accounting for the MLC positions and beam angles at each time point. A voxel-specific motion model derived from multiple fast-helical free-breathing CTs and deformable registration was calculated for each patient. 3D images for every 0.25s of a simulated treatment were generated in real time, here using a bellows signal as a surrogate to accurately account for breathing irregularities. Monte-Carlo dose calculation was performed every 0.25s of the treatment, with the number of histories in each calculation scaled to give an overall 1% statistical uncertainty. Each dose calculation was deformed back to the reference image using the motion model and accumulated. The static and real-time dose calculations were compared. Results: Image generation was performed in real time at 4 frames per second (GPU). Monte-Carlo dose calculation was performed at approximately 1 frame per second (CPU), giving a total calculation time of approximately 30 minutes per treatment. Results show both cold- and hot-spots in and around the ITV, and increased dose to contralateral lung as the tumor moves in and out of the beam during treatment. Conclusion: An accurate motion model combined with a fast Monte-Carlo dose calculation allows almost real-time dose calculation of a free-breathing treatment. When combined with sagittal 2D-cine-mode MRI during treatment to update the motion model in real time, this will allow the true delivered dose of a treatment to be calculated, providing a useful tool for adaptive planning and assessing the effectiveness of gated treatments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nedic, Vladimir, E-mail: vnedic@kg.ac.rs; Despotovic, Danijela, E-mail: ddespotovic@kg.ac.rs; Cvetanovic, Slobodan, E-mail: slobodan.cvetanovic@eknfak.ni.ac.rs
2014-11-15
Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. Therefore it is very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. As input variables of the neural network, the proposed structure of the traffic flow and the average speed of the traffic flow are chosen. The output variable of the network is the equivalent noise level in the given time period Leq. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise using the originally developed user-friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior in traffic noise level prediction to any other statistical method. - Highlights: • We proposed an ANN model for prediction of traffic noise. • We developed an originally designed, user-friendly software package. • The results are compared with classical statistical methods. • The results show much better predictive capabilities for the ANN model.
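The idea of the paper can be sketched with a small feed-forward network that maps traffic-flow composition and average speed to Leq. The sketch below is not the authors' software: Python/scikit-learn is an assumption, and the training data are synthetic values generated from a rough logarithmic noise relation purely so the network has something to learn.

    # Minimal sketch: small ANN predicting Leq from traffic structure and speed (synthetic data).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    n = 500
    light = rng.uniform(100, 2000, n)      # light vehicles per hour (hypothetical)
    heavy = rng.uniform(0, 300, n)         # heavy vehicles per hour (hypothetical)
    speed = rng.uniform(30, 90, n)         # average speed, km/h (hypothetical)

    # Crude synthetic "measured" Leq in dB(A), only to give the network a target.
    leq = 40 + 10 * np.log10(light + 8 * heavy) + 0.1 * speed + rng.normal(0, 1.0, n)

    X = np.column_stack([light, heavy, speed])
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X, leq)

    print("Predicted Leq:", model.predict([[800.0, 50.0, 60.0]]))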
A DISCUSSION ON DIFFERENT APPROACHES FOR ASSESSING LIFETIME RISKS OF RADON-INDUCED LUNG CANCER.
Chen, Jing; Murith, Christophe; Palacios, Martha; Wang, Chunhong; Liu, Senlin
2017-11-01
Lifetime risks of radon induced lung cancer were assessed based on epidemiological approaches for Canadian, Swiss and Chinese populations, using the most recent vital statistic data and radon distribution characteristics available for each country. In the risk calculation, the North America residential radon risk model was used for the Canadian population, the European residential radon risk model for the Swiss population, the Chinese residential radon risk model for the Chinese population, and the EPA/BEIR-VI radon risk model for all three populations. The results were compared with the risk calculated from the International Commission on Radiological Protection (ICRP)'s exposure-to-risk conversion coefficients. In view of the fact that the ICRP coefficients were recommended for radiation protection of all populations, it was concluded that, generally speaking, lifetime absolute risks calculated with ICRP-recommended coefficients agree reasonably well with the range of radon induced lung cancer risk predicted by risk models derived from epidemiological pooling analyses. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J
2016-05-01
Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs, (TG), so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J(2) ) statistics can be applied directly. In a simulation study, TG, HL, and J(2) were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J(2) were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J(2) . © 2015 John Wiley & Sons Ltd/London School of Economics.
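For readers unfamiliar with the grouped statistics compared above, here is a minimal sketch of the decile-based Hosmer-Lemeshow calculation. Python is an assumption, the outcomes and fitted probabilities are simulated, and the paper's common grouping method is not reproduced exactly (ties are handled naively).

    # Minimal sketch of the Hosmer-Lemeshow statistic with decile grouping (simulated inputs).
    import numpy as np
    from scipy import stats

    def hosmer_lemeshow(y, p_hat, groups=10):
        order = np.argsort(p_hat)
        y, p_hat = y[order], p_hat[order]
        chi2 = 0.0
        for idx in np.array_split(np.arange(len(y)), groups):
            n_g = len(idx)
            obs = y[idx].sum()           # observed events in the group
            exp = p_hat[idx].sum()       # expected events in the group
            chi2 += (obs - exp) ** 2 / (exp * (1.0 - exp / n_g))
        p_value = stats.chi2.sf(chi2, df=groups - 2)
        return chi2, p_value

    rng = np.random.default_rng(2)
    p_true = rng.uniform(0.05, 0.95, 400)      # hypothetical fitted probabilities
    y = rng.binomial(1, p_true)                # outcomes consistent with the model
    print(hosmer_lemeshow(y, p_true))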
NASA Astrophysics Data System (ADS)
Behera, B. R.
2015-01-01
In this talk, results of the evaporation residue (ER) cross sections for the 19F+194,196,198Pt (forming compound nuclei 213,215,217Fr) and 16,18O+198Pt (forming compound nuclei 214,216Rn) systems, measured with the Hybrid Recoil mass Analyzer (HYRA) spectrometer installed at the Pelletron+LINAC accelerator facility of the Inter University Accelerator Center (IUAC), New Delhi, are reported. The survival probability of 213Fr with neutron number N = 126 is found to be lower than the survival probabilities of 215Fr and 217Fr with neutron numbers N = 128 and 130, respectively. Statistical model analysis of the ER cross sections shows that an excitation-energy-dependent scaling factor of the finite-range rotating liquid drop model fission barrier is necessary to fit the experimental data. For the case of 214,216Rn, the experimental ER cross sections are compared with the predictions from statistical model calculations of compound nuclear decay where Kramers' fission width is used. The strength of nuclear dissipation is treated as a free parameter in the calculations to fit the experimental data.
Role of hydrogen in volatile behaviour of defects in SiO2-based electronic devices
NASA Astrophysics Data System (ADS)
Wimmer, Yannick; El-Sayed, Al-Moatasem; Gös, Wolfgang; Grasser, Tibor; Shluger, Alexander L.
2016-06-01
Charge capture and emission by point defects in gate oxides of metal-oxide-semiconductor field-effect transistors (MOSFETs) strongly affect reliability and performance of electronic devices. Recent advances in experimental techniques used for probing defect properties have led to new insights into their characteristics. In particular, these experimental data show a repeated dis- and reappearance (the so-called volatility) of the defect-related signals. We use multiscale modelling to explain the charge capture and emission as well as defect volatility in amorphous SiO2 gate dielectrics. We first briefly discuss the recent experimental results and use a multiphonon charge capture model to describe the charge-trapping behaviour of defects in silicon-based MOSFETs. We then link this model to ab initio calculations that investigate the three most promising defect candidates. Statistical distributions of defect characteristics obtained from ab initio calculations in amorphous SiO2 are compared with the experimentally measured statistical properties of charge traps. This allows us to suggest an atomistic mechanism to explain the experimentally observed volatile behaviour of defects. We conclude that the hydroxyl-E' centre is a promising candidate to explain all the observed features, including defect volatility.
Bayesian Atmospheric Radiative Transfer (BART) Code and Application to WASP-43b
NASA Astrophysics Data System (ADS)
Blecic, Jasmina; Harrington, Joseph; Cubillos, Patricio; Bowman, Oliver; Rojo, Patricio; Stemm, Madison; Lust, Nathaniel B.; Challener, Ryan; Foster, Austin James; Foster, Andrew S.; Blumenthal, Sarah D.; Bruce, Dylan
2016-01-01
We present a new open-source Bayesian radiative-transfer framework, Bayesian Atmospheric Radiative Transfer (BART, https://github.com/exosports/BART), and its application to WASP-43b. BART initializes a model for the atmospheric retrieval calculation, generates thousands of theoretical model spectra using parametrized pressure and temperature profiles and line-by-line radiative-transfer calculation, and employs a statistical package to compare the models with the observations. It consists of three self-sufficient modules available to the community under the reproducible-research license: the Thermochemical Equilibrium Abundances module (TEA, https://github.com/dzesmin/TEA, Blecic et al. 2015), the radiative-transfer module (Transit, https://github.com/exosports/transit), and the Multi-core Markov-chain Monte Carlo statistical module (MCcubed, https://github.com/pcubillos/MCcubed, Cubillos et al. 2015). We applied BART to all available WASP-43b secondary eclipse data from the space- and ground-based observations, constraining the temperature-pressure profile and molecular abundances of the dayside atmosphere of WASP-43b. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
The value of a statistical life: a meta-analysis with a mixed effects regression model.
Bellavance, François; Dionne, Georges; Lebeau, Martin
2009-03-01
The value of a statistical life (VSL) is a very controversial topic, but one which is essential to the optimization of governmental decisions. We see a great variability in the values obtained from different studies. The source of this variability needs to be understood, in order to offer public decision-makers better guidance in choosing a value and to set clearer guidelines for future research on the topic. This article presents a meta-analysis based on 39 observations obtained from 37 studies (from nine different countries) which all use a hedonic wage method to calculate the VSL. Our meta-analysis is innovative in that it is the first to use the mixed effects regression model [Raudenbush, S.W., 1994. Random effects models. In: Cooper, H., Hedges, L.V. (Eds.), The Handbook of Research Synthesis. Russel Sage Foundation, New York] to analyze studies on the value of a statistical life. We conclude that the variability found in the values studied stems in large part from differences in methodologies.
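To make the pooling step concrete, the sketch below performs a simple DerSimonian-Laird random-effects aggregation of study-level estimates. This is a deliberately simpler stand-in for the mixed effects regression model the article actually uses; Python is an assumption and the VSL estimates and within-study variances are hypothetical.

    # Minimal sketch: DerSimonian-Laird random-effects pooling of hypothetical VSL estimates.
    import numpy as np

    vsl = np.array([4.1, 7.8, 2.5, 9.3, 5.6])      # hypothetical study-level VSL estimates (millions USD)
    var = np.array([1.2, 2.0, 0.8, 3.1, 1.5])      # hypothetical within-study variances

    w_fixed = 1.0 / var
    fixed_mean = np.sum(w_fixed * vsl) / np.sum(w_fixed)
    q = np.sum(w_fixed * (vsl - fixed_mean) ** 2)                  # heterogeneity statistic Q
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - (len(vsl) - 1)) / c)                      # between-study variance

    w_rand = 1.0 / (var + tau2)
    pooled = np.sum(w_rand * vsl) / np.sum(w_rand)
    se = np.sqrt(1.0 / np.sum(w_rand))
    print(f"pooled VSL = {pooled:.2f} (SE {se:.2f}), tau^2 = {tau2:.2f}")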
Solar granulation and statistical crystallography: A modeling approach using size-shape relations
NASA Technical Reports Server (NTRS)
Noever, D. A.
1994-01-01
The irregular polygonal pattern of solar granulation is analyzed for size-shape relations using statistical crystallography. In contrast to previous work which has assumed perfectly hexagonal patterns for granulation, more realistic accounting of cell (granule) shapes reveals a broader basis for quantitative analysis. Several features emerge as noteworthy: (1) a linear correlation between number of cell-sides and neighboring shapes (called Aboav-Weaire's law); (2) a linear correlation between both average cell area and perimeter and the number of cell-sides (called Lewis's law and a perimeter law, respectively) and (3) a linear correlation between cell area and squared perimeter (called convolution index). This statistical picture of granulation is consistent with a finding of no correlation in cell shapes beyond nearest neighbors. A comparative calculation between existing model predictions taken from luminosity data and the present analysis shows substantial agreements for cell-size distributions. A model for understanding grain lifetimes is proposed which links convective times to cell shape using crystallographic results.
NASA Astrophysics Data System (ADS)
Bastianello, Alvise; Piroli, Lorenzo; Calabrese, Pasquale
2018-05-01
We derive exact analytic expressions for the n-body local correlations in the one-dimensional Bose gas with contact repulsive interactions (Lieb-Liniger model) in the thermodynamic limit. Our results are valid for arbitrary states of the model, including ground and thermal states, stationary states after a quantum quench, and nonequilibrium steady states arising in transport settings. Calculations for these states are explicitly presented and physical consequences are critically discussed. We also show that the n-body local correlations are directly related to the full counting statistics for the particle-number fluctuations in a short interval, for which we provide an explicit analytic result.
Matoz-Fernandez, D A; Linares, D H; Ramirez-Pastor, A J
2012-09-04
The statistical thermodynamics of straight rigid rods of length k on triangular lattices was developed based on a generalization in the spirit of the lattice-gas model and the classical Guggenheim-DiMarzio approximation. In this scheme, the Helmholtz free energy and its derivatives were written in terms of the order parameter, δ, which characterizes the nematic phase occurring in the system at intermediate densities. Then, using the principle of minimum free energy with δ as a parameter, the main adsorption properties were calculated. Comparisons with Monte Carlo simulations and experimental data were performed in order to evaluate the outcome and limitations of the theoretical model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avrett, E.; Tian, H.; Landi, E.
Semiempirical atmospheric modeling attempts to match an observed spectrum by finding the temperature distribution and other physical parameters along the line of sight through the emitting region such that the calculated spectrum agrees with the observed one. In this paper we take the observed spectrum of a sunspot and the quiet Sun in the EUV wavelength range 668–1475 Å from the 2001 SUMER atlas of Curdt et al. to determine models of the two atmospheric regions, extending from the photosphere through the overlying chromosphere into the transition region. We solve the coupled statistical equilibrium and optically thick radiative transfer equations for a set of 32 atoms and ions. The atoms that are part of molecules are treated separately, and are excluded from the atomic abundances and atomic opacities. We compare the Mg ii k line profile observations from the Interface Region Imaging Spectrograph with the profiles calculated from the two models. The calculated profiles for the sunspot are substantially lower than the observed ones, based on the SUMER models. The only way we have found to raise the calculated Mg ii lines to agree with the observations is to introduce illumination of the sunspot from the surrounding active region.
Statistical analysis of measured free-space laser signal intensity over a 2.33 km optical path.
Tunick, Arnold
2007-10-17
Experimental research is conducted to determine the characteristic behavior of high frequency laser signal intensity data collected over a 2.33 km optical path. Results focus mainly on calculated power spectra and frequency distributions. In addition, a model is developed to calculate optical turbulence intensity (Cn^2) as a function of receiving and transmitting aperture diameter, log-amplitude variance, and path length. Initial comparisons of calculated to measured Cn^2 data are favorable. It is anticipated that this kind of signal data analysis will benefit laser communication systems development and testing at the U.S. Army Research Laboratory (ARL) and elsewhere.
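The paper's aperture-dependent Cn^2 model is not reproduced here, but the standard weak-turbulence plane-wave relation between log-amplitude variance and Cn^2 gives a feel for the calculation. The sketch below is an assumption-laden illustration: Python, the wavelength, and the measured variance are all hypothetical, and the aperture-averaging terms of the paper's model are omitted.

    # Minimal sketch: recover Cn^2 from a log-amplitude variance using the weak-turbulence
    # plane-wave Rytov relation sigma_chi^2 = 0.307 * Cn2 * k^(7/6) * L^(11/6).
    import numpy as np

    wavelength = 1.55e-6          # m (hypothetical laser wavelength)
    path_length = 2330.0          # m, the 2.33 km path
    sigma_chi2 = 0.05             # hypothetical measured log-amplitude variance

    k = 2.0 * np.pi / wavelength
    cn2 = sigma_chi2 / (0.307 * k ** (7.0 / 6.0) * path_length ** (11.0 / 6.0))
    print(f"Cn^2 ~ {cn2:.2e} m^(-2/3)")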
Ely, D.M.; Hill, M.C.; Tiedeman, C.R.; O'Brien, G. M.
2004-01-01
When a model is calibrated by nonlinear regression, calculated diagnostic and inferential statistics provide a wealth of information about many aspects of the system. This work uses linear inferential statistics that are measures of prediction uncertainty to investigate the likely importance of continued monitoring of hydraulic head to the accuracy of model predictions. The measurements evaluated are hydraulic heads; the predictions of interest are subsurface transport from 15 locations. The advective component of transport is considered because it is the component most affected by the system dynamics represented by the regional-scale model being used. The problem is addressed using the capabilities of the U.S. Geological Survey computer program MODFLOW-2000, with its Advective Travel Observation (ADV) Package. Copyright ASCE 2004.
Evaluation of neutron total and capture cross sections on 99Tc in the unresolved resonance region
NASA Astrophysics Data System (ADS)
Iwamoto, Nobuyuki; Katabuchi, Tatsuya
2017-09-01
Long-lived fission product Technetium-99 is one of the most important radioisotopes for nuclear transmutation. Reliable nuclear data are indispensable over a wide energy range up to a few MeV in order to develop environmental load reducing technology. The statistical analyses of resolved resonances were performed by using the truncated Porter-Thomas distribution, a coupled-channels optical model, a nuclear level density model and Bayes' theorem on conditional probability. The total and capture cross sections were calculated by the nuclear reaction model code CCONE. The resulting cross sections have statistical consistency between the resolved and unresolved resonance regions. The evaluated capture data reproduce those recently measured at ANNRI of J-PARC/MLF above the resolved resonance region up to 800 keV.
Using sensitivity analysis in model calibration efforts
Tiedeman, Claire; Hill, Mary C.
2003-01-01
In models of natural and engineered systems, sensitivity analysis can be used to assess relations among system state observations, model parameters, and model predictions. The model itself links these three entities, and model sensitivities can be used to quantify the links. Sensitivities are defined as the derivatives of simulated quantities (such as simulated equivalents of observations, or model predictions) with respect to model parameters. We present four measures calculated from model sensitivities that quantify the observation-parameter-prediction links and that are especially useful during the calibration and prediction phases of modeling. These four measures are composite scaled sensitivities (CSS), prediction scaled sensitivities (PSS), the value of improved information (VOII) statistic, and the observation prediction (OPR) statistic. These measures can be used to help guide initial calibration of models, collection of field data beneficial to model predictions, and recalibration of models updated with new field information. Once model sensitivities have been calculated, each of the four measures requires minimal computational effort. We apply the four measures to a three-layer MODFLOW-2000 (Harbaugh et al., 2000; Hill et al., 2000) model of the Death Valley regional ground-water flow system (DVRFS), located in southern Nevada and California. D’Agnese et al. (1997, 1999) developed and calibrated the model using nonlinear regression methods. Figure 1 shows some of the observations, parameters, and predictions for the DVRFS model. Observed quantities include hydraulic heads and spring flows. The 23 defined model parameters include hydraulic conductivities, vertical anisotropies, recharge rates, evapotranspiration rates, and pumpage. Predictions of interest for this regional-scale model are advective transport paths from potential contamination sites underlying the Nevada Test Site and Yucca Mountain.
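A minimal sketch of one of the four measures named above, the composite scaled sensitivity, is given below; the Jacobian, parameter values, and observation weights are hypothetical, Python is an assumption, and the PSS, VOII, and OPR statistics are not shown.

    # Minimal sketch: composite scaled sensitivities from a Jacobian of simulated
    # observations with respect to parameters (all inputs hypothetical).
    import numpy as np

    jac = np.array([[0.8, 0.1, 0.0],
                    [0.5, 0.4, 0.2],
                    [0.1, 0.9, 0.3]])          # d(simulated obs)/d(parameter)
    params = np.array([10.0, 0.5, 2.0])        # parameter values (e.g., from calibration)
    weights = np.array([1.0, 4.0, 0.25])       # observation weights (1/variance)

    # Dimensionless scaled sensitivities: dss_ij = (dy_i/db_j) * b_j * sqrt(w_i)
    dss = jac * params[np.newaxis, :] * np.sqrt(weights)[:, np.newaxis]
    # Composite scaled sensitivity per parameter: root-mean-square of dss over observations
    css = np.sqrt(np.mean(dss ** 2, axis=0))
    print("Composite scaled sensitivities:", css)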
Statistical alignment: computational properties, homology testing and goodness-of-fit.
Hein, J; Wiuf, C; Knudsen, B; Møller, M B; Wibling, G
2000-09-08
The model of insertions and deletions in biological sequences, first formulated by Thorne, Kishino, and Felsenstein in 1991 (the TKF91 model), provides a basis for performing alignment within a statistical framework. Here we investigate this model. Firstly, we show how to accelerate the statistical alignment algorithms by several orders of magnitude. The main innovations are to confine likelihood calculations to a band close to the similarity based alignment, to get good initial guesses of the evolutionary parameters and to apply an efficient numerical optimisation algorithm for finding the maximum likelihood estimate. In addition, the recursions originally presented by Thorne, Kishino and Felsenstein can be simplified. Two proteins, about 1500 amino acids long, can be analysed with this method in less than five seconds on a fast desktop computer, which makes this method practical for actual data analysis. Secondly, we propose a new homology test based on this model, where homology means that an ancestor to a sequence pair can be found finitely far back in time. This test has statistical advantages relative to the traditional shuffle test for proteins. Finally, we describe a goodness-of-fit test that allows testing the proposed insertion-deletion (indel) process inherent to this model, and find that real sequences (here globins) probably experience indels longer than one, contrary to what is assumed by the model. Copyright 2000 Academic Press.
K to π π decay amplitudes from lattice QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blum, T.; Boyle, P. A.; Christ, N. H.
2011-12-01
We report a direct lattice calculation of the K to ππ decay matrix elements for both the ΔI=1/2 and 3/2 amplitudes A0 and A2 on 2+1 flavor, domain wall fermion, 16^3×32×16 lattices. This is a complete calculation in which all contractions for the required ten, four-quark operators are evaluated, including the disconnected graphs in which no quark line connects the initial kaon and final two-pion states. These lattice operators are nonperturbatively renormalized using the Rome-Southampton method and the quadratic divergences are studied and removed. This is an important but notoriously difficult calculation, requiring high statistics on a large volume. In this paper, we take a major step toward the computation of the physical K→ππ amplitudes by performing a complete calculation at unphysical kinematics with pions of mass 422 MeV at rest in the kaon rest frame. With this simplification, we are able to resolve Re(A0) from zero for the first time, with a 25% statistical error, and can develop and evaluate methods for computing the complete, complex amplitude A0, a calculation central to understanding the ΔI=1/2 rule and testing the standard model of CP violation in the kaon system.
Applications of modern statistical methods to analysis of data in physical science
NASA Astrophysics Data System (ADS)
Wicker, James Eric
Modern methods of statistical and computational analysis offer solutions to dilemmas confronting researchers in physical science. Although the ideas behind modern statistical and computational analysis methods were originally introduced in the 1970's, most scientists still rely on methods written during the early era of computing. These researchers, who analyze increasingly voluminous and multivariate data sets, need modern analysis methods to extract the best results from their studies. The first section of this work showcases applications of modern linear regression. Since the 1960's, many researchers in spectroscopy have used classical stepwise regression techniques to derive molecular constants. However, problems with thresholds of entry and exit for model variables plagues this analysis method. Other criticisms of this kind of stepwise procedure include its inefficient searching method, the order in which variables enter or leave the model and problems with overfitting data. We implement an information scoring technique that overcomes the assumptions inherent in the stepwise regression process to calculate molecular model parameters. We believe that this kind of information based model evaluation can be applied to more general analysis situations in physical science. The second section proposes new methods of multivariate cluster analysis. The K-means algorithm and the EM algorithm, introduced in the 1960's and 1970's respectively, formed the basis of multivariate cluster analysis methodology for many years. However, several shortcomings of these methods include strong dependence on initial seed values and inaccurate results when the data seriously depart from hypersphericity. We propose new cluster analysis methods based on genetic algorithms that overcomes the strong dependence on initial seed values. In addition, we propose a generalization of the Genetic K-means algorithm which can accurately identify clusters with complex hyperellipsoidal covariance structures. We then use this new algorithm in a genetic algorithm based Expectation-Maximization process that can accurately calculate parameters describing complex clusters in a mixture model routine. Using the accuracy of this GEM algorithm, we assign information scores to cluster calculations in order to best identify the number of mixture components in a multivariate data set. We will showcase how these algorithms can be used to process multivariate data from astronomical observations.
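As an illustration of replacing stepwise selection with an information score, the sketch below compares every subset of a few synthetic predictors by AIC. AIC is an assumed stand-in for whatever specific scoring rule the dissertation develops, Python is an assumption, and the data are simulated.

    # Minimal sketch: exhaustive subset selection for linear regression scored by AIC (synthetic data).
    import itertools
    import numpy as np

    rng = np.random.default_rng(3)
    n = 200
    X = rng.normal(size=(n, 4))                                         # four candidate predictors
    y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0, 1.0, n)     # only x0 and x2 matter

    def aic_of_subset(cols):
        design = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        resid = y - design @ beta
        sigma2 = np.mean(resid ** 2)                                    # Gaussian MLE of error variance
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
        k = design.shape[1] + 1                                         # coefficients plus error variance
        return 2 * k - 2 * loglik

    subsets = [c for r in range(5) for c in itertools.combinations(range(4), r)]
    best = min(subsets, key=aic_of_subset)
    print("Best subset by AIC:", best)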
Problems With Risk Reclassification Methods for Evaluating Prediction Models
Pepe, Margaret S.
2011-01-01
For comparing the performance of a baseline risk prediction model with one that includes an additional predictor, a risk reclassification analysis strategy has been proposed. The first step is to cross-classify risks calculated according to the 2 models for all study subjects. Summary measures including the percentage of reclassification and the percentage of correct reclassification are calculated, along with 2 reclassification calibration statistics. The author shows that interpretations of the proposed summary measures and P values are problematic. The author's recommendation is to display the reclassification table, because it shows interesting information, but to use alternative methods for summarizing and comparing model performance. The Net Reclassification Index has been suggested as one alternative method. The author argues for reporting components of the Net Reclassification Index because they are more clinically relevant than is the single numerical summary measure. PMID:21555714
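A minimal sketch of the Net Reclassification Index components the author recommends reporting is given below, assuming hypothetical baseline and expanded-model risks and outcomes; Python and the simulated inputs are assumptions for illustration only.

    # Minimal sketch: event and non-event NRI components from two sets of predicted risks.
    import numpy as np

    def nri_components(risk_old, risk_new, event):
        """Return (event NRI, non-event NRI) as proportions, following the usual definitions."""
        up = risk_new > risk_old
        down = risk_new < risk_old
        ev, nonev = event == 1, event == 0
        nri_event = np.mean(up[ev]) - np.mean(down[ev])
        nri_nonevent = np.mean(down[nonev]) - np.mean(up[nonev])
        return nri_event, nri_nonevent

    rng = np.random.default_rng(4)
    risk_old = rng.uniform(0, 1, 300)                                   # hypothetical baseline risks
    risk_new = np.clip(risk_old + rng.normal(0, 0.1, 300), 0, 1)        # hypothetical expanded-model risks
    event = rng.binomial(1, risk_new)
    print(nri_components(risk_old, risk_new, event))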
Acidity in DMSO from the embedded cluster integral equation quantum solvation model.
Heil, Jochen; Tomazic, Daniel; Egbers, Simon; Kast, Stefan M
2014-04-01
The embedded cluster reference interaction site model (EC-RISM) is applied to the prediction of acidity constants of organic molecules in dimethyl sulfoxide (DMSO) solution. EC-RISM is based on a self-consistent treatment of the solute's electronic structure and the solvent's structure by coupling quantum-chemical calculations with three-dimensional (3D) RISM integral equation theory. We compare available DMSO force fields with reference calculations obtained using the polarizable continuum model (PCM). The results are evaluated statistically using two different approaches to eliminating the proton contribution: a linear regression model and an analysis of pKa shifts for compound pairs. Suitable levels of theory for the integral equation methodology are benchmarked. The results are further analyzed and illustrated by visualizing solvent site distribution functions and comparing them with an aqueous environment.
Two statistics for evaluating parameter identifiability and error reduction
Doherty, John; Hunt, Randall J.
2009-01-01
Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero; and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics. © 2009 Elsevier B.V.
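A minimal sketch of the "parameter identifiability" statistic described above projects each parameter's unit vector onto the calibration solution space spanned by the leading right singular vectors of the weighted sensitivity matrix. Python, the Jacobian, and the truncation level are assumptions for illustration.

    # Minimal sketch: parameter identifiability from SVD of a weighted sensitivity matrix.
    import numpy as np

    jac = np.array([[1.0, 0.2, 0.0, 0.0],
                    [0.8, 0.3, 0.1, 0.0],
                    [0.1, 0.9, 0.0, 0.0]])      # weighted sensitivities, observations x parameters (hypothetical)

    u, s, vt = np.linalg.svd(jac, full_matrices=False)
    k = 2                                        # assumed dimension of the calibration solution space
    v_sol = vt[:k, :]                            # rows span the solution space in parameter space

    # Direction cosine between each parameter axis and its projection onto the solution space.
    identifiability = np.sqrt(np.sum(v_sol ** 2, axis=0))   # one value per parameter, in [0, 1]
    for j, ident in enumerate(identifiability):
        print(f"parameter {j}: identifiability = {ident:.2f}")

The fourth parameter, to which no observation is sensitive, comes out with identifiability zero, matching the interpretation given in the abstract.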
Investigation of α -induced reactions on Sb isotopes relevant to the astrophysical γ process
NASA Astrophysics Data System (ADS)
Korkulu, Z.; Özkan, N.; Kiss, G. G.; Szücs, T.; Gyürky, Gy.; Fülöp, Zs.; Güray, R. T.; Halász, Z.; Rauscher, T.; Somorjai, E.; Török, Zs.; Yalçın, C.
2018-04-01
Background: The reaction rates used in γ -process nucleosynthesis network calculations are mostly derived from theoretical, statistical model cross sections. Experimental data is scarce for charged particle reactions at astrophysical, low energies. Where experimental (α ,γ ) data exists, it is often strongly overestimated by Hauser-Feshbach statistical model calculations. Further experimental α -capture cross sections in the intermediate and heavy mass region are necessary to test theoretical models and to gain understanding of heavy element nucleosynthesis in the astrophysical γ process. Purpose: The aim of the present work is to measure the 121Sb(α ,γ )125I , 121Sb(α ,n )124I , and 123Sb(α ,n )126I reaction cross sections. These measurements are important tests of astrophysical reaction rate predictions and extend the experimental database required for an improved understanding of p-isotope production. Method: The α -induced reactions on natural and enriched antimony targets were investigated using the activation technique. The (α ,γ ) cross sections of 121Sb were measured and are reported for the first time. To determine the cross section of the 121Sb(α ,γ )125I , 121Sb(α ,n )124I , and 123Sb(α ,n )126I reactions, the yields of γ rays following the β decay of the reaction products were measured. For the measurement of the lowest cross sections, the characteristic x rays were counted with a low-energy photon spectrometer detector. Results: The cross section of the 121Sb(α ,γ )125I , 121Sb(α ,n )124I , and 123Sb(α ,n )126I reactions were measured with high precision in an energy range between 9.74 and 15.48 MeV, close to the astrophysically relevant energy window. The results are compared with the predictions of statistical model calculations. The (α ,n) data show that the α widths are predicted well for these reactions. The (α ,γ ) results are overestimated by the calculations but this is because of the applied neutron and γ widths. Conclusions: Relevant for the astrophysical reaction rate is the α width used in the calculations. While for other reactions the α widths seem to have been overestimated and their energy dependence was not described well in the measured energy range, this is not the case for the reactions studied here. The result is consistent with the proposal that additional reaction channels, such as Coulomb excitation, may have led to the discrepancies found in other reactions.
Statistical time-dependent model for the interstellar gas
NASA Technical Reports Server (NTRS)
Gerola, H.; Kafatos, M.; Mccray, R.
1974-01-01
We present models for temperature and ionization structure of low, uniform-density (approximately 0.3 per cu cm) interstellar gas in a galactic disk which is exposed to soft X rays from supernova outbursts occurring randomly in space and time. The structure was calculated by computing the time record of temperature and ionization at a given point by Monte Carlo simulation. The calculation yields probability distribution functions for ionized fraction, temperature, and their various observable moments. These time-dependent models predict a bimodal temperature distribution of the gas that agrees with various observations. Cold regions in the low-density gas may have the appearance of clouds in 21-cm absorption. The time-dependent model, in contrast to the steady-state model, predicts large fluctuations in ionization rate and the existence of cold (approximately 30 K), ionized (ionized fraction equal to about 0.1) regions.
A Regression Framework for Effect Size Assessments in Longitudinal Modeling of Group Differences
Feingold, Alan
2013-01-01
The use of growth modeling analysis (GMA)--particularly multilevel analysis and latent growth modeling--to test the significance of intervention effects has increased exponentially in prevention science, clinical psychology, and psychiatry over the past 15 years. Model-based effect sizes for differences in means between two independent groups in GMA can be expressed in the same metric (Cohen’s d) commonly used in classical analysis and meta-analysis. This article first reviews conceptual issues regarding calculation of d for findings from GMA and then introduces an integrative framework for effect size assessments that subsumes GMA. The new approach uses the structure of the linear regression model, from which effect sizes for findings from diverse cross-sectional and longitudinal analyses can be calculated with familiar statistics, such as the regression coefficient, the standard deviation of the dependent measure, and study duration. PMID:23956615
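The conversion the article builds on can be written in one line: the group difference in growth (slope coefficient times study duration) divided by the raw SD of the outcome. The sketch below is an illustration only; Python and all numbers are assumptions.

    # Minimal sketch: Cohen's-d-style effect size for a two-group growth model.
    def gma_effect_size(slope_diff, duration, sd_raw):
        """Group-by-time slope difference scaled to a standardized mean difference."""
        return slope_diff * duration / sd_raw

    # e.g., groups diverge by 0.5 points per month over 6 months, outcome SD of 4 points
    print(gma_effect_size(slope_diff=0.5, duration=6.0, sd_raw=4.0))   # 0.75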
Particle acceleration in a complex solar active region modelled by a Cellular automata model
NASA Astrophysics Data System (ADS)
Dauphin, C.; Vilmer, N.; Anastasiadis, A.
2004-12-01
Cellular automata models have successfully reproduced several statistical properties of solar flares. We use a cellular automaton model based on the concept of a self-organised critical system to describe the evolution of the magnetic energy released in an eruptive active region. Each burst of released magnetic energy is identified with a magnetic reconnection process. We thus generate several reconnecting current sheets (RCS) where the particles are accelerated by a direct electric field. We calculate the energy gain of the particles (ions and electrons) for various types of magnetic configuration. We calculate the kinetic energy distribution function of the particles after their interactions with a given number of RCS for each type of configuration. We show that the relative efficiency of the acceleration of electrons and ions depends on the selected configuration.
The Coalescent Process in Models with Selection
Kaplan, N. L.; Darden, T.; Hudson, R. R.
1988-01-01
Statistical properties of the process describing the genealogical history of a random sample of genes are obtained for a class of population genetics models with selection. For models with selection, in contrast to models without selection, the distribution of this process, the coalescent process, depends on the distribution of the frequencies of alleles in the ancestral generations. If the ancestral frequency process can be approximated by a diffusion, then the mean and the variance of the number of segregating sites due to selectively neutral mutations in random samples can be numerically calculated. The calculations are greatly simplified if the frequencies of the alleles are tightly regulated. If the mutation rates between alleles maintained by balancing selection are low, then the number of selectively neutral segregating sites in a random sample of genes is expected to substantially exceed the number predicted under a neutral model. PMID:3066685
A new equation of state Based on Nuclear Statistical Equilibrium for Core-Collapse Simulations
NASA Astrophysics Data System (ADS)
Furusawa, Shun; Yamada, Shoichi; Sumiyoshi, Kohsuke; Suzuki, Hideyuki
2012-09-01
We calculate a new equation of state for baryons at sub-nuclear densities for use in core-collapse simulations of massive stars. The formulation is based on the nuclear statistical equilibrium description and the liquid drop approximation of nuclei. The model free energy to minimize is calculated by relativistic mean field theory for nucleons and the mass formula for nuclei with atomic number up to ~ 1000. We have also taken into account the pasta phase. We find that the free energy and other thermodynamical quantities are not very different from those given in the standard EOSs that adopt the single nucleus approximation. On the other hand, the average mass is systematically different, which may have an important effect on the rates of electron captures and coherent neutrino scatterings on nuclei in supernova cores.
Assessment of NDE Reliability Data
NASA Technical Reports Server (NTRS)
Yee, B. G. W.; Chang, F. H.; Couchman, J. C.; Lemon, G. H.; Packman, P. F.
1976-01-01
Twenty sets of relevant Nondestructive Evaluation (NDE) reliability data have been identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations has been formulated. A model to grade the quality and validity of the data sets has been developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, have been formulated for each NDE method. A comprehensive computer program has been written to calculate the probability of flaw detection at several confidence levels by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. Probability of detection curves at 95 and 50 percent confidence levels have been plotted for individual sets of relevant data as well as for several sets of merged data with common sets of NDE parameters.
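To illustrate the binomial step mentioned above, the sketch below computes a one-sided lower confidence bound on the probability of detection from hit/miss counts via the Clopper-Pearson method. Python/scipy is an assumption, the counts are hypothetical, and the report's pooling criteria are not reproduced.

    # Minimal sketch: one-sided lower confidence bound on POD from binomial hit/miss data.
    from scipy import stats

    def pod_lower_bound(detections, trials, confidence=0.95):
        if detections == 0:
            return 0.0
        # Clopper-Pearson lower bound on the binomial success probability
        return stats.beta.ppf(1.0 - confidence, detections, trials - detections + 1)

    # Classic illustration: 29 detections in 29 trials gives a 95% lower bound near 0.90.
    print(pod_lower_bound(29, 29))
    print(pod_lower_bound(28, 30))   # lower bound with two misses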
Statistical fluctuations of an ocean surface inferred from shoes and ships
NASA Astrophysics Data System (ADS)
Lerche, Ian; Maubeuge, Frédéric
1995-12-01
This paper shows that it is possible to roughly estimate some ocean properties using simple time-dependent statistical models of ocean fluctuations. Based on a real incident, the loss by a vessel of a Nike shoes container in the North Pacific Ocean, a statistical model was tested on data sets consisting of the Nike shoes found by beachcombers a few months later. This statistical treatment of the shoes' motion allows one to infer velocity trends of the Pacific Ocean, together with their fluctuation strengths. The idea is to suppose that there is a mean bulk flow speed that can depend on location on the ocean surface and time. The fluctuations of the surface flow speed are then treated as statistically random. The distribution of shoes is described in space and time using Markov probability processes related to the mean and fluctuating ocean properties. The aim of the exercise is to provide some of the properties of the Pacific Ocean that are otherwise calculated using a sophisticated numerical model, OSCURS, where numerous data are needed. Relevant quantities are sharply estimated, which can be useful to (1) constrain output results from OSCURS computations, and (2) elucidate the behavior patterns of ocean flow characteristics on long time scales.
New statistical potential for quality assessment of protein models and a survey of energy functions
2010-01-01
Background Scoring functions, such as molecular mechanic forcefields and statistical potentials are fundamentally important tools in protein structure modeling and quality assessment. Results The performances of a number of publicly available scoring functions are compared with a statistical rigor, with an emphasis on knowledge-based potentials. We explored the effect on accuracy of alternative choices for representing interaction center types and other features of scoring functions, such as using information on solvent accessibility, on torsion angles, accounting for secondary structure preferences and side chain orientation. Partially based on the observations made, we present a novel residue based statistical potential, which employs a shuffled reference state definition and takes into account the mutual orientation of residue side chains. Atom- and residue-level statistical potentials and Linux executables to calculate the energy of a given protein proposed in this work can be downloaded from http://www.fiserlab.org/potentials. Conclusions Among the most influential terms we observed a critical role of a proper reference state definition and the benefits of including information about the microenvironment of interaction centers. Molecular mechanical potentials were also tested and found to be over-sensitive to small local imperfections in a structure, requiring unfeasible long energy relaxation before energy scores started to correlate with model quality. PMID:20226048
Schinke, Reinhard; Fleurat-Lessard, Paul
2005-03-01
The effect of zero-point energy differences (ΔZPE) between the possible fragmentation channels of highly excited O3 complexes on the isotope dependence of the formation of ozone is investigated by means of classical trajectory calculations and a strong-collision model. ΔZPE is incorporated in the calculations in a phenomenological way by adjusting the potential energy surface in the product channels so that the correct exothermicities and endothermicities are matched. The model contains two parameters, the frequency of stabilizing collisions ω and an energy dependent parameter Δdamp, which favors the lower energies in the Maxwell-Boltzmann distribution. The stabilization frequency is used to adjust the pressure dependence of the absolute formation rate while Δdamp is utilized to control its isotope dependence. The calculations for several isotope combinations of oxygen atoms show a clear dependence of relative formation rates on ΔZPE. The results are similar to those of Gao and Marcus [J. Chem. Phys. 116, 137 (2002)] obtained within a statistical model. In particular, like in the statistical approach an ad hoc parameter η ≈ 1.14, which effectively reduces the formation rates of the symmetric ABA ozone molecules, has to be introduced in order to obtain good agreement with the measured relative rates of Janssen et al. [Phys. Chem. Chem. Phys. 3, 4718 (2001)]. The temperature dependence of the recombination rate is also addressed.
Molecular vibrational energy flow
NASA Astrophysics Data System (ADS)
Gruebele, M.; Bigwood, R.
This article reviews some recent work in molecular vibrational energy flow (IVR), with emphasis on our own computational and experimental studies. We consider the problem in various representations, and use these to develop a family of simple models which combine specific molecular properties (e.g. size, vibrational frequencies) with statistical properties of the potential energy surface and wavefunctions. This marriage of molecular detail and statistical simplification captures trends of IVR mechanisms and survival probabilities beyond the abilities of purely statistical models or the computational limitations of full ab initio approaches. Of particular interest is IVR in the intermediate time regime, where heavy-atom skeletal modes take over the IVR process from hydrogenic motions even upon X-H bond excitation. Experiments and calculations on prototype heavy-atom systems show that intermediate time IVR differs in many aspects from the early stages of hydrogenic mode IVR. As a result, IVR can be coherently frozen, with potential applications to selective chemistry.
[Hungarian health resource allocation from the viewpoint of the English methodology].
Fadgyas-Freyler, Petra
2018-02-01
This paper describes both the English health resource allocation and the attempt of its Hungarian adaptation. We describe calculations for a Hungarian regression model using the English 'weighted capitation formula'. The model has proven statistically correct. New independent variables and homogenous regional units have to be found for Hungary. The English method can be used with adequate variables. Hungarian patient-level health data can support a much more sophisticated model. Further research activities are needed. Orv Hetil. 2018; 159(5): 183-191.
Ablation effects in oxygen-lead fragmentation at 2.1 GeV/nucleon
NASA Technical Reports Server (NTRS)
Townsend, L. W.
1984-01-01
The mechanism of particle evaporation was used to examine ablation effects in the fragmentation of 2.1 GeV/nucleon oxygen nuclei by lead targets. Following the initial abrasion process, the excited projectile prefragment is assumed to statistically decay in a manner analogous to that of a compound nucleus. The decay probabilities for the various particle emission channels are calculated by using the EVAP-4 Monte Carlo computer program. The input excitation energy spectrum for the prefragment is estimated from the geometric "clean cut" abrasion-ablation model. Isotope production cross sections are calculated and compared with experimental data and with the predictions from the standard geometric abrasion-ablation fragmentation model.
Validation and Sensitivity Analysis of a New Atmosphere-Soil-Vegetation Model.
NASA Astrophysics Data System (ADS)
Nagai, Haruyasu
2002-02-01
This paper describes details, validation, and sensitivity analysis of a new atmosphere-soil-vegetation model. The model consists of one-dimensional multilayer submodels for atmosphere, soil, and vegetation and radiation schemes for the transmission of solar and longwave radiation in the canopy. The atmosphere submodel solves prognostic equations for horizontal wind components, potential temperature, specific humidity, fog water, and turbulence statistics by using a second-order closure model. The soil submodel calculates the transport of heat, liquid water, and water vapor. The vegetation submodel evaluates the heat and water budget on the leaf surface and the downward liquid water flux. The model performance was tested by using measured data of the Cooperative Atmosphere-Surface Exchange Study (CASES). Calculated ground surface fluxes were mainly compared with observations at a winter wheat field, with respect to the diurnal variation and the change over the 32 days of the first CASES field program in 1997, CASES-97. The measured surface fluxes did not satisfy the energy balance, so sensible and latent heat fluxes obtained by the eddy correlation method were corrected. By using options of the solar radiation scheme, which addresses the effect of the direct solar radiation component, the calculated albedo agreed well with the observations. Some sensitivity analyses were also done for model settings. Model calculations of surface fluxes and surface temperature were in good agreement with measurements as a whole.
Theoretical uncertainties in the calculation of supersymmetric dark matter observables
NASA Astrophysics Data System (ADS)
Bergeron, Paul; Sandick, Pearl; Sinha, Kuver
2018-05-01
We estimate the current theoretical uncertainty in supersymmetric dark matter predictions by comparing several state-of-the-art calculations within the minimal supersymmetric standard model (MSSM). We consider standard neutralino dark matter scenarios — coannihilation, well-tempering, pseudoscalar resonance — and benchmark models both in the pMSSM framework and in frameworks with Grand Unified Theory (GUT)-scale unification of supersymmetric mass parameters. The pipelines we consider are constructed from the publicly available software packages SOFTSUSY, SPheno, FeynHiggs, SusyHD, micrOMEGAs, and DarkSUSY. We find that the theoretical uncertainty in the relic density as calculated by different pipelines, in general, far exceeds the statistical errors reported by the Planck collaboration. In GUT models, in particular, the relative discrepancies in the results reported by different pipelines can be as much as a few orders of magnitude. We find that these discrepancies are especially pronounced for cases where the dark matter physics relies critically on calculations related to electroweak symmetry breaking, which we investigate in detail, and for coannihilation models, where there is heightened sensitivity to the sparticle spectrum. The dark matter annihilation cross section today and the scattering cross section with nuclei also suffer appreciable theoretical uncertainties, which, as experiments reach the relevant sensitivities, could lead to uncertainty in conclusions regarding the viability or exclusion of particular models.
Statistical physics of interacting neural networks
NASA Astrophysics Data System (ADS)
Kinzel, Wolfgang; Metzler, Richard; Kanter, Ido
2001-12-01
Recent results on the statistical physics of time series generation and prediction are presented. A neural network is trained on quasi-periodic and chaotic sequences and overlaps to the sequence generator as well as the prediction errors are calculated numerically. For each network there exists a sequence for which it completely fails to make predictions. Two interacting networks show a transition to perfect synchronization. A pool of interacting networks shows good coordination in the minority game, a model of competition in a closed market. Finally, as a demonstration, a perceptron predicts bit sequences produced by human beings.
Prediction of pilot reserve attention capacity during air-to-air target tracking
NASA Technical Reports Server (NTRS)
Onstott, E. D.; Faulkner, W. H.
1977-01-01
Reserve attention capacity of a pilot was calculated using a pilot model that allocates exclusive model attention according to the ranking of task urgency functions whose variables are tracking error and error rate. The modeled task consisted of tracking a maneuvering target aircraft both vertically and horizontally, and when possible, performing a diverting side task which was simulated by the precise positioning of an electrical stylus and modeled as a task of constant urgency in the attention allocation algorithm. The urgency of the single loop vertical task is simply the magnitude of the vertical tracking error, while the multiloop horizontal task requires a nonlinear urgency measure of error and error rate terms. Comparison of model results with flight simulation data verified the computed model statistics of tracking error of both axes, lateral and longitudinal stick amplitude and rate, and side task episodes. Full data for the simulation tracking statistics as well as the explicit equations and structure of the urgency function multiaxis pilot model are presented.
Equations of state for explosive detonation products: The PANDA model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerley, G.I.
1994-05-01
This paper discusses a thermochemical model for calculating equations of state (EOS) for the detonation products of explosives. This model, which was first presented at the Eighth Detonation Symposium, is available in the PANDA code and is referred to here as "the PANDA model". The basic features of the PANDA model are as follows. (1) Statistical-mechanical theories are used to construct EOS tables for each of the chemical species that are to be allowed in the detonation products. (2) The ideal mixing model is used to compute the thermodynamic functions for a mixture of these species, and the composition of the system is determined from the assumption of chemical equilibrium. (3) For hydrocode calculations, the detonation product EOS are used in tabular form, together with a reactive burn model that allows description of shock-induced initiation and growth or failure as well as ideal detonation wave propagation. This model has been implemented in the three-dimensional Eulerian code, CTH.
Hunting down the best model of inflation with Bayesian evidence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Jerome; Ringeval, Christophe; Trotta, Roberto
2011-03-15
We present the first calculation of the Bayesian evidence for different prototypical single field inflationary scenarios, including representative classes of small field and large field models. This approach allows us to compare inflationary models in a well-defined statistical way and to determine the current 'best model of inflation'. The calculation is performed numerically by interfacing the inflationary code FieldInf with MultiNest. We find that small field models are currently preferred, while large field models having a self-interacting potential of power p>4 are strongly disfavored. The class of small field models as a whole has posterior odds of approximately 3:1 when compared with the large field class. The methodology and results presented in this article are an additional step toward the construction of a full numerical pipeline to constrain the physics of the early Universe with astrophysical observations. More accurate data (such as the Planck data) and the techniques introduced here should allow us to identify conclusively the best inflationary model.
The landscape of W± and Z bosons produced in pp collisions up to LHC energies
NASA Astrophysics Data System (ADS)
Basso, Eduardo; Bourrely, Claude; Pasechnik, Roman; Soffer, Jacques
2017-10-01
We consider a selection of recent experimental results on electroweak W±, Z gauge boson production in pp collisions at BNL RHIC and CERN LHC energies in comparison to predictions of perturbative QCD calculations based on different sets of NLO parton distribution functions, including the statistical PDF model known from fits to the DIS data. We show that the current statistical PDF parametrization (fitted to the DIS data only) underestimates the LHC data on W±, Z gauge boson production cross sections at NLO by about 20%. This suggests that there is a need to refit the parameters of the statistical PDF including the latest LHC data.
The Multiphoton Interaction of Lambda Model Atom and Two-Mode Fields
NASA Technical Reports Server (NTRS)
Liu, Tang-Kun
1996-01-01
The system of two-mode fields interacting with an atom through multiphoton processes is addressed, and the non-classical statistical properties of the interacting two-mode fields are discussed. Through mathematical calculation, some new rules for the time evolution of the non-classical effects of the two-mode fields are established.
Performance of DIMTEST-and NOHARM-Based Statistics for Testing Unidimensionality
ERIC Educational Resources Information Center
Finch, Holmes; Habing, Brian
2007-01-01
This Monte Carlo study compares the ability of the parametric bootstrap version of DIMTEST with three goodness-of-fit tests calculated from a fitted NOHARM model to detect violations of the assumption of unidimensionality in testing data. The effectiveness of the procedures was evaluated for different numbers of items, numbers of examinees,…
An Economic Analysis of the Demand for Scientific Journals
ERIC Educational Resources Information Center
Berg, Sanford V.
1972-01-01
The purpose of this study is to demonstrate that economic analysis can be useful in modeling the scientific journal market. Of particular interest is the efficiency of pricing and page policies. To calculate losses due to inefficiencies, demand parameters are statistically estimated and used in a discussion of market efficiency. (3 references)…
NASA Astrophysics Data System (ADS)
Obraztsov, S. M.; Konobeev, Yu. V.; Birzhevoy, G. A.; Rachkov, V. I.
2006-12-01
The dependence of mechanical properties of ferritic/martensitic (F/M) steels on irradiation temperature is of interest because these steels are used as structural materials for fast reactors, fusion reactors, and accelerator-driven systems. Experimental data demonstrating temperature peaks in physical and mechanical properties of neutron-irradiated pure iron, nickel, vanadium, and austenitic stainless steels are available in the literature. The lack of such information for F/M steels forces one to apply computational mathematical-statistical modeling methods. The bootstrap procedure is one such method, allowing the necessary statistical characteristics to be obtained using only a sample of limited size. In the present work this procedure is used for modeling the frequency distribution histograms of ultimate-strength temperature peaks in pure iron and the Russian F/M steels EP-450 and EP-823. Results of fitting sums of Lorentz or Gauss functions to the calculated distributions are presented. It is concluded that there are two temperature peaks of the ultimate strength in EP-450 steel (at 360 and 390 °C) and a single peak at 390 °C in EP-823.
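A minimal sketch of such a bootstrap procedure in Python, using a small hypothetical strength-versus-temperature sample (values are illustrative and not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (temperature [C], ultimate strength [MPa]) sample of limited size
temps = np.array([330, 345, 360, 375, 390, 405, 420])
strength = np.array([610, 640, 668, 655, 672, 650, 628])

n_boot = 5000
peak_temps = np.empty(n_boot)

for i in range(n_boot):
    # Resample the measurements with replacement (the bootstrap step)
    idx = rng.integers(0, len(temps), len(temps))
    t, s = temps[idx], strength[idx]
    # Locate the temperature of maximum strength in the resample
    peak_temps[i] = t[np.argmax(s)]

# Frequency distribution histogram of the peak temperature, which could then
# be fitted with sums of Gauss or Lorentz functions
hist, edges = np.histogram(peak_temps, bins=np.arange(320, 440, 15))
for lo, hi, c in zip(edges[:-1], edges[1:], hist):
    print(f"{lo:.0f}-{hi:.0f} C: {c}")
```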
Ecological footprint model using the support vector machine technique.
Ma, Haibo; Chang, Wenjuan; Cui, Guangbai
2012-01-01
The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature and present five factors that influence per capita EF. These factors are: national gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports to total GDP), and service intensity (measured by the percentage of services to total GDP). A new ecological footprint model based on a support vector machine (SVM), a machine-learning method based on the structural risk minimization principle from statistical learning theory, was developed to calculate the per capita EF of 24 nations using data from 123 nations. The calculation accuracy was measured by the average absolute error and average relative error, which were 0.004883 and 0.351078%, respectively. Our results demonstrate that the EF model based on SVM has good calculation performance.
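A hedged sketch of this kind of SVM regression model, assuming scikit-learn and synthetic data in place of the 123-nation dataset (all values hypothetical):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Hypothetical national indicators: GDP per capita, urbanization, Gini,
# export share of GDP, service share of GDP (the five factors named above)
X_train = rng.random((99, 5))
y_train = 1.5 + 3.0 * X_train[:, 0] + rng.normal(0, 0.2, 99)   # synthetic per capita EF
X_test = rng.random((24, 5))
y_test = 1.5 + 3.0 * X_test[:, 0] + rng.normal(0, 0.2, 24)

# Support vector regression (RBF kernel) with feature standardization
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X_train, y_train)
pred = model.predict(X_test)

abs_err = np.abs(pred - y_test)
rel_err = abs_err / np.abs(y_test)
print("average absolute error:", abs_err.mean())
print("average relative error (%):", 100 * rel_err.mean())
```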
Energy Weighted Angular Correlations Between Hadrons Produced in Electron-Positron Annihilation.
NASA Astrophysics Data System (ADS)
Strharsky, Roger Joseph
Electron-positron annihilation at large center of mass energy produces many hadronic particles. Experimentalists then measure the energies of these particles in calorimeters. This study investigated correlations between the angular locations of one or two such calorimeters and the angular orientation of the electron beam in the laboratory frame of reference. The calculation of these correlations includes weighting by the fraction of the total center of mass energy which the calorimeter measures. Starting with the assumption that the reaction proceeds through the intermediate production of a single quark/anti-quark pair, a simple statistical model was developed to provide a phenomenological description of the distribution of final state hadrons. The model distributions were then used to calculate the one- and two-calorimeter correlation functions. Results of these calculations were compared with available data and several predictions were made for those quantities which had not yet been measured. Failure of the model to reproduce all of the data was discussed in terms of quantum chromodynamics, a fundamental theory which includes quark interactions.
Method for Real-Time Model Based Structural Anomaly Detection
NASA Technical Reports Server (NTRS)
Urnes, James M., Sr. (Inventor); Smith, Timothy A. (Inventor); Reichenbach, Eric Y. (Inventor)
2015-01-01
A system and methods for real-time model-based vehicle structural anomaly detection are disclosed. A real-time measurement corresponding to a location on a vehicle structure during operation of the vehicle is received, and the real-time measurement is compared to expected operation data for the location to provide a modeling error signal. The statistical significance of the modeling error signal is calculated to provide an error significance, and the persistence of the error significance is determined. A structural anomaly is indicated if the persistence exceeds a persistence threshold value.
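A simplified illustration of the described logic (error signal, significance, persistence, threshold); the function, signal, and threshold values are hypothetical and not the patented implementation:

```python
import numpy as np

def detect_anomaly(measured, expected, sigma, z_threshold=3.0, persistence_threshold=5):
    """Flag a structural anomaly when the modeling error stays statistically
    significant for more than `persistence_threshold` consecutive samples."""
    persistence = 0
    for m, e in zip(measured, expected):
        error = m - e                      # modeling error signal
        significance = abs(error) / sigma  # z-score as a simple significance measure
        persistence = persistence + 1 if significance > z_threshold else 0
        if persistence > persistence_threshold:
            return True
    return False

# Hypothetical sensor stream with a drift starting at sample 60
rng = np.random.default_rng(2)
expected = np.zeros(100)
measured = rng.normal(0.0, 1.0, 100)
measured[60:] += 5.0
print(detect_anomaly(measured, expected, sigma=1.0))  # True
```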
A Bayesian approach to modeling diffraction profiles and application to ferroelectric materials
Iamsasri, Thanakorn; Guerrier, Jonathon; Esteves, Giovanni; ...
2017-02-01
A new statistical approach for modeling diffraction profiles is introduced, using Bayesian inference and a Markov chain Monte Carlo (MCMC) algorithm. This method is demonstrated by modeling the degenerate reflections during application of an electric field to two different ferroelectric materials: thin-film lead zirconate titanate (PZT) of composition PbZr0.3Ti0.7O3 and a bulk commercial PZT polycrystalline ferroelectric. Here, the new method offers a unique uncertainty quantification of the model parameters that can be readily propagated into new calculated parameters.
Bushmakin, A G; Cappelleri, J C; Symonds, T; Stecher, V J
2014-01-01
To apportion the direct effect and the indirect effect (through erections) that sildenafil (vs placebo) has on individual satisfaction and couple satisfaction over time, longitudinal mediation modeling was applied to outcomes on the Sexual Experience Questionnaire. The model included data from weeks 4 and 10 (double-blind phase) and week 16 (open-label phase) of a controlled study. Data from 167 patients with erectile dysfunction (ED) were available for analysis. Estimation of statistical significance was based on bootstrap simulations, which allowed inferences at and between time points. Percentages (and corresponding 95% confidence intervals) for direct and indirect effects of treatment were calculated using the model. For the individual satisfaction and couple satisfaction domains, direct treatment effects were negligible (not statistically significant) whereas indirect treatment effects via the erection domain represented >90% of the treatment effects (statistically significant). Week 4 vs week 10 percentages of direct and indirect effects were not statistically different, indicating that the mediation effects are longitudinally invariant. As there was no placebo arm in the open-label phase, mediation effects at week 16 were not estimable. In conclusion, erection has a crucial role as a mediator in restoring individual satisfaction and couple satisfaction in men with ED treated with sildenafil.
Chen, Shi-Yi; Deng, Feilong; Huang, Ying; Li, Cao; Liu, Linhai; Jia, Xianbo; Lai, Song-Jia
2016-01-01
Although various computer tools have been elaborately developed to calculate a series of statistics in molecular population genetics for both small- and large-scale DNA data, there is no efficient and easy-to-use toolkit available yet for exclusively focusing on the steps of mathematical calculation. Here, we present PopSc, a bioinformatic toolkit for calculating 45 basic statistics in molecular population genetics, which could be categorized into three classes, including (i) genetic diversity of DNA sequences, (ii) statistical tests for neutral evolution, and (iii) measures of genetic differentiation among populations. In contrast to the existing computer tools, PopSc was designed to directly accept the intermediate metadata, such as allele frequencies, rather than the raw DNA sequences or genotyping results. PopSc is first implemented as the web-based calculator with user-friendly interface, which greatly facilitates the teaching of population genetics in class and also promotes the convenient and straightforward calculation of statistics in research. Additionally, we also provide the Python library and R package of PopSc, which can be flexibly integrated into other advanced bioinformatic packages of population genetics analysis.
A phenomenological biological dose model for proton therapy based on linear energy transfer spectra.
Rørvik, Eivind; Thörnqvist, Sara; Stokkevåg, Camilla H; Dahle, Tordis J; Fjaera, Lars Fredrik; Ytre-Hauge, Kristian S
2017-06-01
The relative biological effectiveness (RBE) of protons varies with the radiation quality, quantified by the linear energy transfer (LET). Most phenomenological models employ a linear dependency of the dose-averaged LET (LETd) to calculate the biological dose. However, several experiments have indicated a possible non-linear trend. Our aim was to investigate if biological dose models including non-linear LET dependencies should be considered, by introducing a LET spectrum based dose model. The RBE-LET relationship was investigated by fitting of polynomials from 1st to 5th degree to a database of 85 data points from aerobic in vitro experiments. We included both unweighted and weighted regression, the latter taking into account experimental uncertainties. Statistical testing was performed to decide whether higher degree polynomials provided better fits to the data as compared to lower degrees. The newly developed models were compared to three published LETd based models for a simulated spread out Bragg peak (SOBP) scenario. The statistical analysis of the weighted regression analysis favored a non-linear RBE-LET relationship, with the quartic polynomial found to best represent the experimental data (P = 0.010). The results of the unweighted regression analysis were on the borderline of statistical significance for non-linear functions (P = 0.053), and with the current database a linear dependency could not be rejected. For the SOBP scenario, the weighted non-linear model estimated a similar mean RBE value (1.14) compared to the three established models (1.13-1.17). The unweighted model calculated a considerably higher RBE value (1.22). The analysis indicated that non-linear models could give a better representation of the RBE-LET relationship. However, this is not decisive, as inclusion of the experimental uncertainties in the regression analysis had a significant impact on the determination and ranking of the models. As differences between the models were observed for the SOBP scenario, both non-linear LET spectrum-based and linear LETd-based models should be further evaluated in clinically realistic scenarios. © 2017 American Association of Physicists in Medicine.
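A rough sketch of weighted polynomial fitting with an F-test for increasing degree, on synthetic RBE-LET data (not the 85-point database used in the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical in vitro data: LET [keV/um], RBE, and experimental standard errors
let = np.linspace(1, 20, 30)
rbe = 1.0 + 0.04 * let + 0.002 * let**2 + rng.normal(0, 0.05, let.size)
sigma = np.full(let.size, 0.05)

def weighted_fit(deg):
    coef = np.polyfit(let, rbe, deg, w=1.0 / sigma)  # numpy expects 1/sigma weights
    resid = (rbe - np.polyval(coef, let)) / sigma
    return coef, np.sum(resid**2)                    # weighted chi-square

# F-test: does increasing the polynomial degree significantly improve the fit?
for low, high in [(1, 2), (2, 3), (3, 4)]:
    _, chi2_low = weighted_fit(low)
    _, chi2_high = weighted_fit(high)
    df_high = let.size - (high + 1)
    f = (chi2_low - chi2_high) / (chi2_high / df_high)
    p = stats.f.sf(f, 1, df_high)
    print(f"degree {low} -> {high}: p = {p:.3f}")
```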
Direct numerical simulations and modeling of a spatially-evolving turbulent wake
NASA Technical Reports Server (NTRS)
Cimbala, John M.
1994-01-01
Understanding of turbulent free shear flows (wakes, jets, and mixing layers) is important, not only for scientific interest, but also because of their appearance in numerous practical applications. Turbulent wakes, in particular, have recently received increased attention by researchers at NASA Langley. The turbulent wake generated by a two-dimensional airfoil has been selected as the test-case for detailed high-resolution particle image velocimetry (PIV) experiments. This same wake has also been chosen to enhance NASA's turbulence modeling efforts. Over the past year, the author has completed several wake computations, while visiting NASA through the 1993 and 1994 ASEE summer programs, and also while on sabbatical leave during the 1993-94 academic year. These calculations have included two-equation (k-ω and k-ε) models, algebraic stress models (ASM), full Reynolds stress closure models, and direct numerical simulations (DNS). Recently, there has been mutually beneficial collaboration of the experimental and computational efforts. In fact, these projects have been chosen for joint presentation at the NASA Turbulence Peer Review, scheduled for September 1994. DNS calculations are presently underway for a turbulent wake at Reθ = 1000 and at a Mach number of 0.20. (θ is the momentum thickness, which remains constant in the wake of a two-dimensional body.) These calculations utilize a compressible DNS code written by M. M. Rai of NASA Ames, and modified for the wake by J. Cimbala. The code employs fifth-order accurate upwind-biased finite differencing for the convective terms, fourth-order accurate central differencing for the viscous terms, and an iterative-implicit time-integration scheme. The computational domain for these calculations starts at x/θ = 10, and extends to x/θ = 610. Fully developed turbulent wake profiles, obtained from experimental data from several wake generators, are supplied at the computational inlet, along with appropriate noise. After some adjustment period, the flow downstream of the inlet develops into a fully three-dimensional turbulent wake. Of particular interest in the present study is the far wake spreading rate and the self-similar mean and turbulence profiles. At the time of this writing, grid resolution studies are underway, and a code is being written to calculate turbulence statistics from these wake calculations; the statistics will be compared to those from the ongoing PIV wake measurements, those of previous experiments, and those predicted by the various turbulence models. These calculations will lead to significant long-term benefits for the turbulence modeling effort. In particular, quantities such as the pressure-strain correlation and the dissipation rate tensor can be easily calculated from the DNS results, whereas these quantities are nearly impossible to measure experimentally. Improvements to existing turbulence models (and development of new models) require knowledge about flow quantities such as these. Present turbulence models do a very good job at prediction of the shape of the mean velocity and Reynolds stress profiles in a turbulent wake, but significantly underpredict the magnitude of the stresses and the spreading rate of the wake. Thus, the turbulent wake is an ideal flow for turbulence modeling research. By careful comparison and analysis of each term in the modeled Reynolds stress equations, the DNS data can show where deficiencies in the models exist; improvements to the models can then be attempted.
Statistics-based model for prediction of chemical biosynthesis yield from Saccharomyces cerevisiae
2011-01-01
Background The robustness of Saccharomyces cerevisiae in facilitating industrial-scale production of ethanol extends its utilization as a platform to synthesize other metabolites. Metabolic engineering strategies, typically via pathway overexpression and deletion, continue to play a key role for optimizing the conversion efficiency of substrates into the desired products. However, chemical production titer or yield remains difficult to predict based on reaction stoichiometry and mass balance. We sampled a large space of data of chemical production from S. cerevisiae, and developed a statistics-based model to calculate production yield using input variables that represent the number of enzymatic steps in the key biosynthetic pathway of interest, metabolic modifications, cultivation modes, nutrition and oxygen availability. Results Based on the production data of about 40 chemicals produced from S. cerevisiae, metabolic engineering methods, nutrient supplementation, and fermentation conditions described therein, we generated mathematical models with numerical and categorical variables to predict production yield. Statistically, the models showed that: 1. Chemical production from central metabolic precursors decreased exponentially with increasing number of enzymatic steps for biosynthesis (>30% loss of yield per enzymatic step, P-value = 0); 2. Categorical variables of gene overexpression and knockout improved product yield 2- to 4-fold (P-value < 0.1); 3. Addition of a notable amount of intermediate precursors or nutrients improved product yield more than fivefold (P-value < 0.05); 4. Performing the cultivation in a well-controlled bioreactor enhanced the yield of product threefold (P-value < 0.05); 5. The contribution of oxygen to product yield was not statistically significant. Yield calculations for various chemicals using the linear model were in fairly good agreement with the experimental values. The model generally underestimated the ethanol production as compared to other chemicals, which supported the notion that the metabolism of Saccharomyces cerevisiae has historically evolved for robust alcohol fermentation. Conclusions We generated simple mathematical models for first-order approximation of chemical production yield from S. cerevisiae. These linear models provide empirical insights into the effects of strain engineering and cultivation conditions on biosynthetic efficiency. These models may not only provide guidelines for metabolic engineers to synthesize desired products, but also be useful to compare the biosynthesis performance among different research papers. PMID:21689458
Akazawa, K; Nakamura, T; Moriguchi, S; Shimada, M; Nose, Y
1991-07-01
Small sample properties of the maximum partial likelihood estimates for Cox's proportional hazards model depend on the sample size, the true values of regression coefficients, covariate structure, censoring pattern and possibly baseline hazard functions. Therefore, it would be difficult to construct a formula or table to calculate the exact power of a statistical test for the treatment effect in any specific clinical trial. The simulation program, written in SAS/IML, described in this paper uses Monte-Carlo methods to provide estimates of the exact power for Cox's proportional hazards model. For illustrative purposes, the program was applied to real data obtained from a clinical trial performed in Japan. Since the program does not assume any specific function for the baseline hazard, it is, in principle, applicable to any censored survival data as long as they follow Cox's proportional hazards model.
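A loose Python analogue of such a power simulation, assuming the lifelines package for Cox regression; the trial design and parameter values are hypothetical, and the original program was written in SAS/IML:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)

def simulate_trial(n=100, hazard_ratio=0.6, censor_time=3.0):
    """One simulated two-arm trial with exponential baseline hazard and
    administrative censoring (no specific baseline shape is required in general)."""
    arm = rng.integers(0, 2, n)
    t = rng.exponential(1.0 / (0.5 * hazard_ratio**arm))  # control-arm hazard 0.5
    event = t <= censor_time
    return pd.DataFrame({"time": np.minimum(t, censor_time),
                         "event": event.astype(int), "arm": arm})

def estimate_power(n_sim=200, alpha=0.05):
    hits = 0
    for _ in range(n_sim):
        df = simulate_trial()
        cph = CoxPHFitter()
        cph.fit(df, duration_col="time", event_col="event")
        if cph.summary.loc["arm", "p"] < alpha:   # Wald test of the treatment effect
            hits += 1
    return hits / n_sim

print("estimated power:", estimate_power())
```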
Semiclassical matrix model for quantum chaotic transport with time-reversal symmetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novaes, Marcel, E-mail: marcel.novaes@gmail.com
2015-10-15
We show that the semiclassical approach to chaotic quantum transport in the presence of time-reversal symmetry can be described by a matrix model. In other words, we construct a matrix integral whose perturbative expansion satisfies the semiclassical diagrammatic rules for the calculation of transport statistics. One of the virtues of this approach is that it leads very naturally to the semiclassical derivation of universal predictions from random matrix theory.
Lee, F K-H; Chan, C C-L; Law, C-K
2009-02-01
Contrast-enhanced computed tomography (CECT) has been used for delineation of the treatment target in radiotherapy. The different Hounsfield units due to the injected contrast agent may affect radiation dose calculation. We investigated this effect on intensity-modulated radiotherapy (IMRT) of nasopharyngeal carcinoma (NPC). Dose distributions of 15 IMRT plans were recalculated on CECT. Dose statistics for organs at risk (OAR) and treatment targets were recorded for the plain CT-calculated and CECT-calculated plans. Statistical significance of the differences was evaluated. Correlations were also tested among the magnitude of the calculated dose difference, tumor size, and level of contrast enhancement. Differences in nodal mean/median dose were statistically significant, but small (approximately 0.15 Gy for a 66 Gy prescription). In the vicinity of the carotid arteries, the difference in calculated dose was also statistically significant, but only with a mean of approximately 0.2 Gy. We did not observe any significant correlation between the difference in the calculated dose and the tumor size or level of enhancement. The results implied that the calculated dose difference was clinically insignificant and may be acceptable for IMRT planning.
Fattal, D R; Ben-Shaul, A
1994-01-01
A molecular, mean-field theory of chain packing statistics in aggregates of amphiphilic molecules is applied to calculate the conformational properties of the lipid chains comprising the hydrophobic cores of dipalmitoyl-phosphatidylcholine (DPPC), dioleoyl-phosphatidylcholine (DOPC), and palmitoyl-oleoyl-phosphatidylcholine (POPC) bilayers in their fluid state. The central quantity in this theory, the probability distribution of chain conformations, is evaluated by minimizing the free energy of the bilayer assuming only that the segment density within the hydrophobic region is uniform (liquidlike). Using this distribution we calculate chain conformational properties such as bond orientational order parameters and spatial distributions of the various chain segments. The lipid chains, both the saturated palmitoyl (-(CH2)14-CH3) and the unsaturated oleoyl (-(CH2)7-CH=CH-(CH2)7-CH3) chains are modeled using rotational isomeric state schemes. All possible chain conformations are enumerated and their statistical weights are determined by the self-consistency equations expressing the condition of uniform density. The hydrophobic core of the DPPC bilayer is treated as composed of single (palmitoyl) chain amphiphiles, i.e., the interactions between chains originating from the same lipid headgroup are assumed to be the same as those between chains belonging to different molecules. Similarly, the DOPC system is treated as a bilayer of oleoyl chains. The POPC bilayer is modeled as an equimolar mixture of palmitoyl and oleoyl chains. Bond orientational order parameter profiles, and segment spatial distributions are calculated for the three systems above, for several values of the bilayer thickness (or, equivalently, average area/headgroup) chosen, where possible, so as to allow for comparisons with available experimental data and/or molecular dynamics simulations. In most cases the agreement between the mean-field calculations, which are relatively easy to perform, and the experimental and simulation data is very good, supporting their use as an efficient tool for analyzing a variety of systems subject to varying conditions (e.g., bilayers of different compositions or thicknesses at different temperatures). PMID:7811955
NASA Astrophysics Data System (ADS)
Emoto, K.; Saito, T.; Shiomi, K.
2017-12-01
Short-period (<1 s) seismograms are strongly affected by small-scale (<10 km) heterogeneities in the lithosphere. In general, short-period seismograms are analysed based on the statistical method by considering the interaction between seismic waves and randomly distributed small-scale heterogeneities. Statistical properties of the random heterogeneities have been estimated by analysing short-period seismograms. However, generally, the small-scale random heterogeneity is not taken into account for the modelling of long-period (>2 s) seismograms. We found that the energy of the coda of long-period seismograms shows a spatially flat distribution. This phenomenon is well known in short-period seismograms and results from the scattering by small-scale heterogeneities. We estimate the statistical parameters that characterize the small-scale random heterogeneity by modelling the spatiotemporal energy distribution of long-period seismograms. We analyse three moderate-size earthquakes that occurred in southwest Japan. We calculate the spatial distribution of the energy density recorded by a dense seismograph network in Japan at the period bands of 8-16 s, 4-8 s and 2-4 s and model them by using 3-D finite difference (FD) simulations. Compared to conventional methods based on statistical theories, we can calculate more realistic synthetics by using the FD simulation. It is not necessary to assume a uniform background velocity, body or surface waves and scattering properties considered in general scattering theories. By taking the ratio of the energy of the coda area to that of the entire area, we can separately estimate the scattering and the intrinsic absorption effects. Our result reveals the spectrum of the random inhomogeneity in a wide wavenumber range including the intensity around the corner wavenumber as P(m) = 8πε²a³/(1 + a²m²)², where ε = 0.05 and a = 3.1 km, even though past studies analysing higher-frequency records could not detect the corner. Finally, we estimate the intrinsic attenuation by modelling the decay rate of the energy. The method proposed in this study is suitable for quantifying the statistical properties of long-wavelength subsurface random inhomogeneity, which leads the way to characterizing a wider wavenumber range of spectra, including the corner wavenumber.
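A minimal evaluation of the reported spectrum, using only the parameter values quoted above (the sample wavenumbers chosen here are arbitrary):

```python
import numpy as np

eps, a = 0.05, 3.1  # fractional fluctuation and correlation distance [km] from the abstract

def power_spectrum(m):
    """Power spectral density P(m) = 8*pi*eps^2*a^3 / (1 + a^2 m^2)^2."""
    return 8.0 * np.pi * eps**2 * a**3 / (1.0 + a**2 * m**2) ** 2

# Evaluate around the corner wavenumber m_c = 1/a
m = np.array([0.01, 1.0 / a, 1.0, 10.0])  # wavenumber [1/km]
print(power_spectrum(m))
```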
Reproducibility of structural strength and stiffness for graphite-epoxy aircraft spoilers
NASA Technical Reports Server (NTRS)
Howell, W. E.; Reese, C. D.
1978-01-01
Structural strength reproducibility of graphite-epoxy composite spoilers for the Boeing 737 aircraft was evaluated by statically loading fifteen spoilers to failure under conditions simulating aerodynamic loads. Spoiler strength and stiffness data were statistically modeled using a two-parameter Weibull distribution function. Shape parameter values calculated for the composite spoiler strength and stiffness were within the range of corresponding shape parameter values calculated for material property data of composite laminates. This agreement showed that the reproducibility of full-scale component structural properties was within the reproducibility range of data from material property tests.
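An illustrative two-parameter Weibull fit in Python with hypothetical failure-load data (the actual spoiler data are not reproduced here):

```python
import numpy as np
from scipy import stats

# Hypothetical failure loads for fifteen spoilers (percent of design limit load)
loads = np.array([121, 118, 130, 125, 117, 128, 133, 122, 119, 127,
                  124, 131, 120, 126, 129], dtype=float)

# Two-parameter Weibull: fix the location parameter to zero and estimate
# the shape (a measure of reproducibility) and scale (characteristic strength)
shape, loc, scale = stats.weibull_min.fit(loads, floc=0.0)
print(f"shape parameter: {shape:.1f}, characteristic strength: {scale:.1f}")
```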
Chi-Square Statistics, Tests of Hypothesis and Technology.
ERIC Educational Resources Information Center
Rochowicz, John A.
The use of technology such as computers and programmable calculators enables students to find p-values and conduct tests of hypotheses in many different ways. Comprehension and interpretation of a research problem become the focus for statistical analysis. This paper describes how to calculate chi-square statistics and p-values for statistical…
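A brief example of computing a chi-square statistic and p-value with software, using a hypothetical contingency table:

```python
from scipy import stats

# Hypothetical 2x3 contingency table of observed counts
observed = [[18, 22, 10],
            [12, 28, 10]]

chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p-value = {p:.3f}")

# Equivalently, the p-value for a known chi-square statistic and df
print(stats.chi2.sf(chi2, dof))
```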
A 3D model retrieval approach based on Bayesian networks lightfield descriptor
NASA Astrophysics Data System (ADS)
Xiao, Qinhan; Li, Yanjun
2009-12-01
A new 3D model retrieval methodology is proposed by exploiting a novel Bayesian networks lightfield descriptor (BNLD). There are two key novelties in our approach: (1) a BN-based method for building the lightfield descriptor; and (2) a 3D model retrieval scheme based on the proposed BNLD. To overcome the disadvantages of the existing 3D model retrieval methods, we explore BNs for building a new lightfield descriptor. Firstly, the 3D model is placed into a lightfield and about 300 binary views are obtained along a sphere; Fourier descriptors and Zernike moment descriptors are then calculated from the binary views. The shape feature sequence is then learned into a BN model using a BN learning algorithm. Secondly, we propose a new 3D model retrieval method based on calculating the Kullback-Leibler Divergence (KLD) between BNLDs. Benefiting from statistical learning, our BNLD is robust to noise compared with the existing methods. A comparison between our method and the lightfield descriptor-based approach is conducted to demonstrate the effectiveness of our proposed methodology.
A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components
NASA Technical Reports Server (NTRS)
Abernethy, K.
1986-01-01
The incorporation of a number of additional capabilities into an existing Weibull analysis computer program and the results of a Monte Carlo computer simulation study to evaluate the usefulness of the Weibull methods using samples with a very small number of failures and extensive censoring are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs that are used in the SSME Weibull analysis are described. The previously documented methods were supplemented by adding computer calculations of approximate confidence intervals (using iterative methods) for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described, along with the simulation program and the techniques used in it. Simulation results are tabulated for various combinations of Weibull shape parameters and numbers of failures in the samples.
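A small sketch of the random-censoring Monte Carlo idea, drawing Weibull failure times and uniform censoring times and fitting the censored sample by maximum likelihood (parameter values are hypothetical):

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(5)

def simulate_censored_sample(n=10, shape=2.0, scale=100.0, max_censor=150.0):
    """Weibull failure times with censoring times drawn from a uniform
    distribution, as in the random-censoring model described above."""
    t_fail = scale * rng.weibull(shape, n)
    t_cens = rng.uniform(0.0, max_censor, n)
    time = np.minimum(t_fail, t_cens)
    failed = t_fail <= t_cens
    return time, failed

def neg_log_likelihood(params, time, failed):
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    z = time / scale
    # density contribution for failures, survival contribution for censored units
    ll = np.sum(failed * (np.log(shape / scale) + (shape - 1) * np.log(z)) - z**shape)
    return -ll

time, failed = simulate_censored_sample()
res = optimize.minimize(neg_log_likelihood, x0=[1.0, np.mean(time)],
                        args=(time, failed), method="Nelder-Mead")
print("MLE shape, scale:", res.x)
```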
Long-term changes (1980-2003) in total ozone time series over Northern Hemisphere midlatitudes
NASA Astrophysics Data System (ADS)
Białek, Małgorzata
2006-03-01
Long-term changes in total ozone time series for the Arosa, Belsk, Boulder and Sapporo stations are examined. For each station we analyze time series of the following statistical characteristics of the distribution of daily ozone data: seasonal mean, standard deviation, maximum and minimum of total daily ozone values for all seasons. An iterative statistical model is proposed to estimate trends and long-term changes in the statistical distribution of the daily total ozone data. The trends are calculated for the period 1980-2003. We observe a lessening of the negative trends in the seasonal means as compared to those calculated by WMO for 1980-2000. We discuss the possibility of a change in the distribution shape of the daily ozone data, using the Kolmogorov-Smirnov test and comparing trend values in the seasonal mean, standard deviation, maximum and minimum time series for the selected stations and seasons. A shift of the distribution toward lower values without a change in the distribution shape is suggested, with the following exceptions: a spreading of the distribution toward lower values for Belsk during winter, and no decisive result for Sapporo and Boulder in summer.
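For illustration, a two-sample Kolmogorov-Smirnov comparison of this kind might look as follows in Python, with synthetic ozone values standing in for the station data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical daily total ozone [DU] for one season in two sub-periods
ozone_early = rng.normal(340, 25, 900)
ozone_late = rng.normal(330, 25, 900)   # shifted mean, same spread

# Two-sample Kolmogorov-Smirnov test for a change in the distribution
stat, p = stats.ks_2samp(ozone_early, ozone_late)
print(f"KS statistic = {stat:.3f}, p-value = {p:.3g}")
```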
On the analysis of para-ammonia observations
NASA Technical Reports Server (NTRS)
Kuiper, T. B. H.
1994-01-01
The intensities and optical depths of the (1, 1), (2, 2), and (2, 1) inversion transitions of ammonia can be calculated quite accurately without solving the equations of statistical equilibrium. A two-temperature partition function suffices. The excitation of the K-ladders can be approximated by using a temperature obtained from a two-level model with the (2, 1) and (1, 1) levels. Distribution of populations between the ladders is described with the kinetic temperature. This enables one to compute the (1, 1) and (2, 1) inversion transition excitation temperatures and optical depths. To compute the (2, 2) brightness temperatures, the fractional population of the (2, 2) doublet is computed from the population of the (1, 1) doublet using the 'true rotation temperature,' which is calculated using a three-level model with the (2, 1), (2, 2), and (1, 1) levels. In spite of some iterative steps, the calculation is quite fast.
NASA Astrophysics Data System (ADS)
Marjani, Azam
2016-07-01
For biomolecule and cell particle purification and separation in biological engineering, besides chromatography as the most widely applied process, aqueous two-phase systems (ATPS) are among the most favorable separation processes and are worth investigating thermodynamically. In recent years, thermodynamic calculation of ATPS properties has attracted much attention due to its great application in chemical industries such as separation processes. These phase calculations of ATPS have inherent complexity due to the presence of ions and polymers in aqueous solution. In this work, a thermodynamic investigation of ternary polyethylene glycol (PEG4000)-salt-water systems with three salts (NaCl, KCl and LiCl) has been carried out, as PEG is the most favorable polymer in ATPS. The modified perturbed hard sphere chain (PHSC) equation of state (EOS), extended Debye-Hückel and Pitzer models were employed for calculation of activity coefficients for the considered systems. Four additional statistical parameters were considered to ensure the consistency of the correlations and were introduced as objective functions in the particle swarm optimization algorithm. The results showed desirable agreement with the available experimental data, and the order of recommendation of the studied models is PHSC EOS > extended Debye-Hückel > Pitzer. The concluding remark is that all the employed models are reliable in such calculations and can be used for thermodynamic correlations/predictions; however, by using an ion-based parameter calculation method, the PHSC EOS reveals both reliability and universality of application.
Bimodality emerges from transport model calculations of heavy ion collisions at intermediate energy
NASA Astrophysics Data System (ADS)
Mallik, S.; Das Gupta, S.; Chaudhuri, G.
2016-04-01
This work is a continuation of our effort [S. Mallik, S. Das Gupta, and G. Chaudhuri, Phys. Rev. C 91, 034616 (2015); doi:10.1103/PhysRevC.91.034616] to examine if signatures of a phase transition can be extracted from transport model calculations of heavy ion collisions at intermediate energy. A signature of first-order phase transition is the appearance of a bimodal distribution in Pm(k) in finite systems. Here Pm(k) is the probability that the maximum of the multiplicity distribution occurs at mass number k. Using a well-known model for event generation [Boltzmann-Uehling-Uhlenbeck (BUU) plus fluctuation], we study two cases of central collision: mass 40 on mass 40 and mass 120 on mass 120. Bimodality is seen in both cases. The results are quite similar to those obtained in statistical model calculations. An intriguing feature is seen. We observe that at the energy where bimodality occurs, other phase-transition-like signatures appear. There are breaks in certain first-order derivatives. We then examine if such breaks appear in standard BUU calculations without fluctuations. They do. The implication is interesting. If a first-order phase transition occurs, it may be possible to recognize that from ordinary BUU calculations. Probably the reason this has not been seen already is because this aspect was not investigated before.
NASA Technical Reports Server (NTRS)
Selle, L. C.; Bellan, Josette
2006-01-01
Transitional databases from Direct Numerical Simulation (DNS) of three-dimensional mixing layers for single-phase flows and two-phase flows with evaporation are analyzed and used to examine the typical hypothesis that the scalar dissipation Probability Distribution Function (PDF) may be modeled as a Gaussian. The databases encompass a single-component fuel and four multicomponent fuels, two initial Reynolds numbers (Re), two mass loadings for two-phase flows and two free-stream gas temperatures. Using the DNS calculated moments of the scalar-dissipation PDF, it is shown, consistent with existing experimental information on single-phase flows, that the Gaussian is a modest approximation of the DNS-extracted PDF, particularly poor in the range of the high scalar-dissipation values, which are significant for turbulent reaction rate modeling in non-premixed flows using flamelet models. With the same DNS calculated moments of the scalar-dissipation PDF and making a change of variables, a model of this PDF is proposed in the form of the β-PDF, which is shown to approximate much better the DNS-extracted PDF, particularly in the regime of the high scalar-dissipation values. Several types of statistical measures are calculated over the ensemble of the fourteen databases. For each statistical measure, the proposed β-PDF model is shown to be much superior to the Gaussian in approximating the DNS-extracted PDF. Additionally, the agreement between the DNS-extracted PDF and the β-PDF even improves when the comparison is performed for higher initial Re layers, whereas the comparison with the Gaussian is independent of the initial Re values. For two-phase flows, the comparison between the DNS-extracted PDF and the β-PDF also improves with increasing free-stream gas temperature and mass loading. The higher fidelity approximation of the DNS-extracted PDF by the β-PDF with increasing Re, gas temperature and mass loading bodes well for turbulent reaction rate modeling.
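A minimal sketch of constructing a β-PDF from the first two moments of a variable mapped onto [0, 1], with hypothetical moment values rather than the DNS statistics:

```python
import numpy as np
from scipy import stats

def beta_from_moments(mean, var):
    """Method-of-moments beta parameters for a variable mapped onto [0, 1]."""
    factor = mean * (1.0 - mean) / var - 1.0
    return mean * factor, (1.0 - mean) * factor

# Hypothetical normalized scalar-dissipation moments (not DNS values)
mean, var = 0.12, 0.015
a, b = beta_from_moments(mean, var)

x = np.linspace(1e-3, 0.999, 5)
print("beta-PDF parameters:", a, b)
print("beta-PDF :", stats.beta.pdf(x, a, b))
print("Gaussian :", stats.norm.pdf(x, mean, np.sqrt(var)))
```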
NASA Astrophysics Data System (ADS)
Pfeifle, Mark; Ma, Yong-Tao; Jasper, Ahren W.; Harding, Lawrence B.; Hase, William L.; Klippenstein, Stephen J.
2018-05-01
Ozonolysis produces chemically activated carbonyl oxides (Criegee intermediates, CIs) that are either stabilized or decompose directly. This branching has an important impact on atmospheric chemistry. Prior theoretical studies have employed statistical models for energy partitioning to the CI arising from dissociation of the initially formed primary ozonide (POZ). Here, we used direct dynamics simulations to explore this partitioning for decomposition of c-C2H4O3, the POZ in ethylene ozonolysis. A priori estimates for the overall stabilization probability were then obtained by coupling the direct dynamics results with master equation simulations. Trajectories were initiated at the concerted cycloreversion transition state, as well as the second transition state of a stepwise dissociation pathway, both leading to a CI (H2COO) and formaldehyde (H2CO). The resulting CI energy distributions were incorporated in master equation simulations of CI decomposition to obtain channel-specific stabilized CI (sCI) yields. Master equation simulations of POZ formation and decomposition, based on new high-level electronic structure calculations, were used to predict yields for the different POZ decomposition channels. A non-negligible contribution of stepwise POZ dissociation was found, and new mechanistic aspects of this pathway were elucidated. By combining the trajectory-based channel-specific sCI yields with the channel branching fractions, an overall sCI yield of (48 ± 5)% was obtained. Non-statistical energy release was shown to measurably affect sCI formation, with statistical models predicting significantly lower overall sCI yields (˜30%). Within the range of experimental literature values (35%-54%), our trajectory-based calculations favor those clustered at the upper end of the spectrum.
SOCR Analyses – an Instructional Java Web-based Statistical Analysis Toolkit
Chu, Annie; Cui, Jenny; Dinov, Ivo D.
2011-01-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include models commonly used in undergraduate statistics courses, such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), with the hope of contributing to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most updated information and newly added models. PMID:21546994
Seven-parameter statistical model for BRDF in the UV band.
Bai, Lu; Wu, Zhensen; Zou, Xiren; Cao, Yunhua
2012-05-21
A new semi-empirical seven-parameter BRDF model is developed in the UV band using experimentally measured data. The model is based on the five-parameter model of Wu and the fourteen-parameter model of Renhorn and Boreman. Surface scatter, bulk scatter and retro-reflection scatter are considered. An optimizing modeling method, the artificial immune network genetic algorithm, is used to fit the BRDF measurement data over a wide range of incident angles. The calculation time and accuracy of the five- and seven-parameter models are compared. Once the seven parameters are fixed, the model describes scattering data in the UV band well.
Neutron-γ competition for β-delayed neutron emission
Mumpower, Matthew Ryan; Kawano, Toshihiko; Moller, Peter
2016-12-19
Here we present a coupled quasiparticle random phase approximation and Hauser-Feshbach (QRPA+HF) model for calculating delayed particle emission. This approach uses microscopic nuclear structure information, which starts with Gamow-Teller strength distributions in the daughter nucleus and then follows the statistical decay until the initial available excitation energy is exhausted. Explicitly included at each particle emission stage is γ-ray competition. We explore this model in the context of neutron emission of neutron-rich nuclei and find that neutron-γ competition can lead to both increases and decreases in neutron emission probabilities, depending on the system considered. Finally, a second consequence of this formalism is a prediction of more neutrons on average being emitted after β decay for nuclei near the neutron drip line compared to models that do not consider the statistical decay.
Nonlinear estimation of parameters in biphasic Arrhenius plots.
Puterman, M L; Hrboticky, N; Innis, S M
1988-05-01
This paper presents a formal procedure for the statistical analysis of data on the thermotropic behavior of membrane-bound enzymes generated using the Arrhenius equation, and compares the analysis to several alternatives. Data are modeled by a bent hyperbola. Nonlinear regression is used to obtain estimates and standard errors of the intersection of the line segments, defined as the transition temperature, and of the slopes, defined as the energies of activation of the enzyme reaction. The methodology allows formal tests of the adequacy of a biphasic model rather than either a single straight line or a curvilinear model. Examples using data on the thermotropic behavior of pig brain synaptosomal acetylcholinesterase are given. The data support the biphasic temperature dependence of this enzyme. The methodology represents a formal procedure for statistical validation of any biphasic data and allows for calculation of all line parameters with estimates of precision.
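A hedged sketch of fitting a bent-hyperbola model by nonlinear regression, assuming SciPy's curve_fit and synthetic Arrhenius-type data; the functional form shown is one common smooth two-segment parametrization, not necessarily the exact one used in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def bent_hyperbola(x, b0, b1, b2, x0, gamma):
    """Smooth two-segment ("bent hyperbola") model: asymptotic slopes are
    b1 - b2 and b1 + b2, joined near the transition point x0."""
    return b0 + b1 * (x - x0) + b2 * np.sqrt((x - x0) ** 2 + gamma**2)

rng = np.random.default_rng(7)

# Hypothetical Arrhenius data: x = 1000/T [1/K], y = ln(activity)
x = np.linspace(3.1, 3.6, 25)
y = bent_hyperbola(x, 2.0, -4.0, -2.0, 3.35, 0.01) + rng.normal(0, 0.03, x.size)

popt, pcov = curve_fit(bent_hyperbola, x, y, p0=[2.0, -3.0, -1.0, 3.3, 0.05])
perr = np.sqrt(np.diag(pcov))  # standard errors of the fitted parameters

x0, x0_err = popt[3], perr[3]
print(f"transition at 1000/T = {x0:.3f} +/- {x0_err:.3f}  (T = {1000/x0:.1f} K)")
```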
NASA Astrophysics Data System (ADS)
Ogawa, Tatsuhiko; Sato, Tatsuhiko; Hashimoto, Shintaro; Niita, Koji
2014-06-01
The fragmentation reactions of relativistic-energy nucleus-nucleus and proton-nucleus collisions were simulated using the Statistical Multi-fragmentation Model (SMM) incorporated into the Particle and Heavy Ion Transport code System (PHITS). Comparisons of calculated cross sections with literature data showed that PHITS-SMM predicts the fragmentation cross sections of heavy nuclei up to two orders of magnitude more accurately than PHITS for heavy-ion-induced reactions. For proton-induced reactions, noticeable improvements are observed for interactions of heavy targets with protons at energies greater than 1 GeV. Therefore, consideration of multi-fragmentation reactions is necessary for the accurate simulation of energetic fragmentation reactions of heavy nuclei.
Spectral likelihood expansions for Bayesian inference
NASA Astrophysics Data System (ADS)
Nagel, Joseph B.; Sudret, Bruno
2016-03-01
A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
Quantum statistics of Raman scattering model with Stokes mode generation
NASA Technical Reports Server (NTRS)
Tanatar, Bilal; Shumovsky, Alexander S.
1994-01-01
The model describing three coupled quantum oscillators with decay of the Rayleigh mode into the Stokes and vibration (phonon) modes is examined. Due to the Manley-Rowe relations, the problem of exact eigenvalues and eigenstates is reduced to the calculation of new orthogonal polynomials defined by both difference and differential equations. The quantum statistical properties are examined for the case when initially the Stokes mode is in the vacuum state, the Rayleigh mode is in a number state, and the vibration mode is in either a number state or a squeezed state. Collapses and revivals are obtained for different initial conditions, as well as the change in time from a sub-Poisson to a super-Poisson distribution and vice versa.
Allele-sharing models: LOD scores and accurate linkage tests.
Kong, A; Cox, N J
1997-11-01
Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.
Wallace, Michael P; Stewart, Catherine E; Moseley, Merrick J; Stephens, David A; Fielder, Alistair R
2016-12-01
To generate a statistical model for personalizing a patient's occlusion therapy regimen. Statistical modelling was undertaken on a combined data set of the Monitored Occlusion Treatment of Amblyopia Study (MOTAS) and the Randomized Occlusion Treatment of Amblyopia Study (ROTAS). This exercise permits the calculation of future patients' total effective dose (TED)-that predicted to achieve their best attainable visual acuity. Daily patching regimens (hours/day) can be calculated from the TED. Occlusion data for 149 study participants with amblyopia (anisometropic in 50, strabismic in 43, and mixed in 56) were analyzed. Median time to best observed visual acuity was 63 days (25% and 75% quartiles; 28 and 91 days). Median visual acuity in the amblyopic eye at start of occlusion was 0.40 logMAR (quartiles 0.22 and 0.68 logMAR) and at end of occlusion was 0.12 (quartiles 0.025 and 0.32 logMAR). Median lower and upper estimates of TED were 120 hours (quartiles 34 and 242 hours), and 176 hours (quartiles 84 and 316 hours). The data suggest a piecewise linear relationship (P = 0.008) between patching dose-rate (hours/day) and TED with a single breakpoint estimated at 2.16 (standard error 0.51) hours/day, suggesting doses below 2.16 hours/day are less effective. We introduce the concept of TED of occlusion. Predictors for TED are visual acuity deficit, amblyopia type, and age at start of occlusion therapy. Dose-rates prescribed within the model range from 2.5 to 12 hours/day and can be revised dynamically throughout treatment in response to recorded patient compliance: a personalized dosing strategy.
The MSFC Solar Activity Future Estimation (MSAFE) Model
NASA Technical Reports Server (NTRS)
Suggs, Ron
2017-01-01
The MSAFE model provides forecasts for the solar indices SSN, F10.7, and Ap. These solar indices are used as inputs to space environment models used in orbital spacecraft operations and space mission analysis. Forecasts from the MSAFE model are provided on the MSFC Natural Environments Branch's solar web page and are updated as new monthly observations become available. The MSAFE prediction routine employs a statistical technique that calculates deviations of past solar cycles from the mean cycle and performs a regression analysis to calculate the deviation from the mean cycle of the solar index at the next future time interval. The forecasts are initiated for a given cycle after about 8 to 9 monthly observations from the start of the cycle are collected. A forecast made at the beginning of cycle 24 using the MSAFE program captured the cycle fairly well with some difficulty in discerning the double peak that occurred at solar cycle maximum.
Shedge, Sapana V; Zhou, Xiuwen; Wesolowski, Tomasz A
2014-09-01
Recent applications of the Frozen-Density Embedding Theory based continuum model of the solvent, which is used for calculating solvatochromic shifts in the UV/Vis range, are reviewed. In this model, the solvent is represented as a non-uniform continuum that takes into account both the statistical nature of the solvent and specific solute-solvent interactions. It offers, therefore, a computationally attractive alternative to methods in which the solvent is described at the atomistic level. The evaluation of the solvatochromic shift involves only two calculations of excitation energy instead of at least the hundreds needed to account for inhomogeneous broadening. The present review provides a detailed graphical analysis of the key quantities of this model: the average charge density of the solvent (<ρB>) and the corresponding Frozen-Density Embedding Theory derived embedding potential for coumarin 153.
Applying Probabilistic Decision Models to Clinical Trial Design
Smith, Wade P; Phillips, Mark H
2018-01-01
Clinical trial design most often focuses on a single or several related outcomes with corresponding calculations of statistical power. We consider a clinical trial to be a decision problem, often with competing outcomes. Using a current controversy in the treatment of HPV-positive head and neck cancer, we apply several different probabilistic methods to help define the range of outcomes given different possible trial designs. Our model incorporates the uncertainties in the disease process and treatment response and the inhomogeneities in the patient population. Instead of expected utility, we have used a Markov model to calculate quality adjusted life expectancy as a maximization objective. Monte Carlo simulations over realistic ranges of parameters are used to explore different trial scenarios given the possible ranges of parameters. This modeling approach can be used to better inform the initial trial design so that it will more likely achieve clinical relevance. PMID:29888075
An oilspill trajectory analysis model with a variable wind deflection angle
Samuels, W.B.; Huang, N.E.; Amstutz, D.E.
1982-01-01
The oilspill trajectory movement algorithm consists of a vector sum of the surface drift component due to wind and the surface current component. In the U.S. Geological Survey oilspill trajectory analysis model, the surface drift component is assumed to be 3.5% of the wind speed and is rotated 20 degrees clockwise to account for Coriolis effects in the Northern Hemisphere. Field and laboratory data suggest, however, that the deflection angle of the surface drift current can be highly variable. An empirical formula, based on field observations and theoretical arguments relating wind speed to deflection angle, was used to calculate a new deflection angle at each time step in the model. Comparisons of oilspill contact probabilities to coastal areas calculated for constant and variable deflection angles showed that the model is insensitive to this changing angle at low wind speeds. At high wind speeds, some statistically significant differences in contact probabilities did appear. © 1982.
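A minimal sketch of the vector-sum trajectory step described above, assuming the constant 3.5% drift factor and 20-degree clockwise deflection; the paper's empirical wind-speed-dependent deflection formula is not reproduced, so the angle is passed as a parameter.

```python
# Minimal sketch (assumptions noted): one time step of an oil-spill trajectory as the
# vector sum of the surface current and a wind-driven drift taken as 3.5% of wind speed,
# rotated clockwise by a deflection angle (20 degrees in the constant-angle case).
import numpy as np

def drift_step(position, wind, current, dt, drift_factor=0.035, deflection_deg=20.0):
    """Advance a spill position (x, y in metres) by one time step dt (seconds)."""
    theta = np.radians(-deflection_deg)          # negative angle -> clockwise rotation
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    wind_drift = drift_factor * rot @ np.asarray(wind, dtype=float)
    velocity = wind_drift + np.asarray(current, dtype=float)
    return np.asarray(position, dtype=float) + velocity * dt

# Example: 10 m/s easterly wind, 0.2 m/s northward current, 1-hour step
print(drift_step([0.0, 0.0], wind=[10.0, 0.0], current=[0.0, 0.2], dt=3600.0))
```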
Nie, Z Q; Ou, Y Q; Zhuang, J; Qu, Y J; Mai, J Z; Chen, J M; Liu, X Q
2016-05-01
Conditional and unconditional logistic regression analyses are commonly used in case-control studies, whereas the Cox proportional hazards model is often used in survival data analysis. Most of the literature refers only to main-effect models; however, generalized linear models differ from general linear models, and interaction comprises both multiplicative and additive interaction. The former is only of statistical significance, whereas the latter has biological significance. In this paper, macros were written in SAS 9.4 to calculate the contrast ratio, the attributable proportion due to interaction, and the synergy index while computing the interaction terms of logistic and Cox regressions, and Wald, delta, and profile-likelihood confidence intervals were used to evaluate additive interaction, for reference in big-data analyses in clinical epidemiology and in analyses of genetic multiplicative and additive interactions.
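A minimal sketch of the point estimates of common additive-interaction measures from regression coefficients; this is not the SAS macro described above, the confidence-interval methods (Wald, delta, profile likelihood) are omitted, and the coefficient values are hypothetical.

```python
# Minimal sketch (not the SAS macro described): point estimates of the additive
# interaction measures RERI, attributable proportion (AP) and synergy index (S)
# from logistic/Cox coefficients b1 (exposure 1), b2 (exposure 2), b3 (product term).
import math

def additive_interaction(b1, b2, b3):
    or10 = math.exp(b1)            # exposure 1 only
    or01 = math.exp(b2)            # exposure 2 only
    or11 = math.exp(b1 + b2 + b3)  # both exposures
    reri = or11 - or10 - or01 + 1.0                    # relative excess risk due to interaction
    ap = reri / or11                                   # attributable proportion due to interaction
    s = (or11 - 1.0) / ((or10 - 1.0) + (or01 - 1.0))   # synergy index
    return reri, ap, s

print(additive_interaction(0.40, 0.26, 0.30))  # hypothetical coefficients
```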
NASA Technical Reports Server (NTRS)
Gyekenyesi, J. P.
1985-01-01
A computer program was developed for calculating the statistical fast fracture reliability and failure probability of ceramic components. The program includes the two-parameter Weibull material fracture strength distribution model, using the principle of independent action for polyaxial stress states and Batdorf's shear-sensitive as well as shear-insensitive crack theories, all for volume distributed flaws in macroscopically isotropic solids. Both penny-shaped cracks and Griffith cracks are included in the Batdorf shear-sensitive crack response calculations, using Griffith's maximum tensile stress or critical coplanar strain energy release rate criteria to predict mixed mode fracture. Weibull material parameters can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and fracture data. The reliability prediction analysis uses MSC/NASTRAN stress, temperature and volume output, obtained from the use of three-dimensional, quadratic, isoparametric, or axisymmetric finite elements. The statistical fast fracture theories employed, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.
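A minimal sketch of one ingredient of the program described above: least-squares estimation of the two-parameter Weibull strength distribution from rupture-test data. The strength values and the median-rank plotting positions are illustrative assumptions, not the program's exact formulation.

```python
# Minimal sketch: estimate two-parameter Weibull strength parameters (modulus m and
# characteristic strength sigma0) from rupture-strength data by the least-squares
# method on the linearized CDF. Strength values below are synthetic placeholders.
import numpy as np

strengths = np.sort(np.array([312., 334., 289., 350., 301., 322., 341., 295., 318., 330.]))  # MPa
n = strengths.size
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)          # median-rank failure probabilities

x = np.log(strengths)
y = np.log(-np.log(1.0 - F))
m, intercept = np.polyfit(x, y, 1)                   # slope = Weibull modulus m
sigma0 = np.exp(-intercept / m)                      # characteristic strength

print(f"Weibull modulus m = {m:.2f}, characteristic strength = {sigma0:.1f} MPa")
```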
Automated scoring of regional lung perfusion in children from contrast enhanced 3D MRI
NASA Astrophysics Data System (ADS)
Heimann, Tobias; Eichinger, Monika; Bauman, Grzegorz; Bischoff, Arved; Puderbach, Michael; Meinzer, Hans-Peter
2012-03-01
MRI perfusion images give information about regional lung function and can be used to detect pulmonary pathologies in cystic fibrosis (CF) children. However, manual assessment of the percentage of pathologic tissue in defined lung subvolumes features large inter- and intra-observer variation, making it difficult to determine disease progression consistently. We present an automated method to calculate a regional score for this purpose. First, lungs are located based on thresholding and morphological operations. Second, statistical shape models of left and right children's lungs are initialized at the determined locations and used to precisely segment morphological images. Segmentation results are transferred to perfusion maps and employed as masks to calculate perfusion statistics. An automated threshold to determine pathologic tissue is calculated and used to determine accurate regional scores. We evaluated the method on 10 MRI images and achieved an average surface distance of less than 1.5 mm compared to manual reference segmentations. Pathologic tissue was detected correctly in 9 cases. The approach seems suitable for detecting early signs of CF and monitoring response to therapy.
Extended optical model for fission
Sin, M.; Capote, R.; Herman, M. W.; ...
2016-03-07
A comprehensive formalism to calculate fission cross sections based on the extension of the optical model for fission is presented. It can be used for description of nuclear reactions on actinides featuring multi-humped fission barriers with partial absorption in the wells and direct transmission through discrete and continuum fission channels. The formalism describes the gross fluctuations observed in the fission probability due to vibrational resonances, and can be easily implemented in existing statistical reaction model codes. The extended optical model for fission is applied for neutron induced fission cross-section calculations on 234,235,238U and 239Pu targets. A triple-humped fission barrier is used for 234,235U(n,f), while a double-humped fission barrier is used for 238U(n,f) and 239Pu(n,f) reactions as predicted by theoretical barrier calculations. The impact of partial damping of class-II/III states, and of direct transmission through discrete and continuum fission channels, is shown to be critical for a proper description of the measured fission cross sections for 234,235,238U(n,f) reactions. The 239Pu(n,f) reaction can be calculated in the complete damping approximation. Calculated cross sections for 235,238U(n,f) and 239Pu(n,f) reactions agree within 3% with the corresponding cross sections derived within the Neutron Standards least-squares fit of available experimental data. Lastly, the extended optical model for fission can be used for both theoretical fission studies and nuclear data evaluation.
Zhao, Jing-Xin; Su, Xiu-Yun; Xiao, Ruo-Xiu; Zhao, Zhe; Zhang, Li-Hai; Zhang, Li-Cheng; Tang, Pei-Fu
2016-11-01
We established a mathematical method to precisely calculate the radiographic anteversion (RA) and radiographic inclination (RI) angles of the acetabular cup based on anterior-posterior (AP) pelvic radiographs after total hip arthroplasty. Using Mathematica software, a mathematical model for an oblique cone was established to simulate how AP pelvic radiographs are obtained and to address the relationship between the two-dimensional and three-dimensional geometry of the opening circle of the cup. In this model, the vertex was the X-ray beam source, and the generatrix was the ellipse in the radiograph projected from the opening circle of the acetabular cup. Using this model, we established a series of mathematical formulas to reveal the differences between the true RA and RI cup angles and the measurement results obtained using traditional methods with AP pelvic radiographs, and to precisely calculate the RA and RI cup angles based on post-operative AP pelvic radiographs. Statistical analysis indicated that traditional measurement methods should be used with caution when calculating the RA and RI cup angles from AP pelvic radiographs. The entire calculation process can be performed by an orthopedic surgeon with mathematical knowledge of basic matrix and vector equations. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
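For contrast, a minimal sketch of the traditional planar approximations that the authors caution about (anteversion taken from the ellipse axis ratio, inclination from the long-axis tilt), not the oblique-cone correction developed in the paper; the measurement values are hypothetical.

```python
# Minimal sketch of the traditional planar approximations (not the authors' oblique-cone
# model): radiographic anteversion from the projected-ellipse axis ratio and inclination
# from the long-axis tilt relative to the transverse reference line.
import math

def traditional_cup_angles(long_axis_mm, short_axis_mm, long_axis_tilt_deg):
    ra = math.degrees(math.asin(short_axis_mm / long_axis_mm))  # Lewinnek-style anteversion
    ri = long_axis_tilt_deg                                     # long-axis angle to the transverse line
    return ra, ri

print(traditional_cup_angles(50.0, 18.0, 42.0))   # hypothetical measurements
```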
The Triangle Technique: a new evidence-based educational tool for pediatric medication calculations.
Sredl, Darlene
2006-01-01
Many nursing students verbalize an aversion to mathematical concepts and experience math anxiety whenever a mathematical problem is confronted. Since nurses confront mathematical problems on a daily basis, they must learn to feel comfortable with their ability to perform these calculations correctly. The Triangle Technique, a new educational tool available to nurse educators, incorporates evidence-based concepts within a graphic model using visual, auditory, and kinesthetic learning styles to demonstrate pediatric medication calculations of normal therapeutic ranges. The theoretical framework for the technique is presented, as is a pilot study examining the efficacy of the educational tool. Statistically significant results obtained by Pearson's product-moment correlation indicate that students are better able to calculate accurate pediatric therapeutic dosage ranges after participating in the educational intervention of learning the Triangle Technique.
3D QSAR models built on structure-based alignments of Abl tyrosine kinase inhibitors.
Falchi, Federico; Manetti, Fabrizio; Carraro, Fabio; Naldini, Antonella; Maga, Giovanni; Crespan, Emmanuele; Schenone, Silvia; Bruno, Olga; Brullo, Chiara; Botta, Maurizio
2009-06-01
Quality QSAR: A combination of docking calculations and a statistical approach toward Abl inhibitors resulted in a 3D QSAR model, the analysis of which led to the identification of ligand portions important for affinity. New compounds designed on the basis of the model were found to have very good affinity for the target, providing further validation of the model itself. The X-ray crystallographic coordinates of the Abl tyrosine kinase domain in its active, inactive, and Src-like inactive conformations were used as targets to simulate the binding mode of a large series of pyrazolo[3,4-d]pyrimidines (known Abl inhibitors) by means of GOLD software. Receptor-based alignments provided by molecular docking calculations were submitted to a GRID-GOLPE protocol to generate 3D QSAR models. Analysis of the results showed that the models based on the inactive and Src-like inactive conformations had very poor statistical parameters, whereas the sole model based on the active conformation of Abl was characterized by significant internal and external predictive ability. Subsequent analysis of GOLPE PLS pseudo-coefficient contour plots of this model gave us a better understanding of the relationships between structure and affinity, providing suggestions for the next optimization process. On the basis of these results, new compounds were designed according to the hydrophobic and hydrogen bond donor and acceptor contours, and were found to have improved enzymatic and cellular activity with respect to parent compounds. Additional biological assays confirmed the important role of the selected compounds as inhibitors of cell proliferation in leukemia cells.
Hill, Mary Catherine
1992-01-01
This report documents a new version of the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model (MODFLOW) which, with the new Parameter-Estimation Package that also is documented in this report, can be used to estimate parameters by nonlinear regression. The new version of MODFLOW is called MODFLOWP (pronounced MODFLOW-P), and functions nearly identically to MODFLOW when the Parameter-Estimation Package is not used. Parameters are estimated by minimizing a weighted least-squares objective function by the modified Gauss-Newton method or by a conjugate-direction method. Parameters used to calculate the following MODFLOW model inputs can be estimated: transmissivity and storage coefficient of confined layers; hydraulic conductivity and specific yield of unconfined layers; vertical leakance; vertical anisotropy (used to calculate vertical leakance); horizontal anisotropy; hydraulic conductance of the River, Streamflow-Routing, General-Head Boundary, and Drain Packages; areal recharge rates; maximum evapotranspiration; pumpage rates; and the hydraulic head at constant-head boundaries. Any spatial variation in parameters can be defined by the user. Data used to estimate parameters can include existing independent estimates of parameter values, observed hydraulic heads or temporal changes in hydraulic heads, and observed gains and losses along head-dependent boundaries (such as streams). Model output includes statistics for analyzing the parameter estimates and the model; these statistics can be used to quantify the reliability of the resulting model, to suggest changes in model construction, and to compare results of models constructed in different ways.
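A minimal sketch of the kind of weighted Gauss-Newton iteration referred to above, applied to a toy forward model; this is not MODFLOWP code, and the finite-difference sensitivities, step control, and convergence test are simplifications.

```python
# Minimal sketch (generic, not MODFLOWP's implementation): a Gauss-Newton iteration
# that minimizes a weighted least-squares objective sum_i w_i * (obs_i - sim_i(p))**2.
# 'simulate' stands in for a forward model returning simulated equivalents of the
# observations; here it is a toy two-parameter exponential.
import numpy as np

def gauss_newton(simulate, p0, obs, weights, n_iter=20, fd_step=1e-6):
    p = np.asarray(p0, dtype=float)
    W = np.diag(weights)
    for _ in range(n_iter):
        sim = simulate(p)
        r = obs - sim
        # Finite-difference sensitivities (Jacobian of simulated values w.r.t. parameters)
        J = np.column_stack([(simulate(p + fd_step * e) - sim) / fd_step
                             for e in np.eye(p.size)])
        dp = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        p = p + dp
        if np.max(np.abs(dp)) < 1e-10:
            break
    return p

t = np.linspace(0.0, 10.0, 12)
model = lambda p: p[0] * np.exp(-p[1] * t)
obs = model(np.array([2.0, 0.3]))
print(gauss_newton(model, [1.0, 0.1], obs, np.ones_like(t)))
```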
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, R; Bai, W
Purpose: Because of statistical noise in Monte Carlo dose calculations, effective point doses may not be accurate. Volume spheres are useful for evaluating dose in Monte Carlo plans, which have an inherent statistical uncertainty. We use a user-defined sphere volume instead of a point, sampling a sphere around the effective point and computing dose statistics to decrease the stochastic errors. Methods: Direct dose measurements were made using a 0.125 cc Semiflex ion chamber (IC) 31010 isocentrically placed in the center of a homogeneous cylindrical sliced RW3 phantom (PTW, Germany). In the scanned CT phantom series, the sensitive volume length of the IC (6.5 mm) was delineated and the isocenter was defined as the simulation effective point. All beams were simulated in Monaco in accordance with the measured model. In our simulation we used a 2 mm voxel calculation grid spacing, calculated dose to medium, and requested a relative standard deviation ≤0.5%. Three different assigned IC override densities (air electron density (ED) of 0.01 g/cm3, the default CT-scanned ED, and an esophageal-lumen ED of 0.21 g/cm3) were tested at different sampling sphere radii (2.5, 2, 1.5 and 1 mm), and the resulting dose statistics were compared with the measured dose. Results: The results show that in the Monaco TPS, for the IC with an esophageal-lumen ED of 0.21 g/cm3 and a sampling sphere radius of 1.5 mm, the statistical value is in best accordance with the measured value; the absolute average percentage deviation is 0.49%. When the IC uses the air ED of 0.01 g/cm3 or the default CT-scanned ED, the recommended statistical sampling sphere radius is 2.5 mm, with percentage deviations of 0.61% and 0.70%, respectively. Conclusion: In the Monaco treatment planning system, for the ionization chamber 31010 we recommend overriding the air cavity with an ED of 0.21 g/cm3 and sampling a 1.5 mm sphere volume instead of a point dose to decrease the stochastic errors. Funding Support No. C201505006.
Computationally Efficient Multiconfigurational Reactive Molecular Dynamics
Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.
2012-01-01
It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions, also sometimes referred to as “multistate”. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924
Alchemical prediction of hydration free energies for SAMPL
Mobley, David L.; Liu, Shaui; Cerutti, David S.; Swope, William C.; Rice, Julia E.
2013-01-01
Hydration free energy calculations have become important tests of force fields. Alchemical free energy calculations based on molecular dynamics simulations provide a rigorous way to calculate these free energies for a particular force field, given sufficient sampling. Here, we report results of alchemical hydration free energy calculations for the set of small molecules comprising the 2011 Statistical Assessment of Modeling of Proteins and Ligands (SAMPL) challenge. Our calculations are largely based on the Generalized Amber Force Field (GAFF) with several different charge models, and we achieved RMS errors in the 1.4-2.2 kcal/mol range depending on charge model, marginally higher than what we typically observed in previous studies [1-5]. The test set consists of ethane, biphenyl, and a dibenzyl dioxin, as well as a series of chlorinated derivatives of each. We found that, for this set, using high-quality partial charges from MP2/cc-PVTZ SCRF RESP fits provided marginally improved agreement with experiment over using AM1-BCC partial charges as we have more typically done, in keeping with our recent findings [5]. Switching to OPLS Lennard-Jones parameters with AM1-BCC charges also improves agreement with experiment. We also find a number of chemical trends within each molecular series which we can explain, but there are also some surprises, including some that are captured by the calculations and some that are not. PMID:22198475
Cardot, J-M; Roudier, B; Schütz, H
2017-07-01
The f2 test is generally used for comparing dissolution profiles. In cases of high variability, the f2 test is not applicable, and the Multivariate Statistical Distance (MSD) test is frequently proposed as an alternative by the FDA and EMA. The guidelines provide only general recommendations. MSD tests can be performed either on raw data, with or without time as a variable, or on parameters of models. In addition, data can be limited, as in the case of the f2 test, to dissolutions of up to 85%, or all available data can be used. In the context of the present paper, the recommended calculation included all raw dissolution data up to the first point greater than 85% as a variable, without the various times as parameters. The proposed MSD overcomes several drawbacks found in other methods.
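A minimal sketch of the standard f2 similarity factor mentioned above, computed on hypothetical profiles; the MSD procedure itself is not implemented here.

```python
# Minimal sketch: the f2 similarity factor for two mean dissolution profiles sampled
# at the same time points (percent dissolved). Values >= 50 are conventionally read
# as similar; the MSD alternative discussed in the paper is not implemented here.
import numpy as np

def f2(reference, test):
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    msd = np.mean((reference - test) ** 2)            # mean squared difference
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

ref = [15, 32, 55, 74, 88, 95]   # hypothetical % dissolved
tst = [12, 29, 50, 70, 85, 93]
print(round(f2(ref, tst), 1))
```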
PDF-based heterogeneous multiscale filtration model.
Gong, Jian; Rutland, Christopher J
2015-04-21
Motivated by modeling of gasoline particulate filters (GPFs), a probability density function (PDF) based heterogeneous multiscale filtration (HMF) model is developed to calculate filtration efficiency of clean particulate filters. A new methodology based on statistical theory and classic filtration theory is developed in the HMF model. Based on the analysis of experimental porosimetry data, a pore size probability density function is introduced to represent heterogeneity and multiscale characteristics of the porous wall. The filtration efficiency of a filter can be calculated as the sum of the contributions of individual collectors. The resulting HMF model overcomes the limitations of classic mean filtration models which rely on tuning of the mean collector size. Sensitivity analysis shows that the HMF model recovers the classical mean model when the pore size variance is very small. The HMF model is validated by fundamental filtration experimental data from different scales of filter samples. The model shows a good agreement with experimental data at various operating conditions. The effects of the microstructure of filters on filtration efficiency as well as the most penetrating particle size are correctly predicted by the model.
Implementation and evaluation of a comprehensive emission model for Europe
NASA Astrophysics Data System (ADS)
Bieser, Johannes; Aulinger, Armin; Matthias, Volker; Quante, Markus
2010-05-01
Crucial input data sets for Chemical Transport Models (CTM) are the meteorological fields and the emissions data. While there are several publicly available meteorological models, the situation for European emission models is different. European emissions data either lack spatial and temporal resolution, cover only specific countries, or are proprietary and not free to use. In this work the US EPA emission model SMOKE (Sparse Matrix Operator Kernel Emissions) has been successfully adapted and partially extended to create European emissions input for CTMs. The modified version of the SMOKE emission model (SMOKE/E) uses official and publicly available data sets and statistics to create emissions of CO, NOx, SO2, NH3, PM2.5, PM10, and NMVOC. Currently it supports VOC splits for several photochemical mechanisms, namely CB4, CB5 and RADM2. PM2.5 is split into elemental carbon, organic carbon, sulfate, nitrate and other particles. Additionally, emissions of benzo[a]pyrene (BaP) have been modelled with SMOKE Europe. The temporal resolution of the emissions is one hour, and the horizontal resolution is up to 1x1 km². SMOKE/E also implements plume-in-grid calculations for the vertical distribution of point sources. The vertical resolution is infinitely variable and is implemented in the form of pressure levels. The area covered by the emission model at this point is Europe and its surrounding countries, including north Africa and parts of Asia. Thus far SMOKE Europe has been used to create European emissions on a 54x54 km² grid covering the whole of Europe and an 18x18 km² nested grid over the North and Baltic Seas for the years 1990-2006. The currently implemented datasets allow for the calculation of emissions between 1970 and 2010. In addition, future emission scenarios for the timespan 2010-2020 are being calculated using the EMEP projections. The created emissions have been statistically compared to the gridded EMEP emissions as well as to data from other emission models for the base years 2000 and 2001. The 54x54 km² emissions data for 2000 were used as input for the CMAQ4.6 CTM, and the calculated air concentrations were compared to EMEP measurements in Europe. Statistical comparison of ozone and three particulate species (NH4, NO3, SO4) showed that SMOKE/E performs very well under the tested circumstances. O3: (NMB 0.71) (SD 0.68) (F2 0.83) (CORR 0.55) using 48 stations (hourly); NH4: (NMB 0.25) (SD 1.01) (F2 0.55) (CORR 0.53) using 8 stations (daily); NO3: (NMB 0.42) (SD 0.60) (F2 0.40) (CORR 0.45) using 7 stations (daily); SO4: (NMB 0.34) (SD 0.84) (F2 0.65) (CORR 0.55) using 21 stations (daily). Abbreviations: Normalized Mean Bias (NMB), Standard Deviation (SD), Factor of 2 (F2), Correlation (CORR). The calculated air concentrations were also compared with CMAQ runs using two purchased emissions datasets. It could be shown that the modified SMOKE model produces results comparable to those of commonly used European emissions data sets. For the future it is planned to implement emissions of heavy metals and polyfluorinated compounds into the SMOKE Europe model.
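A minimal sketch of the evaluation statistics quoted above for paired modelled and observed concentrations; the exact SD definition used in the study is not stated, so the standard-deviation ratio below is an assumption, and the data values are placeholders.

```python
# Minimal sketch: evaluation statistics of the kind quoted above (normalized mean bias,
# factor-of-2 fraction, correlation) for paired modelled and observed concentrations.
# The standard-deviation ratio is included as a stand-in for the SD column.
import numpy as np

def evaluation_stats(model, obs):
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    nmb = (model - obs).sum() / obs.sum()                      # normalized mean bias
    f2 = np.mean((model >= 0.5 * obs) & (model <= 2.0 * obs))  # fraction within a factor of 2
    corr = np.corrcoef(model, obs)[0, 1]                       # Pearson correlation
    sd_ratio = model.std() / obs.std()
    return nmb, sd_ratio, f2, corr

mod = [40.0, 55.0, 61.0, 30.0, 72.0]   # hypothetical modelled O3 concentrations
meas = [38.0, 60.0, 50.0, 33.0, 70.0]  # hypothetical observations
print(evaluation_stats(mod, meas))
```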
Daikoku, Tatsuya
2018-01-01
Learning and knowledge of transitional probability in sequences like music, called statistical learning and knowledge, are considered implicit processes that occur without intention to learn and without awareness of what one knows. This implicit statistical knowledge can alternatively be expressed via an abstract medium such as musical melody, which suggests that this knowledge is reflected in melodies written by a composer. This study investigates how statistics in music vary over a composer's lifetime. Transitional probabilities of highest-pitch sequences in Ludwig van Beethoven's piano sonatas were calculated based on different hierarchical Markov models. Each interval pattern was ordered based on the sonata opus number. The transitional probabilities of sequential patterns that are universal in music gradually decreased, suggesting that time-course variations of statistics in music reflect time-course variations of a composer's statistical knowledge. This study sheds new light on novel methodologies that may be able to evaluate the time-course variation of a composer's implicit knowledge using musical scores.
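A minimal sketch of first-order transitional probabilities estimated from a toy pitch sequence; the study's higher-order hierarchical Markov models condition on longer contexts, which is not shown here.

```python
# Minimal sketch: first-order transitional probabilities for a pitch sequence,
# i.e. P(next | current) estimated from bigram counts. Higher-order Markov models,
# as used in the study, would condition on longer contexts.
from collections import Counter, defaultdict

def transition_probabilities(sequence):
    pair_counts = Counter(zip(sequence[:-1], sequence[1:]))
    context_counts = Counter(sequence[:-1])
    probs = defaultdict(dict)
    for (a, b), c in pair_counts.items():
        probs[a][b] = c / context_counts[a]
    return dict(probs)

melody = ["C", "D", "E", "C", "D", "G", "E", "C"]   # hypothetical highest-pitch sequence
print(transition_probabilities(melody))
```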
CMB seen through random Swiss Cheese
NASA Astrophysics Data System (ADS)
Lavinto, Mikko; Räsänen, Syksy
2015-10-01
We consider a Swiss Cheese model with a random arrangement of Lemaȋtre-Tolman-Bondi holes in ΛCDM cheese. We study two kinds of holes with radius r_b = 50 h^-1 Mpc, with either an underdense or an overdense centre, called the open and closed case, respectively. We calculate the effect of the holes on the temperature, angular diameter distance and, for the first time in Swiss Cheese models, shear of the CMB. We quantify the systematic shift of the mean and the statistical scatter, and calculate the power spectra. In the open case, the temperature power spectrum is three orders of magnitude below the linear ISW spectrum. It is sensitive to the details of the hole; in the closed case the amplitude is two orders of magnitude smaller. In contrast, the power spectra of the distance and shear are more robust, and agree with perturbation theory and previous Swiss Cheese results. We do not find a statistically significant mean shift in the sky average of the angular diameter distance, and obtain the 95% limit |ΔD_A / D̄_A| ≲ 10^-4. We consider the argument that areas of spherical surfaces are nearly unaffected by perturbations, which is often invoked in light propagation calculations. The closed case is consistent with this at 1σ, whereas in the open case the probability is only 1.4%.
40 CFR 1048.510 - What transient duty cycles apply for laboratory testing?
Code of Federal Regulations, 2013 CFR
2013-07-01
... model year, measure emissions by testing the engine on a dynamometer with the duty cycle described in Appendix II to determine whether it meets the transient emission standards in § 1048.101(a). (b) Calculate cycle statistics and compare with the established criteria as specified in 40 CFR 1065.514 to confirm...
40 CFR 1048.510 - What transient duty cycles apply for laboratory testing?
Code of Federal Regulations, 2011 CFR
2011-07-01
... model year, measure emissions by testing the engine on a dynamometer with the duty cycle described in Appendix II to determine whether it meets the transient emission standards in § 1048.101(a). (b) Calculate cycle statistics and compare with the established criteria as specified in 40 CFR 1065.514 to confirm...
40 CFR 1048.510 - What transient duty cycles apply for laboratory testing?
Code of Federal Regulations, 2014 CFR
2014-07-01
... model year, measure emissions by testing the engine on a dynamometer with the duty cycle described in Appendix II to determine whether it meets the transient emission standards in § 1048.101(a). (b) Calculate cycle statistics and compare with the established criteria as specified in 40 CFR 1065.514 to confirm...
40 CFR 1048.510 - What transient duty cycles apply for laboratory testing?
Code of Federal Regulations, 2012 CFR
2012-07-01
... model year, measure emissions by testing the engine on a dynamometer with the duty cycle described in Appendix II to determine whether it meets the transient emission standards in § 1048.101(a). (b) Calculate cycle statistics and compare with the established criteria as specified in 40 CFR 1065.514 to confirm...
A Comparison of Statistical Models for Calculating Reliability of the Hoffmann Reflex
ERIC Educational Resources Information Center
Christie, A.; Kamen, G.; Boucher, Jean P.; Inglis, J. Greig; Gabriel, David A.
2010-01-01
The Hoffmann reflex is obtained through surface electromyographic recordings, and it is one of the most common neurophysiological techniques in exercise science. Measurement and evaluation of the peak-to-peak amplitude of the Hoffmann reflex has been guided by the observation that it is a variable response that requires multiple trials to obtain a…
Ricker, Martin; Peña Ramírez, Víctor M; von Rosen, Dietrich
2014-01-01
Growth curves are monotonically increasing functions that measure repeatedly the same subjects over time. The classical growth curve model in the statistical literature is the Generalized Multivariate Analysis of Variance (GMANOVA) model. In order to model the tree trunk radius (r) over time (t) of trees on different sites, GMANOVA is combined here with the adapted PL regression model Q = A · T+E, where for b ≠ 0 : Q = Ei[-b · r]-Ei[-b · r1] and for b = 0 : Q = Ln[r/r1], A = initial relative growth to be estimated, T = t-t1, and E is an error term for each tree and time point. Furthermore, Ei[-b · r] = ∫(Exp[-b · r]/r)dr, b = -1/TPR, with TPR being the turning point radius in a sigmoid curve, and r1 at t1 is an estimated calibrating time-radius point. Advantages of the approach are that growth rates can be compared among growth curves with different turning point radiuses and different starting points, hidden outliers are easily detectable, the method is statistically robust, and heteroscedasticity of the residuals among time points is allowed. The model was implemented with dendrochronological data of 235 Pinus montezumae trees on ten Mexican volcano sites to calculate comparison intervals for the estimated initial relative growth A. One site (at the Popocatépetl volcano) stood out, with A being 3.9 times the value of the site with the slowest-growing trees. Calculating variance components for the initial relative growth, 34% of the growth variation was found among sites, 31% among trees, and 35% over time. Without the Popocatépetl site, the numbers changed to 7%, 42%, and 51%. Further explanation of differences in growth would need to focus on factors that vary within sites and over time.
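A minimal sketch of the Q transformation described above and a through-origin least-squares estimate of the initial relative growth A, using SciPy's exponential integral; the ring-width series and turning-point radius below are synthetic placeholders.

```python
# Minimal sketch of the transformation described above: Q is computed from trunk
# radius with the exponential integral Ei, and the initial relative growth A is then
# the slope of a through-origin regression of Q on T = t - t1. Data are synthetic.
import numpy as np
from scipy.special import expi

def q_transform(r, r1, b):
    if b == 0.0:
        return np.log(r / r1)
    return expi(-b * r) - expi(-b * r1)

# Hypothetical ring-width series: years since the calibration point and radii (cm)
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
r = np.array([1.0, 1.8, 2.9, 4.1, 5.6, 7.0])
b = -1.0 / 6.0                      # b = -1/TPR with an assumed turning-point radius of 6 cm
Q = q_transform(r, r[0], b)
T = t - t[0]
A = np.sum(Q * T) / np.sum(T * T)   # least-squares slope through the origin
print("estimated initial relative growth A:", A)
```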
Search for ternary fission of chromium-48
NASA Astrophysics Data System (ADS)
Dummer, Andrew K.
1999-07-01
Both alpha cluster model calculations and macroscopic energy calculations that allow for a double-neck shape of the compound nucleus suggest the possibility of a novel three-16O, chain-like configuration in 48Cr. Such a configuration might lead to an enhanced cross section for three-16O breakup. To explore this possibility, the three-body exit channels for the 36Ar + 12C reaction at a beam energy of 210 MeV have been studied. The cross section for 16O + 16O + 16O breakup has been deduced and has been found to be in excess of what would be expected to result from a sequential binary fission process. However, the observation of a similarly enhanced 12C + 16O + 20Ne breakup cross section suggests that the observed 16O + 16O + 16O yields might still be associated with a statistical fission process. The results are discussed in the context of the fission of light nuclear systems and a simple cluster model calculation. This latter, "Harvey model" calculation suggests a possible inhibition of the formation of a three-16O chain configuration from the 36Ar + 12C entrance channel. A further measurement using the 20Ne + 28Si entrance channel is suggested.
McMullan, Miriam; Jones, Ray; Lea, Susan
2010-04-01
This paper is a report of a correlational study of the relations of age, status, experience and drug calculation ability to numerical ability of nursing students and Registered Nurses. Competent numerical and drug calculation skills are essential for nurses as mistakes can put patients' lives at risk. A cross-sectional study was carried out in 2006 in one United Kingdom university. Validated numerical and drug calculation tests were given to 229 second year nursing students and 44 Registered Nurses attending a non-medical prescribing programme. The numeracy test was failed by 55% of students and 45% of Registered Nurses, while 92% of students and 89% of nurses failed the drug calculation test. Independent of status or experience, older participants (≥35 years) were statistically significantly more able to perform numerical calculations. There was no statistically significant difference between nursing students and Registered Nurses in their overall drug calculation ability, but nurses were statistically significantly more able than students to perform basic numerical calculations and calculations for solids, oral liquids and injections. Both nursing students and Registered Nurses were statistically significantly more able to perform calculations for solids, liquid oral and injections than calculations for drug percentages, drip and infusion rates. To prevent deskilling, Registered Nurses should continue to practise and refresh all the different types of drug calculations as often as possible with regular (self)-testing of their ability. Time should be set aside in curricula for nursing students to learn how to perform basic numerical and drug calculations. This learning should be reinforced through regular practice and assessment.
Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers
NASA Technical Reports Server (NTRS)
Patera, Anthony T.; Patera, Anthony
1993-01-01
Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is evoked only to construct and validate a simplified, input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers are considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.
SurfKin: an ab initio kinetic code for modeling surface reactions.
Le, Thong Nguyen-Minh; Liu, Bin; Huynh, Lam K
2014-10-05
In this article, we describe a C/C++ program called SurfKin (Surface Kinetics) to construct microkinetic mechanisms for modeling gas-surface reactions. Thermodynamic properties of reaction species are estimated based on density functional theory calculations and statistical mechanics. Rate constants for elementary steps (including adsorption, desorption, and chemical reactions on surfaces) are calculated using the classical collision theory and transition state theory. Methane decomposition and water-gas shift reaction on Ni(111) surface were chosen as test cases to validate the code implementations. The good agreement with literature data suggests this is a powerful tool to facilitate the analysis of complex reactions on surfaces, and thus it helps to effectively construct detailed microkinetic mechanisms for such surface reactions. SurfKin also opens a possibility for designing nanoscale model catalysts. Copyright © 2014 Wiley Periodicals, Inc.
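A minimal sketch of the generic transition-state-theory (Eyring) rate expression underlying the kind of rate-constant evaluation described above; SurfKin builds the free-energy barrier from DFT energies and statistical-mechanics partition functions, which is not reproduced here, and the temperature and barrier height are hypothetical.

```python
# Minimal sketch: the generic transition-state-theory (Eyring) rate constant
# k(T) = (kB*T/h) * exp(-dG_act / (R*T)). The barrier here is a placeholder value,
# not a DFT result.
import math

KB = 1.380649e-23     # J/K
H = 6.62607015e-34    # J*s
R = 8.314462618       # J/(mol*K)

def tst_rate(T, dG_act_kJ_per_mol):
    """First-order rate constant (1/s) for an activation free energy in kJ/mol."""
    return (KB * T / H) * math.exp(-dG_act_kJ_per_mol * 1e3 / (R * T))

print(f"{tst_rate(800.0, 95.0):.3e} s^-1")   # hypothetical barrier at 800 K
```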
Arctic Acoustic Workshop Proceedings, 14-15 February 1989.
1989-06-01
measurements. The measurements reported by Levine et al. (1987) were taken from current and temperature sensors moored in two triangular grids. The internal ... requires a resampling of the data series on a uniform depth-time grid. Statistics calculated from the resampled series will be used to test numerical ... from an isolated keel. [Figure 2 caption: 2-D Modeling Geometry - The model is based on a 2-D Cartesian grid with an axis of symmetry on the left. A pulsed ...]
K-shell Photoionization of Na-like to Cl-like Ions of Mg, Si, S, Ar, and Ca
NASA Technical Reports Server (NTRS)
Witthoeft, M. C.; Garcia, J.; Kallman, T. R.; Bautista, M. A.; Mendoza, C.; Palmeri, P.; Quinet, P.
2010-01-01
We present R-matrix calculations of photoabsorption and photoionization cross sections across the K edge of Mg, Si, S, Ar, and Ca ions with more than 10 electrons. The calculations include the effects of radiative and Auger damping by means of an optical potential. The wave functions are constructed from single-electron orbital bases obtained using a Thomas-Fermi-Dirac statistical model potential. Configuration interaction is considered among all states up to n = 3. The damping processes affect the resonances converging to the K thresholds, causing them to display symmetric profiles of constant width that smear the otherwise sharp edge at the photoionization threshold. These data are important for the modeling of features found in photoionized plasmas.
What can 35 years and over 700,000 measurements tell us about noise exposure in the mining industry?
Roberts, Benjamin; Sun, Kan; Neitzel, Richard L
2017-01-01
To analyse over 700,000 cross-sectional measurements from the Mine Safety and Health Administration (MSHA) and develop statistical models to predict noise exposure for a worker. Descriptive statistics were used to summarise the data. Two linear regression models were used to predict noise exposure based on the MSHA permissible exposure limit (PEL) and action level (AL), respectively. Twofold cross-validation was used to compare the exposure estimates from the models to actual measurements. The mean difference and t-statistic were calculated for each job title to determine whether the model predictions were significantly different from the actual data. Measurements were acquired from MSHA through a Freedom of Information Act request. From 1979 to 2014, noise exposure has decreased. Measurements taken before the implementation of MSHA's revised noise regulation in 2000 were on average 4.5 dBA higher than after the law was implemented. Both models produced exposure predictions that were less than 1 dBA different from the holdout data. Overall noise levels in mines have been decreasing. However, this decrease has not been uniform across all mining sectors. The exposure predictions from the model will be useful to help predict hearing loss in workers in the mining industry.
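A minimal sketch of a linear regression with twofold cross-validation in scikit-learn, mirroring the validation scheme described above; the design matrix, coefficients, and noise level are placeholders, not the MSHA data.

```python
# Minimal sketch (hypothetical predictors and data, not the MSHA measurements):
# a linear regression for noise exposure with twofold cross-validation, reporting the
# mean prediction difference on each holdout fold, as in the study.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
# Placeholder design matrix (e.g. encoded year, sector, job title) and dose (% of PEL)
X = rng.normal(size=(200, 3))
y = 85.0 + X @ np.array([-0.8, 2.5, 1.2]) + rng.normal(0.0, 3.0, 200)

differences = []
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    fit = LinearRegression().fit(X[train], y[train])
    differences.append(np.mean(fit.predict(X[test]) - y[test]))  # mean difference per fold
print("mean prediction differences per fold:", differences)
```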
Geometrical modeling of optical phase difference for analyzing atmospheric turbulence
NASA Astrophysics Data System (ADS)
Yuksel, Demet; Yuksel, Heba
2013-09-01
Ways of calculating phase shifts between laser beams propagating through atmospheric turbulence can give us insight into the role of spatial diversity in Free-Space Optical (FSO) links. We propose a new geometrical model to estimate phase shifts between rays as the laser beam propagates through a simulated turbulent medium. Turbulence is simulated by filling the propagation path with spherical bubbles of varying sizes and refractive index discontinuities statistically distributed according to various models. The level of turbulence is increased by elongating the range and/or increasing the number of bubbles that the rays interact with along their path. For each statistical representation of the atmosphere, the trajectories of two parallel rays separated by a particular distance are analyzed and computed simultaneously using geometrical optics. The three-dimensional geometry of the spheres is taken into account in the propagation of the rays. The bubble model is used to calculate the correlation between the two rays as their separation distance changes. The total distance traveled by each ray as both rays travel to the target is computed. The difference in the path lengths traveled yields the phase difference between the rays. The mean square phase difference is taken to be the phase structure function, which in the literature, for a pair of collimated, parallel, pencil-thin rays, obeys a five-thirds law assuming weak turbulence. All simulation results will be compared with the predictions of wave theory.
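A minimal sketch of the final step described above: converting a path-length difference between two parallel rays into a phase difference and estimating the phase structure function as the mean square phase difference over many realizations. The wavelength and the path-difference statistics are assumed values, not outputs of the bubble model.

```python
# Minimal sketch: path-length difference -> phase difference, and the phase structure
# function as the mean square phase difference over many random realizations.
# The per-realization path differences here are synthetic stand-ins for the bubble model.
import numpy as np

wavelength = 1.55e-6                      # metres (typical FSO wavelength, assumed)
rng = np.random.default_rng(42)
delta_L = rng.normal(0.0, 0.2e-6, 5000)  # per-realization optical path differences (m)

delta_phi = 2.0 * np.pi * delta_L / wavelength
D_phi = np.mean(delta_phi ** 2)          # phase structure function at this ray separation
print("phase structure function estimate:", D_phi)
```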
Low-lying dipole strength of the open-shell nucleus 94Mo
NASA Astrophysics Data System (ADS)
Romig, C.; Beller, J.; Glorius, J.; Isaak, J.; Kelley, J. H.; Kwan, E.; Pietralla, N.; Ponomarev, V. Yu.; Sauerwein, A.; Savran, D.; Scheck, M.; Schnorrenberger, L.; Sonnabend, K.; Tonchev, A. P.; Tornow, W.; Weller, H. R.; Zilges, A.; Zweidinger, M.
2013-10-01
The low-lying dipole strength of the open-shell nucleus 94Mo was studied via the nuclear resonance fluorescence technique up to 8.7 MeV excitation energy at the bremsstrahlung facility at the Superconducting Darmstadt Electron Linear Accelerator (S-DALINAC), and with Compton backscattered photons at the High Intensity γ-ray Source (HIγS) facility. In total, 83 excited states were identified. Exploiting polarized quasi-monoenergetic photons at HIγS, parity quantum numbers were assigned to 41 states excited by dipole transitions. The electric dipole-strength distribution was determined up to 8.7 MeV and compared to microscopic calculations within the quasiparticle phonon model. Calculations and experimental data are in good agreement for the fragmentation, as well as for the integrated strength. The average decay pattern of the excited states was investigated exploiting the HIγS measurements at five energy settings. Mean branching ratios to the ground state and first excited 21+ state were extracted from the measurements with quasi-monoenergetic photons and compared to γ-cascade simulations within the statistical model. The experimentally deduced mean branching ratios exhibit a resonance-like maximum at 6.4 MeV which cannot be reproduced within the statistical model. This indicates a nonstatistical structure in the energy range between 5.5 and 7.5 MeV.
Quality assessment of butter cookies applying multispectral imaging
Andresen, Mette S; Dissing, Bjørn S; Løje, Hanne
2013-01-01
A method for characterization of butter cookie quality by assessing the surface browning and water content using multispectral images is presented. Based on evaluations of the browning of butter cookies, cookies were manually divided into groups. From this categorization, reference values were calculated for a statistical prediction model correlating multispectral images with a browning score. The browning score is calculated as a function of oven temperature and baking time. It is presented as a quadratic response surface. The investigated process window was the intervals 4–16 min and 160–200°C in a forced convection electrically heated oven. In addition to the browning score, a model for predicting the average water content based on the same images is presented. This shows how multispectral images of butter cookies may be used for the assessment of different quality parameters. Statistical analysis showed that the most significant wavelengths for browning predictions were in the interval 400–700 nm and the wavelengths significant for water prediction were primarily located in the near-infrared spectrum. The water prediction model was found to correctly estimate the average water content with an absolute error of 0.22%. From the images it was also possible to follow the browning and drying propagation from the cookie edge toward the center. PMID:24804036
NASA Astrophysics Data System (ADS)
Fatkullin, M. N.; Solodovnikov, G. K.; Trubitsyn, V. M.
2004-01-01
The results of developing the empirical model of parameters of radio signals propagating in the inhomogeneous ionosphere at middle and high latitudes are presented. As the initial data we took the homogeneous data obtained as a result of observations carried out at the Antarctic "Molodezhnaya" station by the method of continuous transmission probing of the ionosphere by signals of the satellite radionavigation "Transit" system at coherent frequencies of 150 and 400 MHz. The data relate to the summer season period in the Southern hemisphere of the Earth in 1988-1989 during high (F > 160) activity of the Sun. The behavior of the following statistical characteristics of radio signal parameters was analyzed: (a) the interval of correlation of fluctuations of amplitudes at a frequency of 150 MHz (τ_kA); (b) the interval of correlation of fluctuations of the difference phase (τ_kφ); and (c) the parameter characterizing frequency spectra of amplitude (P_A) and phase (P_φ) fluctuations. A third-degree polynomial was used for modeling of propagation parameters. For all above indicated propagation parameters, the coefficients of the third-degree polynomial were calculated as a function of local time and magnetic activity. The results of calculations are tabulated.
NASA Astrophysics Data System (ADS)
Jambrina, P. G.; Lara, Manuel; Menéndez, M.; Launay, J.-M.; Aoiz, F. J.
2012-10-01
Cumulative reaction probabilities (CRPs) at various total angular momenta have been calculated for the barrierless reaction S(1D) + H2 → SH + H at total energies up to 1.2 eV using three different theoretical approaches: time-independent quantum mechanics (QM), quasiclassical trajectories (QCT), and statistical quasiclassical trajectories (SQCT). The calculations have been carried out on the widely used potential energy surface (PES) by Ho et al. [J. Chem. Phys. 116, 4124 (2002), 10.1063/1.1431280] as well as on the recent PES developed by Song et al. [J. Phys. Chem. A 113, 9213 (2009), 10.1021/jp903790h]. The results show that the differences between these two PES are relatively minor and mostly related to the different topologies of the well. In addition, the agreement between the three theoretical methodologies is good, even for the highest total angular momenta and energies. In particular, the good accordance between the CRPs obtained with dynamical methods (QM and QCT) and the statistical model (SQCT) indicates that the reaction can be considered statistical in the whole range of energies in contrast with the findings for other prototypical barrierless reactions. In addition, total CRPs and rate coefficients in the range of 20-1000 K have been calculated using the QCT and SQCT methods and have been found somewhat smaller than the experimental total removal rates of S(1D).
2016-01-01
The aims of this study were to develop strategies and algorithms for calculating food commodity intake suitable for exposure assessment of residual chemicals, using the food intake database of the Korea National Health and Nutrition Examination Survey (KNHANES). In this study, apples and their processed food products were chosen as a model food for accurate calculation of food commodity intakes through the recently developed Korea food commodity intake calculation (KFCIC) software. The average daily intakes of total apples in Korea Health Statistics were 29.60 g in 2008, 32.40 g in 2009, 34.30 g in 2010, 28.10 g in 2011, and 24.60 g in 2012. The average daily intake of apples calculated by the KFCIC software was 2.65 g higher than that reported in Korea Health Statistics. The food intake data in Korea Health Statistics might reflect the intake of apples from mixed and processed foods less fully than the KFCIC software does. These results can affect the outcome of risk assessment for residual chemicals in foods. Therefore, accurate estimation of the average daily intake of food commodities is very important, and more data on food intakes and recipes should be incorporated to improve data quality. Nevertheless, this study can contribute to the predictive estimation of exposure to possible residual chemicals and subsequent analysis of their potential risks. PMID:27152299
Minică, Camelia C; Dolan, Conor V; Hottenga, Jouke-Jan; Willemsen, Gonneke; Vink, Jacqueline M; Boomsma, Dorret I
2013-05-01
When phenotypic, but no genotypic, data are available for relatives of participants in genetic association studies, previous research has shown that family-based imputed genotypes can boost statistical power when included in such studies. Here, using simulations, we compared the performance of two statistical approaches suitable for modeling imputed genotype data: the mixture approach, which involves the full distribution of the imputed genotypes, and the dosage approach, in which the mean of the conditional distribution features as the imputed genotype. Simulations were run by varying sibship size, the size of the phenotypic correlations among siblings, imputation accuracy, and the minor allele frequency of the causal SNP. Furthermore, as imputing sibling data and extending the model to include sibships of size two or greater requires modeling the familial covariance matrix, we inquired whether model misspecification affects power. Finally, the results obtained via simulations were empirically verified in two datasets, one with continuous phenotype data (height) and one with a dichotomous phenotype (smoking initiation). Across the settings considered, the mixture and the dosage approach are equally powerful and both produce unbiased parameter estimates. In addition, the likelihood-ratio test in the linear mixed model appears to be robust to the considered misspecification in the background covariance structure, given low to moderate phenotypic correlations among siblings. Empirical results show that the inclusion of imputed sibling genotypes in association analysis does not always result in a larger test statistic. The actual test statistic may drop in value due to small effect sizes. That is, if the power benefit is small, i.e., the change in the distribution of the test statistic under the alternative hypothesis is relatively small, the probability of obtaining a smaller test statistic is greater. As genetic effects are typically hypothesized to be small, in practice the decision on whether family-based imputation should be used as a means to increase power should be informed by prior power calculations and by consideration of the background correlation.
Finch, S J; Chen, C H; Gordon, D; Mendell, N R
2001-12-01
This study compared the performance of the maximum lod (MLOD), maximum heterogeneity lod (MHLOD), maximum non-parametric linkage score (MNPL), maximum Kong and Cox linear extension (MKC(lin)) of NPL, and maximum Kong and Cox exponential extension (MKC(exp)) of NPL as calculated in Genehunter 1.2 and Genehunter-Plus. Our performance measure was the distance between the marker with maximum value for each linkage statistic and the trait locus. We performed a simulation study considering: 1) four modes of transmission, 2) 100 replicates for each model, 3) 58 pedigrees (with 592 subjects) per replicate, 4) three linked marker loci each having three equally frequent alleles, and 5) either 0% unlinked families (linkage homogeneity) or 50% unlinked families (linkage heterogeneity). For each replicate, we obtained the Haldane map position of the location at which each of the five statistics is maximized. The MLOD and MHLOD were obtained by maximizing over penetrances, phenocopy rate, and risk-allele frequencies. For the models simulated, MHLOD appeared to be the best statistic both in terms of identifying a marker locus having the smallest mean distance from the trait locus and in terms of the strongest negative correlation between maximum linkage statistic and distance of the identified position and the trait locus. The marker loci with maximum value of the Kong and Cox extensions of the NPL statistic also were closer to the trait locus than the marker locus with maximum value of the NPL statistic. Copyright 2001 Wiley-Liss, Inc.
Teshima, Tara Lynn; Patel, Vaibhav; Mainprize, James G; Edwards, Glenn; Antonyshyn, Oleh M
2015-07-01
The utilization of three-dimensional modeling technology in craniomaxillofacial surgery has grown exponentially during the last decade. Future development, however, is hindered by the lack of a normative three-dimensional anatomic dataset and a statistical mean three-dimensional virtual model. The purpose of this study is to develop and validate a protocol to generate a statistical three-dimensional virtual model based on a normative dataset of adult skulls. Two hundred adult skull CT images were reviewed. The average three-dimensional skull was computed by processing each CT image in the series using thin-plate spline geometric morphometric protocol. Our statistical average three-dimensional skull was validated by reconstructing patient-specific topography in cranial defects. The experiment was repeated 4 times. In each case, computer-generated cranioplasties were compared directly to the original intact skull. The errors describing the difference between the prediction and the original were calculated. A normative database of 33 adult human skulls was collected. Using 21 anthropometric landmark points, a protocol for three-dimensional skull landmarking and data reduction was developed and a statistical average three-dimensional skull was generated. Our results show the root mean square error (RMSE) for restoration of a known defect using the native best match skull, our statistical average skull, and worst match skull was 0.58, 0.74, and 4.4 mm, respectively. The ability to statistically average craniofacial surface topography will be a valuable instrument for deriving missing anatomy in complex craniofacial defects and deficiencies as well as in evaluating morphologic results of surgery.
An alternative way to evaluate chemistry-transport model variability
NASA Astrophysics Data System (ADS)
Menut, Laurent; Mailler, Sylvain; Bessagnet, Bertrand; Siour, Guillaume; Colette, Augustin; Couvidat, Florian; Meleux, Frédérik
2017-03-01
A simple and complementary model evaluation technique for regional chemistry transport is discussed. The methodology is based on the concept that we can learn about model performance by comparing the simulation results with observational data available for time periods other than the period originally targeted. First, the statistical indicators selected in this study (spatial and temporal correlations) are computed for a given time period, using colocated observation and simulation data in time and space. Second, the same indicators are used to calculate scores for several other years while conserving the spatial locations and Julian days of the year. The difference between the results provides useful insights on the model capability to reproduce the observed day-to-day and spatial variability. In order to synthesize the large amount of results, a new indicator is proposed, designed to compare several error statistics between all the years of validation and to quantify whether the period and area being studied were well captured by the model for the correct reasons.
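As a rough illustration of the cross-year scoring idea described above, the sketch below computes temporal and spatial correlation indicators from colocated observation/simulation arrays and re-scores the same simulation against other years; the array layout and all function and variable names are assumptions, not the authors' code.

```python
import numpy as np

def temporal_correlation(obs, sim):
    """Mean over stations of the day-to-day (temporal) Pearson correlation."""
    # obs, sim: arrays of shape (n_stations, n_days), colocated in space and time
    return np.nanmean([np.corrcoef(o, s)[0, 1] for o, s in zip(obs, sim)])

def spatial_correlation(obs, sim):
    """Mean over days of the station-to-station (spatial) Pearson correlation."""
    return np.nanmean([np.corrcoef(obs[:, d], sim[:, d])[0, 1]
                       for d in range(obs.shape[1])])

# Score the simulation against its own year, then against the same Julian days
# and station locations of other years; a clear drop for the other years suggests
# the model reproduces the observed day-to-day and spatial variability for the
# right reasons (hypothetical variable names).
# score_target = temporal_correlation(obs_2009, sim_2009)
# scores_other = [temporal_correlation(obs_y, sim_2009) for obs_y in other_years]
```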
Making Spatial Statistics Service Accessible On Cloud Platform
NASA Astrophysics Data System (ADS)
Mu, X.; Wu, J.; Li, T.; Zhong, Y.; Gao, X.
2014-04-01
Web services can bring together applications running on diverse platforms; users can access and share various data, information, and models more effectively and conveniently from a web service platform. Cloud computing has emerged as a paradigm of Internet computing in which dynamic, scalable, and often virtualized resources are provided as services. With the rapid growth of massive data and the constraints of networks, traditional web service platforms face prominent problems such as computational efficiency, maintenance cost, and data security. In this paper, we offer a spatial statistics service based on the Microsoft cloud. An experiment was carried out to evaluate the availability and efficiency of this service. The results show that this spatial statistics service is conveniently accessible to the public and has high processing efficiency.
Study on the keV neutron capture reaction in 56Fe and 57Fe
NASA Astrophysics Data System (ADS)
Wang, Taofeng; Lee, Manwoo; Kim, Guinyun; Ro, Tae-Ik; Kang, Yeong-Rok; Igashira, Masayuki; Katabuchi, Tatsuya
2014-03-01
The neutron capture cross-sections and the radiative capture gamma-ray spectra from the broad resonances of 56Fe and 57Fe in the neutron energy range from 10 to 90 keV and at 550 keV have been measured with an anti-Compton NaI(Tl) detector. Pulsed keV neutrons were produced from the 7Li(p,n)7Be reaction by bombarding the lithium target with the 1.5 ns bunched proton beam from the 3 MV Pelletron accelerator. The incident neutron spectrum on a capture sample was measured by means of a time-of-flight (TOF) method with a 6Li-glass detector. The number of weighted capture counts of the iron or gold sample was obtained by applying a pulse height weighting technique to the corresponding capture gamma-ray pulse height spectrum. The neutron capture gamma-ray spectra were obtained by unfolding the observed capture gamma-ray pulse height spectra. To achieve further understanding of the mechanism of the neutron radiative capture reaction and to study physics models, theoretical calculations of the gamma-ray spectra for 56Fe and 57Fe with the POD program have been performed by applying the Hauser-Feshbach statistical model. The dominant ingredients of the statistical calculation were the Optical Model Potential (OMP), the level densities described by the Mengoni-Nakajima approach, and the gamma-ray transmission coefficients described by gamma-ray strength functions. The comparison of the theoretical calculations, performed only for the 550 keV point, shows good agreement with the present experimental results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chihi, Hayet; Galli, Alain; Ravenne, Christian
2000-03-15
The object of this study is to build a three-dimensional (3D) geometric model of the stratigraphic units of the margin of the Rhone River on the basis of geophysical investigations by a network of seismic profiles at sea. The geometry of these units is described by depth charts of each surface identified by seismic profiling, which is done by geostatistics. The modeling starts by a statistical analysis by which we determine the parameters that enable us to calculate the variograms of the identified surfaces. After having determined the statistical parameters, we calculate the variograms of the variable Depth. By analyzing the behavior of the variogram we then can deduce whether the situation is stationary and if the variable has an anisotropic behavior. We tried the following two nonstationary methods to obtain our estimates: (a) The method of universal kriging if the underlying variogram was directly accessible. (b) The method of increments if the underlying variogram was not directly accessible. After having modeled the variograms of the increments and of the variable itself, we calculated the surfaces by kriging the variable Depth on a small-mesh estimation grid. The two methods then are compared and their respective advantages and disadvantages are discussed, as well as their fields of application. These methods are capable of being used widely in earth sciences for automatic mapping of geometric surfaces or for variables such as a piezometric surface or a concentration, which are not 'stationary,' that is, essentially, possess a gradient or a tendency to develop systematically in space.
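A minimal sketch of the experimental-variogram step described above, assuming scattered (x, y, Depth) picks from the interpreted seismic surfaces; this is generic geostatistics code with hypothetical names, not the study's implementation.

```python
import numpy as np

def empirical_variogram(x, y, depth, lag_width, n_lags):
    """Omnidirectional experimental variogram: gamma(h) = 0.5 * mean[(z_i - z_j)^2] per lag bin."""
    # pairwise separation distances and squared depth differences
    dx = x[:, None] - x[None, :]
    dy = y[:, None] - y[None, :]
    dist = np.sqrt(dx**2 + dy**2)
    dz2 = (depth[:, None] - depth[None, :])**2
    iu = np.triu_indices(len(x), k=1)          # keep each pair once
    dist, dz2 = dist[iu], dz2[iu]
    lags, gamma = [], []
    for k in range(n_lags):
        mask = (dist >= k * lag_width) & (dist < (k + 1) * lag_width)
        if mask.any():
            lags.append(dist[mask].mean())
            gamma.append(0.5 * dz2[mask].mean())
    return np.array(lags), np.array(gamma)
```

The resulting (lag, gamma) pairs are what one would fit with a variogram model before kriging the variable Depth onto the estimation grid.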
Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis
NASA Technical Reports Server (NTRS)
Hoffman, Ross N.
2001-01-01
We completed the formulation of the smoothness penalty functional this past quarter. We used a simplified procedure for estimating the statistics of the FCA solution spectral coefficients from the results of the unconstrained, low-truncation FCA (stopping criterion) solutions. During the current reporting period we have completed the calculation of GEOS-2 model-equivalent brightness temperatures for the 6.7 micron and 11 micron window channels used in the GOES imagery for all 10 cases from August 1999. These were simulated using the AER-developed Optimal Spectral Sampling (OSS) model.
Modelling the CO emission in southern Bok globules
NASA Astrophysics Data System (ADS)
Cecchi-Pestellini, Cesare; Casu, Silvia; Scappini, Flavio
2001-10-01
The analysis of the sample of southern globules investigated by Scappini et al. in the CO (4-3) transition has been extended using a statistical equilibrium-radiative transfer model and making use of the results of Bourke et al. and Henning & Launardt for those globules which are in common among these samples. CO column densities and excitation temperatures have been calculated and the results compared with a chemical model representative of the chemistry of a spherical dark cloud. In a number of cases the gas kinetic temperatures have been constrained.
NASA Astrophysics Data System (ADS)
Karpenko, S. S.; Zybin, E. Yu; Kosyanchuk, V. V.
2018-02-01
In this paper we design a nonparametric method for failures detection and localization in the aircraft control system that uses the measurements of the control signals and the aircraft states only. It doesn’t require a priori information of the aircraft model parameters, training or statistical calculations, and is based on algebraic solvability conditions for the aircraft model identification problem. This makes it possible to significantly increase the efficiency of detection and localization problem solution by completely eliminating errors, associated with aircraft model uncertainties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, R. D.
2013-09-06
We have developed a set of modeled nuclear reaction cross sections for use in radiochemical diagnostics. Systematics for the input parameters required by the Hauser-Feshbach statistical model were developed and used to calculate neutron induced nuclear reaction cross sections for targets ranging from Terbium (Z = 65) to Rhenium (Z = 75). Of particular interest are the cross sections on Tm, Lu, and Ta including reactions on isomeric targets.
Numerical solution for weight reduction model due to health campaigns in Spain
NASA Astrophysics Data System (ADS)
Mohammed, Maha A.; Noor, Noor Fadiya Mohd; Siri, Zailan; Ibrahim, Adriana Irawati Nur
2015-10-01
A transition model between three subpopulations based on the Body Mass Index of the Valencia community in Spain is considered. Population nutritional habits and public health strategies on weight reduction are assumed to remain unchanged until 2030. The system of ordinary differential equations is solved using a higher-order Runge-Kutta method. The numerical results obtained are compared with the predicted values of the subpopulation proportions based on statistical estimation in 2013, 2015 and 2030. The relative approximate error is calculated. The consistency of the Runge-Kutta method in solving the model is discussed.
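As an illustration of the numerical scheme, the sketch below implements a classical fourth-order Runge-Kutta integrator applied to a generic three-subpopulation transition system; the rate constants and initial proportions are placeholders, not the calibrated Valencia values.

```python
import numpy as np

def rk4(f, y0, t0, t1, n_steps):
    """Classical fourth-order Runge-Kutta for dy/dt = f(t, y)."""
    h = (t1 - t0) / n_steps
    t, y = t0, np.asarray(y0, dtype=float)
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Illustrative three-subpopulation (normal, overweight, obese) transition model;
# the rate constants a, b, c are made-up placeholders.
a, b, c = 0.02, 0.015, 0.005
def bmi_model(t, y):
    n, ow, ob = y
    return np.array([-a * n + b * ow,
                      a * n - (b + c) * ow,
                      c * ow])

proportions_2030 = rk4(bmi_model, [0.45, 0.40, 0.15], 2013.0, 2030.0, 1000)
```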
NASA Astrophysics Data System (ADS)
Iguchi, Kazumoto
We discuss the statistical mechanical foundation for the two-state transition in the protein folding of small globular proteins. In the standard arguments of protein folding, the statistical search for the ground state is carried out over astronomically many conformations in the configuration space. This leads us to the famous Levinthal's paradox. To resolve the paradox, Gō first postulated that the two-state transition - an all-or-none type transition - is crucial for the protein folding of small globular proteins and used the Gō lattice model to show the two-state transition nature. Recently, many experimental results have accumulated that support the two-state transition for small globular proteins. Stimulated by such recent experiments, Zwanzig introduced a minimal statistical mechanical model that exhibits the two-state transition. Also, Finkelstein and coworkers have discussed the solution of the paradox by considering the sequential folding of a small globular protein. On the other hand, Iguchi has recently introduced a toy model of protein folding using the Rubik's magic snake model, in which all folded structures are exactly known and mathematically represented in terms of the four types of conformations: cis-, trans-, left and right gauche-configurations between the unit polyhedrons. In this paper, we study the relationship between Gō's two-state transition, Zwanzig's statistical mechanics model and Finkelstein's sequential folding model by applying them to the Rubik's magic snake models. We show that the foundation of Gō's two-state transition model relies on the search within the equienergy surface that is labeled by the contact order of the hydrophobic condensation. This idea reproduces Zwanzig's statistical model as a special case, realizes Finkelstein's sequential folding model, and fits together to explain the nature of the two-state transition of a small globular protein by calculating physical quantities such as the free energy, the contact order and the specific heat. We point out the similarity between the liquid-gas transition in statistical mechanics and the two-state transition of protein folding. We also study the morphology of the Rubik's magic snake models to give a prototype model for understanding the differences between α-helix proteins and β-sheet proteins.
Nielsen, Tine B; Wieslander, Elinore; Fogliata, Antonella; Nielsen, Morten; Hansen, Olfred; Brink, Carsten
2011-05-01
To investigate differences in calculated doses and normal tissue complication probability (NTCP) values between different dose algorithms. Six dose algorithms from four different treatment planning systems were investigated: Eclipse AAA, Oncentra MasterPlan Collapsed Cone and Pencil Beam, Pinnacle Collapsed Cone, and XiO Multigrid Superposition and Fast Fourier Transform Convolution. Twenty NSCLC patients treated in the period 2001-2006 at the same accelerator were included, and the accelerator used for the treatments was modeled in the different systems. The treatment plans were recalculated with the same number of monitor units and beam arrangements across the dose algorithms. Dose volume histograms of the GTV, PTV, combined lungs (excluding the GTV), and heart were exported and evaluated. NTCP values for the heart and lungs were calculated using the relative seriality model and the LKB model, respectively. Furthermore, NTCP for the lungs was calculated from two different model parameter sets. Calculations and evaluations were performed both including and excluding density corrections. Statistically significant differences are found between the calculated doses to the heart, lungs, and targets across the algorithms. Mean lung dose and V20 are not very sensitive to a change between the investigated dose calculation algorithms. However, the dose levels for the PTV averaged over the patient population vary by up to 11%. The predicted NTCP values for pneumonitis vary between 0.20 and 0.24 or 0.35 and 0.48 across the investigated dose algorithms, depending on the chosen model parameter set. The influence of the use of density correction in the dose calculation on the predicted NTCP values depends on the specific dose calculation algorithm and the model parameter set. For fixed values of these, the changes in NTCP can be up to 45%. Calculated NTCP values for pneumonitis are more sensitive to the choice of algorithm than mean lung dose and V20, which are also commonly used for plan evaluation. The NTCP values for heart complication are, in this study, not very sensitive to the choice of algorithm. Dose calculations based on density corrections result in quite different NTCP values than calculations without density corrections. It is therefore important when working with NTCP planning to use NTCP parameter values based on calculations and treatments similar to those for which the NTCP is of interest.
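For readers unfamiliar with the LKB formalism mentioned above, the sketch below evaluates NTCP from a differential DVH via the generalized EUD; the parameter values are placeholders and not the parameter sets used in the study.

```python
import numpy as np
from math import erf, sqrt

def lkb_ntcp(dose_bins, rel_volumes, n, m, td50):
    """LKB NTCP from a differential DVH (dose bin centers in Gy, fractional volumes summing to 1)."""
    geud = np.sum(rel_volumes * dose_bins ** (1.0 / n)) ** n   # generalized EUD
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))                    # standard normal CDF

# Placeholder DVH and lung-pneumonitis parameters (illustrative only)
dose = np.array([2.0, 10.0, 20.0, 40.0, 60.0])
vol = np.array([0.40, 0.25, 0.20, 0.10, 0.05])
print(lkb_ntcp(dose, vol, n=1.0, m=0.35, td50=30.0))
```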
The challenge of identifying greenhouse gas-induced climatic change
NASA Technical Reports Server (NTRS)
Maccracken, Michael C.
1992-01-01
Meeting the challenge of identifying greenhouse gas-induced climatic change involves three steps. First, observations of critical variables must be assembled, evaluated, and analyzed to determine that there has been a statistically significant change. Second, reliable theoretical (model) calculations must be conducted to provide a definitive set of changes for which to search. Third, a quantitative and statistically significant association must be made between the projected and observed changes to exclude the possibility that the changes are due to natural variability or other factors. This paper provides a qualitative overview of scientific progress in successfully fulfilling these three steps.
Approximate Single-Diode Photovoltaic Model for Efficient I-V Characteristics Estimation
Ting, T. O.; Zhang, Nan; Guan, Sheng-Uei; Wong, Prudence W. H.
2013-01-01
Precise photovoltaic (PV) behavior models are normally described by nonlinear analytical equations. To solve such equations, it is necessary to use iterative procedures. Aiming to make the computation easier, this paper proposes an approximate single-diode PV model that enables high-speed predictions for the electrical characteristics of commercial PV modules. Based on the experimental data, statistical analysis is conducted to validate the approximate model. Simulation results show that the calculated current-voltage (I-V) characteristics fit the measured data with high accuracy. Furthermore, compared with the existing modeling methods, the proposed model reduces the simulation time by approximately 30% in this work. PMID:24298205
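The abstract does not reproduce the proposed approximation itself, but the sketch below shows the implicit single-diode I-V relation that normally requires iteration (here a simple fixed-point loop), which is the baseline such approximate models try to avoid; all module parameters are illustrative placeholders.

```python
import numpy as np

def single_diode_current(v, iph, i0, rs, rsh, n, ns, t=298.15, iters=50):
    """Implicit single-diode I-V equation solved by simple fixed-point iteration."""
    k, q = 1.380649e-23, 1.602176634e-19
    vt = n * ns * k * t / q            # modified thermal voltage for ns cells in series
    i = np.full_like(np.atleast_1d(v), iph, dtype=float)
    for _ in range(iters):
        i = iph - i0 * (np.exp((v + i * rs) / vt) - 1.0) - (v + i * rs) / rsh
    return i

# Placeholder module parameters (not from the paper); voltages kept below open circuit
v = np.linspace(0.0, 36.0, 50)
i = single_diode_current(v, iph=8.2, i0=1e-9, rs=0.3, rsh=300.0, n=1.3, ns=60)
```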
NASA Astrophysics Data System (ADS)
Chu, Huaqiang; Gu, Mingyan; Consalvi, Jean-Louis; Liu, Fengshan; Zhou, Huaichun
2016-03-01
The effects of total pressure on gas radiation heat transfer are investigated in 1D parallel plate geometry containing isothermal and homogeneous media and an inhomogeneous and non-isothermal CO2-H2O mixture under conditions relevant to oxy-fuel combustion using the line-by-line (LBL), statistical narrow-band (SNB), statistical narrow-band correlated-k (SNBCK), weighted-sum-of-grey-gases (WSGG), and full-spectrum correlated-k (FSCK) models. The LBL calculations were conducted using the HITEMP2010 and CDSD-1000 databases and the LBL results serve as the benchmark solution to evaluate the accuracy of the other models. Calculations of the SNB, SNBCK, and FSCK were conducted using both the 1997 EM2C SNB parameters and their recently updated 2012 parameters to investigate how the SNB model parameters affect the results under oxy-fuel combustion conditions at high pressures. The WSGG model considered is the recently developed one by Bordbar et al. [19] for oxy-fuel combustion based on LBL calculations using HITEMP2010. The total pressure considered ranges from 1 up to 30 atm. The total pressure significantly affects gas radiation transfer primarily through the increase in molecule number density and only slightly through spectral line broadening. Using the 1997 EM2C SNB model parameters the accuracy of SNB and SNBCK is very good and remains essentially independent of the total pressure. When using the 2012 EM2C SNB model parameters the SNB and SNBCK results are less accurate and their error increases with increasing the total pressure. The WSGG model has the lowest accuracy and the best computational efficiency among the models investigated. The errors of both WSGG and FSCK using the 2012 EM2C SNB model parameters increase when the total pressure is increased from 1 to 10 atm, but remain nearly independent of the total pressure beyond 10 atm. When using the 1997 EM2C SNB model parameters the accuracy of FSCK only slightly decreases with increasing the total pressure.
Multivariate model of female black bear habitat use for a Geographic Information System
Clark, Joseph D.; Dunn, James E.; Smith, Kimberly G.
1993-01-01
Simple univariate statistical techniques may not adequately assess the multidimensional nature of habitats used by wildlife. Thus, we developed a multivariate method to model habitat-use potential using a set of female black bear (Ursus americanus) radio locations and habitat data consisting of forest cover type, elevation, slope, aspect, distance to roads, distance to streams, and forest cover type diversity score in the Ozark Mountains of Arkansas. The model is based on the Mahalanobis distance statistic coupled with Geographic Information System (GIS) technology. That statistic is a measure of dissimilarity and represents a standardized squared distance between a set of sample variates and an ideal based on the mean of variates associated with animal observations. Calculations were made with the GIS to produce a map containing Mahalanobis distance values within each cell on a 60- × 60-m grid. The model identified areas of high habitat use potential that could not otherwise be identified by independent perusal of any single map layer. This technique avoids many pitfalls that commonly affect typical multivariate analyses of habitat use and is a useful tool for habitat manipulation or mitigation to favor terrestrial vertebrates that use habitats on a landscape scale.
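A minimal sketch of the Mahalanobis-distance surface calculation, assuming the habitat variables have already been extracted per GIS cell and per radio location; array names are hypothetical.

```python
import numpy as np

def mahalanobis_surface(grid_vars, animal_vars):
    """Squared Mahalanobis distance of every grid cell from the mean of animal-use locations.

    grid_vars   : (n_cells, n_vars) habitat variables for each GIS cell
    animal_vars : (n_locs, n_vars) habitat variables at the radio locations
    """
    mu = animal_vars.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(animal_vars, rowvar=False))
    diff = grid_vars - mu
    # D^2 per cell; small values indicate habitat similar to the animal-use ideal
    return np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
```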
Pasta Nucleosynthesis: Molecular dynamics simulations of nuclear statistical equilibrium
NASA Astrophysics Data System (ADS)
Caplan, Matthew; Horowitz, Charles; da Silva Schneider, Andre; Berry, Donald
2014-09-01
We simulate the decompression of cold dense nuclear matter, near the nuclear saturation density, in order to study the role of nuclear pasta in r-process nucleosynthesis in neutron star mergers. Our simulations are performed using a classical molecular dynamics model with 51 200 and 409 600 nucleons, and are run on GPUs. We expand our simulation region to decompress systems from initial densities of 0.080 fm-3 down to 0.00125 fm-3. We study proton fractions of YP = 0.05, 0.10, 0.20, 0.30, and 0.40 at T = 0.5, 0.75, and 1 MeV. We calculate the composition of the resulting systems using a cluster algorithm. This composition is in good agreement with nuclear statistical equilibrium models for temperatures of 0.75 and 1 MeV. However, for proton fractions greater than YP = 0.2 at a temperature of T = 0.5 MeV, the MD simulations produce non-equilibrium results with large rod-like nuclei. Our MD model is valid at higher densities than simple nuclear statistical equilibrium models and may help determine the initial temperatures and proton fractions of matter ejected in mergers.
Forecasting coconut production in the Philippines with ARIMA model
NASA Astrophysics Data System (ADS)
Lim, Cristina Teresa
2015-02-01
The study aimed to depict the situation of the coconut industry in the Philippines for the future years applying Autoregressive Integrated Moving Average (ARIMA) method. Data on coconut production, one of the major industrial crops of the country, for the period of 1990 to 2012 were analyzed using time-series methods. Autocorrelation (ACF) and partial autocorrelation functions (PACF) were calculated for the data. Appropriate Box-Jenkins autoregressive moving average model was fitted. Validity of the model was tested using standard statistical techniques. The forecasting power of autoregressive moving average (ARMA) model was used to forecast coconut production for the eight leading years.
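A sketch of a Box-Jenkins fit of this kind using statsmodels, with a synthetic stand-in series since the actual production data are not reproduced here; the (p, d, q) order is a placeholder to be chosen from the ACF/PACF.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for the 1990-2012 annual production series
# (placeholder numbers, not the actual Philippine coconut data).
rng = np.random.default_rng(0)
production = pd.Series(14.0 + 0.05 * np.arange(23) + rng.normal(0.0, 0.3, 23),
                       index=range(1990, 2013))

fit = ARIMA(production, order=(1, 1, 1)).fit()   # (p, d, q) suggested by ACF/PACF inspection
print(fit.forecast(steps=8))                     # forecasts for the eight leading years
```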
Computer program for the calculation of grain size statistics by the method of moments
Sawyer, Michael B.
1977-01-01
A computer program is presented for a Hewlett-Packard Model 9830A desk-top calculator (1) which calculates statistics using weight or point count data from a grain-size analysis. The program uses the method of moments in contrast to the more commonly used but less inclusive graphic method of Folk and Ward (1957). The merits of the program are: (1) it is rapid; (2) it can accept data in either grouped or ungrouped format; (3) it allows direct comparison with grain-size data in the literature that have been calculated by the method of moments; (4) it utilizes all of the original data rather than percentiles from the cumulative curve as in the approximation technique used by the graphic method; (5) it is written in the computer language BASIC, which is easily modified and adapted to a wide variety of computers; and (6) when used in the HP-9830A, it does not require punching of data cards. The method of moments should be used only if the entire sample has been measured and the worker defines the measured grain-size range. (1) Use of brand names in this paper does not imply endorsement of these products by the U.S. Geological Survey.
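A compact modern restatement of the method-of-moments computation (the original program is BASIC for the HP-9830A); class midpoints and weights below are made-up example values.

```python
import numpy as np

def moment_statistics(midpoints_phi, weights):
    """Grain-size statistics by the method of moments from grouped (class midpoint, weight) data."""
    m = np.asarray(midpoints_phi, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize weight (or point-count) fractions
    mean = np.sum(w * m)                  # first moment
    sd = np.sqrt(np.sum(w * (m - mean) ** 2))        # second moment: sorting
    skew = np.sum(w * (m - mean) ** 3) / sd**3       # third moment: skewness
    kurt = np.sum(w * (m - mean) ** 4) / sd**4       # fourth moment: kurtosis
    return mean, sd, skew, kurt

# Example with made-up class midpoints (phi units) and weight percentages
print(moment_statistics([0.5, 1.5, 2.5, 3.5], [10, 45, 35, 10]))
```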
NASA Astrophysics Data System (ADS)
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
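The two statistics advocated above are straightforward to compute from a vector of benchmark errors; a minimal sketch with hypothetical variable names follows.

```python
import numpy as np

def prob_abs_error_below(errors, threshold):
    """Empirical probability that a new calculation has an absolute error below the chosen threshold."""
    return np.mean(np.abs(np.asarray(errors, dtype=float)) < threshold)

def error_amplitude_at_confidence(errors, confidence=0.95):
    """Maximal error amplitude expected at the chosen confidence level (empirical quantile of |error|)."""
    return np.quantile(np.abs(errors), confidence)

# errors = reference_values - calculated_values  (per-system signed errors of a benchmark set)
```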
Asano, Masanari; Basieva, Irina; Khrennikov, Andrei; Ohya, Masanori; Tanaka, Yoshiharu; Yamato, Ichiro
2012-06-01
We developed a quantum-like model describing the gene regulation of glucose/lactose metabolism in a bacterium, Escherichia coli. Our quantum-like model can be considered as a kind of the operational formalism for microbiology and genetics. Instead of trying to describe processes in a cell in the very detail, we propose a formal operator description. Such a description may be very useful in situation in which the detailed description of processes is impossible or extremely complicated. We analyze statistical data obtained from experiments, and we compute the degree of E. coli's preference within adaptive dynamics. It is known that there are several types of E. coli characterized by the metabolic system. We demonstrate that the same type of E. coli can be described by the well determined operators; we find invariant operator quantities characterizing each type. Such invariant quantities can be calculated from the obtained statistical data.
Confusion Noise Due To Asteroids: From Mid-infrared To Millimetre Wavelengths
NASA Astrophysics Data System (ADS)
Kelemen, Janos; Kiss, C.; Pal, A.; Muller, T.; Abraham, P.
2006-12-01
We developed a statistical model for the asteroid component of the infrared sky for wavelengths 5 μm <= λ <= 1000 μm based on the Statistical Asteroid Model (Tedesco et al., 2005). Far-infrared fluxes of 1.9 million asteroids - derived with the help of the Standard Thermal Model - are used to calculate confusion noise values and expected asteroid counts for infrared space instruments in operation or in the near future (e.g. Akari, Herschel and Planck). Our results show that the confusion noise due to asteroids will not increase the detection threshold for most of the sky. However, there are specific areas near the ecliptic plane where the effect of asteroids can be comparable to the contribution of Galactic cirrus emission and that of the extragalactic background. This work was supported by the European Space Agency (PECS #98011) and by the Hungarian Space Office (TP286).
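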
Properties of J^P = 1/2^+ baryon octets at low energy
NASA Astrophysics Data System (ADS)
Kaur, Amanpreet; Gupta, Pallavi; Upadhyay, Alka
2017-06-01
The statistical model in combination with the detailed balance principle is able to phenomenologically calculate and analyze spin- and flavor-dependent properties like magnetic moments (with effective masses, with effective charge, or with both effective mass and effective charge), quark spin polarization and distribution, the strangeness suppression factor, and \overline{d}-\overline{u} asymmetry incorporating the strange sea. The s\overline{s} in the sea is said to be generated via the basic quark mechanism but suppressed by the strange quark mass factor m_s > m_{u,d}. The magnetic moments of the octet baryons are analyzed within the statistical model, by putting emphasis on the SU(3) symmetry-breaking effects generated by the mass difference between the strange and non-strange quarks. The work presented here assumes hadrons with a sea having an admixture of quark-gluon Fock states. The results obtained have been compared with theoretical models and experimental data.
Near-equilibrium dumb-bell-shaped figures for cohesionless small bodies
NASA Astrophysics Data System (ADS)
Descamps, Pascal
2016-02-01
In a previous paper (Descamps, P. [2015]. Icarus 245, 64-79), we developed a specific method aimed to retrieve the main physical characteristics (shape, density, surface scattering properties) of highly elongated bodies from their rotational lightcurves through the use of dumb-bell-shaped equilibrium figures. The present work is a test of this method. For that purpose we introduce near-equilibrium dumb-bell-shaped figures which are base dumb-bell equilibrium shapes modulated by lognormal statistics. Such synthetic irregular models are used to generate lightcurves from which our method is successfully applied. Shape statistical parameters of such near-equilibrium dumb-bell-shaped objects are in good agreement with those calculated for example for the Asteroid (216) Kleopatra from its dog-bone radar model. It may suggest that such bilobed and elongated asteroids can be approached by equilibrium figures perturbed be the interplay with a substantial internal friction modeled by a Gaussian random sphere.
Statistical Modeling of Robotic Random Walks on Different Terrain
NASA Astrophysics Data System (ADS)
Naylor, Austin; Kinnaman, Laura
Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.
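A small sketch of the RW/CRW simulation and mean-square-displacement calculation described above, using a von Mises turning-angle distribution for the correlated case; step counts and the concentration parameter are arbitrary choices, and terrain effects are not modeled here.

```python
import numpy as np

def correlated_random_walk(n_steps, step_size=1.0, kappa=0.0, rng=None):
    """2D walk with uniform step size; kappa=0 gives an uncorrelated RW, larger kappa a CRW."""
    rng = np.random.default_rng() if rng is None else rng
    heading = rng.uniform(0, 2 * np.pi)
    pos = np.zeros((n_steps + 1, 2))
    for i in range(n_steps):
        # turning angle drawn from a von Mises distribution; kappa sets the directional persistence
        heading += rng.vonmises(0.0, kappa) if kappa > 0 else rng.uniform(-np.pi, np.pi)
        pos[i + 1] = pos[i] + step_size * np.array([np.cos(heading), np.sin(heading)])
    return pos

def mean_square_displacement(paths):
    """MSD from the origin, averaged over an ensemble of walks of identical length."""
    return np.mean(np.sum(paths**2, axis=2), axis=0)

walks = np.stack([correlated_random_walk(200, kappa=4.0) for _ in range(100)])
msd = mean_square_displacement(walks)
```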
NASA Astrophysics Data System (ADS)
Anikin, A. S.
2018-06-01
Conditional statistical characteristics of the phase difference are considered depending on the ratio of instantaneous output signal amplitudes of spatially separated weakly directional antennas for the normal field model for paths with radio-wave scattering. The dependences obtained are related to the physical processes on the radio-wave propagation path. The normal model parameters are established at which the statistical characteristics of the phase difference depend on the ratio of the instantaneous amplitudes and hence can be used to measure the phase difference. Using Shannon's formula, the amount of information on the phase difference of signals contained in the ratio of their amplitudes is calculated depending on the parameters of the normal field model. Approaches are suggested to reduce the shift of phase difference measured for paths with radio-wave scattering. A comparison with results of computer simulation by the Monte Carlo method is performed.
A Backscatter-Lidar Forward-Operator
NASA Astrophysics Data System (ADS)
Geisinger, Armin; Behrendt, Andreas; Wulfmeyer, Volker; Vogel, Bernhard; Mattis, Ina; Flentje, Harald; Förstner, Jochen; Potthast, Roland
2015-04-01
We have developed a forward-operator which is capable of calculating virtual lidar profiles from atmospheric state simulations. The operator allows us to compare lidar measurements and model simulations based on the same measurement parameter: the lidar backscatter profile. This method simplifies qualitative comparisons and also makes quantitative comparisons possible, including statistical error quantification. Implemented into an aerosol-capable model system, the operator will act as a component to assimilate backscatter-lidar measurements. As many weather services maintain already networks of backscatter-lidars, such data are acquired already in an operational manner. To estimate and quantify errors due to missing or uncertain aerosol information, we started sensitivity studies about several scattering parameters such as the aerosol size and both the real and imaginary part of the complex index of refraction. Furthermore, quantitative and statistical comparisons between measurements and virtual measurements are shown in this study, i.e. applying the backscatter-lidar forward-operator on model output.
The mean time-limited crash rate of stock price
NASA Astrophysics Data System (ADS)
Li, Yun-Xian; Li, Jiang-Cheng; Yang, Ai-Jun; Tang, Nian-Sheng
2017-05-01
In this article we investigate the occurrence of stock market crashes within an economic cycle. A Bayesian approach, the Heston model, and a statistical-physics method are considered. Specifically, the Heston model and an effective potential are employed to describe the dynamic changes of the stock price. The Bayesian approach is used to estimate the Heston model's unknown parameters. The statistical-physics method is used to investigate the occurrence of a stock market crash by calculating the mean time-limited crash rate. Real financial data from the Shanghai Composite Index are analyzed with the proposed methods. The mean time-limited crash rate of the stock price is used to describe the occurrence of a stock market crash in an economic cycle. Monotonic and nonmonotonic behaviors are observed in the mean time-limited crash rate versus the volatility of the stock for various cross-correlation coefficients between volatility and price. A minimum occurrence of stock market crash, matching an optimal volatility, is also discovered.
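A minimal Euler-Maruyama sketch of the Heston dynamics used above, with correlated price and volatility noise; the parameter values are placeholders rather than the Bayesian estimates obtained from the Shanghai Composite Index data, and the crash-rate calculation itself is not reproduced here.

```python
import numpy as np

def heston_paths(s0, v0, mu, kappa, theta, xi, rho, T, n_steps, n_paths, seed=0):
    """Euler-Maruyama simulation of the Heston model with correlated price/variance noise."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, float(s0))
    v = np.full(n_paths, float(v0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)                      # full truncation keeps variance non-negative
        s *= np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
    return s, v

# Illustrative parameter values (placeholders, not the paper's estimates)
prices, variances = heston_paths(100.0, 0.04, 0.05, 2.0, 0.04, 0.3, -0.5, 1.0, 252, 10_000)
```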
Magnetospheric space plasma investigations
NASA Technical Reports Server (NTRS)
Comfort, Richard H.; Horwitz, James L.
1994-01-01
A time dependent semi-kinetic model that includes self collisions and ion-neutral collisions and chemistry was developed. Light ion outflow in the polar cap transition region was modeled and compared with data results. A model study of wave heating of O+ ions in the topside transition region was carried out using a code which does local calculations that include ion-neutral and Coulomb self collisions as well as production and loss of O+. Another project is a statistical study of hydrogen spin curve characteristics in the polar cap. A statistical study of the latitudinal distribution of core plasmas along the L=4.6 field line using DE-1/RIMS data was completed. A short paper on dual spacecraft estimates of ion temperature profiles and heat flows in the plasmasphere ionosphere system was prepared. An automated processing code was used to process RIMS data from 1981 to 1984.
Calculating system reliability with SRFYDO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M
2010-01-01
SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
Notes on numerical reliability of several statistical analysis programs
Landwehr, J.M.; Tasker, Gary D.
1999-01-01
This report presents a benchmark analysis of several statistical analysis programs currently in use in the USGS. The benchmark consists of a comparison between the values provided by a statistical analysis program for variables in the reference data set ANASTY and their known or calculated theoretical values. The ANASTY data set is an amendment of the Wilkinson NASTY data set that has been used in the statistical literature to assess the reliability (computational correctness) of calculated analytical results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tatarenko, V.A.; Tsysman, C.L.; Oltarzhevskaya, Y.T.
1994-12-31
The calculations in a majority of previous works for the fulleride (AqC{sub 60}) crystals were performed within the framework of the rigid-lattice model, neglecting the distortion relaxation of the host fullerene (C{sub 60}) crystal caused by the interstitial alkali-metal (A) cations. However, each cation is a source of a static distortion field, and the resulting field is a superposition of such fields generated by all cations. This is the reason why the host-crystal distortions depend on the A-cation configurations, i.e., on the type of spatial bulk distribution of the interstitial cations. This paper seeks to find a functional relation between the amplitudes of the doping-induced structure-distortion waves and those of the static concentration waves. A semiphenomenological model is constructed here within the scope of a statistical-thermodynamic treatment and using the lattice-statistics simulation method. In this model, the effects due to the presence of q solute A cations over the available interstices (per unit cell) on the static inherent reorientation and/or displacements of the solvent molecules from the average-lattice sites, as well as on the lattice parameter a of the elastically anisotropic cubic C{sub 60} crystal, are taken into account.
NASA Astrophysics Data System (ADS)
Paramonov, P. V.; Vorontsov, A. M.; Kunitsyn, V. E.
2015-10-01
Numerical modeling of optical wave propagation in atmospheric turbulence is traditionally performed using the so-called split-operator method, in which the influence of the propagation medium's refractive-index inhomogeneities is accounted for only within a system of infinitely narrow layers (phase screens) where the phase is distorted. Commonly, under certain assumptions, such phase screens are considered mutually statistically uncorrelated. However, in several important applications, including laser target tracking, remote sensing, and atmospheric imaging, accurate modeling of optical field propagation places upper limits on the interscreen spacing. The latter situation can be observed, for instance, in the presence of large-scale turbulent inhomogeneities or in deep turbulence conditions, where interscreen distances become comparable with the turbulence outer scale and, hence, the corresponding phase screens cannot be statistically uncorrelated. In this paper, we discuss correlated phase screens. The statistical characteristics of the screens are calculated based on a representation of the turbulent fluctuations of the three-dimensional (3D) refractive-index random field as a set of sequentially correlated 3D layers displaced in the wave propagation direction. The statistical characteristics of the refractive-index fluctuations are described in terms of the von Karman power spectral density. In the representation of these 3D layers by corresponding phase screens, the geometrical optics approximation is used.
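For context, the sketch below generates a single conventional (uncorrelated) phase screen with a von Karman spectrum by FFT filtering of white noise; the interscreen correlation that is the subject of the paper is not implemented, and the normalization follows a common textbook convention that should be treated as approximate.

```python
import numpy as np

def von_karman_phase_screen(n, dx, r0, L0, rng=None):
    """FFT-based random phase screen with a von Karman spectrum (normalization approximate)."""
    rng = np.random.default_rng() if rng is None else rng
    df = 1.0 / (n * dx)                                       # spatial-frequency grid spacing
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    f0 = 1.0 / L0                                             # outer-scale cutoff frequency
    psd = 0.023 * r0**(-5.0 / 3.0) * (fxx**2 + fyy**2 + f0**2)**(-11.0 / 6.0)
    psd[0, 0] = 0.0                                           # drop the undefined piston term
    cn = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) * np.sqrt(psd) * df
    return np.real(np.fft.ifft2(cn)) * n**2

# Illustrative screen: 256x256 grid, 1 cm sampling, r0 = 10 cm, outer scale 25 m
screen = von_karman_phase_screen(n=256, dx=0.01, r0=0.1, L0=25.0)
```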
NASA Astrophysics Data System (ADS)
Meredith, K. T.; Han, L. F.; Cendón, D. I.; Crawford, J.; Hankin, S.; Peterson, M.; Hollins, S. E.
2018-01-01
The Canning Basin is the largest sedimentary basin in Western Australia and is located in one of the most cyclone-prone regions of Australia. Despite its importance as a future resource, limited groundwater data are available for the Basin. The main aims of this paper are to provide a detailed understanding of the source of groundwater recharge and the chemical evolution of dissolved inorganic carbon (DIC), and to provide groundwater age estimations using radiocarbon (14C_DIC). To do this we combine hydrochemical and isotopic techniques to investigate the type of precipitation that recharges the aquifer and to identify the carbon processes influencing 14C_DIC, δ13C_DIC, and [DIC]. This enables us to select an appropriate model for calculating radiocarbon ages in groundwater. The aquifer was found to be recharged by precipitation originating from tropical cyclones, imparting lower average δ2H and δ18O values in groundwater (-56.9‰ and -7.87‰, respectively). Water recharges the soil zone rapidly after these events, and the groundwater undergoes silicate mineral weathering and clay mineral transformation processes. It was also found that partial carbonate dissolution processes occur within the saturated zone under closed-system conditions. Additionally, these processes could be lumped into a pseudo-first-order process and the age could be estimated using the 14C statistical approach. In the single-sample-based 14C models, 14C_0 is the initial 14C_DIC value used in the decay equation, which considers only the 14C decay rate. A major advantage of using the statistical approach is that both 14C decay and the geochemical processes that cause the decrease in 14C_DIC are accounted for in the calculation. The 14C_DIC values of groundwater were found to decrease from 89 pmc in the south east to around 16 pmc along the groundwater flow path towards the coast, indicating ages ranging from modern to 5.3 ka. A test of the sensitivity of this method showed that a ∼15% error could be found for the oldest water. This error was low when compared to single-sample-based models. This study not only provides the first groundwater age estimations for the Canning Basin but is also the first groundwater dating study to test the sensitivity of the statistical approach and provide meaningful error calculations for groundwater dating.
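For comparison with the statistical approach, the single-sample decay relation mentioned above is simply t = ln(14C_0 / 14C_meas) / λ; a minimal sketch with placeholder activities follows (note that the statistical approach additionally accounts for geochemical lowering of 14C_DIC, which is why it yields younger ages than an uncorrected decay age).

```python
import numpy as np

LAMBDA = np.log(2.0) / 5730.0          # decay constant for the 5730-yr half-life (1/yr)

def radiocarbon_age(c14_measured_pmc, c14_initial_pmc):
    """Single-sample decay age: t = ln(14C_0 / 14C_meas) / lambda, ignoring geochemical corrections."""
    return np.log(c14_initial_pmc / c14_measured_pmc) / LAMBDA

# Placeholder values only: a down-gradient sample at 16 pmc with an assumed initial value of 85 pmc.
# The uncorrected age overstates the true age whenever geochemical processes have also lowered 14C_DIC.
print(radiocarbon_age(16.0, 85.0))     # age in years
```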
NASA Astrophysics Data System (ADS)
Gomo, M.; Vermeulen, D.
2015-03-01
An investigation was conducted to statistically compare the influence of non-purging and purging groundwater sampling methods on analysed inorganic chemistry parameters and calculated saturation indices. Groundwater samples were collected from 15 monitoring wells drilled in Karoo aquifers, before and after purging, for the comparative study. For the non-purging method, samples were collected from groundwater flow zones located in the wells using electrical conductivity (EC) profiling. The two data sets of non-purged and purged groundwater samples were analysed for inorganic chemistry parameters at the Institute of Groundwater Studies (IGS) laboratory of the Free University in South Africa. Saturation indices for the mineral phases found in the database of the PHREEQC hydrogeochemical model were calculated for each data set. Four one-way ANOVA tests were conducted using Microsoft Excel 2007 to investigate whether there is any statistically significant difference between: (1) all inorganic chemistry parameters measured in the non-purged and purged groundwater samples for each specific well, (2) all mineral saturation indices calculated for the non-purged and purged groundwater samples for each specific well, (3) individual inorganic chemistry parameters measured in the non-purged and purged groundwater samples across all wells, and (4) individual mineral saturation indices calculated for non-purged and purged groundwater samples across all wells. For all the ANOVA tests conducted, the calculated probability values (p) are greater than the 0.05 significance level and the test statistic (F) is less than the critical value (Fcrit) (F < Fcrit). The results imply that there was no statistically significant difference between the two data sets. With 95% confidence, it was therefore concluded that the variance between groups was due to random chance rather than to the influence of the sampling methods (the tested factor). It is therefore possible that in some hydrogeologic conditions, non-purged groundwater samples may be just as representative as purged ones. The findings of this study can provide an important platform for future evidence-oriented research investigations to establish the necessity of purging prior to groundwater sampling in different aquifer systems.
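Each of the four tests above is a standard one-way ANOVA; a minimal sketch with scipy and placeholder concentrations (not the study's data) follows.

```python
from scipy import stats

# Placeholder concentrations of one analyte across wells (illustrative values only)
non_purged = [12.1, 10.4, 11.8, 13.0, 12.5]
purged     = [11.9, 10.7, 12.2, 12.8, 12.1]

f_stat, p_value = stats.f_oneway(non_purged, purged)
# p_value > 0.05 -> no statistically significant difference between sampling methods at the 5% level
print(f_stat, p_value)
```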
Bozkurt, Hayriye; D'Souza, Doris H; Davidson, P Michael
2014-05-01
Hepatitis A virus (HAV) is a food-borne enteric virus responsible for outbreaks of hepatitis associated with shellfish consumption. The objectives of this study were to determine the thermal inactivation behavior of HAV in blue mussels, to compare the first-order and Weibull models to describe the data, to calculate Arrhenius activation energy for each model, and to evaluate model efficiency by using selected statistical criteria. The times required to reduce the population by 1 log cycle (D-values) calculated from the first-order model (50 to 72°C) ranged from 1.07 to 54.17 min for HAV. Using the Weibull model, the times required to destroy 1 log unit (tD = 1) of HAV at the same temperatures were 1.57 to 37.91 min. At 72°C, the treatment times required to achieve a 6-log reduction were 7.49 min for the first-order model and 8.47 min for the Weibull model. The z-values (changes in temperature required for a 90% change in the log D-values) calculated for HAV were 15.88 ± 3.97°C (R(2), 0.94) with the Weibull model and 12.97 ± 0.59°C (R(2), 0.93) with the first-order model. The calculated activation energies for the first-order model and the Weibull model were 165 and 153 kJ/mol, respectively. The results revealed that the Weibull model was more appropriate for representing the thermal inactivation behavior of HAV in blue mussels. Correct understanding of the thermal inactivation behavior of HAV could allow precise determination of the thermal process conditions to prevent food-borne viral outbreaks associated with the consumption of contaminated mussels.
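A small sketch of how treatment times for a target log reduction follow from the two fitted models, using their common parameterizations; the numerical parameters below are illustrative placeholders, not the fitted values from the study.

```python
def time_for_log_reduction(log_red, d_value=None, delta=None, p=None):
    """Treatment time for a given log10 reduction under the first-order or the Weibull model.

    First-order : log10(N/N0) = -t / D          -> t = D * log_red
    Weibull     : log10(N/N0) = -(t / delta)**p -> t = delta * log_red**(1/p)
    (delta corresponds to the time for the first decimal reduction, tD=1)
    """
    if d_value is not None:
        return d_value * log_red
    return delta * log_red ** (1.0 / p)

# Illustrative parameter values (placeholders, not fitted to the study's data)
print(time_for_log_reduction(6, d_value=1.1))        # 6-log time, first-order model
print(time_for_log_reduction(6, delta=1.6, p=1.1))   # 6-log time, Weibull model
```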
David E. Rupp,
2016-05-05
The 20th century climate for the Southeastern United States and surrounding areas as simulated by global climate models used in the Coupled Model Intercomparison Project Phase 5 (CMIP5) was evaluated. A suite of statistics that characterize various aspects of the regional climate was calculated from both model simulations and observation-based datasets. CMIP5 global climate models were ranked by their ability to reproduce the observed climate. Differences in the performance of the models between regions of the United States (the Southeastern and Northwestern United States) warrant a regional-scale assessment of CMIP5 models.
NASA Astrophysics Data System (ADS)
Press, J.; Broughton, J.; Kudela, R. M.
2014-12-01
Suspended and dissolved trace elements are key determinants of water quality in estuarine and coastal waters. High concentrations of trace element pollutants in the San Francisco Bay estuary necessitate consistent and thorough monitoring to mitigate adverse effects on biological systems and the contamination of water and food resources. Although existing monitoring programs collect annual in situ samples from fixed locations, models proposed by Benoit, Kudela, & Flegal (2010) enable calculation of the water column total concentration (WCT) and the water column dissolved concentration (WCD) of 14 trace elements in the San Francisco Bay from a more frequently sampled metric—suspended solids concentration (SSC). This study tests the application of these models with SSC calculated from remote sensing data, with the aim of validating a tool for continuous synoptic monitoring of trace elements in the San Francisco Bay. Using HICO imagery, semi-analytical and empirical SSC algorithms were tested against a USGS dataset. A single-band method with statistically significant linear fit (p < 0.001) was chosen as the proxy for SSC values. The numerical models for WCT and the distribution ratio D were applied in MATLAB with terms to account for regional and seasonal effects, and results were used to calculate WCD. The modeled results were assessed against in situ data from the San Francisco Estuary Regional Monitoring Program. Quantile regression was used to evaluate model sensitivity to the distribution of regions, and outliers displaying regional aberrations were removed before robust regression was applied. Statistically significant and highly correlated results for WCT were found for 10 elements, with goodness of fit greater than or equal to that of the original models of seven elements. WCD was successfully modeled for six elements, with goodness of fit for each exceeding that of the original models. Concentrations of Arsenic, Iron, and Lead in the southern region of the Bay were found to exceed EPA water quality criteria for human health and aquatic life. The results of this study demonstrate the potential of monitoring programs using remote observation of trace element concentrations, and provide the foundation for investigation of pollutant sources and pathways over time.
Blum, Thomas; Chowdhury, Saumitra; Hayakawa, Masashi; Izubuchi, Taku
2015-01-09
The most compelling possibility for a new law of nature beyond the four fundamental forces comprising the standard model of high-energy physics is the discrepancy between measurements and calculations of the muon anomalous magnetic moment. Until now a key part of the calculation, the hadronic light-by-light contribution, has only been accessible from models of QCD, the quantum description of the strong force, whose accuracy at the required level may be questioned. A first principles calculation with systematically improvable errors is needed, along with the upcoming experiments, to decisively settle the matter. For the first time, the form factor that yields the light-by-light scattering contribution to the muon anomalous magnetic moment is computed in such a framework, lattice QCD+QED and QED. A nonperturbative treatment of QED is used and checked against perturbation theory. The hadronic contribution is calculated for unphysical quark and muon masses, and only the diagram with a single quark loop is computed for which statistically significant signals are obtained. Initial results are promising, and the prospect for a complete calculation with physical masses and controlled errors is discussed.
The Thomas–Fermi quark model: Non-relativistic aspects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Quan, E-mail: quan_liu@baylor.edu; Wilcox, Walter, E-mail: walter_wilcox@baylor.edu
The first numerical investigation of non-relativistic aspects of the Thomas–Fermi (TF) statistical multi-quark model is given. We begin with a review of the traditional TF model without an explicit spin interaction and find that the spin splittings are too small in this approach. An explicit spin interaction is then introduced which entails the definition of a generalized spin "flavor". We investigate baryonic states in this approach which can be described with two inequivalent wave functions; such states can however apply to multiple degenerate flavors. We find that the model requires a spatial separation of quark flavors, even if completely degenerate. Although the TF model is designed to investigate the possibility of many-quark states, we find surprisingly that it may be used to fit the low energy spectrum of almost all ground state octet and decuplet baryons. The charge radii of such states are determined and compared with lattice calculations and other models. The low energy fit obtained allows us to extrapolate to the six-quark doubly strange H-dibaryon state, flavor symmetric strange states of higher quark content and possible six quark nucleon–nucleon resonances. The emphasis here is on the systematics revealed in this approach. We view our model as a versatile and convenient tool for quickly assessing the characteristics of new, possibly bound, particle states of higher quark number content.
Highlights:
• First application of the statistical Thomas–Fermi quark model to baryonic systems.
• Novel aspects: spin as generalized flavor; spatial separation of quark flavor phases.
• The model is statistical, but the low energy baryonic spectrum is successfully fit.
• Numerical applications include the H-dibaryon, strange states and nucleon resonances.
• The statistical point of view does not encourage the idea of bound many-quark baryons.
Statistical analysis of low level atmospheric turbulence
NASA Technical Reports Server (NTRS)
Tieleman, H. W.; Chen, W. W. L.
1974-01-01
The statistical properties of low-level wind-turbulence data were obtained with the model 1080 total vector anemometer and the model 1296 dual split-film anemometer, both manufactured by Thermo Systems Incorporated. The data obtained from the above fast-response probes were compared with the results obtained from a pair of Gill propeller anemometers. The digitized time series representing the three velocity components and the temperature were each divided into a number of blocks, the length of which depended on the lowest frequency of interest and also on the storage capacity of the available computer. A moving-average and differencing high-pass filter was used to remove the trend and the low frequency components in the time series. The calculated results for each of the anemometers used are represented in graphical or tabulated form.
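A minimal sketch of the moving-average-and-differencing high-pass filtering step described above, assuming a synthetic velocity record and an arbitrary averaging window (neither value is from the report):

```python
"""Sketch of a moving-average-and-differencing high-pass filter for detrending
a turbulence time series. Window length and sample series are illustrative."""
import numpy as np

def moving_average_highpass(x, window):
    """Remove trend/low frequencies by subtracting a centered moving average."""
    kernel = np.ones(window) / window
    trend = np.convolve(x, kernel, mode="same")   # low-pass (moving average)
    return x - trend                              # difference = high-pass residual

# Synthetic velocity record: slow trend plus turbulent fluctuations
rng = np.random.default_rng(0)
t = np.arange(6000) * 0.1                          # 10 Hz samples
u = 5.0 + 0.002 * t + rng.normal(0.0, 0.5, t.size)

u_fluct = moving_average_highpass(u, window=601)   # ~60 s averaging window
print("mean of filtered series:", u_fluct.mean())  # ~0 after trend removal
print("fluctuation variance:", u_fluct.var())
```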
Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, He
2016-11-20
Angular velocity information is a requisite for a spacecraft guidance, navigation, and control system. In this paper, an approach for angular velocity estimation based merely on star vector measurement with an improved current statistical model Kalman filter is proposed. High-precision angular velocity estimation can be achieved under dynamic conditions. The amount of calculation is also reduced compared to a Kalman filter. Different trajectories are simulated to test this approach, and experiments with real starry sky observation are implemented for further confirmation. The estimation accuracy is proved to be better than 10^-4 rad/s under various conditions. Both the simulation and the experiment demonstrate that the described approach is effective and shows an excellent performance under both static and dynamic conditions.
Prediction of N-nitrosodimethylamine (NDMA) formation as a disinfection by-product.
Kim, Jongo; Clevenger, Thomas E
2007-06-25
This study investigated the possibility of a statistical model application for the prediction of N-nitrosodimethylamine (NDMA) formation. The NDMA formation was studied as a function of monochloramine concentration (0.001-5 mM) at fixed dimethylamine (DMA) concentrations of 0.01 mM or 0.05 mM. Excellent linear correlations were observed between the molar ratio of monochloramine to DMA and the NDMA formation on a log scale at pH 7 and 8. When a developed prediction equation was applied to a previously reported study, a good result was obtained. The statistical model appears to adequately predict NDMA concentrations if other NDMA precursors are excluded. Using the predictive tool, a simple and approximate calculation of NDMA formation can be obtained in drinking water systems.
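The reported log-scale linear relation can be turned into a simple predictive calculation; the sketch below does so with hypothetical slope and intercept values, since the fitted coefficients are not reproduced here:

```python
"""Sketch of the log-log predictive relation described above:
log10(NDMA formed) ~= m * log10([NH2Cl]/[DMA]) + c at fixed pH.
The slope and intercept are hypothetical placeholders, not fitted values."""
import numpy as np

def predict_ndma(monochloramine_mM, dma_mM, slope=0.8, intercept=-2.5):
    """Return a hypothetical NDMA formation estimate (arbitrary molar units)."""
    ratio = monochloramine_mM / dma_mM
    return 10.0 ** (slope * np.log10(ratio) + intercept)

for nh2cl in (0.01, 0.1, 1.0, 5.0):               # mM monochloramine
    print(nh2cl, predict_ndma(nh2cl, dma_mM=0.05))
```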
A study of two statistical methods as applied to shuttle solid rocket booster expenditures
NASA Technical Reports Server (NTRS)
Perlmutter, M.; Huang, Y.; Graves, M.
1974-01-01
The state probability technique and the Monte Carlo technique are applied to finding shuttle solid rocket booster expenditure statistics. For a given attrition rate per launch, the probable number of boosters needed for a given mission of 440 launches is calculated. Several cases are considered, including the elimination of the booster after a maximum of 20 consecutive launches. Also considered is the case where the booster is composed of replaceable components with independent attrition rates. A simple cost analysis is carried out to indicate the number of boosters to build initially, depending on booster costs. Two statistical methods were applied in the analysis: (1) state probability method which consists of defining an appropriate state space for the outcome of the random trials, and (2) model simulation method or the Monte Carlo technique. It was found that the model simulation method was easier to formulate while the state probability method required less computing time and was more accurate.
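A minimal Monte Carlo sketch of the "model simulation method" for booster expenditures, assuming an illustrative attrition rate and the 20-use retirement rule mentioned above:

```python
"""Monte Carlo sketch of booster expenditure for a fixed launch program.
The attrition probability and retirement rule follow the abstract; the
numerical values are illustrative assumptions."""
import random

def boosters_needed(n_launches=440, attrition=0.02, max_uses=20, seed=None):
    rng = random.Random(seed)
    fleet = []                       # remaining-use counts of surviving boosters
    built = 0
    for _ in range(n_launches):
        if not fleet:                # build a new booster when none is available
            fleet.append(max_uses)
            built += 1
        uses_left = fleet.pop()
        if rng.random() > attrition and uses_left > 1:
            fleet.append(uses_left - 1)   # survived and not yet retired
    return built

trials = sorted(boosters_needed(seed=i) for i in range(2000))
print("mean boosters built:", sum(trials) / len(trials))
print("95th percentile:", trials[int(0.95 * len(trials))])
```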
Sivers asymmetries for inclusive pion and kaon production in deep-inelastic scattering
NASA Astrophysics Data System (ADS)
Ellis, John; Hwang, Dae Sung; Kotzinian, Aram
2009-10-01
We calculate the Sivers distribution functions induced by the final-state interaction due to one-gluon exchange in diquark models of a nucleon structure, treating the cases of scalar and axial-vector diquarks with both dipole and Gaussian form factors. We use these distribution functions to calculate the Sivers single-spin asymmetries for inclusive pion and kaon production in deep-inelastic scattering. We compare our calculations with the results of HERMES and COMPASS, finding good agreement for π+ production at HERMES, and qualitative agreement for π0 and K+ production. Our predictions for pion and kaon production at COMPASS could be probed with increased statistics. The successful comparison of our calculations with the HERMES data constitutes prima facie evidence that the quarks in the nucleon have some orbital angular momentum in the infinite-momentum frame.
Pitfalls in statistical landslide susceptibility modelling
NASA Astrophysics Data System (ADS)
Schröder, Boris; Vorpahl, Peter; Märker, Michael; Elsenbeer, Helmut
2010-05-01
The use of statistical methods is a well-established approach to predict landslide occurrence probabilities and to assess landslide susceptibility. This is achieved by applying statistical methods relating historical landslide inventories to topographic indices as predictor variables. In our contribution, we compare several new and powerful methods developed in machine learning and well-established in landscape ecology and macroecology for predicting the distribution of shallow landslides in tropical mountain rainforests in southern Ecuador (among others: boosted regression trees, multivariate adaptive regression splines, maximum entropy). Although these methods are powerful, we think it is necessary to follow a basic set of guidelines to avoid some pitfalls regarding data sampling, predictor selection, and model quality assessment, especially if a comparison of different models is contemplated. We therefore suggest applying a novel toolbox to evaluate approaches to the statistical modelling of landslide susceptibility. Additionally, we propose some methods to open the "black box" as an inherent part of machine learning methods in order to achieve further explanatory insights into preparatory factors that control landslides. Sampling of training data should be guided by hypotheses regarding processes that lead to slope failure, taking into account their respective spatial scales. This approach leads to the selection of a set of candidate predictor variables considered on adequate spatial scales. This set should be checked for multicollinearity in order to facilitate model response curve interpretation. Model quality assessment evaluates how well a model is able to reproduce independent observations of its response variable. This includes criteria to evaluate different aspects of model performance, i.e. model discrimination, model calibration, and model refinement. In order to assess a possible violation of the assumption of independence in the training samples or a possible lack of explanatory information in the chosen set of predictor variables, the model residuals need to be checked for spatial autocorrelation. Therefore, we calculate spline correlograms. In addition to this, we investigate partial dependency plots and bivariate interaction plots considering possible interactions between predictors to improve model interpretation. Aiming at presenting this toolbox for model quality assessment, we investigate the influence of strategies in the construction of training datasets for statistical models on model quality.
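One of the recommended checks, screening candidate predictors for multicollinearity, can be sketched with variance inflation factors; VIF is one common diagnostic and may differ from the measure the authors used, and the predictor data below are synthetic:

```python
"""Sketch of a multicollinearity check on candidate predictors via variance
inflation factors (VIF). Synthetic terrain-like predictors are used."""
import numpy as np

def vif(X):
    """Return the VIF of each column of X (n_samples x n_predictors)."""
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([others, np.ones(len(y))])     # add intercept
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
slope = rng.normal(size=500)
curvature = rng.normal(size=500)
wetness = 0.9 * slope + 0.1 * rng.normal(size=500)         # nearly collinear with slope
X = np.column_stack([slope, curvature, wetness])
print("VIF (slope, curvature, wetness):", np.round(vif(X), 2))
```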
Study of fatigue crack propagation in Ti-1Al-1Mn based on the calculation of cold work evolution
NASA Astrophysics Data System (ADS)
Plekhov, O. A.; Kostina, A. A.
2017-05-01
This work proposes a numerical method for lifetime assessment of metallic materials based on consideration of the energy balance at the crack tip. The method is based on the evaluation of the stored energy value per loading cycle. To calculate the stored and dissipated parts of the deformation energy, an elasto-plastic phenomenological model of the energy balance in metals under deformation and failure processes was proposed. The key point of the model is a strain-type internal variable describing the energy storage process. This parameter is introduced, based on the statistical description of defect evolution in metals, as a second-order tensor and has the meaning of an additional strain due to the initiation and growth of defects. The fatigue crack growth rate was calculated in the framework of a stationary crack approach (several loading cycles were considered for every crack length to estimate the energy balance at the crack tip). The application of the proposed algorithm is illustrated by the calculation of the lifetime of a Ti-1Al-1Mn compact tension specimen under cyclic loading.
Simulation of 2D rarefied gas flows based on the numerical solution of the Boltzmann equation
NASA Astrophysics Data System (ADS)
Poleshkin, Sergey O.; Malkov, Ewgenij A.; Kudryavtsev, Alexey N.; Shershnev, Anton A.; Bondar, Yevgeniy A.; Kohanchik, A. A.
2017-10-01
There are various methods for calculating rarefied gas flows, in particular, statistical methods and deterministic methods based on the finite-difference solutions of the Boltzmann nonlinear kinetic equation and on the solutions of model kinetic equations. There is no universal method; each has its disadvantages in terms of efficiency or accuracy. The choice of the method depends on the problem to be solved and on parameters of calculated flows. Qualitative theoretical arguments help to determine the range of parameters of effectively solved problems for each method; however, it is advisable to perform comparative tests of calculations of the classical problems performed by different methods and with different parameters to have quantitative confirmation of this reasoning. The paper provides the results of the calculations performed by the authors with the help of the Direct Simulation Monte Carlo method and finite-difference methods of solving the Boltzmann equation and model kinetic equations. Based on this comparison, conclusions are made on selecting a particular method for flow simulations in various ranges of flow parameters.
Earth Observation System Flight Dynamics System Covariance Realism
NASA Technical Reports Server (NTRS)
Zaidi, Waqar H.; Tracewell, David
2016-01-01
This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: collection and calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
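A hedged sketch of one common form of covariance realism test statistic, the squared Mahalanobis distance of the definitive-minus-propagated state compared against a chi-square reference; the states and covariance below are synthetic, not EOS Aqua/Aura products:

```python
"""Sketch of a covariance realism test: if the propagated covariance is
realistic, the squared Mahalanobis distance between propagated and definitive
states follows a chi-square distribution with dim(state) degrees of freedom."""
import numpy as np
from scipy import stats

def realism_statistic(x_prop, x_def, P_prop):
    dx = x_def - x_prop
    return float(dx @ np.linalg.solve(P_prop, dx))

rng = np.random.default_rng(2)
P = np.diag([1.0, 1.0, 1.0, 1e-4, 1e-4, 1e-4])      # position (km), velocity (km/s) variances
samples = [realism_statistic(np.zeros(6),
                             rng.multivariate_normal(np.zeros(6), P), P)
           for _ in range(1000)]

# Compare the empirical statistics against the chi-square(6) reference
print("mean statistic:", np.mean(samples), "(expected ~6)")
print("KS test vs chi2(6):", stats.kstest(samples, "chi2", args=(6,)))
```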
Statistics of polymer extensions in turbulent channel flow.
Bagheri, Faranggis; Mitra, Dhrubaditya; Perlekar, Prasad; Brandt, Luca
2012-11-01
We present direct numerical simulations of turbulent channel flow with passive Lagrangian polymers. To understand the polymer behavior we investigate the behavior of infinitesimal line elements and calculate the probability distribution function (PDF) of finite-time Lyapunov exponents and from them the corresponding Cramer's function for the channel flow. We study the statistics of polymer elongation for both the Oldroyd-B model (for Weissenberg number Wi < 1) and the FENE model. We use the location of the minima of the Cramer's function to define the Weissenberg number precisely such that we observe the coil-stretch transition at Wi ≈ 1. We find agreement with earlier analytical predictions for the PDF of polymer extensions made by Balkovsky, Fouxon, and Lebedev [Phys. Rev. Lett. 84, 4765 (2000)] for linear polymers (Oldroyd-B model) with Wi < 1 and by Chertkov [Phys. Rev. Lett. 84, 4761 (2000)] for the nonlinear FENE-P model of polymers. For Wi > 1 (FENE model) the polymers are significantly more stretched near the wall than at the center of the channel, where the flow is closer to homogeneous isotropic turbulence. Furthermore, near the wall the polymers show a strong tendency to orient along the streamwise direction of the flow, but near the center line the statistics of orientation of the polymers is consistent with analogous results obtained recently in homogeneous and isotropic flows.
NASA Astrophysics Data System (ADS)
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2011-07-01
We study the configurational structure of the point-island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density pnXY(x,y), which represents the probability density to have nucleation at position x within a gap of size y. Our proposed functional form for pnXY(x,y) describes excellently the statistical behavior of the system. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system.
Statistical mechanics of money and income
NASA Astrophysics Data System (ADS)
Dragulescu, Adrian; Yakovenko, Victor
2001-03-01
Money: In a closed economic system, money is conserved. Thus, by analogy with energy, the equilibrium probability distribution of money will assume the exponential Boltzmann-Gibbs form characterized by an effective temperature. We demonstrate how the Boltzmann-Gibbs distribution emerges in computer simulations of economic models. We discuss thermal machines, the role of debt, and models with broken time-reversal symmetry for which the Boltzmann-Gibbs law does not hold. Reference: A. Dragulescu and V. M. Yakovenko, "Statistical mechanics of money", Eur. Phys. J. B 17, 723-729 (2000), [cond-mat/0001432]. Income: Using tax and census data, we demonstrate that the distribution of individual income in the United States is exponential. Our calculated Lorenz curve without fitting parameters and Gini coefficient 1/2 agree well with the data. We derive the distribution function of income for families with two earners and show that it also agrees well with the data. The family data for the period 1947-1994 fit the Lorenz curve and Gini coefficient 3/8=0.375 calculated for two-earners families. Reference: A. Dragulescu and V. M. Yakovenko, "Evidence for the exponential distribution of income in the USA", cond-mat/0008305.
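The emergence of the exponential Boltzmann-Gibbs money distribution can be reproduced with a few lines of simulation; the exchange rule and population size below are illustrative choices, not necessarily those of the cited papers:

```python
"""Sketch of a pairwise-exchange simulation yielding an approximately
exponential money distribution, P(m) ~ exp(-m / T) with T = average money."""
import numpy as np

rng = np.random.default_rng(3)
n_agents, steps, avg_money = 1000, 500_000, 100.0
money = np.full(n_agents, avg_money)

for _ in range(steps):
    i, j = rng.choice(n_agents, size=2, replace=False)
    total = money[i] + money[j]
    share = rng.uniform(0.0, total)        # money is conserved in each exchange
    money[i], money[j] = share, total - share

# Fit the log-histogram slope: temperature T should approach the average money
hist, edges = np.histogram(money, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
slope = np.polyfit(centers[hist > 0], np.log(hist[hist > 0]), 1)[0]
print("fitted temperature T ~", -1.0 / slope, "(expected ~", avg_money, ")")
```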
NASA Astrophysics Data System (ADS)
Hartini, Entin; Andiwijayakusuma, Dinan
2014-09-01
This research was carried out on the development of a code for uncertainty analysis based on a statistical approach for assessing the uncertainty of input parameters. In the burn-up calculation of fuel, uncertainty analysis was performed for the input parameters fuel density, coolant density and fuel temperature. This calculation is performed during irradiation using Monte Carlo N-Particle Transport. The uncertainty method is based on probability density functions. The code is implemented as a Python script coupled with MCNPX for criticality and burn-up calculations. The simulation is done by modeling the PWR terrace geometry with MCNPX at a power of 54 MW with UO2 pellet fuel. The calculation uses the continuous-energy cross-section data library ENDF/B-VI. MCNPX requires nuclear data in ACE format. Interfaces were developed for obtaining nuclear data in ACE format from ENDF through NJOY processing for temperature changes over a certain range.
Multifractality and freezing phenomena in random energy landscapes: An introduction
NASA Astrophysics Data System (ADS)
Fyodorov, Yan V.
2010-10-01
We start our lectures with introducing and discussing the general notion of multifractality spectrum for random measures on lattices, and how it can be probed using moments of that measure. Then we show that the Boltzmann-Gibbs probability distributions generated by logarithmically correlated random potentials provide a simple yet non-trivial example of disorder-induced multifractal measures. The typical values of the multifractality exponents can be extracted from calculating the free energy of the associated Statistical Mechanics problem. To succeed in such a calculation we introduce and discuss in some detail two analytically tractable models for logarithmically correlated potentials. The first model uses a special definition of distances between points in space and is based on the idea of multiplicative cascades which originated in the theory of turbulent motion. It is essentially equivalent to statistical mechanics of directed polymers on disordered trees studied long ago by Derrida and Spohn (1988) in Ref. [12]. In this way we introduce the notion of the freezing transition, which is identified with an abrupt change in the multifractality spectrum. The second model, which allows for explicit analytical evaluation of the free energy, is the infinite-dimensional version of the problem, which can be solved by employing the replica trick. In particular, the latter version allows one to identify the freezing phenomenon with a mechanism of the replica symmetry breaking (RSB) and to elucidate its physical meaning. The corresponding one-step RSB solution turns out to be marginally stable everywhere in the low-temperature phase. We finish with a short discussion of recent developments and extensions of models with logarithmic correlations, in particular in the context of extreme value statistics. The first appendix summarizes the standard elementary information about Gaussian integrals and related subjects, and introduces the notion of the Gaussian free field characterized by logarithmic correlations. Three other appendices provide the detailed exposition of a few technical details underlying the replica analysis of the model discussed in the lectures.
Analysing the teleconnection systems affecting the climate of the Carpathian Basin
NASA Astrophysics Data System (ADS)
Kristóf, Erzsébet; Bartholy, Judit; Pongrácz, Rita
2017-04-01
Nowadays, the increase of the global average near-surface air temperature is unequivocal. Atmospheric low-frequency variabilities have substantial impacts on climate variables such as air temperature and precipitation. Therefore, assessing their effects is essential to improve global and regional climate model simulations for the 21st century. The North Atlantic Oscillation (NAO) is one of the best-known atmospheric teleconnection patterns affecting the Carpathian Basin in Central Europe. Besides NAO, we aim to analyse other interannual-to-decadal teleconnection patterns, which might have significant impacts on the Carpathian Basin, namely, the East Atlantic/West Russia pattern, the Scandinavian pattern, the Mediterranean Oscillation, and the North-Sea Caspian Pattern. For this purpose primarily the European Centre for Medium-Range Weather Forecasts' (ECMWF) ERA-20C atmospheric reanalysis dataset and multivariate statistical methods are used. The indices of each teleconnection pattern and their correlations with temperature and precipitation will be calculated for the period of 1961-1990. On the basis of these data first the long range (i. e. seasonal and/or annual scale) forecast ability is evaluated. Then, we aim to calculate the same indices of the relevant teleconnection patterns for the historical and future simulations of Coupled Model Intercomparison Project Phase 5 (CMIP5) models and compare them against each other using statistical methods. Our ultimate goal is to examine all available CMIP5 models and evaluate their abilities to reproduce the selected teleconnection systems. Thus, climate predictions for the 21st century for the Carpathian Basin may be improved using the best-performing models among all CMIP5 model simulations.
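A minimal sketch of the index-versus-climate-variable correlation step, using synthetic stand-ins for a teleconnection index and Carpathian Basin temperature anomalies over 1961-1990:

```python
"""Sketch of correlating a teleconnection index with a regional climate
variable. The monthly series are synthetic stand-ins; real indices would be
derived from the reanalysis fields mentioned above."""
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
months = 30 * 12                                    # 1961-1990
nao_index = rng.normal(size=months)
temp_anom = 0.4 * nao_index + rng.normal(scale=0.9, size=months)

# All-month and winter-season correlations
r_all, p_all = stats.pearsonr(nao_index, temp_anom)
winter = np.arange(months) % 12 < 3                 # crude season mask for the sketch
r_win, p_win = stats.pearsonr(nao_index[winter], temp_anom[winter])
print(f"all months: r={r_all:.2f} (p={p_all:.1e}); winter: r={r_win:.2f} (p={p_win:.1e})")
```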
NASA Technical Reports Server (NTRS)
Lien, Guo-Yuan; Kalnay, Eugenia; Miyoshi, Takemasa; Huffman, George J.
2016-01-01
Assimilation of satellite precipitation data into numerical models presents several difficulties, with two of the most important being the non-Gaussian error distributions associated with precipitation, and large model and observation errors. As a result, improving the model forecast beyond a few hours by assimilating precipitation has been found to be difficult. To identify the challenges and propose practical solutions to assimilation of precipitation, statistics are calculated for global precipitation in a low-resolution NCEP Global Forecast System (GFS) model and the TRMM Multisatellite Precipitation Analysis (TMPA). The samples are constructed using the same model with the same forecast period, observation variables, and resolution as in the follow-on GFS-TMPA precipitation assimilation experiments presented in the companion paper. The statistical results indicate that the T62 and T126 GFS models generally have positive bias in precipitation compared to the TMPA observations, and that the simulation of the marine stratocumulus precipitation is not realistic in the T62 GFS model. It is necessary to apply to precipitation either the commonly used logarithm transformation or the newly proposed Gaussian transformation to obtain a better relationship between the model and observational precipitation. When the Gaussian transformations are separately applied to the model and observational precipitation, they serve as a bias correction that corrects the amplitude-dependent biases. In addition, using a spatially and/or temporally averaged precipitation variable, such as the 6-h accumulated precipitation, should be advantageous for precipitation assimilation.
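The two transformations mentioned above can be sketched as follows; the log offset and the rank-based construction of the Gaussian transform are illustrative choices, and the precipitation sample is synthetic:

```python
"""Sketch of transforming precipitation before assimilation: a log transform
and a rank-based Gaussian transform (Gaussian anamorphosis). The exact
transform used in the study may differ."""
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
precip = rng.gamma(shape=0.4, scale=5.0, size=10_000)      # skewed, many near-zero values

# Option 1: logarithm transform with a small offset for zero/near-zero rain
log_precip = np.log(precip + 0.1)

# Option 2: Gaussian transform - map empirical quantiles onto a standard normal
ranks = stats.rankdata(precip, method="average")
gauss_precip = stats.norm.ppf(ranks / (precip.size + 1.0))

print("skewness raw / log / gaussian:",
      stats.skew(precip), stats.skew(log_precip), stats.skew(gauss_precip))
```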
Statistical Analysis of Spectral Properties and Prosodic Parameters of Emotional Speech
NASA Astrophysics Data System (ADS)
Přibil, J.; Přibilová, A.
2009-01-01
The paper addresses the reflection of microintonation and spectral properties in male and female acted emotional speech. The microintonation component of speech melody is analyzed regarding its spectral and statistical parameters. According to psychological research of emotional speech, different emotions are accompanied by different spectral noise. We control its amount by spectral flatness, according to which the high-frequency noise is mixed into voiced frames during cepstral speech synthesis. Our experiments are aimed at statistical analysis of cepstral coefficient values and ranges of spectral flatness in three emotions (joy, sadness, anger), and a neutral state for comparison. Calculated histograms of spectral flatness distribution are visually compared and modelled by a Gamma probability distribution. Histograms of cepstral coefficient distribution are evaluated and compared using skewness and kurtosis. The achieved statistical results show good correlation between male and female voices for all emotional states portrayed by several Czech and Slovak professional actors.
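A short sketch of the two quantities analysed above, spectral flatness of speech frames and a Gamma fit to its distribution, using a synthetic signal and illustrative frame parameters:

```python
"""Sketch: spectral flatness of short frames and a Gamma fit to its histogram.
The signal, frame length and FFT windowing are illustrative choices."""
import numpy as np
from scipy import stats

def spectral_flatness(frame):
    """Geometric mean / arithmetic mean of the power spectrum (0..1)."""
    power = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2 + 1e-12
    return np.exp(np.mean(np.log(power))) / np.mean(power)

rng = np.random.default_rng(6)
fs, frame_len = 16000, 512
signal = np.sin(2 * np.pi * 150 * np.arange(fs * 5) / fs) + 0.05 * rng.normal(size=fs * 5)

flatness = np.array([spectral_flatness(signal[i:i + frame_len])
                     for i in range(0, len(signal) - frame_len, frame_len)])

# Model the flatness histogram with a Gamma probability distribution
shape, loc, scale = stats.gamma.fit(flatness, floc=0.0)
print(f"Gamma fit: shape={shape:.2f}, scale={scale:.4f}, mean flatness={flatness.mean():.4f}")
```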
Individual risk of cutaneous melanoma in New Zealand: developing a clinical prediction aid.
Sneyd, Mary Jane; Cameron, Claire; Cox, Brian
2014-05-22
New Zealand and Australia have the highest melanoma incidence rates worldwide. In New Zealand, both the incidence and thickness have been increasing. Clinical decisions require accurate risk prediction but a simple list of genetic, phenotypic and behavioural risk factors is inadequate to estimate individual risk as the risk factors for melanoma have complex interactions. In order to offer tailored clinical management strategies, we developed a New Zealand prediction model to estimate individual 5-year absolute risk of melanoma. A population-based case-control study (368 cases and 270 controls) of melanoma risk factors provided estimates of relative risks for fair-skinned New Zealanders aged 20-79 years. Model selection techniques and multivariate logistic regression were used to determine the important predictors. The relative risks for predictors were combined with baseline melanoma incidence rates and non-melanoma mortality rates to calculate individual probabilities of developing melanoma within 5 years. For women, the best model included skin colour, number of moles ≥5 mm on the right arm, having a 1st degree relative with large moles, and a personal history of non-melanoma skin cancer (NMSC). The model correctly classified 68% of participants; the C-statistic was 0.74. For men, the best model included age, place of occupation up to age 18 years, number of moles ≥5 mm on the right arm, birthplace, and a history of NMSC. The model correctly classified 67% of cases; the C-statistic was 0.71. We have developed the first New Zealand risk prediction model that calculates individual absolute 5-year risk of melanoma. This model will aid physicians to identify individuals at high risk, allowing them to individually target surveillance and other management strategies, and thereby reduce the high melanoma burden in New Zealand.
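Converting relative risks and baseline rates into a 5-year absolute risk can be sketched with a standard competing-risk hazard calculation; the rates, the relative risk and the hazard form below are hypothetical illustrations, not the study's fitted model:

```python
"""Sketch of combining a relative risk with baseline melanoma incidence and
competing non-melanoma mortality to obtain a 5-year absolute risk."""
import math

def absolute_risk_5yr(rr, baseline_incidence, other_mortality, years=5):
    """Cumulative probability of melanoma within `years`, allowing for the
    competing risk of dying from other causes first (constant annual hazards)."""
    h_mel = rr * baseline_incidence          # individual melanoma hazard (per year)
    h_all = h_mel + other_mortality          # total hazard of leaving the at-risk state
    cum_risk = 0.0
    for t in range(years):
        alive_and_free = math.exp(-h_all * t)
        cum_risk += alive_and_free * (h_mel / h_all) * (1.0 - math.exp(-h_all))
    return cum_risk

# Hypothetical individual: relative risk 3 vs. a baseline incidence of 0.08%/year
print(f"5-year risk: {absolute_risk_5yr(rr=3.0, baseline_incidence=0.0008, other_mortality=0.01):.3%}")
```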
Majumdar, Subhabrata; Basak, Subhash C
2018-04-26
Proper validation is an important aspect of QSAR modelling. External validation is one of the widely used validation methods in QSAR where the model is built on a subset of the data and validated on the rest of the samples. However, its effectiveness for datasets with a small number of samples but large number of predictors remains suspect. Calculating hundreds or thousands of molecular descriptors using currently available software has become the norm in QSAR research, owing to computational advances in the past few decades. Thus, for n chemical compounds and p descriptors calculated for each molecule, the typical chemometric dataset today has high value of p but small n (i.e. n < p). Motivated by the evidence of inadequacies of external validation in estimating the true predictive capability of a statistical model in recent literature, this paper performs an extensive and comparative study of this method with several other validation techniques. We compared four validation methods: leave-one-out, K-fold, external and multi-split validation, using statistical models built using the LASSO regression, which simultaneously performs variable selection and modelling. We used 300 simulated datasets and one real dataset of 95 congeneric amine mutagens for this evaluation. External validation metrics have high variation among different random splits of the data, hence are not recommended for predictive QSAR models. LOO has the overall best performance among all validation methods applied in our scenario. Results from external validation are too unstable for the datasets we analyzed. Based on our findings, we recommend using the LOO procedure for validating QSAR predictive models built on high-dimensional small-sample data.
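The comparison of leave-one-out and external validation for a LASSO model on an n < p dataset can be sketched as follows; the simulated data dimensions and regularization strength are illustrative:

```python
"""Sketch: LASSO models evaluated by leave-one-out (LOO) cross-validation
versus single external splits on a small-n / large-p dataset."""
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import LeaveOneOut, train_test_split, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
n, p = 95, 300                                     # n < p, as in typical QSAR descriptor sets
X = rng.normal(size=(n, p))
y = X[:, :5] @ np.array([2.0, -1.5, 1.0, 0.5, -0.5]) + rng.normal(scale=1.0, size=n)

model = Lasso(alpha=0.1, max_iter=10_000)

# Leave-one-out: every compound is predicted by a model trained on the rest
loo_pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
print("LOO q2:", round(r2_score(y, loo_pred), 3))

# External validation: performance depends strongly on the particular split
for seed in range(3):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=seed)
    ext_r2 = r2_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(f"external split {seed}: R2 = {ext_r2:.3f}")
```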
Solubility of gases and liquids in glassy polymers.
De Angelis, Maria Grazia; Sarti, Giulio C
2011-01-01
This review discusses a macroscopic thermodynamic procedure to calculate the solubility of gases, vapors, and liquids in glassy polymers that is based on the general procedure provided by the nonequilibrium thermodynamics for glassy polymers (NET-GP) method. Several examples are presented using various nonequilibrium (NE) models including lattice fluid (NELF), statistical associating fluid theory (NE-SAFT), and perturbed hard sphere chain (NE-PHSC). Particular applications illustrate the calculation of infinite-dilution solubility coefficients in different glassy polymers and the prediction of solubility isotherms for different gases and vapors in pure polymers as well as in polymer blends. The determination of model parameters is discussed, and the predictive abilities of the models are illustrated. Attention is also given to the solubility of gas mixtures and solubility isotherms in nanocomposite mixed matrices. The fractional free volume determined from solubility data can be used to correlate solute diffusivities in mixed matrices.
Report to DHS on Summer Internship 2006
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckwith, R H
2006-07-26
This summer I worked at Lawrence Livermore National Laboratory in a bioforensics collection and extraction research group under David Camp. The group is involved with researching efficiencies of various methods for collecting bioforensic evidence from crime scenes. The different methods under examination are a wipe, swab, HVAC filter and a vacuum. The vacuum is something that has particularly gone uncharacterized. My time was spent mostly on modeling and calculations work, but at the end of the summer I completed my internship with a few experiments to supplement my calculations. I had two major projects this summer. My first major project this summer involved fluid mechanics modeling of collection and extraction situations. This work examines different fluid dynamic models for the case of a micron spore attached to a fiber. The second project I was involved with was a statistical analysis of the different sampling techniques.
Infinite variance in fermion quantum Monte Carlo calculations.
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
NASA Astrophysics Data System (ADS)
Ionita, M.; Grosfeld, K.; Scholz, P.; Lohmann, G.
2016-12-01
Sea ice in both Polar Regions is an important indicator for the expression of global climate change and its polar amplification. Consequently, a broad information interest exists on sea ice, its coverage, variability and long term change. Knowledge on sea ice requires high quality data on ice extent, thickness and its dynamics. However, its predictability depends on various climate parameters and conditions. In order to provide insights into the potential development of a monthly/seasonal signal, we developed a robust statistical model based on ocean heat content, sea surface temperature and atmospheric variables to calculate an estimate of the September minimum sea ice extent for every year. Although previous statistical attempts at monthly/seasonal forecasts of the September sea ice minimum show a relatively reduced skill, here it is shown that more than 97% (r = 0.98) of the September sea ice extent can be predicted three months in advance by using the previous months' conditions via a multiple linear regression model based on global sea surface temperature (SST), mean sea level pressure (SLP), air temperature at 850 hPa (TT850), surface winds and sea ice extent persistence. The statistical model is based on the identification of regions with stable teleconnections between the predictors (climatological parameters) and the predictand (here sea ice extent). The results based on our statistical model contribute to the sea ice prediction network for the sea ice outlook report (https://www.arcus.org/sipn) and could provide a tool for identifying relevant regions and climate parameters that are important for the sea ice development in the Arctic and for detecting sensitive and critical regions in global coupled climate models with focus on sea ice formation.
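A minimal sketch of the multiple linear regression step, assuming synthetic predictor and extent series in place of the SST, SLP, TT850, wind and persistence fields averaged over the stable-teleconnection regions:

```python
"""Sketch: predict September sea ice extent from predictors observed three
months earlier via multiple linear regression. All series are synthetic."""
import numpy as np

rng = np.random.default_rng(8)
years = 35
sst = rng.normal(size=years)
slp = rng.normal(size=years)
tt850 = rng.normal(size=years)
june_extent = rng.normal(size=years)                        # persistence predictor
september_extent = (10.5 - 0.8 * sst - 0.3 * tt850 + 0.6 * june_extent
                    + rng.normal(scale=0.2, size=years))

X = np.column_stack([sst, slp, tt850, june_extent, np.ones(years)])
coef, *_ = np.linalg.lstsq(X, september_extent, rcond=None)
pred = X @ coef
r = np.corrcoef(pred, september_extent)[0, 1]
print("regression coefficients:", np.round(coef, 2))
print("in-sample correlation r =", round(r, 3))             # hindcast skill, not true forecast skill
```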
A new method for detecting small and dim targets in starry background
NASA Astrophysics Data System (ADS)
Yao, Rui; Zhang, Yanning; Jiang, Lei
2011-08-01
Detection of small visible optical space targets is one of the key issues in the research of long-range early warning and space debris surveillance. The SNR (signal-to-noise ratio) of the target is very low because of the influence of the imaging device itself. Random noise and background movement also increase the difficulty of target detection. In order to detect small visible optical space targets effectively and rapidly, we propose a novel detection method based on statistical theory. First, we establish a reasonable statistical model of the visible optical space image. Second, we extract SIFT (Scale-Invariant Feature Transform) features from the image frames, calculate the transform relationship, and then use it to compensate the movement of the whole visual field. Third, the influence of stars is removed by using an inter-frame difference method. We find a segmentation threshold to differentiate candidate targets from noise by using the Otsu method. Finally, we calculate a statistical quantity for every pixel position in the image to judge whether the target is present. Theoretical analysis shows the relationship between false alarm probability and detection probability at different SNRs. The experimental results show that this method can detect targets efficiently, even when a target passes in front of stars.
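A hedged sketch of the detection pipeline (SIFT registration, inter-frame differencing, Otsu thresholding) using OpenCV; the frame file names in the usage comment are hypothetical:

```python
"""Sketch of the pipeline described above: SIFT-based registration to
compensate field-of-view motion, inter-frame differencing to suppress stars,
and Otsu thresholding to segment candidate targets. Requires opencv-python;
frames are assumed to be 8-bit grayscale images."""
import cv2
import numpy as np

def detect_candidates(prev_frame, curr_frame):
    # 1. Estimate global motion between frames from matched SIFT keypoints
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_frame, None)
    kp2, des2 = sift.detectAndCompute(curr_frame, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    # 2. Warp the previous frame and difference it to remove the (registered) stars
    warped = cv2.warpPerspective(prev_frame, H, prev_frame.shape[::-1])
    diff = cv2.absdiff(curr_frame, warped)

    # 3. Otsu threshold on the difference image to segment candidate targets
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

# Usage (hypothetical file names):
# mask = detect_candidates(cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE),
#                          cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE))
```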