A first principles calculation and statistical mechanics modeling of defects in Al-H system
NASA Astrophysics Data System (ADS)
Ji, Min; Wang, Cai-Zhuang; Ho, Kai-Ming
2007-03-01
The behavior of defects and hydrogen in Al was investigated by first-principles calculations and statistical mechanics modeling. The formation energies of different defects in the Al+H system, such as the Al vacancy, interstitial H, and multiple H atoms in an Al vacancy, were calculated by first-principles methods. Defect concentrations in thermodynamic equilibrium were studied by total free-energy calculations including configurational entropy and defect-defect interactions, from the low-concentration limit to the hydride limit. In our grand canonical ensemble model, the hydrogen chemical potential under different environments plays an important role in determining the defect concentrations and properties in the Al-H system.
Statistical mechanics based on fractional classical and quantum mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korichi, Z.; Meftah, M. T., E-mail: mewalid@yahoo.com
2014-03-15
The purpose of this work is to study some problems in statistical mechanics based on fractional classical and quantum mechanics. In the first stage we present the thermodynamic properties of the classical ideal gas and a system of N classical oscillators; in both cases the Hamiltonian contains fractional exponents of the phase-space variables (position and momentum). In the second stage, in the context of fractional quantum mechanics, we calculate the thermodynamic properties of black-body radiation, study Bose-Einstein statistics with the related problem of condensation, and study Fermi-Dirac statistics.
Quantum statistical mechanics of dense partially ionized hydrogen
NASA Technical Reports Server (NTRS)
Dewitt, H. E.; Rogers, F. J.
1972-01-01
The theory of dense hydrogen plasmas beginning with the two component quantum grand partition function is reviewed. It is shown that ionization equilibrium and molecular dissociation equilibrium can be treated in the same manner with proper consideration of all two-body states. A quantum perturbation expansion is used to give an accurate calculation of the equation of state of the gas for any degree of dissociation and ionization. The statistical mechanical calculation of the plasma equation of state is intended for stellar interiors. The general approach is extended to the calculation of the equation of state of the outer layers of large planets.
Brands, H; Maassen, S R; Clercx, H J
1999-09-01
In this paper the applicability of a statistical-mechanical theory to freely decaying two-dimensional (2D) turbulence on a bounded domain is investigated. We consider an ensemble of direct numerical simulations in a square box with stress-free boundaries, with a Reynolds number that is of the same order as in experiments on 2D decaying Navier-Stokes turbulence. The results of these simulations are compared with the corresponding statistical equilibria, calculated from different stages of the evolution. It is shown that the statistical equilibria calculated from early times of the Navier-Stokes evolution do not correspond to the dynamical quasistationary states. At best, the global topological structure is correctly predicted from a relatively late time in the Navier-Stokes evolution, when the quasistationary state has almost been reached. This failure of the (basically inviscid) statistical-mechanical theory is related to viscous dissipation and net leakage of vorticity in the Navier-Stokes dynamics at moderate values of the Reynolds number.
Mechanics and statistics of the worm-like chain
NASA Astrophysics Data System (ADS)
Marantan, Andrew; Mahadevan, L.
2018-02-01
The worm-like chain model is a simple continuum model for the statistical mechanics of a flexible polymer subject to an external force. We offer a tutorial introduction to it using three approaches. First, we use a mesoscopic view, treating a long polymer (in two dimensions) as though it were made of many groups of correlated links or "clinks," allowing us to calculate its average extension as a function of the external force via scaling arguments. We then provide a standard statistical mechanics approach, obtaining the average extension by two different means: the equipartition theorem and the partition function. Finally, we work in a probabilistic framework, taking advantage of the Gaussian properties of the chain in the large-force limit to improve upon the previous calculations of the average extension.
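The partition-function route described in this tutorial can be made concrete in a few lines. The sketch below is an illustration, not code from the paper: for a two-dimensional chain of independent links of length ℓ pulled by a force f, the single-link Boltzmann weight is exp(b cos θ) with b = fℓ/kT, and the average extension per link ⟨cos θ⟩ = d ln Z/db is evaluated by quadrature on a periodic grid.

```python
import numpy as np

def avg_extension(b, n=20001):
    """Average extension per link <cos(theta)> for a 2D chain of
    independent links under a dimensionless force b = f*l/(kT).
    The single-link partition function Z(b) = integral of exp(b*cos(theta))
    is evaluated numerically; the grid spacing cancels in the ratio."""
    theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
    w = np.exp(b * np.cos(theta))              # Boltzmann weight per link
    return (np.cos(theta) * w).sum() / w.sum()

# Extension grows monotonically with force and saturates toward 1.
for b in (0.1, 1.0, 10.0, 50.0):
    print(b, avg_extension(b))
```

For large b this reproduces the Gaussian (large-force) limit mentioned in the abstract, ⟨cos θ⟩ ≈ 1 - 1/(2b) in two dimensions.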
Using Bayes' theorem for free energy calculations
NASA Astrophysics Data System (ADS)
Rogers, David M.
Statistical mechanics is fundamentally based on calculating the probabilities of molecular-scale events. Although Bayes' theorem has generally been recognized as providing key guiding principles for the setup and analysis of statistical experiments [83], classical frequentist models still predominate in the world of computational experimentation. As a starting point for widespread application of Bayesian methods in statistical mechanics, we investigate the central quantity of free energies from this perspective. This dissertation thus reviews the basics of Bayes' view of probability theory, and the maximum entropy formulation of statistical mechanics, before providing examples of its application to several advanced research areas. We first apply Bayes' theorem to a multinomial counting problem in order to determine inner shell and hard sphere solvation free energy components of Quasi-Chemical Theory [140]. We proceed to consider the general problem of free energy calculations from samples of interaction energy distributions. From there, we turn to spline-based estimation of the potential of mean force [142], and empirical modeling of observed dynamics using integrator matching. The results of this research are expected to advance the state of the art in coarse-graining methods, as they allow a systematic connection from high-resolution (atomic) to low-resolution (coarse) structure and dynamics. In total, our work on these problems constitutes a critical starting point for further application of Bayes' theorem in all areas of statistical mechanics. It is hoped that the understanding so gained will allow for improvements in comparisons between theory and experiment.
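The multinomial counting step admits a compact Bayesian sketch. The example below is illustrative only (the occupancy counts are invented, not data from the dissertation): a Dirichlet prior over occupancy-state probabilities gives a posterior whose mean yields free-energy components as -ln p in units of kT, and posterior sampling quantifies their uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical counts of inner-shell occupancy states (e.g. n = 0..4
# solvent molecules) observed in a simulation; illustrative numbers only.
counts = np.array([5, 40, 120, 60, 10])
alpha = np.ones_like(counts)  # uniform Dirichlet prior

# Posterior over state probabilities is Dirichlet(counts + alpha);
# its mean is the Bayes estimate of each occupancy probability.
post_mean = (counts + alpha) / (counts + alpha).sum()

# Free energy (units of kT) of the empty state relative to the most
# probable state: dF = -ln p0 + ln pmax.
dF = -np.log(post_mean[0]) + np.log(post_mean.max())

# Posterior sampling gives a credible interval on dF for free.
samples = rng.dirichlet(counts + alpha, size=2000)
dF_samples = -np.log(samples[:, 0]) + np.log(samples.max(axis=1))
lo, hi = np.percentile(dF_samples, [2.5, 97.5])
print(dF, (lo, hi))
```

The point of the Bayesian route, as in the dissertation, is that the same counting data delivers both the estimate and its uncertainty.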
ERIC Educational Resources Information Center
Findley, Bret R.; Mylon, Steven E.
2008-01-01
We introduce a computer exercise that bridges spectroscopy and thermodynamics using statistical mechanics and the experimental data taken from the commonly used laboratory exercise involving the rotational-vibrational spectrum of HCl. Based on the results from the analysis of their HCl spectrum, students calculate bulk thermodynamic properties…
NASA Astrophysics Data System (ADS)
Ha, Vu Thi Thanh; Hung, Vu Van; Hanh, Pham Thi Minh; Tuyen, Nguyen Viet; Hai, Tran Thi; Hieu, Ho Khac
2018-03-01
The thermodynamic and mechanical properties of III-V zinc-blende AlP and InP semiconductors and their alloys have been studied in detail using the statistical moment method, taking into account the anharmonicity of the lattice vibrations. The nearest-neighbor distance, thermal expansion coefficient, bulk moduli, and specific heats at constant volume and constant pressure of zinc-blende AlP, InP and AlyIn1-yP alloys are calculated as functions of temperature. The statistical moment method calculations are performed using the many-body Stillinger-Weber potential. The concentration dependences of the thermodynamic quantities of zinc-blende AlyIn1-yP crystals have also been discussed and compared with experimental results. Our results are in reasonable agreement with earlier density functional theory calculations and can provide useful qualitative information for future experiments. The moment method can then be extended to study the atomistic structure and thermodynamic properties of nanoscale materials as well.
Evolution of cosmic string networks
NASA Technical Reports Server (NTRS)
Albrecht, Andreas; Turok, Neil
1989-01-01
A discussion of the evolution and observable consequences of a network of cosmic strings is given. A simple model for the evolution of the string network is presented, and related to the statistical mechanics of string networks. The model predicts the long string density throughout the history of the universe from a single parameter, which researchers calculate in radiation era simulations. The statistical mechanics arguments indicate a particular thermal form for the spectrum of loops chopped off the network. Detailed numerical simulations of string networks in expanding backgrounds are performed to test the model. Consequences for large scale structure, the microwave and gravity wave backgrounds, nucleosynthesis and gravitational lensing are calculated.
Computer program for calculation of ideal gas thermodynamic data
NASA Technical Reports Server (NTRS)
Gordon, S.; Mc Bride, B. J.
1968-01-01
Computer program calculates ideal gas thermodynamic properties for any species for which molecular constant data are available. Partition functions and derivatives from formulas based on statistical mechanics are provided by the program, which is written in FORTRAN IV and MAP.
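The kind of computation such a program performs can be sketched briefly (illustrative Python rather than the original FORTRAN; the molecular constant is generic): the rotational partition function of a linear molecule is summed directly from its characteristic temperature, and internal energy and heat capacity follow from temperature derivatives of ln q.

```python
import math

def q_rot(T, theta_r, jmax=200):
    """Rotational partition function of a linear molecule,
    q = sum over J of (2J+1) * exp(-theta_r * J(J+1) / T)."""
    return sum((2 * J + 1) * math.exp(-theta_r * J * (J + 1) / T)
               for J in range(jmax))

def cv_rot(T, theta_r, h=1e-3):
    """Rotational heat capacity per molecule in units of k, from
    Cv/k = d/dT [ T^2 d(ln q)/dT ], by central finite differences."""
    def u(t):  # internal energy / k = T^2 d(ln q)/dT
        return t**2 * (math.log(q_rot(t + h, theta_r)) -
                       math.log(q_rot(t - h, theta_r))) / (2 * h)
    return (u(T + h) - u(T - h)) / (2 * h)

# For a rigid rotor with theta_r ~ 2.78 K (roughly CO), the classical
# equipartition value Cv/k = 1 is recovered at room temperature.
print(cv_rot(300.0, 2.78))
```

At temperatures well below theta_r the rotational modes freeze out and the computed Cv falls toward zero, as expected.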
A statistical mechanical approach to restricted integer partition functions
NASA Astrophysics Data System (ADS)
Zhou, Chi-Chun; Dai, Wu-Sheng
2018-05-01
The main aim of this paper is twofold: (1) suggesting a statistical mechanical approach to the calculation of the generating function of restricted integer partition functions, which count the number of ways of writing an integer as a sum of other integers under certain restrictions. In this approach, the generating function of restricted integer partition functions is constructed from the canonical partition functions of various quantum gases. (2) Introducing a new type of restricted integer partition function corresponding to general statistics, a generalization of Gentile statistics in statistical mechanics; many kinds of restricted integer partition functions are special cases of this one. Moreover, with statistical mechanics as a bridge, we reveal a mathematical fact: the generating function of a restricted integer partition function is just a symmetric function, a class of functions invariant under the action of permutation groups. Using this approach, we provide expressions for several restricted integer partition functions as examples.
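The gas/partition correspondence can be illustrated numerically (a generic sketch, not the paper's own construction): the generating function of unrestricted partitions, the product over k of 1/(1 - x^k), is the canonical partition function of a Bose gas, while the product of (1 + x^k) for partitions into distinct parts corresponds to a Fermi gas. Both coefficients can be computed by polynomial dynamic programming.

```python
def partitions(n, distinct=False):
    """Number of integer partitions of n. With distinct=True, no part
    may repeat (the 'fermionic' restricted case); otherwise parts may
    repeat freely (the 'bosonic' unrestricted case)."""
    p = [1] + [0] * n  # coefficients of the generating function
    for k in range(1, n + 1):
        if distinct:
            # multiply by (1 + x^k): part k used at most once
            for i in range(n, k - 1, -1):
                p[i] += p[i - k]
        else:
            # multiply by 1/(1 - x^k): part k used any number of times
            for i in range(k, n + 1):
                p[i] += p[i - k]
    return p[n]

print(partitions(5))                 # -> 7
print(partitions(5, distinct=True))  # -> 3  ({5}, {4,1}, {3,2})
```

Other occupancy restrictions (e.g. each part used at most q times, as in Gentile statistics) amount to swapping in a different per-mode factor in the same loop.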
Clinical calculators in hospital medicine: Availability, classification, and needs.
Dziadzko, Mikhail A; Gajic, Ognjen; Pickering, Brian W; Herasevich, Vitaly
2016-09-01
Clinical calculators are widely used in modern clinical practice, but are not generally integrated into electronic health record (EHR) systems. Important barriers to the integration of clinical calculators into existing EHR systems include the need for real-time calculation, human-calculator interaction, and data source requirements. The objective of this study was to identify, classify, and evaluate the use of available clinical calculators for clinicians in the hospital setting. Dedicated online resources with medical calculators and providers of aggregated medical information were queried for readily available clinical calculators. Calculators were mapped by clinical category, mechanism of calculation, and the goal of calculation. Online statistics from selected Internet resources and clinician opinion were used to assess the use of clinical calculators. One hundred seventy-six readily available calculators in 4 categories, 6 primary specialties, and 40 subspecialties were identified. The goals of calculation included prediction, severity scoring, risk estimation, diagnosis, and decision-making aid. A combination of summation logic with cutoffs or rules was the most frequent mechanism of computation. Combined results, online resource statistics, and clinician opinion identified the 13 most utilized calculators. Although not an exhaustive list, a total of 176 validated calculators were identified, classified, and evaluated for usefulness. Most of these calculators are used for adult patients in the critical care or internal medicine settings. Thirteen of the 176 clinical calculators were determined to be useful in our institution. All of these calculators have an interface for manual input.
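The dominant mechanism the survey identifies, summation logic with cutoffs, can be sketched generically. The criteria, thresholds, and bands below are invented for illustration and are not any validated clinical score:

```python
def sum_score(findings, criteria):
    """Generic summation-with-cutoff calculator: add one point per
    criterion met, then map the total to a category via cutoffs."""
    return sum(1 for name, test in criteria if test(findings[name]))

# Hypothetical criteria, for illustration only.
criteria = [
    ("age",       lambda v: v >= 65),
    ("resp_rate", lambda v: v >= 30),
    ("sbp",       lambda v: v < 90),
]

def risk_band(score):
    """Hypothetical cutoff mapping the summed score to a risk category."""
    return "low" if score <= 1 else "high"

s = sum_score({"age": 70, "resp_rate": 24, "sbp": 85}, criteria)
print(s, risk_band(s))  # -> 2 high
```

The manual-input interfaces the study describes wrap exactly this kind of logic; EHR integration replaces the hand-entered `findings` dictionary with record data.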
Statistical Interpretation of the Local Field Inside Dielectrics.
ERIC Educational Resources Information Center
Berrera, Ruben G.; Mello, P. A.
1982-01-01
Compares several derivations of the Clausius-Mossotti relation to analyze consistently the nature of the approximations used and their range of applicability. Also presents a statistical-mechanical calculation of the local field for a classical system of harmonic oscillators interacting via the Coulomb potential. (Author/SK)
Gómez-Carrasco, Susana; González-Sánchez, Lola; Aguado, Alfredo; Sanz-Sanz, Cristina; Zanchet, Alexandre; Roncero, Octavio
2012-09-07
In this work we present a dynamically biased statistical model to describe the evolution of the title reaction from a statistical to a more direct mechanism, using quasi-classical trajectories (QCT). The method is based on the one previously proposed by Park and Light [J. Chem. Phys. 126, 044305 (2007)]. A recent global potential energy surface is used here to calculate the capture probabilities, instead of the long-range ion-induced dipole interactions. The dynamical constraints are introduced by considering a scrambling matrix which depends on energy and determines the probability of the identity/hop/exchange mechanisms. These probabilities are calculated using QCT. It is found that the high zero-point energy of the fragments is transferred to the rest of the degrees of freedom, which shortens the lifetime of H(5)(+) complexes and, as a consequence, the exchange mechanism occurs in lower proportion. The zero-point energy (ZPE) is not properly described in quasi-classical trajectory calculations, and an approximation is made in which the initial ZPE of the reactants is reduced in QCT calculations to obtain a new ZPE-biased scrambling matrix. This reduction of the ZPE is explained by the need to correct the purely classical level number of the H(5)(+) complex, as done in classical simulations of unimolecular processes and to obtain equivalent quantum and classical rate constants using Rice-Ramsperger-Kassel-Marcus theory. This matrix allows one to obtain a ratio of hop/exchange mechanisms, α(T), in rather good agreement with recent experimental results by Crabtree et al. [J. Chem. Phys. 134, 194311 (2011)] at room temperature. At lower temperatures, however, the present simulations predict too-high ratios because the biased scrambling matrix is not statistical enough. This demonstrates the importance of applying quantum methods to simulate this reaction at the low temperatures of astrophysical interest.
Characterization of protein folding by a Φ-value calculation with a statistical-mechanical model.
Wako, Hiroshi; Abe, Haruo
2016-01-01
The Φ-value analysis approach provides information about transition-state structures along the folding pathway of a protein by measuring the effects of an amino acid mutation on folding kinetics. Here we compared the theoretically calculated Φ values of 27 proteins with their experimentally observed Φ values; the theoretical values were calculated using a simple statistical-mechanical model of protein folding. The theoretically calculated Φ values reflected the corresponding experimentally observed Φ values with reasonable accuracy for many of the proteins, but not for all. The correlation between the theoretically calculated and experimentally observed Φ values strongly depends on whether the protein-folding mechanism assumed in the model holds true in real proteins. In other words, the correlation coefficient can be expected to illuminate the folding mechanisms of proteins, providing the answer to the question of which model more accurately describes protein folding: the framework model or the nucleation-condensation model. In addition, we tried to characterize protein folding with respect to various properties of each protein apart from the size and fold class, such as the free-energy profile, contact-order profile, and sensitivity to the parameters used in the Φ-value calculation. The results showed that any one of these properties alone was not enough to explain protein folding, although each one played a significant role in it. We have confirmed the importance of characterizing protein folding from various perspectives. Our findings have also highlighted that protein folding is highly variable and unique across different proteins, and this should be considered while pursuing a unified theory of protein folding.
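The experimental side of Φ-value analysis reduces to a short calculation: Φ = ΔΔG‡ / ΔΔG_eq, obtained from the folding rate constants k and folding equilibrium constants K of wild type and mutant. A minimal sketch with hypothetical numbers (not data from the study):

```python
import math

R, T = 8.314e-3, 298.0  # gas constant in kJ/(mol*K), temperature in K

def phi_value(k_wt, k_mut, K_wt, K_mut):
    """Phi = ddG(transition state) / ddG(equilibrium), computed from
    folding rate constants k and equilibrium constants K for the wild
    type (wt) and a point mutant (mut)."""
    ddG_ts = R * T * math.log(k_wt / k_mut)   # barrier perturbation
    ddG_eq = R * T * math.log(K_wt / K_mut)   # stability perturbation
    return ddG_ts / ddG_eq

# Hypothetical mutation: folding slowed 5x, stability reduced 10x,
# giving Phi = ln(5)/ln(10) ~ 0.7 (residue largely structured in the
# transition state).
print(phi_value(k_wt=100.0, k_mut=20.0, K_wt=1e4, K_mut=1e3))
```

Values near 1 indicate native-like structure at the mutated site in the transition state; values near 0 indicate an unstructured site, which is what the model comparison in the abstract exploits.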
Experimental Quiet Sprocket Design and Noise Reduction in Tracked Vehicles
1981-04-01
Keywords: track and suspension noise reduction; statistical energy analysis; mechanical impedance measurement; finite element modal analysis; noise sources. ... sprocket shape and idler attachment are different. These differences were investigated using the concepts of statistical energy analysis for hull-generated noise ... calculated from statistical energy analysis. Such an approach will be valid within reasonable limits for frequencies of about 200 Hz and
The Heat Capacity of Ideal Gases
ERIC Educational Resources Information Center
Scott, Robert L.
2006-01-01
The heat capacity of an ideal gas has been shown to be calculable directly by statistical mechanics if the energies of the quantum states are known. However, unless one makes careful calculations, it is not easy for a student to understand the qualitative results. Why there are maxima (and occasionally minima) in heat capacity-temperature curves…
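One case where such a maximum can be computed in a few lines is a two-level system (the Schottky anomaly), offered here as a generic illustration rather than the article's own example: C/k = x² eˣ/(1 + eˣ)² with x = ε/kT has a peak at intermediate temperature and vanishes in both limits.

```python
import math

def schottky_c(t):
    """Heat capacity per particle (units of k) of a two-level system
    with energy gap epsilon, as a function of t = kT/epsilon."""
    x = 1.0 / t                     # x = epsilon / (kT)
    return x**2 * math.exp(x) / (1.0 + math.exp(x))**2

# Scan reduced temperature and locate the maximum of the curve.
ts = [0.05 * i for i in range(1, 101)]
cs = [schottky_c(t) for t in ts]
t_peak = ts[cs.index(max(cs))]
print(t_peak, max(cs))  # peak near kT/epsilon ~ 0.4
```

The qualitative point matches the abstract: the heat capacity is not monotonic in temperature, and the shape follows directly from the energies of the quantum states.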
Realistic thermodynamic and statistical-mechanical measures for neural synchronization.
Kim, Sang-Yoon; Lim, Woochang
2014-04-15
Synchronized brain rhythms, associated with diverse cognitive functions, have been observed in electrical recordings of brain activity. Neural synchronization may be well described by using the population-averaged global potential VG in computational neuroscience. The time-averaged fluctuation of VG plays the role of a "thermodynamic" order parameter O used for describing the synchrony-asynchrony transition in neural systems. Population spike synchronization may be well visualized in the raster plot of neural spikes. The degree of neural synchronization seen in the raster plot is well measured in terms of a "statistical-mechanical" spike-based measure Ms introduced by considering the occupation and pacing patterns of spikes. The global potential VG is also used to give a reference global cycle for the calculation of Ms. Hence, VG becomes an important collective quantity because it is associated with the calculation of both O and Ms. However, it is practically difficult to obtain VG directly in real experiments. To overcome this difficulty, instead of VG we employ the instantaneous population spike rate (IPSR), which can be obtained in experiments, and develop realistic thermodynamic and statistical-mechanical measures based on the IPSR to allow practical characterization of neural synchronization in both computational and experimental neuroscience. In particular, more accurate characterization of weak sparse spike synchronization can be achieved in terms of the realistic statistical-mechanical IPSR-based measure, in comparison with the conventional measure based on VG.
Commentary: Decaying Numerical Skills. "I Can't Divide by 60 in My Head!"
ERIC Educational Resources Information Center
Parslow, Graham R.
2010-01-01
As an undergraduate in the 1960s, the author mostly used a slide rule for calculations and a Marchant-brand motor-operated mechanical calculator for statistics. This was after an elementary education replete with learning multiplication tables and taking speed and accuracy tests in arithmetic. Times have changed and assuming even basic calculation…
Koner, Debasish; Barrios, Lizandra; González-Lezana, Tomás; Panda, Aditya N
2014-09-21
A real wave packet based time-dependent method and a statistical quantum method have been used to study the He + NeH(+) (v, j) reaction with the reactant in various ro-vibrational states, on a recently calculated ab initio ground-state potential energy surface. Both the wave packet and statistical quantum calculations were carried out within the centrifugal sudden approximation as well as using the exact Hamiltonian. Quantum reaction probabilities exhibit a dense oscillatory pattern for smaller total angular momentum values, which is a signature of resonances in a complex-forming mechanism for the title reaction. Significant differences, found between exact and approximate quantum reaction cross sections, highlight the importance of including Coriolis coupling in the calculations. Statistical results are in fairly good agreement with the exact quantum results for the ground ro-vibrational state of the reactant. Vibrational excitation greatly enhances the reaction cross sections, whereas rotational excitation has a relatively small effect on the reaction. The nature of the reaction cross section curves depends on the initial vibrational state of the reactant and is typical of a late-barrier-type potential energy profile.
[Micro-simulation of firms' heterogeneity on pollution intensity and regional characteristics].
Zhao, Nan; Liu, Yi; Chen, Ji-Ning
2009-11-01
Within a given industrial sector, firms are heterogeneous in pollution intensity. Errors arise if the sector's average pollution intensity, calculated from the limited number of firms in the environmental statistics database, is used to represent the sector's regional economic-environmental status. Based on a production function that includes environmental depletion as an input, a micro-simulation model of firms' operational decision making is proposed, with which the heterogeneity of firms' pollution intensity can be described mechanistically. Taking the mechanical manufacturing sector in Deyang city in 2005 as the case, the model's parameters were estimated, and the actual COD emission intensities of the firms in the environmental statistics database were properly matched by the simulation. The model's results also show that the regional average COD emission intensity calculated from the environmental statistics firms (0.0026 t per 10,000 yuan of fixed assets; 0.0015 t per 10,000 yuan of production value) is lower than that calculated from all firms in the region (0.0030 t per 10,000 yuan of fixed assets; 0.0023 t per 10,000 yuan of production value). The differences among the average intensities of the six counties are significant as well. These regional characteristics of pollution intensity are attributed to the sector's inner structure (the distributions of firm scale and technology) and its spatial variation.
Grosz, R; Stephanopoulos, G
1983-09-01
The need for the determination of the free energy of formation of biomass in bioreactor second law balances is well established. A statistical mechanical method for the calculation of the free energy of formation of E. coli biomass is introduced. In this method, biomass is modelled to consist of a system of biopolymer networks. The partition function of this system is proposed to consist of acoustic and optical modes of vibration. Acoustic modes are described by Tarasov's model, the parameters of which are evaluated with the aid of low-temperature calorimetric data for the crystalline protein bovine chymotrypsinogen A. The optical modes are described by considering the low-temperature thermodynamic properties of biological monomer crystals such as amino acid crystals. Upper and lower bounds are placed on the entropy to establish the maximum error associated with the statistical method. The upper bound is determined by endowing the monomers in biomass with ideal gas properties. The lower bound is obtained by limiting the monomers to complete immobility. On this basis, the free energy of formation is fixed to within 10%. Proposals are made with regard to experimental verification of the calculated value and extension of the calculation to other types of biomass.
Theoretical Insight into Shocked Gases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leiding, Jeffery Allen
2016-09-29
I present the results of statistical mechanical calculations on shocked molecular gases. This work provides insight into the general behavior of shock Hugoniots of gas phase molecular targets with varying initial pressures. The dissociation behavior of the molecules is emphasized. Impedance matching calculations are performed to determine the maximum degree of dissociation accessible for a given flyer velocity as a function of initial gas pressure.
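The flavor of a Hugoniot calculation can be conveyed, far more crudely than the molecular statistical mechanics of the report, with a non-dissociating ideal gas: from u_s = ((γ+1)/4)u_p + sqrt((((γ+1)/4)u_p)² + c0²), mass conservation gives the compression ρ/ρ0 = u_s/(u_s - u_p), which approaches (γ+1)/(γ-1) for strong shocks. The symbols and numbers below are generic, not from the report.

```python
import math

def hugoniot(u_p, gamma=5.0 / 3.0, c0=1.0):
    """Shock velocity u_s and compression rho/rho0 for an ideal gas
    with adiabatic index gamma and initial sound speed c0, shocked to
    particle (piston) velocity u_p."""
    a = (gamma + 1.0) / 4.0 * u_p
    u_s = a + math.sqrt(a * a + c0 * c0)      # shock velocity
    compression = u_s / (u_s - u_p)           # from mass conservation
    return u_s, compression

# Strong-shock limit: compression -> (gamma+1)/(gamma-1) = 4 for a
# monatomic ideal gas, independent of drive strength.
for u_p in (0.1, 1.0, 100.0):
    print(u_p, hugoniot(u_p))
```

Dissociation, the focus of the report, softens this picture by absorbing shock energy into bond breaking, which is why the molecular statistical mechanics treatment is needed there.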
Condensate statistics in interacting and ideal dilute Bose gases
Kocharovsky; Kocharovsky; Scully
2000-03-13
We obtain analytical formulas for the statistics, in particular, for the characteristic function and all cumulants, of the Bose-Einstein condensate in dilute weakly interacting and ideal equilibrium gases in the canonical ensemble via the particle-number-conserving operator formalism of Girardeau and Arnowitt. We prove that the ground-state occupation statistics is not Gaussian even in the thermodynamic limit. We calculate the effect of Bogoliubov coupling on suppression of ground-state occupation fluctuations and show that they are governed by a pair-correlation, squeezing mechanism.
Scheraga, H A; Paine, G H
1986-01-01
We are using a variety of theoretical and computational techniques to study protein structure, protein folding, and higher-order structures. Our earlier work involved treatments of liquid water and aqueous solutions of nonpolar and polar solutes, computations of the stabilities of the fundamental structures of proteins and their packing arrangements, conformations of small cyclic and open-chain peptides, structures of fibrous proteins (collagen), structures of homologous globular proteins, introduction of special procedures as constraints during energy minimization of globular proteins, and structures of enzyme-substrate complexes. Recently, we presented a new methodology for predicting polypeptide structure (described here); the method is based on the calculation of the probable and average conformation of a polypeptide chain by the application of equilibrium statistical mechanics in conjunction with an adaptive, importance sampling Monte Carlo algorithm. As a test, it was applied to Met-enkephalin.
Multi-fidelity machine learning models for accurate bandgap predictions of solids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab
Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset, and semi-local and hybrid exchange-correlation functionals within density functional theory as two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high-fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high-throughput property predictions in a significant way.
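The two-fidelity idea can be sketched in a toy form (a simplified Kennedy-O'Hagan-style recursion on synthetic data, not the authors' co-kriging implementation; the functions, kernel length scale, and scale factor ρ are all invented): fit a Gaussian process to many cheap low-fidelity samples, then fit a second GP to the discrepancy at the few points where expensive high-fidelity values exist.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel between 1-D point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def gp_fit_predict(x, y, xs, noise=1e-6):
    """Zero-mean GP regression with an RBF kernel; posterior mean at xs."""
    K = rbf(x, x) + noise * np.eye(len(x))
    return rbf(xs, x) @ np.linalg.solve(K, y)

# Synthetic fidelities (stand-ins for cheap/expensive DFT functionals):
f_lo = lambda x: np.sin(8 * x)                   # cheap, biased model
f_hi = lambda x: 1.2 * np.sin(8 * x) + 0.3 * x   # expensive "truth"

x_lo = np.linspace(0, 1, 25)   # many cheap samples
x_hi = np.linspace(0, 1, 6)    # few expensive samples
xs = np.linspace(0, 1, 101)    # prediction grid

mu_lo = gp_fit_predict(x_lo, f_lo(x_lo), xs)          # low-fidelity surrogate
mu_lo_at_hi = gp_fit_predict(x_lo, f_lo(x_lo), x_hi)
rho = 1.2  # scale factor, assumed known for this sketch
delta = gp_fit_predict(x_hi, f_hi(x_hi) - rho * mu_lo_at_hi, xs)
mu_hi = rho * mu_lo + delta    # multi-fidelity prediction

err = np.max(np.abs(mu_hi - f_hi(xs)))
print(err)
```

With only six expensive samples, the corrected surrogate tracks the high-fidelity curve closely because the cheap data pins down the oscillatory structure; the full co-kriging formulation additionally learns ρ and provides posterior variances.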
Multi-fidelity machine learning models for accurate bandgap predictions of solids
Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab
2016-12-28
Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. In using a set of 600 elpasolite compounds as an example dataset and using semi-local and hybrid exchange correlation functionals within density functional theory as two levels of fidelities, we demonstrate the excellent learning performance of the method against actual high fidelitymore » quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high throughput property predictions in a significant way.« less
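Full co-kriging requires an auto-regressive covariance structure, but the core idea — let a cheap surrogate carry most of the signal and learn only the correction to the expensive fidelity — can be sketched with an ordinary Gaussian process. The sketch below is illustrative only: the two `low`/`high` functions are synthetic stand-ins for semi-local and hybrid DFT bandgaps, and the RBF length scale is an assumed value, not a fitted hyperparameter.

```python
import numpy as np

def rbf(X1, X2, length=0.5):
    """Squared-exponential kernel between two 1-D point sets."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * d2 / length**2)

def gp_predict(Xtr, ytr, Xte, noise=1e-6, length=0.5):
    """GP posterior mean at test points, zero prior mean."""
    K = rbf(Xtr, Xtr, length) + noise * np.eye(len(Xtr))
    alpha = np.linalg.solve(K, ytr)
    return rbf(Xte, Xtr, length) @ alpha

# Synthetic "fidelities": a cheap estimate plus a smooth correction
low  = lambda x: np.sin(2 * np.pi * x)          # low-fidelity stand-in
high = lambda x: low(x) + 0.3 * x**2            # high-fidelity "truth"

Xtr = np.linspace(0.0, 1.0, 12)
delta_tr = high(Xtr) - low(Xtr)                 # train the GP on the difference

Xte = np.array([0.33, 0.71])
pred = low(Xte) + gp_predict(Xtr, delta_tr, Xte)
print(np.max(np.abs(pred - high(Xte))))         # small residual
```

Because the correction is much smoother than the raw property, few high-fidelity points are needed once the low-fidelity trend is subtracted.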
A quantum framework for likelihood ratios
NASA Astrophysics Data System (ADS)
Bond, Rachael L.; He, Yang-Hui; Ormerod, Thomas C.
The ability to calculate precise likelihood ratios is fundamental to science, from Quantum Information Theory through to Quantum State Estimation. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes’ theorem either defaults to the marginal probability driven “naive Bayes’ classifier”, or requires the use of compensatory expectation-maximization techniques. This paper takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement, and demonstrates that Bayes’ theorem is a special case of a more general quantum mechanical expression.
Statistical mechanics of neocortical interactions: Path-integral evolution of short-term memory
NASA Astrophysics Data System (ADS)
Ingber, Lester
1994-05-01
Previous papers in this series on the statistical mechanics of neocortical interactions (SMNI) have detailed a development from the relatively microscopic scales of neurons up to the macroscopic scales recorded by electroencephalography (EEG), requiring an intermediate mesocolumnar scale to be developed at the scale of minicolumns (~10² neurons) and macrocolumns (~10⁵ neurons). Opportunity was taken to view SMNI as sets of statistical constraints on neuronal interactions, not necessarily describing specific synaptic or neuronal mechanisms, and on some aspects of short-term memory (STM), e.g., its capacity, stability, and duration. A recently developed C-language code, PATHINT, provides a non-Monte Carlo technique for calculating the dynamic evolution of arbitrary-dimension (subject to computer resources) nonlinear Lagrangians, such as that derived for the two-variable SMNI problem. Here, PATHINT is used to explicitly detail the evolution of the SMNI constraints on STM.
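The non-Monte Carlo idea behind PATHINT — repeatedly applying a short-time conditional probability to a density on a grid — can be sketched in one dimension. Everything here is illustrative: the quadratic potential, grid, and time step are assumed values, not the SMNI Lagrangian.

```python
import numpy as np

# Grid and short-time Gaussian propagator for dx = -V'(x) dt + sqrt(2 D dt) dW
x = np.linspace(-4.0, 4.0, 201)
dx, dt, D = x[1] - x[0], 0.01, 0.5
drift = -x                                   # V(x) = x^2/2, so -V'(x) = -x

# Propagator matrix: G[i, j] ~ prob. of moving from x[j] to x[i] in one step dt
mean = x + drift * dt
G = np.exp(-(x[:, None] - mean[None, :]) ** 2 / (4.0 * D * dt))
G /= G.sum(axis=0, keepdims=True)            # column-normalize: conserve mass

p = np.exp(-((x - 2.0) ** 2))                # initial density, centered at x = 2
p /= p.sum() * dx
for _ in range(200):                         # evolve to t = 2
    p = G @ p

mean_x = np.sum(x * p) / np.sum(p)
print(round(mean_x, 2))                      # mean relaxes toward the well minimum
```

The column normalization is what makes the scheme deterministic and probability-conserving, in contrast to Monte Carlo sampling of the same propagator.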
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rawnsley, K.; Swaby, P.
1996-08-01
It is increasingly acknowledged that in order to understand and forecast the behavior of fracture-influenced reservoirs we must attempt to reproduce the fracture system geometry and use this as a basis for fluid flow calculation. This article presents a recently developed fracture modelling prototype designed specifically for use in hydrocarbon reservoir environments. The prototype "FRAME" (FRActure Modelling Environment) aims to provide a tool for generating realistic 3D fracture systems within a reservoir model, constrained to the known geology of the reservoir by both mechanical and statistical considerations, which can then be used as a basis for fluid flow calculation. Two newly developed modelling techniques are used. The first is an interactive tool which allows complex fault surfaces and their associated deformations to be reproduced. The second is a "genetic" model which grows fracture patterns from seeds using conceptual models of fracture development. The user defines the mechanical input and can retrieve all the statistics of the growing fractures for comparison with assumed statistical distributions for the reservoir fractures. Input parameters include growth rate, fracture interaction characteristics, orientation maps, and density maps. More traditional statistical stochastic fracture models are also incorporated. FRAME is designed to allow the geologist to input hard or soft data including seismically defined surfaces, well fractures, outcrop models, analogue or numerical mechanical models, or geological "feeling". The geologist is not restricted to "a priori" models of fracture patterns that may not correspond to the data.
NASA Astrophysics Data System (ADS)
Jiang, Quan; Zhong, Shan; Cui, Jie; Feng, Xia-Ting; Song, Leibo
2016-12-01
We investigated the statistical characteristics and probability distributions of the mechanical parameters of natural rock using triaxial compression tests. Twenty cores of Jinping marble were tested at each of five levels of confining stress (5, 10, 20, 30, and 40 MPa). From these full stress-strain data, we summarized the numerical characteristics and determined the probability distribution forms of several important mechanical parameters, including deformational parameters, characteristic strengths, characteristic strains, and failure angle. The statistical analysis of the mechanical parameters of rock provided new information about the marble's probabilistic distribution characteristics: the normal and log-normal distributions were appropriate for describing random strengths of rock; the coefficients of variation of the peak strengths had no relationship to the confining stress; the only acceptable random distribution for both Young's modulus and Poisson's ratio was the log-normal function; and the cohesive strength had a different probability distribution pattern than the frictional angle. The triaxial tests and statistical analysis also provided experimental evidence for deciding the minimum reliable number of experimental samples and for picking appropriate parameter distributions for use in reliability calculations in rock engineering.
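The distribution checks described above can be sketched with synthetic data: the sample size (20, matching the number of cores per stress level), the log-normal strength model, and all parameter values below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "peak strength" sample (MPa), drawn log-normally
true_mu, true_sigma = np.log(150.0), 0.10
strength = rng.lognormal(true_mu, true_sigma, size=20)

# Coefficient of variation: dimensionless spread measure
cv = strength.std(ddof=1) / strength.mean()

# Log-normal fit by moments of the log-data
mu_hat = np.log(strength).mean()
sigma_hat = np.log(strength).std(ddof=1)
print(round(cv, 3), round(np.exp(mu_hat), 1))   # CV and median strength
```

Repeating such a fit for increasing sample sizes is one way to judge the "minimum reliable number" of specimens the abstract refers to.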
On the statistical distribution in a deformed solid
NASA Astrophysics Data System (ADS)
Gorobei, N. N.; Luk'yanenko, A. S.
2017-09-01
A modification of the Gibbs distribution for a thermally insulated, mechanically deformed solid is proposed, in which the linear dimensions (shape parameters) of the body are excluded from statistical averaging and included among the macroscopic parameters of state alongside the temperature. Formally, this modification reduces to corresponding additional conditions in the calculation of the statistical sum. The shape parameters and the temperature themselves are found from the conditions of mechanical and thermal equilibrium of the body, and their change is determined using the first law of thermodynamics. Known thermodynamic phenomena are analyzed for a simple model of a solid, an ensemble of anharmonic oscillators, within the proposed formalism to first order in the anharmonicity constant. The modified distribution is considered separately for the classical and quantum temperature regions.
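The first-order treatment in the anharmonicity constant can be checked numerically for a single classical oscillator. This sketch uses assumed units (k = kT = 1) and a quartic term g·x⁴ as the anharmonicity; only the configurational part of the statistical sum is computed.

```python
import numpy as np

# Classical oscillator with quartic anharmonicity: V(x) = k*x^2/2 + g*x^4
k, g, kT = 1.0, 0.01, 1.0
x = np.linspace(-10.0, 10.0, 40001)
dx = x[1] - x[0]

V = 0.5 * k * x**2 + g * x**4
Z_conf = np.sum(np.exp(-V / kT)) * dx        # configurational sum, numeric
Z_harm = np.sqrt(2.0 * np.pi * kT / k)       # harmonic (g = 0) limit

# First order in the anharmonicity constant: Z ~ Z_harm * (1 - 3*g*kT/k^2)
Z_pert = Z_harm * (1.0 - 3.0 * g * kT / k**2)
print(round(Z_conf, 4), round(Z_pert, 4))
```

For small g the two values agree closely; the residual is the second-order term of the perturbation series.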
NASA Astrophysics Data System (ADS)
Smolina, Irina Yu.
2015-10-01
The mechanical properties of a cable are of great importance in the design and strength calculation of flexible cables. The problem of determining the elastic properties and rigidity characteristics of a cable modeled as an anisotropic helical elastic rod is considered. These characteristics are calculated indirectly by means of parameters obtained from statistical processing of experimental data, treated as random quantities. Taking the probabilistic nature of these parameters into account, formulas for estimating the macroscopic elastic moduli of a cable are obtained. Calculating expressions for the macroscopic flexural, shear, and torsional rigidities in terms of the macroscopic elastic characteristics obtained before are presented. Statistical estimates of the rigidity characteristics of some cable grades are given, together with a comparison to the characteristics obtained with a deterministic approach.
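Indirect estimation of rigidity from random input parameters can be sketched by Monte Carlo propagation. The parameter values and the solid-rod formula I = πd⁴/64 below are illustrative assumptions; a real helical-rod model would use the paper's macroscopic moduli instead.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000

# Hypothetical cable parameters as random quantities (mean, std dev)
E = rng.normal(100e9, 5e9, N)       # effective Young's modulus, Pa
d = rng.normal(0.020, 0.0005, N)    # cable diameter, m

I = np.pi * d**4 / 64               # second moment of area (solid-rod proxy)
EI = E * I                          # flexural rigidity samples, N*m^2

print(round(EI.mean(), 1), round(EI.std(ddof=1), 1))
```

The sample standard deviation of EI is exactly the kind of spread a deterministic calculation (single nominal E and d) cannot provide.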
Blow molding electric drives of Mechanical Engineering
NASA Astrophysics Data System (ADS)
Bukhanov, S. S.; Ramazanov, M. A.; Tsirkunenko, A. T.
2018-03-01
The article analyzes the new possibilities offered by adjustable electric drives for the blowing mechanisms of plastics production. The use of new semiconductor converters makes it possible not only to compensate for the instability of the supply network by using special dynamic voltage regulators, but also to improve (correct) the power factor. The economic efficiency of controlled electric drives for blowing mechanisms is calculated. On the basis of statistical analysis, the reliability parameters of the elements of the regulated electric drives under consideration are calculated. It is shown that an increase in the reliability of adjustable electric drives is possible both by overrating the installed power of the electric drive and by using simpler schemes with pulse-vector control.
Improta, Roberto; Vitagliano, Luigi; Esposito, Luciana
2015-11-01
The elucidation of the mutual influence between peptide bond geometry and local conformation has important implications for protein structure refinement, validation, and prediction. To gain insights into the structural determinants and the energetic contributions associated with protein/peptide backbone plasticity, we here report an extensive analysis of the variability of the peptide bond angles by combining statistical analyses of protein structures and quantum mechanics calculations on small model peptide systems. Our analyses demonstrate that all the backbone bond angles strongly depend on the peptide conformation and unveil the existence of regular trends as a function of ψ and/or φ. The excellent agreement of the quantum mechanics calculations with the statistical surveys of protein structures validates the computational scheme employed here and demonstrates that the valence geometry of the protein/peptide backbone is primarily dictated by local interactions. Notably, for the first time we show that the position of the H(α) hydrogen atom, which is an important parameter in NMR structural studies, is also dependent on the local conformation. Most of the trends observed may be satisfactorily explained by invoking steric repulsive interactions; in some specific cases the valence bond variability is also influenced by hydrogen-bond-like interactions. Moreover, we provide a reliable estimate of the energies involved in the interplay between geometry and conformations. © 2015 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gevorkyan, A. S., E-mail: g-ashot@sci.am; Sahakyan, V. V.
We study classical 1D Heisenberg spin glasses in the framework of the nearest-neighbor model. Based on the Hamilton equations, we obtain a system of recurrence equations which allows node-by-node calculations of a spin chain. It is shown that calculation from the first principles of classical mechanics leads to an ℕℙ-hard problem, which, however, in the limit of statistical equilibrium can be calculated by a ℙ algorithm. For the partition function of the ensemble, a new representation is offered in the form of a one-dimensional integral over the spin chains' energy distribution.
NASA Astrophysics Data System (ADS)
Gulvi, Nitin R.; Patel, Priyanka; Badani, Purav M.
2018-04-01
The dissociation of multihalogenated alkyls is observed to be competitive between molecular and atomic elimination products, with factors such as molecular structure, temperature, and pressure known to influence the branching. Hence the present work explores the mechanism and kinetics of atomic (Br) and molecular (HBr and Br2) elimination upon pyrolysis of 1,1- and 1,2-ethyl dibromide (EDB). For this purpose, electronic structure calculations were performed at the DFT and CCSD(T) levels of theory. In addition to the concerted mechanism, an alternative, energetically efficient isomerisation pathway is identified for molecular elimination. The energy calculations are further complemented by a detailed kinetic investigation over a wide range of temperature and pressure, using suitable models such as canonical transition state theory, the statistical adiabatic channel model, and Troe's formalism. Our calculations suggest a high branching ratio for the dehydrohalogenation reaction from both isomers of EDB. The fall-off curves show good agreement between theoretically estimated and experimentally reported values.
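Of the kinetic models named above, canonical transition state theory is the simplest to sketch. The barrier height and the neglect of the activation entropy (ΔS‡ = 0 by default) are illustrative assumptions, not values from the EDB calculations.

```python
import numpy as np

kB = 1.380649e-23    # Boltzmann constant, J/K
h  = 6.62607015e-34  # Planck constant, J*s
R  = 8.314462618     # gas constant, J/(mol*K)

def k_tst(T, Ea, dS=0.0):
    """Canonical TST: k(T) = (kB*T/h) * exp(dS/R) * exp(-Ea/(R*T))."""
    return (kB * T / h) * np.exp(dS / R) * np.exp(-Ea / (R * T))

# Illustrative 250 kJ/mol barrier at pyrolysis-relevant temperatures
for T in (800.0, 1200.0):
    print(f"{T:.0f} K  k = {k_tst(T, 250e3):.3e} s^-1")
```

The steep growth of k(T) between the two temperatures is what makes pyrolysis branching ratios so sensitive to small differences in barrier heights.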
NASA Astrophysics Data System (ADS)
Jasper, Ahren W.; Dawes, Richard
2013-10-01
The lowest-energy singlet (1 1A') and two lowest-energy triplet (1 3A' and 1 3A″) electronic states of CO2 are characterized using dynamically weighted multireference configuration interaction (dw-MRCI+Q) electronic structure theory calculations extrapolated to the complete basis set (CBS) limit. Global analytic representations of the dw-MRCI+Q/CBS singlet and triplet surfaces and of their CASSCF/aug-cc-pVQZ spin-orbit coupling surfaces are obtained via the interpolated moving least squares (IMLS) semiautomated surface fitting method. The spin-forbidden kinetics of the title reaction is calculated using the coupled IMLS surfaces and coherent switches with decay of mixing non-Born-Oppenheimer molecular dynamics. The calculated spin-forbidden association rate coefficient (corresponding to the high pressure limit of the rate coefficient) is 7-35 times larger at 1000-5000 K than the rate coefficient used in many detailed chemical models of combustion. A dynamical analysis of the multistate trajectories is presented. The trajectory calculations reveal direct (nonstatistical) and indirect (statistical) spin-forbidden reaction mechanisms and may be used to test the suitability of transition-state-theory-like statistical methods for spin-forbidden kinetics. Specifically, we consider the appropriateness of the "double passage" approximation, of assuming statistical distributions of seam crossings, and of applications of the unified statistical model for spin-forbidden reactions.
NASA Astrophysics Data System (ADS)
Wong, Kin-Yiu
We have simulated two enzymatic reactions with molecular dynamics (MD) and combined quantum mechanical/molecular mechanical (QM/MM) techniques. One reaction is the hydrolysis of the insecticide paraoxon catalyzed by phosphotriesterase (PTE). PTE is a bioremediation candidate for environments contaminated by toxic nerve gases (e.g., sarin) or pesticides. Based on the potential of mean force (PMF) and the structural changes of the active site during the catalysis, we propose a revised reaction mechanism for PTE. The other reaction is the hydrolysis of the second messenger cyclic adenosine 3'-5'-monophosphate (cAMP) catalyzed by phosphodiesterase (PDE). Cyclic-nucleotide PDE is a vital protein in signal-transduction pathways and thus a popular target for inhibition by drugs (e.g., Viagra™). A two-dimensional (2-D) free-energy profile has been generated showing that the catalysis by PDE proceeds in a two-step SN2-type mechanism. Furthermore, a direct experimental probe of a chemical reaction mechanism is the measurement of kinetic isotope effects (KIEs). KIEs primarily arise from internuclear quantum-statistical effects, e.g., quantum tunneling and quantization of vibration. To systematically incorporate these quantum-statistical effects during MD simulations, we have developed an automated integration-free path-integral (AIF-PI) method based on Kleinert's variational perturbation theory for the centroid density of Feynman's path integral. Using this analytic method, we have performed ab initio path-integral calculations to study the origin of KIEs in several series of proton-transfer reactions from carboxylic acids to aryl-substituted alpha-methoxystyrenes in water. In addition, we demonstrate that the AIF-PI method can be used to systematically compute the exact value of the zero-point energy (beyond the harmonic approximation) by simply minimizing the centroid effective potential.
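The dominant, semiclassical part of a KIE comes from the zero-point energy difference between isotopologues, which is straightforward to estimate (the AIF-PI method adds tunneling and anharmonic corrections on top of this). The stretch frequencies below are typical C-H/C-D values, assumed for illustration.

```python
import numpy as np

kB = 1.380649e-23    # J/K
h  = 6.62607015e-34  # J*s
c  = 2.99792458e10   # speed of light, cm/s

def zpe(freq_cm):
    """Harmonic zero-point energy (J) from a vibrational frequency in cm^-1."""
    return 0.5 * h * c * freq_cm

# Illustrative C-H vs C-D stretch frequencies (cm^-1) at 298.15 K
kie = np.exp((zpe(2900.0) - zpe(2100.0)) / (kB * 298.15))
print(round(kie, 1))   # ~ 6.9, close to the classic primary H/D KIE
```

Measured KIEs larger than this semiclassical bound are the usual fingerprint of the tunneling contributions the path-integral treatment is designed to capture.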
ZERODUR: deterministic approach for strength design
NASA Astrophysics Data System (ADS)
Hartmann, Peter
2012-12-01
There is an increasing demand for zero-expansion glass ceramic ZERODUR substrates capable of enduring higher operational static loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems: the data sets were too small to obtain distribution parameters with sufficient accuracy, and also too small to decide on the validity of the model. This holds especially for the low failure probability levels that are required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher-load applications seemed infeasible. New data have been collected with numbers per set large enough to enable tests of the applicability of the three-parameter Weibull distribution. This distribution proved to fit the data much better. Moreover, it delivers a lower threshold value, i.e., a minimum value for breakage stress, allowing statistical uncertainty to be removed by introducing a deterministic method for calculating design strength. Considerations from the theory of fracture mechanics, proven reliable in proof-test qualifications of delicate structures made from brittle materials, enable fatigue due to stress corrosion to be included in a straightforward way. With the formulae derived, either lifetime can be calculated from a given stress, or allowable stress from a minimum required lifetime. The data, distributions, and design strength calculations for several practically relevant surface conditions of ZERODUR are given. The values obtained are significantly higher than those resulting from the two-parameter Weibull distribution approach and are no longer subject to statistical uncertainty.
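The deterministic element comes from the threshold parameter of the three-parameter Weibull distribution: below the threshold stress the predicted failure probability is exactly zero. A minimal sketch, with all parameter values assumed for illustration rather than taken from the ZERODUR data:

```python
import numpy as np

def weibull3_cdf(stress, sigma0, eta, m):
    """Failure probability; sigma0 is the minimum breakage stress (MPa),
    eta the scale parameter, m the Weibull modulus."""
    s = np.asarray(stress, dtype=float)
    F = np.zeros_like(s)
    above = s > sigma0
    F[above] = 1.0 - np.exp(-(((s[above] - sigma0) / eta) ** m))
    return F

# Hypothetical parameters for one surface condition
sigma0, eta, m = 50.0, 30.0, 2.5
print(weibull3_cdf([40.0, 50.0, 80.0], sigma0, eta, m))
```

Any design stress at or below sigma0 therefore carries zero predicted failure probability, which is what removes the statistical uncertainty of low-probability extrapolation.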
NASA Astrophysics Data System (ADS)
Zhou, Shiqi
2004-07-01
A universal formalism is proposed which enables calculation of the solvent-mediated potential (SMP) between two equal or unequal solute particles of any shape immersed in a solvent reservoir consisting of atomic particles and/or polymer chains or their mixture, by importing a density functional theory externally into the Ornstein-Zernike (OZ) equation systems. Provided the size asymmetry of the solvent bath components is moderate, the present formalism can calculate the SMP in any complex fluid at the present stage of development of statistical mechanics, and therefore avoids the limitations of previous approaches to the SMP. Preliminary calculations indicate the reliability of the present formalism.
Superstatistics with different kinds of distributions in the deformed formalism
NASA Astrophysics Data System (ADS)
Sargolzaeipor, S.; Hassanabadi, H.; Chung, W. S.
2018-03-01
In this article, after first introducing superstatistics, the effective Boltzmann factor in a deformed formalism is derived for modified Dirac delta, uniform, two-level, and Gamma distributions. We then apply superstatistics to four important problems in physics and calculate the thermodynamic properties of the systems. All results reduce to those of ordinary statistical mechanics in the limit case. Furthermore, the effects of all parameters in the problems are calculated and shown graphically.
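For the Gamma case, the superstatistical average of the ordinary Boltzmann factor over the inverse-temperature distribution has a closed form, which makes it a convenient numerical check. This sketch (undeformed formalism, assumed parameter values) compares the numerical integral with the resulting Tsallis-type power law:

```python
import numpy as np
from math import gamma

beta0, n = 1.0, 10.0            # mean inverse temperature and Gamma shape
b = beta0 / n                   # Gamma scale, so that <beta> = n*b = beta0

beta = np.linspace(1e-8, 20.0, 200001)
d = beta[1] - beta[0]
f = beta**(n - 1) * np.exp(-beta / b) / (gamma(n) * b**n)   # Gamma pdf

E = 2.0
B_num = np.sum(f * np.exp(-beta * E)) * d   # effective Boltzmann factor B(E)
B_analytic = (1.0 + b * E) ** (-n)          # Tsallis-type power law, q - 1 = 1/n
print(round(B_num, 5), round(B_analytic, 5))
```

As n grows, the Gamma distribution narrows and B(E) approaches the ordinary factor exp(-beta0*E), recovering standard statistical mechanics as the limit case.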
NASA Astrophysics Data System (ADS)
Motornenko, A.; Bravina, L.; Gorenstein, M. I.; Magner, A. G.; Zabrodin, E.
2018-03-01
Properties of an equilibrated nucleon system are studied within the ultra-relativistic quantum molecular dynamics (UrQMD) transport model. The UrQMD calculations are done within a finite box with periodic boundary conditions. The system achieves thermal equilibrium through nucleon-nucleon elastic scattering. For the UrQMD equilibrium state, nucleon energy spectra, the equation of state, particle number fluctuations, and the shear viscosity η are calculated. The UrQMD results are compared with both statistical mechanics and Chapman-Enskog kinetic theory for a classical system of nucleons with hard-core repulsion.
Duan, Yong; Wu, Chun; Chowdhury, Shibasish; Lee, Mathew C; Xiong, Guoming; Zhang, Wei; Yang, Rong; Cieplak, Piotr; Luo, Ray; Lee, Taisung; Caldwell, James; Wang, Junmei; Kollman, Peter
2003-12-01
Molecular mechanics models have been applied extensively to study the dynamics of proteins and nucleic acids. Here we report the development of a third-generation point-charge all-atom force field for proteins. Following the earlier approach of Cornell et al., the charge set was obtained by fitting to the electrostatic potentials of dipeptides calculated using B3LYP/cc-pVTZ//HF/6-31G** quantum mechanical methods. The main-chain torsion parameters were obtained by fitting to the energy profiles of Ace-Ala-Nme and Ace-Gly-Nme dipeptides calculated using MP2/cc-pVTZ//HF/6-31G** quantum mechanical methods. All other parameters were taken from the existing AMBER database. The major departure from previous force fields is that all quantum mechanical calculations were done in the condensed phase with continuum solvent models and an effective dielectric constant of epsilon = 4. We anticipate that this force field parameter set will address certain critical shortcomings of previous force fields in condensed-phase simulations of proteins. Initial tests on peptides demonstrated a high degree of similarity between the calculated and the statistically measured Ramachandran maps for both Ace-Gly-Nme and Ace-Ala-Nme dipeptides. Some highlights of our results include (1) a well-preserved balance between the extended and helical region distributions, and (2) a favorable type-II poly-proline helical region, in agreement with recent experiments. Backward compatibility between the new and Cornell et al. charge sets, as judged by overall agreement between dipole moments, allows a smooth transition to the new force field in the area of ligand-binding calculations. Test simulations on a large set of proteins are also discussed. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 1999-2012, 2003
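The fitted main-chain torsion parameters enter the force field through a standard Fourier-series dihedral term. A minimal sketch of that functional form, with hypothetical coefficients rather than the actual AMBER parameters:

```python
import numpy as np

def torsion_energy(phi, terms):
    """AMBER-style dihedral energy: sum_n (Vn/2) * (1 + cos(n*phi - gamma))."""
    return sum(0.5 * Vn * (1.0 + np.cos(n * phi - gamma))
               for n, Vn, gamma in terms)

# Hypothetical two-term Fourier fit: (n, Vn in kcal/mol, gamma in rad)
terms = [(1, 0.8, 0.0), (2, 1.2, np.pi)]

phi = np.linspace(-np.pi, np.pi, 7)
print(np.round(torsion_energy(phi, terms), 3))
```

Fitting the Vn and gamma values against QM energy profiles of the dipeptides is exactly the step that shapes the calculated Ramachandran maps.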
Image compression system and method having optimized quantization tables
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)
1998-01-01
A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for compressing a digital image using discrete cosine transforms, and for operating a discrete cosine transform-based digital image compression and decompression system.
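The per-quantizer distortion and rate arrays described above can be sketched directly. The Laplacian stand-in for DCT coefficients and the entropy-based rate proxy are illustrative assumptions; the actual device gathers statistics from real image data.

```python
import numpy as np

rng = np.random.default_rng(2)
coeffs = rng.laplace(0.0, 8.0, size=5000)   # stand-in for DCT coefficients

qs = np.arange(1, 33)                       # candidate quantization values
distortion = np.array(
    [np.mean((coeffs - q * np.round(coeffs / q)) ** 2) for q in qs])

def entropy_bits(symbols):
    """Crude rate proxy: entropy (bits/symbol) of quantized symbols."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rate = np.array([entropy_bits(np.round(coeffs / q)) for q in qs])
print(round(distortion[0], 3), round(rate[-1], 3))
```

Dynamic programming over such (rate, distortion) pairs, one frequency band at a time, is what yields a rate-distortion-optimal quantization table.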
NASA Astrophysics Data System (ADS)
Knani, S.; Aouaini, F.; Bahloul, N.; Khalfaoui, M.; Hachicha, M. A.; Ben Lamine, A.; Kechaou, N.
2014-04-01
An analytical expression for modeling the water adsorption isotherms of food and agricultural products is developed using the statistical mechanics formalism. The model is then used to fit and interpret the isotherms of four varieties of Tunisian olive leaves called “Chemlali, Chemchali, Chetoui and Zarrazi”. The parameters involved in the model, such as the number of adsorbed water molecules per site, n, the receptor site density, NM, and the energetic parameters, a1 and a2, were determined by fitting the experimental adsorption isotherms at temperatures ranging from 303 to 323 K, and the fitted results are interpreted. The model is then applied to calculate the thermodynamic functions which govern the adsorption mechanism, such as the entropy, the Gibbs free enthalpy, and the internal energy.
Faheem, Muhammad; Heyden, Andreas
2014-08-12
We report the development of a quantum mechanics/molecular mechanics free energy perturbation (QM/MM-FEP) method for modeling chemical reactions at metal-water interfaces. This novel solvation scheme combines plane-wave density functional theory (DFT), periodic electrostatic embedded cluster method (PEECM) calculations using Gaussian-type orbitals, and classical molecular dynamics (MD) simulations to obtain a free energy description of a complex metal-water system. We derive a potential of mean force (PMF) of the reaction system within the QM/MM framework. A fixed-size, finite ensemble of MM conformations is used to permit precise evaluation of the PMF of QM coordinates and its gradient defined within this ensemble. Local conformations of adsorbed reaction moieties are optimized using sequential MD-sampling and QM-optimization steps. An approximate reaction coordinate is constructed using a number of interpolated states and the free energy difference between adjacent states is calculated using the QM/MM-FEP method. By avoiding on-the-fly QM calculations and by circumventing the challenges associated with statistical averaging during MD sampling, a computational speedup of multiple orders of magnitude is realized. The method is systematically validated against the results of ab initio QM calculations and demonstrated for C-C cleavage in double-dehydrogenated ethylene glycol on a Pt (111) model surface.
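The free energy perturbation step at the heart of QM/MM-FEP rests on the Zwanzig identity ΔF = −kT ln⟨exp(−ΔU/kT)⟩. A minimal numerical sketch, using a Gaussian-distributed ΔU (for which the exact answer is μ − σ²/2kT) rather than actual QM/MM energy differences:

```python
import numpy as np

rng = np.random.default_rng(3)
kT = 1.0

def fep(dU, kT=1.0):
    """Zwanzig free energy perturbation estimator."""
    return -kT * np.log(np.mean(np.exp(-dU / kT)))

# Gaussian dU has the analytic result dF = mu - sigma^2 / (2*kT)
mu, sigma = 2.0, 0.5
dU = rng.normal(mu, sigma, size=200_000)
print(round(fep(dU), 3), round(mu - sigma**2 / (2 * kT), 3))
```

The exponential average converges quickly only when the ΔU distribution is narrow, which is why the method chains many closely spaced interpolated states along the reaction coordinate.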
SurfKin: an ab initio kinetic code for modeling surface reactions.
Le, Thong Nguyen-Minh; Liu, Bin; Huynh, Lam K
2014-10-05
In this article, we describe a C/C++ program called SurfKin (Surface Kinetics) to construct microkinetic mechanisms for modeling gas-surface reactions. Thermodynamic properties of reaction species are estimated based on density functional theory calculations and statistical mechanics. Rate constants for elementary steps (including adsorption, desorption, and chemical reactions on surfaces) are calculated using the classical collision theory and transition state theory. Methane decomposition and water-gas shift reaction on Ni(111) surface were chosen as test cases to validate the code implementations. The good agreement with literature data suggests this is a powerful tool to facilitate the analysis of complex reactions on surfaces, and thus it helps to effectively construct detailed microkinetic mechanisms for such surface reactions. SurfKin also opens a possibility for designing nanoscale model catalysts. Copyright © 2014 Wiley Periodicals, Inc.
Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics
NASA Astrophysics Data System (ADS)
Abe, Sumiyoshi
2014-11-01
The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.
Quasi-chemical theory of F-(aq): The "no split occupancies rule" revisited
NASA Astrophysics Data System (ADS)
Chaudhari, Mangesh I.; Rempe, Susan B.; Pratt, Lawrence R.
2017-10-01
We use ab initio molecular dynamics (AIMD) calculations and quasi-chemical theory (QCT) to study the inner-shell structure of F-(aq) and to evaluate that single-ion free energy under standard conditions. Following the "no split occupancies" rule, QCT calculations yield a free energy value of -101 kcal/mol under these conditions, in encouraging agreement with tabulated values (-111 kcal/mol). The AIMD calculations served only to guide the definition of an effective inner-shell constraint. QCT naturally includes quantum mechanical effects that can be concerning in more primitive calculations, including electronic polarizability and induction, electron density transfer, electron correlation, molecular/atomic cooperative interactions generally, molecular flexibility, and zero-point motion. No direct assessment of the contribution of dispersion contributions to the internal energies has been attempted here, however. We anticipate that other aqueous halide ions might be treated successfully with QCT, provided that the structure of the underlying statistical mechanical theory is absorbed, i.e., that the "no split occupancies" rule is recognized.
Li, Li-Fen; Liang, Xi-Xia
2017-10-19
The antifreeze activity of type I antifreeze proteins (AFPIs) is studied on the basis of statistical mechanics theory, taking the AFP's adsorption orientation into account. The thermal hysteresis temperatures are calculated by determining the system Gibbs function as well as the AFP molecule coverage rate on the ice-crystal surface. Numerical results for the thermal hysteresis temperatures of AFP9, HPLC-6, and AAAA2kE are obtained both with and without inclusion of the adsorption orientation. The results show that the influence of the adsorption orientation on the thermal hysteresis temperature cannot be neglected, and the theoretical results agree reasonably well with the experimental data.
Honvault, P; Jorfi, M; González-Lezana, T; Faure, A; Pagani, L
2011-07-08
We report extensive, accurate fully quantum, time-independent calculations of cross sections at low collision energies, and rate coefficients at low temperatures for the H⁺ + H₂(v = 0, j) → H⁺ + H₂(v = 0, j') reaction. Different transitions are considered, especially the ortho-para conversion (j = 1 → j' = 0) which is of key importance in astrophysics. This conversion process appears to be very efficient and dominant at low temperature, with a rate coefficient of 4.15 × 10⁻¹⁰ cm³ molecule⁻¹ s⁻¹ at 10 K. The quantum mechanical results are also compared with statistical quantum predictions and the reaction is found to be statistical in the low temperature regime (T < 100 K).
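The astrophysical importance of the ortho-para conversion rate comes from how strongly the equilibrium ortho:para ratio varies with temperature. This can be sketched from the rotational partition sums alone (rigid-rotor levels with an approximate rotational constant for H₂):

```python
import numpy as np

B = 85.4  # rotational constant of H2 expressed in kelvin (B/kB), approximate

def ortho_para_ratio(T, jmax=20):
    """Equilibrium ortho/para H2 ratio from rigid-rotor level populations."""
    j = np.arange(jmax + 1)
    g_spin = np.where(j % 2 == 1, 3, 1)       # nuclear spin weights: odd j ortho
    w = g_spin * (2 * j + 1) * np.exp(-B * j * (j + 1) / T)
    return w[1::2].sum() / w[0::2].sum()      # ortho (odd j) / para (even j)

for T in (10.0, 77.0, 300.0):
    print(f"{T:>5.0f} K  o/p = {ortho_para_ratio(T):.3g}")
```

At high temperature the ratio approaches the spin-statistical value of 3, while at 10 K essentially all equilibrium H₂ is para — which is why an efficient H⁺-catalyzed conversion channel matters so much for cold interstellar clouds.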
Yamamoto, Takeshi
2008-12-28
Conventional quantum chemical solvation theories are based on the mean-field embedding approximation. That is, the electronic wavefunction is calculated in the presence of the mean field of the environment. In this paper a direct quantum mechanical/molecular mechanical (QM/MM) analog of such a mean-field theory is formulated based on variational and perturbative frameworks. In the variational framework, an appropriate QM/MM free energy functional is defined and is minimized in terms of the trial wavefunction that best approximates the true QM wavefunction in a statistically averaged sense. An analytical free energy gradient is obtained, which takes the form of the gradient of the effective QM energy calculated in the averaged MM potential. In the perturbative framework, the above variational procedure is shown to be equivalent to the first-order expansion of the QM energy (in the exact free energy expression) about the self-consistent reference field. This helps clarify the relation between the variational procedure and the exact QM/MM free energy as well as existing QM/MM theories. On this basis, several ways are discussed for evaluating non-mean-field effects (i.e., statistical fluctuations of the QM wavefunction) that are neglected in the mean-field calculation. As an illustration, the method is applied to an SN2 Menshutkin reaction in water, NH₃ + CH₃Cl → NH₃CH₃⁺ + Cl⁻, for which free energy profiles are obtained at the Hartree-Fock, MP2, B3LYP, and BHHLYP levels by integrating the free energy gradient. Non-mean-field effects are evaluated to be <0.5 kcal/mol using a Gaussian fluctuation model for the environment, which suggests that those effects are rather small for the present reaction in water.
Brudnik, Katarzyna; Twarda, Maria; Sarzyński, Dariusz; Jodkowski, Jerzy T
2013-10-01
Ab initio calculations at the G3 level were used in a theoretical description of the kinetics and mechanism of the chlorine abstraction reactions from mono-, di-, tri- and tetra-chloromethane by chlorine atoms. The calculated profiles of the potential energy surface of the reaction systems show that the mechanism of the studied reactions is complex and the Cl-abstraction proceeds via the formation of intermediate complexes. The multi-step reaction mechanism consists of two elementary steps in the case of CCl4 + Cl, and three for the other reactions. Rate constants were calculated using the theoretical method based on the RRKM theory and the simplified version of the statistical adiabatic channel model. The temperature dependencies of the calculated rate constants can be expressed, in the temperature range of 200-3,000 K, as [Formula: see text]. The rate constants for the reverse reactions CH3/CH2Cl/CHCl2/CCl3 + Cl2 were calculated via the equilibrium constants derived theoretically. The kinetic equations [Formula: see text] allow a very good description of the reaction kinetics. The derived expressions are a substantial supplement to the kinetic data necessary to describe and model the complex gas-phase reactions of importance in combustion and atmospheric chemistry.
ERIC Educational Resources Information Center
1971
Computers have effected a comprehensive transformation of chemistry. Computers have greatly enhanced the chemist's ability to do model building, simulations, data refinement and reduction, analysis of data in terms of models, on-line data logging, automated control of experiments, quantum chemistry and statistical mechanical calculations, and…
NASA Astrophysics Data System (ADS)
Obraztsov, S. M.; Konobeev, Yu. V.; Birzhevoy, G. A.; Rachkov, V. I.
2006-12-01
The dependence of the mechanical properties of ferritic/martensitic (F/M) steels on irradiation temperature is of interest because these steels are used as structural materials for fast reactors, fusion reactors, and accelerator-driven systems. Experimental data demonstrating temperature peaks in the physical and mechanical properties of neutron-irradiated pure iron, nickel, vanadium, and austenitic stainless steels are available in the literature. The lack of such information for F/M steels forces one to apply computational mathematical-statistical modeling methods. The bootstrap procedure is one such method; it allows the necessary statistical characteristics to be obtained from a sample of limited size alone. In the present work this procedure is used for modeling the frequency-distribution histograms of ultimate-strength temperature peaks in pure iron and in the Russian F/M steels EP-450 and EP-823. Results of fitting sums of Lorentz or Gauss functions to the calculated distributions are presented. It is concluded that there are two temperature peaks (at 360 and 390 °C) of the ultimate strength in EP-450 steel and a single peak at 390 °C in EP-823.
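The bootstrap procedure invoked above can be sketched in a few lines (a minimal illustration, not the authors' implementation; the strength values below are invented): a small sample is resampled with replacement many times, and the spread of the resampled statistic approximates its sampling distribution.

```python
import random
import statistics

def bootstrap_means(sample, n_resamples=2000, seed=42):
    """Resample with replacement and collect the mean of each resample."""
    rng = random.Random(seed)
    n = len(sample)
    return [statistics.fmean(rng.choices(sample, k=n)) for _ in range(n_resamples)]

# Hypothetical ultimate-strength measurements (MPa) at one irradiation temperature.
strengths = [612, 598, 630, 605, 621, 594, 617, 609]
means = bootstrap_means(strengths)

# A rough 95% interval for the mean, read off the sorted resampled means.
ms = sorted(means)
low, high = ms[50], ms[-50]
print(low <= statistics.fmean(strengths) <= high)
```

Histograms of such resampled statistics are what the sums of Lorentz or Gauss functions would then be fitted to.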
First principles statistical mechanics of alloys and magnetism
NASA Astrophysics Data System (ADS)
Eisenbach, Markus; Khan, Suffian N.; Li, Ying Wai
Modern high performance computing resources are enabling the exploration of the statistical physics of phase spaces with increasing size and higher fidelity of the Hamiltonian of the systems. For selected systems, this now allows the combination of Density Functional based first principles calculations with classical Monte Carlo methods for parameter free, predictive thermodynamics of materials. We combine our locally self-consistent real space multiple scattering method for solving the Kohn-Sham equation with Wang-Landau Monte Carlo calculations (WL-LSMS). In the past we have applied this method to the calculation of Curie temperatures in magnetic materials. Here we will present direct calculations of the chemical order-disorder transitions in alloys. We present our calculated transition temperature for the chemical ordering in CuZn and the temperature dependence of the short-range order parameter and specific heat. Finally we will present the extension of the WL-LSMS method to magnetic alloys, thus allowing the investigation of the interplay of magnetism, structure and chemical order in ferrous alloys. This research was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Science and Engineering Division and it used Oak Ridge Leadership Computing Facility resources at Oak Ridge National Laboratory.
Statistical mechanics of simple models of protein folding and design.
Pande, V S; Grosberg, A Y; Tanaka, T
1997-01-01
It is now believed that the primary equilibrium aspects of simple models of protein folding are understood theoretically. However, current theories often resort to rather heavy mathematics to overcome some technical difficulties inherent in the problem or start from a phenomenological model. To this end, we take a new approach in this pedagogical review of the statistical mechanics of protein folding. The benefit of our approach is a drastic mathematical simplification of the theory, without resort to any new approximations or phenomenological prescriptions. Indeed, the results we obtain agree precisely with previous calculations. Because of this simplification, we are able to present here a thorough and self-contained treatment of the problem. Topics discussed include the statistical mechanics of the random energy model (REM), tests of the validity of REM as a model for heteropolymer freezing, the freezing transition of random sequences, the phase diagram of designed ("minimally frustrated") sequences, and the degree to which errors in the interactions employed in simulations of either folding or design can still lead to correct folding behavior. PMID:9414231
NASA Technical Reports Server (NTRS)
Sprowls, D. O.; Bucci, R. J.; Ponchel, B. M.; Brazill, R. L.; Bretz, P. E.
1984-01-01
A technique is demonstrated for accelerated stress corrosion testing of high strength aluminum alloys. The method offers better precision and shorter exposure times than traditional pass/fail procedures. The approach uses data from tension tests performed on replicate groups of smooth specimens after various lengths of exposure to static stress. The breaking strength measures degradation in the test specimen's load-carrying ability due to the environmental attack. Analysis of breaking load data by extreme value statistics enables the calculation of survival probabilities and a statistically defined threshold stress applicable to the specific test conditions. A fracture mechanics model is given which quantifies depth of attack in the stress-corroded specimen by an effective flaw size calculated from the breaking stress and the material strength and fracture toughness properties. Comparisons are made with experimental results from three tempers of 7075 alloy plate tested by the breaking load method and by traditional tests of statically loaded smooth tension bars and conventional precracked specimens.
A Mechanism for Anonymous Credit Card Systems
NASA Astrophysics Data System (ADS)
Tamura, Shinsuke; Yanase, Tatsuro
This paper proposes a mechanism for anonymous credit card systems, in which each credit card holder can conceal individual transactions from the credit card company, while enabling the credit card company to calculate the total expenditures of transactions of individual card holders during specified periods, and to identify card holders who executed dishonest transactions. Based on three existing mechanisms, i.e. anonymous authentication, blind signature and secure statistical data gathering, together with implicit transaction links proposed here, the proposed mechanism enables development of anonymous credit card systems without assuming any absolutely trustworthy entity like tamper resistant devices or organizations faithful both to the credit card company and card holders.
Mansour, Joseph M; Gu, Di-Win Marine; Chung, Chen-Yuan; Heebner, Joseph; Althans, Jake; Abdalian, Sarah; Schluchter, Mark D; Liu, Yiying; Welter, Jean F
2014-10-01
Our ultimate goal is to non-destructively evaluate mechanical properties of tissue-engineered (TE) cartilage using ultrasound (US). We used agarose gels as surrogates for TE cartilage. Previously, we showed that mechanical properties measured using conventional methods were related to those measured using US, which suggested a way to non-destructively predict mechanical properties of samples with known volume fractions. In this study, we sought to determine whether the mechanical properties of samples with unknown volume fractions could be predicted by US. Aggregate moduli were calculated for hydrogels as a function of speed of sound (SOS), based on concentration and density using a poroelastic model. The data were used to train a statistical model, which we then used to predict volume fractions and mechanical properties of unknown samples. Young's and storage moduli were measured mechanically. The statistical model generally predicted the Young's moduli in compression to within <10% of their mechanically measured value. We defined positive linear correlations between the aggregate modulus predicted from US and both the storage and Young's moduli determined from mechanical tests. Mechanical properties of hydrogels with unknown volume fractions can be predicted successfully from US measurements. This method has the potential to predict mechanical properties of TE cartilage non-destructively in a bioreactor.
Eberl, Dennis D.; Drits, V.A.; Srodon, J.
2000-01-01
GALOPER is a computer program that simulates the shapes of crystal size distributions (CSDs) from crystal growth mechanisms. This manual describes how to use the program. The theory for the program's operation has been described previously (Eberl, Drits, and Srodon, 1998). CSDs that can be simulated using GALOPER include those that result from growth mechanisms operating in the open system, such as constant-rate nucleation and growth, nucleation with a decaying nucleation rate and growth, surface-controlled growth, supply-controlled growth, and constant-rate and random growth; and those that result from mechanisms operating in the closed system, such as Ostwald ripening, random ripening, and crystal coalescence. In addition, CSDs for two types of weathering reactions can be simulated. The operation of associated programs also is described, including two statistical programs used for comparing calculated with measured CSDs, a program used for calculating lognormal CSDs, and a program for arranging measured crystal sizes into size groupings (bins).
On the stability of carbonic acid under conditions in the atmosphere of Venus
NASA Technical Reports Server (NTRS)
Khanna, R. K.; Tossell, J. A.; Fox, K.
1994-01-01
Results of quantum statistical mechanical calculations and a thermodynamic evaluation of the structure of H2CO3 and its stability against dissociation are reported. Under temperature and pressure conditions near the surface of Venus, carbonic acid would predominantly dissociate into H2O and CO2 and, hence, could not contribute to any significant absorption there.
Decompression Mechanisms and Decompression Schedule Calculations.
1984-01-20
[OCR-garbled excerpt; recoverable fragments follow] Reference: The effects of altitude. Handbook of Physiology, Section 3: Respiration, Vol. II. W.O. Fenn and H. Rahn, eds. Washington, D.C.: Am. Physiol. Soc. METHODS: Ten experienced and physically qualified divers (ages 22-42) were compressed at a rate of 60... [a subject-statistics table listing experiment, N, age (yr), height (cm), weight (kg), and body fat is not recoverable]
2006-05-31
dynamics (MD) and kinetic Monte Carlo (KMC) procedures. In 2D surface modeling our calculations project speedups of 9 orders of magnitude at 300 degrees... programming is used to perform customized statistical mechanics by bridging the different time scales of MD and KMC quickly and well. Speedups in
Structural and thermomechanical properties of the zinc-blende AlX (X = P, As, Sb) compounds
NASA Astrophysics Data System (ADS)
Ha, Vu Thi Thanh; Hung, Vu Van; Hanh, Pham Thi Minh; Nguyen, Viet Tuyen; Hieu, Ho Khac
2017-08-01
The structural and thermomechanical properties of the zinc-blende aluminum class of III-V compounds have been studied based on the statistical moment method (SMM) in quantum statistical mechanics. Within the SMM scheme, we derived analytical expressions for the nearest-neighbor distance, thermal expansion coefficient, atomic mean-square displacement and elastic moduli (Young's modulus, bulk modulus and shear modulus). Numerical calculations have been performed for zinc-blende AlX (X = As, P, Sb) from ambient conditions up to a temperature of 1000 K. Our results are in good agreement with earlier measurements and can provide useful references for future experimental and theoretical works. This research presents a systematic approach to investigating the thermodynamic and mechanical properties of materials.
Pressure calculation in hybrid particle-field simulations
NASA Astrophysics Data System (ADS)
Milano, Giuseppe; Kawakatsu, Toshihiro
2010-12-01
In the framework of a recently developed scheme for hybrid particle-field simulation techniques in which self-consistent field (SCF) theory and particle models (molecular dynamics) are combined [J. Chem. Phys. 130, 214106 (2009)], we developed a general formulation for the calculation of the instantaneous pressure and stress tensor. The expressions have been derived from the statistical mechanical definition of the pressure, starting from the expression for the free energy functional in the SCF theory. An implementation of the derived formulation suitable for hybrid particle-field molecular dynamics-self-consistent field simulations is described. A series of test simulations on model systems are reported, comparing the calculated pressure with that obtained from standard molecular dynamics simulations based on pair potentials.
A novel alkaloid isolated from Crotalaria paulina and identified by NMR and DFT calculations
NASA Astrophysics Data System (ADS)
Oliveira, Ramon Prata; Demuner, Antonio Jacinto; Alvarenga, Elson Santiago; Barbosa, Luiz Claudio Almeida; de Melo Silva, Thiago
2018-01-01
Pyrrolizidine alkaloids (PAs) are secondary metabolites found in the Crotalaria genus and are known to have several biological activities. A novel macrocyclic bislactone alkaloid, named ethylcrotaline, was isolated and purified from the aerial parts of Crotalaria paulina. The novel macrocycle was identified with the aid of high resolution mass spectrometry and advanced nuclear magnetic resonance techniques. The relative stereochemistry of the alkaloid was defined by comparing the calculated quantum mechanical hydrogen and carbon chemical shifts of eight candidate structures with the experimental NMR data. The best fit between the eight candidate structures and the experimental NMR chemical shifts was determined by DP4 statistical analysis and Mean Absolute Error (MAE) calculations.
Statistical mechanics of money and income
NASA Astrophysics Data System (ADS)
Dragulescu, Adrian; Yakovenko, Victor
2001-03-01
Money: In a closed economic system, money is conserved. Thus, by analogy with energy, the equilibrium probability distribution of money will assume the exponential Boltzmann-Gibbs form characterized by an effective temperature. We demonstrate how the Boltzmann-Gibbs distribution emerges in computer simulations of economic models. We discuss thermal machines, the role of debt, and models with broken time-reversal symmetry for which the Boltzmann-Gibbs law does not hold. Reference: A. Dragulescu and V. M. Yakovenko, "Statistical mechanics of money", Eur. Phys. J. B 17, 723-729 (2000), [cond-mat/0001432]. Income: Using tax and census data, we demonstrate that the distribution of individual income in the United States is exponential. Our calculated Lorenz curve without fitting parameters and Gini coefficient 1/2 agree well with the data. We derive the distribution function of income for families with two earners and show that it also agrees well with the data. The family data for the period 1947-1994 fit the Lorenz curve and Gini coefficient 3/8=0.375 calculated for two-earner families. Reference: A. Dragulescu and V. M. Yakovenko, "Evidence for the exponential distribution of income in the USA", cond-mat/0008305.
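The kind of economic simulation described above can be sketched with a minimal random-exchange model (an illustration in the spirit of the paper, not the authors' code; the agent count, transfer rule, and step count are arbitrary choices): repeated pairwise transfers conserve total money while the distribution relaxes toward the exponential Boltzmann-Gibbs form, for which the median lies below the mean.

```python
import random

def simulate_money_exchange(n_agents=500, m0=100.0, n_steps=200_000, seed=1):
    """Pick two agents at random and move a random share of one agent's
    money to the other; total money is conserved, as in a closed economy."""
    rng = random.Random(seed)
    money = [m0] * n_agents
    for _ in range(n_steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        dm = rng.random() * money[i]   # random fraction of i's money
        money[i] -= dm
        money[j] += dm
    return money

money = simulate_money_exchange()
mean = sum(money) / len(money)          # conserved: stays at m0
median = sorted(money)[len(money) // 2] # exponential signature: median < mean
print(median < mean)
```

For an exponential distribution with temperature T = ⟨m⟩ the median is T·ln 2 ≈ 0.69⟨m⟩, which is the inequality checked above.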
SU-F-I-10: Spatially Local Statistics for Adaptive Image Filtering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliopoulos, AS; Sun, X; Floros, D
Purpose: To facilitate adaptive image filtering operations, addressing spatial variations in both noise and signal. Such issues are prevalent in cone-beam projections, where physical effects such as X-ray scattering result in spatially variant noise, violating common assumptions of homogeneous noise and challenging conventional filtering approaches to signal extraction and noise suppression. Methods: We present a computational mechanism for probing into and quantifying the spatial variance of noise throughout an image. The mechanism builds a pyramid of local statistics at multiple spatial scales; local statistical information at each scale includes (weighted) mean, median, standard deviation, median absolute deviation, as well as histogram or dynamic range after local mean/median shifting. Based on inter-scale differences of local statistics, the spatial scope of distinguishable noise variation is detected in a semi- or un-supervised manner. Additionally, we propose and demonstrate the incorporation of such information in globally parametrized (i.e., non-adaptive) filters, effectively transforming the latter into spatially adaptive filters. The multi-scale mechanism is materialized by efficient algorithms and implemented in parallel CPU/GPU architectures. Results: We demonstrate the impact of local statistics for adaptive image processing and analysis using cone-beam projections of a Catphan phantom, fitted within an annulus to increase X-ray scattering. The effective spatial scope of local statistics calculations is shown to vary throughout the image domain, necessitating multi-scale noise and signal structure analysis. Filtering results with and without spatial filter adaptation are compared visually, illustrating improvements in imaging signal extraction and noise suppression, and in preserving information in low-contrast regions.
Conclusion: Local image statistics can be incorporated in filtering operations to equip them with spatial adaptivity to spatial signal/noise variations. An efficient multi-scale computational mechanism is developed to curtail processing latency. Spatially adaptive filtering may impact subsequent processing tasks such as reconstruction and numerical gradient computations for deformable registration. NIH Grant No. R01-184173.
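The pyramid of local statistics described in the abstract can be sketched as follows (a simplified NumPy illustration, not the authors' parallel CPU/GPU implementation; the synthetic image, window sizes, and noise model are invented for the example): local mean and standard deviation are computed in windows of several sizes, and comparing local standard deviation across the image exposes spatially varying noise.

```python
import numpy as np

def local_stats(img, w):
    """Local mean and standard deviation in a w-by-w window (edge-padded)."""
    pad = w // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (w, w))
    return win.mean(axis=(-2, -1)), win.std(axis=(-2, -1))

rng = np.random.default_rng(0)
# Synthetic projection: a smooth ramp plus noise whose level doubles on the
# right half, mimicking spatially variant scatter noise.
img = np.linspace(0.0, 1.0, 64)[None, :] * np.ones((64, 1))
noise = rng.normal(0.0, 0.05, img.shape)
noise[:, 32:] *= 2.0
img = img + noise

# Pyramid of local statistics at three spatial scales.
pyramid = {w: local_stats(img, w) for w in (3, 7, 15)}

# The coarse-scale local std flags the noisier right half of the image.
std15 = pyramid[15][1]
print(std15[:, :16].mean() < std15[:, 48:].mean())
```

In the paper's scheme, differences between such statistics across scales are what drive the semi- or un-supervised detection of the noise-variation scope.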
NASA Astrophysics Data System (ADS)
Herman, Rhett; Ballowe, Abigail; Ashley, Joe
2017-11-01
Two students in a recent thermodynamics/statistical mechanics course needed to complete a course-related project to receive honors credit for the class. Such courses are typically theoretical, without an accompanying laboratory, although there are existing related hands-on exercises. The choice of the project was influenced by one student's desire to become a mechanical engineer after graduating while the other wanted a project that was "fun" without "just doing more calculations." The choice of this particular project was further refined by the future engineer's interest in the thermodynamics of car engines.
Environmental flow allocation and statistics calculator
Konrad, Christopher P.
2011-01-01
The Environmental Flow Allocation and Statistics Calculator (EFASC) is a computer program that calculates hydrologic statistics based on a time series of daily streamflow values. EFASC will calculate statistics for daily streamflow in an input file or will generate synthetic daily flow series from an input file based on rules for allocating and protecting streamflow and then calculate statistics for the synthetic time series. The program reads dates and daily streamflow values from input files. The program writes statistics out to a series of worksheets and text files. Multiple sites can be processed in series as one run. EFASC is written in Microsoft Visual Basic for Applications and implemented as a macro in Microsoft Office Excel 2007. EFASC is intended as a research tool for users familiar with computer programming. The code for EFASC is provided so that it can be modified for specific applications. All users should review how output statistics are calculated and recognize that the algorithms may not comply with conventions used to calculate streamflow statistics published by the U.S. Geological Survey.
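The kind of daily-flow statistics such a tool computes can be sketched briefly (a hypothetical Python illustration, not EFASC's actual VBA code; the synthetic flow record and the particular statistics chosen are assumptions): mean, median, and a 7-day moving-average minimum, a common low-flow statistic.

```python
import datetime
import statistics

def flow_statistics(daily):
    """Basic hydrologic statistics from a {date: flow} record, including
    the minimum 7-day moving-average flow."""
    dates = sorted(daily)
    flows = [daily[d] for d in dates]
    seven_day = [statistics.fmean(flows[i:i + 7]) for i in range(len(flows) - 6)]
    return {
        "mean": statistics.fmean(flows),
        "median": statistics.median(flows),
        "min_7day": min(seven_day),
    }

# Hypothetical synthetic year of daily flows (cubic feet per second).
start = datetime.date(2010, 1, 1)
daily = {start + datetime.timedelta(days=i): 100 + 50 * ((i % 30) / 29)
         for i in range(365)}
stats = flow_statistics(daily)
print(stats["min_7day"] <= stats["median"])
```

A flow-allocation rule, as in EFASC, would simply transform the `daily` series (e.g., subtracting a withdrawal subject to a protected minimum) before the same statistics are recomputed.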
Quantitative analysis of spatial variability of geotechnical parameters
NASA Astrophysics Data System (ADS)
Fang, Xing
2018-04-01
Geotechnical parameters are the basic parameters of geotechnical engineering design, and they have strong regional characteristics. Their spatial variability has also been recognized and is gradually being introduced into the reliability analysis of geotechnical engineering. Based on the statistical theory of geostatistical spatial information, the spatial variability of geotechnical parameters is quantitatively analyzed, the parameters are evaluated, and the correlation coefficients between them are calculated. A residential district surveyed by the Tianjin Survey Institute was selected as the research object. There are 68 boreholes in this area and 9 layers of mechanical stratification. The parameters are water content, natural unit weight, void ratio, liquid limit, plasticity index, liquidity index, compressibility coefficient, compressive modulus, internal friction angle, cohesion and the SP index. The correlation coefficients of the geotechnical parameters are calculated according to the principle of statistical correlation, and the pattern of the geotechnical parameters is obtained from these coefficients.
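The correlation-coefficient step described above amounts to a pairwise correlation matrix over borehole measurements; a minimal sketch (the borehole values, their units, and the built-in correlations below are invented; only the 68-borehole count is taken from the abstract):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical measurements from 68 boreholes in one stratum: water content (%),
# void ratio, and compressibility coefficient, constructed with correlation.
n = 68
water = rng.normal(25.0, 3.0, n)
void_ratio = 0.02 * water + rng.normal(0.2, 0.05, n)
compressibility = 0.01 * void_ratio + rng.normal(0.1, 0.02, n)

# Pairwise correlation coefficients between the parameters.
params = np.vstack([water, void_ratio, compressibility])
corr = np.corrcoef(params)
print(corr.shape)
```

Each off-diagonal entry of `corr` is the sample correlation between two parameters, which is the quantity the study tabulates for all parameter pairs in each layer.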
A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components
NASA Technical Reports Server (NTRS)
Abernethy, K.
1986-01-01
The incorporation of a number of additional capabilities into an existing Weibull analysis computer program is discussed, along with the results of a Monte Carlo computer simulation study to evaluate the usefulness of the Weibull methods for samples with a very small number of failures and extensive censoring. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs used in the SSME Weibull analysis are described. The previously documented methods were supplemented by adding computer calculations of approximate (using iterative methods) confidence intervals for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described, along with the simulation program and the techniques used in it. Simulation results are tabulated for various combinations of Weibull shape parameters and numbers of failures in the samples.
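The random-censoring model described above can be sketched as follows (a minimal illustration, not the study's program; the shape, scale, sample size, and censoring bound are invented): Weibull failure times are drawn by inverse-transform sampling, and each unit is censored at an independent uniformly distributed time.

```python
import math
import random

def weibull_sample_with_censoring(beta, eta, n, censor_max, seed=0):
    """Draw Weibull(shape=beta, scale=eta) failure times, then censor each
    unit at an independent Uniform(0, censor_max) censoring time."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        u = 1.0 - rng.random()                       # u in (0, 1]
        t_fail = eta * (-math.log(u)) ** (1.0 / beta)  # inverse-transform draw
        t_cens = rng.random() * censor_max
        # (observed time, failure indicator): censored units carry status 0.
        data.append((min(t_fail, t_cens), int(t_fail <= t_cens)))
    return data

data = weibull_sample_with_censoring(beta=2.0, eta=1000.0, n=50, censor_max=800.0)
n_failures = sum(status for _, status in data)
print(len(data), n_failures)
```

With a censoring bound well below the Weibull scale, most units are censored and only a handful of failures remain, which is exactly the small-failure-count regime the simulation study probes.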
NASA Astrophysics Data System (ADS)
Hartmann, Alexander K.; Weigt, Martin
2005-10-01
A concise, comprehensive introduction to the topic of statistical physics of combinatorial optimization, bringing together theoretical concepts and algorithms from computer science with analytical methods from physics. The result bridges the gap between statistical physics and combinatorial optimization, investigating problems taken from theoretical computing, such as the vertex-cover problem, with the concepts and methods of theoretical physics. The authors cover rapid developments and analytical methods that are both extremely complex and spread by word-of-mouth, providing all the necessary basics in required detail. Throughout, the algorithms are shown with examples and calculations, while the proofs are given in a way suitable for graduate students, post-docs, and researchers. Ideal for newcomers to this young, multidisciplinary field.
Teaching Statistics Online Using "Excel"
ERIC Educational Resources Information Center
Jerome, Lawrence
2011-01-01
As anyone who has taught or taken a statistics course knows, statistical calculations can be tedious and error-prone, with the details of a calculation sometimes distracting students from understanding the larger concepts. Traditional statistics courses typically use scientific calculators, which can relieve some of the tedium and errors but…
New statistical potential for quality assessment of protein models and a survey of energy functions
2010-01-01
Background: Scoring functions, such as molecular mechanics force fields and statistical potentials, are fundamentally important tools in protein structure modeling and quality assessment. Results: The performances of a number of publicly available scoring functions are compared with statistical rigor, with an emphasis on knowledge-based potentials. We explored the effect on accuracy of alternative choices for representing interaction center types and other features of scoring functions, such as using information on solvent accessibility and torsion angles, and accounting for secondary structure preferences and side chain orientation. Partially based on the observations made, we present a novel residue-based statistical potential, which employs a shuffled reference state definition and takes into account the mutual orientation of residue side chains. The atom- and residue-level statistical potentials and Linux executables to calculate the energy of a given protein proposed in this work can be downloaded from http://www.fiserlab.org/potentials. Conclusions: Among the most influential terms we observed a critical role of a proper reference state definition and the benefits of including information about the microenvironment of interaction centers. Molecular mechanical potentials were also tested and found to be over-sensitive to small local imperfections in a structure, requiring unfeasibly long energy relaxation before energy scores started to correlate with model quality. PMID:20226048
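The knowledge-based-potential idea central to this abstract can be sketched with a toy inverse-Boltzmann calculation (not the paper's actual potential; the two-letter residue classes and contact counts are invented, and the shuffled reference state is only mimicked by a second set of counts): contact energies are log-ratios of observed to reference pair frequencies.

```python
import math
from collections import Counter

def contact_potential(observed_pairs, reference_pairs):
    """Knowledge-based contact energies E(a,b) = -ln(P_obs / P_ref), with the
    reference frequencies standing in for a shuffled reference state."""
    obs = Counter(observed_pairs)
    ref = Counter(reference_pairs)
    n_obs, n_ref = sum(obs.values()), sum(ref.values())
    return {pair: -math.log((obs[pair] / n_obs) / (ref[pair] / n_ref))
            for pair in obs if pair in ref}

# Toy counts: hydrophobic-hydrophobic ("HH") contacts enriched over the
# reference, polar-polar ("PP") contacts depleted.
observed = ["HH"] * 60 + ["HP"] * 30 + ["PP"] * 10
reference = ["HH"] * 40 + ["HP"] * 40 + ["PP"] * 20
E = contact_potential(observed, reference)
print(E["HH"] < 0 < E["PP"])
```

Enriched contacts come out favorable (negative energy) and depleted ones unfavorable, which is why the choice of reference state, stressed in the paper's conclusions, dominates the resulting energies.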
Kolakovic, Mirela; Held, Ulrike; Schmidlin, Patrick R; Sahrmann, Philipp
2014-12-22
Relevant benefits of adjunctive antibiotics after conventional root surface debridement, in terms of enhanced pocket depth (PD) reduction, have been shown. However, means and standard deviations of enhanced reductions are difficult to translate into clinically relevant treatment outcomes such as pocket resolution or avoidance of additional surgical interventions. Accordingly, the aim of this systematic review was to calculate odds ratios for relevant cut-off values of PD after mechanical periodontal treatment with and without antibiotics, specifically the combination of amoxicillin and metronidazole, from published studies. As clinically relevant cut-off values, "pocket closure" for PD ≤ 3 mm and "avoidance of surgical intervention" for PD ≤ 5 mm were determined. The databases PubMed, Embase and Central were searched for randomized clinical studies assessing the beneficial effect of the combination of amoxicillin and metronidazole after non-surgical mechanical debridement. Titles, abstracts and finally full texts were scrutinized for possible inclusion by two independent investigators. Quality and heterogeneity of the studies were assessed and the study designs were examined. From published means and standard deviations for PD after therapy, odds ratios for the clinically relevant cut-off values were calculated using a specific statistical approach. Meta-analyses were performed for the time points 3 and 6 months after mechanical therapy. Generally, a pronounced chance for pocket closure from 3 to 6 months of healing was shown. The administration of antibiotics resulted in a 3.55- and 4.43-fold higher probability of pocket closure after 3 and 6 months, respectively, as compared to mechanical therapy alone. However, as the estimated risk for residual pockets > 5 mm was 0 for both groups, no odds ratio could be calculated for persistent need for surgery.
Generally, studies showed a moderate to high quality and large heterogeneity regarding treatment protocol, dose of antibiotic medication and maintenance. With the performed statistical approach, a clear benefit in terms of an enhanced chance for pocket closure by co-administration of the combination of amoxicillin and metronidazole as an adjunct to non-surgical mechanical periodontal therapy has been shown. However, data calculation failed to show a benefit regarding the possible avoidance of surgical interventions.
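One way the review's means-and-SDs-to-odds-ratios step could look is sketched below (the review's actual statistical approach is not reproduced; this sketch assumes normally distributed post-treatment pocket depths, and the means and SDs are invented): the probability of PD falling below a cut-off is read from a normal CDF for each arm, and the two probabilities are combined into an odds ratio.

```python
from statistics import NormalDist

def closure_odds_ratio(mean_ab, sd_ab, mean_ctrl, sd_ctrl, cutoff=3.0):
    """Odds ratio for PD <= cutoff, treating reported means/SDs as parameters
    of normal pocket-depth distributions (an assumed model)."""
    p_ab = NormalDist(mean_ab, sd_ab).cdf(cutoff)
    p_ctrl = NormalDist(mean_ctrl, sd_ctrl).cdf(cutoff)
    return (p_ab / (1 - p_ab)) / (p_ctrl / (1 - p_ctrl))

# Hypothetical post-treatment pocket depths (mm): antibiotics vs. debridement alone.
or_closure = closure_odds_ratio(3.1, 0.9, 3.9, 1.0)
print(or_closure > 1.0)
```

An odds ratio above 1 favors pocket closure in the antibiotic arm; with identical arms the function returns exactly 1, which is a useful sanity check.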
Ingber, Lester; Nunez, Paul L
2011-02-01
The dynamic behavior of scalp potentials (EEG) is apparently due to some combination of global and local processes with important top-down and bottom-up interactions across spatial scales. In treating global mechanisms, we stress the importance of myelinated axon propagation delays and periodic boundary conditions in the cortical-white matter system, which is topologically close to a spherical shell. By contrast, the proposed local mechanisms are multiscale interactions between cortical columns via short-ranged non-myelinated fibers. A mechanical model consisting of a stretched string with attached nonlinear springs demonstrates the general idea. The string produces standing waves analogous to large-scale coherent EEG observed in some brain states. The attached springs are analogous to the smaller (mesoscopic) scale columnar dynamics. Generally, we expect string displacement and EEG at all scales to result from both global and local phenomena. A statistical mechanics of neocortical interactions (SMNI) calculates oscillatory behavior consistent with typical EEG, within columns, between neighboring columns via short-ranged non-myelinated fibers, across cortical regions via myelinated fibers, and also derives a string equation consistent with the global EEG model. Copyright © 2010 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Pandey, Preeti; Srivastava, Rakesh; Bandyopadhyay, Pradipta
2018-03-01
The relative performance of the MM-PBSA and MM-3D-RISM methods in estimating the binding free energy of protein-ligand complexes is investigated by applying them to three proteins (Dihydrofolate Reductase, Catechol-O-methyltransferase, and Stromelysin-1) differing in the number of metal ions they contain. Neither computational method could distinguish all the ligands based on their calculated binding free energies (as compared to experimental values). The difference between the two comes from both the polar and non-polar parts of solvation. For the charged-ligand case, MM-PBSA and MM-3D-RISM give qualitatively different results for the polar part of solvation.
Path statistics, memory, and coarse-graining of continuous-time random walks on networks
Kion-Crosby, Willow; Morozov, Alexandre V.
2015-01-01
Continuous-time random walks (CTRWs) on discrete state spaces, ranging from regular lattices to complex networks, are ubiquitous across physics, chemistry, and biology. Models with coarse-grained states (for example, those employed in studies of molecular kinetics) or spatial disorder can give rise to memory and non-exponential distributions of waiting times and first-passage statistics. However, existing methods for analyzing CTRWs on complex energy landscapes do not address these effects. Here we use statistical mechanics of the nonequilibrium path ensemble to characterize first-passage CTRWs on networks with arbitrary connectivity, energy landscape, and waiting time distributions. Our approach can be applied to calculating higher moments (beyond the mean) of path length, time, and action, as well as statistics of any conservative or non-conservative force along a path. For homogeneous networks, we derive exact relations between length and time moments, quantifying the validity of approximating a continuous-time process with its discrete-time projection. For more general models, we obtain recursion relations, reminiscent of transfer matrix and exact enumeration techniques, to efficiently calculate path statistics numerically. We have implemented our algorithm in PathMAN (Path Matrix Algorithm for Networks), a Python script that users can apply to their model of choice. We demonstrate the algorithm on a few representative examples which underscore the importance of non-exponential distributions, memory, and coarse-graining in CTRWs. PMID:26646868
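The relation between path-length and first-passage-time moments on a homogeneous network can be illustrated with a short Monte Carlo sketch (this is not the PathMAN implementation; the 5-node ring, unit escape rates, and sample size are hypothetical choices). For uniform unit escape rates, the mean first-passage time of the CTRW equals the mean path length of its discrete-time projection:

```python
import random


def simulate_ctrw(adj, rates, start, target, rng):
    """One first-passage CTRW realization: returns (path length, elapsed time).

    adj[i]   -- list of neighbors of node i
    rates[i] -- total escape rate from node i (exponential waiting time)
    """
    node, steps, t = start, 0, 0.0
    while node != target:
        t += rng.expovariate(rates[node])   # exponential waiting time
        node = rng.choice(adj[node])        # unbiased jump to a neighbor
        steps += 1
    return steps, t


def moments(samples):
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean, var


# Toy model: a 5-node ring with uniform unit escape rates.
N = 5
adj = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}
rates = {i: 1.0 for i in range(N)}

rng = random.Random(42)
lengths, times = zip(*(simulate_ctrw(adj, rates, 0, 2, rng) for _ in range(20000)))

len_mean, len_var = moments(lengths)
t_mean, t_var = moments(times)
print(f"mean path length {len_mean:.2f}, mean first-passage time {t_mean:.2f}")
```

For this ring the exact mean hitting time from node 0 to node 2 is d(N - d) = 2 x 3 = 6 steps, and with unit rates the mean times and mean lengths agree, while the higher moments differ because each step adds an exponential waiting time.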
Statistical mechanics of neocortical interactions: Constraints on 40-Hz models of short-term memory
NASA Astrophysics Data System (ADS)
Ingber, Lester
1995-10-01
Calculations presented in L. Ingber and P.L. Nunez, Phys. Rev. E 51, 5074 (1995) detailed the evolution of short-term memory in the neocortex, supporting the empirical 7+/-2 rule of constraints on the capacity of neocortical processing. These results are given further support when other recent models of 40-Hz subcycles of low-frequency oscillations are considered.
ERIC Educational Resources Information Center
Sevilla, F. J.; Olivares-Quiroz, L.
2012-01-01
In this work, we address the concept of the chemical potential μ in classical and quantum gases towards the calculation of the equation of state μ = μ(n, T), where n is the particle density and T the absolute temperature, using the methods of equilibrium statistical mechanics. Two cases seldom discussed in elementary textbooks are…
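In the classical (non-degenerate) limit, the equation of state discussed above reduces to μ = kB T ln(n λ³), with λ the thermal de Broglie wavelength. A minimal sketch (the helium-at-1-atm numbers are our own illustration, not from the article):

```python
import math

# Physical constants (SI)
H = 6.62607015e-34      # Planck constant, J s
KB = 1.380649e-23       # Boltzmann constant, J/K


def thermal_wavelength(m, T):
    """Thermal de Broglie wavelength: lambda = h / sqrt(2 pi m kB T)."""
    return H / math.sqrt(2.0 * math.pi * m * KB * T)


def mu_classical(n, T, m):
    """Classical ideal-gas chemical potential: mu = kB T ln(n lambda^3)."""
    return KB * T * math.log(n * thermal_wavelength(m, T) ** 3)


# Example: helium-4 atoms at room temperature and 1 atm (illustrative numbers).
m_he = 6.6464731e-27                 # mass of He-4, kg
T = 300.0
n = 101325.0 / (KB * T)              # ideal-gas number density at 1 atm
mu = mu_classical(n, T, m_he)
print(f"mu = {mu / (KB * T):.2f} kB T  "
      f"(n lambda^3 = {n * thermal_wavelength(m_he, T) ** 3:.2e})")
```

The degeneracy parameter n λ³ is of order 10⁻⁶ here, so μ is large and negative, which is the regime in which the classical expression is valid.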
NASA Astrophysics Data System (ADS)
Jambrina, P. G.; Lara, Manuel; Menéndez, M.; Launay, J.-M.; Aoiz, F. J.
2012-10-01
Cumulative reaction probabilities (CRPs) at various total angular momenta have been calculated for the barrierless reaction S(1D) + H2 → SH + H at total energies up to 1.2 eV using three different theoretical approaches: time-independent quantum mechanics (QM), quasiclassical trajectories (QCT), and statistical quasiclassical trajectories (SQCT). The calculations have been carried out on the widely used potential energy surface (PES) by Ho et al. [J. Chem. Phys. 116, 4124 (2002), 10.1063/1.1431280] as well as on the recent PES developed by Song et al. [J. Phys. Chem. A 113, 9213 (2009), 10.1021/jp903790h]. The results show that the differences between these two PES are relatively minor and mostly related to the different topologies of the well. In addition, the agreement between the three theoretical methodologies is good, even for the highest total angular momenta and energies. In particular, the good accordance between the CRPs obtained with dynamical methods (QM and QCT) and the statistical model (SQCT) indicates that the reaction can be considered statistical in the whole range of energies, in contrast with the findings for other prototypical barrierless reactions. In addition, total CRPs and rate coefficients in the range of 20-1000 K have been calculated using the QCT and SQCT methods and have been found to be somewhat smaller than the experimental total removal rates of S(1D).
Capillary fluctuations of surface steps: An atomistic simulation study for the model Cu(111) system
NASA Astrophysics Data System (ADS)
Freitas, Rodrigo; Frolov, Timofey; Asta, Mark
2017-10-01
Molecular dynamics (MD) simulations are employed to investigate the capillary fluctuations of steps on the surface of a model metal system. The fluctuation spectrum, characterized by the wave number (k) dependence of the mean squared capillary-wave amplitudes and associated relaxation times, is calculated for ⟨110⟩ and ⟨112⟩ steps on the {111} surface of elemental copper near the melting temperature of the classical potential model considered. Step stiffnesses are derived from the MD results, yielding values from the largest system sizes of (37 ± 1) meV/Å for the different line orientations, implying that the stiffness is isotropic within the statistical precision of the calculations. The fluctuation lifetimes are found to vary by approximately four orders of magnitude over the range of wave numbers investigated, displaying a k dependence consistent with kinetics governed by step-edge mediated diffusion. The values for step stiffness derived from these simulations are compared to step free energies for the same system and temperature obtained in a recent MD-based thermodynamic-integration (TI) study [Freitas, Frolov, and Asta, Phys. Rev. B 95, 155444 (2017), 10.1103/PhysRevB.95.155444]. Results from the capillary-fluctuation analysis and TI calculations yield statistically significant differences that are discussed within the framework of statistical-mechanical theories for configurational contributions to step free energies.
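The stiffness extraction described here rests on the standard capillary-wave equipartition relation ⟨|A(k)|²⟩ = kB T / (L β̃ k²). A minimal sketch of the inversion (the step length, temperature, and synthetic Gaussian data below are hypothetical stand-ins for the MD measurements, chosen near the paper's 37 meV/Å scale):

```python
import math
import random

KB_EV = 8.617333262e-5   # Boltzmann constant, eV/K


def stiffness_from_spectrum(L, T, k_vals, mean_sq_amp):
    """Invert the capillary-wave equipartition relation
       <|A(k)|^2> = kB T / (L * beta * k^2)
    to estimate the step stiffness beta (eV/Angstrom) mode by mode,
    then average the estimates."""
    estimates = [KB_EV * T / (L * a2 * k * k) for k, a2 in zip(k_vals, mean_sq_amp)]
    return sum(estimates) / len(estimates)


# Synthetic data: draw amplitudes from the Gaussian equilibrium distribution
# implied by a known stiffness, then check that the inversion recovers it.
rng = random.Random(7)
L, T = 300.0, 1300.0          # step length (Angstrom) and temperature (K), hypothetical
beta_true = 0.037             # stiffness, eV/Angstrom (order of the quoted 37 meV/A)
k_vals = [2.0 * math.pi * n / L for n in range(1, 9)]

mean_sq = []
for k in k_vals:
    sigma2 = KB_EV * T / (L * beta_true * k * k)   # equipartition variance of A(k)
    samples = [rng.gauss(0.0, math.sqrt(sigma2)) for _ in range(20000)]
    mean_sq.append(sum(x * x for x in samples) / len(samples))

beta_est = stiffness_from_spectrum(L, T, k_vals, mean_sq)
print(f"recovered stiffness: {beta_est * 1000:.1f} meV/Angstrom")
```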
Spatio-temporal analysis of aftershock sequences in terms of Non Extensive Statistical Physics.
NASA Astrophysics Data System (ADS)
Chochlaki, Kalliopi; Vallianatos, Filippos
2017-04-01
Earth's seismicity is considered an extremely complicated process in which long-range interactions and fracturing exist (Vallianatos et al., 2016). For this reason, in order to analyze it, we use an innovative methodological approach, introduced by Tsallis (Tsallis, 1988; 2009), named Non Extensive Statistical Physics. This approach introduces a generalization of Boltzmann-Gibbs statistical mechanics and is based on the definition of the Tsallis entropy Sq, whose maximization leads to the so-called q-exponential function, the probability distribution function that maximizes Sq. In the present work, we utilize the concept of Non Extensive Statistical Physics to analyze the spatiotemporal properties of several aftershock series. Marekova (Marekova, 2014) suggested that the probability densities of the inter-event distances between successive aftershocks follow a beta distribution. Using the same data set, we analyze the inter-event distance distribution of several aftershock sequences in different geographic regions by calculating the non-extensive parameters that determine the behavior of the system and by fitting the q-exponential function, which expresses the degree of non-extensivity of the investigated system. Furthermore, the inter-event time distribution of the aftershocks as well as the frequency-magnitude distribution has been analyzed. The results support the applicability of Non Extensive Statistical Physics ideas to aftershock sequences, where a strong correlation exists along with memory effects. References C. Tsallis, Possible generalization of Boltzmann-Gibbs statistics, J. Stat. Phys. 52 (1988) 479-487. doi:10.1007/BF01016429 C. Tsallis, Introduction to nonextensive statistical mechanics: Approaching a complex world, 2009. doi:10.1007/978-0-387-85359-8. E. Marekova, Analysis of the spatial distribution between successive earthquakes in aftershocks series, Annals of Geophysics, 57, 5, doi:10.4401/ag-6556, 2014 F. Vallianatos, G. 
Papadakis, G. Michas, Generalized statistical mechanics approaches to earthquakes and tectonics. Proc. R. Soc. A, 472, 20160497, 2016.
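The q-exponential used in such fits can be written down in a few lines (a generic sketch of the function itself, not the authors' fitting code; the q = 1.5 example value is hypothetical):

```python
import math


def q_exponential(x, q):
    """Tsallis q-exponential: e_q(x) = [1 + (1-q) x]^(1/(1-q)) when the
    bracket is positive, 0 otherwise; reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0
    return base ** (1.0 / (1.0 - q))


# q -> 1 recovers the ordinary exponential ...
print(q_exponential(-2.0, 1.0), math.exp(-2.0))

# ... while q > 1 produces the heavy (power-law) tail used to describe
# inter-event distance and time distributions.
for r in (1.0, 10.0, 100.0):
    print(f"r={r:6.1f}  exp: {math.exp(-r):.3e}  q=1.5: {q_exponential(-r, 1.5):.3e}")
```

For q = 1.5, e_q(-r) = (1 + r/2)^(-2), so the tail decays as a power law rather than exponentially, which is the signature of non-extensivity the analysis looks for.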
Spontaneous pion emission as a new natural radioactivity
NASA Astrophysics Data System (ADS)
Ion, D. B.; Ivascu, M.; Ion-Mihai, R.
1986-10-01
In this paper the pionic nuclear radioactivity, or spontaneous pion emission by a nucleus from its ground state, is investigated. The Qπ-values as well as the statistical factors are calculated using the experimental masses tabulated by Wapstra and Audi. It is shown that the pionic radioactivity of the nuclear ground state is energetically possible via three-body channels for all nuclides with Z > 80. This new type of natural radioactivity is statistically favored especially for Z = 92-106, for which Fπ/FSF = 40-200 [MeV]². Experimental detection of the neutral pion and some possible emission mechanisms are also discussed.
Generalized self-adjustment method for statistical mechanics of composite materials
NASA Astrophysics Data System (ADS)
Pan'kov, A. A.
1997-03-01
A new method is developed for the statistical mechanics of composite materials, the generalized self-adjustment method, which makes it possible to reduce the problem of predicting the effective elastic properties of composites with random structures to the solution of two simpler "averaged" problems of an inclusion with transitional layers in a medium with the desired effective elastic properties. The inhomogeneous elastic properties and dimensions of the transitional layers take into account both the "approximate" order of mutual positioning and the variation in the dimensions and elastic properties of inclusions, through appropriate averaged indicator functions of the random structure of the composite. A numerical calculation of the averaged indicator functions and effective elastic characteristics is performed by the generalized self-adjustment method for a unidirectional fiberglass composite on the basis of various models of actual random structures in the plane of isotropy.
Simulation of magnetoelastic response of iron nanowire loop
NASA Astrophysics Data System (ADS)
Huang, Junping; Peng, Xianghe; Wang, Zhongchang; Hu, Xianzhi
2018-03-01
We analyzed the magnetoelastic responses of one-dimensional iron nanowire loop systems with quantum statistical mechanics, treating the particles in the systems as identical bosons with an arbitrary integer spin. Under the assumptions adopted, we demonstrated that the Hamiltonian of the system can be separated into two parts, corresponding to two Ising subsystems describing the particle spin and the particle displacement, respectively. Because the energy of particle motion at the atomic scale is quantized, a stricter constraint should apply to the particle-displacement Ising subsystem. Making use of existing results for the Ising system, the partition function of the system was derived in two parts, corresponding respectively to the two Ising subsystems. The Gibbs distribution was then obtained by statistical mechanics, and a description of the magnetoelastic response was derived. The magnetoelastic responses were predicted with the developed approach, and comparison with results calculated with VASP demonstrates the validity of the approach.
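For a one-dimensional Ising subsystem on a loop, the partition function is exactly computable by the transfer-matrix method. A minimal sketch (a generic 1D Ising ring with coupling J and field h, not the paper's nanowire Hamiltonian; the parameter values are hypothetical) comparing brute-force enumeration with the transfer-matrix eigenvalues:

```python
import itertools
import math


def ising_ring_Z_brute(N, J, h, beta):
    """Brute-force partition function of a 1D Ising ring of N spins:
       E = -J * sum_i s_i s_{i+1} - h * sum_i s_i,  Z = sum_configs exp(-beta E)."""
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=N):
        E = -J * sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        E -= h * sum(spins)
        Z += math.exp(-beta * E)
    return Z


def ising_ring_Z_transfer(N, J, h, beta):
    """Same partition function from the eigenvalues of the 2x2 transfer matrix
       T = [[e^{b(J+h)}, e^{-bJ}], [e^{-bJ}, e^{b(J-h)}]],  Z = l1^N + l2^N."""
    a = math.exp(beta * (J + h))
    b = math.exp(-beta * J)
    d = math.exp(beta * (J - h))
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr - 4.0 * det)
    lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0
    return lam1 ** N + lam2 ** N


Z1 = ising_ring_Z_brute(8, 1.0, 0.3, 0.5)
Z2 = ising_ring_Z_transfer(8, 1.0, 0.3, 0.5)
print(f"brute force Z = {Z1:.6f}, transfer matrix Z = {Z2:.6f}")
```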
NASA Astrophysics Data System (ADS)
Li, Y.; Robertson, C.
2018-06-01
The influence of irradiation defect dispersions on plastic strain spreading is investigated by means of three-dimensional dislocation dynamics (DD) simulations, accounting for thermally activated slip and cross-slip mechanisms in Fe-2.5%Cr grains. The defect-induced evolutions of the effective screw dislocation mobility are evaluated by means of statistical comparisons, for various defect number density and defect size cases. Each comparison is systematically associated with a quantitative Defect-Induced Apparent Straining Temperature shift (or «ΔDIAT»), calculated without any adjustable parameters. In the investigated cases, the ΔDIAT level associated with a given defect dispersion closely replicates the measured ductile to brittle transition temperature shift (ΔDBTT) due to the same, actual defect dispersion. The results are further analyzed in terms of dislocation-based plasticity mechanisms and their possible relations with the dose-dependent changes of the ductile to brittle transition temperature.
Statistical Mechanics Model of Solids with Defects
NASA Astrophysics Data System (ADS)
Kaufman, M.; Walters, P. A.; Ferrante, J.
1997-03-01
Previously (M. Kaufman, J. Ferrante, NASA Tech. Memor., 1996), we examined the phase diagram for the failure of a solid under isotropic expansion and compression as a function of stress and temperature, with the "springs" modelled by the universal binding energy relation (UBER) (J. H. Rose, J. R. Smith, F. Guinea, J. Ferrante, Phys. Rev. B 29, 2963 (1984)). In the previous calculation we assumed that the "springs" failed independently and that the strain is uniform. In the present work, we have extended this statistical model of mechanical failure by allowing for correlations between "springs" and for thermal fluctuations in strains. The springs are now modelled in the harmonic approximation with a failure threshold energy E0, as an intermediate step in future studies to reinclude the full non-linear dependence of the UBER for modelling the interactions. We use the Migdal-Kadanoff renormalization-group method to determine the phase diagram of the model and to compute the free energy.
Noise and the statistical mechanics of distributed transport in a colony of interacting agents
NASA Astrophysics Data System (ADS)
Katifori, Eleni; Graewer, Johannes; Ronellenfitsch, Henrik; Mazza, Marco G.
Inspired by the process of liquid food distribution between individuals in an ant colony, in this work we consider the statistical mechanics of resource dissemination between interacting agents with finite carrying capacity. The agents move inside a confined space (nest), pick up the food at the entrance of the nest and share it with other agents that they encounter. We calculate analytically and via a series of simulations the global food intake rate for the whole colony as well as observables describing how uniformly the food is distributed within the nest. Our model and predictions provide a useful benchmark to assess which strategies can lead to efficient food distribution within the nest and also to what level the observed food uptake rates and efficiency in food distribution are due to stochastic fluctuations or specific food exchange strategies by an actual ant colony.
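The model described above can be caricatured in a few lines (a toy sketch with hypothetical parameters, not the authors' model: agents random-walk on a line of sites, refill to capacity at the nest entrance, and pairwise equalize their loads when they share a site):

```python
import random


def simulate_colony(n_agents, n_sites, capacity, steps, rng):
    """Toy food-sharing model. Returns (total food uptake, final loads).

    Agents random-walk on sites 0..n_sites-1; an agent at the entrance
    (site 0) refills to capacity; agents on the same site equalize loads."""
    pos = [rng.randrange(n_sites) for _ in range(n_agents)]
    load = [0.0] * n_agents
    uptake = 0.0
    for _ in range(steps):
        for i in range(n_agents):
            pos[i] = min(max(pos[i] + rng.choice((-1, 1)), 0), n_sites - 1)
            if pos[i] == 0:                       # refill at the nest entrance
                uptake += capacity - load[i]
                load[i] = capacity
        # "trophallaxis": agents on the same site equalize their crop loads
        for s in set(pos):
            idx = [i for i in range(n_agents) if pos[i] == s]
            if len(idx) > 1:
                avg = sum(load[i] for i in idx) / len(idx)
                for i in idx:
                    load[i] = avg
    return uptake, load


rng = random.Random(3)
uptake, loads = simulate_colony(20, 15, 1.0, 2000, rng)
mean_load = sum(loads) / len(loads)
print(f"total uptake {uptake:.1f}, mean final load {mean_load:.2f}")
```

Observables like the total uptake rate and the spread of the final load distribution are the kinds of quantities the paper tracks to compare sharing strategies.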
NASA Astrophysics Data System (ADS)
Zhou, Chi-Chun; Dai, Wu-Sheng
2018-02-01
In statistical mechanics, for a system with a fixed number of particles, e.g. a finite-size system, strictly speaking, the thermodynamic quantity needs to be calculated in the canonical ensemble. Nevertheless, the calculation of the canonical partition function is difficult. In this paper, based on the mathematical theory of the symmetric function, we suggest a method for the calculation of the canonical partition function of ideal quantum gases, including ideal Bose, Fermi, and Gentile gases. Moreover, we express the canonical partition functions of interacting classical and quantum gases given by the classical and quantum cluster expansion methods in terms of the Bell polynomial in mathematics. The virial coefficients of ideal Bose, Fermi, and Gentile gases are calculated from the exact canonical partition function. The virial coefficients of interacting classical and quantum gases are calculated from the canonical partition function by using the expansion of the Bell polynomial, rather than calculated from the grand canonical potential.
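The canonical partition function of an ideal quantum gas can be built up exactly by a cycle recursion, Z_N = (1/N) Σ_{k=1..N} s^{k+1} z_k Z_{N-k} with s = +1 (Bose) or -1 (Fermi) and z_k = Σ_i exp(-k β ε_i). This is a known exact identity, not the authors' symmetric-function code, and the three-level spectrum below is a hypothetical example:

```python
import math


def single_particle_z(energies, beta, k):
    """k-cycle sum: z_k = sum_i exp(-k * beta * eps_i)."""
    return sum(math.exp(-k * beta * e) for e in energies)


def canonical_Z(energies, beta, N, statistics="bose"):
    """Exact N-particle canonical partition function of an ideal quantum gas
    via the cycle recursion
        Z_N = (1/N) * sum_{k=1}^{N} s^{k+1} * z_k * Z_{N-k},
    with s = +1 for bosons and s = -1 for fermions, Z_0 = 1."""
    s = 1.0 if statistics == "bose" else -1.0
    Z = [1.0]                                    # Z_0 = 1
    for n in range(1, N + 1):
        acc = 0.0
        for k in range(1, n + 1):
            acc += (s ** (k + 1)) * single_particle_z(energies, beta, k) * Z[n - k]
        Z.append(acc / n)
    return Z[N]


# Check against direct enumeration: 2 fermions in 3 levels occupy distinct states.
eps, beta = [0.0, 1.0, 2.5], 0.7
direct = sum(math.exp(-beta * (eps[i] + eps[j]))
             for i in range(3) for j in range(i + 1, 3))
recursive = canonical_Z(eps, beta, 2, statistics="fermi")
print(f"direct {direct:.8f}  recursion {recursive:.8f}")
```

For N = 2 the recursion reduces to the familiar (z_1² ∓ z_2)/2 for fermions and bosons, which is what the enumeration check confirms.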
Orestes, Ednilsom; Bistafa, Carlos; Rivelino, Roberto; Canuto, Sylvio
2015-05-28
The vibrational circular dichroism (VCD) spectrum of l-alanine amino acid in aqueous solution in ambient conditions has been studied. The emphasis has been placed on the inclusion of the thermal disorder of the solute-solvent hydrogen bonds that characterize the aqueous solution condition. A combined and sequential use of molecular mechanics and quantum mechanics was adopted. To calculate the average VCD spectrum, the DFT B3LYP/6-311++G(d,p) level of calculation was employed, over one-hundred configurations composed of the solute plus all water molecules making hydrogen bonds with the solute. Simplified considerations including only four explicit solvent molecules and the polarizable continuum model were also made for comparison. Considering the large number of vibration frequencies with only limited experimental results a direct comparison is presented, when possible, and in addition a statistical analysis of the calculated values was performed. The results are found to be in line with the experiment, leading to the conclusion that including thermal disorder may improve the agreement of the vibrational frequencies with experimental results, but the thermal effects may be of greater value in the calculations of the rotational strengths.
A second order thermodynamic perturbation theory for hydrogen bond cooperativity in water
NASA Astrophysics Data System (ADS)
Marshall, Bennett D.
2017-05-01
It has been extensively demonstrated through first-principles quantum mechanics calculations that water exhibits strong hydrogen bond cooperativity. Equations of state developed from statistical mechanics typically assume pairwise additivity, meaning they cannot account for these 3-body and higher cooperative effects. In this paper, we extend a second order thermodynamic perturbation theory to correct for hydrogen bond cooperativity in 4-site water. We demonstrate that the theory predicts hydrogen bonding structure consistent with spectroscopy, neutron diffraction, and molecular simulation data. Finally, we implement the approach into a general equation of state for water.
NASA Astrophysics Data System (ADS)
Kumar, Jagadish; Ananthakrishna, G.
2018-01-01
Scale-invariant power-law distributions for acoustic emission signals are ubiquitous in several plastically deforming materials. However, power-law distributions for acoustic emission energies are reported in distinctly different plastically deforming situations such as hcp and fcc single and polycrystalline samples exhibiting smooth stress-strain curves and in dilute metallic alloys exhibiting discontinuous flow. This is surprising since the underlying dislocation mechanisms in these two types of deformations are very different. So far, there have been no models that predict the power-law statistics for discontinuous flow. Furthermore, the statistics of the acoustic emission signals in jerky flow is even more complex, requiring multifractal measures for a proper characterization. There has been no model that explains the complex statistics either. Here we address the problem of statistical characterization of the acoustic emission signals associated with the three types of the Portevin-Le Chatelier bands. Following our recently proposed general framework for calculating acoustic emission, we set up a wave equation for the elastic degrees of freedom with a plastic strain rate as a source term. The energy dissipated during acoustic emission is represented by the Rayleigh-dissipation function. Using the plastic strain rate obtained from the Ananthakrishna model for the Portevin-Le Chatelier effect, we compute the acoustic emission signals associated with the three Portevin-Le Chatelier bands and the Lüders-like band. The so-calculated acoustic emission signals are used for further statistical characterization. Our results show that the model predicts power-law statistics for all the acoustic emission signals associated with the three types of Portevin-Le Chatelier bands with the exponent values increasing with increasing strain rate. 
The calculated multifractal spectra corresponding to the acoustic emission signals associated with the three band types have a maximum spread for the type C bands, with the spread decreasing for types B and A. We further show that the acoustic emission signals associated with the Lüders-like band also exhibit a power-law distribution and multifractality.
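Power-law exponents like those reported for the acoustic emission energies are commonly estimated by maximum likelihood (the continuous Hill/MLE estimator; the Pareto test data below are synthetic, and this is not the authors' analysis code):

```python
import math
import random


def powerlaw_mle(samples, xmin):
    """Continuous power-law exponent, maximum-likelihood (Hill) estimate:
       alpha = 1 + n / sum_i ln(x_i / xmin),  over the tail x_i >= xmin."""
    tail = [x for x in samples if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)


# Synthetic "event energies": Pareto samples with known exponent alpha = 2.5,
# drawn by inverse-transform sampling x = xmin * u^(-1/(alpha-1)), u in (0, 1].
rng = random.Random(11)
alpha_true, xmin = 2.5, 1.0
samples = [xmin * (1.0 - rng.random()) ** (-1.0 / (alpha_true - 1.0))
           for _ in range(50000)]

alpha_est = powerlaw_mle(samples, xmin)
print(f"estimated exponent: {alpha_est:.3f} (true {alpha_true})")
```

The estimator's standard error scales as (alpha - 1)/sqrt(n), so with 50000 samples the recovered exponent should sit within about 0.01 of the true value.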
Statistical mechanics of influence maximization with thermal noise
NASA Astrophysics Data System (ADS)
Lynn, Christopher W.; Lee, Daniel D.
2017-03-01
The problem of optimally distributing a budget of influence among individuals in a social network, known as influence maximization, has typically been studied in the context of contagion models and deterministic processes, which fail to capture stochastic interactions inherent in real-world settings. Here, we show that by introducing thermal noise into influence models, the dynamics exactly resemble spins in a heterogeneous Ising system. In this way, influence maximization in the presence of thermal noise has a natural physical interpretation as maximizing the magnetization of an Ising system given a budget of external magnetic field. Using this statistical mechanical formulation, we demonstrate analytically that for small external-field budgets, the optimal influence solutions exhibit a highly non-trivial temperature dependence, focusing on high-degree hub nodes at high temperatures and on easily influenced peripheral nodes at low temperatures. For the general problem, we present a projected gradient ascent algorithm that uses the magnetic susceptibility to calculate locally optimal external-field distributions. We apply our algorithm to synthetic and real-world networks, demonstrating that our analytic results generalize qualitatively. Our work establishes a fruitful connection with statistical mechanics and demonstrates that influence maximization depends crucially on the temperature of the system, a fact that has not been appreciated by existing research.
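The temperature dependence described above can be probed with a toy mean-field calculation (a naive damped fixed-point sketch on a hypothetical 5-node star graph, not the authors' projected gradient ascent algorithm):

```python
import math


def mean_field_magnetization(J, h, beta, iters=500):
    """Solve m_i = tanh(beta * (sum_j J[i][j] * m_j + h_i)) by damped iteration."""
    n = len(J)
    m = [0.0] * n
    for _ in range(iters):
        m = [0.5 * m[i] + 0.5 * math.tanh(
                 beta * (sum(J[i][j] * m[j] for j in range(n)) + h[i]))
             for i in range(n)]
    return m


# Toy network: a star graph; node 0 is the hub, nodes 1..4 are peripheral.
n = 5
J = [[0.0] * n for _ in range(n)]
for leaf in range(1, n):
    J[0][leaf] = J[leaf][0] = 1.0

budget, results = 0.5, {}
for beta in (0.2, 3.0):                        # high vs low temperature
    h_hub = [budget] + [0.0] * (n - 1)         # whole budget on the hub
    h_leaf = [0.0, budget] + [0.0] * (n - 2)   # whole budget on one leaf
    M_hub = sum(mean_field_magnetization(J, h_hub, beta))
    M_leaf = sum(mean_field_magnetization(J, h_leaf, beta))
    results[beta] = (M_hub, M_leaf)
    print(f"beta={beta}: budget on hub -> M={M_hub:.3f}, on leaf -> M={M_leaf:.3f}")
```

In this sketch the hub placement wins at high temperature (small beta), as linear response predicts; at low temperature this tiny ferromagnetic star saturates under either placement, so the low-temperature preference for peripheral nodes reported in the paper only emerges in larger, more heterogeneous networks.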
Statistical mechanics of free particles on space with Lie-type noncommutativity
NASA Astrophysics Data System (ADS)
Shariati, Ahmad; Khorrami, Mohammad; Fatollahi, Amir H.
2010-07-01
Effects of Lie-type noncommutativity on the thermodynamic properties of a system of free identical particles are investigated. A definition for the finite volume of the configuration space is given, and the grand canonical partition function in the thermodynamic limit is calculated. Two possible definitions for the pressure are discussed, which are equivalent when the noncommutativity vanishes. The thermodynamic observables are extracted from the partition function. Different limits are discussed in which either the noncommutativity or the quantum effects are important. Finally, the specific cases where the group is SU(2) or SO(3) are discussed, and the partition function of a nondegenerate gas is calculated.
Statistical Earthquake Focal Mechanism Forecasts
NASA Astrophysics Data System (ADS)
Kagan, Y. Y.; Jackson, D. D.
2013-12-01
A new whole-Earth focal mechanism forecast, based on the GCMT catalog, has been created. In the present forecast, the sum of normalized seismic moment tensors within a 1000 km radius is calculated, and the P- and T-axes for the focal mechanism are evaluated on the basis of the sum. Simultaneously we calculate an average rotation angle between the forecasted mechanism and all the surrounding mechanisms. This average angle reflects the tectonic complexity of a region and indicates the accuracy of the prediction. The method was originally proposed by Kagan and Jackson (1994, JGR). Recent interest by CSEP and GEM has motivated some improvements, particularly extending the previous forecast to polar and near-polar regions. The major problem in extending the forecast is the focal mechanism calculation on a spherical surface. In the previous forecast, when the average focal mechanism was computed, it was assumed that longitude lines are approximately parallel within a 1000 km radius. This is largely accurate in equatorial and near-equatorial areas. However, as one approaches 75 degrees latitude, the longitude lines are no longer parallel: the bearing (azimuthal) difference at points separated by 1000 km reaches about 35 degrees. In most situations a forecast point where we calculate an average focal mechanism is surrounded by earthquakes, so the bias should not be strong, owing to cancellation of the difference effect. But if we move into polar regions, the bearing difference can approach 180 degrees. In the modified program, focal mechanisms are projected onto a plane tangent to the sphere at the forecast point. New longitude axes, which are parallel in the tangent plane, are corrected for the bearing difference. A comparison with the old 75S-75N forecast shows that in equatorial regions the forecasted focal mechanisms are almost the same, and the difference in the forecasted focal mechanism rotation angle is close to zero. 
However, though the forecasted focal mechanisms are similar, closer to 75 degrees latitude the difference in the rotation angle becomes large (around a factor of 1.5 in some places). The Gamma-index was calculated for the average focal mechanism moment. A non-zero Index indicates that earthquake focal mechanisms around the forecast point have different orientations. Thus deformation complexity displays itself both in the average rotation angle and in the Index. However, sometimes the rotation angle is close to zero whereas the Index is large, testifying to a large CLVD presence. Both new 0.5x0.5 and 0.1x0.1 degree forecasts are posted at http://eq.ess.ucla.edu/~kagan/glob_gcmt_index.html.
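The core averaging step, summing normalized moment tensors and reading the P- and T-axes off the eigenvectors of the sum, can be sketched briefly (a minimal illustration with two hypothetical strike-slip sources, not the forecast code; numpy is assumed available):

```python
import numpy as np


def normalized(mt):
    """Scale a symmetric moment tensor to unit scalar moment
    (Frobenius norm / sqrt(2) convention)."""
    mt = np.asarray(mt, dtype=float)
    return mt / np.sqrt((mt * mt).sum() / 2.0)


def average_mechanism(tensors):
    """Sum normalized moment tensors; return (P_axis, T_axis) as unit vectors.

    For a symmetric moment tensor the T axis is the eigenvector of the
    largest eigenvalue and the P axis that of the smallest."""
    total = sum(normalized(m) for m in tensors)
    vals, vecs = np.linalg.eigh(total)        # eigenvalues in ascending order
    return vecs[:, 0], vecs[:, -1]            # P axis, T axis


# Two hypothetical double-couple sources with similar orientations.
m1 = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 0.0],
               [0.0, 0.0, 0.0]])              # pure strike-slip
m2 = np.array([[0.1, 0.9, 0.0],
               [0.9, -0.1, 0.0],
               [0.0, 0.0, 0.0]])              # slightly rotated variant
P, T = average_mechanism([m1, m2])
print("P axis:", np.round(P, 3), " T axis:", np.round(T, 3))
```

For these two nearly aligned sources the averaged T axis stays close to (1, 1, 0)/√2 and the P axis close to (1, -1, 0)/√2, as expected for strike-slip mechanisms (eigenvector signs are arbitrary).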
Annular tautomerism: experimental observations and quantum mechanics calculations.
Cruz-Cabeza, Aurora J; Schreyer, Adrian; Pitt, William R
2010-06-01
The use of MP2 level quantum mechanical (QM) calculations on isolated heteroaromatic ring systems for the prediction of the tautomeric propensities of whole molecules in a crystalline environment was examined. A Polarisable Continuum Model was used in the calculations to account for environment effects on the tautomeric relative stabilities. The calculated relative energies of tautomers were compared to relative abundances within the Cambridge Structural Database (CSD) and the Protein Data Bank (PDB). The work was focussed on 84 annular tautomeric forms of 34 common ring systems. Good agreement was found between the calculations and the experimental data even if the quantity of these data was limited in many cases. The QM results were compared to those produced by much faster semiempirical calculations. In a search for other sources of useful experimental data, the relative numbers of known compounds in which prototropic positions were often substituted by heavy atoms were also analysed. A scheme which groups all annular tautomeric transformations into 10 classes was developed. The scheme was designed to encompass a comprehensive set of known and theoretically possible tautomeric ring systems generated as part of a previous study. General trends across analogous ring systems were detected as a result. The calculations and statistics collected on crystallographic data, as well as the general trends observed, should be useful for the better modelling of annular tautomerism in applications such as computer-aided drug design, small molecule crystal structure prediction, the naming of compounds and the interpretation of protein-small molecule crystal structures.
Quantum statistical mechanics of dense partially ionized hydrogen.
NASA Technical Reports Server (NTRS)
Dewitt, H. E.; Rogers, F. J.
1972-01-01
The theory of dense hydrogenic plasmas beginning with the two component quantum grand partition function is reviewed. It is shown that ionization equilibrium and molecular dissociation equilibrium can be treated in the same manner with proper consideration of all two-body states. A quantum perturbation expansion is used to give an accurate calculation of the equation of state of the gas for any degree of dissociation and ionization. In this theory, the effective interaction between any two charges is the dynamic screened potential obtained from the plasma dielectric function. We make the static approximation; and we carry out detailed numerical calculations with the bound and scattering states of the Debye potential, using the Beth-Uhlenbeck form of the quantum second virial coefficient. We compare our results with calculations from the Saha equation.
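The closing comparison with the Saha equation can be made concrete with a minimal sketch. The snippet below solves the textbook Saha ionization balance for pure hydrogen, with simplified statistical weights and no pressure-ionization or virial corrections; it is only the baseline that the quantum virial treatment in the paper improves upon, not the authors' method.

```python
import math

def saha_ionization_fraction(T, n_total):
    """Solve the textbook Saha equation for pure hydrogen.

    T in kelvin, n_total = n_HI + n_p in m^-3.
    Returns x = n_p / n_total, assuming n_e = n_p and a
    simplified ratio of statistical weights equal to 1.
    """
    k_B = 1.380649e-23            # Boltzmann constant, J/K
    m_e = 9.1093837015e-31        # electron mass, kg
    h = 6.62607015e-34            # Planck constant, J s
    chi = 13.6 * 1.602176634e-19  # hydrogen ionization energy, J

    # Saha right-hand side: n_p * n_e / n_HI = S
    S = (2 * math.pi * m_e * k_B * T / h**2) ** 1.5 * math.exp(-chi / (k_B * T))
    # With n_e = n_p = x*n and n_HI = (1-x)*n:  x^2 / (1-x) = S / n
    a = S / n_total
    # Positive root of x^2 + a*x - a = 0
    return (-a + math.sqrt(a * a + 4 * a)) / 2

# Photosphere-like conditions: mostly neutral
x_cool = saha_ionization_fraction(6000.0, 1e23)
# Hotter plasma: essentially fully ionized
x_hot = saha_ionization_fraction(30000.0, 1e23)
```

The sharp switch between the two regimes is what any more complete equation of state, such as the one reviewed in the record above, must reproduce in the low-density limit.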
NASA Astrophysics Data System (ADS)
Miswan, M. A.; Gopir, G.; Anas, M. M.
2016-11-01
Geometry optimization is one of the most widely used methods for studying carbon clusters Cn to understand their structural properties. The total energy of each structure was calculated using the Octopus software with the conjugate-gradient Broyden-Fletcher-Goldfarb-Shanno (CG-BFGS) method. Our calculation and other studies indicate that the linear forms are the most stable structures. However, the C3 isomers are equally likely to form, as the total-energy differences in our calculation are statistically insignificant. Although the total energies fall into two groups, the calculations are acceptable because the C3-to-C2 and C2-to-C1 energy ratios are comparable to other work. Meanwhile, the bond properties of the C2 and C3 bonds also show significant differences between our work and previous studies.
NASA Astrophysics Data System (ADS)
Iguchi, Kazumoto
We discuss the statistical mechanical foundation for the two-state transition in the protein folding of small globular proteins. In the standard arguments of protein folding, the statistical search for the ground state is carried out over astronomically many conformations in configuration space, which leads to the famous Levinthal paradox. To resolve the paradox, Gō first postulated that the two-state (all-or-none) transition is crucial for the protein folding of small globular proteins and used his lattice model to demonstrate its two-state nature. Recently, many experimental results supporting the two-state transition for small globular proteins have accumulated. Stimulated by such experiments, Zwanzig introduced a minimal statistical mechanical model that exhibits the two-state transition. Also, Finkelstein and coworkers have discussed a solution of the paradox by considering the sequential folding of a small globular protein. On the other hand, Iguchi has recently introduced a toy model of protein folding using the Rubik's magic snake, in which all folded structures are exactly known and mathematically represented in terms of four types of conformations between the unit polyhedrons: cis-, trans-, left-gauche and right-gauche. In this paper, we study the relationship between Gō's two-state transition, Zwanzig's statistical mechanics model and Finkelstein's sequential folding model by applying them to the Rubik's magic snake models. We show that the foundation of Gō's two-state transition model relies on the search within the equienergy surface that is labeled by the contact order of the hydrophobic condensation.
This idea reproduces Zwanzig's statistical model as a special case, realizes Finkelstein's sequential folding model, and ties these approaches together in understanding the nature of the two-state transition of a small globular protein through calculation of physical quantities such as the free energy, the contact order and the specific heat. We point out the similarity between the liquid-gas transition in statistical mechanics and the two-state transition of protein folding. We also study the morphology of the Rubik's magic snake models to give a prototype model for understanding the differences between α-helix and β-sheet proteins.
NASA Astrophysics Data System (ADS)
Le Gal, R.; Xie, C.; Herbst, E.; Talbi, D.; Guo, H.; Muller, S.
2017-12-01
Multi-hydrogenated species with proper symmetry properties can present different spin configurations, and thus exist under different spin symmetry forms, labeled as para and ortho for two-hydrogen molecules. We investigated here the ortho-to-para ratio (OPR) of H2Cl+ in the light of new observations performed in the z = 0.89 absorber toward the lensed quasar PKS 1830-211 with the Atacama Large Millimeter/submillimeter Array (ALMA). Two independent lines of sight were observed, to the southwest (SW) and northeast (NE) images of the quasar, with OPR values found to be 3.15 ± 0.13 and 3.1 ± 0.5 in each region, respectively, in agreement with a spin statistical weight of 3:1. An OPR of 3:1 for a molecule containing two identical hydrogen nuclei can refer to either a statistical result or a high-temperature limit depending on the reaction mechanism leading to its formation. It is thus crucial to identify rigorously how OPRs are produced in order to constrain the information that these probes can provide. To understand the production of the H2Cl+ OPR, we undertook a careful theoretical study of the reaction mechanisms involved with the aid of quasi-classical trajectory calculations on a new global potential energy surface fit to a large number of high-level ab initio data. Our study shows that the major formation reaction for H2Cl+ produces this ion via a hydrogen abstraction rather than a scrambling mechanism. Such a mechanism leads to a 3:1 OPR, which is not changed by destruction and possible thermalization reactions for H2Cl+ and is thus likely to be the cause of observed 3:1 OPR ratios, contrary to the normal assumption of scrambling.
Quantum signature of chaos and thermalization in the kicked Dicke model
NASA Astrophysics Data System (ADS)
Ray, S.; Ghosh, A.; Sinha, S.
2016-09-01
We study the quantum dynamics of the kicked Dicke model (KDM) in terms of the Floquet operator, and we analyze the connection between chaos and thermalization in this context. The Hamiltonian map is constructed by suitably taking the classical limit of the Heisenberg equation of motion to study the corresponding phase-space dynamics, which shows a crossover from regular to chaotic motion by tuning the kicking strength. The fixed-point analysis and calculation of the Lyapunov exponent (LE) provide us with a complete picture of the onset of chaos in phase-space dynamics. We carry out a spectral analysis of the Floquet operator, which includes a calculation of the quasienergy spacing distribution and structural entropy to show the correspondence to the random matrix theory in the chaotic regime. Finally, we analyze the thermodynamics and statistical properties of the bosonic sector as well as the spin sector, and we discuss how such a periodically kicked system relaxes to a thermalized state in accordance with the laws of statistical mechanics.
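The correspondence to random matrix theory in the chaotic regime can be sketched numerically. The snippet below draws Haar-random unitaries (the circular unitary ensemble, used here as a stand-in assumption for the Floquet operator of a fully chaotic kicked system, not the actual KDM operator) and computes the nearest-neighbour quasienergy spacing distribution, whose level repulsion distinguishes chaotic from regular (Poisson) dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(n, rng):
    """Haar-random unitary (CUE) via QR of a complex Ginibre matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases so the measure is Haar

def quasienergy_spacings(u):
    """Nearest-neighbour spacings of the eigenphases, normalized to unit mean."""
    phases = np.sort(np.angle(np.linalg.eigvals(u)))
    # Spacings on the circle, including the wrap-around gap
    s = np.diff(np.concatenate([phases, [phases[0] + 2 * np.pi]]))
    return s / s.mean()

s = np.concatenate([quasienergy_spacings(random_unitary(200, rng))
                    for _ in range(20)])
# Level repulsion: almost no tiny spacings, unlike Poisson statistics
# where the spacing density stays finite at s = 0.
frac_small = np.mean(s < 0.1)
```

For a regular (integrable) kicked system the same diagnostic applied to its Floquet spectrum would give roughly ten times as many spacings below 0.1.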
Computational simulation of the creep-rupture process in filamentary composite materials
NASA Technical Reports Server (NTRS)
Slattery, Kerry T.; Hackett, Robert M.
1991-01-01
A computational simulation of the internal damage accumulation which causes the creep-rupture phenomenon in filamentary composite materials is developed. The creep-rupture process involves complex interactions between several damage mechanisms. A statistically-based computational simulation using a time-differencing approach is employed to model these progressive interactions. The finite element method is used to calculate the internal stresses. The fibers are modeled as a series of bar elements which are connected transversely by matrix elements. Flaws are distributed randomly throughout the elements in the model. Load is applied, and the properties of the individual elements are updated at the end of each time step as a function of the stress history. The simulation is continued until failure occurs. Several cases, with different initial flaw dispersions, are run to establish a statistical distribution of the time-to-failure. The calculations are performed on a supercomputer. The simulation results compare favorably with the results of creep-rupture experiments conducted at the Lawrence Livermore National Laboratory.
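A heavily simplified sketch of this kind of statistically-based, time-differenced damage simulation is given below. It replaces the finite-element model with an equal-load-sharing fiber bundle and an assumed power-law breakdown rule (both assumptions, not the paper's model); repeating the run with different random seeds yields a time-to-failure distribution, mirroring the paper's use of different initial flaw dispersions.

```python
import numpy as np

def time_to_rupture(n_fibers, load_per_fiber, rho, dt, rng):
    """Equal-load-sharing fiber bundle under constant total load.

    In each time step, every surviving fiber fails with probability
    dt * stress**rho (a simple power-law breakdown rule). As fibers
    fail, the load redistributes to the survivors, accelerating damage.
    Returns the time at which all fibers have failed.
    """
    total_load = n_fibers * load_per_fiber
    alive = n_fibers
    t = 0.0
    while alive > 0:
        stress = total_load / alive          # load shared by survivors
        p = min(1.0, dt * stress ** rho)
        alive -= rng.binomial(alive, p)      # random failures this step
        t += dt
    return t

rng = np.random.default_rng(0)
times = np.array([time_to_rupture(400, 0.5, rho=4.0, dt=0.01, rng=rng)
                  for _ in range(200)])
# Scatter across runs gives the statistical time-to-failure distribution.
mean_t, std_t = times.mean(), times.std()
```

The continuum limit of this rule gives a rupture time of (1/stress)^rho / rho, about 4 time units for the parameters above, so the Monte Carlo mean should cluster near that value.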
Equation of state of detonation products based on statistical mechanical theory
NASA Astrophysics Data System (ADS)
Zhao, Yanhong; Liu, Haifeng; Zhang, Gongmu; Song, Haifeng
2015-06-01
The equation of state (EOS) of gaseous detonation products is calculated using Ross's modification of hard-sphere variational theory and the improved one-fluid van der Waals mixture model. The condensed phase of carbon is a mixture of graphite, diamond, graphite-like liquid and diamond-like liquid. For a mixed system of detonation products, the free energy minimization principle is used to calculate the equilibrium compositions of detonation products by solving chemical equilibrium equations. A chemical equilibrium code was developed based on this theory and applied to the following typical calculations: (i) calculation of the detonation parameters of explosives, for which the calculated detonation velocity, pressure and temperature are in good agreement with experimental values; and (ii) calculation of the isentropic unloading line of RDX explosive starting from the CJ point, for which comparison with the JWL EOS shows that the calculated gamma decreases monotonically under the present theory, whereas a double-peak phenomenon appears with the JWL EOS.
Equation of state of detonation products based on statistical mechanical theory
NASA Astrophysics Data System (ADS)
Zhao, Yanhong; Liu, Haifeng; Zhang, Gongmu; Song, Haifeng; Iapcm Team
2013-06-01
The equation of state (EOS) of gaseous detonation products is calculated using Ross's modification of hard-sphere variational theory and the improved one-fluid van der Waals mixture model. The condensed phase of carbon is a mixture of graphite, diamond, graphite-like liquid and diamond-like liquid. For a mixed system of detonation products, the free energy minimization principle is used to calculate the equilibrium compositions of detonation products by solving chemical equilibrium equations. A chemical equilibrium code was developed based on this theory and applied to the following typical calculations: (i) calculation of the detonation parameters of explosives, for which the calculated detonation velocity, pressure and temperature are in good agreement with experimental values; and (ii) calculation of the isentropic unloading line of RDX explosive starting from the CJ point, for which comparison with the JWL EOS shows that the calculated gamma decreases monotonically under the present theory, whereas a double-peak phenomenon appears with the JWL EOS.
Methodological reporting of randomized trials in five leading Chinese nursing journals.
Shi, Chunhu; Tian, Jinhui; Ren, Dan; Wei, Hongli; Zhang, Lihuan; Wang, Quan; Yang, Kehu
2014-01-01
Randomized controlled trials (RCTs) are not always well reported, especially in terms of their methodological descriptions. This study aimed to investigate the adherence of methodological reporting complying with CONSORT and explore associated trial level variables in the Chinese nursing care field. In June 2012, we identified RCTs published in five leading Chinese nursing journals and included trials with details of randomized methods. The quality of methodological reporting was measured through the methods section of the CONSORT checklist and the overall CONSORT methodological items score was calculated and expressed as a percentage. Meanwhile, we hypothesized that some general and methodological characteristics were associated with reporting quality and conducted a regression with these data to explore the correlation. The descriptive and regression statistics were calculated via SPSS 13.0. In total, 680 RCTs were included. The overall CONSORT methodological items score was 6.34 ± 0.97 (Mean ± SD). No RCT reported descriptions and changes in "trial design," changes in "outcomes" and "implementation," or descriptions of the similarity of interventions for "blinding." Poor reporting was found in detailing the "settings of participants" (13.1%), "type of randomization sequence generation" (1.8%), calculation methods of "sample size" (0.4%), explanation of any interim analyses and stopping guidelines for "sample size" (0.3%), "allocation concealment mechanism" (0.3%), additional analyses in "statistical methods" (2.1%), and targeted subjects and methods of "blinding" (5.9%). More than 50% of trials described randomization sequence generation, the eligibility criteria of "participants," "interventions," and definitions of the "outcomes" and "statistical methods." The regression analysis found that publication year and ITT analysis were weakly associated with CONSORT score. 
The completeness of methodological reporting of RCTs in the Chinese nursing care field is poor, especially with regard to the reporting of trial design, changes in outcomes, sample size calculation, allocation concealment, blinding, and statistical methods.
Connes distance function on fuzzy sphere and the connection between geometry and statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devi, Yendrembam Chaoba, E-mail: chaoba@bose.res.in; Chakraborty, Biswajit, E-mail: biswajit@bose.res.in; Prajapat, Shivraj, E-mail: shraprajapat@gmail.com
An algorithm to compute the Connes spectral distance, adaptable to the Hilbert-Schmidt operatorial formulation of non-commutative quantum mechanics, was developed earlier by introducing the appropriate spectral triple and used to compute infinitesimal distances in the Moyal plane, revealing a deep connection between geometry and statistics. In this paper, using the same algorithm, the Connes spectral distance has been calculated in the Hilbert-Schmidt operatorial formulation for the fuzzy sphere whose spatial coordinates satisfy the su(2) algebra. This has been computed for both the discrete and the Perelomov SU(2) coherent states. Here also, we get a connection between geometry and statistics, which is shown by computing the infinitesimal distance between mixed states on the quantum Hilbert space of a particular fuzzy sphere, indexed by n ∈ ℤ/2.
University of California Conference on Statistical Mechanics (4th) Held March 26-28, 1990
1990-03-28
and S. Lago, Chem. Phys., Z, 5750 (1983) Shear Viscosity Calculation via Equilibrium Molecular Dynamics: Einsteinian vs. Green-Kubo Formalism by Adel A...through the application of the Green-Kubo approach. Although the theoretical equivalence between both formalisms was demonstrated by Helfand [3], their...like equations and of different expressions based on the Green-Kubo formalism. In contrast to Hoheisel and Vogelsang's conclusions [2], we find that
Scout trajectory error propagation computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1982-01-01
Since 1969, flight experience has been used as the basis for predicting Scout orbital accuracy. The data used for calculating the accuracy consist of errors in the trajectory parameters (altitude, velocity, etc.) at stage burnout as observed on Scout flights. Approximately 50 sets of errors are used in a Monte Carlo analysis to generate error statistics in the trajectory parameters. A covariance matrix is formed which may be propagated in time. The mechanization of this process resulted in the computer program Scout Trajectory Error Propagation (STEP), which is described herein. Computer program STEP may be used in conjunction with the Statistical Orbital Analysis Routine to generate accuracy in the orbit parameters (apogee, perigee, inclination, etc.) based upon flight experience.
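The core of the approach, forming a covariance matrix from observed burnout errors and propagating it in time, can be sketched as follows. The error samples and the state transition matrix below are synthetic stand-ins, not Scout flight data or the actual STEP dynamics.

```python
import numpy as np

rng = np.random.default_rng(42)

# ~50 sets of burnout errors in [altitude (m), velocity (m/s),
# flight-path angle (rad)]; synthetic stand-ins for flight observations.
errors = rng.multivariate_normal(
    mean=[0.0, 0.0, 0.0],
    cov=[[900.0, 30.0, 0.0],
         [30.0, 4.0, 0.001],
         [0.0, 0.001, 1e-6]],
    size=50)

# Sample covariance of the trajectory-parameter errors at burnout
P0 = np.cov(errors, rowvar=False)

# Linear propagation P(t) = Phi P0 Phi^T with a hypothetical
# state transition matrix Phi for one time step.
Phi = np.array([[1.0, 10.0, 0.0],
                [0.0, 1.0, 0.5],
                [0.0, 0.0, 1.0]])
P1 = Phi @ P0 @ Phi.T
```

A Monte Carlo alternative is to propagate each of the 50 error vectors individually and recompute the covariance at the later time; for linear dynamics the two procedures agree.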
Multifractality and freezing phenomena in random energy landscapes: An introduction
NASA Astrophysics Data System (ADS)
Fyodorov, Yan V.
2010-10-01
We start our lectures by introducing and discussing the general notion of the multifractality spectrum for random measures on lattices, and how it can be probed using moments of that measure. Then we show that the Boltzmann-Gibbs probability distributions generated by logarithmically correlated random potentials provide a simple yet non-trivial example of disorder-induced multifractal measures. The typical values of the multifractality exponents can be extracted from calculating the free energy of the associated statistical mechanics problem. To succeed in such a calculation we introduce and discuss in some detail two analytically tractable models for logarithmically correlated potentials. The first model uses a special definition of distances between points in space and is based on the idea of multiplicative cascades, which originated in the theory of turbulent motion. It is essentially equivalent to the statistical mechanics of directed polymers on disordered trees studied long ago by Derrida and Spohn (1988) in Ref. [12]. In this way we introduce the notion of the freezing transition, which is identified with an abrupt change in the multifractality spectrum. The second model, which allows for explicit analytical evaluation of the free energy, is the infinite-dimensional version of the problem, which can be solved by employing the replica trick. In particular, the latter version allows one to identify the freezing phenomenon with a mechanism of replica symmetry breaking (RSB) and to elucidate its physical meaning. The corresponding one-step RSB solution turns out to be marginally stable everywhere in the low-temperature phase. We finish with a short discussion of recent developments and extensions of models with logarithmic correlations, in particular in the context of extreme value statistics.
The first appendix summarizes the standard elementary information about Gaussian integrals and related subjects, and introduces the notion of the Gaussian free field characterized by logarithmic correlations. Three other appendices provide the detailed exposition of a few technical details underlying the replica analysis of the model discussed in the lectures.
Hagenfeld, Daniel; Koch, Raphael; Jünemann, Sebastian; Prior, Karola; Harks, Inga; Eickholz, Peter; Hoffmann, Thomas; Kim, Ti-Sun; Kocher, Thomas; Meyle, Jörg; Kaner, Doğan; Schlagenhauf, Ulrich; Ehmke, Benjamin; Harmsen, Dag
2018-01-01
Empiric antibiotics are often used in combination with mechanical debridement to treat patients suffering from periodontitis and to eliminate disease-associated pathogens. Until now, only a few next generation sequencing 16S rDNA amplicon based publications with rather small sample sizes studied the effect of those interventions on the subgingival microbiome. Therefore, we studied subgingival samples of 89 patients with chronic periodontitis (solely non-smokers) before and two months after therapy. Forty-seven patients received mechanical periodontal therapy only, whereas 42 patients additionally received orally administered amoxicillin plus metronidazole (500 and 400 mg, respectively; 3x/day for 7 days). Samples were sequenced with Illumina MiSeq 300 base pairs paired end technology (V3 and V4 hypervariable regions of the 16S rDNA). Inter-group differences before and after therapy of clinical variables (percentage of sites with pocket depth ≥ 5mm, percentage of sites with bleeding on probing) and microbiome variables (diversity, richness, evenness, and dissimilarity) were calculated, a principal coordinate analysis (PCoA) was conducted, and differential abundance of agglomerated ribosomal sequence variants (aRSVs) classified at genus level was calculated using a negative binomial regression model. After therapy we found statistically noticeably decreased richness and increased dissimilarity in the antibiotic group, but not in the placebo group. The PCoA revealed a clear compositional separation of microbiomes after therapy in the antibiotic group, which could not be seen in the group receiving mechanical therapy only. This difference was even more pronounced at the aRSV level.
Here, adjunctive antibiotics were able to induce a microbiome shift by statistically noticeably reducing aRSVs belonging to genera containing disease-associated species, e.g., Porphyromonas, Tannerella, Treponema, and Aggregatibacter, and by noticeably increasing genera containing health-associated species. Mechanical therapy alone did not statistically noticeably affect any disease-associated taxa. Despite the difference in microbiome modulation both therapies improved the tested clinical parameters after two months. These results cast doubt on the relevance of the elimination and/or reduction of disease-associated taxa as a main goal of periodontal therapy.
Ablation effects in oxygen-lead fragmentation at 2.1 GeV/nucleon
NASA Technical Reports Server (NTRS)
Townsend, L. W.
1984-01-01
The mechanism of particle evaporation was used to examine ablation effects in the fragmentation of 2.1 GeV/nucleon oxygen nuclei by lead targets. Following the initial abrasion process, the excited projectile prefragment is assumed to statistically decay in a manner analogous to that of a compound nucleus. The decay probabilities for the various particle emission channels are calculated by using the EVAP-4 Monte Carlo computer program. The input excitation energy spectrum for the prefragment is estimated from the geometric "clean-cut" abrasion-ablation model. Isotope production cross sections are calculated and compared with experimental data and with the predictions from the standard geometric abrasion-ablation fragmentation model.
Calculations of the surface tensions of liquid metals
NASA Technical Reports Server (NTRS)
Stroud, D. G.
1981-01-01
The understanding of the surface tension of liquid metals and alloys from as close to first principles as possible is discussed. Two ingredients are combined in these calculations: the electron theory of metals and the classical theory of liquids, as worked out within the framework of statistical mechanics. The result is a new theory of surface tensions and surface density profiles based purely on knowledge of the bulk properties of the coexisting liquid and vapor phases. The method is found to work well for the pure liquid metals on which it was tested; the work is extended to mixtures of liquid metals, interfaces between immiscible liquid metals, and the temperature derivative of the surface tension.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Statistics. 1065.602 Section 1065.602... PROCEDURES Calculations and Data Requirements § 1065.602 Statistics. (a) Overview. This section contains equations and example calculations for statistics that are specified in this part. In this section we use...
Calculation of streamflow statistics for Ontario and the Great Lakes states
Piggott, Andrew R.; Neff, Brian P.
2005-01-01
Basic, flow-duration, and n-day frequency statistics were calculated for 779 current and historical streamflow gages in Ontario and 3,157 streamflow gages in the Great Lakes states with length-of-record daily mean streamflow data ending on December 31, 2000 and September 30, 2001, respectively. The statistics were determined using the U.S. Geological Survey’s SWSTAT and IOWDM, ANNIE, and LIBANNE software and Linux shell and PERL programming that enabled the mass processing of the data and calculation of the statistics. Verification exercises were performed to assess the accuracy of the processing and calculations. The statistics and descriptions, longitudes and latitudes, and drainage areas for each of the streamflow gages are summarized in ASCII text files and ESRI shapefiles.
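The basic, flow-duration, and n-day statistics named above can be sketched with plain NumPy on a synthetic daily record. This illustrates the definitions of the statistics only; it is not the SWSTAT/ANNIE implementation, and the lognormal series below is an assumed stand-in for gage data.

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic daily mean streamflow record (m^3/s), ten years of data
q = rng.lognormal(mean=1.0, sigma=0.8, size=3650)

# Basic statistics
basic = {"mean": q.mean(), "median": np.median(q),
         "max": q.max(), "min": q.min()}

# Flow-duration statistics: the discharge equalled or exceeded
# p percent of the time is the (100 - p)th percentile of daily flows.
exceedance = {p: np.percentile(q, 100 - p) for p in (5, 10, 50, 90, 95)}

# n-day statistic: annual minima of the 7-day moving-average flow,
# the basis of low-flow frequency indices such as the 7Q10.
window = np.convolve(q, np.ones(7) / 7, mode="valid")
years = np.array_split(window, 10)
annual_7day_min = np.array([y.min() for y in years])
```

Fitting a probability distribution to `annual_7day_min` and reading off the 10-year recurrence quantile would complete the n-day frequency calculation.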
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2016-01-01
This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bound formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate output quantity of interest probability densities as a means to assess the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.
Statistics, Formation and Stability of Exoplanetary Systems
NASA Astrophysics Data System (ADS)
Silburt, Ari
Over the past two decades scientists have detected thousands of exoplanets, and their collective properties are now emerging. This thesis contributes to the exoplanet field by analyzing the statistics, formation and stability of exoplanetary systems. The first part of this thesis conducts a statistical reconstruction of the radius and period distributions of Kepler planets. Accounting for observation and detection biases, as well as measurement errors, we calculate the occurrence of planetary systems, including the prevalence of Earth-like planets. This calculation is compared to related works, finding both similarities and differences. Second, the formation of Kepler planets near mean motion resonance (MMR) is investigated. In particular, 27 Kepler systems near 2:1 MMR are analyzed to determine whether tides are a viable mechanism for transporting Kepler planets from MMR. We find that tides alone cannot transport near-resonant planets from exact 2:1 MMR to their observed locations, and other mechanisms must be invoked to explain their formation. Third, a new hybrid integrator HERMES is presented, which is capable of simulating N bodies undergoing close encounters. HERMES is specifically designed for planets embedded in planetesimal disks, and includes an adaptive routine for optimizing the close-encounter boundary to help maintain accuracy. We find the performance of HERMES comparable to other popular hybrid integrators. Fourth, the long-term stability of planetary systems is investigated using machine learning techniques. Typical studies of long-term stability require thousands of realizations to acquire statistically rigorous results, which can take weeks or months to perform. Here we find that a trained machine is capable of quickly and accurately classifying long-term planet stability. Finally, the planetary system HD155358, consisting of two Jovian-sized planets near 2:1 MMR, is investigated using previously collected radial velocity data.
New orbital parameters are derived using a Bayesian framework, and we find a high likelihood that the planets are in MMR. In addition, formation and stability constraints are placed on the HD155358 system.
Applied statistics in ecology: common pitfalls and simple solutions
E. Ashley Steel; Maureen C. Kennedy; Patrick G. Cunningham; John S. Stanovick
2013-01-01
The most common statistical pitfalls in ecological research are those associated with data exploration, the logic of sampling and design, and the interpretation of statistical results. Although one can find published errors in calculations, the majority of statistical pitfalls result from incorrect logic or interpretation despite correct numerical calculations. There...
1988-01-01
ignored but the Volkersen model is extended to include adherend deformations will be discussed. STATISTICAL METHODOLOGY FOR DESIGN ALLOWABLES [15-17...structure. In the certification methodology, the development test program and the calculation of composite design allowables is orchestrated to support...Development of design methodology of thick composites and their test methods. (b) Role of interface in emerging composite systems. *CONTRACTS IMPROVED DAMAGE
[Study of beta-turns in globular proteins].
Amirova, S R; Milchevskiĭ, Iu V; Filatov, I V; Esipova, N G; Tumanian, V G
2005-01-01
The formation of beta-turns in globular proteins has been studied by the method of molecular mechanics. The statistical method of discriminant analysis was applied to the calculated energy components and sequences of oligopeptide segments, and on this basis a prediction of type I beta-turns was made. The accuracy of true positive prediction is 65%. The components of conformational energy that considerably affect beta-turn formation were delineated: the torsional energy, the energy of hydrogen bonds, and the van der Waals energy.
A look inside the actuarial black box.
Math, S E; Youngerman, H
1992-12-01
Hospital executives often rely on actuaries (and their "black boxes") to determine self-insurance program liabilities and funding contributions. Typically, the hospital supplies the actuary with a myriad of statistics, and eventually the hospital receives a liability estimate and recommended funding level. The mysterious actuarial calculations that occur in between data reporting and receipt of the actuary's report are akin to a black box--a complicated device whose internal mechanism is hidden from or mysterious to the user.
ZERODUR - bending strength: review of achievements
NASA Astrophysics Data System (ADS)
Hartmann, Peter
2017-08-01
Increased demand for using the glass ceramic ZERODUR® under high mechanical loads has called for strength data based on larger statistical samples. Design calculations for a failure probability target below 1:100 000 cannot be made reliable with parameters derived from 20-specimen samples. The data now available for a variety of surface conditions, ground with different grain sizes and acid-etched for full micro-crack removal, allow stresses four to ten times higher than before. The large sample revealed that breakage stresses of ground surfaces follow the three-parameter Weibull distribution instead of the two-parameter version. This is more reasonable considering that the micro cracks of such surfaces have a maximum depth, which is reflected in the existence of a threshold breakage stress below which the breakage probability is zero. This minimum strength allows minimum lifetimes to be calculated. Fatigue under load can be taken into account by using the stress corrosion coefficient for the actual environmental humidity. For fully etched surfaces Weibull statistics fails: the precondition of the Weibull distribution, the existence of one unique failure mechanism, is no longer given. ZERODUR® with fully etched surfaces free from damage introduced after etching easily endures 100 MPa tensile stress. The possibility of using ZERODUR® for combined high-precision and high-stress applications was confirmed by the successful launch and continuing operation of LISA Pathfinder, the precursor experiment for the gravitational wave antenna satellite array eLISA.
Demonstration and resolution of the Gibbs paradox of the first kind
NASA Astrophysics Data System (ADS)
Peters, Hjalmar
2014-01-01
The Gibbs paradox of the first kind (GP1) refers to the false increase in entropy which, in statistical mechanics, is calculated from the process of combining two gas systems S1 and S2 consisting of distinguishable particles. Presented in a somewhat modified form, the GP1 manifests as a contradiction to the second law of thermodynamics. Contrary to popular belief, this contradiction affects not only classical but also quantum statistical mechanics. This paper resolves the GP1 by considering two effects. (i) The uncertainty about which particles are located in S1 and which in S2 contributes to the entropies of S1 and S2. (ii) S1 and S2 are correlated by the fact that if a certain particle is located in one system, it cannot be located in the other. As a consequence, the entropy of the total system consisting of S1 and S2 is not the sum of the entropies of S1 and S2.
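A toy numerical check of the counting behind effect (i), offered as an illustration only and not as the paper's argument: for distinguishable particles, the apparent entropy increase on combining two equal-density gases matches, for large N, the particle-assignment uncertainty ln C(N1+N2, N1) about which particles sit in which subsystem.

```python
import math

def apparent_mixing_entropy(n1, n2):
    """Apparent entropy increase (in units of k) from combining two
    equal-density gases of n1 and n2 distinguishable particles, as
    computed by naive classical statistical mechanics."""
    n = n1 + n2
    return n * math.log(n) - n1 * math.log(n1) - n2 * math.log(n2)

def ln_binomial(n1, n2):
    """ln C(n1+n2, n1): uncertainty about which particles occupy which
    subsystem, computed exactly via log-gamma (no Stirling needed)."""
    return (math.lgamma(n1 + n2 + 1)
            - math.lgamma(n1 + 1) - math.lgamma(n2 + 1))
```

For large, equal subsystems the two quantities agree to leading order (both approach 2N ln 2), which is the numerical face of the bookkeeping the abstract describes.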
Principle of maximum entropy for reliability analysis in the design of machine components
NASA Astrophysics Data System (ADS)
Zhang, Yimin
2018-03-01
We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
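A minimal sketch of the PME step on a discrete support (a textbook Jaynes-style example; the paper's component models and state functions are not reproduced here): among all distributions with a prescribed mean, the entropy maximizer is exponential in the constraint, with the Lagrange multiplier found by bisection.

```python
import math

def maxent_distribution(values, target_mean):
    """Maximum-entropy distribution over `values` with a prescribed mean.
    The maximizer has the form p_i ~ exp(-lam * x_i); bisect for lam."""
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z

    lo, hi = -50.0, 50.0          # mean_for(lam) is decreasing in lam
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# Classic die example: faces 1..6 constrained to an observed mean of 4.5
p = maxent_distribution([1, 2, 3, 4, 5, 6], target_mean=4.5)
```

With a mean above the uniform value 3.5, the maximum-entropy weights increase monotonically toward the high faces, introducing no structure beyond what the constraint demands, which is the "fewest human bias factors" property the abstract invokes.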
Molecular vibrational energy flow
NASA Astrophysics Data System (ADS)
Gruebele, M.; Bigwood, R.
This article reviews some recent work in molecular vibrational energy flow (IVR), with emphasis on our own computational and experimental studies. We consider the problem in various representations, and use these to develop a family of simple models which combine specific molecular properties (e.g. size, vibrational frequencies) with statistical properties of the potential energy surface and wavefunctions. This marriage of molecular detail and statistical simplification captures trends of IVR mechanisms and survival probabilities beyond the abilities of purely statistical models or the computational limitations of full ab initio approaches. Of particular interest is IVR in the intermediate time regime, where heavy-atom skeletal modes take over the IVR process from hydrogenic motions even upon X-H bond excitation. Experiments and calculations on prototype heavy-atom systems show that intermediate time IVR differs in many aspects from the early stages of hydrogenic mode IVR. As a result, IVR can be coherently frozen, with potential applications to selective chemistry.
Stavileci, Miranda; Hoxha, Veton; Görduysus, Ömer; Tatar, Ilkan; Laperre, Kjell; Hostens, Jeroen; Küçükkaya, Selen; Muhaxheri, Edmond
2015-01-01
Background: Complete mechanical preparation of the root canal system is rarely achieved. Therefore, the purpose of this study was to evaluate and compare the root canal shaping efficacy of ProTaper rotary files and standard stainless steel K-files using micro-computed tomography. Material/Methods: Sixty extracted upper second premolars were selected and divided into 2 groups of 30 teeth each. Before preparation, all samples were scanned by micro-computed tomography. Thirty teeth were prepared with the ProTaper system and the other 30 with stainless steel files. After preparation, the untouched surface and root canal straightening were evaluated with micro-computed tomography. The percentage of untouched root canal surface was calculated in the coronal, middle, and apical parts of the canal. We also calculated straightening of the canal after root canal preparation. Results from the 2 groups were statistically compared using the Minitab statistical package. Results: ProTaper rotary files left less untouched root canal surface compared with manual preparation in the coronal, middle, and apical sectors (p<0.001). Similarly, there was a statistically significant difference in root canal straightening after preparation between the techniques (p<0.001). Conclusions: Neither manual nor rotary techniques completely prepared the root canal, and both techniques caused slight straightening of the root canal. PMID:26092929
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tratnyek, Paul G.; Bylaska, Eric J.; Weber, Eric J.
2017-01-01
Quantitative structure–activity relationships (QSARs) have long been used in the environmental sciences. More recently, molecular modeling and chemoinformatic methods have become widespread. These methods have the potential to expand and accelerate advances in environmental chemistry because they complement observational and experimental data with “in silico” results and analysis. The opportunities and challenges that arise at the intersection between statistical and theoretical in silico methods are most apparent in the context of properties that determine the environmental fate and effects of chemical contaminants (degradation rate constants, partition coefficients, toxicities, etc.). The main example of this is the calibration of QSARs using descriptor variable data calculated from molecular modeling, which can make QSARs more useful for predicting property data that are unavailable, but also can make them more powerful tools for diagnosis of fate determining pathways and mechanisms. Emerging opportunities for “in silico environmental chemical science” are to move beyond the calculation of specific chemical properties using statistical models and toward more fully in silico models, prediction of transformation pathways and products, incorporation of environmental factors into model predictions, integration of databases and predictive models into more comprehensive and efficient tools for exposure assessment, and extending the applicability of all the above from chemicals to biologicals and materials.
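The QSAR-calibration step described above, regressing measured properties on computed descriptors, can be sketched with ordinary least squares. The descriptor columns and measured responses below are invented for illustration, not taken from any real dataset:

```python
import numpy as np

# Hypothetical in-silico descriptors (rows: compounds; columns: intercept,
# then e.g. a computed orbital energy and a computed partition coefficient)
# and hypothetical measured log rate constants for calibration.
X = np.array([[1.0, -0.9, 2.1],
              [1.0, -0.4, 1.3],
              [1.0, -1.2, 3.0],
              [1.0, -0.1, 0.7]])   # leading 1s give the intercept term
y = np.array([-2.1, -1.0, -2.9, -0.3])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares calibration
y_pred = X @ coef

def predict(descriptors):
    """Predict the property for a new compound from computed descriptors."""
    return np.array([1.0, *descriptors]) @ coef
```

Once calibrated, the same coefficients predict properties for compounds with no measured data, which is exactly the "property data that are unavailable" use case in the abstract.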
Statistical analysis of magnetically soft particles in magnetorheological elastomers
NASA Astrophysics Data System (ADS)
Gundermann, T.; Cremer, P.; Löwen, H.; Menzel, A. M.; Odenbach, S.
2017-04-01
The physical properties of magnetorheological elastomers (MRE) are a complex issue and can be influenced and controlled in many ways, e.g. by applying a magnetic field, by external mechanical stimuli, or by an electric potential. In general, the response of MRE materials to these stimuli is crucially dependent on the distribution of the magnetic particles inside the elastomer. Specific knowledge of the interactions between particles or particle clusters is of high relevance for understanding the macroscopic rheological properties and provides an important input for theoretical calculations. In order to gain a better insight into the correlation between the macroscopic effects and microstructure and to generate a database for theoretical analysis, x-ray micro-computed tomography (X-μCT) investigations as a base for a statistical analysis of the particle configurations were carried out. Different MREs with quantities of 2-15 wt% (0.27-2.3 vol%) of iron powder and different allocations of the particles inside the matrix were prepared. The X-μCT results were processed with image-processing software to extract the geometrical properties of the particles with and without the influence of an external magnetic field. Pair correlation functions for the positions of the particles inside the elastomer were calculated to statistically characterize the distributions of the particles in the samples.
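The pair correlation analysis mentioned above can be sketched for a periodic box of point particles; for a random (ideal-gas-like) configuration, g(r) should scatter around 1. This is a generic g(r) estimator, not the authors' processing pipeline:

```python
import numpy as np

def pair_distances(pos, box):
    """Minimum-image pair distances for particles in a cubic periodic box."""
    n = len(pos)
    d = []
    for i in range(n):
        for j in range(i + 1, n):
            dr = pos[i] - pos[j]
            dr -= box * np.round(dr / box)   # minimum-image convention
            d.append(np.linalg.norm(dr))
    return np.array(d)

def pair_correlation(pos, box, bins=50):
    """Radial distribution function g(r) for particles in a 3D periodic box."""
    n = len(pos)
    rho = n / box**3
    r_max = box / 2.0                        # validity limit of minimum image
    dist = pair_distances(pos, box)
    counts, edges = np.histogram(dist[dist < r_max], bins=bins, range=(0, r_max))
    r = 0.5 * (edges[:-1] + edges[1:])
    shell = 4.0 * np.pi * r**2 * (edges[1] - edges[0])   # shell volumes
    g = counts / (shell * rho * n / 2.0)     # normalize per particle pair
    return r, g

rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 10.0, size=(200, 3))   # ideal-gas-like sample
r, g = pair_correlation(positions, box=10.0)
```

Peaks of g(r) above 1 would indicate preferred interparticle spacings, i.e. chain or cluster formation of the kind field-structured MREs exhibit.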
Dielectric properties of classical and quantized ionic fluids.
Høye, Johan S
2010-06-01
We study time-dependent correlation functions of classical and quantum gases using methods of equilibrium statistical mechanics for systems of uniform as well as nonuniform densities. The basis for our approach is the path integral formalism of quantum mechanical systems. With this approach the statistical mechanics of a quantum mechanical system becomes the equivalent of a classical polymer problem in four dimensions where imaginary time is the fourth dimension. Several nontrivial results for quantum systems have been obtained earlier by this analogy. Here, we will focus upon the presence of a time-dependent electromagnetic pair interaction where the electromagnetic vector potential that depends upon currents, will be present. Thus both density and current correlations are needed to evaluate the influence of this interaction. Then we utilize that densities and currents can be expressed by polarizations by which the ionic fluid can be regarded as a dielectric one for which a nonlocal susceptibility is found. This nonlocality has as a consequence that we find no contribution from a possible transverse electric zero-frequency mode for the Casimir force between metallic plates. Further, we establish expressions for a leading correction to ab initio calculations for the energies of the quantized electrons of molecules where now retardation effects also are taken into account.
40 CFR Appendix IV to Part 265 - Tests for Significance
Code of Federal Regulations, 2010 CFR
2010-07-01
... introductory statistics texts. ... Student's t-test involves calculation of the value of a t-statistic for each comparison of the mean... parameter with its initial background concentration or value. The calculated value of the t-statistic must...
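The comparison the regulation describes, testing each parameter's mean against its initial background value, reduces to a one-sample Student's t-statistic. The sample values and background mean below are hypothetical:

```python
import math

def t_statistic(sample, mu0):
    """Student's t for comparing a sample mean with a fixed background value:
    t = (mean - mu0) / (s / sqrt(n)), with s the sample standard deviation."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)   # sample variance
    return (mean - mu0) / math.sqrt(var / n)

# Hypothetical monitoring measurements vs. a background value of 10.0
t = t_statistic([10.2, 10.8, 11.1, 10.5], mu0=10.0)
```

The calculated t would then be compared against the tabulated critical value for n-1 degrees of freedom at the regulation's significance level.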
Monte Carlo simulations of liquid tetrahydrofuran including pseudorotation
NASA Astrophysics Data System (ADS)
Chandrasekhar, Jayaraman; Jorgensen, William L.
1982-11-01
Monte Carlo statistical mechanics simulations have been carried out for liquid tetrahydrofuran (THF) with and without pseudorotation at 1 atm and 25 °C. The intermolecular potential functions consisted of Lennard-Jones and Coulomb terms in the TIPS format reported previously for ethers. Pseudorotation of the ring was described using the generalized coordinates defined by Cremer and Pople, viz., the puckering amplitude and the phase angle of the ring. The corresponding intramolecular potential function was derived from molecular mechanics (MM2) calculations. Compared to the gas phase, the rings tend to be more flat and the population of the C2 twist geometry is slightly higher in liquid THF. However, pseudorotation has negligible effect on the calculated intermolecular structure and thermodynamic properties. The computed density, heat of vaporization, and heat capacity are in good agreement with experiment. The results are also compared with those from previous simulations of acyclic ethers. The present study provides the foundation for investigations of the solvating ability of THF.
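The TIPS-format intermolecular energy (Lennard-Jones plus Coulomb terms summed over all site pairs) can be sketched as below. The site parameters are invented single-site placeholders, not the THF or ether parameters used in the study:

```python
import math

COULOMB = 332.06   # kcal*Angstrom/(mol*e^2), conventional conversion factor

def pair_energy(sites_a, sites_b):
    """TIPS-style intermolecular energy between two rigid molecules:
    sum over site pairs of q_i*q_j*COULOMB/r + A_i*A_j/r^12 - C_i*C_j/r^6.
    Each site is (xyz, q, A, C); A and C are per-site square-root parameters
    combined geometrically by the product A_i*A_j, C_i*C_j."""
    e = 0.0
    for (ra, qa, aa, ca) in sites_a:
        for (rb, qb, ab, cb) in sites_b:
            r = math.dist(ra, rb)
            e += qa * qb * COULOMB / r + aa * ab / r**12 - ca * cb / r**6
    return e

# Two hypothetical neutral single-site molecules 4 Angstroms apart
m1 = [((0.0, 0.0, 0.0), 0.0, 1.8e3, 25.0)]
m2 = [((4.0, 0.0, 0.0), 0.0, 1.8e3, 25.0)]
u = pair_energy(m1, m2)
```

In a Monte Carlo run this pair energy is evaluated for trial configurations and fed into the Metropolis acceptance criterion; the intramolecular pseudorotation coordinates enter through a separate potential term.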
NASA Technical Reports Server (NTRS)
Macfarlane, J. J.
1984-01-01
A model free energy is developed for hydrogen-helium mixtures based on solid-state Thomas-Fermi-Dirac calculations at pressures relevant to the interiors of giant planets. Using a model potential similar to that for a two-component plasma, effective charges for the nuclei (which are in general smaller than the actual charges because of screening effects) are parameterized, being constrained by calculations at a number of densities, compositions, and lattice structures. These model potentials are then used to compute the equilibrium properties of H-He fluids using a charged hard-sphere model. The results yield critical temperatures of about 0 K, 500 K, and 1500 K for pressures of 10, 100, and 1000 Mbar, respectively. These phase separation temperatures are considerably lower than those found from calculations using free-electron perturbation theory (approximately 6,000-10,000 K), and suggest that H-He solutions should be stable against phase separation in the metallic zones of Jupiter and Saturn.
Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C; Joyce, Kevin P; Kovalenko, Andriy
2016-11-01
Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R = 0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R = 0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R = 0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pKa correction improved agreement with experiment from R = 0.54 to R = 0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.
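The partition coefficient underlying these comparisons follows from the difference of solvation free energies in the two solvents. A minimal sketch, with hypothetical free energies and assuming the usual cyclohexane/water sign convention:

```python
import math

R_KCAL = 1.987204e-3   # gas constant in kcal/(mol*K)

def log_partition_coefficient(dg_water, dg_cyclohexane, temperature=298.15):
    """log10 P (cyclohexane/water) from solvation free energies in kcal/mol:
    log P = (dG_water - dG_cyclohexane) / (ln(10) * R * T)."""
    return (dg_water - dg_cyclohexane) / (math.log(10) * R_KCAL * temperature)

# Hypothetical solvation free energies for a single solute conformation:
# more favorable in water, so the solute partitions into water (log P < 0)
logp = log_partition_coefficient(dg_water=-5.0, dg_cyclohexane=-2.0)
```

Conformational sampling enters by averaging the solvation free energies over an ensemble of solute structures before taking this difference, which is the MD/3D-RISM refinement the abstract describes.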
Insight into structural phase transitions from the decoupled anharmonic mode approximation
NASA Astrophysics Data System (ADS)
Adams, Donat J.; Passerone, Daniele
2016-08-01
We develop a formalism (decoupled anharmonic mode approximation, DAMA) that allows calculation of the vibrational free energy using density functional theory even for materials which exhibit negative curvature of the potential energy surface with respect to atomic displacements. We investigate vibrational modes beyond the harmonic approximation and approximate the potential energy surface with the superposition of the accurate potential along each normal mode. We show that the free energy can stabilize crystal structures at finite temperatures which appear dynamically unstable at T = 0. The DAMA formalism is computationally fast because it avoids statistical sampling through molecular dynamics calculations, and is in principle completely ab initio. It is free of statistical uncertainties and independent of model parameters, but can give insight into the mechanism of a structural phase transition. We apply the formalism to the perovskite cryolite, and investigate the temperature-driven phase transition from the P21/n to the Immm space group. We calculate a phase transition temperature between 710 and 950 K, in fair agreement with the experimental value of 885 K. This can be related to the underestimation of the interaction of the vibrational states. We also calculate the main axes of the thermal ellipsoid and can explain the experimentally observed increase of its volume for the fluorine by 200-300% throughout the phase transition. Our calculations suggest the appearance of tunneling states in the high temperature phase. The convergence of the vibrational DOS and of the critical temperature with respect to reciprocal-space sampling is investigated using the polarizable-ion model.
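For orientation, the harmonic baseline that DAMA improves upon sums the standard quantum-oscillator free energy over modes; DAMA instead evaluates the accurate potential along each mode. The frequencies below are arbitrary examples, not cryolite data:

```python
import math

K_B = 8.617333262e-5        # Boltzmann constant, eV/K
HBAR_EV_S = 6.582119569e-16  # reduced Planck constant, eV*s

def harmonic_free_energy(freqs_thz, temperature):
    """Harmonic vibrational free energy (eV) over a set of mode frequencies:
    F = sum_i [ hbar*w_i/2 + kT * ln(1 - exp(-hbar*w_i / kT)) ]."""
    f = 0.0
    for nu in freqs_thz:
        hw = HBAR_EV_S * 2.0 * math.pi * nu * 1e12   # hbar*omega in eV
        f += 0.5 * hw                                # zero-point energy
        if temperature > 0:
            f += K_B * temperature * math.log(
                1.0 - math.exp(-hw / (K_B * temperature)))
    return f

f300 = harmonic_free_energy([5.0, 10.0, 15.0], 300.0)
f900 = harmonic_free_energy([5.0, 10.0, 15.0], 900.0)
```

This harmonic form is undefined for imaginary-frequency (negative-curvature) modes, which is precisely the situation DAMA is built to handle by replacing each parabolic mode potential with the accurate one.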
Systolic and Diastolic Left Ventricular Mechanics during and after Resistance Exercise.
Stöhr, Eric J; Stembridge, Mike; Shave, Rob; Samuel, T Jake; Stone, Keeron; Esformes, Joseph I
2017-10-01
To improve the current understanding of the impact of resistance exercise on the heart, by examining the acute responses of left ventricular (LV) strain, twist, and untwisting rate ("LV mechanics"). LV echocardiographic images were recorded in systole and diastole before, during and immediately after (7-12 s) double-leg press exercise at two intensities (30% and 60% of maximum strength, one-repetition maximum). Speckle tracking analysis generated LV strain, twist, and untwisting rate data. Additionally, beat-by-beat blood pressure was recorded and systemic vascular resistance (SVR) and LV wall stress were calculated. Responses in both exercise trials were statistically similar (P > 0.05). During effort, stroke volume decreased, whereas SVR and LV wall stress increased (P < 0.05). Immediately after effort, stroke volume returned to baseline, whereas SVR and wall stress decreased (P < 0.05). Similarly, acute exercise was accompanied by a significant decrease in systolic parameters of LV muscle mechanics (P < 0.05). However, diastolic parameters, including LV untwisting rate, were statistically unaltered (P > 0.05). Immediately after exercise, systolic LV mechanics returned to baseline levels (P < 0.05) but LV untwisting rate increased significantly (P < 0.05). A single, acute bout of double-leg press resistance exercise transiently reduces systolic LV mechanics, but increases diastolic mechanics after exercise, suggesting that resistance exercise has a differential impact on systolic and diastolic heart muscle function. The findings may explain why acute resistance exercise has been associated with reduced stroke volume but chronic exercise training may result in increased LV volumes.
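The derived quantities mentioned above can be computed from standard hemodynamic formulas. The study's exact formulas and unit conventions are not given, so the versions below (conventional SVR in dyn·s/cm^5 and a Laplace wall-stress estimate) are assumptions, with example values invented:

```python
def systemic_vascular_resistance(map_mmhg, cvp_mmhg, co_l_min):
    """Conventional SVR in dyn*s/cm^5: (MAP - CVP) / CO, times the
    customary factor of 80 to convert mmHg/(L/min)."""
    return (map_mmhg - cvp_mmhg) / co_l_min * 80.0

def lv_wall_stress(pressure_mmhg, radius_cm, thickness_cm):
    """Laplace-law estimate of LV wall stress: sigma = P * r / (2 * h)."""
    return pressure_mmhg * radius_cm / (2.0 * thickness_cm)

# Invented resting values, for illustration only
svr = systemic_vascular_resistance(map_mmhg=100.0, cvp_mmhg=4.0, co_l_min=5.0)
stress = lv_wall_stress(pressure_mmhg=120.0, radius_cm=2.5, thickness_cm=1.0)
```

During a leg press, the rise in blood pressure raises both quantities, matching the abstract's report of increased SVR and wall stress during effort.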
Charge transfer of O3+ ions with atomic hydrogen
NASA Astrophysics Data System (ADS)
Wang, J. G.; Stancil, P. C.; Turner, A. R.; Cooper, D. L.
2003-01-01
Charge transfer processes due to collisions of ground-state O3+(2s^2 2p ^2P) ions with atomic hydrogen are investigated using the quantum-mechanical molecular-orbital close-coupling (MOCC) method. The MOCC calculations utilize ab initio adiabatic potentials and nonadiabatic radial and rotational coupling matrix elements obtained with the spin-coupled valence-bond approach. Total and state-selective cross sections and rate coefficients are presented. Comparison with existing experimental and theoretical data shows our results to be in better agreement with the measurements than the previous calculations, although problems with some of the state-selective measurements are noted. Our calculations demonstrate that rotational coupling is not important for the total cross section, but for state-selective cross sections, its relevance increases with energy. For the ratios of triplet to singlet cross sections, significant departures from a statistical value are found, generally in harmony with experiment.
NASA Astrophysics Data System (ADS)
Bywater, R. J.
1980-01-01
Solutions are presented for the turbulent diffusion flame in a two-dimensional shear layer based upon a kinetic theory of turbulence (KTT). The fuel and oxidizer comprising the two streams are considered to react infinitely fast according to a one-step, irreversible kinetic mechanism. The solutions are obtained by direct numerical calculation of the transverse velocity probability density function (PDF) and the associated species distributions. The mean reactant profiles calculated from the solutions display the characteristic thick, turbulent flame zone. The phenomena result from the fact that in the context of the KTT, species react only when in the same velocity cell. This coincides with the known physical requirement that molecular mixing precedes reaction. The solutions demonstrate this behavior by showing how reactants can coexist in the mean, even when infinite reaction rates are enforced at each point (t,x,u) of velocity space.
Formulation of D-brane Dynamics
NASA Astrophysics Data System (ADS)
Evans, Thomas
2012-03-01
It is the purpose of this paper (within the context of the STS rules and guidelines for a "research report") to formulate a statistical-mechanical form of D-brane dynamics. We first consider the path integral formulation of quantum mechanics and extend it to a path-integral formulation of D-brane mechanics, summing over all possible path integral sectors of R-R and NS charged states. We then investigate this generalization using a path-integral formulation summing over all possible path integral sectors of R-R charged states, calculated from the mean probability tree-level amplitude of type I, IIA, and IIB strings, serving as a generalization of all strings described by D-branes. We use this generalization to study black holes in regimes where the initial D-brane description is legitimate, and extend it to examine information loss near regions of nonlocality on a non-ordinary event horizon. We see that in these specific regimes we can construct a path integral formulation describing D0-brane mechanics, tracing the dissipation of entropy through the event horizon. This is used to study the information paradox and to propose a resolution reconciling the phenomenon with the expected quantum mechanical description, since our path integral over the entropy entering the event horizon effectively encodes the initial state in subtle correlations in the Hawking radiation.
Ionescu, Crina-Maria; Sehnal, David; Falginella, Francesco L; Pant, Purbaj; Pravda, Lukáš; Bouchal, Tomáš; Svobodová Vařeková, Radka; Geidl, Stanislav; Koča, Jaroslav
2015-01-01
Partial atomic charges are a well-established concept, useful in understanding and modeling the chemical behavior of molecules, from simple compounds, to large biomolecular complexes with many reactive sites. This paper introduces AtomicChargeCalculator (ACC), a web-based application for the calculation and analysis of atomic charges which respond to changes in molecular conformation and chemical environment. ACC relies on an empirical method to rapidly compute atomic charges with accuracy comparable to quantum mechanical approaches. Due to its efficient implementation, ACC can handle any type of molecular system, regardless of size and chemical complexity, from drug-like molecules to biomacromolecular complexes with hundreds of thousands of atoms. ACC writes out atomic charges into common molecular structure files, and offers interactive facilities for statistical analysis and comparison of the results, in both tabular and graphical form. Due to high customizability and speed, easy streamlining and the unified platform for calculation and analysis, ACC caters to all fields of life sciences, from drug design to nanocarriers. ACC is freely available via the Internet at http://ncbr.muni.cz/ACC.
Barreto, Rafael C; Coutinho, Kaline; Georg, Herbert C; Canuto, Sylvio
2009-03-07
A combined and sequential use of Monte Carlo simulations and quantum mechanical calculations is made to analyze the spectral shift of the lowest pi-pi* transition of phenol in water. The solute polarization is included using electrostatic embedded calculations at the MP2/aug-cc-pVDZ level giving a dipole moment of 2.25 D, corresponding to an increase of 76% compared to the calculated gas-phase value. Using statistically uncorrelated configurations sampled from the MC simulation, first-principles size-extensive calculations are performed to obtain the solvatochromic shift. Analysis is then made of the origin of the blue shift. Results both at the optimized geometry and in room-temperature liquid water show that hydrogen bonds of water with phenol promote a red shift when phenol is the proton-donor and a blue shift when phenol is the proton-acceptor. In the case of the optimized clusters the calculated shifts are in very good agreement with results obtained from mass-selected free jet expansion experiments. In the liquid case the contribution of the solute-solvent hydrogen bonds partially cancels and the total shift obtained is dominated by the contribution of the outer solvent water molecules. Our best result, including both inner and outer water molecules, is 570 +/- 35 cm^-1, in very good agreement with the small experimental shift of 460 cm^-1 for the absorption maximum.
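Averaging a property over statistically uncorrelated MC configurations, with the quoted uncertainty taken as the standard error of the mean, can be sketched as follows; the per-configuration shift values are invented, not the paper's data:

```python
import math

def mean_and_error(samples):
    """Average of per-configuration values with the standard error of the
    mean, valid when the configurations are statistically uncorrelated."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)  # sample variance
    return mean, math.sqrt(var / n)

# Hypothetical per-configuration solvatochromic shifts in cm^-1
mean, err = mean_and_error([500.0, 620.0, 540.0, 600.0, 560.0])
```

Using uncorrelated configurations is what makes the sqrt(var/n) error estimate legitimate; correlated snapshots would underestimate the true uncertainty.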
Temperature equilibration rate with Fermi-Dirac statistics.
Brown, Lowell S; Singleton, Robert L
2007-12-01
We calculate analytically the electron-ion temperature equilibration rate in a fully ionized, weakly to moderately coupled plasma, using an exact treatment of the Fermi-Dirac electrons. The temperature is sufficiently high so that the quantum-mechanical Born approximation to the scattering is valid. It should be emphasized that we do not build a model of the energy exchange mechanism, but rather, we perform a systematic first principles calculation of the energy exchange. At the heart of this calculation lies the method of dimensional continuation, a technique that we borrow from quantum field theory and use in a different fashion to regulate the kinetic equations in a consistent manner. We can then perform a systematic perturbation expansion and thereby obtain a finite first-principles result to leading and next-to-leading order. Unlike model building, this systematic calculation yields an estimate of its own error and thus prescribes its domain of applicability. The calculational error is small for a weakly to moderately coupled plasma, for which our result is nearly exact. It should also be emphasized that our calculation becomes unreliable for a strongly coupled plasma, where the perturbative expansion that we employ breaks down, and one must then utilize model building and computer simulations. Besides providing different and potentially useful results, we use this calculation as an opportunity to explain the method of dimensional continuation in a pedagogical fashion. Interestingly, in the regime of relevance for many inertial confinement fusion experiments, the degeneracy corrections are comparable in size to the subleading quantum correction below the Born approximation. For consistency, we therefore present this subleading quantum-to-classical transition correction in addition to the degeneracy correction.
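The quantity being calculated is the rate at which electron and ion temperatures relax toward each other. As a toy illustration only (the paper derives the rate itself from first principles, whereas here it is simply an input parameter), the relaxation can be integrated as:

```python
def equilibrate(te, ti, rate, dt, steps):
    """Toy two-temperature relaxation, d(Te)/dt = -rate*(Te - Ti) and
    d(Ti)/dt = +rate*(Te - Ti), integrated with forward Euler.
    Units are arbitrary; `rate` stands in for the calculated coupling."""
    for _ in range(steps):
        d = rate * (te - ti) * dt
        te, ti = te - d, ti + d
    return te, ti

# Hot electrons relaxing against cold ions (arbitrary units)
te, ti = equilibrate(te=1000.0, ti=100.0, rate=0.05, dt=0.1, steps=2000)
```

The temperature difference decays exponentially at twice the coupling rate while the sum is conserved, so both species settle at the common mean temperature.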
Molecular Hydrogen Formation : Effect of Dust Grain Temperature Fluctuations
NASA Astrophysics Data System (ADS)
Bron, Emeric; Le Bourlot, Jacques; Le Petit, Franck
2013-06-01
H_2 formation is a hot topic in astrochemistry. Thanks to the Copernicus and FUSE satellites, its formation rate on dust grains in the diffuse interstellar gas has been inferred (Jura 1974, Gry et al. 2002). Nevertheless, the detection of H_2 emission in PDRs by ISO and Spitzer (Habart et al. 2004, 2005, 2011) showed that its formation mechanism can be efficient on warm grains (warmer than 30 K), whereas experimental studies showed that the Langmuir-Hinshelwood mechanism is only efficient in a narrow window of grain temperatures (typically between 10 and 20 K). The Eley-Rideal mechanism, in which H atoms are chemically bound to grain surfaces, could explain such a formation rate in PDRs (Le Bourlot et al. 2012). Usual dust size distributions (e.g., Mathis et al. 1977) favor smaller grains, so that most of the available grain surface belongs to small grains. As small grains are subject to large temperature fluctuations due to the absorption of UV photons, calculations at a fixed temperature give incorrect results under strong UV fields. Here, we present a comprehensive study of the influence of this stochastic effect on H_2 formation by the Langmuir-Hinshelwood and Eley-Rideal mechanisms. We use a master equation approach to calculate the statistics of the coupled fluctuations of the temperature and adsorbed H population of a grain. Doing so, we are able to calculate the formation rate on a grain under a given radiation field and given gas conditions. We find that the Eley-Rideal mechanism remains efficient in PDRs, and that the Langmuir-Hinshelwood mechanism is more efficient than expected on warm grains. This procedure is then coupled to full cloud simulations with the Meudon PDR code. We compare the new results with more classical evaluations of the formation rate, and present the differences in terms of the chemical structure of the cloud and observable line intensities. We also highlight the influence of some microphysical parameters on the results.
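The paper's premise, that rates computed at a fixed grain temperature go wrong when the temperature fluctuates, can be illustrated with a toy calculation: for a convex Arrhenius-type rate law, the rate averaged over fluctuations exceeds the rate evaluated at the mean temperature (Jensen's inequality). All parameters below are illustrative, not values from this work.

```python
import math
import random

def desorption_rate(T, nu=1e12, E_over_k=600.0):
    # Arrhenius-type thermal desorption rate; nu and E/k are illustrative
    return nu * math.exp(-E_over_k / T)

random.seed(0)
# Toy grain-temperature fluctuations around a 15 K mean
temps = [random.gauss(15.0, 3.0) for _ in range(100000)]
temps = [T for T in temps if T > 5.0]  # discard unphysically cold draws

rate_at_mean = desorption_rate(15.0)
mean_rate = sum(desorption_rate(T) for T in temps) / len(temps)

# The rate is strongly convex in T here (E/k >> 2T), so averaging over
# fluctuations exceeds the rate at the mean temperature
print(mean_rate > rate_at_mean)  # True
```

The rare warm excursions dominate the average, which is exactly why fixed-temperature treatments of small, strongly fluctuating grains mislead.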
Chen, Shi-Yi; Deng, Feilong; Huang, Ying; Li, Cao; Liu, Linhai; Jia, Xianbo; Lai, Song-Jia
2016-01-01
Although various computer tools have been elaborately developed to calculate a series of statistics in molecular population genetics for both small- and large-scale DNA data, there is no efficient and easy-to-use toolkit available yet for exclusively focusing on the steps of mathematical calculation. Here, we present PopSc, a bioinformatic toolkit for calculating 45 basic statistics in molecular population genetics, which could be categorized into three classes, including (i) genetic diversity of DNA sequences, (ii) statistical tests for neutral evolution, and (iii) measures of genetic differentiation among populations. In contrast to the existing computer tools, PopSc was designed to directly accept the intermediate metadata, such as allele frequencies, rather than the raw DNA sequences or genotyping results. PopSc is first implemented as the web-based calculator with user-friendly interface, which greatly facilitates the teaching of population genetics in class and also promotes the convenient and straightforward calculation of statistics in research. Additionally, we also provide the Python library and R package of PopSc, which can be flexibly integrated into other advanced bioinformatic packages of population genetics analysis. PMID:27792763
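To give a flavor of the allele-frequency-based statistics a toolkit like PopSc computes, here is a hedged sketch of two textbook quantities, expected heterozygosity and Wright's F_ST. The function names and the equal-subpopulation-size assumption are ours for illustration, not PopSc's actual API.

```python
def expected_heterozygosity(freqs):
    """Gene diversity H_e = 1 - sum(p_i^2) from allele frequencies."""
    assert abs(sum(freqs) - 1.0) < 1e-9
    return 1.0 - sum(p * p for p in freqs)

def fst(subpop_freqs):
    """Wright's F_ST for one biallelic locus, from the frequency of one
    allele in each subpopulation (equal subpopulation sizes assumed)."""
    n = len(subpop_freqs)
    p_bar = sum(subpop_freqs) / n
    h_t = 2 * p_bar * (1 - p_bar)                          # total heterozygosity
    h_s = sum(2 * p * (1 - p) for p in subpop_freqs) / n   # mean within-subpop
    return (h_t - h_s) / h_t

print(expected_heterozygosity([0.5, 0.5]))   # 0.5
print(round(fst([0.2, 0.8]), 3))             # 0.36
```

Both take the intermediate metadata (frequencies) directly, mirroring PopSc's design choice of skipping raw sequence processing.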
Lee, F K-H; Chan, C C-L; Law, C-K
2009-02-01
Contrast-enhanced computed tomography (CECT) has been used for delineation of the treatment target in radiotherapy. The altered Hounsfield units due to the injected contrast agent may affect radiation dose calculation. We investigated this effect on intensity-modulated radiotherapy (IMRT) of nasopharyngeal carcinoma (NPC). Dose distributions of 15 IMRT plans were recalculated on CECT. Dose statistics for organs at risk (OAR) and treatment targets were recorded for the plain CT-calculated and CECT-calculated plans, and the statistical significance of the differences was evaluated. Correlations were also tested among the magnitude of the calculated dose difference, tumor size, and level of contrast enhancement. Differences in nodal mean/median dose were statistically significant, but small (approximately 0.15 Gy for a 66 Gy prescription). In the vicinity of the carotid arteries, the difference in calculated dose was also statistically significant, but only with a mean of approximately 0.2 Gy. We did not observe any significant correlation between the difference in the calculated dose and the tumor size or level of enhancement. The results imply that the calculated dose difference is clinically insignificant and may be acceptable for IMRT planning.
ON THE DYNAMICAL DERIVATION OF EQUILIBRIUM STATISTICAL MECHANICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prigogine, I.; Balescu, R.; Henin, F.
1960-12-01
Work on nonequilibrium statistical mechanics, which allows an extension of the kinetic proof to all results of equilibrium statistical mechanics involving a finite number of degrees of freedom, is summarized. As an introduction to the general N-body problem, the scattering theory in classical mechanics is considered. The general N-body problem is considered for the case of classical mechanics, quantum mechanics with Boltzmann statistics, and quantum mechanics including quantum statistics. Six basic diagrams, which describe the elementary processes of the dynamics of correlations, were obtained. (M.C.G.)
A Bayesian perspective on Markovian dynamics and the fluctuation theorem
NASA Astrophysics Data System (ADS)
Virgo, Nathaniel
2013-08-01
One of E. T. Jaynes' most important achievements was to derive statistical mechanics from the maximum entropy (MaxEnt) method. I re-examine a relatively new result in statistical mechanics, the Evans-Searles fluctuation theorem, from a MaxEnt perspective. This is done in the belief that interpreting such results in Bayesian terms will lead to new advances in statistical physics. The version of the fluctuation theorem that I will discuss applies to discrete, stochastic systems that begin in a non-equilibrium state and relax toward equilibrium. I will show that for such systems the fluctuation theorem can be seen as a consequence of the fact that the equilibrium distribution must obey the property of detailed balance. Although the principle of detailed balance applies only to equilibrium ensembles, it puts constraints on the form of non-equilibrium trajectories. This will be made clear by taking a novel kind of Bayesian perspective, in which the equilibrium distribution is seen as a prior over the system's set of possible trajectories. Non-equilibrium ensembles are calculated from this prior using Bayes' theorem, with the initial conditions playing the role of the data. I will also comment on the implications of this perspective for the question of how to derive the second law.
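The constraint invoked here, that the equilibrium distribution obeys detailed balance, is easy to verify numerically for a small stochastic system. The three-state Metropolis chain below is an illustrative construction of ours, not a system from the paper.

```python
import math

# Three states with Boltzmann-like equilibrium weights (k_B T = 1)
energies = [0.0, 1.0, 2.0]
Z = sum(math.exp(-E) for E in energies)
pi = [math.exp(-E) / Z for E in energies]   # equilibrium distribution

def rate(i, j):
    """Metropolis transition rate between neighbouring states."""
    return min(1.0, pi[j] / pi[i]) if abs(i - j) == 1 else 0.0

# Detailed balance: pi_i * k(i -> j) == pi_j * k(j -> i) for every pair
for i in range(3):
    for j in range(3):
        assert abs(pi[i] * rate(i, j) - pi[j] * rate(j, i)) < 1e-12
print("detailed balance holds")
```

Because the Metropolis rates are built from ratios of equilibrium probabilities, detailed balance holds by construction; it is this property that constrains the statistics of non-equilibrium trajectories in the fluctuation theorem.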
Chi-Square Statistics, Tests of Hypothesis and Technology.
ERIC Educational Resources Information Center
Rochowicz, John A.
The use of technology such as computers and programmable calculators enables students to find p-values and conduct tests of hypotheses in many different ways. Comprehension and interpretation of a research problem become the focus for statistical analysis. This paper describes how to calculate chi-square statistics and p-values for statistical…
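A minimal, self-contained version of the calculation this paper teaches, a chi-square statistic with its p-value, can be written without a statistics library for a 2x2 contingency table (df = 1, no continuity correction), since in that case the survival function reduces to a complementary error function.

```python
import math

def chi_square_2x2(table):
    """Pearson chi-square statistic and p-value for a 2x2 contingency
    table [[a, b], [c, d]] (df = 1, no continuity correction)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For one degree of freedom, P(X >= chi2) = erfc(sqrt(chi2 / 2))
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

chi2, p = chi_square_2x2([[30, 10], [20, 40]])
print(round(chi2, 3), p)  # chi2 = 16.667, p well below 0.001
```

For larger tables or higher df one would need the regularized incomplete gamma function, which is where calculators and software earn their keep.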
Zuend, Stephan J; Jacobsen, Eric N
2009-10-28
An experimental and computational investigation of amido-thiourea promoted imine hydrocyanation has revealed a new and unexpected mechanism of catalysis. Rather than direct activation of the imine by the thiourea, as had been proposed previously in related systems, the data are consistent with a mechanism involving catalyst-promoted proton transfer from hydrogen isocyanide to imine to generate diastereomeric iminium/cyanide ion pairs that are bound to catalyst through multiple noncovalent interactions; these ion pairs collapse to form the enantiomeric alpha-aminonitrile products. This mechanistic proposal is supported by the observation of a statistically significant correlation between experimental and calculated enantioselectivities induced by eight different catalysts (P < 0.01). The computed models reveal a basis for enantioselectivity that involves multiple stabilizing and destabilizing interactions between substrate and catalyst, including thiourea-cyanide and amide-iminium interactions.
Confocal Microscopy of Jammed Matter: From Elasticity to Granular Thermodynamics
NASA Astrophysics Data System (ADS)
Jorjadze, Ivane
Packings of particles are ubiquitous in nature and are of interest not only to the scientific community but also to the food, pharmaceutical, and oil industries. In this thesis we use confocal microscopy to investigate packing geometry and stress transmission in 3D jammed particulate systems. By introducing a weak depletion attraction we probe the accessible phase space and demonstrate that a microscopic approach to jammed matter lends validity to a statistical mechanics framework, which is intriguing because our particles are not thermally activated. We show that the fluctuations of the local packing parameters can be successfully captured by the recently proposed 'granocentric' model, which generates packing statistics according to simple stochastic processes. This model enables us to calculate the packing entropy and the granular temperature, the so-called 'compactivity', thereby providing a basis for a statistical mechanics of granular matter. At the jamming transition, where just enough contacts are formed to guarantee mechanical stability, theoretical arguments suggest a singularity that gives rise to the surprising scaling behavior of the elastic moduli and the microstructure observed in numerical simulations. Since the contact network in 3D is typically hidden from view, an experimental test of the scaling law between the coordination number and the applied pressure has been lacking in the literature. Our data show corrections to the linear scaling of the pressure with density that take into account the creation of contacts. Numerical studies of vibrational spectra, in turn, reveal features such as an excess of low-frequency modes and the dependence of mode localization and structure on pressure. Chapter four describes the first calculation of the vibrational density of states from experimental 3D data, which is in qualitative agreement with analogous computer simulations.
We study the configurational role of the pressure and demonstrate that low-frequency modes become progressively localized as the packing density is increased. Another application of our oil-in-water emulsions is to mimic cell adhesion in biological tissues. By analyzing the microstructure in 3D we find that a threshold compression force is necessary to overcome electrostatic repulsion and surface elasticity and establish protein-mediated adhesion.
Dong, Hong-ba; Yang, Yan-wen; Wang, Ying; Hong, Li
2012-11-01
Energy metabolism of critically ill children has its own characteristics, especially for those undergoing mechanical ventilation. We tried to assess energy expenditure status and evaluate the use of predictive equations in such children, and explored the characteristics of energy metabolism in various situations. Fifty critically ill children undergoing mechanical ventilation were enrolled in this study. Data produced during the first 24 hours of mechanical ventilation were collected for computation of severity of illness. Measured resting energy expenditure (MREE) was determined at 24 hours after the start of mechanical ventilation. Predicted resting energy expenditure (PREE) was calculated for each subject using age-appropriate equations (Schofield-HTWT, White). The study was approved by the hospital medical ethics committee, and parental written informed consent was obtained. The pediatric risk of mortality score 3 (PRISM3) and pediatric critical illness score (PCIS) were (7 ± 3) and (82 ± 4), respectively. MREE, Schofield-HTWT PREE, and White PREE were (404.80 ± 178.28), (462.82 ± 160.38), and (427.97 ± 152.30) kcal/d, respectively; 70% of the children were hypometabolic and 10% were hypermetabolic. PREE calculated using the Schofield-HTWT and White equations was in both cases higher than MREE (P = 0.029). Correlation analyses between MREE and both PRISM3 and PCIS showed no statistically significant correlation (P > 0.05). The hypometabolic response is apparent in critically ill children undergoing mechanical ventilation; the Schofield-HTWT and White equations could not predict energy requirements within acceptable clinical accuracy. In critically ill children undergoing mechanical ventilation, energy expenditure is not correlated with the severity of illness.
Physics-based statistical model and simulation method of RF propagation in urban environments
Pao, Hsueh-Yuan; Dvorak, Steven L.
2010-09-14
A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment, extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation, using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields, from which predictions of communications capability may be made.
Showalter, Brent L; Beckstein, Jesse C; Martin, John T; Beattie, Elizabeth E; Espinoza Orías, Alejandro A; Schaer, Thomas P; Vresilovic, Edward J; Elliott, Dawn M
2012-07-01
Experimental measurement and normalization of in vitro disc torsion mechanics and collagen content for several animal species used in intervertebral disc research and comparing these with the human disc. To aid in the selection of appropriate animal models for disc research by measuring torsional mechanical properties and collagen content. There is lack of data and variability in testing protocols for comparing animal and human disc torsion mechanics and collagen content. Intervertebral disc torsion mechanics were measured and normalized by disc height and polar moment of inertia for 11 disc types in 8 mammalian species: the calf, pig, baboon, goat, sheep, rabbit, rat, and mouse lumbar discs, and cow, rat, and mouse caudal discs. Collagen content was measured and normalized by dry weight for the same discs except the rat and the mouse. Collagen fiber stretch in torsion was calculated using an analytical model. Measured torsion parameters varied by several orders of magnitude across the different species. After geometric normalization, only the sheep and pig discs were statistically different from human discs. Fiber stretch was found to be highly dependent on the assumed initial fiber angle. The collagen content of the discs was similar, especially in the outer annulus where only the calf and goat discs were statistically different from human. Disc collagen content did not correlate with torsion mechanics. Disc torsion mechanics are comparable with human lumbar discs in 9 of 11 disc types after normalization by geometry. The normalized torsion mechanics and collagen content of the multiple animal discs presented are useful for selecting and interpreting results for animal disc models. Structural organization of the fiber angle may explain the differences that were noted between species after geometric normalization.
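The geometric normalization described, dividing measured torsional stiffness by the polar moment of inertia and multiplying by disc height, can be sketched as follows. Treating the disc cross-section as a solid circle is our simplifying assumption, and the numbers in the example are illustrative, not the study's data.

```python
import math

def polar_moment_circle(diameter):
    """Polar moment of inertia J of a solid circular cross-section."""
    return math.pi * diameter ** 4 / 32.0

def normalized_torsional_stiffness(k_torsion, diameter, height):
    """Normalize torsional stiffness (N*m/rad) by geometry, treating the
    disc as a solid circular cylinder: k_norm = k * h / J. The result has
    units of an effective shear modulus (Pa)."""
    return k_torsion * height / polar_moment_circle(diameter)

# Illustrative disc: 2.0 N*m/rad stiffness, 40 mm diameter, 10 mm height
G_eff = normalized_torsional_stiffness(2.0, 0.04, 0.01)
print(round(G_eff))  # effective shear modulus in Pa (~8e4)
```

Because J scales as the fourth power of diameter, this normalization is what collapses the orders-of-magnitude spread in raw torsion parameters across species of very different size.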
Dunne, Lawrence J; Manos, George
2018-03-13
Although crucial for designing separation processes little is known experimentally about multi-component adsorption isotherms in comparison with pure single components. Very few binary mixture adsorption isotherms are to be found in the literature and information about isotherms over a wide range of gas-phase composition and mechanical pressures and temperature is lacking. Here, we present a quasi-one-dimensional statistical mechanical model of binary mixture adsorption in metal-organic frameworks (MOFs) treated exactly by a transfer matrix method in the osmotic ensemble. The experimental parameter space may be very complex and investigations into multi-component mixture adsorption may be guided by theoretical insights. The approach successfully models breathing structural transitions induced by adsorption, giving a good account of the shape of adsorption isotherms of CO2 and CH4 adsorption in MIL-53(Al). Binary mixture isotherms and co-adsorption-phase diagrams are also calculated and found to give a good description of the experimental trends in these properties; the wide range of model parameters that reproduces this behaviour suggests it is generic to MOFs. Finally, a study is made of the influence of mechanical pressure on the shape of CO2 and CH4 adsorption isotherms in MIL-53(Al). Quite modest mechanical pressures can induce significant changes to isotherm shapes in MOFs with implications for binary mixture separation processes. This article is part of the theme issue 'Modern theoretical chemistry'. © 2018 The Author(s).
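As a much-reduced illustration of the transfer matrix technique used here, the sketch below solves a single-component 1D lattice gas (not the paper's osmotic-ensemble MOF model) exactly via the largest eigenvalue of its 2x2 transfer matrix; the interaction convention is our choice.

```python
import math

def occupancy_1d_lattice_gas(mu, eps, T):
    """Mean site occupancy of a 1D lattice gas with nearest-neighbour
    interaction energy eps and chemical potential mu (k_B = 1), from the
    largest eigenvalue of the transfer matrix
        T[n][n'] = exp(beta*mu*(n + n')/2 - beta*eps*n*n'),  n, n' in {0, 1},
    in the thermodynamic limit."""
    beta = 1.0 / T

    def log_lambda(mu_):
        z = math.exp(beta * mu_)            # fugacity
        d = z * math.exp(-beta * eps)       # occupied-occupied bond weight
        return math.log(0.5 * (1.0 + d + math.sqrt((1.0 - d) ** 2 + 4.0 * z)))

    # occupancy = d(ln lambda) / d(beta*mu), by central difference
    h = 1e-6
    return (log_lambda(mu + h) - log_lambda(mu - h)) / (2.0 * h * beta)

# Non-interacting limit reduces to the Langmuir isotherm z / (1 + z)
print(round(occupancy_1d_lattice_gas(0.0, 0.0, 1.0), 4))  # 0.5
```

The same machinery, a transfer matrix whose largest eigenvalue gives the grand potential per site, is what the paper applies to the far richer coupled host-guest problem.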
Zurek, E; Woo, T K; Firman, T K; Ziegler, T
2001-01-15
Density functional theory (DFT) has been used to calculate the energies of 36 different methylaluminoxane (MAO) cage structures with the general formula (MeAlO)n, where n ranges from 4 to 16. A least-squares fit has been used to devise a formula which predicts the total energies of MAO cages with different n, giving an rms deviation of 4.70 kcal/mol. These energies, in conjunction with frequency calculations based on molecular mechanics, have been used to estimate the finite-temperature enthalpies, entropies, and free energies for these MAO structures. Furthermore, formulas have been devised which predict finite-temperature enthalpies and entropies for MAO structures of any n over the temperature range 198.15-598.15 K. Using these formulas, the free energies at different temperatures have been predicted for MAO structures where n ranges from 17 to 30. The free energy values were then used to predict the percentage of each n found at a given temperature. Our calculations give an average n value of 18.41, 17.23, 16.89, and 15.72 at 198.15, 298.15, 398.15, and 598.15 K, respectively. Topological arguments have also been used to show that the MAO cage structure contains a limited number of square faces as compared with octagonal and hexagonal ones. It is also suggested that the limited number of square faces, with their strained Al-O bonds, explains the high molar Al:catalyst ratio required for activation. Moreover, in this study we outline a general methodology which may be used to calculate the percent abundance of an equilibrium mixture of oligomers with the general formula (X)n.
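The final step described, converting free energies into percent abundances of an equilibrium oligomer mixture and an average n, amounts to Boltzmann weighting. The sketch below uses hypothetical relative free energies, not the paper's DFT values.

```python
import math

R = 1.98720e-3  # gas constant in kcal mol^-1 K^-1

def oligomer_distribution(free_energies, T):
    """Percent abundance of each oligomer n and the abundance-weighted
    mean n, from Boltzmann weights exp(-G_n / RT).
    free_energies: dict mapping n -> relative free energy G_n (kcal/mol)."""
    weights = {n: math.exp(-G / (R * T)) for n, G in free_energies.items()}
    Z = sum(weights.values())
    percent = {n: 100.0 * w / Z for n, w in weights.items()}
    mean_n = sum(n * w for n, w in weights.items()) / Z
    return percent, mean_n

# Hypothetical relative free energies (kcal/mol), NOT the paper's values
percent, mean_n = oligomer_distribution({16: 0.5, 18: 0.0, 20: 0.8}, 298.15)
print({n: round(p, 1) for n, p in percent.items()}, round(mean_n, 2))
```

Repeating the calculation at several temperatures reproduces the qualitative trend reported in the abstract: as T rises, higher-free-energy species gain population and the average n shifts.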
NASA Astrophysics Data System (ADS)
Fyodorov, Yan V.; Bouchaud, Jean-Philippe
2008-08-01
We construct an N-dimensional Gaussian landscape with multiscale, translation-invariant, logarithmic correlations and investigate the statistical mechanics of a single particle in this environment. In the limit of high dimension N → ∞ the free energy of the system and the overlap function are calculated exactly using the replica trick and Parisi's hierarchical ansatz. In the thermodynamic limit, we recover the most general version of Derrida's generalized random energy model (GREM). The low-temperature behaviour depends essentially on the spectrum of length scales involved in the construction of the landscape. If the latter consists of K discrete values, the system is characterized by a K-step replica symmetry breaking solution. We argue that our construction is in fact valid in any finite spatial dimension N >= 1. We discuss the implications of our results for the singularity spectrum describing the multifractality of the associated Boltzmann-Gibbs measure. Finally we discuss several generalizations and open problems, such as the dynamics in such a landscape and the construction of a generalized multifractal random walk.
McMullan, Miriam; Jones, Ray; Lea, Susan
2010-04-01
This paper is a report of a correlational study of the relations of age, status and experience to the numerical and drug calculation ability of nursing students and Registered Nurses. Competent numerical and drug calculation skills are essential for nurses, as mistakes can put patients' lives at risk. A cross-sectional study was carried out in 2006 in one United Kingdom university. Validated numerical and drug calculation tests were given to 229 second-year nursing students and 44 Registered Nurses attending a non-medical prescribing programme. The numeracy test was failed by 55% of students and 45% of Registered Nurses, while 92% of students and 89% of nurses failed the drug calculation test. Independent of status or experience, older participants (>= 35 years) were statistically significantly more able to perform numerical calculations. There was no statistically significant difference between nursing students and Registered Nurses in overall drug calculation ability, but nurses were statistically significantly more able than students to perform basic numerical calculations and calculations for solids, oral liquids and injections. Both nursing students and Registered Nurses were statistically significantly more able to perform calculations for solids, oral liquids and injections than calculations for drug percentages, drip and infusion rates. To prevent deskilling, Registered Nurses should continue to practise and refresh all the different types of drug calculations as often as possible, with regular (self-)testing of their ability. Time should be set aside in curricula for nursing students to learn how to perform basic numerical and drug calculations. This learning should be reinforced through regular practice and assessment.
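The calculation types tested in this study (solids, oral liquids, drip and infusion rates) follow standard formulas. The helpers below are a hedged sketch using the usual want/have rule; the default drop factor of 20 drops/mL is an assumption for the example, not a value from the study.

```python
def tablets_needed(prescribed_mg, tablet_strength_mg):
    """Number of tablets for a solid oral dose (want / have)."""
    return prescribed_mg / tablet_strength_mg

def oral_liquid_volume_ml(prescribed_mg, stock_mg, stock_ml):
    """Volume to administer: (want / have) * stock volume."""
    return prescribed_mg / stock_mg * stock_ml

def drip_rate_drops_per_min(volume_ml, hours, drop_factor=20):
    """Infusion drip rate: volume * drop factor / total minutes."""
    return volume_ml * drop_factor / (hours * 60)

print(tablets_needed(75, 25))                    # 3.0
print(oral_liquid_volume_ml(250, 125, 5))        # 10.0
print(round(drip_rate_drops_per_min(1000, 8), 1))  # 41.7
```

The arithmetic itself is simple; the study's point is that it must be practised until it is reliable under pressure.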
Statistical mechanics of a cat's cradle
NASA Astrophysics Data System (ADS)
Shen, Tongye; Wolynes, Peter G.
2006-11-01
It is believed that, much like a cat's cradle, the cytoskeleton can be thought of as a network of strings under tension. We show that both regular and random bond-disordered networks having bonds that buckle upon compression exhibit a variety of phase transitions as a function of temperature and extension. The results of self-consistent phonon calculations for the regular networks agree very well with computer simulations at finite temperature. The analytic theory also yields a rigidity onset (mechanical percolation) and the fraction of extended bonds for random networks. There is very good agreement with the simulations by Delaney et al (2005 Europhys. Lett. 72 990). The mean field theory reveals a nontranslationally invariant phase with self-generated heterogeneity of tautness, representing 'antiferroelasticity'.
Sun, Hongyan; Law, Chung K
2007-05-17
The reaction kinetics for the thermal decomposition of monomethylhydrazine (MMH) was studied with quantum Rice-Ramsperger-Kassel (QRRK) theory and a master equation analysis for pressure falloff. Thermochemical properties were determined by ab initio and density functional calculations. The entropies, S°(298.15 K), and heat capacities, Cp°(T) (0 ≤ T/K ≤ 1500), from vibrational, translational, and external rotational contributions were calculated using statistical mechanics based on the vibrational frequencies and structures obtained from the density functional study. Potential barriers for internal rotations were calculated at the B3LYP/6-311G(d,p) level, and hindered rotational contributions to S°(298.15 K) and Cp°(T) were calculated by solving the Schrödinger equation with free rotor wave functions; the partition coefficients were treated by direct integration over the energy levels of the internal rotation potentials. Enthalpies of formation, ΔfH°(298.15 K), for the parent MMH (CH3NHNH2) and its corresponding radicals CH3N*NH2, CH3NHN*H, and C*H2NHNH2 were determined to be 21.6, 48.5, 51.1, and 62.8 kcal mol⁻¹ by use of isodesmic reaction analysis and various ab initio methods. The kinetic analysis of the thermal decomposition, abstraction, and substitution reactions of MMH was performed at the CBS-QB3 level, with those of N-N and C-N bond scissions determined by high-level CCSD(T)/6-311++G(3df,2p)//MPWB1K/6-31+G(d,p) calculations. Rate constants of thermally activated MMH to dissociation products were calculated as functions of pressure and temperature. An elementary reaction mechanism based on the calculated rate constants, thermochemical properties, and literature data was developed to model the experimental data on the overall MMH thermal decomposition rate.
The reactions of N-N and C-N bond scission were found to be the major reaction paths for the modeling of MMH homogeneous decomposition at atmospheric conditions.
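The statistical-mechanical step of obtaining S° and Cp°(T) from computed frequencies uses the standard harmonic-oscillator partition function. The sketch below covers only the vibrational contribution (no translation, rotation, or hindered rotors, which the paper treats separately); the example frequencies are illustrative.

```python
import math

def vibrational_thermo(freqs_cm, T):
    """Harmonic-oscillator vibrational contributions to entropy and heat
    capacity, in units of R, from a list of frequencies in cm^-1."""
    c2 = 1.4387769  # second radiation constant hc/k_B, in cm*K
    S = Cv = 0.0
    for nu in freqs_cm:
        x = c2 * nu / T               # h*nu / (k_B * T)
        ex = math.exp(-x)
        S += x * ex / (1.0 - ex) - math.log(1.0 - ex)
        Cv += x * x * ex / (1.0 - ex) ** 2
    return S, Cv  # multiply by R for J mol^-1 K^-1 or cal mol^-1 K^-1

# Illustrative frequencies (water-like bend and stretches), not MMH values
S, Cv = vibrational_thermo([1595.0, 3657.0, 3756.0], 298.15)
print(round(S, 4), round(Cv, 4))
```

Each mode's heat capacity contribution climbs from 0 (frozen, h*nu >> k_B*T) to the classical value R at high temperature, which is the behaviour the tabulated Cp°(T) ranges capture.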
Tonkin, Matthew J.; Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.
2007-01-01
The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. 
Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one or more parameters is added.
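The OPR calculation, the percent change in prediction standard deviation when an observation is dropped, can be sketched from the linear expression described above. The two-parameter, unit-weight example below is our simplification for illustration, not OPR-PPR's actual input format or weighting scheme.

```python
def opr_percent_increase(X, z, omit):
    """Percent increase in prediction standard deviation when observation
    row `omit` is removed, using sd(z) = sqrt(z^T (X^T X)^{-1} z) with
    unit weights. X: list of sensitivity rows (2 parameters); z: prediction
    sensitivity vector of length 2."""
    def sd(rows):
        # accumulate the 2x2 matrix X^T X
        a = sum(r[0] * r[0] for r in rows)
        b = sum(r[0] * r[1] for r in rows)
        d = sum(r[1] * r[1] for r in rows)
        det = a * d - b * b
        inv = [[d / det, -b / det], [-b / det, a / det]]  # 2x2 cofactor inverse
        var = sum(z[i] * inv[i][j] * z[j] for i in range(2) for j in range(2))
        return var ** 0.5
    full = sd(X)
    reduced = sd([r for k, r in enumerate(X) if k != omit])
    return 100.0 * (reduced - full) / full

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # 3 observations, 2 parameters
z = [1.0, 0.0]
print(round(opr_percent_increase(X, z, omit=1), 2))  # ~22.5
```

As in OPR-PPR, only sensitivities enter: the statistic measures leverage, so the observed values themselves never appear.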
Lee, O-Sung; Ahn, Soyeon; Lee, Yong Seuk
2017-07-01
The purpose of this systematic review and meta-analysis was to evaluate the effectiveness and safety of early weight-bearing by comparing clinical and radiological outcomes between early and traditional delayed weight-bearing after OWHTO. A rigorous and systematic approach was used, and the methodological quality of the included studies was assessed. Results that could be compared across two or more articles were presented as forest plots. A 95% confidence interval was calculated for each effect size, and we calculated the I² statistic, which represents the percentage of total variation attributable to heterogeneity among studies. The random-effects model was used to calculate the effect size. Six articles were included in the final analysis. All case groups underwent early full weight-bearing within 2 weeks; all control groups underwent late full weight-bearing between 6 weeks and 2 months. Pooled analysis was possible for the improvement in Lysholm score, but no statistically significant difference was found between groups. Other clinical results were also similar between groups. Four studies reported the mechanical femorotibial angle (mFTA), and the pooled analysis showed no statistically significant difference between groups. Our analysis indicates that early full weight-bearing after OWHTO using a locking plate leads to improved outcomes comparable to those of delayed weight-bearing in terms of clinical and radiological results. Moreover, early weight-bearing was more favorable with respect to some radiologic parameters (osseointegration and patellar height) and complications (thrombophlebitis and recurrence).
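The I² statistic and random-effects pooling used above follow a standard recipe (DerSimonian-Laird) that is easy to reproduce. The per-study effect sizes and variances below are illustrative numbers only, not values from the six included articles:

```python
import numpy as np

# Hypothetical per-study effect sizes (e.g. mean differences in Lysholm
# score) and their variances -- illustrative values only.
y = np.array([2.0, 1.1, 3.2, 0.4, 1.8, 2.5])
v = np.array([0.8, 0.5, 1.2, 0.6, 0.9, 1.0])

w = 1.0 / v
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)       # Cochran's Q
df = len(y) - 1
I2 = max(0.0, 100.0 * (Q - df) / Q)      # % of variation due to heterogeneity

# DerSimonian-Laird between-study variance and random-effects pooling
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (v + tau2)
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
ci = (y_re - 1.96 * se_re, y_re + 1.96 * se_re)  # 95% confidence interval
```

A pooled effect whose 95% CI spans zero corresponds to the "no statistically significant difference" conclusion reported in the abstract.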
Ab initio Study on Ionization Energies of 3-Amino-1-propanol
NASA Astrophysics Data System (ADS)
Wang, Ke-dong; Jia, Ying-bin; Lai, Zhen-jiang; Liu, Yu-fang
2011-06-01
Fourteen conformers of 3-amino-1-propanol, located as minima on the potential energy surface, are examined at the MP2/6-311++G** level. Their relative energies calculated at the B3LYP, MP3, and MP4 levels of theory indicate that the two most stable conformers display intramolecular OH···N hydrogen bonds. The vertical ionization energies of these conformers, calculated with ab initio electron propagator theory in the P3/aug-cc-pVTZ approximation, are in agreement with experimental data from photoelectron spectroscopy. Natural bond orbital analyses were used to explain the differences in the IEs of the highest occupied molecular orbital among conformers. Combined with statistical mechanics principles, conformational distributions at various temperatures are obtained and the temperature dependence of the photoelectron spectra is interpreted.
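The temperature-dependent conformational distribution follows from simple Boltzmann weighting of the relative energies. A sketch, with assumed relative energies (the actual MP4-level values for 3-amino-1-propanol are not reproduced here):

```python
import numpy as np

# Hypothetical relative conformer energies in kJ/mol (0 = global minimum)
dE = np.array([0.0, 0.3, 1.2, 2.5, 4.0])
R = 8.314e-3                              # gas constant, kJ/(mol K)

def populations(T):
    # Boltzmann weights; subtracting the minimum energy avoids overflow
    w = np.exp(-(dE - dE.min()) / (R * T))
    return w / w.sum()

p300 = populations(300.0)
p600 = populations(600.0)
```

Raising the temperature flattens the distribution: the global minimum loses population to higher-energy conformers, which is what makes the photoelectron spectrum temperature dependent.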
Filter Tuning Using the Chi-Squared Statistic
NASA Technical Reports Server (NTRS)
Lilly-Salkowski, Tyler B.
2017-01-01
This paper examines the use of the Chi-squared statistic as a means of evaluating filter performance. The goal of the process is to characterize filter performance in the metric of covariance realism. The Chi-squared statistic is calculated to determine the realism of a covariance based on the prediction accuracy and the covariance values at a given point in time. Once calculated, it is the distribution of this statistic that provides insight into the accuracy of the covariance. The process of tuning an Extended Kalman Filter (EKF) for Aqua and Aura support is described, including examination of the measurement errors of available observation types and methods of dealing with potentially volatile atmospheric drag modeling. Predictive accuracy and the distribution of the Chi-squared statistic, calculated from EKF solutions, are assessed.
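The statistic in question is the Mahalanobis-type quadratic form of the prediction error against the covariance. A minimal sketch of why its distribution is the diagnostic: if the covariance is realistic, the statistic follows a chi-squared distribution whose mean equals the state dimension. The covariance values below are hypothetical.

```python
import numpy as np

def chi_squared_stat(err, P):
    # eps' P^-1 eps; for a realistic covariance P this follows a
    # chi-squared distribution with dim(err) degrees of freedom.
    return err @ np.linalg.solve(P, err)

rng = np.random.default_rng(0)
P = np.diag([4.0, 1.0, 0.25])   # hypothetical position covariance, km^2

# Simulate prediction errors actually drawn from P (a "realistic" covariance)
errs = rng.multivariate_normal(np.zeros(3), P, size=5000)
stats = np.array([chi_squared_stat(e, P) for e in errs])

# The sample mean of the statistic should sit near the dof (3) when P is
# realistic; a mean well above 3 would indicate an overconfident covariance.
mean_stat = stats.mean()
```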
Statistical earthquake focal mechanism forecasts
NASA Astrophysics Data System (ADS)
Kagan, Yan Y.; Jackson, David D.
2014-04-01
Forecasts of the focal mechanisms of future shallow (depth 0-70 km) earthquakes are important for seismic hazard estimates, Coulomb stress calculations, and other models of earthquake occurrence. Here we report on a high-resolution global forecast of earthquake rate density as a function of location, magnitude, and focal mechanism. In previous publications we reported forecasts of 0.5° spatial resolution, covering the latitude range from -75° to +75°, based on the Global Central Moment Tensor earthquake catalogue. In the new forecasts we have improved the spatial resolution to 0.1° and extended the latitude range from pole to pole. Our focal mechanism estimates require distance-weighted combinations of observed focal mechanisms within 1000 km of each grid point. Simultaneously, we calculate an average rotation angle between the forecast mechanism and all the surrounding mechanisms, using the method proposed by Kagan & Jackson in 1994. This average angle reveals the level of tectonic complexity of a region and indicates the accuracy of the prediction. The procedure becomes problematic where longitude lines are not approximately parallel and where shallow earthquakes are so sparse that an adequate sample spans very large distances. North or south of 75°, the azimuths of points 1000 km away may vary by about 35°. We solved this problem by calculating focal mechanisms on a plane tangent to the Earth's surface at each forecast point, correcting for the rotation of the longitude lines at the locations of earthquakes included in the averaging. The corrections are negligible between -30° and +30° latitude, but outside that band uncorrected rotations can introduce significant errors. Improved forecasts at 0.5° and 0.1° resolution are posted at http://eq.ess.ucla.edu/kagan/glob_gcmt_index.html.
Satellite disintegration dynamics
NASA Technical Reports Server (NTRS)
Dasenbrock, R. R.; Kaufman, B.; Heard, W. B.
1975-01-01
The subject of satellite disintegration is examined in detail. Elements of the orbits of individual fragments, determined by DOD space surveillance systems, are used to accurately predict the time and place of fragmentation. Dual time-independent and time-dependent analyses are performed for simulated and real breakups. Methods of statistical mechanics are used to study the evolution of the fragment clouds. The fragments are treated as an ensemble of non-interacting particles. A solution of Liouville's equation is obtained which enables the spatial density to be calculated as a function of position, time, and initial velocity distribution.
On thermalization of electron-positron-photon plasma
NASA Astrophysics Data System (ADS)
Siutsou, I. A.; Aksenov, A. G.; Vereshchagin, G. V.
2015-12-01
Recently, progress has been made in understanding the thermalization mechanism of relativistic plasma starting from a non-equilibrium state. Relativistic Boltzmann equations were solved numerically for homogeneous isotropic plasma, with collision integrals for two- and three-particle interactions calculated from first principles by means of QED matrix elements. All particles were previously assumed to obey Boltzmann statistics. In this work we follow plasma thermalization while accounting for Bose enhancement and Pauli blocking in particle interactions. Our results show that in equilibrium photons reach a Bose-Einstein distribution and electrons a Fermi-Dirac distribution.
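The equilibrium endpoints named above are the standard quantum occupation numbers; the distinction from Boltzmann statistics is only a ±1 in the denominator, which matters at occupations of order one. A quick numerical sketch (kB = 1 units, illustrative temperature and chemical potential):

```python
import numpy as np

def bose_einstein(E, T, mu=0.0):
    # Equilibrium boson (photon) occupation number, kB = 1 units
    return 1.0 / (np.exp((E - mu) / T) - 1.0)

def fermi_dirac(E, T, mu=0.0):
    # Equilibrium fermion (electron) occupation number, kB = 1 units
    return 1.0 / (np.exp((E - mu) / T) + 1.0)

E = np.linspace(0.1, 10.0, 100)
n_ph = bose_einstein(E, T=1.0)   # photons: mu = 0
n_e = fermi_dirac(E, T=1.0)      # electrons (illustrative mu = 0)

# Boltzmann limit: both statistics reduce to exp(-E/T) when E >> T
boltz = np.exp(-E / 1.0)
```

Pauli blocking caps the fermion occupation below one, Bose enhancement pushes the photon occupation above the Boltzmann value, and at high energy all three curves coincide, which is why the earlier Boltzmann treatment was a reasonable first approximation.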
Duodenoscope hang time does not correlate with risk of bacterial contamination.
Heroux, Riley; Sheppard, Michelle; Wright, Sharon B; Sawhney, Mandeep; Hirsch, Elizabeth B; Kalaidjian, Robin; Snyder, Graham M
2017-04-01
Current professional guidelines recommend a maximum hang time for reprocessed duodenoscopes of 5-14 days. We sought to study the association between hang time and risk of duodenoscope contamination. We analyzed cultures of the elevator mechanism and working channel collected in a highly standardized fashion just before duodenoscope use. Hang time was calculated as the time from reprocessing to duodenoscope sampling. The relationship between hang time and duodenoscope contamination was estimated using a calculated correlation coefficient between hang time in days and degree of contamination on the elevator mechanism and working channel. The 18 study duodenoscopes were cultured 531 times, including 465 (87.6%) in the analysis dataset. Hang time ranged from 0.07-39.93 days, including 34 (7.3%) with hang time ≥7.00 days. Twelve cultures (2.6%) demonstrated elevator mechanism and/or working channel contamination. The correlation coefficients for hang time and degree of duodenoscope contamination were very small and not statistically significant (-0.0090 [P = .85] for elevator mechanism and -0.0002 [P = 1.00] for working channel). Odds ratios for hang time (dichotomized at ≥7.00 days) and elevator mechanism and/or working channel contamination were not significant. We did not find a significant association between hang time and risk of duodenoscope contamination. Future guidelines should consider a recommendation of no limit for hang time. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
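The correlation analysis described above can be sketched with a plain Pearson coefficient plus a permutation test for significance, which is robust when contamination events are this rare. The data below are simulated to mirror the study's shape (200 cultures, 12 contaminated, hang times up to ~40 days); they are not the study's data, and the study scored contamination by culture growth rather than a binary flag.

```python
import numpy as np

def pearson_r(x, y):
    x = x - x.mean()
    y = y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

rng = np.random.default_rng(42)

# Hypothetical data: hang times in days and a binary contamination score
hang = rng.uniform(0.07, 40.0, size=200)
contam = np.zeros(200)
contam[rng.choice(200, size=12, replace=False)] = 1.0

r = pearson_r(hang, contam)

# Permutation test: shuffle contamination labels to build the null
# distribution of r, then take a two-sided p-value.
null = np.array([pearson_r(hang, rng.permutation(contam)) for _ in range(2000)])
p_value = np.mean(np.abs(null) >= abs(r))
```

A near-zero r with a large p-value is the pattern the study reports: hang time carries essentially no information about contamination risk.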
Structural propensities and entropy effects in peptide helix-coil transitions
NASA Astrophysics Data System (ADS)
Chemmama, Ilan E.; Pelea, Adam Colt; Bhandari, Yuba R.; Chapagain, Prem P.; Gerstman, Bernard S.
2012-09-01
The helix-coil transition in peptides is a critical structural transition leading to functioning proteins. Peptide chains have a large number of possible configurations that must be accounted for in statistical mechanical investigations. Using hydrogen bond and local helix propensity interaction terms, we develop a method for obtaining and incorporating the degeneracy factor that allows the exact calculation of the partition function for a peptide as a function of chain length. The partition function is used in calculations for engineered peptide chains of various lengths that allow comparison with a variety of different types of experimentally measured quantities, such as fraction of helicity as a function of both temperature and chain length, heat capacity, and denaturation studies. When experimental sensitivity in helicity measurements is properly accounted for in the calculations, the calculated curves fit well with the experimental curves. We determine values of interaction energies for comparison with known biochemical interactions, as well as quantify the difference in the number of configurations available to an amino acid in a random coil configuration compared to a helical configuration.
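The idea of an exact, length-dependent partition function for the helix-coil transition can be illustrated with the classic Zimm-Bragg transfer matrix. This is a standard textbook analogue, not the authors' degeneracy-factor formulation, and the propagation constant s and nucleation penalty sigma below are assumed values:

```python
import numpy as np

def helix_fraction(N, s, sigma):
    # Zimm-Bragg transfer matrix over (helix, coil) states: each helical
    # residue carries a factor s, each coil->helix nucleation a factor sigma.
    def logZ(s_val):
        M = np.array([[s_val, 1.0],
                      [sigma * s_val, 1.0]])
        v0 = np.array([sigma * s_val, 1.0])   # first residue: helix or coil
        return np.log(v0 @ np.linalg.matrix_power(M, N - 1) @ np.ones(2))
    # Fraction helicity = (1/N) d lnZ / d ln s, by finite difference
    ds = 1e-6
    return (logZ(s + ds) - logZ(s)) / (np.log(s + ds) - np.log(s)) / N

f_short = helix_fraction(10, s=1.5, sigma=1e-3)
f_long = helix_fraction(100, s=1.5, sigma=1e-3)
```

The length dependence reported in the abstract shows up immediately: with a small nucleation parameter, short chains stay mostly coil while long chains are predominantly helical.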
Non-Equilibrium Properties from Equilibrium Free Energy Calculations
NASA Technical Reports Server (NTRS)
Pohorille, Andrew; Wilson, Michael A.
2012-01-01
Calculating free energy in computer simulations is of central importance in the statistical mechanics of condensed media and its applications to chemistry and biology, not only because it is the most comprehensive and informative quantity characterizing the equilibrium state, but also because it often provides an efficient route to the dynamic and kinetic properties of a system. Most applications of equilibrium free energy calculations to non-equilibrium processes rely on a description in which a molecule or an ion diffuses in the potential of mean force. In the general case this description is a simplification, but it may be satisfactorily accurate in many instances of practical interest. This hypothesis has been tested on the example of the electrodiffusion equation. The conductance of model ion channels has been calculated directly, by counting the number of ion-crossing events observed during long molecular dynamics simulations, and compared with the conductance obtained from solving the generalized Nernst-Planck equation. It has been shown that under relatively modest conditions the agreement between these two approaches is excellent, demonstrating that the assumptions underlying the diffusion equation are fulfilled. Under these conditions the electrodiffusion equation provides an efficient approach to calculating the full voltage-current dependence routinely measured in electrophysiological experiments.
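The diffusion-in-a-PMF picture admits a closed-form steady-state flux, which is what the Nernst-Planck route evaluates. A one-dimensional sketch with an assumed 3 kT Gaussian barrier and an assumed voltage tilt (not a model of any particular channel):

```python
import numpy as np

def steady_flux(c0, cL, W, x, D=1.0):
    # Steady-state 1D electrodiffusion (Smoluchowski) flux through a
    # potential of mean force W(x) in units of kT:
    #   J = D * (c0*e^{W(0)} - cL*e^{W(L)}) / integral_0^L e^{W(x)} dx
    expW = np.exp(W)
    integral = np.sum(0.5 * (expW[1:] + expW[:-1]) * np.diff(x))
    return D * (c0 * expW[0] - cL * expW[-1]) / integral

x = np.linspace(0.0, 1.0, 1001)
barrier = 3.0 * np.exp(-((x - 0.5) ** 2) / 0.01)   # assumed 3 kT barrier

J0 = steady_flux(1.0, 1.0, barrier, x)             # no voltage: zero net flux
J2 = steady_flux(1.0, 1.0, barrier - 2.0 * x, x)   # tilt zeV/kT = 2
J_free = steady_flux(1.0, 1.0, -2.0 * x, x)        # same tilt, no barrier
```

Evaluating J over a range of voltage tilts yields the full voltage-current curve; comparing it to a direct count of crossing events in simulation is the consistency test the abstract describes.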
The energetic cost of walking: a comparison of predictive methods.
Kramer, Patricia Ann; Sylvester, Adam D
2011-01-01
The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is "best", but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches were assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. 
Although we used modern humans as our model organism, these results can be extended to other species.
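The hierarchical-regression idea above reduces, in its simplest form, to fitting metabolic energy on a mechanical-energy predictor and asking how much variance is explained. A sketch with synthetic data (the slope, intercept, and noise level are invented, not values from the 8-subject experiment):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical trials: mechanical energy (J) as a predictor of metabolic
# energy expenditure (J), with subject-level noise folded into one term.
mech = rng.uniform(100.0, 400.0, size=40)
metab = 3.0 * mech + 50.0 + rng.normal(0.0, 30.0, size=40)

# Ordinary least squares: metab ~ coef * mech + intercept
A = np.column_stack([mech, np.ones_like(mech)])
(coef, intercept), *_ = np.linalg.lstsq(A, metab, rcond=None)

pred = coef * mech + intercept
ss_res = np.sum((metab - pred) ** 2)
ss_tot = np.sum((metab - metab.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot   # variance explained by the predictor
```

A high R² on pooled trials is compatible with the paper's finding that within-individual variation is well predicted; separating between-individual variation requires adding subject-level terms to the model.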
Zuend, Stephan J.
2009-01-01
An experimental and computational investigation of amido-thiourea promoted imine hydrocyanation has revealed a new and unexpected mechanism of catalysis. Rather than direct activation of the imine by the thiourea, as had been proposed previously in related systems, the data are consistent with a mechanism involving catalyst-promoted proton transfer from hydrogen isocyanide to imine to generate diastereomeric iminium/cyanide ion pairs that are bound to catalyst through multiple non-covalent interactions; these ion pairs collapse to form the enantiomeric α-aminonitrile products. This mechanistic proposal is supported by the observation of a statistically significant correlation between experimental and calculated enantioselectivities induced by eight different catalysts (P ≪ 0.01). The computed models reveal a basis for enantioselectivity that involves multiple stabilizing and destabilizing interactions between substrate and catalyst, including thiourea-cyanide and amide-iminium interactions. PMID:19778044
Schreuder, Jan J; Castiglioni, Alessandro; Maisano, Francesco; Steendijk, Paul; Donelli, Andrea; Baan, Jan; Alfieri, Ottavio
2005-01-01
Surgical left ventricular restoration by means of endoventricular patch aneurysmectomy in patients with postinfarction aneurysm should result in acute improved left ventricular performance by decreasing mechanical dyssynchrony and increasing energy efficiency. Nine patients with left ventricular postinfarction aneurysm were studied intraoperatively before and after ventricular restoration with a conductance volume catheter to analyze pressure-volume relationships, energy efficiency, and mechanical dyssynchrony. The end-systolic elastance was used as a load-independent index of contractile state. Left ventricular energy efficiency was calculated from stroke work and total pressure-volume area. Segmental volume changes perpendicular to the long axis were used to calculate mechanical dyssynchrony. Statistical analysis was performed with the paired t test and least-squares linear regression. Endoventricular patch aneurysmectomy reduced end-diastolic volume by 37% (P < .001), with unchanged stroke volume. Systolic function improved, as derived from increased +dP/dt(max), by 42% (P < .03), peak ejection rate by 28% (P < .02), and ejection fraction by 16% (P < .0002). Early diastolic function improved, as shown by reduction of -dP/dt(max) by 34% (P < .006) and shortened tau by 30% (P < .001). Left ventricular end-systolic elastance increased from 1.2 +/- 0.6 to 2.2 +/- 1 mm Hg/mL (P < .001). Left ventricular energy efficiency increased by 36% (P < .002). Left ventricular mechanical dyssynchrony decreased during systole by 33% (P < .001) and during diastole by 20% (P < .005). Left ventricular restoration induced acute improvements in contractile state, energy efficiency, and relaxation, together with a decrease in left ventricular mechanical dyssynchrony.
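The energy-efficiency quantity used above, stroke work divided by total pressure-volume area, can be computed directly from a PV loop. A sketch with an idealized rectangular loop and invented values (real conductance-catheter loops are rounded, and the unstressed volume V0 is extrapolated, not assumed):

```python
import numpy as np

def loop_area(p, v):
    # Shoelace formula: area enclosed by the pressure-volume loop = stroke work
    return 0.5 * abs(np.sum(p * np.roll(v, -1) - v * np.roll(p, -1)))

# Idealized rectangular PV loop (hypothetical values: mm Hg, mL)
v = np.array([120.0, 120.0, 50.0, 50.0])   # end-diastolic -> end-systolic
p = np.array([10.0, 100.0, 100.0, 10.0])

sw = loop_area(p, v)           # stroke work, mm Hg * mL

# Total pressure-volume area = SW + potential energy (triangle down to an
# assumed unstressed volume V0)
v0 = 10.0
pe = 0.5 * (50.0 - v0) * 100.0
efficiency = sw / (sw + pe)    # fraction of PVA converted to external work
```

Ventricular restoration that shrinks end-diastolic volume while preserving stroke volume shrinks the PE triangle relative to SW, which is one way the reported 36% efficiency gain can arise.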
Notes on numerical reliability of several statistical analysis programs
Landwehr, J.M.; Tasker, Gary D.
1999-01-01
This report presents a benchmark analysis of several statistical analysis programs currently in use in the USGS. The benchmark consists of a comparison between the values provided by a statistical analysis program for variables in the reference data set ANASTY and their known or calculated theoretical values. The ANASTY data set is an amendment of the Wilkinson NASTY data set that has been used in the statistical literature to assess the reliability (computational correctness) of calculated analytical results.
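The kind of failure such benchmarks catch can be reproduced in a few lines. NASTY-style columns put a small spread on a huge mean, which breaks the textbook one-pass variance formula through catastrophic cancellation while a two-pass formula survives (the column below is a constructed example in that spirit, not a column from ANASTY):

```python
import numpy as np

# Small spread on a huge mean: the classic numerical-reliability trap
x = 1.0e8 + np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0])

# Textbook one-pass formula: sum(x^2)/n - mean^2 (numerically fragile)
naive_var = np.sum(x**2) / len(x) - (np.sum(x) / len(x)) ** 2

# Two-pass formula: center first, then square (numerically stable)
mean = np.sum(x) / len(x)
stable_var = np.sum((x - mean) ** 2) / len(x)

# Shifting data by a constant does not change the variance
true_var = np.var(np.arange(1.0, 10.0))
```

In double precision the one-pass result is visibly wrong while the two-pass result matches the theoretical value, which is exactly the kind of discrepancy the report's comparison against known theoretical values is designed to expose.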
Role of hydrogen in volatile behaviour of defects in SiO2-based electronic devices
NASA Astrophysics Data System (ADS)
Wimmer, Yannick; El-Sayed, Al-Moatasem; Gös, Wolfgang; Grasser, Tibor; Shluger, Alexander L.
2016-06-01
Charge capture and emission by point defects in gate oxides of metal-oxide-semiconductor field-effect transistors (MOSFETs) strongly affect reliability and performance of electronic devices. Recent advances in experimental techniques used for probing defect properties have led to new insights into their characteristics. In particular, these experimental data show a repeated dis- and reappearance (the so-called volatility) of the defect-related signals. We use multiscale modelling to explain the charge capture and emission as well as defect volatility in amorphous SiO2 gate dielectrics. We first briefly discuss the recent experimental results and use a multiphonon charge capture model to describe the charge-trapping behaviour of defects in silicon-based MOSFETs. We then link this model to ab initio calculations that investigate the three most promising defect candidates. Statistical distributions of defect characteristics obtained from ab initio calculations in amorphous SiO2 are compared with the experimentally measured statistical properties of charge traps. This allows us to suggest an atomistic mechanism to explain the experimentally observed volatile behaviour of defects. We conclude that the hydroxyl-E' centre is a promising candidate to explain all the observed features, including defect volatility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erdmann, Ewa; Labuda, Marta; Aguirre, Nestor F.
2018-03-15
We present a complete exploration of the different fragmentation mechanisms of furan (C4H4O) operating at low and high energies. Three different theoretical approaches are combined to determine the structure of all possible reaction intermediates, many of them not described in previous studies, and a large number of pathways involving three types of fundamental elementary mechanisms: isomerization, fragmentation, and H/H2 loss processes (this last one was not yet explored). Our results are compared with existing experimental and theoretical investigations of furan fragmentation. At low energies the first processes to appear are isomerizations, which always imply the breaking of one C-O bond and one or several hydrogen transfers; at intermediate energies fragmentation of the molecular skeleton becomes the most relevant mechanism; and H/H2 loss is the dominant process at high energy. However, all three mechanisms are active over very wide energy ranges and, therefore, at most energies there is a competition among them.
Shear band formation in plastic bonded explosive (PBX)
NASA Astrophysics Data System (ADS)
Dey, T. N.; Johnson, J. N.
1998-07-01
Adiabatic shear bands can be a source of ignition and lead to detonation. At low to moderate deformation rates, 10-1000 s⁻¹, two other mechanisms can also give rise to shear bands: 1) softening caused by micro-cracking and 2) a constitutive response with a non-associated flow rule, as observed in granular materials such as soil. Brittle behavior at small strains and the granular nature of HMX suggest that the constitutive behavior of PBX-9501 may be similar to that of sand. A constitutive model for the first of these mechanisms is studied in a series of calculations. This viscoelastic constitutive model for PBX-9501 softens via a statistical crack model. A sand model is used to provide a non-associated flow rule; detailed results will be reported elsewhere. Both models generate shear band formation at 1-2% strain at nominal strain rates at and below 1000 s⁻¹. Shear band formation is suppressed at higher strain rates. Both mechanisms may accelerate the formation of adiabatic shear bands.
Earth Observation System Flight Dynamics System Covariance Realism
NASA Technical Reports Server (NTRS)
Zaidi, Waqar H.; Tracewell, David
2016-01-01
This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
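The three-part technique can be sketched end to end: take definitive errors as truth, form the test statistic at each propagation point, and assess the resulting distribution against chi-squared expectations. The covariances below are invented stand-ins for propagated EOS covariances; the definitive errors are simulated rather than coming from orbit determination.

```python
import numpy as np

rng = np.random.default_rng(3)
dof = 3
n = 4000

# Part 1 (stand-in): "definitive" position errors, here drawn from an
# assumed true covariance rather than from orbit determination.
P_true = np.diag([1.0, 4.0, 9.0])
errors = rng.multivariate_normal(np.zeros(dof), P_true, size=n)

def realism_stats(errors, P):
    # Part 2: test statistic eps' P^-1 eps at each propagation point
    Pinv = np.linalg.inv(P)
    return np.einsum('ij,jk,ik->i', errors, Pinv, errors)

# Part 3: assess. A realistic covariance puts ~5% of statistics above the
# chi-squared 95th percentile (7.815 for 3 dof); an overconfident
# (too-small) covariance puts far more above it.
frac_real = np.mean(realism_stats(errors, P_true) > 7.815)
frac_small = np.mean(realism_stats(errors, 0.25 * P_true) > 7.815)
```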
NASA Technical Reports Server (NTRS)
Hubbard, W. B.; Dewitt, H. E.
1985-01-01
A model free energy is presented which accurately represents results from 45 high-precision Monte Carlo calculations of the thermodynamics of hydrogen-helium mixtures at pressures of astrophysical and planetophysical interest. The free energy is calculated using free-electron perturbation theory (dielectric function theory), and is an extension of the expression given in an earlier paper in this series. However, it fits the Monte Carlo results more accurately, and is valid for the full range of compositions from pure hydrogen to pure helium. Using the new free energy, the phase diagram of mixtures of liquid metallic hydrogen and helium is calculated and compared with earlier results. Sample results for mixing volumes are also presented, and the new free energy expression is used to compute a theoretical Jovian adiabat and compare the adiabat with results from three-dimensional Thomas-Fermi-Dirac theory. The present theory gives slightly higher densities at pressures of about 10 megabars.
Libstatmech and applications to astrophysics
NASA Astrophysics Data System (ADS)
Yu, Tianhong
In this work an introduction to Libstatmech is presented and applications, especially to astrophysics, are discussed. Libstatmech is a C toolkit for computing the statistical mechanics of fermions and bosons, written on top of libxml and gsl (the GNU Scientific Library). Calculations of the Thomas-Fermi screening model and of Bose-Einstein condensation based on libstatmech reproduce the expected results. As an astrophysics application, a simple Type Ia supernova model is established to run a network calculation with weak reactions, in which libstatmech computes the electron chemical potential and allows the weak reverse rates to be calculated from detailed balance. Starting with pure 12C and T9 = 1.8, we find that at high initial density (ρ ~ 9×10^9 g/cm^3) relatively large abundances of neutron-rich iron-group isotopes (e.g., 66Ni, 50Ti, 48Ca) are produced during the explosion, and Ye can drop to ~0.4, which indicates that rare, high-density Type Ia supernovae may help to explain the 48Ca and 50Ti effect in FUN CAIs.
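The electron chemical potential calculation at the heart of that application amounts to inverting the Fermi-Dirac density integral for the degeneracy parameter. A dimensionless sketch in Python (not libstatmech's C API): given a target density in units where the integral is ∫ x² dx / (e^(x-η) + 1), bisect on η.

```python
import numpy as np

def fd_density(eta):
    # Dimensionless Fermi-Dirac density integral, trapezoidal quadrature
    x = np.linspace(0.0, 60.0, 60001)
    f = x**2 / (np.exp(x - eta) + 1.0)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

def chem_potential(n_target, lo=-20.0, hi=50.0):
    # fd_density is monotonically increasing in eta, so bisection converges
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if fd_density(mid) < n_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu_nd = chem_potential(0.1)     # dilute, nondegenerate: eta < 0
mu_deg = chem_potential(1.0e4)  # dense, degenerate: eta ~ (3n)^(1/3)
```

In the degenerate limit the integral approaches η³/3, so the recovered η should sit near (3n)^(1/3); this kind of limit check mirrors how a toolkit like libstatmech can be validated.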
Kontosic, I; Vukelić, M; Pancić, M; Kunisek, J
1994-12-01
Physical work load was estimated for a female conveyor-belt worker in a bottling plant. The estimate was based on continuous measurement and on calculation of average heart rate values over three-minute and one-hour periods and over the total measuring period. The thermal component of the heart rate was calculated by means of the corrected effective temperature for the one-hour periods. The average heart rate at rest was also determined. The work component of the heart rate was calculated by subtracting the resting heart rate from the heart rate measured at 50 W, using a regression equation. The average estimated gross energy expenditure during the work was 9.6 +/- 1.3 kJ/min, corresponding to the category of light industrial work. The average estimated oxygen uptake was 0.42 +/- 0.06 L/min. The average mechanical power output was 12.2 +/- 4.2 W, i.e. the mechanical efficiency was 8.3 +/- 1.5%.
The Statistical Basis of Chemical Equilibria.
ERIC Educational Resources Information Center
Hauptmann, Siegfried; Menger, Eva
1978-01-01
Describes a machine which demonstrates the statistical bases of chemical equilibrium, and in doing so conveys insight into the connections among statistical mechanics, quantum mechanics, Maxwell Boltzmann statistics, statistical thermodynamics, and transition state theory. (GA)
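The statistical argument such a demonstration machine makes mechanically can be reproduced in silico: particles randomly exchanging indivisible energy quanta relax to the Boltzmann (geometrically decaying) level occupation, with no dynamics beyond counting. Parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

# N particles share Q energy quanta; random pairwise exchanges drive the
# ensemble toward the Maxwell-Boltzmann occupation of energy levels.
N, Q, steps = 1000, 1000, 100000
energy = np.full(N, Q // N)   # start everyone with one quantum

for _ in range(steps):
    i, j = rng.integers(0, N, size=2)
    if energy[i] > 0:         # move one quantum from particle i to j
        energy[i] -= 1
        energy[j] += 1

# At equilibrium the fraction of particles with n quanta decays
# geometrically; with mean 1 quantum, p(n) ~ (1/2)^(n+1).
counts = np.bincount(energy)
p0, p1 = counts[0] / N, counts[1] / N
```

Starting from a perfectly uniform state (everyone holds one quantum), purely random exchanges still produce the exponential-looking ladder of populations, which is the statistical basis of equilibrium the abstract refers to.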
2017-01-01
Binding free energy calculations that make use of alchemical pathways are becoming increasingly feasible thanks to advances in hardware and algorithms. Although relative binding free energy (RBFE) calculations are starting to find widespread use, absolute binding free energy (ABFE) calculations are still being explored mainly in academic settings due to their high computational requirements and still uncertain predictive value. However, in some drug design scenarios RBFE calculations are not applicable, and ABFE calculations could provide an alternative. Computationally cheaper end-point calculations in implicit solvent, such as molecular mechanics Poisson-Boltzmann surface area (MMPBSA) calculations, could also be used if one is primarily interested in a relative ranking of affinities. Here, we compare MMPBSA calculations to previously performed absolute alchemical free energy calculations in their ability to correlate with experimental binding free energies for three sets of bromodomain-inhibitor pairs. Different MMPBSA approaches have been considered, including a standard single-trajectory protocol, a protocol that includes a binding entropy estimate, and protocols that take into account the ligand hydration shell. Despite the improvements observed with the latter two MMPBSA approaches, ABFE calculations were found to be overall superior in obtaining correlation with experimental affinities for the test cases considered. A difference in weighted average Pearson (r) and Spearman (ρ) correlations of 0.25 and 0.31 was observed when using a standard single-trajectory MMPBSA setup (r = 0.64 and ρ = 0.66 for ABFE; r = 0.39 and ρ = 0.35 for MMPBSA). The best performing MMPBSA protocols returned weighted average Pearson and Spearman correlations that were about 0.1 inferior to ABFE calculations: r = 0.55 and ρ = 0.56 when including an entropy estimate, and r = 0.53 and ρ = 0.55 when including explicit water molecules. Overall, the study suggests that ABFE calculations are indeed the more accurate approach, yet there is also value in MMPBSA calculations considering their lower compute requirements, and if agreement with experimental affinities in absolute terms is not of interest. Moreover, for the specific protein-ligand systems considered in this study, we find that including an explicit ligand hydration shell or a binding entropy estimate in the MMPBSA calculations resulted in significant performance improvements at a negligible computational cost. PMID:28786670
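The weighted-average Pearson and Spearman correlations used as the headline metric can be computed without any statistics package; Spearman is just Pearson on ranks. The per-set affinities below are illustrative numbers, not the bromodomain data:

```python
import numpy as np

def pearson(x, y):
    x = x - x.mean()
    y = y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

def spearman(x, y):
    # Spearman = Pearson correlation of the ranks (no ties assumed here)
    return pearson(np.argsort(np.argsort(x)).astype(float),
                   np.argsort(np.argsort(y)).astype(float))

# Hypothetical (experimental, computed) binding free energies, kcal/mol,
# for two sets of different sizes -- illustrative values only.
sets = [
    (np.array([-8.1, -7.2, -9.0, -6.5, -7.8]),
     np.array([-7.5, -7.0, -8.8, -6.9, -7.6])),
    (np.array([-6.0, -7.9, -8.4, -5.5]),
     np.array([-6.2, -7.1, -8.9, -6.0])),
]
sizes = np.array([len(x) for x, _ in sets], dtype=float)

# Weight each set's correlation by its number of ligands
r_w = np.average([pearson(x, y) for x, y in sets], weights=sizes)
rho_w = np.average([spearman(x, y) for x, y in sets], weights=sizes)
```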
Maximum caliber inference of nonequilibrium processes
NASA Astrophysics Data System (ADS)
Otten, Moritz; Stock, Gerhard
2010-07-01
Thirty years ago, Jaynes suggested a general theoretical approach to nonequilibrium statistical mechanics, called maximum caliber (MaxCal) [Annu. Rev. Phys. Chem. 31, 579 (1980)]. MaxCal is a variational principle for dynamics in the same spirit that maximum entropy is a variational principle for equilibrium statistical mechanics. Motivated by the success of maximum entropy inference methods for equilibrium problems, in this work the MaxCal formulation is applied to the inference of nonequilibrium processes. That is, given some time-dependent observables of a dynamical process, one constructs a model that reproduces these input data and moreover, predicts the underlying dynamics of the system. For example, the observables could be some time-resolved measurements of the folding of a protein, which are described by a few-state model of the free energy landscape of the system. MaxCal then calculates the probabilities of an ensemble of trajectories such that on average the data are reproduced. From this probability distribution, any dynamical quantity of the system can be calculated, including population probabilities, fluxes, or waiting time distributions. After briefly reviewing the formalism, the practical numerical implementation of MaxCal in the case of an inference problem is discussed. Adopting various few-state models of increasing complexity, it is demonstrated that the MaxCal principle indeed works as a practical method of inference: The scheme is fairly robust and yields correct results as long as the input data are sufficient. As the method is unbiased and general, it can deal with any kind of time dependency such as oscillatory transients and multitime decays.
A two-component rain model for the prediction of attenuation statistics
NASA Technical Reports Server (NTRS)
Crane, R. K.
1982-01-01
A two-component rain model has been developed for calculating attenuation statistics. In contrast to most other attenuation prediction models, the two-component model calculates the occurrence probability for volume cells or debris attenuation events. The model performed significantly better than the International Radio Consultative Committee model when used for predictions on earth-satellite paths. It is expected that the model will have applications in modeling the joint statistics required for space diversity system design, the statistics of interference due to rain scatter at attenuating frequencies, and the duration statistics for attenuation events.
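The two-component structure can be sketched as the sum of two exceedance terms, one for volume cells and one for debris. The functional forms below (exponential tail for cells, lognormal tail for debris) follow the general spirit of such models, and every parameter value is illustrative, not one of Crane's fitted coefficients:

```python
import math

def exceedance(a_db, p_cell=0.02, a_cell=3.0, p_debris=0.10, mu=0.5, sigma=1.0):
    # Volume-cell component: occurrence probability times exponential tail
    cell = p_cell * math.exp(-a_db / a_cell)
    # Debris component: occurrence probability times lognormal tail
    debris = p_debris * 0.5 * math.erfc((math.log(a_db) - mu)
                                        / (sigma * math.sqrt(2.0)))
    return cell + debris

p5 = exceedance(5.0)    # P(attenuation > 5 dB)
p20 = exceedance(20.0)  # P(attenuation > 20 dB)
```

Summing the two components keeps the cell and debris events as separate physical populations, which is what lets the model also feed joint statistics for diversity design and event-duration estimates.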
Transbuccal delivery of chlorpheniramine maleate from mucoadhesive buccal patches.
Sekhar, K Chandra; Naidu, K V S; Vishnu, Y Vamshi; Gannu, Ramesh; Kishan, V; Rao, Y Madhusudan
2008-01-01
This article describes buccal permeation of chlorpheniramine maleate (CPM) and its transbuccal delivery using mucoadhesive buccal patches. Permeation of CPM was evaluated in vitro using porcine buccal membrane and in vivo in healthy humans. Buccal formulations were developed with hydroxyethylcellulose (HEC) and evaluated for in vitro release, moisture absorption, mechanical properties, and bioadhesion, and the optimized formulation was subjected to bioavailability studies in healthy human volunteers. The in vitro flux of CPM was calculated to be 0.14 +/- 0.03 mg.h(-1).cm(-2), and buccal absorption also was demonstrated in vivo in human volunteers. In vitro drug release and moisture absorbed were governed by HEC content, and the formulations exhibited good tensile and mucoadhesive properties. Bioavailability from the optimized buccal patch was 1.46 times higher than from the oral dosage form, and the results showed a statistically significant difference.
NASA Technical Reports Server (NTRS)
Macfarlane, J. J.; Hubbard, W. B.
1983-01-01
A numerical technique for solving the Thomas-Fermi-Dirac (TFD) equation in three dimensions, for an array of ions obeying periodic boundary conditions, is presented. The technique is then used to calculate deviations from ideal mixing for an alloy of hydrogen and helium at zero temperature and high pressures. Results are compared with alternative models which apply perturbation theory to the calculation of the electron distribution, based upon the assumption of weak response of the electron gas to the ions. The TFD theory, which permits strong electron response, always predicts smaller deviations from ideal mixing than would be predicted by perturbation theory. The results indicate that predicted phase separation curves for hydrogen-helium alloys under conditions prevailing in the metallic zones of Jupiter and Saturn are very model dependent.
NASA Astrophysics Data System (ADS)
Ren, Hongjiang; Li, Xiaojun; Qu, Yingjuan; Li, Feng
2018-01-01
The H abstraction reaction mechanism for sevoflurane with an ·OH radical was investigated theoretically using the dual-level B3LYP/6-311++G(d,p)//QCISD(T)/6-311G(d,p) approach. Thermochemical properties at 298.15-2000 K were analyzed with the standard statistical thermodynamics method. Three pathways P(1), P(2) and P(3) were found, corresponding to the H13, H14 and H15 abstraction reactions with Gibbs free-energy barriers of 54.86, 55.05 and 54.86 kJ mol-1, respectively. The corresponding rate constants for the three pathways over a wide temperature range of 298.15-2000 K were calculated, and the results are in good agreement with the experimental data.
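A Gibbs free-energy barrier maps onto a rate constant through the Eyring equation of transition state theory, k = (k_B T / h) exp(-ΔG‡ / RT). The sketch below uses standard constants and the simple TST form, without the tunneling or higher-level corrections a full treatment would include:

```python
import math

# Physical constants (SI)
KB = 1.380649e-23    # Boltzmann constant, J/K
H  = 6.62607015e-34  # Planck constant, J*s
R  = 8.314462618     # gas constant, J/(mol*K)

def eyring_rate(dG_kJ_per_mol, T):
    """Transition-state-theory rate constant k = (kB*T/h) * exp(-dG/(R*T)),
    with the barrier given in kJ/mol."""
    return (KB * T / H) * math.exp(-dG_kJ_per_mol * 1e3 / (R * T))

# Barrier reported for pathways P(1)/P(3) at room temperature
k = eyring_rate(54.86, 298.15)
```

A barrier of ~55 kJ/mol at 298.15 K gives a unimolecular-style rate constant on the order of 10³ s⁻¹; the slightly higher P(2) barrier yields a correspondingly smaller rate.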
Random mechanics: Nonlinear vibrations, turbulences, seisms, swells, fatigue
NASA Astrophysics Data System (ADS)
Kree, P.; Soize, C.
The random modeling of physical phenomena, together with probabilistic methods for the numerical calculation of random mechanical forces, is analytically explored. Attention is given to theoretical examinations such as probabilistic concepts, linear filtering techniques, and trajectory statistics. Applications of the methods to structures experiencing atmospheric turbulence, the quantification of turbulence, and the dynamic responses of the structures are considered. A probabilistic approach is taken to study the effects of earthquakes on structures and the forces exerted by ocean waves on marine structures. Theoretical analyses by means of vector spaces and stochastic modeling are reviewed, as are Markovian formulations of Gaussian processes and the definition of stochastic differential equations. Finally, random vibrations with a variable number of links and linear oscillators driven by the square of Gaussian processes are investigated.
Hayashi, Tomohiko; Chiba, Shuntaro; Kaneta, Yusuke; Furuta, Tadaomi; Sakurai, Minoru
2014-11-06
ATP binding cassette (ABC) proteins belong to a superfamily of active transporters. Recent experimental and computational studies have shown that binding of ATP to the nucleotide binding domains (NBDs) of ABC proteins drives the dimerization of NBDs, which, in turn, causes large conformational changes within the transmembrane domains (TMDs). To elucidate the active substrate transport mechanism of ABC proteins, it is first necessary to understand how the NBD dimerization is driven by ATP binding. In this study, we selected MalKs (NBDs of a maltose transporter) as a representative NBD and calculated the free-energy change upon dimerization using molecular mechanics calculations combined with a statistical thermodynamic theory of liquids, as well as a method to calculate the translational, rotational, and vibrational entropy change. This combined method is applied to a large number of snapshot structures obtained from molecular dynamics simulations containing explicit water molecules. The results suggest that the NBD dimerization proceeds with a large gain of water entropy when ATP molecules bind to the NBDs. The energetic gain arising from direct NBD-NBD interactions is canceled by the dehydration penalty and the configurational-entropy loss. ATP hydrolysis induces a loss of the shape complementarity between the NBDs, which leads to the dissociation of the dimer, due to a decrease in the water-entropy gain and an increase in the configurational-entropy loss. This interpretation of the NBD dimerization mechanism in concert with ATP, especially focused on the water-mediated entropy force, is potentially applicable to a wide variety of the ABC transporters.
Virtual screening using molecular simulations.
Yang, Tianyi; Wu, Johnny C; Yan, Chunli; Wang, Yuanfeng; Luo, Ray; Gonzales, Michael B; Dalby, Kevin N; Ren, Pengyu
2011-06-01
Effective virtual screening relies on our ability to make accurate predictions of protein-ligand binding, which remains a great challenge. In this work, utilizing the molecular-mechanics Poisson-Boltzmann (or Generalized Born) surface area approach, we have evaluated the binding affinity of a set of 156 ligands to seven families of proteins: trypsin β, thrombin α, cyclin-dependent kinase (CDK), cAMP-dependent kinase (PKA), urokinase-type plasminogen activator, β-glucosidase A, and coagulation factor Xa. The effect of the protein dielectric constant in the implicit-solvent model on the binding free energy calculation is shown to be important. The statistical correlations between the binding energy calculated from the implicit-solvent approach and experimental free energy are in the range of 0.56-0.79 across all the families. This performance is better than that of typical docking programs, especially given that the latter are directly trained using known binding data whereas the molecular-mechanics approach is based on general physical parameters. Estimation of the entropic contribution remains the barrier to accurate free energy calculation. We show that the traditional rigid rotor harmonic oscillator approximation is unable to improve the binding free energy prediction. Inclusion of conformational restriction seems to be promising but requires further investigation. On the other hand, our preliminary study suggests that implicit-solvent based alchemical perturbation, which offers explicit sampling of configurational entropy, can be a viable approach to significantly improve the prediction of binding free energy. Overall, the molecular mechanics approach has the potential for medium- to high-throughput computational drug discovery. Copyright © 2011 Wiley-Liss, Inc.
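The quoted correlations of 0.56-0.79 are Pearson coefficients between computed and experimental binding free energies. A minimal sketch of that statistic, with made-up illustrative values rather than the paper's 156-ligand data set:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples,
    e.g. computed binding energies vs. experimental free energies."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Illustrative (invented) computed vs. experimental values, kcal/mol
calc = [-9.1, -7.4, -8.2, -6.0, -10.3]
expt = [-8.5, -7.0, -7.9, -6.4, -9.6]
r = pearson_r(calc, expt)
```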
NASA Astrophysics Data System (ADS)
Nykyri, K.; Moore, T.; Dimmock, A. P.
2017-12-01
In the Earth's magnetosphere, the magnetotail plasma sheet ions are much hotter than in the shocked solar wind. On the dawn sector, the cold-component ions are more abundant and hotter by 30-40 percent when compared to the dusk sector. Recent statistical studies of the flank magnetopause and magnetosheath have shown that the level of temperature asymmetry of the magnetosheath is unable to account for this, so additional physical mechanisms must be at play, either at the magnetopause or in the plasma sheet, that contribute to this asymmetry. In this study, we perform a statistical analysis of the ion-scale wave properties in the three main plasma regimes common to flank magnetopause boundary crossings when the boundary is unstable to the Kelvin-Helmholtz instability (KHI): hot and tenuous magnetospheric, cold and dense magnetosheath, and mixed [Hasegawa et al., 2004]. These statistics of ion-scale wave properties are compared to observations of fast magnetosonic wave modes that have recently been linked to ion heating centered on Kelvin-Helmholtz vortices [Moore et al., 2016]. The statistical analysis shows that during KH events, non-adiabatic heating calculated during ion-scale wave intervals is enhanced when compared to non-KH events.
Spezia, Riccardo; Martínez-Nuñez, Emilio; Vazquez, Saulo; Hase, William L
2017-04-28
In this Introduction, we outline the basic problems of non-statistical and non-equilibrium phenomena addressed by the papers collected in this themed issue. Over the past few years, significant advances in both computing power and the development of theories have allowed the study of larger systems, increasing the time length of simulations and improving the quality of potential energy surfaces. In particular, the possibility of using quantum chemistry to calculate energies and forces 'on the fly' has paved the way to directly studying chemical reactions. This has provided a valuable tool to explore molecular mechanisms at given temperatures and energies and to see whether these reactive trajectories follow statistical laws and/or minimum energy pathways. This themed issue collects different aspects of the problem and gives an overview of recent works and developments in different contexts, from the gas phase to the condensed phase to excited states. This article is part of the themed issue 'Theoretical and computational studies of non-equilibrium and non-statistical dynamics in the gas phase, in the condensed phase and at interfaces'. © 2017 The Author(s).
Hitting Is Contagious in Baseball: Evidence from Long Hitting Streaks
Bock, Joel R.; Maewal, Akhilesh; Gough, David A.
2012-01-01
Data analysis is used to test the hypothesis that “hitting is contagious”. A statistical model is described to study the effect of a hot hitter upon his teammates’ batting during a consecutive game hitting streak. Box score data for entire seasons comprising long hitting streaks were compiled. Treatment and control sample groups were constructed from core lineups of players on the streaking batter’s team. The percentile-method bootstrap was used to calculate confidence intervals for statistics representing differences in the mean distributions of two batting statistics between groups. Batters in the treatment group (hot streak active) showed statistically significant improvements in hitting performance, as compared against the control. Mean batting average for the treatment group was found to be higher during hot streaks, while the batting heat index introduced here was also observed to increase. For each performance statistic, the null hypothesis was rejected at the chosen significance level. We conclude that the evidence suggests the potential existence of a “statistical contagion effect”. Psychological mechanisms essential to the empirical results are suggested, as several studies from the scientific literature lend credence to contagious phenomena in sports. Causal inference from these results is difficult, but we suggest and discuss several latent variables that may contribute to the observed results, and offer possible directions for future research. PMID:23251507
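The percentile-method bootstrap used in the paper can be sketched as follows; the batting averages below are invented placeholders, not the study's box-score data:

```python
import random

def percentile_bootstrap_ci(treatment, control, n_boot=5000, alpha=0.05, seed=1):
    """Percentile-method bootstrap confidence interval for the difference
    in group means (treatment minus control): resample each group with
    replacement, recompute the mean difference, and take the alpha/2 and
    1-alpha/2 percentiles of the bootstrap distribution."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        t = [rng.choice(treatment) for _ in treatment]
        c = [rng.choice(control) for _ in control]
        diffs.append(sum(t) / len(t) - sum(c) / len(c))
    diffs.sort()
    lo = diffs[int((alpha / 2) * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Illustrative batting averages (streak active vs. control)
treat = [0.285, 0.301, 0.270, 0.322, 0.295, 0.310, 0.288, 0.305]
ctrl  = [0.262, 0.275, 0.255, 0.280, 0.268, 0.259, 0.272, 0.266]
lo, hi = percentile_bootstrap_ci(treat, ctrl)
```

If the interval excludes zero, the null hypothesis of equal means is rejected at the corresponding significance level, which is the paper's decision rule.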
Statistical Properties of SEE Rate Calculation in the Limits of Large and Small Event Counts
NASA Technical Reports Server (NTRS)
Ladbury, Ray
2007-01-01
This viewgraph presentation reviews the statistical properties of single event effect (SEE) rate calculations. The goal of SEE rate calculation is to bound the SEE rate; the question is by how much. The presentation covers: (1) understanding errors on SEE cross sections, (2) methodology: maximum likelihood and confidence contours, (3) tests with simulated data, and (4) applications.
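In the small-event-count limit, bounding an SEE rate reduces to a classical Poisson upper limit: find the largest mean μ still consistent with observing n events at the stated confidence level. A sketch via bisection on the Poisson CDF (the familiar zero-event 95% limit is about 3.0 expected events):

```python
import math

def poisson_upper_limit(n_observed, cl=0.95, hi=100.0, tol=1e-8):
    """Classical upper limit mu_up on a Poisson mean given n observed
    events: solve P(N <= n | mu) = 1 - cl for mu by bisection."""
    def cdf(n, mu):
        return sum(math.exp(-mu) * mu ** k / math.factorial(k)
                   for k in range(n + 1))
    lo = 0.0
    target = 1.0 - cl
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cdf(n_observed, mid) > target:
            lo = mid          # CDF still too large: mu must be bigger
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Zero observed SEE in a total fluence F bounds the rate by mu_up / F
mu0 = poisson_upper_limit(0)   # about 3.0 events at 95% CL
```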
NASA Astrophysics Data System (ADS)
Lin, Shu; Wang, Rui; Xia, Ning; Li, Yongdong; Liu, Chunliang
2018-01-01
Statistical multipactor theories are critical prediction approaches for multipactor breakdown determination. However, these approaches still require a trade-off between calculation efficiency and accuracy. This paper presents an improved stationary statistical theory for efficient threshold analysis of two-surface multipactor. A general integral equation over the distribution function of the electron emission phase, with both single-sided and double-sided impacts considered, is formulated. The modeling results indicate that the improved stationary statistical theory can not only match the accuracy of the nonstationary statistical theory in multipactor threshold calculation, but also achieve high calculation efficiency concurrently. By using this improved stationary statistical theory, the total time consumption in calculating full multipactor susceptibility zones of parallel plates can be decreased by as much as a factor of four relative to the nonstationary statistical theory. It is also shown that the effect of single-sided impacts is indispensable for accurate multipactor prediction of coaxial lines and is more significant for high-order multipactor. Finally, the influence of secondary emission yield (SEY) properties on the multipactor threshold is further investigated. It is observed that the first crossover energy and the energy range between the first crossover and the SEY maximum both play a significant role in determining the multipactor threshold, which agrees with the numerical simulation results in the literature.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.
Endovascular Treatment of Ischemic Stroke: An Updated Meta-Analysis of Efficacy and Safety.
Vidale, Simone; Agostoni, Elio
2017-05-01
Recent randomized trials demonstrated the superiority of mechanical thrombectomy over the best medical treatment in patients with acute ischemic stroke due to an occlusion of arteries of the proximal anterior circulation. In this updated meta-analysis, we aimed to summarize the total clinical effects of the treatment, including the latest trials. We performed a literature search of Randomized Controlled Trials (RCTs) published between 2010 and October 2016, comparing intravenous thrombolysis plus mechanical thrombectomy (intervention group) with best medical care alone (control group). We identified 8 trials. Primary outcomes were reduced disability at 90 days from the event and symptomatic intracranial hemorrhage. Statistical analysis was performed by pooling data into the 2 groups and evaluating outcome heterogeneity. The Mantel-Haenszel method was used to calculate odds ratios (ORs). We analyzed data for 1845 patients (interventional group: 911; control group: 934). Mechanical thrombectomy contributed to a significant reduction in disability rate compared to the best medical treatment alone (OR: 2.087; 95% confidence interval [CI]: 1.718-2.535; P < .001). We calculated that for every 100 treated patients, 16 more participants have a good outcome as a result of mechanical treatment. No significant differences between groups were observed concerning the occurrence of symptomatic hemorrhage (OR: 1.021; 95% CI: 0.641-1.629; P = .739). Mechanical thrombectomy significantly increases the functional benefit of intravenous thrombolysis in patients with acute ischemic stroke caused by arterial occlusion of the proximal anterior circulation, without a reduction in safety. These findings are relevant for the optimization of acute stroke management, including the implementation of networks between stroke centers.
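The Mantel-Haenszel pooled odds ratio used above can be computed directly from the per-trial 2x2 tables. A sketch with invented counts (not the meta-analysis data; a full analysis would also compute the confidence interval and heterogeneity statistics):

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio across 2x2 trial tables.
    Each table is (a, b, c, d) = (events/treated, no-event/treated,
    events/control, no-event/control); OR_MH = sum(a*d/N) / sum(b*c/N)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Illustrative good-outcome counts for three hypothetical trials
trials = [
    (60, 40, 40, 60),
    (80, 70, 55, 95),
    (45, 55, 30, 70),
]
or_mh = mantel_haenszel_or(trials)
```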
Compton-Scattering Cross Section on the Proton at High Momentum Transfer
NASA Astrophysics Data System (ADS)
Danagoulian, A.; Mamyan, V. H.; Roedelbronn, M.; Aniol, K. A.; Annand, J. R. M.; Bertin, P. Y.; Bimbot, L.; Bosted, P.; Calarco, J. R.; Camsonne, A.; Chang, C. C.; Chang, T.-H.; Chen, J.-P.; Choi, Seonho; Chudakov, E.; Degtyarenko, P.; de Jager, C. W.; Deur, A.; Dutta, D.; Egiyan, K.; Gao, H.; Garibaldi, F.; Gayou, O.; Gilman, R.; Glamazdin, A.; Glashausser, C.; Gomez, J.; Hamilton, D. J.; Hansen, J.-O.; Hayes, D.; Higinbotham, D. W.; Hinton, W.; Horn, T.; Howell, C.; Hunyady, T.; Hyde, C. E.; Jiang, X.; Jones, M. K.; Khandaker, M.; Ketikyan, A.; Kubarovsky, V.; Kramer, K.; Kumbartzki, G.; Laveissière, G.; Lerose, J.; Lindgren, R. A.; Margaziotis, D. J.; Markowitz, P.; McCormick, K.; Meekins, D. G.; Meziani, Z.-E.; Michaels, R.; Moussiegt, P.; Nanda, S.; Nathan, A. M.; Nikolenko, D. M.; Nelyubin, V.; Norum, B. E.; Paschke, K.; Pentchev, L.; Perdrisat, C. F.; Piasetzky, E.; Pomatsalyuk, R.; Punjabi, V. A.; Rachek, I.; Radyushkin, A.; Reitz, B.; Roche, R.; Ron, G.; Sabatié, F.; Saha, A.; Savvinov, N.; Shahinyan, A.; Shestakov, Y.; Širca, S.; Slifer, K.; Solvignon, P.; Stoler, P.; Tajima, S.; Sulkosky, V.; Todor, L.; Vlahovic, B.; Weinstein, L. B.; Wang, K.; Wojtsekhowski, B.; Voskanyan, H.; Xiang, H.; Zheng, X.; Zhu, L.
2007-04-01
Cross-section values for Compton scattering on the proton were measured at 25 kinematic settings over the range s = 5-11 GeV² and -t = 2-7 GeV² with a statistical accuracy of a few percent. The scaling power for the s dependence of the cross section at fixed center-of-mass angle was found to be 8.0±0.2, strongly inconsistent with the prediction of perturbative QCD. The observed cross-section values are in fair agreement with the calculations using the handbag mechanism, in which the external photons couple to a single quark.
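The scaling power n in dσ/dt ∝ s⁻ⁿ at fixed center-of-mass angle is the (negated) slope of a log-log fit. A sketch using synthetic data generated with n = 8 exactly, not the measured cross sections:

```python
import math

def scaling_power(s_values, cross_sections):
    """Least-squares slope of ln(dsigma/dt) vs. ln(s) at fixed CM angle;
    returns n for the power law dsigma/dt ~ s^-n."""
    xs = [math.log(s) for s in s_values]
    ys = [math.log(cs) for cs in cross_sections]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Synthetic cross sections generated with n = 8 exactly
s = [5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0]
sigma = [si ** -8.0 for si in s]
n_fit = scaling_power(s, sigma)
```

On real data with percent-level statistics, the same fit would carry an uncertainty like the quoted ±0.2.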
Analysis of Yb3+/Er3+-codoped microring resonator cross-grid matrices
NASA Astrophysics Data System (ADS)
Vallés, Juan A.; Gǎlǎtuş, Ramona
2014-09-01
An analytic model of the scattering response of a highly Yb3+/Er3+-codoped phosphate glass microring resonator matrix is considered to obtain the transfer functions of an M x N cross-grid microring resonator structure. Then a detailed model is used to calculate the pump and signal propagation, including a microscopic statistical formalism to describe the high-concentration induced energy-transfer mechanisms and passive and active features are combined to realistically simulate the performance as a wavelength-selective amplifier or laser. This analysis allows the optimization of these structures for telecom or sensing applications.
Designing high speed diagnostics
NASA Astrophysics Data System (ADS)
Veliz Carrillo, Gerardo; Martinez, Adam; Mula, Swathi; Prestridge, Kathy; Extreme Fluids Team
2017-11-01
Timing and firing for shock-driven flows is complex because of jitter in the shock tube mechanical drivers. Consequently, experiments require dynamic triggering of diagnostics from pressure transducers. We explain the design process and criteria for setting up re-shock experiments at the Los Alamos Vertical Shock Tube facility, and the requirements for particle image velocimetry and planar laser-induced fluorescence measurements necessary for calculating Richtmyer-Meshkov variable density turbulent statistics. Dynamic triggering of diagnostics allows for further investigation of the development of the Richtmyer-Meshkov instability at both initial shock and re-shock. Thanks to the Los Alamos National Laboratory for funding our project.
NASA Astrophysics Data System (ADS)
Sargolzaeipor, S.; Hassanabadi, H.; Chung, W. S.
2018-04-01
The Klein-Gordon equation is extended in the presence of an Aharonov-Bohm magnetic field for the Cornell potential, and the corresponding wave functions as well as the spectra are obtained. After introducing superstatistics into statistical mechanics, we first derive the effective Boltzmann factor in the deformed formalism with a modified Dirac delta distribution. We then use the concepts of superstatistics to calculate the thermodynamic properties of the system. The well-known results are recovered in the limit of vanishing deformation parameter, and some graphs are plotted for the clarity of our results.
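The effective Boltzmann factor of superstatistics averages the ordinary factor over a distribution f(β) of inverse temperatures, B(E) = ∫ f(β) e^(-βE) dβ. The numerical sketch below assumes the common chi-square (Gamma) choice of f(β) — an assumption for illustration, since the abstract does not specify the deformed distribution it uses:

```python
import math

def effective_boltzmann(E, beta0=1.0, q=1.2, n=100000, bmax=50.0):
    """Superstatistical effective Boltzmann factor
    B(E) = integral of f(beta) * exp(-beta*E) d(beta),
    with beta Gamma-distributed (chi-square superstatistics), mean beta0
    and deformation parameter q; composite trapezoidal quadrature."""
    k = 1.0 / (q - 1.0)        # Gamma shape
    theta = beta0 * (q - 1.0)  # Gamma scale, so mean = k * theta = beta0
    norm = math.gamma(k) * theta ** k
    def integrand(b):
        if b <= 0.0:
            return 0.0  # k > 1 here, so the integrand vanishes at b = 0
        return b ** (k - 1.0) * math.exp(-b / theta - b * E) / norm
    h = bmax / n
    total = 0.5 * (integrand(0.0) + integrand(bmax))
    total += sum(integrand(i * h) for i in range(1, n))
    return total * h

# For this f(beta) the integral has the q-exponential closed form
# B(E) = (1 + (q-1)*beta0*E) ** (-1/(q-1)), a useful numerical check.
B_num = effective_boltzmann(2.0)
B_exact = (1.0 + 0.2 * 1.0 * 2.0) ** (-5.0)
```

Ordinary statistical mechanics is recovered as q → 1, where f(β) collapses to a delta function at β₀.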
Electroencephalographic field influence on calcium momentum waves.
Ingber, Lester; Pappalepore, Marco; Stesiak, Ronald R
2014-02-21
Macroscopic electroencephalographic (EEG) fields can be an explicit top-down neocortical mechanism that directly drives bottom-up processes that describe memory, attention, and other neuronal processes. The top-down mechanism considered is macrocolumnar EEG firings in neocortex, as described by a statistical mechanics of neocortical interactions (SMNI), developed as a magnetic vector potential A. The bottom-up process considered is Ca(2+) waves prominent in synaptic and extracellular processes that are considered to greatly influence neuronal firings. Here, the complementary effects are considered, i.e., the influence of A on the Ca(2+) momentum, p. The canonical momentum of a charged particle in an electromagnetic field, Π=p+qA (SI units), is calculated, where the charge of Ca(2+) is q=-2e, and e is the magnitude of the charge of an electron. Calculations demonstrate that macroscopic EEG A can be quite influential on the momentum p of Ca(2+) ions, in both classical and quantum mechanics. Molecular scales of Ca(2+) wave dynamics are coupled with A fields developed at macroscopic regional scales by coherent neuronal firing activity measured by scalp EEG. The project has three main aspects: fitting A models to EEG data as reported here, building tripartite models to develop A models, and studying long coherence times of Ca(2+) waves in the presence of A due to coherent neuronal firings measured by scalp EEG. The SMNI model supports a mechanism wherein the p+qA interaction at tripartite synapses, via a dynamic centering mechanism (DCM) to control background synaptic activity, acts to maintain short-term memory (STM) during states of selective attention. © 2013 Published by Elsevier Ltd. All rights reserved.
Test particle propagation in magnetostatic turbulence. 2: The local approximation method
NASA Technical Reports Server (NTRS)
Klimas, A. J.; Sandri, G.; Scudder, J. D.; Howell, D. R.
1976-01-01
An approximation method for statistical mechanics is presented and applied to a class of problems which contains a test particle propagation problem. All of the available basic equations used in statistical mechanics are cast in the form of a single equation which is integrodifferential in time and which is then used as the starting point for the construction of the local approximation method. Simplification of the integrodifferential equation is achieved through approximation to the Laplace transform of its kernel. The approximation is valid near the origin in the Laplace space and is based on the assumption of small Laplace variable. No other small parameter is necessary for the construction of this approximation method. The nth level of approximation is constructed formally, and the first five levels of approximation are calculated explicitly. It is shown that each level of approximation is governed by an inhomogeneous partial differential equation in time with time-independent operator coefficients. The order in time of these partial differential equations is found to increase as n does. At n = 0 the most local first order partial differential equation which governs the Markovian limit is regained.
A computational DFT study of structural transitions in textured solid-fluid interfaces
NASA Astrophysics Data System (ADS)
Yatsyshin, Petr; Parry, Andrew O.; Kalliadasis, Serafim
2015-11-01
Fluids adsorbed at walls, in capillary pores and slits, and in more exotic, sculpted geometries such as grooves and wedges can exhibit many new phase transitions, including wetting, pre-wetting, capillary-condensation and filling, compared to their bulk counterparts. As well as being of fundamental interest to the modern statistical mechanical theory of inhomogeneous fluids, these are also relevant to nanofluidics, chemical- and bioengineering. In this talk we will show using a microscopic Density Functional Theory (DFT) for fluids how novel, continuous, interfacial transitions associated with the first-order prewetting line, can occur on steps, in grooves and in wedges, that are sensitive to both the range of the intermolecular forces and interfacial fluctuation effects. These transitions compete with wetting, filling and condensation producing very rich phase diagrams even for relatively simple geometries. We will also discuss practical aspects of DFT calculations, and demonstrate how this statistical-mechanical framework is capable of yielding complex fluid structure, interfacial tensions, and regions of thermodynamic stability of various fluid configurations. As a side note, this demonstrates that DFT is an excellent tool for the investigations of complex multiphase systems. We acknowledge financial support from the European Research Council via Advanced Grant No. 247031.
Ergodic Theory, Interpretations of Probability and the Foundations of Statistical Mechanics
NASA Astrophysics Data System (ADS)
van Lith, Janneke
The traditional use of ergodic theory in the foundations of equilibrium statistical mechanics is that it provides a link between thermodynamic observables and microcanonical probabilities. First of all, the ergodic theorem demonstrates the equality of microcanonical phase averages and infinite time averages (albeit for a special class of systems, and up to a measure zero set of exceptions). Secondly, one argues that actual measurements of thermodynamic quantities yield time averaged quantities, since measurements take a long time. The combination of these two points is held to be an explanation why calculating microcanonical phase averages is a successful algorithm for predicting the values of thermodynamic observables. It is also well known that this account is problematic. This survey intends to show that ergodic theory nevertheless may have important roles to play, and it explores three other uses of ergodic theory. Particular attention is paid, firstly, to the relevance of specific interpretations of probability, and secondly, to the way in which the concern with systems in thermal equilibrium is translated into probabilistic language. With respect to the latter point, it is argued that equilibrium should not be represented as a stationary probability distribution as is standardly done; instead, a weaker definition is presented.
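The first point above, the equality of time and phase averages for ergodic systems, can be checked numerically on a textbook example (the irrational rotation of the circle, chosen here for illustration and not a system discussed in the survey):

```python
import math

def time_average(f, alpha, n):
    """Time average of the observable f along the orbit of the circle
    rotation x -> (x + alpha) mod 1 starting from x = 0. For irrational
    alpha the map is ergodic with uniform invariant measure, so this
    converges to the phase average, the integral of f over [0, 1)."""
    x, total = 0.0, 0.0
    for _ in range(n):
        total += f(x)
        x = (x + alpha) % 1.0
    return total / n

alpha = (math.sqrt(5) - 1) / 2               # golden rotation number
t_avg = time_average(lambda x: x, alpha, 1_000_000)
phase_avg = 0.5                               # integral of x over [0, 1)
```

For rational alpha the orbit is periodic rather than equidistributed, and the two averages generally disagree, which is the measure-zero exceptional set the ergodic theorem allows.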
Properties of J^P = 1/2^+ baryon octets at low energy
NASA Astrophysics Data System (ADS)
Kaur, Amanpreet; Gupta, Pallavi; Upadhyay, Alka
2017-06-01
The statistical model in combination with the detailed balance principle is able to phenomenologically calculate and analyze spin- and flavor-dependent properties like magnetic moments (with effective masses, with effective charge, or with both effective mass and effective charge), quark spin polarization and distribution, the strangeness suppression factor, and the d̄-ū asymmetry incorporating the strange sea. The ss̄ pairs in the sea are said to be generated via the basic quark mechanism but suppressed by the strange quark mass factor, m_s > m_{u,d}. The magnetic moments of the octet baryons are analyzed within the statistical model, by putting emphasis on the SU(3) symmetry-breaking effects generated by the mass difference between the strange and non-strange quarks. The work presented here assumes hadrons with a sea having an admixture of quark-gluon Fock states. The results obtained have been compared with theoretical models and experimental data.
Statistical Modeling of Robotic Random Walks on Different Terrain
NASA Astrophysics Data System (ADS)
Naylor, Austin; Kinnaman, Laura
Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.
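The level-ground benchmark for an uncorrelated walk, mean square displacement growing as ⟨r²⟩ = N·l², can be reproduced with a few lines of simulation (illustrative parameters, not the LEGO-robot data; a CRW would instead draw each turning angle from a distribution concentrated near zero):

```python
import math, random

def mean_square_displacement(n_steps, n_walks, step=1.0, seed=0):
    """Monte Carlo estimate of the MSD of a 2D random walk with uniform
    turning angles and fixed step length; theory gives <r^2> = N * l^2
    for uncorrelated steps."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        x = y = 0.0
        for _ in range(n_steps):
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x += step * math.cos(theta)
            y += step * math.sin(theta)
        total += x * x + y * y
    return total / n_walks

msd = mean_square_displacement(n_steps=100, n_walks=5000)
# Expected value is close to 100 * 1.0**2 for the uncorrelated walk
```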
Quantum-statistical theory of microwave detection using superconducting tunnel junctions
NASA Astrophysics Data System (ADS)
Deviatov, I. A.; Kuzmin, L. S.; Likharev, K. K.; Migulin, V. V.; Zorin, A. B.
1986-09-01
A quantum-statistical theory of microwave and millimeter-wave detection using superconducting tunnel junctions is developed, with a rigorous account of quantum, thermal, and shot noise arising from fluctuation sources associated with the junctions, signal source, and matching circuits. The problem of the noise characterization in the quantum sensitivity range is considered and a general noise parameter Theta(N) is introduced. This parameter is shown to be an adequate figure of merit for most receivers of interest while some devices can require a more complex characterization. Analytical expressions and/or numerically calculated plots for Theta(N) are presented for the most promising detection modes including the parametric amplification, heterodyne mixing, and quadratic videodetection, using both the quasiparticle-current and the Cooper-pair-current nonlinearities. Ultimate minimum values of Theta(N) for each detection mode are compared and found to be in agreement with limitations imposed by the quantum-mechanical uncertainty principle.
Maximum one-shot dissipated work from Rényi divergences
NASA Astrophysics Data System (ADS)
Yunger Halpern, Nicole; Garner, Andrew J. P.; Dahlsten, Oscar C. O.; Vedral, Vlatko
2018-05-01
Thermodynamics describes large-scale, slowly evolving systems. Two modern approaches generalize thermodynamics: fluctuation theorems, which concern finite-time nonequilibrium processes, and one-shot statistical mechanics, which concerns small scales and finite numbers of trials. Combining these approaches, we calculate a one-shot analog of the average dissipated work defined in fluctuation contexts: the cost of performing a protocol in finite time instead of quasistatically. The average dissipated work has been shown to be proportional to a relative entropy between phase-space densities, to a relative entropy between quantum states, and to a relative entropy between probability distributions over possible values of work. We derive one-shot analogs of all three equations, demonstrating that the order-infinity Rényi divergence is proportional to the maximum possible dissipated work in each case. These one-shot analogs of fluctuation-theorem results contribute to the unification of these two toolkits for small-scale, nonequilibrium statistical physics.
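For discrete probability distributions, the order-infinity Rényi divergence featured in this result reduces to the log of the maximum likelihood ratio, which is simple to compute directly (a minimal numerical sketch; the example distributions are illustrative):

```python
import math

def renyi_divergence_inf(p, q):
    """Order-infinity Renyi divergence D_inf(p||q) = log max_i p_i / q_i.

    Assumes the support of p is contained in the support of q.
    """
    return math.log(max(pi / qi for pi, qi in zip(p, q) if pi > 0))

# Example distributions: the largest ratio is 0.5/0.25 = 2, so D_inf = log 2.
p = [0.5, 0.3, 0.2]
q = [0.25, 0.25, 0.5]
d = renyi_divergence_inf(p, q)
```

In the paper's setting this quantity (in appropriate work-distribution variables) bounds the maximum possible dissipated work, in analogy with the average dissipated work being a relative entropy.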
Valid statistical inference methods for a case-control study with missing data.
Tian, Guo-Liang; Zhang, Chi; Jiang, Xuejun
2018-04-01
The main objective of this paper is to derive the valid sampling distribution of the observed counts in a case-control study with missing data, under the missing-at-random assumption, by employing the conditional sampling method and the mechanism augmentation method. The proposed sampling distribution, called the case-control sampling distribution, can be used to calculate the standard errors of the maximum likelihood estimates of parameters via the Fisher information matrix and to generate independent samples for constructing small-sample bootstrap confidence intervals. Theoretical comparisons of the new case-control sampling distribution with two existing sampling distributions reveal large differences. Simulations are conducted to investigate the influence of the three different sampling distributions on statistical inference. One finding is that the conclusion of the Wald test for testing independence under the two existing sampling distributions can be completely different from (even contradictory to) that of the Wald test for testing the equality of the success probabilities in the control/case groups under the proposed distribution. A real cervical cancer data set is used to illustrate the proposed statistical methods.
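The small-sample bootstrap confidence intervals mentioned above can be illustrated with a generic percentile bootstrap for the log odds ratio of a 2x2 table (hypothetical counts; this is the standard percentile bootstrap, not the paper's case-control sampling distribution):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2x2 case-control data: 1 = exposed, 0 = unexposed.
cases = np.array([1] * 30 + [0] * 70)      # 30/100 exposed cases
controls = np.array([1] * 15 + [0] * 85)   # 15/100 exposed controls

def log_odds_ratio(cases, controls):
    a, b = cases.sum(), len(cases) - cases.sum()
    c, d = controls.sum(), len(controls) - controls.sum()
    return np.log((a * d) / (b * c))

# Percentile bootstrap: resample each group independently with replacement.
boot = np.array([
    log_odds_ratio(rng.choice(cases, len(cases)),
                   rng.choice(controls, len(controls)))
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])   # 95% percentile interval
```

The paper's contribution is precisely that the distribution such resamples should be drawn from (the case-control sampling distribution) differs from naive choices, which changes the resulting intervals.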
The choice of statistical methods for comparisons of dosimetric data in radiotherapy.
Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques
2014-09-18
Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the safety and the clinical outcome of treatments. These changes raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and tests whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method calculates the dose without tissue density corrections using the Pencil Beam Convolution (PBC) algorithm, whereas the new methods calculate the dose with 1D and 3D tissue density corrections using the Modified Batho (MB) method and the Equivalent Tissue-Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilk and Levene tests, respectively; non-parametric statistical tests were then performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman's test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman's and Kendall's rank tests. Friedman's test showed a significant effect of the calculation method on the delivered dose for lung cancer patients (p < 0.001). The density correction methods yielded lower doses than PBC, on average by -5 (± 4.4 SD) for MB and -4.7 (± 5 SD) for ETAR.
Post-hoc Wilcoxon signed-rank test of paired comparisons indicated that the delivered dose was significantly reduced using density-corrected methods as compared to the reference method. Spearman's and Kendall's rank tests indicated a positive correlation between the doses calculated with the different methods. This paper illustrates and justifies the use of statistical tests and graphical representations for dosimetric comparisons in radiotherapy. The statistical analysis shows the significance of dose differences resulting from two or more techniques in radiotherapy.
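The test sequence described above (omnibus Friedman test, post-hoc Wilcoxon signed-rank test, rank correlation) can be sketched with SciPy on simulated paired dose data (the numbers below are stand-ins for PBC, MB and ETAR, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical monitor-unit doses for 62 fields under three calculation methods.
pbc = rng.normal(100.0, 10.0, 62)
mb = pbc * rng.normal(0.95, 0.02, 62)     # ~5% lower than PBC on average
etar = pbc * rng.normal(0.953, 0.02, 62)  # ~4.7% lower than PBC on average

# Omnibus non-parametric test across the three paired methods.
stat, p_friedman = stats.friedmanchisquare(pbc, mb, etar)

# Post-hoc paired comparison and rank correlation, as in the abstract.
w_stat, p_wilcoxon = stats.wilcoxon(pbc, mb)
rho, p_rho = stats.spearmanr(pbc, mb)
```

With a consistent systematic shift between methods, the Friedman and Wilcoxon p-values come out far below 0.05 while the rank correlation stays high, mirroring the pattern reported in the study.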
Price Analysis of Railway Freight Transport under Marketing Mechanism
NASA Astrophysics Data System (ADS)
Shi, Ying; Fang, Xiaoping; Chen, Zhiya
Addressing problems in the reform of the railway tariff system and the pricing of transport services, this article analyzes the influence of price elasticity on pricing strategy and proposes a multiple regression model that quantifies price elasticity. The model includes multiple factors that influence price elasticity, such as the average railway freight charge, the average freight charge of the nearest substitute transport mode, the GDP per capita at the point of origin, and a series of dummy variables reflecting the features of particular production and consumption regions. It can calculate the price elasticity of different freight classes in different regions and predict freight traffic volume at different rate levels. It can also compute confidence levels and evaluate the relevance of each parameter in order to eliminate irrelevant or weakly relevant variables. The model supplies a good theoretical basis for guiding the pricing decisions of transport enterprises under market conditions, and it is suitable for railway freight, passenger traffic and other transport modes as well. SPSS (Statistical Package for the Social Sciences) software was used to calculate and analyze the example. The calculation was realized in the HYFX system (Ministry of Railways fund).
Efficient free energy calculations of quantum systems through computer simulations
NASA Astrophysics Data System (ADS)
Antonelli, Alex; Ramirez, Rafael; Herrero, Carlos; Hernandez, Eduardo
2009-03-01
In general, the classical limit is assumed in computer simulation calculations of free energy. This approximation, however, is not justifiable for a class of systems in which quantum contributions to the free energy cannot be neglected. The inclusion of quantum effects is important for the determination of reliable phase diagrams of these systems. In this work, we present a new methodology to compute the free energy of many-body quantum systems [1]. This methodology results from the combination of the path integral formulation of statistical mechanics with efficient non-equilibrium methods to estimate free energy, namely, the adiabatic switching and reversible scaling methods. A quantum Einstein crystal is used as a model to show the accuracy and reliability of the methodology. The new method is applied to the calculation of the solid-liquid coexistence properties of neon. Our findings indicate that quantum contributions to properties such as the melting point, latent heat of fusion, entropy of fusion, and slope of the melting line can be up to 10% of the values calculated using the classical approximation. [1] R. M. Ramirez, C. P. Herrero, A. Antonelli, and E. R. Hernández, Journal of Chemical Physics 129, 064110 (2008)
Analytical theory of mesoscopic Bose-Einstein condensation in an ideal gas
NASA Astrophysics Data System (ADS)
Kocharovsky, Vitaly V.; Kocharovsky, Vladimir V.
2010-03-01
We find the universal structure and scaling of the Bose-Einstein condensation (BEC) statistics and thermodynamics (Gibbs free energy, average energy, heat capacity) for a mesoscopic canonical-ensemble ideal gas in a trap with an arbitrary number of atoms, any volume, and any temperature, including the whole critical region. We identify a universal constraint-cutoff mechanism that makes BEC fluctuations strongly non-Gaussian and is responsible for all unusual critical phenomena of the BEC phase transition in the ideal gas. The main result is an analytical solution to the problem of critical phenomena. It is derived by, first, calculating analytically the universal probability distribution of the noncondensate occupation, or a Landau function, and then using it for the analytical calculation of the universal functions for the particular physical quantities via the exact formulas which express the constraint-cutoff mechanism. We find asymptotics of that analytical solution as well as its simple analytical approximations which describe the universal structure of the critical region in terms of the parabolic cylinder or confluent hypergeometric functions. The obtained results for the order parameter, all higher-order moments of BEC fluctuations, and thermodynamic quantities perfectly match the known asymptotics outside the critical region for both low and high temperature limits. We suggest two- and three-level trap models of BEC and find their exact solutions in terms of the cutoff negative binomial distribution (which tends to the cutoff gamma distribution in the continuous limit) and the confluent hypergeometric distribution, respectively. Also, we present an exactly solvable cutoff Gaussian model of BEC in a degenerate interacting gas. All these exact solutions confirm the universality and constraint-cutoff origin of the strongly non-Gaussian BEC statistics. 
We introduce a regular refinement scheme for the condensate statistics approximations on the basis of the infrared universality of higher-order cumulants and the method of superposition and show how to model BEC statistics in the actual traps. In particular, we find that the three-level trap model with matching the first four or five cumulants is enough to yield remarkably accurate results for all interesting quantities in the whole critical region. We derive an exact multinomial expansion for the noncondensate occupation probability distribution and find its high-temperature asymptotics (Poisson distribution) and corrections to it. Finally, we demonstrate that the critical exponents and a few known terms of the Taylor expansion of the universal functions, which were calculated previously from fitting the finite-size simulations within the phenomenological renormalization-group theory, can be easily obtained from the presented full analytical solutions for the mesoscopic BEC as certain approximations in the close vicinity of the critical point.
WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.
Grech, Victor
2018-03-01
The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
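The same standard error and confidence interval calculations described for Excel can be done in a few lines of Python (a minimal sketch with a hypothetical sample; 2.365 is the t critical value for 7 degrees of freedom):

```python
import math
import statistics

# Small hypothetical sample.
data = [4.2, 5.1, 3.8, 4.9, 5.5, 4.4, 4.7, 5.0]

n = len(data)
mean = statistics.mean(data)
sd = statistics.stdev(data)    # sample standard deviation (n - 1 denominator)
se = sd / math.sqrt(n)         # standard error of the mean

# 95% confidence interval for the mean using the t distribution (n - 1 = 7 df).
t_crit = 2.365
ci = (mean - t_crit * se, mean + t_crit * se)
```

In Excel the equivalent building blocks are AVERAGE, STDEV.S, SQRT and T.INV.2T; the point of the sketch is only to make the arithmetic explicit.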
Non-equilibrium statistical mechanics theory for the large scales of geophysical flows
NASA Astrophysics Data System (ADS)
Eric, S.; Bouchet, F.
2010-12-01
The aim of any theory of turbulence is to understand the statistical properties of the velocity field. As a huge number of degrees of freedom is involved, statistical mechanics is a natural approach. The self-organization of two-dimensional and geophysical turbulent flows is addressed based on statistical mechanics methods. We discuss classical and recent works on this subject; from the statistical mechanics basis of the theory up to applications to Jupiter’s troposphere and ocean vortices and jets. The equilibrium microcanonical measure is built from the Liouville theorem. Important statistical mechanics concepts (large deviations, mean field approach) and thermodynamic concepts (ensemble inequivalence, negative heat capacity) are briefly explained and used to predict statistical equilibria for turbulent flows. This is applied to make quantitative models of two-dimensional turbulence, the Great Red Spot and other Jovian vortices, ocean jets like the Gulf-Stream, and ocean vortices. A detailed comparison between these statistical equilibria and real flow observations will be discussed. We also present recent results for non-equilibrium situations, for which forces and dissipation are in a statistical balance. As an example, the concept of phase transition allows us to describe drastic changes of the whole system when a few external parameters are changed. F. Bouchet and E. Simonnet, Random Changes of Flow Topology in Two-Dimensional and Geophysical Turbulence, Physical Review Letters 102 (2009), no. 9, 094504-+. F. Bouchet and J. Sommeria, Emergence of intense jets and Jupiter's Great Red Spot as maximum-entropy structures, Journal of Fluid Mechanics 464 (2002), 165-207. A. Venaille and F. Bouchet, Ocean rings and jets as statistical equilibrium states, submitted to JPO F. Bouchet and A. 
Venaille, Statistical mechanics of two-dimensional and geophysical flows, submitted to Physics Reports. Non-equilibrium phase transitions for the 2D Navier-Stokes equations with stochastic forces (time series and probability density functions (PDFs) of the modulus of the largest-scale Fourier component, showing bistability between dipole and unidirectional flows). This bistability is predicted by statistical mechanics.
A quantitative study of the clustering of polycyclic aromatic hydrocarbons at high temperatures.
Totton, Tim S; Misquitta, Alston J; Kraft, Markus
2012-03-28
The clustering of polycyclic aromatic hydrocarbon (PAH) molecules is investigated in the context of soot particle inception and growth using an isotropic potential developed from the benchmark PAHAP potential. This potential is used to estimate equilibrium constants of dimerisation for five representative PAH molecules based on a statistical mechanics model. Molecular dynamics simulations are also performed to study the clustering of homomolecular systems at a range of temperatures. The results from both sets of calculations demonstrate that at flame temperatures pyrene (C(16)H(10)) dimerisation cannot be a key step in soot particle formation and that much larger molecules (e.g. circumcoronene, C(54)H(18)) are required to form small clusters at flame temperatures. The importance of using accurate descriptions of the intermolecular interactions is demonstrated by comparing results to those calculated with a popular literature potential with an order of magnitude variation in the level of clustering observed. By using an accurate intermolecular potential we are able to show that physical binding of PAH molecules based on van der Waals interactions alone can only be a viable soot inception mechanism if concentrations of large PAH molecules are significantly higher than currently thought.
NASA Technical Reports Server (NTRS)
Helin, E. F.; Dunbar, R. S.
1984-01-01
The Planet-Crossing Asteroid Survey (PCAS) is making steady progress toward the accumulation of the data required to make improved estimates of the populations and cratering rates which can be compared with the existing record of impact events. The PCAS has been the chief source of new objects on which to base these calculations over the past decade, and it is an integral part of the continuing refinement of the estimates used in planetological applications. An adjunct effort to determine albedo statistics from photometry of UCAS plates is being pursued as well, to better define the magnitude-frequency distributions of asteroids. This will improve the quality of the population and collision probability calculations. The survey effort continues to discover new asteroids whose orbital characteristics may reveal the origin and evolution mechanisms responsible for the transport of the planet-crossing asteroids to the inner solar system.
Density-functional theory for fluid-solid and solid-solid phase transitions.
Bharadwaj, Atul S; Singh, Yashwant
2017-03-01
We develop a theory to describe solid-solid phase transitions. The density-functional formalism of classical statistical mechanics is used to find an exact expression for the difference in the grand thermodynamic potentials of the two coexisting phases. The expression involves both the symmetry-conserving and the symmetry-broken parts of the direct pair correlation function. The theory is used to calculate the phase diagram of systems of soft spheres interacting via inverse-power potentials u(r)=ε(σ/r)^{n}, where the parameter n measures the softness of the potential. We find that for 1/n<0.154 systems freeze into the face-centered-cubic (fcc) structure, while for 1/n≥0.154 the body-centered-cubic (bcc) structure is preferred. The bcc structure transforms into the fcc structure upon increasing the density. The calculated phase diagram is in good agreement with that found from molecular simulations.
Calculation of gyrosynchrotron radiation brightness temperature for outer bright loop of ICME
NASA Astrophysics Data System (ADS)
Sun, Weiying; Wu, Ji; Wang, C. B.; Wang, S.
The Solar Polar Orbit Radio Telescope (SPORT) is proposed to detect the high-density plasma clouds of the outer bright loops of ICMEs from a solar orbit with large inclination. Of particular interest is following the propagation of the plasma clouds with a remote sensor in the radio wavelength band. Gyrosynchrotron emission is a main radio radiation mechanism of the plasma clouds and can provide information on the interplanetary magnetic field. In this paper, we statistically analyze the electron density, electron temperature and magnetic field of the background solar wind during quiet-Sun periods and ICME propagation. We also estimate the fluctuation range of the electron density, electron temperature and magnetic field of the outer bright loop of ICMEs. Moreover, we calculate and analyze the emission brightness temperature and degree of polarization on the basis of the gyrosynchrotron emission, absorption and polarization characteristics, for optical depths less than or equal to 1.
Report to DHS on Summer Internship 2006
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckwith, R H
2006-07-26
This summer I worked at Lawrence Livermore National Laboratory in a bioforensics collection and extraction research group under David Camp. The group researches the efficiency of various methods for collecting bioforensic evidence from crime scenes. The methods under examination are a wipe, a swab, an HVAC filter and a vacuum; the vacuum in particular has gone uncharacterized. My time was spent mostly on modeling and calculation work, but at the end of the summer I completed my internship with a few experiments to supplement my calculations. I had two major projects this summer. The first involved fluid mechanics modeling of collection and extraction situations, examining different fluid dynamic models for the case of a micron-scale spore attached to a fiber. The second was a statistical analysis of the different sampling techniques.
Perthold, Jan Walther; Oostenbrink, Chris
2018-05-17
Enveloping distribution sampling (EDS) is an efficient approach to calculate multiple free-energy differences from a single molecular dynamics (MD) simulation. However, the construction of an appropriate reference-state Hamiltonian that samples all states efficiently is not straightforward. We propose a novel approach for the construction of the EDS reference-state Hamiltonian, related to a previously described procedure to smoothen energy landscapes. In contrast to previously suggested EDS approaches, our reference-state Hamiltonian preserves local energy minima of the combined end-states. Moreover, we propose an intuitive, robust and efficient parameter optimization scheme to tune EDS Hamiltonian parameters. We demonstrate the proposed method with established and novel test systems and conclude that our approach allows for the automated calculation of multiple free-energy differences from a single simulation. Accelerated EDS promises to be a robust and user-friendly method to compute free-energy differences based on solid statistical mechanics.
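As a rough numerical illustration of the EDS idea (this uses the standard log-sum-exp reference-state energy of conventional EDS, not the modified reference Hamiltonian proposed in the paper; the temperature, energies and offsets are illustrative):

```python
import numpy as np

BETA = 1.0 / (0.00831446 * 300.0)   # 1/(kB*T) in mol/kJ at 300 K

def eds_reference_energy(energies, offsets, s=1.0, beta=BETA):
    """Conventional EDS reference-state energy:
    E_R = -(1/(beta*s)) * ln sum_i exp(-beta*s*(E_i - dE_i)),
    where E_i are end-state energies and dE_i are energy offsets.
    """
    x = -beta * s * (np.asarray(energies) - np.asarray(offsets))
    xmax = x.max()                            # log-sum-exp for stability
    return -(xmax + np.log(np.exp(x - xmax).sum())) / (beta * s)

# The reference energy envelopes the end states from below: it lies at or
# below the lowest offset-shifted end-state energy.
e = eds_reference_energy([10.0, 25.0], [0.0, 0.0])
```

The smoothness parameter s controls how strongly the envelope softens the barriers between end states; the paper's contribution concerns constructing a reference state that keeps the local minima of the combined end states and tuning its parameters automatically.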
NASA Astrophysics Data System (ADS)
Dennison, Andrew G.
Classification of the seafloor substrate can be done with a variety of methods: visual (dives, drop cameras), mechanical (cores, grab samples), and acoustic (statistical analysis of echosounder returns). Acoustic methods offer a more powerful and efficient means of collecting useful information about the bottom type. Due to the nature of an acoustic survey, larger areas can be sampled, and combining the collected data with visual and mechanical survey methods provides greater confidence in the classification of a mapped region. During a multibeam sonar survey, both bathymetric and backscatter data are collected. It is well documented that the statistical character of a sonar backscatter mosaic depends on the bottom type. While classifying the bottom type on the basis of backscatter alone can accurately predict and map bottom type, i.e., distinguish a muddy area from a rocky area, it lacks the ability to resolve and capture fine textural details, an important factor in many habitat mapping studies. Statistical processing of high-resolution multibeam data can capture the pertinent details about the bottom type that are rich in textural information. Further multivariate statistical processing can then isolate characteristic features and provide the basis for an accurate classification scheme. The development of a new classification method is described here, based upon the analysis of textural features in conjunction with ground truth sampling. The processing and classification results for two geologically distinct nearshore areas of Lake Superior, off the Lester River, MN and the Amnicon River, WI, are presented here, using the Minnesota Supercomputer Institute's Mesabi computing cluster for initial processing. Processed data are then calibrated using ground truth samples to conduct an accuracy assessment of the surveyed areas.
From analysis of the high-resolution bathymetry data collected at both survey sites it was possible to successfully calculate a series of measures that describe textural information about the lake floor. Further processing suggests that the calculated features also capture a significant amount of statistical information about the lake floor terrain. Two sources of error, an anomalous heave and a refraction error, significantly deteriorated the quality of the processed data and the resulting validation results. Ground truth samples used to validate the classification methods at both survey sites nonetheless yielded accuracy values ranging from 5-30 percent at the Amnicon River and 60-70 percent at the Lester River. The final results suggest that this new processing methodology adequately captures textural information about the lake floor and provides an acceptable classification in the absence of significant data quality issues.
Tools for Basic Statistical Analysis
NASA Technical Reports Server (NTRS)
Luz, Paul L.
2005-01-01
Statistical Analysis Toolset is a collection of eight Microsoft Excel spreadsheet programs, each of which performs calculations pertaining to an aspect of statistical analysis. These programs present input and output data in user-friendly, menu-driven formats, with automatic execution. The following types of calculations are performed: Descriptive statistics are computed for a set of data x(i) (i = 1, 2, 3 . . . ) entered by the user. Normal Distribution Estimates will calculate the statistical value that corresponds to cumulative probability values, given a sample mean and standard deviation of the normal distribution. Normal Distribution from Two Data Points will extend and generate a cumulative normal distribution for the user, given two data points and their associated probability values. Two programs perform two-way analysis of variance (ANOVA) with no replication, or generalized ANOVA for two factors with four levels and three repetitions. Linear Regression-ANOVA will fit data to the linear equation y = f(x) and perform an ANOVA to check its significance.
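The Linear Regression-ANOVA calculation can be approximated in a few lines; for a single predictor, the t test on the slope reported by `scipy.stats.linregress` is equivalent to the regression ANOVA F test (a sketch on hypothetical data, not the toolset's implementation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical data with a linear trend plus noise.
x = np.arange(20, dtype=float)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, 20)

res = stats.linregress(x, y)
# res.slope and res.intercept give the fitted line y = f(x);
# res.pvalue is the two-sided p-value for the hypothesis slope == 0,
# which for one predictor matches the ANOVA F test of the regression.
```

A small p-value here plays the same role as a significant F statistic in the spreadsheet's ANOVA table.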
NASA Astrophysics Data System (ADS)
Varner, Gary Sim
1999-11-01
Utilizing the world's largest sample of resonant ψ′ decays, as measured by the Beijing Experimental Spectrometer (BES) during 1993-1995, a comprehensive study of the hadronic decay modes of the χc (3PJ charmonium) states has been undertaken. Compared with the data set of the Mark I detector, whose published measurements of many of these hadronic decays had been definitive for almost 20 years, roughly an order of magnitude larger statistics have been obtained. Taking advantage of these larger statistics, many new hadronic decay modes have been discovered, while measurements of others have been refined. An array of first observations, improvements, confirmations and limits is reported with respect to current world values. These higher-precision and newly discovered decay modes are an excellent testing ground for recent theoretical interest in the contribution of higher Fock states and the color-octet mechanism in heavy quarkonium annihilation and subsequent light hadronization. Because these calculations are largely tractable only for two-body decays, these are the focus of this dissertation. A comparison of current theoretical calculations and experimental results is presented, indicating the success of these phenomenological advances. Measurements for which there is as yet no suitable theoretical prediction are indicated.
Surface free energy analysis of oil palm empty fruit bunches fiber reinforced biocomposites
NASA Astrophysics Data System (ADS)
Suryadi, G. S.; Nikmatin, S.; Sudaryanto; Irmansyah; Sukaryo, S. G.
2017-05-01
The effect of the size of natural fiber from oil palm empty fruit bunches (OPEFB), used as filler, on the contact angle and surface free energy of fiber-reinforced biocomposites has been studied. The OPEFB fibers were prepared by mechanical milling and sieving to obtain various fiber sizes (long fiber, medium fiber, short fiber, and microparticle). The biocomposites were produced by extrusion in a single-screw extruder with EFB fiber as filler, recycled Acrylonitrile Butadiene Styrene (ABS) polymer as matrix, and a primary antioxidant, an acid scavenger, and a coupling agent as additives. The biocomposites, obtained in granular form, were made into test pieces by the injection molding method. Contact angles of water, methanol, and hexane on the surface of the biocomposites at room temperature were measured using a Phoenix 300 Contact Angle Analyzer. The surface free energy (SFE) and its components were calculated using three previously known methods (Girifalco-Good-Fowkes-Young (GGFY), Owens-Wendt, and van Oss-Chaudhury-Good (vOCG)). The results showed that the total SFE of recycled ABS as control was about 24.38 mJ/m2, and that the SFE of the biocomposites was lower than the control, decreasing with decreasing EFB fiber size. The statistical analysis showed that there are no statistically significant differences between the SFE values calculated with the three different methods.
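The Owens-Wendt method mentioned above splits the solid's surface free energy into dispersive and polar components by solving a linear system over the probe liquids; a minimal sketch with commonly quoted liquid parameters and hypothetical contact angles (not the study's probe liquids or measurements):

```python
import math
import numpy as np

# Probe liquids: (total, dispersive, polar) surface tensions in mJ/m2
# (commonly quoted literature values; illustrative only).
liquids = {
    "water":         (72.8, 21.8, 51.0),
    "diiodomethane": (50.8, 50.8, 0.0),
}
# Hypothetical measured contact angles in degrees.
theta = {"water": 85.0, "diiodomethane": 45.0}

# Owens-Wendt: gL*(1 + cos t)/2 = sqrt(gSd*gLd) + sqrt(gSp*gLp).
# This is linear in x = sqrt(gSd) and y = sqrt(gSp): per liquid,
#   sqrt(gLd)*x + sqrt(gLp)*y = gL*(1 + cos t)/2
A, b = [], []
for name, (gL, gLd, gLp) in liquids.items():
    t = math.radians(theta[name])
    A.append([math.sqrt(gLd), math.sqrt(gLp)])
    b.append(gL * (1.0 + math.cos(t)) / 2.0)

(x, y), *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
gamma_dispersive = x ** 2
gamma_polar = y ** 2
gamma_total = gamma_dispersive + gamma_polar
```

With more than two probe liquids the same least-squares fit is overdetermined, which is the usual practice; GGFY and vOCG partition the SFE differently, which is why the study compared all three.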
Current-voltage characteristics and transition voltage spectroscopy of individual redox proteins.
Artés, Juan M; López-Martínez, Montserrat; Giraudet, Arnaud; Díez-Pérez, Ismael; Sanz, Fausto; Gorostiza, Pau
2012-12-19
Understanding how molecular conductance depends on voltage is essential for characterizing molecular electronics devices. We reproducibly measured current-voltage characteristics of individual redox-active proteins by scanning tunneling microscopy under potentiostatic control in both tunneling and wired configurations. From these results, transition voltage spectroscopy (TVS) data for individual redox molecules can be calculated and analyzed statistically, adding a new dimension to conductance measurements. The transition voltage (TV) is discussed in terms of the two-step electron transfer (ET) mechanism. Azurin displays the lowest TV measured to date (0.4 V), consistent with the previously reported distance decay factor. This low TV may be advantageous for fabricating and operating molecular electronic devices for different applications. Our measurements show that TVS is a helpful tool for single-molecule ET measurements and suggest a mechanism for gating of ET between partner redox proteins.
Zurek, E; Ziegler, T
2001-07-02
Density Functional Theory (DFT) has been used to calculate the energies of over 30 different structures with the general formula (AlOMe)(n).(AlMe(3))(m) where n ranges from 6 to 13 and m ranges between 1 and 4, depending upon the structure of the parent (AlOMe)(n) cage. The way in which TMA (trimethylaluminum) bonds to MAO (methylaluminoxane) has been determined as well as the location of the acidic sites present in MAO caged structures. Topological arguments have been used to show that TMA does not bind to MAO cages where n = 12 or n > or = 14. The ADF energies in conjunction with frequency calculations based on molecular mechanics have been used to estimate the finite temperature enthalpies, entropies, and free energies of the TMA containing MAO structures. Using the Gibbs free energies found for pure MAO structures calculated in a previous work, in conjunction with the free energies of TMA containing MAO structures obtained in the present study, it was possible to determine the percent abundance of each TMA containing MAO within the temperature range of 198.15 K-598.15 K. We have found that very little TMA is actually bound to MAO. The Me/Al ratio on the MAO cages is determined as being approximately 1.00, 1.01, 1.02, and 1.03 at 198, 298, 398, and 598 K, respectively. Moreover, the percentage of Al found as TMA has been calculated as being 0.21%, 0.62%, 1.05%, and 1.76% and the average unit formulas of (AlOMe)(18.08).(TMA)(0.04), (AlOMe)(17.04).(TMA)(0.11), (AlOMe)(15.72).(TMA)(0.17), and (AlOMe)(14.62).(TMA)(0.26) have been determined at the aforementioned temperatures.
Many-Body Localization and Thermalization in Quantum Statistical Mechanics
NASA Astrophysics Data System (ADS)
Nandkishore, Rahul; Huse, David A.
2015-03-01
We review some recent developments in the statistical mechanics of isolated quantum systems. We provide a brief introduction to quantum thermalization, paying particular attention to the eigenstate thermalization hypothesis (ETH) and the resulting single-eigenstate statistical mechanics. We then focus on a class of systems that fail to quantum thermalize and whose eigenstates violate the ETH: These are the many-body Anderson-localized systems; their long-time properties are not captured by the conventional ensembles of quantum statistical mechanics. These systems can forever locally remember information about their local initial conditions and are thus of interest for possibilities of storing quantum information. We discuss key features of many-body localization (MBL) and review a phenomenology of the MBL phase. Single-eigenstate statistical mechanics within the MBL phase reveal dynamically stable ordered phases, and phase transitions among them, that are invisible to equilibrium statistical mechanics and can occur at high energy and low spatial dimensionality, where equilibrium ordering is forbidden.
A statistical model of operational impacts on the framework of the bridge crane
NASA Astrophysics Data System (ADS)
Antsev, V. Yu; Tolokonnikov, A. S.; Gorynin, A. D.; Reutov, A. A.
2017-02-01
The technical regulations of the Customs Union demand a risk analysis of bridge crane operation at the design stage. A statistical model has been developed for random calculations of risk, allowing us to model possible operational influences on the bridge crane metal structure in their various combinations. The statistical model is implemented in a software product for automated calculation of the risk of failure of bridge cranes.
2012 Workplace and Gender Relations Survey of Reserve Component Members: Survey Note and Briefing
2013-05-08
to be a statistically significant difference at the .05 level of significance. Overview: The ability to calculate annual prevalence rates is a... statistically significant differences for women or men in the overall rate between 2008 and 2012. Of the 2.8% of women who experienced UMAN
Statistical Signal Process in R Language in the Pharmacovigilance Programme of India.
Kumar, Aman; Ahuja, Jitin; Shrivastava, Tarani Prakash; Kumar, Vipin; Kalaiselvan, Vivekanandan
2018-05-01
The Ministry of Health & Family Welfare, Government of India, initiated the Pharmacovigilance Programme of India (PvPI) in July 2010. The purpose of the PvPI is to collect data on adverse reactions due to medications, analyze the data, and use them to recommend informed regulatory interventions, besides communicating risk to health care professionals and the public. The goal of the present study was to apply statistical tools to find the relationship between drugs and ADRs for signal detection by R programming. Four statistical parameters were proposed for quantitative signal detection: IC025, PRR and PRRlb, chi-square, and N11; we calculated these four values using R programming. We analyzed 78,983 drug-ADR combinations, with a total combination count of 420,060. In the calculation of the statistical parameters, we use three variables: (1) N11 (number of counts), (2) N1. (drug margin), and (3) N.1 (ADR margin). The structure and calculation of these four statistical parameters in the R language are easily understandable. On the basis of the IC value (IC value > 0), out of the 78,983 drug-ADR combinations we found 8,667 combinations to be significantly associated. The calculation of statistical parameters in the R language is time saving and allows new signals in the Indian ICSR (Individual Case Safety Reports) database to be identified easily.
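The parameters named above derive from the 2x2 contingency counts N11, N1., and N.1. The study works in R; the following is a minimal Python sketch with hypothetical counts, and the IC shown is only a point estimate (the IC025 used in practice adds Bayesian shrinkage and takes the lower 2.5% credibility bound):

```python
import math

def disproportionality(n11, n1_dot, n_dot1, n):
    """Simplified signal-detection statistics for one drug-ADR pair.
    n11: reports with both drug and ADR; n1_dot: drug margin;
    n_dot1: ADR margin; n: total reports in the database."""
    # Proportional reporting ratio: ADR rate with the drug versus
    # its rate with all other drugs.
    prr = (n11 / n1_dot) / ((n_dot1 - n11) / (n - n1_dot))
    # Point-estimate information component (no Bayesian shrinkage).
    ic = math.log2(n11 * n / (n1_dot * n_dot1))
    # Pearson chi-square over the full 2x2 contingency table.
    chi2 = 0.0
    for obs, row, col in ((n11, n1_dot, n_dot1),
                          (n1_dot - n11, n1_dot, n - n_dot1),
                          (n_dot1 - n11, n - n1_dot, n_dot1),
                          (n - n1_dot - n_dot1 + n11, n - n1_dot, n - n_dot1)):
        exp = row * col / n
        chi2 += (obs - exp) ** 2 / exp
    return prr, ic, chi2

# Hypothetical counts, not taken from the PvPI database.
prr, ic, chi2 = disproportionality(n11=25, n1_dot=1000, n_dot1=500, n=100000)
```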
Compendium of Abstracts on Statistical Applications in Geotechnical Engineering.
1983-09-01
research in the application of probabilistic and statistical methods to soil mechanics, rock mechanics, and engineering geology problems have grown markedly...probability, statistics, soil mechanics, rock mechanics, and engineering geology. 2. The purpose of this report is to make available to the U. S...Deformation Dynamic Response Analysis Seepage, Soil Permeability and Piping Earthquake Engineering, Seismology, Settlement and Heave Seismic Risk Analysis
Effective field theory of statistical anisotropies for primordial bispectrum and gravitational waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rostami, Tahereh; Karami, Asieh; Firouzjahi, Hassan, E-mail: t.rostami@ipm.ir, E-mail: karami@ipm.ir, E-mail: firouz@ipm.ir
2017-06-01
We present an effective field theory (EFT) study of primordial statistical anisotropies in models of anisotropic inflation. The general action in unitary gauge is presented to calculate the leading interactions between the gauge field fluctuations, the curvature perturbations, and the tensor perturbations. The anisotropies in the scalar power spectrum and bispectrum are calculated, and the dependence of these anisotropies on the EFT couplings is presented. In addition, we calculate the statistical anisotropy in the tensor power spectrum and the scalar-tensor cross correlation. Our EFT approach incorporates anisotropies generated in models with a non-trivial speed for the gauge field fluctuations and sound speed for scalar perturbations, such as in DBI inflation.
NASA Astrophysics Data System (ADS)
Nguyen, Huu Chuong; Szyja, Bartłomiej M.; Doltsinis, Nikos L.
2014-09-01
Density functional theory (DFT) based molecular dynamics simulations have been performed of a 1,4-benzenedithiol molecule attached to two gold electrodes. To model the mechanical manipulation in typical break junction and atomic force microscopy experiments, the distance between two electrodes was incrementally increased up to the rupture point. For each pulling distance, the electric conductance was calculated using the DFT nonequilibrium Green's-function approach for a statistically relevant sample of configurations extracted from the simulation. With increasing mechanical strain, the formation of monoatomic gold wires is observed. The conductance decreases by three orders of magnitude as the initial twofold coordination of the thiol sulfur to the gold is reduced to a single S-Au bond at each electrode and the order in the electrodes is destroyed. Independent of the pulling distance, the conductance was found to fluctuate by at least two orders of magnitude depending on the instantaneous junction geometry.
Aldeghi, Matteo; Bodkin, Michael J; Knapp, Stefan; Biggin, Philip C
2017-09-25
Binding free energy calculations that make use of alchemical pathways are becoming increasingly feasible thanks to advances in hardware and algorithms. Although relative binding free energy (RBFE) calculations are starting to find widespread use, absolute binding free energy (ABFE) calculations are still being explored mainly in academic settings due to their high computational requirements and still uncertain predictive value. However, in some drug design scenarios RBFE calculations are not applicable, and ABFE calculations could provide an alternative. Computationally cheaper end-point calculations in implicit solvent, such as molecular mechanics Poisson-Boltzmann surface area (MMPBSA) calculations, could also be used if one is primarily interested in a relative ranking of affinities. Here, we compare MMPBSA calculations to previously performed absolute alchemical free energy calculations in their ability to correlate with experimental binding free energies for three sets of bromodomain-inhibitor pairs. Different MMPBSA approaches have been considered, including a standard single-trajectory protocol, a protocol that includes a binding entropy estimate, and protocols that take into account the ligand hydration shell. Despite the improvements observed with the latter two MMPBSA approaches, ABFE calculations were found to be overall superior in obtaining correlation with experimental affinities for the test cases considered. A difference in weighted average Pearson (r_p) and Spearman (r_s) correlations of 0.25 and 0.31 was observed when using a standard single-trajectory MMPBSA setup (r_p = 0.64 and r_s = 0.66 for ABFE; r_p = 0.39 and r_s = 0.35 for MMPBSA).
The best performing MMPBSA protocols returned weighted average Pearson and Spearman correlations that were about 0.1 inferior to those of the ABFE calculations: r_p = 0.55 and r_s = 0.56 when including an entropy estimate, and r_p = 0.53 and r_s = 0.55 when including explicit water molecules. Overall, the study suggests that ABFE calculations are indeed the more accurate approach, yet there is also value in MMPBSA calculations given their lower compute requirements, provided agreement with experimental affinities in absolute terms is not of interest. Moreover, for the specific protein-ligand systems considered in this study, we find that including an explicit ligand hydration shell or a binding entropy estimate in the MMPBSA calculations resulted in significant performance improvements at a negligible computational cost.
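The weighted average correlations quoted above combine per-set correlations, plausibly weighted by set size; a sketch under that assumption, with hypothetical (experimental, calculated) binding free energies rather than the study's data:

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def weighted_average_correlation(datasets):
    """Per-set correlations averaged with weights equal to set size."""
    total = sum(len(xs) for xs, _ in datasets)
    return sum(len(xs) * pearson(xs, ys) for xs, ys in datasets) / total

# Hypothetical (experimental, calculated) binding free energies
# (kcal/mol) for two bromodomain sets; not data from the study.
sets = [([-9.0, -8.0, -7.0, -6.0], [-8.5, -8.2, -6.9, -6.1]),
        ([-10.0, -8.5, -7.5], [-9.0, -9.2, -7.0])]
r_avg = weighted_average_correlation(sets)
```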
Monolithic ceramic analysis using the SCARE program
NASA Technical Reports Server (NTRS)
Manderscheid, Jane M.
1988-01-01
The Structural Ceramics Analysis and Reliability Evaluation (SCARE) computer program calculates the fast fracture reliability of monolithic ceramic components. The code is a post-processor to the MSC/NASTRAN general purpose finite element program. The SCARE program automatically accepts the MSC/NASTRAN output necessary to compute reliability. This includes element stresses, temperatures, volumes, and areas. The SCARE program computes two-parameter Weibull strength distributions from input fracture data for both volume and surface flaws. The distributions can then be used to calculate the reliability of geometrically complex components subjected to multiaxial stress states. Several fracture criteria and flaw types are available for selection by the user, including out-of-plane crack extension theories. The theoretical basis for the reliability calculations was proposed by Batdorf. These models combine linear elastic fracture mechanics (LEFM) with Weibull statistics to provide a mechanistic failure criterion. Other fracture theories included in SCARE are the normal stress averaging technique and the principle of independent action. The objective of this presentation is to summarize these theories, including their limitations and advantages, and to provide a general description of the SCARE program, along with example problems.
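The volume-flaw reliability calculation can be sketched as a two-parameter Weibull risk-of-rupture sum over finite elements. This is only the simplest such criterion (SCARE's Batdorf and other multiaxial models also integrate over flaw orientation), and the element data below are hypothetical:

```python
import math

def survival_probability(element_stresses, element_volumes, sigma0, m):
    """Fast-fracture survival probability from per-element stresses and
    volumes with a two-parameter Weibull volume-flaw model:
    R = prod_i exp(-V_i * (sigma_i / sigma0)^m)."""
    risk = sum(v * (s / sigma0) ** m
               for s, v in zip(element_stresses, element_volumes)
               if s > 0)  # compressive elements contribute no risk here
    return math.exp(-risk)

# Hypothetical element stresses (MPa) and volumes from a finite
# element post-processor; sigma0 and m fitted from fracture data.
r = survival_probability([120.0, 180.0, 90.0], [0.2, 0.1, 0.3],
                         sigma0=350.0, m=10.0)
```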
Statistical Mechanical Model for Adsorption Coupled with SAFT-VR Mie Equation of State.
Franco, Luís F M; Economou, Ioannis G; Castier, Marcelo
2017-10-24
We extend the SAFT-VR Mie equation of state to calculate adsorption isotherms by considering explicitly the residual energy due to the confinement effect. Assuming a square-well potential for the fluid-solid interactions, the structure imposed by the fluid-solid interface is calculated using two different approaches: an empirical expression proposed by Travalloni et al. ( Chem. Eng. Sci. 65 , 3088 - 3099 , 2010 ), and a new theoretical expression derived by applying the mean value theorem. Adopting the SAFT-VR Mie ( Lafitte et al. J. Chem. Phys. , 139 , 154504 , 2013 ) equation of state to describe the fluid-fluid interactions, and solving the phase equilibrium criteria, we calculate adsorption isotherms for light hydrocarbons adsorbed in a carbon molecular sieve and for carbon dioxide, nitrogen, and water adsorbed in a zeolite. Good results are obtained from the model using either approach. Nonetheless, the theoretical expression seems to correlate the experimental data better than the empirical one, possibly implying that a more reliable description of the structure ensures a better description of the thermodynamic behavior.
NASA Astrophysics Data System (ADS)
James, Ryan G.; Mahoney, John R.; Crutchfield, James P.
2017-06-01
One of the most basic characterizations of the relationship between two random variables, X and Y , is the value of their mutual information. Unfortunately, calculating it analytically and estimating it empirically are often stymied by the extremely large dimension of the variables. One might hope to replace such a high-dimensional variable by a smaller one that preserves its relationship with the other. It is well known that either X (or Y ) can be replaced by its minimal sufficient statistic about Y (or X ) while preserving the mutual information. While intuitively reasonable, it is not obvious or straightforward that both variables can be replaced simultaneously. We demonstrate that this is in fact possible: the information X 's minimal sufficient statistic preserves about Y is exactly the information that Y 's minimal sufficient statistic preserves about X . We call this procedure information trimming. As an important corollary, we consider the case where one variable is a stochastic process' past and the other its future. In this case, the mutual information is the channel transmission rate between the channel's effective states. That is, the past-future mutual information (the excess entropy) is the amount of information about the future that can be predicted using the past. Translating our result about minimal sufficient statistics, this is equivalent to the mutual information between the forward- and reverse-time causal states of computational mechanics. We close by discussing multivariate extensions to this use of minimal sufficient statistics.
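The preservation of mutual information under a minimal sufficient statistic can be checked numerically on a toy joint distribution; in this sketch Y depends on X only through its parity, so T = X mod 2 is sufficient and I(X;Y) = I(T;Y):

```python
import math
from collections import defaultdict

def mutual_information(pairs):
    """I(X;Y) in bits from equally weighted (x, y) samples."""
    n = len(pairs)
    pxy, px, py = defaultdict(float), defaultdict(float), defaultdict(float)
    for x, y in pairs:
        pxy[(x, y)] += 1.0 / n
        px[x] += 1.0 / n
        py[y] += 1.0 / n
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in pxy.items())

# Y depends on X only through its parity, so T = X mod 2 is a
# sufficient statistic of X about Y; mutual information is preserved.
samples = [(x, x % 2) for x in range(8)]
i_xy = mutual_information(samples)
i_ty = mutual_information([(x % 2, y) for x, y in samples])
```

Here the eight-valued X is replaced by a two-valued statistic without losing any information about Y, which is the "trimming" idea in miniature.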
The Physics and Operation of Ultra-Submicron Length Semiconductor Devices.
1994-05-01
300 meV heterostructure diode at T=300K with Fermi statistics and flat band conditions. In all of the calculations with a heterostructure barrier, once... [Figure 8: self-consistent T=300K calculation with Fermi statistics showing the density and donor profiles versus distance]
NASA Technical Reports Server (NTRS)
Staubert, R.
1985-01-01
Methods for calculating the statistical significance of excess events and the interpretation of the formally derived values are discussed. It is argued that a simple formula for a conservative estimate should generally be used in order to provide a common understanding of quoted values.
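The abstract does not reproduce its formula. A commonly used conservative estimate for on/off counting experiments divides the excess by the combined Poisson error of both counts; a sketch under that assumption:

```python
import math

def excess_significance(n_on, n_off, alpha):
    """Conservative significance of an excess of events: the background
    estimate is alpha * n_off, and both counts carry Poisson variance."""
    excess = n_on - alpha * n_off
    return excess / math.sqrt(n_on + alpha ** 2 * n_off)

# Hypothetical counts: 130 on-source events, 500 off-source events,
# with the off-source exposure 5 times the on-source one (alpha = 0.2).
s = excess_significance(n_on=130, n_off=500, alpha=0.2)
```

Because the variance term includes both the on- and off-source counts, this estimate is deliberately conservative, in the spirit the abstract recommends.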
Teaching Classical Statistical Mechanics: A Simulation Approach.
ERIC Educational Resources Information Center
Sauer, G.
1981-01-01
Describes a one-dimensional model for an ideal gas to study development of disordered motion in Newtonian mechanics. A Monte Carlo procedure for simulation of the statistical ensemble of an ideal gas with fixed total energy is developed. Compares both approaches for a pseudoexperimental foundation of statistical mechanics. (Author/JN)
Murchie, Brent; Tandon, Kanwarpreet; Hakim, Seifeldin; Shah, Kinchit; O'Rourke, Colin; Castro, Fernando J
2017-04-01
Colorectal cancer (CRC) screening guidelines likely over-generalize CRC risk: 35% of Americans are not up to date with screening, and the incidence of CRC in younger patients is growing. We developed a practical prediction model for high-risk colon adenomas in an average-risk population, using an expanded definition of high-risk polyps (≥3 nonadvanced adenomas) to identify patients at higher than average risk. We also compared results with previously created calculators. Patients aged 40 to 59 years undergoing first-time average-risk screening or diagnostic colonoscopies were evaluated. Risk calculators for advanced adenomas and high-risk adenomas were created based on age, body mass index, sex, race, and smoking history. Previously established calculators with similar risk factors were selected for comparison of the concordance statistic (c-statistic) and external validation. A total of 5063 patients were included. Advanced adenomas and high-risk adenomas were seen in 5.7% and 7.4% of the patient population, respectively. The c-statistic for our calculator was 0.639 for the prediction of advanced adenomas and 0.650 for high-risk adenomas. When applied to our population, all previous models had lower c-statistic results, although one performed similarly. Our model compares favorably to previously established prediction models. Age and body mass index were used as continuous variables, likely improving the c-statistic. It also reports absolute predictive probabilities of advanced and high-risk polyps, allowing for more individualized risk assessment of CRC.
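The c-statistic reported above is the probability that a randomly chosen patient with the outcome receives a higher predicted risk than a randomly chosen patient without it, which can be computed directly over all such pairs (the risks below are hypothetical):

```python
def c_statistic(scores_with_outcome, scores_without_outcome):
    """Concordance statistic (c-statistic / AUC): probability that a
    randomly chosen case with the outcome is scored higher than a
    randomly chosen case without it (ties count one half)."""
    wins, pairs = 0.0, 0
    for p in scores_with_outcome:
        for q in scores_without_outcome:
            wins += 1.0 if p > q else 0.5 if p == q else 0.0
            pairs += 1
    return wins / pairs

# Hypothetical predicted risks for patients with and without
# high-risk adenomas.
c = c_statistic([0.30, 0.22, 0.18], [0.25, 0.10, 0.08, 0.05])
```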
Time Series Analysis Based on Running Mann Whitney Z Statistics
USDA-ARS?s Scientific Manuscript database
A sensitive and objective time series analysis method based on the calculation of Mann Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-...
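The U-to-Z conversion over moving windows can be sketched as follows. The manuscript normalizes via Monte Carlo; this sketch instead uses the standard large-sample normal approximation for U (no tie correction), comparing each window with the one that follows it:

```python
import math

def mann_whitney_z(sample_a, sample_b):
    """Mann-Whitney U for sample_a versus sample_b, normalized to an
    approximate Z score with the large-sample mean and variance of U
    (no tie correction in this sketch)."""
    n_a, n_b = len(sample_a), len(sample_b)
    u = sum(1.0 if a > b else 0.5 if a == b else 0.0
            for a in sample_a for b in sample_b)
    mean_u = n_a * n_b / 2.0
    sd_u = math.sqrt(n_a * n_b * (n_a + n_b + 1) / 12.0)
    return (u - mean_u) / sd_u

def running_z(series, window):
    """Z statistic comparing each window with the window that follows it."""
    return [mann_whitney_z(series[i:i + window],
                           series[i + window:i + 2 * window])
            for i in range(len(series) - 2 * window + 1)]

# A step change in the series shows up as a large-magnitude Z.
z = running_z([1, 2, 1, 2, 9, 8, 9, 8], window=4)
```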
Differentiation of benign and malignant breast lesions by mechanical imaging
Kearney, Thomas; Pollak, Stanley B.; Rohatgi, Chand; Sarvazyan, Noune; Airapetian, Suren; Browning, Stephanie; Sarvazyan, Armen
2009-01-01
Mechanical imaging yields a tissue elasticity map and provides quantitative characterization of a detected pathology. The changes in the surface stress patterns as a function of applied load provide information about the elastic composition and geometry of the underlying tissue structures. The objective of this study is the clinical evaluation of the breast mechanical imager for breast lesion characterization and differentiation between benign and malignant lesions. The breast mechanical imager includes a probe with a pressure sensor array and an electronic unit providing data acquisition from the pressure sensors and communication with a touch-screen laptop computer. We have developed an examination procedure and algorithms to assess breast lesion features such as hardness-related parameters, mobility, and shape. A statistical Bayesian classifier was constructed to distinguish between benign and malignant lesions by utilizing all the listed features as input. Clinical results for 179 cases, collected at four different clinical sites, demonstrated that the breast mechanical imager provides reliable image formation of breast tissue abnormalities and calculation of lesion features. Malignant breast lesions (histologically confirmed) demonstrated increased hardness and strain hardening as well as decreased mobility and longer boundary length in comparison with benign lesions. Statistical analysis of differentiation capability for 147 benign and 32 malignant lesions revealed an average sensitivity of 91.4% and specificity of 86.8%, with a standard deviation of ±6.1%. The area under the receiver operating characteristic curve characterizing benign and malignant lesion discrimination is 86.1%, with a confidence interval ranging from 80.3 to 90.9%, at a significance level of P = 0.0001 (against the 50% chance area). 
The multisite clinical study demonstrated the capability of mechanical imaging for characterization and differentiation of benign and malignant breast lesions. We hypothesize that the breast mechanical imager has the potential to be used as a cost effective device for cancer diagnostics that could reduce the benign biopsy rate, serve as an adjunct to mammography and to be utilized as a screening device for breast cancer detection. PMID:19306059
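The sensitivity and specificity figures above are simple ratios over the malignant and benign groups, respectively; a sketch with hypothetical confusion counts (illustrative only, not the study's actual breakdown):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN) over cases with disease;
    specificity = TN / (TN + FP) over cases without it."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts for 32 malignant and 147 benign lesions.
sens, spec = sensitivity_specificity(tp=29, fn=3, tn=128, fp=19)
```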
The Energetic Cost of Walking: A Comparison of Predictive Methods
Kramer, Patricia Ann; Sylvester, Adam D.
2011-01-01
Background The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is “best”, but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. Methodology/Principal Findings We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches were assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. Conclusion Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. 
Although we used modern humans as our model organism, these results can be extended to other species. PMID:21731693
Thermochemical properties of nanometer CL-20 and PETN fabricated using a mechanical milling method
NASA Astrophysics Data System (ADS)
Song, Xiaolan; Wang, Yi; An, Chongwei
2018-06-01
2,4,6,8,10,12-Hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane (CL-20) and pentaerythritol tetranitrate (PETN), with mean sizes of 73.8 nm and 267.7 nm, respectively, were fabricated in a high-energy ball mill. Scanning electron microscopy (SEM) was used to image the micron-scale morphology of the nano-explosives, and the particle size distribution was calculated from the statistics of individual particle sizes measured in the SEM images. X-ray diffraction (XRD), infrared spectroscopy (IR), and X-ray photoelectron spectroscopy (XPS) analyses were also used to check whether the crystal phase, molecular structure, and surface elements changed after the long-term milling process; as expected, no such changes were found. Thermal analysis was performed at different heating rates. Parameters such as the activation energy (ES), activation enthalpy (ΔH≠), activation free energy (ΔG≠), activation entropy (ΔS≠), and critical temperature of thermal explosion (Tb) were calculated to characterize the decomposition of the explosives. Moreover, the thermal decomposition mechanisms of nano CL-20 and nano PETN were investigated using online thermal-infrared spectrometry (DSC-IR) analysis, by which their gas products were also detected. The results indicated that nano CL-20 decomposed to CO2 and N2O and that nano PETN decayed to NO2, which implies a remarkable difference between the decomposition mechanisms of the two explosives. In addition, the mechanical sensitivities of CL-20 and PETN were tested; the results revealed that the nano-explosives were less sensitive than the raw ones, and a possible mechanism for this is discussed. Thermal sensitivity was also investigated with a 5 s bursting point test, from which the 5 s bursting point (T5s) and the activation energy of deflagration were obtained.
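Extracting an activation energy from runs at several heating rates is commonly done with a Kissinger-type fit of ln(beta/Tp^2) against 1/Tp. The paper's exact kinetic method is not given here, so this sketch assumes that approach and uses hypothetical DSC peak temperatures:

```python
import math

def kissinger_activation_energy(heating_rates, peak_temps):
    """Least-squares slope of ln(beta / Tp^2) versus 1/Tp;
    the activation energy is Ea = -slope * R (J/mol)."""
    xs = [1.0 / t for t in peak_temps]
    ys = [math.log(b / t ** 2) for b, t in zip(heating_rates, peak_temps)]
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
             / sum((x - x_bar) ** 2 for x in xs))
    return -slope * 8.314

# Hypothetical DSC peak temperatures (K) at 5, 10, and 20 K/min.
ea = kissinger_activation_energy([5, 10, 20], [480.0, 490.0, 500.0])
```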
NASA Technical Reports Server (NTRS)
Butler, C. M.; Hogge, J. E.
1978-01-01
Air quality sampling was conducted. Data for air quality parameters, recorded on written forms, punched cards, or magnetic tape, are available for 1972 through 1975. Computer software was developed to (1) calculate several daily statistical measures of location, (2) plot time histories of the data or the calculated daily statistics, (3) calculate simple correlation coefficients, and (4) plot scatter diagrams. Computer software was developed for processing air quality data, including time series analysis and goodness-of-fit tests. Computer software was also developed to (1) calculate a larger number of daily statistical measures of location, as well as daily, monthly, and yearly measures of location, dispersion, skewness, and kurtosis, (2) decompose the extended time series model, and (3) perform goodness-of-fit tests. The computer program is described, documented, and illustrated by examples. Recommendations are made for continued development of research on processing air quality data.
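The daily measures of location, dispersion, skewness, and kurtosis described above can be sketched with moment-based estimators (the readings below are hypothetical):

```python
import statistics

def daily_summary(values):
    """Location, dispersion, skewness, and excess kurtosis of one day's
    readings, using population moment-based estimators."""
    n = len(values)
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    skew = sum((v - mean) ** 3 for v in values) / (n * sd ** 3)
    kurt = sum((v - mean) ** 4 for v in values) / (n * sd ** 4) - 3.0
    return {"mean": mean, "median": statistics.median(values),
            "sd": sd, "skewness": skew, "kurtosis": kurt}

# Hypothetical readings for one day; the outlier drives
# the strong positive skewness.
summary = daily_summary([3.0, 4.0, 4.0, 5.0, 5.0, 5.0, 6.0, 20.0])
```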
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halligan, Matthew
Radiated power calculation approaches are presented for practical scenarios with incomplete high-density interface characterization information and incomplete incident power information. The suggested approaches build upon a method that characterizes power losses through the definition of power loss constant matrices. Potential radiated power estimates include using total power loss information, partial radiated power loss information, worst case analysis, and statistical bounding analysis. A method is also proposed to calculate radiated power when incident power information is not fully known for non-periodic signals at the interface. Incident data signals are modeled with a two-state Markov chain from which bit state probabilities are derived. The total spectrum for windowed signals is postulated as the superposition of spectra from individual pulses in a data sequence. Statistical bounding methods are proposed as a basis for the radiated power calculation because of the statistical complexity of finding a radiated power probability density function.
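The two-state Markov chain model for the incident data signal yields long-run bit state probabilities from its stationary distribution; a minimal sketch with hypothetical transition probabilities:

```python
def stationary_bit_probabilities(p01, p10):
    """Stationary state probabilities of a two-state (bit 0 / bit 1)
    Markov chain, from the balance condition pi0 * p01 = pi1 * p10.
    p01 = P(next bit is 1 | current bit is 0); p10 likewise."""
    pi0 = p10 / (p01 + p10)
    return pi0, 1.0 - pi0

# Hypothetical transition probabilities for the incident data signal.
pi0, pi1 = stationary_bit_probabilities(p01=0.3, p10=0.1)
```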
Watanabe, Hiroshi C; Kubillus, Maximilian; Kubař, Tomáš; Stach, Robert; Mizaikoff, Boris; Ishikita, Hiroshi
2017-07-21
In the condensed phase, quantum chemical properties such as many-body effects and intermolecular charge fluctuations are critical determinants of the solvation structure and dynamics. Thus, a quantum mechanical (QM) molecular description is required for both solute and solvent to incorporate these properties. However, it is challenging to conduct molecular dynamics (MD) simulations of condensed systems at sufficient scale when adopting QM potentials. To overcome this problem, we recently developed the size-consistent multi-partitioning (SCMP) quantum mechanics/molecular mechanics (QM/MM) method and realized stable and accurate MD simulations, applying the QM potential to a benchmark system. In the present study, as the first application of the SCMP method, we have investigated the structures and dynamics of Na+, K+, and Ca2+ solutions based on nanosecond-scale sampling, a sampling 100 times longer than that of conventional QM-based samplings. Furthermore, we have evaluated two dynamic properties, the diffusion coefficient and difference spectra, with high statistical certainty; the calculation of these properties has not previously been possible within the conventional QM/MM framework. Based on our analysis, we have quantitatively evaluated the quantum chemical solvation effects, which show distinct differences between the cations.
Study on the keV neutron capture reaction in 56Fe and 57Fe
NASA Astrophysics Data System (ADS)
Wang, Taofeng; Lee, Manwoo; Kim, Guinyun; Ro, Tae-Ik; Kang, Yeong-Rok; Igashira, Masayuki; Katabuchi, Tatsuya
2014-03-01
The neutron capture cross-sections and the radiative capture gamma-ray spectra from the broad resonances of 56Fe and 57Fe have been measured in the neutron energy range from 10 to 90 keV and at 550 keV with an anti-Compton NaI(Tl) detector. Pulsed keV neutrons were produced from the 7Li(p,n)7Be reaction by bombarding a lithium target with the 1.5 ns bunched proton beam from the 3 MV Pelletron accelerator. The incident neutron spectrum on a capture sample was measured by means of a time-of-flight (TOF) method with a 6Li-glass detector. The number of weighted capture counts of the iron or gold sample was obtained by applying a pulse height weighting technique to the corresponding capture gamma-ray pulse height spectrum. The neutron capture gamma-ray spectra were obtained by unfolding the observed capture gamma-ray pulse height spectra. To achieve further understanding of the mechanism of the neutron radiative capture reaction and to study physics models, theoretical calculations of the gamma-ray spectra for 56Fe and 57Fe have been performed with the POD program by applying the Hauser-Feshbach statistical model. The dominant ingredients of the statistical calculation were the optical model potential (OMP), the level densities described by the Mengoni-Nakajima approach, and the gamma-ray transmission coefficients described by gamma-ray strength functions. The theoretical calculations, performed only for the 550 keV point, show good agreement with the present experimental results.
NASA Astrophysics Data System (ADS)
Batchelor, Murray T.; Wille, Luc T.
The Table of Contents for the book is as follows: * Preface * Modelling the Immune System - An Example of the Simulation of Complex Biological Systems * Brief Overview of Quantum Computation * Quantal Information in Statistical Physics * Modeling Economic Randomness: Statistical Mechanics of Market Phenomena * Essentially Singular Solutions of Feigenbaum-Type Functional Equations * Spatiotemporal Chaotic Dynamics in Coupled Map Lattices * Approach to Equilibrium of Chaotic Systems * From Level to Level in Brain and Behavior * Linear and Entropic Transformations of the Hydrophobic Free Energy Sequence Help Characterize a Novel Brain Polyprotein: CART's Protein * Dynamical Systems Response to Pulsed High-Frequency Fields * Bose-Einstein Condensates in the Light of Nonlinear Physics * Markov Superposition Expansion for the Entropy and Correlation Functions in Two and Three Dimensions * Calculation of Wave Center Deflection and Multifractal Analysis of Directed Waves Through the Study of su(1,1) Ferromagnets * Spectral Properties and Phases in Hierarchical Master Equations * Universality of the Distribution Functions of Random Matrix Theory * The Universal Chiral Partition Function for Exclusion Statistics * Continuous Space-Time Symmetries in a Lattice Field Theory * Some Limiting Cases of the One-Dimensional N-Body Problem (Quelques Cas Limites du Problème à N Corps Unidimensionnel) * Integrable Models of Correlated Electrons * On the Riemann Surface of the Three-State Chiral Potts Model * Two Exactly Soluble Lattice Models in Three Dimensions * Competition of Ferromagnetic and Antiferromagnetic Order in the Spin-1/2 XXZ Chain at Finite Temperature * Extended Vertex Operator Algebras and Monomial Bases * Parity and Charge Conjugation Symmetries and S Matrix of the XXZ Chain * An Exactly Solvable Constrained XXZ Chain * Integrable Mixed Vertex Models From the Braid-Monoid Algebra * From Yang-Baxter Equations to Dynamical Zeta Functions for Birational Transformations * Hexagonal Lattice Directed Site Animals * Direction in the 
Star-Triangle Relations * A Self-Avoiding Walk Through Exactly Solved Lattice Models in Statistical Mechanics
Effect of Embolization Material in the Calculation of Dose Deposition in Arteriovenous Malformations
DOE Office of Scientific and Technical Information (OSTI.GOV)
De la Cruz, O. O. Galvan; Moreno-Jimenez, S.; Larraga-Gutierrez, J. M.
2010-12-07
In this work, the impact of incorporating high-Z (embolization) materials into dose calculations for stereotactic radiosurgery treatment of arteriovenous malformations is studied. A statistical analysis is performed to establish the variables that may affect the dose calculation. For the comparison, pencil beam (PB) and Monte Carlo (MC) calculation algorithms were used. The comparison between both dose calculations shows that PB overestimates the deposited dose. The statistical analysis, for the number of patients in the study (20), shows that the variable that may affect the dose calculation is the volume of the high-Z material in the arteriovenous malformation. Further studies have to be done to establish the clinical impact on the radiosurgery result.
NASA Astrophysics Data System (ADS)
Zhao, Qingya
2011-12-01
Proton radiotherapy can deliver an accurate, highly conformal radiation dose to the tumor while sparing the surrounding healthy tissue and critical structures. However, treatment effectiveness is greatly degraded by patient free breathing during treatment delivery. Motion compensation for proton radiotherapy is especially challenging because the proton beam is more sensitive to density changes along the beam path. Tumor respiratory motion during treatment delivery affects the proton dose distribution and the selection of optimized treatment-planning parameters, which has not yet been fully addressed by existing approaches to proton dose calculation. The purpose of this dissertation is to develop an approach for more accurate dose delivery to a moving tumor in proton radiotherapy, i.e., 4D proton dose calculation and delivery, for the uniform scanning proton beam. A three-step approach has been carried out to achieve this goal. First, a solution for the proton output factor calculation, which converts the prescribed dose to machine-deliverable monitor units for proton dose delivery, has been proposed and implemented. The novel sector integration method is accurate and time saving, and it accounts for the various beam scanning patterns and treatment field parameters, such as aperture shape, aperture size, measuring position, beam range, and beam modulation. Second, tumor respiratory motion behavior has been statistically characterized and the results have been applied to advanced image-guided radiation treatment. Different statistical analysis and correlation discovery approaches have been investigated. The internal/external motion correlation patterns have been simulated, analyzed, and applied in a new hybrid gated treatment to improve target coverage.
Third, a dose calculation method has been developed for 4D proton treatment planning which integrates the interplay effects of tumor respiratory motion patterns and proton beam delivery mechanism. These three steps provide an innovative integrated framework for accurate 4D proton dose calculation and treatment planning for a moving tumor, which extends the functionalities of existing 3D planning systems. In short, this dissertation work addresses a few important problems for effective proton radiotherapy to a moving target. The outcomes of the dissertation are very useful for motion compensation with advanced image guided proton treatment.
THEORETICAL AND EXPERIMENTAL ASPECTS OF ISOTOPIC FRACTIONATION.
O'Neil, James R.
1986-01-01
Essential to the interpretation of natural variations of light stable isotope ratios is knowledge of the magnitude and temperature dependence of isotopic fractionation factors between the common minerals and fluids. These fractionation factors are obtained in three ways: (1) semi-empirical calculations using spectroscopic data and the methods of statistical mechanics; (2) laboratory calibration studies; (3) measurements of natural samples whose formation conditions are well known or highly constrained. In this chapter, methods (1) and (2) are evaluated, and a review is given of the present state of knowledge of the theory of isotopic fractionation and the factors that influence the isotopic properties of minerals.
Thermal corrections to the Casimir energy in a general weak gravitational field
NASA Astrophysics Data System (ADS)
Nazari, Borzoo
2016-12-01
We calculate finite-temperature corrections to the Casimir energy of two conducting parallel plates in a general weak gravitational field. After solving the Klein-Gordon equation inside the apparatus, mode frequencies are obtained in terms of the parameters of the weak background. Using Matsubara's approach to quantum statistical mechanics, gravity-induced thermal corrections to the energy density are obtained. Well-known weak static and stationary gravitational fields are analyzed, and it is found that in the low-temperature limit the energy of the system increases compared to the zero-temperature case.
Nickel and chromium isotopes in Allende inclusions
NASA Technical Reports Server (NTRS)
Birck, J. L.; Lugmair, G. W.
1988-01-01
High-precision nickel and chromium isotopic measurements were carried out on nine Allende inclusions. It is found that Ni-62 and Ni-64 excesses are present in at least three of the samples. The results suggest that the most likely mechanism for the anomalies is a neutron-rich statistical equilibrium process. An indication of elevated Ni-60 is found in almost every inclusion measured. This effect is thought to be related to the decay of now-extinct Fe-60. An upper limit of 1.6 x 10^-6 is calculated for the Fe-60/Fe-56 ratio at the time these Allende inclusions crystallized.
Open-access programs for injury categorization using ICD-9 or ICD-10.
Clark, David E; Black, Adam W; Skavdahl, David H; Hallagan, Lee D
2018-04-09
The article introduces Programs for Injury Categorization, using the International Classification of Diseases (ICD) and R statistical software (ICDPIC-R). Starting with ICD-8, methods have been described to map injury diagnosis codes to severity scores, especially the Abbreviated Injury Scale (AIS) and Injury Severity Score (ISS). ICDPIC was originally developed for this purpose using Stata, and ICDPIC-R is an open-access update that accepts both ICD-9 and ICD-10 codes. Data were obtained from the National Trauma Data Bank (NTDB), Admission Year 2015. ICDPIC-R derives CDC injury mechanism categories and an approximate ISS ("RISS") from either ICD-9 or ICD-10 codes. For ICD-9-coded cases, RISS is derived similarly to the Stata package (with some improvements reflecting user feedback). For ICD-10-coded cases, RISS may be calculated in several ways: The "GEM" methods convert ICD-10 to ICD-9 (using General Equivalence Mapping tables from CMS) and then calculate ISS with options similar to the Stata package; a "ROCmax" method calculates RISS directly from ICD-10 codes, based on diagnosis-specific mortality in the NTDB, maximizing the C-statistic for predicting NTDB mortality while attempting to minimize the difference between RISS and ISS submitted by NTDB registrars (ISSAIS). Findings were validated using data from the National Inpatient Sample (NIS, 2015). NTDB contained 917,865 cases, of which 86,878 had valid ICD-10 injury codes. For a random 100,000 ICD-9-coded cases in NTDB, RISS using the GEM methods was nearly identical to ISS calculated by the Stata version, which has been previously validated. For ICD-10-coded cases in NTDB, categorized ISS using any version of RISS was similar to ISSAIS; for both NTDB and NIS cases, increasing ISS was associated with increasing mortality.
Prediction of NTDB mortality was associated with C-statistics of 0.81 for ISSAIS, 0.75 for RISS using the GEM methods, and 0.85 for RISS using the ROCmax method; prediction of NIS mortality was associated with C-statistics of 0.75-0.76 for RISS using the GEM methods, and 0.78 for RISS using the ROCmax method. Instructions are provided for accessing ICDPIC-R at no cost. The ideal methods of injury categorization and injury severity scoring involve trained personnel with access to injured persons or their medical records. ICDPIC-R may be a useful substitute when this ideal cannot be obtained.
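The severity-scoring step that ICDPIC-R approximates can be illustrated with the standard ISS definition: the sum of squares of the highest AIS severities in the three most severely injured body regions, conventionally capped at 75 when any region scores AIS 6. The sketch below is a generic illustration, not the package's code, and assumes the mapping from ICD codes to per-region AIS values has already been done.

```python
# Illustrative ISS computation from per-region AIS severities.
# `ais_by_region` is assumed to hold the highest AIS (1-6) observed in
# each ISS body region for one patient; the ICD-to-AIS mapping itself
# (the hard part that ICDPIC-R automates) is omitted.

def iss(ais_by_region):
    """ISS = sum of squares of the three highest regional AIS values.
    By convention, any AIS of 6 sets ISS to its maximum of 75."""
    severities = sorted(ais_by_region.values(), reverse=True)
    if any(s == 6 for s in severities):
        return 75
    return sum(s * s for s in severities[:3])

# Example: head AIS 4, chest AIS 3, extremity AIS 2 -> 16 + 9 + 4 = 29
print(iss({"head": 4, "chest": 3, "extremity": 2}))  # -> 29
```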
78 FR 24336 - Rules of Practice and Procedure; Adjusting Civil Money Penalties for Inflation
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-25
... courts. \\4\\ The CPI is published by the Department of Labor, Bureau of Statistics, and is available at.... Mathematical Calculation In general, the adjustment calculation required by the Inflation Adjustment Act is... adjusted in 2009. According to the Bureau of Labor Statistics, the CPI for June 1996 and June 2009 was 156...
40 CFR 91.511 - Suspension and revocation of certificates of conformity.
Code of Federal Regulations, 2010 CFR
2010-07-01
... many engines as needed so that the CumSum statistic, as calculated in § 91.508(a), falls below the... family, if the manufacturer desires to continue introduction into commerce of a modified version of that... family so that the CumSum statistic, as calculated in § 91.508(a) using the newly assigned FEL if...
Conservative Tests under Satisficing Models of Publication Bias.
McCrary, Justin; Christensen, Garret; Fanelli, Daniele
2016-01-01
Publication bias leads consumers of research to observe a selected sample of statistical estimates calculated by producers of research. We calculate critical values for statistical significance that could help to adjust after the fact for the distortions created by this selection effect, assuming that the only source of publication bias is file drawer bias. These adjusted critical values are easy to calculate and differ from unadjusted critical values by approximately 50%: rather than rejecting a null hypothesis when the t-ratio exceeds 2, the analysis suggests rejecting a null hypothesis when the t-ratio exceeds 3. Samples of published social science research indicate that on average, across research fields, approximately 30% of published t-statistics fall between the standard and adjusted cutoffs.
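A stylized back-of-the-envelope version of the adjustment can be reproduced with a truncation argument: if only results with |t| > 1.96 reach publication and the null t-statistic is approximately standard normal, the critical value that restores a 5% error rate among published results lands near 3. This sketch illustrates the selection logic only; it is not the authors' full calculation.

```python
from scipy.stats import norm

# Under pure file-drawer selection at |t| > 1.96 and a standard normal
# null, the adjusted cutoff c solves P(|t| > c) = 0.05 * P(|t| > 1.96).
alpha, c0 = 0.05, 1.96
tail = alpha * 2 * norm.sf(c0)   # two-sided tail mass at the new cutoff
c = norm.isf(tail / 2)           # adjusted critical value
print(round(c, 2))               # close to 3, as the abstract notes
```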
Heidel, R Eric
2016-01-01
Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
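The five components the author lists can be made concrete with a conventional normal-approximation sample size formula for comparing two means. This generic sketch is not taken from the article; `n_per_group` is an illustrative helper.

```python
from math import ceil
from scipy.stats import norm

# A priori sample size per group for a two-sample comparison of means
# (normal approximation), combining the components the abstract lists:
# effect size (delta), variance (sigma), alpha, and desired power.
def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    z_a = norm.isf(alpha / 2)    # two-sided significance quantile
    z_b = norm.isf(1 - power)    # power quantile
    return ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

# Medium standardized effect (delta/sigma = 0.5), 80% power:
print(n_per_group(delta=0.5, sigma=1.0))  # -> 63 per group
```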
Nangia, Shikha; Jasper, Ahren W; Miller, Thomas F; Truhlar, Donald G
2004-02-22
The most widely used algorithm for Monte Carlo sampling of electronic transitions in trajectory surface hopping (TSH) calculations is the so-called anteater algorithm, which is inefficient for sampling low-probability nonadiabatic events. We present a new sampling scheme (called the army ants algorithm) for carrying out TSH calculations that is applicable to systems with any strength of coupling. The army ants algorithm is a form of rare event sampling whose efficiency is controlled by an input parameter. By choosing a suitable value of the input parameter the army ants algorithm can be reduced to the anteater algorithm (which is efficient for strongly coupled cases), and by optimizing the parameter the army ants algorithm may be efficiently applied to systems with low-probability events. To demonstrate the efficiency of the army ants algorithm, we performed atom-diatom scattering calculations on a model system involving weakly coupled electronic states. Fully converged quantum mechanical calculations were performed, and the probabilities for nonadiabatic reaction and nonreactive deexcitation (quenching) were found to be on the order of 10^-8. For such low-probability events the anteater sampling scheme requires a large number of trajectories (approximately 10^10) to obtain good statistics and converged semiclassical results. In contrast, by using the new army ants algorithm, converged results were obtained by running 10^5 trajectories. Furthermore, the results were found to be in excellent agreement with the quantum mechanical results. Sampling errors were estimated using the bootstrap method, which is validated for use with the army ants algorithm. (c) 2004 American Institute of Physics.
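The variance-reduction idea can be shown in miniature (this toy is not the published algorithm): force a rare "hop" with an adjustable probability and attach a statistical weight that keeps the estimator unbiased, so far fewer samples are needed than with direct ("anteater"-style) sampling.

```python
import numpy as np

# Toy rare-event sampling in the spirit of the army ants algorithm.
# A hop with true probability p = 1e-4 is forced with probability lam,
# and each forced hop carries weight p/lam so the estimator is unbiased.
rng = np.random.default_rng(0)
p, lam, n = 1e-4, 0.5, 100_000

# Direct sampling: almost all samples see no hop at all.
direct = rng.random(n) < p
est_direct = direct.mean()

# Forced sampling: hop with probability lam, weight p/lam when it occurs.
forced = rng.random(n) < lam
weights = np.where(forced, p / lam, 0.0)
est_forced = weights.mean()

print(est_direct, est_forced)  # forced estimate is far less noisy
```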
A new device to test cutting efficiency of mechanical endodontic instruments.
Giansiracusa Rubini, Alessio; Plotino, Gianluca; Al-Sudani, Dina; Grande, Nicola M; Sonnino, Gianpaolo; Putorti, Ermanno; Cotti, Elisabetta; Testarelli, Luca; Gambarini, Gianluca
2014-03-06
The purpose of the present study was to introduce a new device specifically designed to evaluate the cutting efficiency of mechanically driven endodontic instruments. Twenty new Reciproc R25 (VDW, Munich, Germany) files were tested in the new device developed to assess the cutting ability of endodontic instruments. The device consists of a main frame, to which a mobile plastic support for the handpiece is connected, and a stainless-steel block containing a Plexiglas block against which the cutting efficiency of the instruments was tested. The length of the block cut in 1 minute was measured in a computerized program with a precision of 0.1 mm. The instruments were activated using a torque-controlled motor (Silver Reciproc; VDW, Munich, Germany) in a reciprocating movement with the "Reciproc ALL" program (Group 1) and in counter-clockwise rotation at 300 rpm (Group 2). Means and standard deviations of each group were calculated, and data were statistically analyzed with a one-way ANOVA test (P<0.05). The mean cut in the Plexiglas block was 8.6 mm (SD=0.6 mm) for Reciproc in reciprocation (Group 1) and 8.9 mm (SD=0.7 mm) for Reciproc in rotation (Group 2). There was no statistically significant difference between the two groups (P>0.05). The cutting-testing device evaluated in the present study was reliable and easy to use and may be effectively used to test the cutting efficiency of both rotary and reciprocating mechanical endodontic instruments.
Putz, Mihai V.
2009-01-01
The density matrix theory, the ancestor of density functional theory, provides the immediate framework for Path Integral (PI) development, allowing the canonical density be extended for the many-electronic systems through the density functional closure relationship. Yet, the use of path integral formalism for electronic density prescription presents several advantages: assures the inner quantum mechanical description of the system by parameterized paths; averages the quantum fluctuations; behaves as the propagator for time-space evolution of quantum information; resembles Schrödinger equation; allows quantum statistical description of the system through partition function computing. In this framework, four levels of path integral formalism were presented: the Feynman quantum mechanical, the semiclassical, the Feynman-Kleinert effective classical, and the Fokker-Planck non-equilibrium ones. In each case the density matrix or/and the canonical density were rigorously defined and presented. The practical specializations for quantum free and harmonic motions, for statistical high and low temperature limits, the smearing justification for the Bohr’s quantum stability postulate with the paradigmatic Hydrogen atomic excursion, along the quantum chemical calculation of semiclassical electronegativity and hardness, of chemical action and Mulliken electronegativity, as well as by the Markovian generalizations of Becke-Edgecombe electronic focalization functions – all advocate for the reliability of assuming PI formalism of quantum mechanics as a versatile one, suited for analytically and/or computationally modeling of a variety of fundamental physical and chemical reactivity concepts characterizing the (density driving) many-electronic systems. PMID:20087467
Analysis of Failures of High Speed Shaft Bearing System in a Wind Turbine
NASA Astrophysics Data System (ADS)
Wasilczuk, Michał; Gawarkiewicz, Rafał; Bastian, Bartosz
2018-01-01
During the operation of wind turbines with a gearbox of traditional configuration, consisting of one planetary stage and two helical stages, a high failure rate of high-speed shaft bearings is observed. Such a high failure frequency is not reflected in the results of standard calculations of bearing durability; most probably it can be attributed to an atypical failure mechanism. The authors studied these problems in 1.5 MW wind turbines at a Polish wind farm. The analysis showed that such high failure rates are commonly met all over the world and that the statistics for the analysed turbines were very similar. After a study of the potential failure mechanism and its possible causes, a modification of the existing bearing system was proposed. Various options, with different bearing types, were investigated. The different versions were examined for expected durability increase, extent of necessary gearbox modifications, and ability to solve existing operational problems.
Zhang, Zhefeng; Xian, Jiahui; Zhang, Chunyong; Fu, Degang
2017-09-01
This study investigated the degradation performance and mechanism of creatinine (a urine metabolite) with boron-doped diamond (BDD) anodes. Experiments were performed using a synthetic creatinine solution containing two supporting electrolytes (NaCl and Na2SO4). A three-level central composite design was adopted to optimize the degradation process; a mathematical model was thus constructed and used to explore the optimum operating conditions. A maximum mineralization percentage of 80%, together with full creatinine removal, was achieved within 120 min of electrolysis, confirming the strong oxidation capability of BDD anodes. Moreover, the results suggest that supporting electrolyte concentration should be listed as one of the most important parameters in BDD technology. Lastly, based on the results from quantum chemistry calculations and LC/MS analyses, two different reaction pathways governing the electrocatalytic oxidation of creatinine, irrespective of the supporting electrolyte, were identified. Copyright © 2017 Elsevier Ltd. All rights reserved.
Weibull analysis of fracture test data on bovine cortical bone: influence of orientation.
Khandaker, Morshed; Ekwaro-Osire, Stephen
2013-01-01
The fracture toughness, KIC, of cortical bone has been experimentally determined by several researchers. Variation in KIC values arises from variation in specimen orientation, shape, and size during the experiment. The fracture toughness of a cortical bone is governed by the severest flaw and, hence, may be analyzed using Weibull statistics. To the best of the authors' knowledge, however, no studies of this aspect have been published. The motivation of the study is the evaluation of Weibull parameters in the circumferential-longitudinal (CL) and longitudinal-circumferential (LC) directions. We hypothesized that the Weibull parameters vary depending on the bone microstructure. In the present work, a two-parameter Weibull statistical model was applied to calculate the plane-strain fracture toughness of bovine femoral cortical bone using specimens extracted in the CL and LC directions of the bone. It was found that the Weibull modulus of fracture toughness was larger for CL specimens than for LC specimens, but the opposite trend was seen for the characteristic fracture toughness. The reason for these trends is the difference in microstructure and extrinsic toughening mechanisms between the CL and LC directions of the bone. The Weibull parameters found in this study can be applied to develop a damage-mechanics model for bone.
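The two-parameter Weibull analysis described above can be sketched as follows, using synthetic toughness data since the bovine-bone measurements are not reproduced here; the generating modulus and scale are assumptions for illustration.

```python
import numpy as np
from scipy.stats import weibull_min

# Two-parameter Weibull fit to synthetic fracture-toughness data.
# floc=0 fixes the location parameter at zero, leaving the Weibull
# modulus (shape) and characteristic toughness (scale) to be estimated.
rng = np.random.default_rng(1)
true_modulus, true_scale = 5.0, 7.0   # assumed values, for illustration
k_ic = weibull_min.rvs(true_modulus, scale=true_scale, size=200,
                       random_state=rng)

modulus, loc, scale = weibull_min.fit(k_ic, floc=0)
print(round(modulus, 1), round(scale, 1))  # near the generating values
```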
NASA Astrophysics Data System (ADS)
Belof, Jonathan; Orlikowski, Daniel; Wu, Christine; McLaughlin, Keith
2013-06-01
Shock and ramp compression experiments are allowing us to probe condensed matter under extreme conditions where phase transitions and other non-equilibrium aspects can now be directly observed, but first principles simulation of kinetics remains a challenge. A multi-scale approach is presented here, with non-equilibrium statistical mechanical quantities calculated by molecular dynamics (MD) and then leveraged to inform a classical nucleation and growth kinetics model at the hydrodynamic scale. Of central interest is the free energy barrier for the formation of a critical nucleus, with direct NEMD presenting the challenge of the relatively long timescales necessary to resolve nucleation. Rather than attempt to resolve the time-dependent nucleation sequence directly, the methodology derived here is built upon the non-equilibrium work theorem in order to bias the formation of a critical nucleus and thus construct the nucleation and growth rates. Having determined these kinetic terms from MD, a hydrodynamics implementation of Kolmogorov-Johnson-Mehl-Avrami (KJMA) kinetics and metastability is applied to the dynamic compressive freezing of water and compared with recent ramp compression experiments [Dolan et al., Nature (2007)]. Lawrence Livermore National Laboratory is operated by Lawrence Livermore National Security, LLC, for the U.S. Department of Energy, National Nuclear Security Administration under Contract DE-AC52-07NA27344.
Porcelain surface conditioning protocols and shear bond strength of orthodontic brackets.
Lestrade, Ashley M; Ballard, Richard W; Xu, Xiaoming; Yu, Qingzhao; Kee, Edwin L; Armbruster, Paul C
2016-05-01
The objective of the present study was to determine which of six bonding protocols yielded a clinically acceptable shear bond strength (SBS) of metal orthodontic brackets to CAD/CAM lithium disilicate porcelain restorations. A secondary aim was to determine which bonding protocol produced the least surface damage at debond. Sixty lithium disilicate samples were fabricated to replicate the facial surface of a mandibular first molar using a CEREC CAD/CAM machine. The samples were split into six test groups, each of which received a different mechanical/chemical pretreatment protocol to roughen the porcelain surface prior to bonding a molar orthodontic attachment. Shear bond strength testing was conducted using an Instron machine. The mean, maximum, minimum, and standard deviation SBS values for each sample group, including an enamel control, were calculated. A t-test was used to evaluate the statistical significance of differences between the groups. No significant differences were found in SBS values, with the exception of surface roughening with a green stone prior to HFA and silane treatment; this protocol yielded slightly higher bond strength, which was statistically significant. Chemical treatment alone with HFA/silane yielded SBS values within an acceptable clinical range to withstand the forces applied by orthodontic treatment and potentially eliminates the need to mechanically roughen the ceramic surface.
Irrigation water use in Kansas, 2013
Lanning-Rush, Jennifer L.
2016-03-22
This report, prepared by the U.S. Geological Survey in cooperation with the Kansas Department of Agriculture, Division of Water Resources, presents derivative statistics of 2013 irrigation water use in Kansas. The published regional and county-level statistics from the previous 4 years (2009–12) are shown with the 2013 statistics and are used to calculate a 5-year average. An overall Kansas average and regional averages also are calculated and presented. Total reported irrigation water use in 2013 was 3.3 million acre-feet of water applied to 3.0 million irrigated acres.
Calculating weighted estimates of peak streamflow statistics
Cohn, Timothy A.; Berenbrock, Charles; Kiang, Julie E.; Mason, Jr., Robert R.
2012-01-01
According to the Federal guidelines for flood-frequency estimation, the uncertainty of peak streamflow statistics, such as the 1-percent annual exceedance probability (AEP) flow at a streamgage, can be reduced by combining the at-site estimate with the regional regression estimate to obtain a weighted estimate of the flow statistic. The procedure assumes the estimates are independent, which is reasonable in most practical situations. The purpose of this publication is to describe and make available a method for calculating a weighted estimate from the uncertainty or variance of the two independent estimates.
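The weighting method the publication describes is, in essence, inverse-variance weighting of two independent estimates. A minimal sketch with hypothetical numbers:

```python
# Inverse-variance weighting of two independent estimates of the same
# flow statistic (e.g., at-site and regional-regression estimates).
# The values below are hypothetical, for illustration only.
def weighted_estimate(x1, var1, x2, var2):
    """Return the variance-weighted estimate and its variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * x1 + w2 * x2) / (w1 + w2), 1.0 / (w1 + w2)

# At-site 1%-AEP flow of 5000 (variance 400) combined with a regional
# estimate of 4600 (variance 1200):
x, v = weighted_estimate(5000.0, 400.0, 4600.0, 1200.0)
print(x, v)  # -> 4900.0 300.0, nearer the lower-variance estimate
```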
Nomogram for sample size calculation on a straightforward basis for the kappa statistic.
Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo
2014-09-01
Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation to consider the level of agreement under a certain marginal prevalence in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms that eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
Mechanism of Urban Water Dissipation: A Case Study in Xiamen Island
NASA Astrophysics Data System (ADS)
Zhou, J.; Liu, J.; Wang, Z.
2017-12-01
Urbanization has resulted in increasing water supply and water dissipation from water uses in urban areas, but traditional hydrological models usually ignore the dissipation from the social water cycle. In order to comprehensively calculate the water vapor flux of the urban natural-social binary water cycle, this study advanced the concept of urban water dissipation (UWD) to describe all forms of water transfer from liquid to gas in urban areas. UWD units were divided according to the water consumption characteristics of the underlying surface, and investigation, statistics, observation, and measurement methods were used to study the water dissipation of different units, determine the corresponding calculation methods, and establish a UWD calculation model. Taking Xiamen Island as an example, the city's water dissipation in 2016 was calculated to be 850 mm and verified by a water balance. The results quantified the contributions of water dissipation from green land, buildings, hardened ground, and water surfaces, and showed that water dissipation inside buildings was a main component of the total UWD. The water vapor flux of the social water cycle exceeds that of the natural water cycle in the urban area; the social water cycle is thus the main part of the city's water cycle, and a focus of urban hydrology research in the future.
New estimates of asymmetric decomposition of racemic mixtures by natural beta-radiation sources
NASA Technical Reports Server (NTRS)
Hegstrom, R. A.; Rich, A.; Van House, J.
1985-01-01
Some recent calculations that appeared to invalidate the Vester-Ulbricht hypothesis, which suggests that the chirality of biological molecules originates from the beta-radiolysis of prebiotic racemic mixtures, are reexamined. These calculations apparently showed that the radiolysis-induced chiral polarization can never exceed the chiral polarization produced by statistical fluctuations. It is here shown that several overly restrictive conditions were imposed on these calculations which, when relaxed, allow the radiolysis-induced polarization to exceed that produced by statistical fluctuations, in accordance with the Vester-Ulbricht hypothesis.
Statistical and sampling issues when using multiple particle tracking
NASA Astrophysics Data System (ADS)
Savin, Thierry; Doyle, Patrick S.
2007-08-01
Video microscopy can be used to simultaneously track several microparticles embedded in a complex material. The trajectories are used to extract a sample of displacements at random locations in the material. From this sample, averaged quantities characterizing the dynamics of the probes are calculated to evaluate structural and/or mechanical properties of the assessed material. However, the sampling of measured displacements in heterogeneous systems is singular because the volume of observation with video microscopy is finite. By carefully characterizing the sampling design in the experimental output of the multiple particle tracking technique, we derive estimators for the mean and variance of the probes’ dynamics that are independent of the peculiar statistical characteristics. We expose stringent tests of these estimators using simulated and experimental complex systems with a known heterogeneous structure. Up to a certain fundamental limitation, which we characterize through a material degree of sampling by the embedded probe tracking, these estimators can be applied to quantify the heterogeneity of a material, providing an original and intelligible kind of information on complex fluid properties. More generally, we show that the precise assessment of the statistics in the multiple particle tracking output sample of observations is essential in order to provide accurate unbiased measurements.
Plackett-Burman experimental design to facilitate syntactic foam development
Smith, Zachary D.; Keller, Jennie R.; Bello, Mollie; ...
2015-09-14
The use of an eight-experiment Plackett–Burman method can assess six experimental variables and eight responses in a polysiloxane-glass microsphere syntactic foam. The approach aims to decrease the time required to develop a tunable polymer composite by identifying a reduced set of variables and responses suitable for future predictive modeling. The statistical design assesses the main effects of mixing process parameters, polymer matrix composition, microsphere density and volume loading, and the blending of two grades of microspheres, using a dummy factor in statistical calculations. Responses cover rheological, physical, thermal, and mechanical properties. The cure accelerator content of the polymer matrix and the volume loading of the microspheres have the largest effects on foam properties. These factors are the most suitable for controlling the gel point of the curing foam, and the density of the cured foam. The mixing parameters introduce widespread variability and therefore should be fixed at effective levels during follow-up testing. Some responses may require greater contrast in microsphere-related factors. As a result, compared to other possible statistical approaches, the run economy of the Plackett–Burman method makes it a valuable tool for rapidly characterizing new foams.
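An 8-run Plackett–Burman design and its main-effect calculation can be sketched as follows. The factor assignments and response coefficients are hypothetical stand-ins for the foam variables, not values from the study; the point is the construction (cyclic shifts of a generating row plus a row of all −1) and the effect estimate (mean response at +1 minus mean at −1 per factor).

```python
import numpy as np

# 8-run Plackett-Burman design: cyclic shifts of a generating row, plus a
# final row of all -1. Columns 0-5 hold six real factors; column 6 is the
# dummy factor used to gauge noise.
gen = np.array([1, 1, 1, -1, 1, -1, -1])
design = np.array([np.roll(gen, i) for i in range(7)] + [[-1] * 7])

# Hypothetical responses (e.g. a gel-time-like quantity); only factors 0
# and 4 (say, accelerator content and microsphere loading) truly matter.
rng = np.random.default_rng(1)
y = 10 + 3 * design[:, 0] - 2 * design[:, 4] + rng.normal(0, 0.1, 8)

# Main effect of each factor: because the columns are orthogonal, this is
# simply (sum of y at +1 minus sum of y at -1) / 4.
effects = design.T @ y / 4
print(effects)
```

The active factors show effects near twice their coefficients (≈ +6 and −4), while the dummy column's effect stays near zero, which is how the dummy factor serves as an error estimate.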
Uçar, Yurdanur; Aysan Meriç, İpek; Ekren, Orhun
2018-02-11
To compare the fracture mechanics, microstructure, and elemental composition of lithography-based ceramic manufacturing with pressing and CAD/CAM. Disc-shaped specimens (16 mm diameter, 1.2 mm thick) were used for mechanical testing (n = 10/group). Biaxial flexural strength of three groups (In-Ceram alumina [ICA], lithography-based alumina, ZirkonZahn) were determined using the "piston on 3-ball" technique as suggested in test Standard ISO-6872. Vickers hardness test was performed. Fracture toughness was calculated using fractography. Results were statistically analyzed using Kruskal-Wallis test followed by Dunnett T3 (α = 0.05). Weibull analysis was conducted. Polished and fracture surface characterization was made using scanning electron microscope (SEM). Energy dispersive spectroscopy (EDS) was used for elemental analysis. Biaxial flexural strength of ICA, LCM alumina (LCMA), and ZirkonZahn were 147 ± 43 MPa, 490 ± 44 MPa, and 709 ± 94 MPa, respectively, and were statistically different (P ≤ 0.05). The Vickers hardness number of ICA was 850 ± 41, whereas hardness values for LCMA and ZirkonZahn were 1581 ± 144 and 1249 ± 57, respectively, and were statistically different (P ≤ 0.05). A statistically significant difference was found between fracture toughness of ICA (2 ± 0.4 MPa⋅m 1/2 ), LCMA (6.5 ± 1.5 MPa⋅m 1/2 ), and ZirkonZahn (7.7 ± 1 MPa⋅m 1/2 ) (P ≤ 0.05). Weibull modulus was highest for LCMA (m = 11.43) followed by ZirkonZahn (m = 8.16) and ICA (m = 5.21). Unlike LCMA and ZirkonZahn groups, a homogeneous microstructure was not observed for ICA. EDS results supported the SEM images. Within the limitations of this in vitro study, it can be concluded that LCM seems to be a promising technique for final ceramic object manufacturing in dental applications. Both the manufacturing method and the material used should be improved. © 2018 by the American College of Prosthodontists.
The development of ensemble theory. A new glimpse at the history of statistical mechanics
NASA Astrophysics Data System (ADS)
Inaba, Hajime
2015-12-01
This paper investigates the history of statistical mechanics from the viewpoint of the development of the ensemble theory from 1871 to 1902. In 1871, Ludwig Boltzmann introduced a prototype model of an ensemble that represents a polyatomic gas. In 1879, James Clerk Maxwell defined an ensemble as copies of systems of the same energy. Inspired by H.W. Watson, he called his approach "statistical". Boltzmann and Maxwell regarded the ensemble theory as a much more general approach than the kinetic theory. In the 1880s, influenced by Hermann von Helmholtz, Boltzmann made use of ensembles to establish thermodynamic relations. In Elementary Principles in Statistical Mechanics of 1902, Josiah Willard Gibbs tried to get his ensemble theory to mirror thermodynamics, including thermodynamic operations in its scope. Thermodynamics played the role of a "blind guide". His theory of ensembles can be characterized as more mathematically oriented than Einstein's theory proposed in the same year. Mechanical, empirical, and statistical approaches to foundations of statistical mechanics are presented. Although it was formulated in classical terms, the ensemble theory provided an infrastructure still valuable in quantum statistics because of its generality.
Heads Up! a Calculation- & Jargon-Free Approach to Statistics
ERIC Educational Resources Information Center
Giese, Alan R.
2012-01-01
Evaluating the strength of evidence in noisy data is a critical step in scientific thinking that typically relies on statistics. Students without statistical training will benefit from heuristic models that highlight the logic of statistical analysis. The likelihood associated with various coin-tossing outcomes gives students such a model. There…
A statistical approach to develop a detailed soot growth model using PAH characteristics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raj, Abhijeet; Celnik, Matthew; Shirley, Raphael
A detailed PAH growth model is developed, which is solved using a kinetic Monte Carlo algorithm. The model describes the structure and growth of planar PAH molecules, and is referred to as the kinetic Monte Carlo-aromatic site (KMC-ARS) model. A detailed PAH growth mechanism based on reactions at radical sites available in the literature, and additional reactions obtained from quantum chemistry calculations, are used to model the PAH growth processes. New rates for the reactions involved in the cyclodehydrogenation process for the formation of 6-member rings on PAHs are calculated in this work based on density functional theory simulations. The KMC-ARS model is validated by comparing experimentally observed ensembles on PAHs with the computed ensembles for a C{sub 2}H{sub 2} and a C{sub 6}H{sub 6} flame at different heights above the burner. The motivation for this model is the development of a detailed soot particle population balance model which describes the evolution of an ensemble of soot particles based on their PAH structure. However, at present incorporating such a detailed model into a population balance is computationally unfeasible. Therefore, a simpler model referred to as the site-counting model has been developed, which replaces the structural information of the PAH molecules by their functional groups augmented with statistical closure expressions. This closure is obtained from the KMC-ARS model, which is used to develop correlations and statistics in different flame environments which describe such PAH structural information. These correlations and statistics are implemented in the site-counting model, and results from the site-counting model and the KMC-ARS model are in good agreement. Additionally the effect of steric hindrance in large PAH structures is investigated and correlations for sites unavailable for reaction are presented.
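A generic kinetic Monte Carlo step of the kind underlying such site-based models can be sketched as follows. This is not the KMC-ARS implementation; the site names and rates are illustrative. Each event is chosen with probability proportional to its rate, and time advances by an exponential waiting time drawn from the total rate.

```python
import math
import random

random.seed(3)

def kmc_step(site_rates, t):
    """One kinetic Monte Carlo event: select a reaction channel with
    probability proportional to its rate, then advance time by an
    exponential waiting time governed by the total rate."""
    total = sum(site_rates.values())
    r = random.random() * total
    acc = 0.0
    for site, rate in site_rates.items():
        acc += rate
        if r <= acc:
            chosen = site
            break
    dt = -math.log(random.random()) / total
    return chosen, t + dt

# Hypothetical growth rates (s^-1) at three PAH edge-site types; names
# and values are illustrative only.
rates = {"free_edge": 5.0, "zigzag": 2.0, "armchair": 1.0}
counts = {k: 0 for k in rates}
t = 0.0
for _ in range(20000):
    site, t = kmc_step(rates, t)
    counts[site] += 1
print(counts, t)
```

Over many steps, each channel fires in proportion to its rate (here free_edge ≈ 5/8 of events), which is the statistical behavior the site-counting closure aims to reproduce without tracking full structures.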
[A Review on the Use of Effect Size in Nursing Research].
Kang, Hyuncheol; Yeon, Kyupil; Han, Sang Tae
2015-10-01
The purpose of this study was to introduce the main concepts of statistical testing and effect size, and to provide researchers in nursing science with guidance on how to calculate effect sizes for the statistical analysis methods mainly used in nursing. For the t-test, analysis of variance, correlation analysis, and regression analysis, which are used frequently in nursing research, generally accepted definitions of effect size are explained. Formulae for calculating effect sizes are described with several examples from nursing research. Furthermore, the authors present the required minimum sample size for each example using G*Power 3, the most widely used program for calculating sample size. It is noted that statistical significance testing and effect size measurement serve different purposes, and that reliance on only one of them may be misleading. Some practical guidelines are recommended for combining statistical significance testing and effect size measures in order to make more balanced decisions in quantitative analyses.
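As a minimal example of the kind of effect-size calculation described, the sketch below computes Cohen's d for two independent groups using the pooled standard deviation. The data are hypothetical scores invented for illustration.

```python
import math

def cohens_d(group1, group2):
    """Cohen's d for two independent groups, using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    sd_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / sd_pooled

# Hypothetical outcome scores for two nursing interventions.
a = [6.1, 5.8, 6.4, 5.9, 6.2, 6.0]
b = [5.2, 5.5, 5.0, 5.4, 5.3, 5.1]
d = cohens_d(a, b)
print(round(d, 2))
```

The resulting d would then be fed into a power-analysis tool such as G*Power 3 to obtain the minimum sample size for a chosen power and alpha.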
Li, Wei-Kang; Zheng, Qing-Chuan; Zhang, Hong-Xing
2016-01-01
TvMyb2, one of the Myb-like transcription factors in Trichomonas vaginalis, binds to two closely spaced promoter sites, MRE-1/MRE-2r and MRE-2f, on the ap65-1 gene. However, the detailed dynamical structural characteristics of the tvMyb2-ap65-1 complex, and a detailed study of the protein in the complex, have not been reported. Focusing on a specific tvMyb2-MRE-2-13 complex (PDB code: ) and a series of mutants, K51A, R84A and R87A, we applied molecular dynamics (MD) simulation and molecular mechanics generalized Born surface area (MM-GBSA) free energy calculations to examine the role of the tvMyb2 protein in recognition and binding. The simulation results indicate that tvMyb2 becomes stable when it binds the DNA duplex. For the mutants, statistical analyses of the H-bond and hydrophobic contacts show that some residues have a significant influence on recognition and binding to ap65-1 DNA. Our work provides important information for understanding the interactions of tvMyb2 with ap65-1.
Struemph, Jonathon M; Chong, Alexander C M; Wooley, Paul H
2015-01-01
PMMA bone cement is a brittle material and the creation of defects that increase porosity during mixing or injecting is a significant factor in reducing its mechanical properties. The goal during residency training is to learn how to avoid creating increased porosity during mixing and injecting the material. The aim of this study was to evaluate and compare tensile and compression strength for PMMA cement mixed by intern orthopaedic residents (PGY-1) and senior orthopaedic residents (PGY-5). The hypothesis was that the mechanical properties of PMMA cement mixed by PGY-5 would be significantly better than PMMA cement mixed by PGY-1 residents. Four PGY-1 and four PGY-5 orthopaedic residents each prepared eight tensile specimens. The bone cement used was Simplex™ P bone cement (Stryker Howmedica Osteonics, Mahwah, NJ) under vacuum mixing in a cement-delivery system. Tensile testing of the specimens was performed in an MTS Bionix servohydraulic materials testing system with loading rate of 2.54 mm/min at room temperature. The mean and standard deviation of the ultimate tensile strength (UTS) for each orthopaedic resident group was calculated. The compression specimens were cylinders formed with a central core to mimic a prosthetic implant. Ten samples from each orthopaedic resident were tested using the same MTS system under identical conditions at room temperature. The specimens were loaded from -50 N to complete structural failure at the rate of 20 mm/min. The ultimate compressive strength (UCS) was then determined and the mean and standard deviation calculated for each group. The average UTS of the bone cement for the PGY-1 and PGY-5 residents was 37.5 ± 4.5 MPa and 39.2 ± 5.0 MPa, respectively, and there was no statistically significant difference between the two groups. 
For the tensile elastic modulus of the bone cement, the results for the PGY-1 and PGY-5 residents were 2.40 ± 0.09 GPa and 2.44 ± 0.08 GPa, respectively, and again there was no statistically significant difference. For the compression elastic modulus of the bone cement, the results for the PGY-1 and PGY-5 residents were 1.19 ± 0.13 GPa and 1.21 ± 0.18 GPa, respectively, with no statistically significant difference. However, the UCS of the bone cement for the PGY-1 and PGY-5 residents was 87.4 ± 5.8 MPa and 91.1 ± 4.5 MPa, respectively, and there was a statistically significant difference between the groups. The PMMA specimens prepared by both the PGY-1 and PGY-5 resident groups had similar characteristics during tensile and compression testing, and were similar to known standards. Although mixing and applying bone cement is an important skill for joint replacement surgery, and although this study indicates that experience in mixing and injecting can affect cement mechanical properties, the results suggest that no special training beyond a basic video demonstrating the manufacturer's standard procedure is necessary for orthopaedic residents.
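The tensile comparison can be reproduced in outline from the summary statistics alone. The sketch below computes Welch's t statistic from the published UTS means and standard deviations; the group sizes are an assumption (32 specimens per group, i.e. 8 specimens from each of 4 residents), since the per-test counts are not restated here.

```python
import math

def welch_t_from_summary(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    computed from group means, SDs, and sizes."""
    se1, se2 = s1**2 / n1, s2**2 / n2
    t_stat = (m1 - m2) / math.sqrt(se1 + se2)
    df = (se1 + se2) ** 2 / (se1**2 / (n1 - 1) + se2**2 / (n2 - 1))
    return t_stat, df

# UTS summaries from the abstract: PGY-5 39.2 +/- 5.0 MPa vs
# PGY-1 37.5 +/- 4.5 MPa; n = 32 per group is an assumed value.
t, df = welch_t_from_summary(39.2, 5.0, 32, 37.5, 4.5, 32)
print(round(t, 2), round(df, 1))
```

With these assumed sample sizes the t statistic stays well below conventional significance thresholds, consistent with the reported lack of a significant UTS difference.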
NASA Astrophysics Data System (ADS)
Bhakat, Soumendranath; Söderhjelm, Pär
2017-01-01
The funnel metadynamics method enables rigorous calculation of the potential of mean force along an arbitrary binding path and thereby evaluation of the absolute binding free energy. A problem of such physical paths is that the mechanism characterizing the binding process is not always obvious. In particular, it might involve reorganization of the solvent in the binding site, which is not easily captured with a few geometrically defined collective variables that can be used for biasing. In this paper, we propose and test a simple method to resolve this trapped-water problem by dividing the process into an artificial host-desolvation step and an actual binding step. We show that, under certain circumstances, the contribution from the desolvation step can be calculated without introducing further statistical errors. We apply the method to the problem of predicting host-guest binding free energies in the SAMPL5 blind challenge, using two octa-acid hosts and six guest molecules. For one of the hosts, well-converged results are obtained and the prediction of relative binding free energies is the best among all the SAMPL5 submissions. For the other host, which has a narrower binding pocket, the statistical uncertainties are slightly higher; longer simulations would therefore be needed to obtain conclusive results.
The Web as an educational tool for/in learning/teaching bioinformatics statistics.
Oliver, J; Pisano, M E; Alonso, T; Roca, P
2005-12-01
Statistics provides essential tools in Bioinformatics for interpreting the results of a database search and for managing the enormous amounts of information generated by genomics, proteomics and metabolomics. The goal of this project was the development of a software tool, as simple as possible, to demonstrate the use of statistics in Bioinformatics. Computer Simulation Methods (CSMs) developed using Microsoft Excel were chosen for their broad range of applications, immediate and easy formula calculation, immediate testing, easy graphical representation, and general use and acceptance by the scientific community. The result of these endeavours is a set of utilities which can be accessed from the following URL: http://gmein.uib.es/bioinformatica/statistics. When tested on students with previous coursework in traditional statistical teaching methods, the general consensus was that Web-based instruction had numerous advantages, but that traditional methods with manual calculations were also needed for theory and practice. Once the basic statistical formulas had been mastered, Excel spreadsheets and graphics proved very useful for trying many parameters rapidly without tedious calculations. CSMs will be of great importance for the training of students and professionals in the field of bioinformatics, and for upcoming applications in self-directed learning and continuing education.
The shape of CMB temperature and polarization peaks on the sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcos-Caballero, A.; Fernández-Cobos, R.; Martínez-González, E.
2016-04-01
We present a theoretical study of CMB temperature peaks, including their effect on the polarization field, and allowing nonzero eccentricity. The formalism is developed in harmonic space using the covariant derivative on the sphere, which guarantees that the expressions obtained are completely valid at large scales (i.e., no flat approximation). The expected patterns induced by the peak, either in temperature or polarization, are calculated, as well as their covariances. It is found that the eccentricity introduces a quadrupolar dependence in the peak shape, which is proportional to a complex bias parameter b_ε characterizing the peak asymmetry and orientation. In addition, the one-point statistics of the variables defining the peak on the sphere are reviewed, finding some differences with respect to the flat case for large peaks. Finally, we present a mechanism to simulate constrained CMB maps with a particular peak on the field, which is an interesting tool for analysing the statistical properties of the peaks present in the data.
Shock and statistical acceleration of energetic particles in the interplanetary medium
NASA Technical Reports Server (NTRS)
Valdes-Galicia, J. F.; Moussas, X.; Quenby, J. J.; Neubauer, F. M.; Schwenn, R.
1985-01-01
Definite evidence for particle acceleration in the solar wind appeared around a decade ago. Two likely sources are known to exist: particles may be accelerated by the turbulence resulting from the superposition of Alfven and magnetosonic waves (statistical acceleration), or they may be accelerated directly at shock fronts formed by the interaction of fast and slow solar wind (CIRs) or by traveling shocks due to sporadic coronal mass ejections. Naturally, both mechanisms may be operative. In this work the acceleration problem was tackled numerically using Helios 1 and 2 data to create a realistic representation of the heliospheric plasma. Two 24-hour samples were used: one with only wave-like fluctuations of the field (day 90, Helios 1) and another with a shock present (day 92, Helios 2), both in 1976 during the STIP 2 interval. Transport coefficients in energy space have been calculated for particles injected in each sample, and the effect of the shock was studied in detail.
Modeling of Yb3+/Er3+-codoped microring resonators
NASA Astrophysics Data System (ADS)
Vallés, Juan A.; Gălătuş, Ramona
2015-03-01
The performance of a highly Yb3+/Er3+-codoped phosphate glass add-drop microring resonator is numerically analyzed. The model assumes resonant behaviour of both pump and signal powers, and the build-up of pump intensity inside the microring resonator and the signal transfer functions at the device's through and drop ports are evaluated. Detailed equations for the evolution of the rare-earth ion level population densities and the propagation of the optical powers inside the microring resonator are included in the model. Moreover, due to the high dopant concentrations considered, a microscopic statistical formalism, based on the statistical average of the excitation probability of the Er3+ ion at the microscopic level, is used to describe inter-atomic energy-transfer mechanisms. Realistic parameters and working conditions are used for the calculations. The requirements for achieving amplification and laser oscillation in these devices are obtained as a function of rare-earth ion concentration and coupling losses.
Kinematic analysis of the crank-cam mechanism of process equipment
NASA Astrophysics Data System (ADS)
Podgornyj, Yu I.; Skeeba, V. Yu; Martynova, T. G.; Pechorkina, N. S.; Skeeba, P. Yu
2018-03-01
This article discusses how to define the kinematic parameters of a crank-cam mechanism. Using the mechanism design, the authors have developed a calculation model and a calculation algorithm that allowed the definition of kinematic parameters of the mechanism, including crank displacements, angular velocities and acceleration, as well as driven link (rocker arm) angular speeds and acceleration. All calculations were performed using the Mathcad mathematical package. The results of the calculations are reported as numerical values.
Gorobets, Yu I; Gorobets, O Yu
2015-01-01
A statistical model is proposed in this paper for the description of the orientation of trajectories of unicellular diamagnetic organisms in a magnetic field. A statistical parameter, the effective energy, is calculated on the basis of this model. The resulting effective energy is a statistical characteristic of the trajectories of diamagnetic microorganisms in a magnetic field connected with their metabolism. The statistical model is applicable in the case when the energy of the thermal motion of the bacteria is negligible in comparison with their energy in a magnetic field and the bacteria manifest significant "active random movement", i.e., randomizing motion of a non-thermal nature, for example movement by means of flagella. The energy of this randomizing active self-motion of the bacteria is characterized by a new statistical parameter for biological objects. This parameter replaces the energy of randomizing thermal motion in the calculation of the statistical distribution.
Ladd, David E.; Law, George S.
2007-01-01
The U.S. Geological Survey (USGS) provides streamflow and other stream-related information needed to protect people and property from floods, to plan and manage water resources, and to protect water quality in the streams. Streamflow statistics provided by the USGS, such as the 100-year flood and the 7-day 10-year low flow, frequently are used by engineers, land managers, biologists, and many others to help guide decisions in their everyday work. In addition to streamflow statistics, resource managers often need to know the physical and climatic characteristics (basin characteristics) of the drainage basins for locations of interest to help them understand the mechanisms that control water availability and water quality at these locations. StreamStats is a Web-enabled geographic information system (GIS) application that makes it easy for users to obtain streamflow statistics, basin characteristics, and other information for USGS data-collection stations and for ungaged sites of interest. If a user selects the location of a data-collection station, StreamStats will provide previously published information for the station from a database. If a user selects a location where no data are available (an ungaged site), StreamStats will run a GIS program to delineate a drainage basin boundary, measure basin characteristics, and estimate streamflow statistics based on USGS streamflow prediction methods. A user can download a GIS feature class of the drainage basin boundary with attributes including the measured basin characteristics and streamflow estimates.
NASA Technical Reports Server (NTRS)
Szatmary, Steven A.; Gyekenyesi, John P.; Nemeth, Noel N.
1990-01-01
This manual describes the operation and theory of the PC-CARES (Personal Computer-Ceramic Analysis and Reliability Evaluation of Structures) computer program for the IBM PC and compatibles running PC-DOS/MS-DOS or IBM/MS-OS/2 (version 1.1 or higher) operating systems. The primary purpose of this code is to estimate Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities. Included in the manual is the description of the calculation of shape and scale parameters of the two-parameter Weibull distribution using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. The methods for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull line, as well as the techniques for calculating the Batdorf flaw-density constants, are also described.
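Maximum-likelihood estimation of the two-parameter Weibull distribution, one of the methods the manual describes, can be sketched as follows. This is not the PC-CARES code; the data are synthetic, and the shape parameter is found by bisection on the standard profile-likelihood equation.

```python
import math
import random

random.seed(4)

def weibull_mle_shape(data, tol=1e-8):
    """Maximum-likelihood Weibull shape parameter, found by bisection on
    the profile equation  sum(x^m ln x)/sum(x^m) - 1/m - mean(ln x) = 0,
    which is monotone increasing in m."""
    logs = [math.log(x) for x in data]
    mean_log = sum(logs) / len(logs)

    def f(m):
        num = sum(x**m * math.log(x) for x in data)
        den = sum(x**m for x in data)
        return num / den - 1.0 / m - mean_log

    lo, hi = 0.01, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Synthetic strengths from a Weibull law with shape 10 and scale 300 (MPa),
# standing in for complete (uncensored) fracture data.
data = [random.weibullvariate(300.0, 10.0) for _ in range(200)]
m_hat = weibull_mle_shape(data)
# Scale parameter from the closed-form MLE given the fitted shape.
scale_hat = (sum(x**m_hat for x in data) / len(data)) ** (1.0 / m_hat)
print(m_hat, scale_hat)
```

Censored samples, which the manual also covers, change the likelihood terms but the same bisection strategy applies.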
Statistical mechanics explanation for the structure of ocean eddies and currents
NASA Astrophysics Data System (ADS)
Venaille, A.; Bouchet, F.
2010-12-01
The equilibrium statistical mechanics of two-dimensional and geostrophic flows predicts the outcome for the large scales of the flow resulting from turbulent mixing. This theory has been successfully applied to describe detailed properties of Jupiter's Great Red Spot. We discuss the range of applicability of this theory to ocean dynamics. It is able to reproduce mesoscale structures like ocean rings. It explains, from statistical mechanics, the westward drift of rings at the speed of non-dispersive baroclinic waves, and the recently observed (Chelton et al.) slower northward drift of cyclonic eddies and southward drift of anticyclonic eddies. We also uncover relations between strong eastward mid-basin inertial jets, like the Kuroshio extension and the Gulf Stream, and statistical equilibria, and we explain under which conditions such strong mid-basin jets can be understood as statistical equilibria. Even in such out-of-equilibrium situations, equilibrium statistical mechanics predicts the overall qualitative flow structure remarkably well. We claim that these results are complementary to the classical Sverdrup-Munk theory: they explain the inertial part of the basin dynamics, and the jet structure and location, using very simple theoretical arguments. References: A. Venaille and F. Bouchet, Ocean rings and jets as statistical equilibrium states, submitted to JPO; F. Bouchet and A. Venaille, Statistical mechanics of two-dimensional and geophysical flows, arxiv ...., submitted to Physics Reports; P. Berloff, A. M. Hogg, and W. Dewar, The Turbulent Oscillator: A Mechanism of Low-Frequency Variability of the Wind-Driven Ocean Gyres, Journal of Physical Oceanography 37 (2007) 2363; D. B. Chelton, M. G. Schlax, R. M. Samelson, and R. A. de Szoeke, Global observations of large oceanic eddies, Geophys. Res. Lett. 34 (2007) L15606. [Figure: a) streamfunction predicted by statistical mechanics; b) and c) snapshots of streamfunction and potential vorticity (red: positive values; blue: negative values) in the upper layer of a three-layer quasi-geostrophic model of a mid-latitude ocean basin (from Berloff et al.).]
Anandakrishnan, Ramu; Onufriev, Alexey
2008-03-01
In statistical mechanics, the equilibrium properties of a physical system of particles can be calculated as the statistical average over the accessible microstates of the system. In general, these calculations are computationally intractable since they involve summations over an exponentially large number of microstates. Clustering algorithms are one of the methods used to numerically approximate these sums. The most basic clustering algorithms first sub-divide the system into a set of smaller subsets (clusters). Then, interactions between particles within each cluster are treated exactly, while all interactions between different clusters are ignored. These smaller clusters have far fewer microstates, making the summation over these microstates tractable. These algorithms have been previously used for biomolecular computations, but remain relatively unexplored in this context. Presented here is a theoretical analysis of the error and computational complexity of the two most basic clustering algorithms that were previously applied in the context of biomolecular electrostatics. We derive a tight, computationally inexpensive, error bound for the equilibrium state of a particle computed via these clustering algorithms. For some practical applications, it is the root mean square error, which can be significantly lower than the error bound, that may be more important. We show that there is a strong empirical relationship between the error bound and the root mean square error, suggesting that the error bound could be used as a computationally inexpensive metric for predicting the accuracy of clustering algorithms in practical applications. An example of error analysis for such an application, the computation of the average charge of ionizable amino acids in proteins, is given, demonstrating that the clustering algorithm can be accurate enough for practical purposes.
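The basic clustering idea can be illustrated on a toy spin system (not the paper's biomolecular electrostatics application): interactions within each cluster are summed exactly over that cluster's microstates, interactions between clusters are dropped, and the approximate thermal average is compared against the full enumeration. All couplings below are random illustrative values.

```python
import itertools
import math
import random

random.seed(5)

def boltzmann_average(n, energy, beta=1.0):
    """Exact thermal average energy by brute-force summation over all
    2^n microstates of an n-spin system."""
    Z = 0.0
    E_avg = 0.0
    for state in itertools.product((-1, 1), repeat=n):
        e = energy(state)
        w = math.exp(-beta * e)
        Z += w
        E_avg += w * e
    return E_avg / Z

# Random weak pairwise couplings on a 12-spin toy system.
n = 12
J = {(i, j): random.uniform(-0.2, 0.2)
     for i in range(n) for j in range(i + 1, n)}

def full_energy(s):
    return sum(J[i, j] * s[i] * s[j] for (i, j) in J)

# Clustering approximation: treat spins 0-5 and 6-11 as two independent
# clusters, keeping only intra-cluster couplings.
def cluster_energy(lo, hi):
    pairs = [(i, j) for (i, j) in J if lo <= i < hi and lo <= j < hi]
    return lambda s: sum(J[i, j] * s[i - lo] * s[j - lo] for (i, j) in pairs)

exact = boltzmann_average(n, full_energy)
approx = (boltzmann_average(6, cluster_energy(0, 6))
          + boltzmann_average(6, cluster_energy(6, 12)))
print(exact, approx)
```

The exact sum runs over 2^12 microstates, while the clustered version needs only 2 × 2^6; the gap between the two averages is the price paid for ignoring inter-cluster couplings, which is exactly what the paper's error bound controls.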
2000 Iowa crash facts : a summary of motor vehicle crash statistics on Iowa roadways
DOT National Transportation Integrated Search
2000-01-01
All statistics are gathered and calculated by the Iowa Department of Transportation's Office of Driver Services. National statistics are obtained from Traffic Safety Facts 2000, published by the U.S. Department of Transportation's National...
Landau's statistical mechanics for quasi-particle models
NASA Astrophysics Data System (ADS)
Bannur, Vishnu M.
2014-04-01
Landau's formalism of statistical mechanics [following L. D. Landau and E. M. Lifshitz, Statistical Physics (Pergamon Press, Oxford, 1980)] is applied to the quasi-particle model of quark-gluon plasma. Here, one starts from the expression for the pressure and develops all of thermodynamics. It is a general formalism and consistent with our earlier studies [V. M. Bannur, Phys. Lett. B647, 271 (2007)] based on Pathria's formalism [following R. K. Pathria, Statistical Mechanics (Butterworth-Heinemann, Oxford, 1977)]. In Pathria's formalism, one starts from the expression for the energy density and develops thermodynamics. Both formalisms are consistent with thermodynamics and statistical mechanics. Under certain conditions, which are wrongly called the thermodynamic consistency relation, we recover other formalisms of quasi-particle systems, like that in M. I. Gorenstein and S. N. Yang, Phys. Rev. D52, 5206 (1995), widely studied in quark-gluon plasma.
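The two starting points can be summarized with the standard grand-canonical relations (stated here for orientation; this is textbook material, not taken from the paper itself). Landau's route takes the pressure as fundamental and derives the rest:

```latex
% Starting from the pressure p(T,\mu), the entropy and number densities
% follow by differentiation, and the energy density by the Euler relation:
s = \left(\frac{\partial p}{\partial T}\right)_{\mu}, \qquad
n = \left(\frac{\partial p}{\partial \mu}\right)_{T}, \qquad
\varepsilon = T s + \mu n - p .
% Pathria's route instead starts from \varepsilon(T,\mu) and integrates
% back to p; consistency of the two requires that they yield the same
% equation of state.
```

For a quasi-particle model, where masses depend on T and μ, the two routes need not agree automatically, which is the consistency issue the abstract refers to.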
Ensuring Positiveness of the Scaled Difference Chi-Square Test Statistic
ERIC Educational Resources Information Center
Satorra, Albert; Bentler, Peter M.
2010-01-01
A scaled difference test statistic T̃_d that can be computed from standard software for structural equation models (SEM) by hand calculations was proposed in Satorra and Bentler (Psychometrika 66:507-514, 2001). The statistic T̃_d is asymptotically equivalent to the scaled difference test statistic T̄…
Ergodic theorem, ergodic theory, and statistical mechanics
Moore, Calvin C.
2015-01-01
This perspective highlights the mean ergodic theorem established by John von Neumann and the pointwise ergodic theorem established by George Birkhoff, proofs of which were published nearly simultaneously in PNAS in 1931 and 1932. These theorems were of great significance both in mathematics and in statistical mechanics. In statistical mechanics they provided a key insight into a 60-year-old fundamental problem of the subject, namely the rationale for the hypothesis that time averages can be set equal to phase averages. The evolution of this problem is traced from the origins of statistical mechanics and Boltzmann's ergodic hypothesis to the Ehrenfests' quasi-ergodic hypothesis, and then to the ergodic theorems. We discuss communications between von Neumann and Birkhoff in the fall of 1931 leading up to the publication of these papers and related issues of priority. These ergodic theorems initiated a new field of mathematical research called ergodic theory that has thrived ever since, and we discuss some recent developments in ergodic theory that are relevant for statistical mechanics. PMID:25691697
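The equality of time and phase averages can be demonstrated numerically with an irrational rotation of the circle, a standard ergodic map; the observable and rotation number below are illustrative choices, not from the article.

```python
import math

# The irrational rotation x -> x + alpha (mod 1) is ergodic, so by
# Birkhoff's pointwise ergodic theorem the time average of an integrable
# observable converges to its space (phase) average.
alpha = math.sqrt(2) - 1            # irrational rotation number
f = lambda x: math.cos(2 * math.pi * x) ** 2

x, total = 0.1, 0.0
n = 200000
for _ in range(n):
    total += f(x)
    x = (x + alpha) % 1.0

time_avg = total / n
space_avg = 0.5                     # integral of cos^2(2*pi*x) over [0, 1]
print(time_avg, space_avg)
```

After 200,000 iterates the time average agrees with the phase average to better than one part in a thousand, which is the operational content of the "time averages equal phase averages" hypothesis.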
A Method to Predict the Structure and Stability of RNA/RNA Complexes.
Xu, Xiaojun; Chen, Shi-Jie
2016-01-01
RNA/RNA interactions are essential for genomic RNA dimerization and regulation of gene expression. Intermolecular loop-loop base pairing is a widespread and functionally important tertiary structure motif in RNA machinery. However, computational prediction of intermolecular loop-loop base pairing is challenged by the entropy and free energy calculation due to the conformational constraint and the intermolecular interactions. In this chapter, we describe a recently developed statistical mechanics-based method for the prediction of RNA/RNA complex structures and stabilities. The method is based on the virtual bond RNA folding model (Vfold). The main emphasis in the method is placed on the evaluation of the entropy and free energy for the loops, especially tertiary kissing loops. The method also uses recursive partition function calculations and two-step screening algorithm for large, complicated structures of RNA/RNA complexes. As case studies, we use the HIV-1 Mal dimer and the siRNA/HIV-1 mutant (T4) to illustrate the method.
Equations of state for explosive detonation products: The PANDA model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerley, G.I.
1994-05-01
This paper discusses a thermochemical model for calculating equations of state (EOS) for the detonation products of explosives. This model, which was first presented at the Eighth Detonation Symposium, is available in the PANDA code and is referred to here as "the PANDA model". The basic features of the PANDA model are as follows. (1) Statistical-mechanical theories are used to construct EOS tables for each of the chemical species that are to be allowed in the detonation products. (2) The ideal mixing model is used to compute the thermodynamic functions for a mixture of these species, and the composition of the system is determined from the assumption of chemical equilibrium. (3) For hydrocode calculations, the detonation product EOS are used in tabular form, together with a reactive burn model that allows description of shock-induced initiation and growth or failure as well as ideal detonation wave propagation. This model has been implemented in the three-dimensional Eulerian code, CTH.
Finite-Temperature Behavior of PdHx Elastic Constants Computed by Direct Molecular Dynamics
Zhou, X. W.; Heo, T. W.; Wood, B. C.; ...
2017-05-30
In this paper, robust time-averaged molecular dynamics has been developed to calculate finite-temperature elastic constants of a single crystal. We find that when the averaging time exceeds a certain threshold, the statistical errors in the calculated elastic constants become very small. We applied this method to compare the elastic constants of Pd and PdH0.6 at representative low (10 K) and high (500 K) temperatures. The values predicted for Pd match reasonably well with ultrasonic experimental data at both temperatures. In contrast, the predicted elastic constants for PdH0.6 only match well with ultrasonic data at 10 K; at 500 K, the predicted values are significantly lower. We hypothesize that at 500 K, the facile hydrogen diffusion in PdH0.6 alters the speed of sound, resulting in significantly reduced values of predicted elastic constants as compared to the ultrasonic experimental data. Finally, literature mechanical testing experiments seem to support this hypothesis.
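The threshold behaviour of the averaging time described above can be illustrated with a block-averaging sketch on synthetic data; the AR(1) noise model and all numbers below are assumptions for illustration, not the authors' actual MD workflow:

```python
import random
import statistics

random.seed(0)

def noisy_modulus_series(true_value=227.0, sigma=8.0, rho=0.95, n=20000):
    """Synthetic instantaneous 'elastic constant' signal: an AR(1) process
    fluctuating about the true value (a stand-in for MD thermal noise)."""
    x, series = 0.0, []
    for _ in range(n):
        x = rho * x + random.gauss(0.0, sigma)
        series.append(true_value + x)
    return series

def block_average_error(series, block):
    """Standard deviation of block means: estimates the statistical error
    of a time average taken over 'block' consecutive samples."""
    means = [statistics.fmean(series[i:i + block])
             for i in range(0, len(series) - block + 1, block)]
    return statistics.stdev(means)

s = noisy_modulus_series()
short_err = block_average_error(s, 100)
long_err = block_average_error(s, 2000)
print(short_err, long_err)  # the error shrinks as the averaging time grows
```

Once the block length is well beyond the correlation time of the noise, the error of the block mean falls off roughly as the inverse square root of the averaging time, which is the qualitative behaviour the abstract reports.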
NASA Astrophysics Data System (ADS)
der, R.
1987-01-01
The various approaches to nonequilibrium statistical mechanics may be subdivided into convolution and convolutionless (time-local) ones. While the former, put forward by Zwanzig, Mori, and others, are used most commonly, the latter are less well developed, but have proven very useful in recent applications. The aim of the present series of papers is to develop the time-local picture (TLP) of nonequilibrium statistical mechanics on a new footing and to consider its physical implications for topics such as the formulation of irreversible thermodynamics. The most natural approach to TLP is seen to derive from the Fourier-Laplace transform C̃(z) of pertinent time correlation functions, which on the physical sheet typically displays an essential singularity at z = ∞ and a number of macroscopic and microscopic poles in the lower half-plane corresponding to long- and short-lived modes, respectively, the former giving rise to the autonomous macrodynamics, whereas the latter are interpreted as doorway modes mediating the transfer of information from relevant to irrelevant channels. Possible implications of this doorway mode concept for so-called extended irreversible thermodynamics are briefly discussed. The pole structure is used for deriving new kinds of generalized Green-Kubo relations expressing macroscopic quantities, transport coefficients, e.g., by contour integrals over current-current correlation functions obeying Hamiltonian dynamics, the contour integration replacing projection. The conventional Green-Kubo relations valid for conserved quantities only are rederived for illustration.
Moreover, C̃(z) may be expressed by a Laurent series expansion in positive and negative powers of z, from which a rigorous, general, and straightforward method is developed for extracting all macroscopic quantities from so-called secularly divergent expansions of C̃(z) as obtained from the application of conventional many-body techniques to the calculation of C̃(z). The expressions are formulated as time scale expansions, which should rapidly converge if macroscopic and microscopic time scales are sufficiently well separated, i.e., if lifetime ("memory") effects are not too large.
The equation of state of Song and Mason applied to fluorine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslami, H.; Boushehri, A.
1999-03-01
An analytical equation of state is applied to calculate the compressed and saturation thermodynamic properties of fluorine. The equation of state is that of Song and Mason. It is based on a statistical mechanical perturbation theory of hard convex bodies and is a fifth-order polynomial in the density. There exist three temperature-dependent parameters: the second virial coefficient, an effective molecular volume, and a scaling factor for the average contact pair distribution function of hard convex bodies. The temperature-dependent parameters can be calculated if the intermolecular pair potential is known. However, the equation is usable with much less input than the full intermolecular potential, since the scaling factor and effective volume are nearly universal functions when expressed in suitable reduced units. The equation of state has been applied to calculate thermodynamic parameters including the critical constants, the vapor pressure curve, the compressibility factor, the fugacity coefficient, the enthalpy, the entropy, the heat capacity at constant pressure, the ratio of heat capacities, the Joule-Thomson coefficient, the Joule-Thomson inversion curve, and the speed of sound for fluorine. The agreement with experiment is good.
Experimental design, power and sample size for animal reproduction experiments.
Chapman, Phillip L; Seidel, George E
2008-01-01
The present paper concerns statistical issues in the design of animal reproduction experiments, with emphasis on the problems of sample size determination and power calculations. We include examples and non-technical discussions aimed at helping researchers avoid serious errors that may invalidate or seriously impair the validity of conclusions from experiments. Screen shots from interactive power calculation programs and basic SAS power calculation programs are presented to aid in understanding statistical power and computing power in some common experimental situations. Practical issues that are common to most statistical design problems are briefly discussed. These include one-sided hypothesis tests, power level criteria, equality of within-group variances, transformations of response variables to achieve variance equality, optimal specification of treatment group sizes, 'post hoc' power analysis and arguments for the increased use of confidence intervals in place of hypothesis tests.
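As a minimal sketch of the kind of power calculation discussed above, the normal-approximation formula for a two-sided, two-sample comparison of means can be coded directly; the exact t-based calculations performed by interactive programs or SAS would differ slightly, and the function names here are illustrative:

```python
from math import sqrt, erf

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def z_quantile(p):
    """Inverse normal CDF by bisection (accurate enough for power work)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def two_sample_power(delta, sigma, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test for detecting a
    mean difference delta with common SD sigma and n animals per group."""
    z_crit = z_quantile(1.0 - alpha / 2.0)
    ncp = delta / (sigma * sqrt(2.0 / n_per_group))
    return phi(ncp - z_crit) + phi(-ncp - z_crit)

# Effect size of one SD: about 16 animals per group give roughly 80% power.
print(round(two_sample_power(delta=1.0, sigma=1.0, n_per_group=16), 3))
```

The same routine makes the trade-offs in the abstract concrete: power rises monotonically with group size and with effect size, and halving the within-group SD (e.g. by a variance-stabilizing transformation) has the same effect on power as doubling the standardized difference.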
Curie-type paramagnetic NMR relaxation in the aqueous solution of Ni(II).
Mareš, Jiří; Hanni, Matti; Lantto, Perttu; Lounila, Juhani; Vaara, Juha
2014-04-21
Ni²⁺(aq) has been used for many decades as a model system for paramagnetic nuclear magnetic resonance (pNMR) relaxation studies. More recently, its magnetic properties and also nuclear magnetic relaxation rates have been studied computationally. We have calculated electron paramagnetic resonance and NMR parameters using quantum-mechanical (QM) computation of molecular dynamics snapshots, obtained using a polarizable empirical force field. Statistical averages of hyperfine coupling, g- and zero-field splitting tensors, as well as the pNMR shielding terms, are compared to the available experimental and computational data. In accordance with our previous work, the isotropic hyperfine coupling as well as nuclear shielding values agree well with experimental measurements for the ¹⁷O nuclei of water molecules in the first solvation shell of the nickel ion, whereas larger deviations are found for ¹H centers. We report, for the first time, the Curie-type contribution to the pNMR relaxation rate using QM calculations together with Redfield relaxation theory. The Curie relaxation mechanism is analogous to chemical shift anisotropy relaxation, well-known in diamagnetic NMR. Due to the predominance of other types of paramagnetic relaxation mechanisms for this system, it is possible to extract the Curie term only computationally. The Curie mechanism alone would result in around 16 and 20 s⁻¹ of relaxation rates (R1 and R2, respectively) for the ¹H nuclei of water molecules bonded to the Ni²⁺ center, in a magnetic field of 11.7 T. The corresponding ¹⁷O relaxation rates are around 33 and 38 s⁻¹. We also report the Curie contribution to the relaxation rate for molecules beyond the first solvation shell in a 1 M solution of Ni²⁺ in water.
Ma, K.-F.; Chan, C.-H.; Stein, R.S.
2005-01-01
The correlation between static Coulomb stress increases and aftershocks has thus far provided the strongest evidence that stress changes promote seismicity, a correlation that the Chi-Chi earthquake well exhibits. Several studies have deepened the argument by resolving stress changes on aftershock focal mechanisms, which removes the assumption that the aftershocks are optimally oriented for failure. Here one compares the percentage of planes on which failure is promoted after the main shock relative to the percentage beforehand. For Chi-Chi we find a 28% increase for thrust and an 18% increase for strike-slip mechanisms, commensurate with increases reported for other large main shocks. However, perhaps the chief criticism of static stress triggering is the difficulty in observing predicted seismicity rate decreases in the stress shadows, or sites of Coulomb stress decrease. Detection of sustained drops in seismicity rate demands a long catalog with a low magnitude of completeness and a high seismicity rate, conditions that are met at Chi-Chi. We find four lobes with statistically significant seismicity rate declines of 40-90% for 50 months, and they coincide with the stress shadows calculated for strike-slip faults, the dominant faulting mechanism. The rate drops are evident in uniform cell calculations, 100-month time series, and by visual inspection of the M ≥ 3 seismicity. An additional reason why detection of such declines has proven so rare emerges from this study: there is a widespread increase in seismicity rate during the first 3 months after Chi-Chi, and perhaps many other main shocks, that might be associated with a different mechanism. Copyright 2005 by the American Geophysical Union.
Shear Band Formation in Plastic-Bonded Explosives (PBX)
NASA Astrophysics Data System (ADS)
Dey, Thomas N.; Johnson, James N.
1997-07-01
Adiabatic shear bands can be a source of ignition and lead to detonation. At low to moderate deformation rates, 10-1000 s⁻¹, two other mechanisms can also give rise to shear bands. These mechanisms are: (1) softening caused by micro-cracking and (2) a constitutive response with a non-associated flow rule as is observed in granular material such as soil. Brittle behavior at small strains and the granular nature of HMX suggest that PBX-9501 constitutive behavior may be similar to sand. A constitutive model for each of these mechanisms is studied in a series of calculations. A viscoelastic constitutive model for PBX-9501 softens via a statistical crack model, based on the work of Dienes (1986). A sand model is used to provide a non-associated flow rule. Both models generate shear band formation at 1-2% strain at nominal strain rates at and below 1000 s⁻¹. Shear band formation is suppressed at higher strain rates. The sand model gives qualitative agreement for the location and orientation of shear bands observed in a punch experiment. Both mechanisms may accelerate the formation of adiabatic shear bands.
NASA Technical Reports Server (NTRS)
Dewitt, H. E.; Hubbard, W. B.
1976-01-01
A large quantity of data on the thermodynamic properties of hydrogen-helium metallic liquids have been obtained in extended computer calculations in which a Monte Carlo code essentially identical to that described by Hubbard (1972) was used. A model free energy for metallic hydrogen with a relatively small mass fraction of helium is discussed, taking into account the definition of variables, a procedure for choosing the free energy, values for the fitting parameters, and the evaluation of the entropy constants. Possibilities concerning a use of the obtained data in studies of the interiors of the outer planets are briefly considered.
Compaction Behavior of Granular Materials
NASA Astrophysics Data System (ADS)
Endicott, Mark R.; Kenkre, V. M.; Glass, S. Jill; Hurd, Alan J.
1996-03-01
We report the results of our recent study of the compaction of granular materials. A theoretical model is developed for the description of the compaction of granular materials exemplified by granulated ceramic powders. Its predictions are compared to observations of uniaxial compaction tests of ceramic granules of PMN-PT, spray-dried alumina, and rutile. The theoretical model employs a volume-based statistical mechanics treatment and an activation analogy. Results of a computer simulation of random packing of discs in two dimensions are also reported. The effects of the type of particle size distribution and of other parameters of that distribution on the calculated quantities are discussed. We examine the implications of the results of the simulation for the theoretical model.
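A two-dimensional random disc packing simulation of the kind mentioned above can be sketched with a random sequential addition scheme; the disc radius, the attempt count, and stopping after a fixed number of trials are illustrative assumptions rather than the authors' actual protocol:

```python
import random

random.seed(3)

def rsa_packing(radius=0.05, attempts=20000):
    """Random sequential addition of equal discs in the unit square:
    propose random centres, accept only those that do not overlap any
    previously accepted disc, and return the resulting area fraction."""
    centres = []
    lo, hi = radius, 1.0 - radius          # keep discs fully inside the box
    min_d2 = (2.0 * radius) ** 2           # squared contact distance
    for _ in range(attempts):
        p = (random.uniform(lo, hi), random.uniform(lo, hi))
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= min_d2
               for q in centres):
            centres.append(p)
    return len(centres) * 3.141592653589793 * radius * radius

frac = rsa_packing()
print(frac)  # stays below the 2D RSA jamming limit of about 0.547
```

A monodisperse case like this is the simplest baseline; introducing a distribution of radii, as the abstract's discussion of particle size distributions suggests, changes the attainable fraction and is a natural extension of the same loop.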
NASA Astrophysics Data System (ADS)
Ingber, Lester
1985-02-01
This paper is an essential addendum to a previous paper [L. Ingber,
Study of optimum methods of optical communication
NASA Technical Reports Server (NTRS)
Harger, R. O.
1972-01-01
Optimum methods of optical communication are described, accounting for the effects of the turbulent atmosphere and of quantum mechanics, both by the semi-classical method and by the full quantum theoretical model. A concerted effort is made to apply the techniques of communication theory to the novel problems of optical communication through careful study of realistic models and their statistical descriptions, determination of appropriate optimum structures, and calculation of their performance, compared insofar as possible to conventional and other suboptimal systems. In this unified way the bounds on performance and the structure of optimum communication systems for transmission of information, imaging, tracking, and estimation can be determined for optical channels.
NASA Astrophysics Data System (ADS)
Dan, Li; Guo, Li-Xin; Li, Jiang-Ting; Chen, Wei; Yan, Xu; Huang, Qing-Qing
2017-09-01
The expression of the complex dielectric permittivity for non-magnetized, fully ionized dusty plasma is obtained based on the kinetic equation in the Fokker-Planck-Landau collision model and the charging equation of the statistical theory. The influences of the density, the average size of the dust grains, and the equilibrium charge number of the dust particles on the attenuation properties of electromagnetic waves in fully ionized dusty plasma are investigated by calculating the attenuation constant. In addition, the attenuation characteristics of weakly ionized and fully ionized dusty plasmas are compared. The results enrich the understanding of the physical mechanisms of microwave attenuation in fully ionized dusty plasma and provide a theoretical basis for future studies.
The uniform quantized electron gas revisited
NASA Astrophysics Data System (ADS)
Lomba, Enrique; Høye, Johan S.
2017-11-01
In this article we continue and extend our recent work on the correlation energy of the quantized electron gas of uniform density at temperature T=0 . As before, we utilize the methods, properties, and results obtained by means of classical statistical mechanics. These were extended to quantized systems via the Feynman path integral formalism. The latter translates the quantum problem into a classical polymer problem in four dimensions. Again, the well known RPA (random phase approximation) is recovered as a basic result which we then modify and improve upon. Here we analyze the condition of thermodynamic self-consistency. Our numerical calculations exhibit a remarkable agreement with well known results of a standard parameterization of Monte Carlo correlation energies.
NASA Technical Reports Server (NTRS)
Sohrab, Siavash H.; Piltch, Nancy (Technical Monitor)
2000-01-01
A scale-invariant model of statistical mechanics is applied to present invariant forms of mass, energy, linear, and angular momentum conservation equations in reactive fields. The resulting conservation equations at molecular-dynamic scale are solved by the method of large activation energy asymptotics to describe the hydro-thermo-diffusive structure of laminar premixed flames. The predicted temperature and velocity profiles are in agreement with the observations. Also, with realistic physico-chemical properties and chemical-kinetic parameters for a single-step overall combustion of stoichiometric methane-air premixed flame, the laminar flame propagation velocity of 42.1 cm/s is calculated in agreement with the experimental value.
Löytynoja, T; Niskanen, J; Jänkälä, K; Vahtras, O; Rinkevicius, Z; Ågren, H
2014-11-20
Using ethanol-water solutions as illustration, we demonstrate the capability of the hybrid quantum mechanics/molecular mechanics (QM/MM) paradigm to simulate core photoelectron spectroscopy: the binding energies and the chemical shifts. An integrated approach with QM/MM binding energy calculations coupled to preceding molecular dynamics sampling is adopted to generate binding energies averaged over the solute-solvent configurations available at a particular temperature and pressure, thus allowing for a statistical assessment with confidence levels for the final binding energies. The results are analyzed in terms of the contributions to the molecular mechanics model (electrostatic, polarization, and van der Waals) with atom or bond granulation of the corresponding MM charge and polarizability force fields. The role of extramolecular charge transfer screening of the core-hole and explicit hydrogen bonding is studied by extending the QM core to cover the first solvation shell. The results are compared to those obtained from pure electrostatic and polarizable continuum models. In particular, the dependence of the carbon 1s binding energies on the ethanol concentration is studied. Our results indicate that QM/MM can be used as an all-encompassing model to study photoelectron binding energies and chemical shifts in solvent environments.
NASA Technical Reports Server (NTRS)
Wallace, G. R.; Weathers, G. D.; Graf, E. R.
1973-01-01
The statistics of filtered pseudorandom digital sequences called hybrid-sum sequences, formed from the modulo-two sum of several maximum-length sequences, are analyzed. The results indicate that a relation exists between the statistics of the filtered sequence and the characteristic polynomials of the component maximum length sequences. An analysis procedure is developed for identifying a large group of sequences with good statistical properties for applications requiring the generation of analog pseudorandom noise. By use of the analysis approach, the filtering process is approximated by the convolution of the sequence with a sum of unit step functions. A parameter reflecting the overall statistical properties of filtered pseudorandom sequences is derived. This parameter is called the statistical quality factor. A computer algorithm to calculate the statistical quality factor for the filtered sequences is presented, and the results for two examples of sequence combinations are included. The analysis reveals that the statistics of the signals generated with the hybrid-sum generator are potentially superior to the statistics of signals generated with maximum-length generators. Furthermore, fewer calculations are required to evaluate the statistics of a large group of hybrid-sum generators than are required to evaluate the statistics of the same size group of approximately equivalent maximum-length sequences.
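A hybrid-sum generator of the kind analyzed above can be sketched by XOR-ing (modulo-two summing) the outputs of two Fibonacci LFSRs; the specific primitive polynomials used here (x⁴+x+1 and x⁵+x²+1) are chosen for illustration and are not taken from the paper:

```python
def lfsr(taps, nbits, length, seed=1):
    """Fibonacci LFSR over GF(2): output reg[0] each step and feed back the
    XOR of the tapped bits. A primitive feedback polynomial yields a
    maximum-length sequence of period 2**nbits - 1."""
    reg = [(seed >> i) & 1 for i in range(nbits)]
    out = []
    for _ in range(length):
        out.append(reg[0])
        fb = 0
        for t in taps:
            fb ^= reg[t]
        reg = reg[1:] + [fb]
    return out

# x^4 + x + 1   ->  s[k+4] = s[k+1] ^ s[k], period 15
seq4 = lfsr((0, 1), 4, 930)
# x^5 + x^2 + 1 ->  s[k+5] = s[k+2] ^ s[k], period 31
seq5 = lfsr((0, 2), 5, 930)

# Hybrid-sum sequence: modulo-two sum of the two maximum-length sequences.
# Coprime component periods give the combined period lcm(15, 31) = 465.
hybrid = [a ^ b for a, b in zip(seq4, seq5)]
print(hybrid[:465] == hybrid[465:930])  # True
```

The long combined period from short registers is what makes hybrid-sum generation attractive; the paper's contribution is the statistical quality factor for ranking such combinations after filtering, which this sketch does not attempt.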
Nursing students' mathematic calculation skills.
Rainboth, Lynde; DeMasi, Chris
2006-12-01
This mixed method study used a pre-test/post-test design to evaluate the efficacy of a teaching strategy in improving beginning nursing student learning outcomes. During a 4-week student teaching period, a convenience sample of 54 sophomore level nursing students were required to complete calculation assignments, taught one calculation method, and mandated to attend medication calculation classes. These students completed pre- and post-math tests and a major medication mathematic exam. Scores from the intervention student group were compared to those achieved by the previous sophomore class. Results demonstrated a statistically significant improvement from pre- to post-test and the students who received the intervention had statistically significantly higher scores on the major medication calculation exam than did the students in the control group. The evaluation completed by the intervention group showed that the students were satisfied with the method and outcome.
Imprints of magnetic power and helicity spectra on radio polarimetry statistics
NASA Astrophysics Data System (ADS)
Junklewitz, H.; Enßlin, T. A.
2011-06-01
The statistical properties of turbulent magnetic fields in radio-synchrotron sources should be imprinted on the statistics of polarimetric observables. In search of these imprints, i.e. characteristic modifications of the polarimetry statistics caused by magnetic field properties, we calculate correlation and cross-correlation functions from a set of observables that contain the total intensity I, the polarized intensity P, and the Faraday depth φ. The correlation functions are evaluated for all combinations of observables up to fourth order in the magnetic field B. We derive these analytically as far as possible and from first principles, using only some basic assumptions such as Gaussian statistics for the underlying magnetic field in the observed region and statistical homogeneity. We further assume some simplifications to reduce the complexity of the calculations, because for a start we were interested in a proof of concept. Using this statistical approach, we show that it is possible to gain information about the helical part of the magnetic power spectrum via the correlation functions ⟨P(k⊥) φ(k′⊥) φ(k″⊥)⟩_B and ⟨I(k⊥) φ(k′⊥) φ(k″⊥)⟩_B. Using this insight, we construct an easy-to-use test for helicity called LITMUS (Local Inference Test for Magnetic fields which Uncovers heliceS), which gives a spectrally integrated measure of helicity. For now, all calculations are given in a Faraday-free case, but set up so that Faraday rotational effects can be included later.
12 CFR 324.162 - Mechanics of risk-weighted asset calculation.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 5 2014-01-01 2014-01-01 false Mechanics of risk-weighted asset calculation. 324.162 Section 324.162 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION REGULATIONS AND... Mechanics of risk-weighted asset calculation. (a) If an FDIC-supervised institution does not qualify to use...
Hagiwara, Yohsuke; Tateno, Masaru
2010-10-20
We review the recent research on the functional mechanisms of biological macromolecules using theoretical methodologies coupled to ab initio quantum mechanical (QM) treatments of reaction centers in proteins and nucleic acids. Since in most cases such biological molecules are large, the computational costs of performing ab initio calculations for the entire structures are prohibitive. Instead, simulations coupled with molecular mechanics (MM) calculations are crucial to evaluate the long-range electrostatic interactions, which significantly affect the electronic structures of biological macromolecules. Thus, we focus our attention on the methodologies/schemes and applications of joint QM/MM calculations, and discuss the critical issues to be elucidated in biological macromolecular systems. © 2010 IOP Publishing Ltd
A new device to test cutting efficiency of mechanical endodontic instruments
Rubini, Alessio Giansiracusa; Plotino, Gianluca; Al-Sudani, Dina; Grande, Nicola M.; Putorti, Ermanno; Sonnino, GianPaolo; Cotti, Elisabetta; Testarelli, Luca; Gambarini, Gianluca
2014-01-01
Background The purpose of the present study was to introduce a new device specifically designed to evaluate the cutting efficiency of mechanically driven endodontic instruments. Material/Methods Twenty new Reciproc R25 (VDW, Munich, Germany) files were investigated in the new device developed to test the cutting ability of endodontic instruments. The device consists of a main frame, to which a mobile plastic support for the hand-piece is connected, and a stainless-steel block containing a Plexiglas block against which the cutting efficiency of the instruments was tested. The length of the block cut in 1 minute was measured in a computerized program with a precision of 0.1 mm. The instruments were activated using a torque-controlled motor (Silver Reciproc; VDW, Munich, Germany) in a reciprocating movement by the “Reciproc ALL” program (Group 1) and in counter-clockwise rotation at 300 rpm (Group 2). Means and standard deviations of each group were calculated and the data were statistically analyzed with a one-way ANOVA test (P<0.05). Results The mean cut in the Plexiglas block was 8.6 mm (SD=0.6 mm) for Reciproc in reciprocation (Group 1) and 8.9 mm (SD=0.7 mm) for Reciproc in rotation (Group 2). There was no statistically significant difference between the 2 groups investigated (P>0.05). Conclusions The cutting testing device evaluated in the present study was reliable and easy to use and may be effectively used to test the cutting efficiency of both rotary and reciprocating mechanical endodontic instruments. PMID:24603777
The Statistical Interpretation of Classical Thermodynamic Heating and Expansion Processes
ERIC Educational Resources Information Center
Cartier, Stephen F.
2011-01-01
A statistical model has been developed and applied to interpret thermodynamic processes typically presented from the macroscopic, classical perspective. Through this model, students learn and apply the concepts of statistical mechanics, quantum mechanics, and classical thermodynamics in the analysis of the (i) constant volume heating, (ii)…
Statistical Learning of Phonetic Categories: Insights from a Computational Approach
ERIC Educational Resources Information Center
McMurray, Bob; Aslin, Richard N.; Toscano, Joseph C.
2009-01-01
Recent evidence (Maye, Werker & Gerken, 2002) suggests that statistical learning may be an important mechanism for the acquisition of phonetic categories in the infant's native language. We examined the sufficiency of this hypothesis and its implications for development by implementing a statistical learning mechanism in a computational model…
NASA Astrophysics Data System (ADS)
Taniya, Abraham; Deepthi, Murali; Padmanabhan, Alapat
2018-06-01
Recent calculations on the change in radial dimensions of reacting (growing) polyethylene in the gas phase experiencing Lennard-Jones and Kihara type potentials revealed that a single reacting polyethylene molecule does not experience polymer collapse. This implies that a transition occurs that is the converse of what happens when molten polyethylene crystallizes, i.e. transforms from a random-coil-like structure to a folded rigid-rod type structure. The predicted behaviour of growing polyethylene was explained by treating the head of the growing polymer chain as myopic, whereas the whole chain (i.e. under equilibrium conditions) was treated as having normal vision; that is, the growing chain does not see the attractive part of the Lennard-Jones or Kihara potentials. In this paper we provide further proof for this argument in two ways. Firstly, we carry forward the exact enumeration calculations on growing self-avoiding walks reported in that paper to larger numbers of steps by using Monte Carlo type calculations. We thereby assign physical significance to the connective constant of self-avoiding walks, which until now was treated as a purely abstract mathematical entity. Secondly, since a reacting polymer molecule that grows by addition polymerisation sees only one step ahead at a time, we extend this calculation by estimating the average atmosphere for molecules with repulsive potential only (growing self-avoiding walks in two dimensions) that look two, three, four, five, ... steps ahead. Our calculation shows that the arguments used in the previous work are correct.
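A Monte Carlo estimate of the connective constant from growing self-avoiding walks, as described above, can be sketched with standard Rosenbluth sampling, where the "atmosphere" is the number of empty lattice neighbours of the growing head; the step count and sample count below are illustrative assumptions:

```python
import random

random.seed(1)

def rosenbluth_walk(n):
    """Grow one self-avoiding walk on the square lattice step by step,
    choosing uniformly among the empty neighbours (the atmosphere of the
    growing head). Returns the Rosenbluth weight prod(a_i), or 0.0 if the
    walk traps itself before reaching n steps."""
    pos = (0, 0)
    occupied = {pos}
    weight = 1.0
    for _ in range(n):
        x, y = pos
        atmosphere = [p for p in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                      if p not in occupied]
        if not atmosphere:
            return 0.0
        weight *= len(atmosphere)
        pos = random.choice(atmosphere)
        occupied.add(pos)
    return weight

# E[weight] equals the number c_n of n-step self-avoiding walks, so
# mean_weight**(1/n) approximates the connective constant for large n.
n, samples = 20, 20000
mean_weight = sum(rosenbluth_walk(n) for _ in range(samples)) / samples
mu_est = mean_weight ** (1.0 / n)
print(round(mu_est, 2))
```

For 20-step walks this estimator lands near 2.8, somewhat above the asymptotic square-lattice connective constant μ ≈ 2.638 because of the finite-length correction, which illustrates why the abstract's extrapolation to larger step numbers matters.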
Snow, Mathew S.; Clark, Sue B.; Morrison, Samuel S.; ...
2015-10-01
Particulate transport represents an important mechanism for actinides and fission products at the Earth's surface; soil samples taken in the early 1970's near the Subsurface Disposal Area (SDA) at Idaho National Laboratory (INL) provide a case study for examining the mechanisms and characteristics of actinide transport under arid conditions. Transuranic waste was disposed via shallow land burial at the SDA until shortly after a flooding event that occurred in 1969. In this study we analyze soils collected in the early 1970's for ¹³⁷Cs, ²⁴¹Am, and Pu using a combination of radiometric and mass spectrometric techniques. Two distinct ²⁴⁰Pu/²³⁹Pu isotopic ratios are observed for contamination from the SDA, with values ranging from at least 0.059 to 0.069. ²⁴¹Am concentrations are observed to increase only slightly in 0-4 cm soils over the ~40 year period since soil sampling, contrary to Markham's previous hypothesis that ²⁴¹Pu is principally associated with the 0-4 cm soil fractions (Markham 1978). The lack of statistical difference in ²⁴¹Am/²³⁹⁺²⁴⁰Pu ratios with depth suggests mechanical transport and mixing of discrete contaminated particles under arid conditions. Occasional samples beyond the northeastern corner are observed to contain anomalously high Pu concentrations with corresponding low ²⁴⁰Pu/²³⁹Pu atom ratios, suggesting the occurrence of "hot particles"; application of a background Pu subtraction results in calculated Pu atom ratios for the "hot particles" which are statistically similar to those observed in the northeastern corner. Taken together, our data suggest that flooding resulted in mechanical transport of contaminated particles into the area between the SDA and the flood containment dike in the northeastern corner, following which subsequent contamination spreading resulted from wind transport of discrete particles.
Lin, Hai; Zhao, Yan; Tishchenko, Oksana; Truhlar, Donald G
2006-09-01
The multiconfiguration molecular mechanics (MCMM) method is a general algorithm for generating potential energy surfaces for chemical reactions by fitting high-level electronic structure data with the help of molecular mechanical (MM) potentials. It was previously developed as an extension of standard MM to reactive systems by inclusion of multidimensional resonance interactions between MM configurations corresponding to specific valence bonding patterns, with the resonance matrix element obtained from quantum mechanical (QM) electronic structure calculations. In particular, the resonance matrix element is obtained by multidimensional interpolation employing a finite number of geometries at which electronic-structure calculations of the energy, gradient, and Hessian are carried out. In this paper, we present a strategy for combining MCMM with hybrid quantum mechanical molecular mechanical (QM/MM) methods. In the new scheme, electronic-structure information for obtaining the resonance integral is obtained by means of hybrid QM/MM calculations instead of fully QM calculations. As such, the new strategy can be applied to the studies of very large reactive systems. The new MCMM scheme is tested for two hydrogen-transfer reactions. Very encouraging convergence is obtained for rate constants including tunneling, suggesting that the new MCMM method, called QM/MM-MCMM, is a very general, stable, and efficient procedure for generating potential energy surfaces for large reactive systems. The results are found to converge well with respect to the number of Hessians. The results are also compared to calculations in which the resonance integral data are obtained by pure QM, and this illustrates the sensitivity of reaction rate calculations to the treatment of the QM-MM border. For the smaller of the two systems, comparison is also made to direct dynamics calculations in which the potential energies are computed quantum mechanically on the fly.
Pompei-Reynolds, Renée C; Kanavakis, Georgios
2014-08-01
The manufacturing process for copper-nickel-titanium archwires is technique sensitive. The primary aim of this investigation was to examine the interlot consistency of the mechanical properties of copper-nickel-titanium wires from 2 manufacturers. Wires of 2 sizes (0.016 and 0.016 × 0.022 in) and 3 advertised austenite finish temperatures (27°C, 35°C, and 40°C) from 2 manufacturers were tested for transition temperature ranges and force delivery using differential scanning calorimetry and the 3-point bend test, respectively. Variations of these properties were analyzed for statistical significance by calculating the F statistic for equality of variances for transition temperature and force delivery in each group of wires. All statistical analyses were performed at the 0.05 level of significance. Statistically significant interlot variations in austenite finish were found for the 0.016 in/27°C (P = 0.041) and 0.016 × 0.022 in/35°C (P = 0.048) wire categories, and in austenite start for the 0.016 × 0.022 in/35°C wire category (P = 0.01). In addition, significant variations in force delivery were found between the 2 manufacturers for the 0.016 in/27°C (P = 0.002), 0.016 in/35.0°C (P = 0.049), and 0.016 × 0.022 in/35°C (P = 0.031) wires. Orthodontic wires of the same material, dimension, and manufacturer but from different production lots do not always have similar mechanical properties. Clinicians should be aware that copper-nickel-titanium wires might not always deliver the expected force, even when they come from the same manufacturer, because of interlot variations in the performance of the material. Copyright © 2014 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
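The interlot comparison above rests on a variance-ratio F statistic. A minimal sketch of that calculation is given below; the force-delivery readings and lot labels are invented for illustration and are not data from the study.

```python
import statistics

def variance_ratio_f(sample_a, sample_b):
    """F statistic for equality of variances: larger sample variance over smaller."""
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    return max(va, vb) / min(va, vb)

# Hypothetical force-delivery readings (grams) from two production lots
lot_1 = [98.0, 101.5, 99.2, 100.8, 97.9]
lot_2 = [95.0, 104.0, 99.5, 106.2, 93.8]

f_stat = variance_ratio_f(lot_1, lot_2)
# f_stat would then be compared against the critical F value at the 0.05 level
```

Because the larger variance is always placed in the numerator, the statistic is symmetric in its arguments.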
Employing Deceptive Dynamic Network Topology Through Software-Defined Networking
2014-03-01
manage economies, banking, and businesses, to the way we gather intelligence and militaries wage war. With computer networks and the Internet, we have seen...space, along with other generated statistics, similar to that performed by the Ant Census project. As we have shown, there is an extensive and diverse...calculated RTT for each probe. In the ping statistics, we are presented the details of probes sent and responses received, and the calculated packet loss
Statistical analysis of QC data and estimation of fuel rod behaviour
NASA Astrophysics Data System (ADS)
Heins, L.; Groß, H.; Nissen, K.; Wunderlich, F.
1991-02-01
The in-reactor behaviour of fuel rods is influenced by many parameters. As far as fabrication is concerned, fuel pellet diameter and density, and inner cladding diameter are important examples. Statistical analyses of quality control data show a scatter of these parameters within the specified tolerances. At present it is common practice to use a combination of superimposed unfavorable tolerance limits (a worst-case dataset) in fuel rod design calculations; distributions are not considered. The results obtained in this way are very conservative, but the degree of conservatism is difficult to quantify. Probabilistic calculations based on distributions allow the worst-case dataset to be replaced by a dataset leading to results with a known, defined conservatism. This is achieved by response surface methods and Monte Carlo calculations on the basis of the statistical distributions of the important input parameters. The procedure is illustrated by means of two examples.
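The contrast between a worst-case dataset and a probabilistic Monte Carlo treatment can be sketched as follows; all nominal dimensions, sigmas, and the 3-sigma tolerance convention are invented for illustration, not taken from the paper.

```python
import random

random.seed(0)

# Hypothetical fabrication parameters (mm): (mean, standard deviation).
PELLET_D = (8.05, 0.005)   # fuel pellet diameter
CLAD_ID  = (8.22, 0.007)   # cladding inner diameter

def diametral_gap(pellet, clad):
    """Pellet-cladding diametral gap."""
    return clad - pellet

# Worst-case dataset: superimpose unfavorable 3-sigma tolerance limits
worst = diametral_gap(PELLET_D[0] + 3 * PELLET_D[1],
                      CLAD_ID[0] - 3 * CLAD_ID[1])

# Probabilistic dataset: Monte Carlo sampling of the input distributions
gaps = sorted(diametral_gap(random.gauss(*PELLET_D), random.gauss(*CLAD_ID))
              for _ in range(20000))
gap_5th = gaps[int(0.05 * len(gaps))]   # bound with a known 95% conservatism

# The probabilistic 5th-percentile gap is less pessimistic than the stacked
# worst case, and its degree of conservatism is explicitly quantified.
```

The point of the sketch is the comparison: stacking tolerance limits compounds improbable extremes, while the sampled percentile carries a stated confidence level.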
To P or Not to P: Backing Bayesian Statistics.
Buchinsky, Farrel J; Chadha, Neil K
2017-12-01
In biomedical research, it is imperative to differentiate chance variation from truth before we generalize what we see in a sample of subjects to the wider population. For decades, we have relied on null hypothesis significance testing, where we calculate P values for our data to decide whether to reject a null hypothesis. This methodology is subject to substantial misinterpretation and errant conclusions. Instead of working backward by calculating the probability of our data if the null hypothesis were true, Bayesian statistics allow us instead to work forward, calculating the probability of our hypothesis given the available data. This methodology gives us a mathematical means of incorporating our "prior probabilities" from previous study data (if any) to produce new "posterior probabilities." Bayesian statistics tell us how confidently we should believe what we believe. It is time to embrace and encourage their use in our otolaryngology research.
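The "working forward" update described above is Bayes' rule. A minimal numeric sketch follows; the prior and likelihood values are hypothetical, chosen only to show the mechanics of turning a prior probability into a posterior.

```python
def posterior(prior, p_data_given_h, p_data_given_not_h):
    """Bayes' rule: P(H|D) = P(D|H)P(H) / [P(D|H)P(H) + P(D|~H)(1 - P(H))]."""
    numerator = p_data_given_h * prior
    return numerator / (numerator + p_data_given_not_h * (1.0 - prior))

# Hypothetical numbers: a 30% prior from earlier studies, and likelihoods of
# observing the new data under the hypothesis and under its negation.
updated = posterior(prior=0.30, p_data_given_h=0.80, p_data_given_not_h=0.10)
# updated = 0.24 / (0.24 + 0.07), roughly 0.77
```

Note the forward direction: the output is the probability of the hypothesis given the data, not the probability of the data under a null hypothesis.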
Biermann, A; Geissler, A
2016-09-01
Diagnosis-related groups (DRGs) have been used to reimburse hospital services in Germany since 2003/04. Like any other reimbursement system, DRGs offer specific incentives for hospitals that may lead to unintended consequences for patients. In the German context, specific procedures and their documentation are suspected of being performed primarily to increase hospital revenues. Mechanical ventilation of patients, and particularly the duration of ventilation, which is an important variable for DRG classification, is often discussed as being among these procedures. The aim of this study was to examine incentives created by the German DRG-based payment system with regard to mechanical ventilation and to identify factors that explain the considerable increase in mechanically ventilated patients in recent years. Moreover, the assumption that hospitals perform mechanical ventilation in order to gain economic benefits was examined. To gain insights into the development of the number of mechanically ventilated patients, patient-level data provided by the German Federal Statistical Office and the German Institute for the Hospital Remuneration System were analyzed. The type of ventilation performed, the total number of ventilation hours, the age distribution, mortality, and the DRG distribution for mechanical ventilation were calculated using methods of descriptive and inferential statistics. Furthermore, changes in DRG definitions and changes in respiratory medicine were compared for the years 2005-2012. Since the introduction of the DRG-based payment system in Germany, the hours of ventilation and the number of mechanically ventilated patients have increased substantially, while mortality has decreased. During the same period there has been a shift to less invasive ventilation methods. The age distribution has shifted to higher age groups. A ventilation duration determined by DRG definitions could not be found.
Owing to advances in respiratory medicine, new ventilation methods that are less prone to complications have been introduced. This development has simultaneously improved survival rates. There was no evidence supporting the assumption that the duration of mechanical ventilation is influenced by the time intervals relevant for DRG grouping. Instead, operational routines, such as staff availability within the early and late shifts of the hospital, presumably have a significant impact on the termination of mechanical ventilation.
Thieler, E. Robert; Himmelstoss, Emily A.; Zichichi, Jessica L.; Ergul, Ayhan
2009-01-01
The Digital Shoreline Analysis System (DSAS) version 4.0 is a software extension to ESRI ArcGIS v.9.2 and above that enables a user to calculate shoreline rate-of-change statistics from multiple historic shoreline positions. A user-friendly interface of simple buttons and menus guides the user through the major steps of shoreline change analysis. Components of the extension and user guide include (1) instruction on the proper way to define a reference baseline for measurements, (2) automated and manual generation of measurement transects and metadata based on user-specified parameters, and (3) output of calculated rates of shoreline change and other statistical information. DSAS computes shoreline rates of change using four different methods: (1) endpoint rate, (2) simple linear regression, (3) weighted linear regression, and (4) least median of squares. The standard error, correlation coefficient, and confidence interval are also computed for the simple and weighted linear-regression methods. The results of all rate calculations are output to a table that can be linked to the transect file by a common attribute field. DSAS is intended to facilitate the shoreline change-calculation process and to provide rate-of-change information and the statistical data necessary to establish the reliability of the calculated results. The software is also suitable for any generic application that calculates positional change over time, such as assessing rates of change of glacier limits in sequential aerial photos, river edge boundaries, land-cover changes, and so on.
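Two of the four DSAS rate methods, the endpoint rate and simple linear regression, are easy to sketch. The transect positions and dates below are hypothetical, and the code is an illustration of the statistics, not DSAS itself.

```python
def endpoint_rate(years, positions):
    """Endpoint rate: net shoreline movement between the oldest and newest
    shorelines divided by the elapsed time."""
    return (positions[-1] - positions[0]) / (years[-1] - years[0])

def linreg_rate(years, positions):
    """Simple linear regression slope of shoreline position vs. time."""
    n = len(years)
    mx, my = sum(years) / n, sum(positions) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(years, positions))
    sxx = sum((x - mx) ** 2 for x in years)
    return sxy / sxx

# Hypothetical transect: shoreline position (m) relative to a baseline
yrs = [1960, 1978, 1994, 2005]
pos = [120.0, 111.5, 104.0, 98.0]

epr   = endpoint_rate(yrs, pos)   # m/yr; negative means retreat
slope = linreg_rate(yrs, pos)     # m/yr, using all shorelines
```

The regression rate uses every shoreline position, whereas the endpoint rate discards the intermediate ones, which is exactly why DSAS reports both.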
Statistics or How to Know Your Onions.
ERIC Educational Resources Information Center
Hawkins, Anne S.
1986-01-01
Using calculators (and computers) to develop an understanding and appreciation of statistical ideas is advocated. The notion that manual computation is a prerequisite for developing these concepts is refuted through several examples. (MNS)
Rainfall threshold calculation for debris flow early warning in areas with scarcity of data
NASA Astrophysics Data System (ADS)
Pan, Hua-Li; Jiang, Yuan-Jun; Wang, Jun; Ou, Guo-Qiang
2018-05-01
Debris flows are natural disasters that frequently occur in mountainous areas, usually accompanied by serious losses of lives and property. One of the most commonly used approaches to mitigate the risk associated with debris flows is the implementation of early warning systems based on well-calibrated rainfall thresholds. However, many mountainous areas have little data regarding rainfall and hazards, especially in debris-flow-forming regions. Therefore, the traditional statistical analysis method, which determines the empirical relationship between rainstorms and debris flow events, cannot be effectively used to calculate reliable rainfall thresholds in these areas. After the severe Wenchuan earthquake, large quantities of loose deposits accumulated in the gullies, resulting in several debris flow events, and the triggering rainfall threshold decreased markedly. To obtain a reliable and accurate rainfall threshold and improve the accuracy of debris flow early warning, this paper develops a quantitative method, suited to debris flow triggering mechanisms in meizoseismal areas, to identify rainfall thresholds for debris flow early warning in areas with a scarcity of data, based on the initiation mechanism of hydraulically driven debris flow. First, we studied the characteristics of the study area, including meteorology, hydrology, topography, and the physical characteristics of the loose solid materials. Then, the rainfall threshold was calculated from the initiation mechanism of the hydraulic debris flow. The comparison with other models and with alternate configurations demonstrates that the proposed rainfall threshold curve is a function of the antecedent precipitation index (API) and 1 h rainfall. To test the proposed method, we selected the Guojuanyan gully, a typical debris flow valley located in the meizoseismal area of the Wenchuan earthquake that experienced several debris flow events during the 2008-2013 period, as a case study.
The comparison with other threshold models and configurations shows that the selected approach is the most promising starting point for further studies on debris flow early warning systems in areas with a scarcity of data.
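The threshold curve above depends on the antecedent precipitation index (API). One common recursive formulation of the API, together with a hypothetical power-law threshold curve, is sketched below; the decay constant `k` and the curve parameters `a`, `b` are invented placeholders, not the values calibrated in the study.

```python
def antecedent_precipitation_index(daily_rain_mm, k=0.85):
    """Common recursive API: API_t = k * API_{t-1} + P_t, where k is an
    empirical daily decay constant (hypothetical value here)."""
    api = 0.0
    for p in daily_rain_mm:
        api = k * api + p
    return api

def exceeds_threshold(api, rain_1h_mm, a=25.0, b=-0.3):
    """Hypothetical threshold curve R_1h = a * API**b: warn when the observed
    1 h rainfall lies above the curve for the current API."""
    return rain_1h_mm > a * max(api, 1e-6) ** b

# Hypothetical 3-day rainfall record (mm/day), most recent day last
api_now = antecedent_precipitation_index([0.0, 0.0, 20.0])
warn = exceeds_threshold(api_now, rain_1h_mm=15.0)
```

The inverse dependence (`b < 0`) encodes the physical idea that wetter antecedent conditions lower the 1 h rainfall needed to trigger a flow.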
Calculation of precise firing statistics in a neural network model
NASA Astrophysics Data System (ADS)
Cho, Myoung Won
2017-08-01
A precise prediction of neural firing dynamics is requisite to understand the function of, and the learning process in, a biological neural network that works depending on exact spike timings. Basically, the prediction of firing statistics is a delicate many-body problem because the firing probability of a neuron at a given time is determined by the summation over all effects from past firing states. A neural network model with the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.
Statistics Using Just One Formula
ERIC Educational Resources Information Center
Rosenthal, Jeffrey S.
2018-01-01
This article advocates that introductory statistics be taught by basing all calculations on a single simple margin-of-error formula and deriving all of the standard introductory statistical concepts (confidence intervals, significance tests, comparisons of means and proportions, etc) from that one formula. It is argued that this approach will…
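The idea of deriving confidence intervals and significance tests from one margin-of-error formula can be sketched with the familiar large-sample formula for a proportion; this is one common simple formula, not necessarily the exact one the article builds on.

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """95% margin of error for a sample proportion: z * sqrt(p(1-p)/n)."""
    return z * math.sqrt(p_hat * (1.0 - p_hat) / n)

def confidence_interval(p_hat, n):
    """Confidence interval derived directly from the margin of error."""
    moe = margin_of_error(p_hat, n)
    return (p_hat - moe, p_hat + moe)

def significant_vs(p_hat, n, p0):
    """Significance test from the same formula: reject p0 iff it lies
    outside the confidence interval."""
    lo, hi = confidence_interval(p_hat, n)
    return p0 < lo or p0 > hi

moe_example = margin_of_error(0.5, 100)       # about 0.098
rejects_half = significant_vs(0.6, 100, 0.5)  # 0.5 outside CI around 0.6?
```

Comparisons of two proportions or means follow the same pattern, with the appropriate standard error substituted into the single formula.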
Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak
2013-01-01
The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumor images from MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images were calculated. Using local statistics, the tumor objects were identified among different objects. In this level set method, the calculation of the parameters is a challenging task. The calculations of different parameters for different types of images were automatic. The basic thresholding value was updated and adjusted automatically for different MR images. This thresholding value was used to calculate the different parameters in the proposed algorithm. The proposed algorithm was tested on the magnetic resonance images of the brain for tumor segmentation and its performance was evaluated visually and quantitatively. Numerical experiments on some brain tumor images highlighted the efficiency and robustness of this method. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
12 CFR 3.162 - Mechanics of risk-weighted asset calculation.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 1 2014-01-01 2014-01-01 false Mechanics of risk-weighted asset calculation. 3.162 Section 3.162 Banks and Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY CAPITAL...-Weighted Assets for Operational Risk § 3.162 Mechanics of risk-weighted asset calculation. (a) If a...
Playing at Statistical Mechanics
ERIC Educational Resources Information Center
Clark, Paul M.; And Others
1974-01-01
Discussed are the applications of counting techniques of a sorting game to distributions and concepts in statistical mechanics. Included are the following distributions: Fermi-Dirac, Bose-Einstein, and most probable. (RH)
Third law of thermodynamics as a key test of generalized entropies.
Bento, E P; Viswanathan, G M; da Luz, M G E; Silva, R
2015-02-01
The laws of thermodynamics constrain the formulation of statistical mechanics at the microscopic level. The third law of thermodynamics states that the entropy must vanish at absolute zero temperature for systems with nondegenerate ground states in equilibrium. Conversely, the entropy can vanish only at absolute zero temperature. Here we ask whether or not generalized entropies satisfy this fundamental property. We propose a direct analytical procedure to test if a generalized entropy satisfies the third law, making only very general assumptions about the entropy S and energy U of an arbitrary N-level classical system. Mathematically, the method relies on exact calculation of β=dS/dU in terms of the microstate probabilities p(i). To illustrate this approach, we present exact results for the two best known generalizations of statistical mechanics. Specifically, we study the Kaniadakis entropy S(κ), which is additive, and the Tsallis entropy S(q), which is nonadditive. We show that the Kaniadakis entropy correctly satisfies the third law only for -1<κ<+1, thereby shedding light on why κ is conventionally restricted to this interval. Surprisingly, however, the Tsallis entropy violates the third law for q<1. Finally, we give a concrete example of the power of our proposed method by applying it to a paradigmatic system: the one-dimensional ferromagnetic Ising model with nearest-neighbor interactions.
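A minimal numeric sketch of the entropies involved is given below (with k_B = 1). It only evaluates S for a two-level system approaching its nondegenerate ground state; the paper's actual test concerns the analytical behavior of β = dS/dU in that limit, which this sketch does not reproduce.

```python
import math

def tsallis_entropy(probs, q):
    """Tsallis entropy S_q = (1 - sum_i p_i**q) / (q - 1)."""
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

def bg_entropy(probs):
    """Boltzmann-Gibbs entropy S = -sum_i p_i ln p_i (the q -> 1 limit)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Two-level system approaching its nondegenerate ground state (T -> 0):
# the ground-state occupation tends to 1 and both entropies vanish.
for eps in (1e-2, 1e-4, 1e-6):
    probs = [1.0 - eps, eps]
    s_q = tsallis_entropy(probs, q=2.0)
    s_bg = bg_entropy(probs)
```

Vanishing S alone does not settle the third-law question; it is the rate at which S vanishes with U, i.e. β = dS/dU, that discriminates between the generalized entropies.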
Monte Carlo Method Application for Refining Meteor Velocities from TV Observations
NASA Astrophysics Data System (ADS)
Kozak, P.
2014-12-01
The Monte Carlo method (the method of statistical trials) was developed as an application for meteor-observation processing in the author's Ph.D. thesis in 2005 and first used in his published work in 2008. The idea of the method is that if we generate random values of the input data (the equatorial coordinates of the meteor head in a sequence of TV frames) in accordance with their statistical distributions, we can plot the probability density distributions for all of the meteor's kinematical parameters and obtain their mean values and dispersions. This also opens a theoretical possibility of refining the most important parameter, the geocentric velocity of the meteor, which has the greatest influence on the precision of the calculated heliocentric orbit elements. In the classical approach, the velocity vector is calculated in two stages: first, its direction is obtained as the cross product of the pole vectors of the meteor-trajectory great circles determined from the two observing stations; then the absolute value of the velocity is calculated independently from each station, and one of the two values is selected, on some grounds, as the final parameter. In the present method, we propose instead to obtain the statistical distribution of the velocity's absolute value as the intersection of the two distributions corresponding to the velocity values obtained from the different stations. We expect this approach to substantially increase the precision of meteor velocity calculation and to remove subjective inaccuracies.
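One simple way to realize the "intersection of two distributions" idea is to combine two independent Gaussian velocity estimates via the product of their PDFs, which yields an inverse-variance-weighted mean. This is an illustrative stand-in, not the paper's actual procedure, and the velocity numbers below are hypothetical.

```python
import math

def product_of_gaussians(m1, s1, m2, s2):
    """Product of two Gaussian PDFs is (up to normalization) a Gaussian whose
    mean is the inverse-variance-weighted average of the two means."""
    w1, w2 = 1.0 / s1**2, 1.0 / s2**2
    mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    sigma = math.sqrt(1.0 / (w1 + w2))
    return mean, sigma

# Hypothetical geocentric-velocity estimates (km/s) from two stations
v_comb, s_comb = product_of_gaussians(59.8, 0.6, 60.4, 0.8)
# the combined sigma is smaller than either individual one
```

The shrinking sigma is the quantitative sense in which combining the two station estimates "increases the precision" relative to picking either one.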
The influence of chemical mechanisms on PDF calculations of non-premixed turbulent flames
NASA Astrophysics Data System (ADS)
Pope, Stephen B.
2005-11-01
A series of calculations is reported of the Barlow & Frank non-premixed piloted jet flames D, E and F, with the aim of determining the level of description of the chemistry necessary to account accurately for the turbulence-chemistry interactions observed in these flames. The calculations are based on the modeled transport equation for the joint probability density function of velocity, turbulence frequency and composition (enthalpy and species mass fractions). Seven chemical mechanisms for methane are investigated, ranging from a five-step reduced mechanism to the 53-species GRI 3.0 mechanism. The results show that, for C-H-O species, accurate results are obtained with the GRI 2.11 and GRI 3.0 mechanisms, as well as with 12 and 15-step reduced mechanisms based on GRI 2.11. But significantly inaccurate calculations result from use of the 5-step reduced mechanism (based on GRI 2.11), and from two different 16-species skeletal mechanisms. As has previously been observed, GRI 3.0 over-predicts NO by up to a factor of two; whereas NO is calculated reasonably accurately by GRI 2.11 and the 15-step reduced mechanism.
Boron diffusion in bcc-Fe studied by first-principles calculations
NASA Astrophysics Data System (ADS)
Xianglong, Li; Ping, Wu; Ruijie, Yang; Dan, Yan; Sen, Chen; Shiping, Zhang; Ning, Chen
2016-03-01
The diffusion mechanism of boron in bcc-Fe has been studied by first-principles calculations. The diffusion coefficients of the interstitial mechanism, the B-monovacancy complex mechanism, and the B-divacancy complex mechanism have been calculated. The calculated diffusion coefficient of the interstitial mechanism is D₀ = 1.05 × 10⁻⁷ exp(−0.75 eV/kT) m²·s⁻¹, while the diffusion coefficients of the B-monovacancy and the B-divacancy complex mechanisms are D₁ = 1.22 × 10⁻⁶ f₁ exp(−2.27 eV/kT) m²·s⁻¹ and D₂ ≈ 8.36 × 10⁻⁶ exp(−4.81 eV/kT) m²·s⁻¹, respectively. The results indicate that the dominant diffusion mechanism in bcc-Fe is the interstitial mechanism through an octahedral interstitial site rather than the complex mechanisms. The calculated diffusion coefficient is in accordance with the reported experimental results measured in Fe-3%Si-B alloy (bcc structure). Since the non-equilibrium segregation of boron is based on the diffusion of the complexes, as suggested by the theory, our calculation reasonably explains why the non-equilibrium segregation of boron is not observed in bcc-Fe in experiments. Project supported by the National Natural Science Foundation of China (Grant No. 51276016) and the National Basic Research Program of China (Grant No. 2012CB720406).
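The three coefficients quoted above are Arrhenius expressions, so their relative magnitudes at a given temperature follow from a one-line evaluation. The sketch below uses the prefactors and activation energies from the abstract; the temperature is an arbitrary illustrative choice, and the correlation factor f₁ (which is at most of order unity) is omitted.

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def arrhenius(d0, ea_ev, temp_k):
    """Arrhenius diffusion coefficient D = D0 * exp(-Ea / (k_B * T)), m^2/s."""
    return d0 * math.exp(-ea_ev / (K_B * temp_k))

T = 1000.0  # K, illustrative temperature
d_interstitial = arrhenius(1.05e-7, 0.75, T)
d_monovacancy  = arrhenius(1.22e-6, 2.27, T)  # f1 omitted (f1 <= 1)
d_divacancy    = arrhenius(8.36e-6, 4.81, T)
# the interstitial mechanism dominates by many orders of magnitude
```

The much lower activation energy (0.75 eV vs. 2.27 and 4.81 eV) overwhelms the smaller prefactor, which is the quantitative content of the paper's conclusion.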
Health Disparities Calculator (HD*Calc) - SEER Software
Statistical software that generates summary measures to evaluate and monitor health disparities. Users can import SEER data or other population-based health data to calculate 11 disparity measurements.
40 CFR Appendix IV to Part 264 - Cochran's Approximation to the Behrens-Fisher Students' t-test
Code of Federal Regulations, 2011 CFR
2011-07-01
... summary measures to calculate a t-statistic (t*) and a comparison t-statistic (tc). The t* value is compared to the tc value and a conclusion reached as to whether there has been a statistically significant... made in collecting the background data. The t-statistic (tc), against which t* will be compared...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-20
... rent and the core-based statistical area (CBSA) rent as applied to the 40th percentile FMR for that..., calculated on the basis of the core-based statistical area (CBSA) or the metropolitan Statistical Area (MSA... will be ranked according to each of the statistics specified above, and then a weighted average ranking...
40 CFR Appendix IV to Part 264 - Cochran's Approximation to the Behrens-Fisher Students' t-test
Code of Federal Regulations, 2010 CFR
2010-07-01
... summary measures to calculate a t-statistic (t*) and a comparison t-statistic (tc). The t* value is compared to the tc value and a conclusion reached as to whether there has been a statistically significant... made in collecting the background data. The t-statistic (tc), against which t* will be compared...
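The t* and comparison statistic t_c described in the excerpt can be sketched from sample summary measures. The formulas below follow the usual Welch/Cochran form; the concentrations, sample sizes, and tabulated critical t values are hypothetical and would in practice be taken from the monitoring data and t tables.

```python
import math

def t_star(mean1, var1, n1, mean2, var2, n2):
    """t* from summary measures: difference in means over its standard error."""
    return (mean1 - mean2) / math.sqrt(var1 / n1 + var2 / n2)

def cochran_tc(var1, n1, t1, var2, n2, t2):
    """Cochran's comparison statistic: per-sample critical t values t1, t2
    weighted by w_i = s_i^2 / n_i."""
    w1, w2 = var1 / n1, var2 / n2
    return (w1 * t1 + w2 * t2) / (w1 + w2)

# Hypothetical background vs. compliance-well summaries and critical t values
ts = t_star(10.0, 4.0, 8, 12.5, 9.0, 6)
tc = cochran_tc(4.0, 8, 2.365, 9.0, 6, 2.571)
significant = abs(ts) > tc   # statistically significant difference if True
```

The conclusion is reached exactly as the regulation describes: |t*| is compared against t_c.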
Role of breakup and direct processes in deuteron-induced reactions at low energies
NASA Astrophysics Data System (ADS)
Avrigeanu, M.; Avrigeanu, V.
2015-08-01
Background: Recent studies of deuteron-induced reactions around the Coulomb barrier B pointed out that numerical calculations for deuteron-induced reactions are beyond current capabilities. The statistical model of nuclear reactions was used in this respect, since the compound-nucleus (CN) mechanism was considered responsible for most of the total-reaction cross section σR in this energy range. However, specific noncompound processes such as breakup (BU) and direct reactions (DR) should also be considered for deuteron-induced reactions, making them different from reactions with other incident particles. Purpose: A unitary and consistent treatment of BU and DR in deuteron-induced reactions is shown to yield results at variance with the assumption of negligible noncompound components. Method: The CN fractions of σR obtained by analysis of measured neutron angular distributions in deuteron-induced reactions on 27Al, 56Fe, 63,65Cu, and 89Y target nuclei, around B, are compared with the results of a unitary analysis of every reaction mechanism. The latter values have been supported by the previously established agreement with all available deuteron data for 27Al, 54,56-58,natFe, 63,65,natCu, and 93Nb. Results: There is a significant difference between the larger CN contributions obtained from measured neutron angular distributions and the calculated results of a unitary analysis of every deuteron-interaction mechanism. The decrease of the latter values is mainly due to the BU component. Conclusions: The above-mentioned differences underline the key role of the breakup and direct reactions, which should be considered explicitly in the case of deuteron-induced reactions.
NASA Astrophysics Data System (ADS)
Blum, T.; Boyle, P. A.; Izubuchi, T.; Jin, L.; Jüttner, A.; Lehner, C.; Maltman, K.; Marinkovic, M.; Portelli, A.; Spraggs, M.; Rbc; Ukqcd Collaborations
2016-06-01
We report the first lattice QCD calculation of the hadronic vacuum polarization (HVP) disconnected contribution to the muon anomalous magnetic moment at physical pion mass. The calculation uses a refined noise-reduction technique that enables the control of statistical uncertainties at the desired level with modest computational effort. Measurements were performed on the 48³×96 physical-pion-mass lattice generated by the RBC and UKQCD Collaborations. We find the leading-order hadronic vacuum polarization aμHVP(LO)disc = −9.6(3.3)(2.3)×10⁻¹⁰, where the first error is statistical and the second systematic.
Blum, T; Boyle, P A; Izubuchi, T; Jin, L; Jüttner, A; Lehner, C; Maltman, K; Marinkovic, M; Portelli, A; Spraggs, M
2016-06-10
We report the first lattice QCD calculation of the hadronic vacuum polarization (HVP) disconnected contribution to the muon anomalous magnetic moment at physical pion mass. The calculation uses a refined noise-reduction technique that enables the control of statistical uncertainties at the desired level with modest computational effort. Measurements were performed on the 48^{3}×96 physical-pion-mass lattice generated by the RBC and UKQCD Collaborations. We find the leading-order hadronic vacuum polarization a_{μ}^{HVP(LO)disc}=-9.6(3.3)(2.3)×10^{-10}, where the first error is statistical and the second systematic.
Menzerath-Altmann Law: Statistical Mechanical Interpretation as Applied to a Linguistic Organization
NASA Astrophysics Data System (ADS)
Eroglu, Sertac
2014-10-01
The distribution behavior described by the empirical Menzerath-Altmann law is frequently encountered during the self-organization of linguistic and non-linguistic natural organizations at various structural levels. This study presents a statistical mechanical derivation of the law based on the analogy between the classical particles of a statistical mechanical organization and the distinct words of a textual organization. The derived model, a transformed (generalized) form of the Menzerath-Altmann model, was termed the statistical mechanical Menzerath-Altmann model. The derived model allows the model parameters to be interpreted in terms of physical concepts. We also propose that many organizations exhibiting Menzerath-Altmann behavior, whether linguistic or not, can be methodically examined with the transformed distribution model through a properly defined structure-dependent parameter and the energy-associated states.
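The classical Menzerath-Altmann law, of which the paper derives a generalized form, is usually written y(x) = a·x^b·exp(−c·x), relating constituent size y to construct size x. A minimal evaluation sketch follows; the parameter values and the syllable/word interpretation are invented for illustration.

```python
import math

def menzerath_altmann(x, a, b, c):
    """Classical Menzerath-Altmann form y(x) = a * x**b * exp(-c * x)."""
    return a * x ** b * math.exp(-c * x)

# Hypothetical parameters: mean syllable length (letters) vs. word length
# (syllables); b < 0 and c > 0 give the law's characteristic decay.
a, b, c = 4.0, -0.25, 0.05
sizes = [menzerath_altmann(x, a, b, c) for x in range(1, 6)]
# y decreases with x: "the larger the whole, the smaller its parts"
```

With both exponents acting in the same direction, the curve is strictly decreasing, matching the law's qualitative statement.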
1991-03-01
the A parameters; yhatf, to calculate the y-hat statistics; ssrf, to calculate the uncorrected SSR; sstof, to calculate the uncorrected SSTO; matmulmm... DEGREES OF FREEDOM */ int sstocdf, ssrcdf, ssecdf; float ssr, ssto, sse; /* SUM OF SQUARES */ float ssrc, sstoc, ssec; float insr, insto, inse; float... Y-HAT STATISTICS */ yhatf(x,beta,stats,n,n); /* CALCULATE UNCORRECTED SSR */ ssrf(beta, x, y, mn, n, ss); ssr = ss[l][l]; /* CALCULATE UNCORRECTED SSTO
Statistical characteristics of mechanical heart valve cavitation in accelerated testing.
Wu, Changfu; Hwang, Ned H C; Lin, Yu-Kweng M
2004-07-01
Cavitation damage has been observed on mechanical heart valves (MHVs) undergoing accelerated testing. Cavitation itself can be modeled as a stochastic process, as it varies from beat to beat of the testing machine. This in vitro study was undertaken to investigate the statistical characteristics of MHV cavitation. A 25-mm St. Jude Medical bileaflet MHV (SJM 25) was tested in an accelerated tester at various pulse rates, ranging from 300 to 1,000 bpm, with stepwise increments of 100 bpm. A miniature pressure transducer was placed near a leaflet tip on the inflow side of the valve, to monitor regional transient pressure fluctuations at instants of valve closure. The pressure trace associated with each beat was passed through a 70 kHz high-pass digital filter to extract the high-frequency oscillation (HFO) components resulting from the collapse of cavitation bubbles. Three intensity-related measures were calculated for each HFO burst: its time span; its local root-mean-square (LRMS) value; and the area enveloped by the absolute value of the HFO pressure trace and the time axis, referred to as the cavitation impulse. These were treated as stochastic processes, of which the first-order probability density functions (PDFs) were estimated for each test rate. Both the LRMS value and the cavitation impulse were log-normally distributed, and the time span was normally distributed. These distribution laws were consistent at different test rates. The present investigation was directed at understanding MHV cavitation as a stochastic process. The results provide a basis for establishing further the statistical relationship between cavitation intensity and time-evolving cavitation damage on MHV surfaces. These data are required to assess and compare the performance of MHVs of different designs.
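Two of the three burst measures, the LRMS value and the cavitation impulse, reduce to simple per-burst computations on the high-pass-filtered pressure samples. The sketch below is illustrative: the sample values and sampling interval are invented, and the filtering step is assumed to have already been applied.

```python
import math

def lrms(samples):
    """Local root-mean-square value of an HFO burst."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def cavitation_impulse(samples, dt):
    """Area enveloped by |p(t)| and the time axis: sum of |sample| * dt."""
    return sum(abs(s) for s in samples) * dt

# Hypothetical high-pass-filtered pressure burst (kPa) sampled at 1 MHz
burst = [0.0, 4.2, -6.8, 5.1, -3.0, 1.2]
dt = 1e-6  # s per sample

intensity = lrms(burst)                    # kPa
impulse = cavitation_impulse(burst, dt)    # kPa * s
```

Collecting these two quantities (plus the burst time span) over many beats gives the per-rate samples from which the first-order PDFs are estimated.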
A quantitative study of nanoparticle skin penetration with interactive segmentation.
Lee, Onseok; Lee, See Hyun; Jeong, Sang Hoon; Kim, Jaeyoung; Ryu, Hwa Jung; Oh, Chilhwan; Son, Sang Wook
2016-10-01
In the last decade, the application of nanotechnology techniques has expanded within diverse areas such as pharmacology, medicine, and optical science. Despite such wide-ranging possibilities for implementation into practice, the mechanisms behind nanoparticle skin absorption remain unknown. Moreover, the main mode of investigation has been qualitative analysis. Using interactive segmentation, this study suggests a method of objectively and quantitatively analyzing the mechanisms underlying the skin absorption of nanoparticles. Silica nanoparticles (SNPs) were assessed using transmission electron microscopy and applied to the human skin equivalent model. Captured fluorescence images of this model were used to evaluate degrees of skin penetration. These images underwent interactive segmentation and image processing in addition to statistical quantitative analyses of calculated image parameters including the mean, integrated density, skewness, kurtosis, and area fraction. In images from both groups, the distribution area and intensity of fluorescent silica gradually increased in proportion to time. Since statistical significance was achieved after 2 days in the negative charge group and after 4 days in the positive charge group, there is a periodic difference. Furthermore, the quantity of silica per unit area showed a dramatic change after 6 days in the negative charge group. Although this quantitative result is identical to results obtained by qualitative assessment, it is meaningful in that it was proven by statistical analysis with quantitation by using image processing. The present study suggests that the surface charge of SNPs could play an important role in the percutaneous absorption of NPs. These findings can help achieve a better understanding of the percutaneous transport of NPs. In addition, these results provide important guidance for the design of NPs for biomedical applications.
Budiyono, Agung; Rohrlich, Daniel
2017-11-03
Where does quantum mechanics part ways with classical mechanics? How does quantum randomness differ fundamentally from classical randomness? We cannot fully explain how the theories differ until we can derive them within a single axiomatic framework, allowing an unambiguous account of how one theory is the limit of the other. Here we derive non-relativistic quantum mechanics and classical statistical mechanics within a common framework. The common axioms include conservation of average energy and conservation of probability current. But two axioms distinguish quantum mechanics from classical statistical mechanics: an "ontic extension" defines a nonseparable (global) random variable that generates physical correlations, and an "epistemic restriction" constrains allowed phase space distributions. The ontic extension and epistemic restriction, with strength on the order of Planck's constant, imply quantum entanglement and uncertainty relations. This framework suggests that the wave function is epistemic, yet it does not provide an ontic dynamics for individual systems.
The Utility of Robust Means in Statistics
ERIC Educational Resources Information Center
Goodwyn, Fara
2012-01-01
Location estimates calculated from heuristic data were examined using traditional and robust statistical methods. The current paper demonstrates the impact outliers have on the sample mean and proposes robust methods to control for outliers in sample data. Traditional methods fail because they rely on the statistical assumptions of normality and…
An Examination of Statistical Power in Multigroup Dynamic Structural Equation Models
ERIC Educational Resources Information Center
Prindle, John J.; McArdle, John J.
2012-01-01
This study used statistical simulation to calculate differential statistical power in dynamic structural equation models with groups (as in McArdle & Prindle, 2008). Patterns of between-group differences were simulated to provide insight into how model parameters influence power approximations. Chi-square and root mean square error of…
ppcor: An R Package for a Fast Calculation to Semi-partial Correlation Coefficients.
Kim, Seongho
2015-11-01
Lack of a general matrix formula hampers implementation of the semi-partial correlation, also known as part correlation, at higher orders. This is because calculating a higher-order semi-partial correlation with a recursive formula requires an enormous number of recursive steps to obtain the coefficient. To resolve this difficulty, we derive a general matrix formula of the semi-partial correlation for fast computation. The semi-partial correlations are then implemented in the R package ppcor along with the partial correlation. Owing to the general matrix formulas, users can readily calculate the coefficients of both partial and semi-partial correlations without computational burden. The package ppcor further provides users with the level of statistical significance along with the corresponding test statistic.
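As a language-neutral illustration of the two quantities ppcor handles (this is a NumPy sketch, not the package's actual R code): partial correlations follow from the inverse covariance (precision) matrix, while a semi-partial correlation can be computed from its residual-based definition.

```python
import numpy as np

def partial_corr_matrix(X):
    """Partial correlation of each column pair, controlling for all
    other columns, via the inverse covariance (precision) matrix."""
    P = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(P))
    pc = -P / np.outer(d, d)
    np.fill_diagonal(pc, 1.0)
    return pc

def semipartial_corr(X, i, j):
    """Semi-partial (part) correlation of column i with column j,
    removing the other columns from j only (residual definition)."""
    others = [k for k in range(X.shape[1]) if k not in (i, j)]
    Z = np.column_stack([np.ones(len(X))] + [X[:, k] for k in others])
    beta, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
    resid_j = X[:, j] - Z @ beta
    return np.corrcoef(X[:, i], resid_j)[0, 1]
```

A single matrix inversion replacing exhaustive recursion is the spirit of the "general matrix formula" the paper derives.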
An indirect approach to the extensive calculation of relationship coefficients
Colleau, Jean-Jacques
2002-01-01
A method was described for calculating population statistics on relationship coefficients without using the corresponding individual data. It relied on the structure of the inverse of the numerator relationship matrix between the individuals under investigation and their ancestors. Computation times were measured on simulated populations and compared to those incurred with a conventional direct approach. The indirect approach turned out to be very efficient for multiplying the relationship matrix corresponding to planned matings (full design) by any vector. Efficiency was generally still good or very good for calculating statistics on these simulated populations. An extreme implementation of the method is the calculation of the inbreeding coefficients themselves. Relative performance of the indirect method was good except when the population contained many full sibs over many generations. PMID:12270102
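For context, this is the conventional direct construction that the indirect method is designed to avoid, sketched with the standard tabular rules (it is not the paper's algorithm): Wright's numerator relationship matrix can be built recursively when individuals are ordered so that parents precede offspring.

```python
import numpy as np

def nrm(pedigree):
    """Numerator relationship matrix by the tabular method.
    pedigree: list of (sire, dam) index pairs, None for unknown founders,
    ordered so that parents always precede their offspring."""
    n = len(pedigree)
    A = np.zeros((n, n))
    for i, (s, d) in enumerate(pedigree):
        # Diagonal: 1 + F_i, where F_i is half the parents' relationship.
        A[i, i] = 1.0 + (0.5 * A[s, d] if s is not None and d is not None else 0.0)
        for j in range(i):
            val = 0.0
            if s is not None:
                val += 0.5 * A[j, s]
            if d is not None:
                val += 0.5 * A[j, d]
            A[i, j] = A[j, i] = val
    return A
```

The direct approach costs O(n^2) storage, which is exactly why an indirect route through the sparse inverse of this matrix pays off for large pedigrees.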
Statistical Thermodynamics and Microscale Thermophysics
NASA Astrophysics Data System (ADS)
Carey, Van P.
1999-08-01
Many exciting new developments in microscale engineering are based on the application of traditional principles of statistical thermodynamics. In this text Van Carey offers a modern view of thermodynamics, interweaving classical and statistical thermodynamic principles and applying them to current engineering systems. He begins with coverage of microscale energy storage mechanisms from a quantum mechanics perspective and then develops the fundamental elements of classical and statistical thermodynamics. Subsequent chapters discuss applications of equilibrium statistical thermodynamics to solid, liquid, and gas phase systems. The remainder of the book is devoted to nonequilibrium thermodynamics of transport phenomena and to nonequilibrium effects and noncontinuum behavior at the microscale. Although the text emphasizes mathematical development, Carey includes many examples and exercises to illustrate how the theoretical concepts are applied to systems of scientific and engineering interest. In the process he offers a fresh view of statistical thermodynamics for advanced undergraduate and graduate students, as well as practitioners, in mechanical, chemical, and materials engineering.
12 CFR 3.31 - Mechanics for calculating risk-weighted assets for general credit risk.
Code of Federal Regulations, 2014 CFR
2014-01-01
… 12 Banks and Banking: Mechanics for calculating risk-weighted assets for general credit risk. Section 3.31, Banks and Banking, COMPTROLLER OF THE CURRENCY, DEPARTMENT… Assets for General Credit Risk § 3.31 Mechanics for calculating risk-weighted assets for general credit…
The difference between a dynamic and mechanical approach to stroke treatment.
Helgason, Cathy M
2007-06-01
The current classification of stroke is based on causation, also called pathogenesis, and relies on binary logic faithful to the Aristotelian tradition. Accordingly, a pathology is or is not the cause of the stroke, is considered independent of others, and is the target for treatment. It is the subject for large double-blind randomized clinical therapeutic trials. The scientific view behind clinical trials is the fundamental concept that information is statistical, and causation is determined by probabilities. Therefore, the cause and effect relation will be determined by probability-theory-based statistics. This is the basis of evidence-based medicine, which calls for the results of such trials to be the basis for physician decisions regarding diagnosis and treatment. However, there are problems with the methodology behind evidence-based medicine. Calculations using probability-theory-based statistics regarding cause and effect are performed within an automatic system where there are known inputs and outputs. This method of research provides a framework of certainty with no surprise elements or outcomes. However, it is not a system or method that will come up with previously unknown variables, concepts, or universal principles; it is not a method that will give a new outcome; and it is not a method that allows for creativity, expertise, or new insight for problem solving.
Structure and Randomness of Continuous-Time, Discrete-Event Processes
NASA Astrophysics Data System (ADS)
Marzen, Sarah E.; Crutchfield, James P.
2017-10-01
Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ɛ -machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.
An operational definition of a statistically meaningful trend.
Bryhn, Andreas C; Dimberg, Peter H
2011-04-28
Linear trend analysis of time series is standard procedure in many scientific disciplines. If the number of data is large, a trend may be statistically significant even if data are scattered far from the trend line. This study introduces and tests a quality criterion for time trends referred to as statistical meaningfulness, which is a stricter quality criterion for trends than high statistical significance. The time series is divided into intervals and interval mean values are calculated. Thereafter, r² and p values are calculated from regressions concerning time and interval mean values. If r² ≥ 0.65 at p ≤ 0.05 in any of these regressions, then the trend is regarded as statistically meaningful. Out of ten investigated time series from different scientific disciplines, five displayed statistically meaningful trends. A Microsoft Excel application (add-in) was developed which can perform statistical meaningfulness tests and which may increase the operationality of the test. The presented method for distinguishing statistically meaningful trends should be reasonably uncomplicated for researchers with basic statistics skills and may thus be useful for determining which trends are worth analysing further, for instance with respect to causal factors. The method can also be used for determining which segments of a time trend may be particularly worthwhile to focus on.
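The criterion translates directly into code. A minimal sketch, assuming equal-sized consecutive intervals; the number of intervals is left to the analyst in the paper, so the default of five used here is a hypothetical choice:

```python
import numpy as np
from scipy import stats

def statistically_meaningful(t, y, n_intervals=5, r2_min=0.65, p_max=0.05):
    """Trend test as described: split the series into consecutive
    intervals, average within each, regress interval means on interval
    mean times, and apply the r-squared and p-value thresholds."""
    t_chunks = np.array_split(np.asarray(t, dtype=float), n_intervals)
    y_chunks = np.array_split(np.asarray(y, dtype=float), n_intervals)
    t_mean = np.array([c.mean() for c in t_chunks])
    y_mean = np.array([c.mean() for c in y_chunks])
    res = stats.linregress(t_mean, y_mean)
    return bool(res.rvalue**2 >= r2_min and res.pvalue <= p_max)
```

Averaging within intervals is what makes the test stricter than plain significance: scatter around the trend line deflates r² of the interval means even when n is large.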
Statistical characteristics of dynamics for population migration driven by the economic interests
NASA Astrophysics Data System (ADS)
Huo, Jie; Wang, Xu-Ming; Zhao, Ning; Hao, Rui
2016-06-01
Population migration typically occurs under some constraints, which can deeply affect the structure of a society and some other related aspects. Therefore, it is critical to investigate the characteristics of population migration. Data from the China Statistical Yearbook indicate that the regional gross domestic product per capita relates to the population size via a linear or power-law relation. In addition, the distribution of population migration sizes or relative migration strength introduced here is dominated by a shifted power-law relation. To reveal the mechanism that creates the aforementioned distributions, a dynamic model is proposed based on the population migration rule that migration is facilitated by higher financial gains and abated by fewer employment opportunities at the destination, considering the migration cost as a function of the migration distance. The calculated results indicate that the distribution of the relative migration strength is governed by a shifted power-law relation, and that the distribution of migration distances is dominated by a truncated power-law relation. These results suggest that the use of a power-law to fit a distribution may not always be suitable. Additionally, from the modeling framework, one can infer that it is the randomness and determinacy that jointly create the scaling characteristics of the distributions. The calculation also demonstrates that the network formed by active nodes, representing the immigration and emigration regions, usually evolves from an ordered state with a non-uniform structure to a disordered state with a uniform structure, which is evidenced by the increasing structural entropy.
Simple measurement-based admission control for DiffServ access networks
NASA Astrophysics Data System (ADS)
Lakkakorpi, Jani
2002-07-01
In order to provide good Quality of Service (QoS) in a Differentiated Services (DiffServ) network, a dynamic admission control scheme is definitely needed as an alternative to overprovisioning. In this paper, we present a simple measurement-based admission control (MBAC) mechanism for DiffServ-based access networks. Instead of using active measurements only or doing purely static bookkeeping with parameter-based admission control (PBAC), the admission control decisions are based on bandwidth reservations and periodically measured and exponentially averaged link loads. If any link load on the path between two endpoints is over the applicable threshold, access is denied. Link loads are periodically sent to the Bandwidth Broker (BB) of the routing domain, which makes the admission control decisions. The information needed in calculating the link loads is retrieved from the router statistics. The proposed admission control mechanism is verified through simulations. Our results show that it is possible to achieve very high bottleneck link utilization levels and still maintain good QoS.
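A minimal sketch of the decision logic described above: measured link loads are exponentially averaged, and a request is denied if any link on the path would exceed its threshold. The class names, the smoothing weight, and the 0.9 threshold are hypothetical defaults, not the paper's actual parameters or protocol details.

```python
class Link:
    """One link whose utilization is periodically measured and
    exponentially averaged (EWMA), as in the described MBAC scheme."""
    def __init__(self, capacity_mbps, alpha=0.25, threshold=0.9):
        self.capacity = capacity_mbps
        self.alpha = alpha          # EWMA weight of the newest sample
        self.threshold = threshold  # admission threshold, fraction of capacity
        self.avg_load = 0.0         # averaged utilization in [0, 1]

    def report_measurement(self, measured_mbps):
        sample = measured_mbps / self.capacity
        self.avg_load = self.alpha * sample + (1 - self.alpha) * self.avg_load

class BandwidthBroker:
    """Makes the admission decision for a path from averaged link loads."""
    def admit(self, path, request_mbps):
        return all(
            link.avg_load + request_mbps / link.capacity <= link.threshold
            for link in path
        )
```

The EWMA is what makes the scheme robust to momentary load spikes while still tracking sustained congestion.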
Single-Case Experimental Designs to Evaluate Novel Technology-Based Health Interventions
Cassidy, Rachel N; Raiff, Bethany R
2013-01-01
Technology-based interventions to promote health are expanding rapidly. Assessing the preliminary efficacy of these interventions can be achieved by employing single-case experiments (sometimes referred to as n-of-1 studies). Although single-case experiments are often misunderstood, they offer excellent solutions to address the challenges associated with testing new technology-based interventions. This paper provides an introduction to single-case techniques and highlights advances in developing and evaluating single-case experiments, which help ensure that treatment outcomes are reliable, replicable, and generalizable. These advances include quality control standards, heuristics to guide visual analysis of time-series data, effect size calculations, and statistical analyses. They also include experimental designs to isolate the active elements in a treatment package and to assess the mechanisms of behavior change. The paper concludes with a discussion of issues related to the generality of findings derived from single-case research and how generality can be established through replication and through analysis of behavioral mechanisms. PMID:23399668
Physical mechanism and numerical simulation of the inception of the lightning upward leader
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Qingmin; Lu Xinchang; Shi Wei
2012-12-15
The upward leader is a key physical process of the leader progression model of lightning shielding. The inception mechanism and criterion of the upward leader need further understanding and clarification. Based on leader discharge theory, this paper proposes the critical electric field intensity of the stable upward leader (CEFISUL) and characterizes it by the valve electric field intensity on the conductor surface, E_L, which is the basis of a new inception criterion for the upward leader. Through numerical simulation under various physical conditions, we verified that E_L is mainly related to the conductor radius, and data fitting yields the mathematical expression of E_L. We further establish a computational model for lightning shielding performance of the transmission lines based on the proposed CEFISUL criterion, which reproduces the shielding failure rate of typical UHV transmission lines. The model-based calculation results agree well with the statistical data from on-site operations, which show the effectiveness and validity of the CEFISUL criterion.
Surface phase stability and surfactant behavior of InAsSb alloy surfaces.
NASA Astrophysics Data System (ADS)
Anderson, Evan M.; Lundquist, Adam M.; Pearson, Chris; Millunchick, Joanna M.
InAsSb has the narrowest bandgap of any of the conventional III-V semiconductors: low enough for long wavelength infrared applications. Such devices are sensitive to point defects, which can be detrimental to performance. To control these defects, all aspects of synthesis must be considered, especially the atomic bonding at the surface. We use an ab initio statistical mechanics approach that combines density functional theory with a cluster expansion formalism to determine the stable surface reconstructions of Sb (As) on InAs (InSb) substrates. The surface phase diagram of Sb on InAs is dominated by the Sb-dimer-terminated α2(2×4), β2(2×4), and c(4×4) reconstructions. Smaller regions of mixed Sb-As dimers appear at high Sb chemical potentials and intermediate As chemical potentials. We propose that InAsSb films could be grown on the (2×4) reconstructions, which maintain bulk-like stoichiometry, to eliminate the formation of the typically observed n-type defects. Scanning tunneling microscopy and reflection high energy electron diffraction confirm the calculated phase diagram. Based on these calculations, we propose a new mechanism for the surfactant behavior of Sb in these materials. We gratefully acknowledge Chakrapani Varanasi and the support of the Department of Defense, Army Research Office via the Grant Number W911NF-12-1-0338.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Haining; Kim, Seungchul; Lee, Kwang-Ryeol, E-mail: krlee@kist.re.kr
2016-03-28
Initial stage of oxynitridation process of Si substrate is of crucial importance in fabricating the ultrathin gate dielectric layer of high quality in advanced MOSFET devices. The oxynitridation reaction on a relaxed Si(001) surface is investigated via reactive molecular dynamics (MD) simulation. A total of 1120 events of a single nitric oxide (NO) molecule reaction at temperatures ranging from 300 to 1000 K are statistically analyzed. The observed reaction kinetics are consistent with previous experimental and computational results, demonstrating the viability of the reactive MD technique for studying the NO dissociation reaction on Si. We suggest a reaction pathway for NO dissociation that is characterized by the inter-dimer bridge of a NO molecule as the intermediate state prior to NO dissociation. Although the energy of the inter-dimer bridge is higher than that of the intra-dimer one, our suggestion is supported by ab initio nudged elastic band calculations showing that the energy barrier for the inter-dimer bridge formation is much lower. The growth mechanism of an ultrathin Si oxynitride layer is also investigated via simulations of consecutive NO reactions. The simulation reveals the mechanism of the self-limiting reaction at low temperature and the time evolution of the depth profile of N and O atoms depending on the process temperature, which can guide optimization of the oxynitridation process conditions.
ERIC Educational Resources Information Center
Mirman, Daniel; Estes, Katharine Graf; Magnuson, James S.
2010-01-01
Statistical learning mechanisms play an important role in theories of language acquisition and processing. Recurrent neural network models have provided important insights into how these mechanisms might operate. We examined whether such networks capture two key findings in human statistical learning. In Simulation 1, a simple recurrent network…
TSP Symposium 2012 Proceedings
2012-11-01
…and Statistical Model … 7.3 Analysis and Results … 7.4 Threats to Validity and Limitations … 7.5 Conclusions … 7.6 Acknowledgments … Table 12: Overall Statistics of the Experiment … Table 13: Results of Pairwise ANOVA Analysis, Highlighting Statistically Significant Differences … we calculated the percentage of defects injected. The distribution statistics are shown in Table 2. Table 2: Mean, Lower, Upper Confidence Interval
Spatial Accessibility and Availability Measures and Statistical Properties in the Food Environment
Van Meter, E.; Lawson, A.B.; Colabianchi, N.; Nichols, M.; Hibbert, J.; Porter, D.; Liese, A.D.
2010-01-01
Spatial accessibility is of increasing interest in the health sciences. This paper addresses the statistical use of spatial accessibility and availability indices. These measures are evaluated via an extensive simulation based on cluster models for local food outlet density. We derived Monte Carlo critical values for several statistical tests based on the indices. In particular we are interested in the ability to make inferential comparisons between different study areas where indices of accessibility and availability are to be calculated. We derive tests of mean difference as well as tests for differences in Moran's I for spatial correlation for each of the accessibility and availability indices. We also apply these new statistical tests to a data example based on two counties in South Carolina for various accessibility and availability measures calculated for food outlets, stores, and restaurants. PMID:21499528
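Moran's I, used above to test spatial correlation of the indices, has a compact closed form. The sketch below uses binary rook adjacency on a regular grid as a hypothetical weight scheme; the paper's actual spatial weights for food-outlet data are not reproduced here.

```python
import numpy as np

def morans_i(values, W):
    """Moran's I for a 1-D array of values and a symmetric spatial
    weight matrix W (w_ij > 0 for neighbours, zero diagonal)."""
    x = np.asarray(values, dtype=float)
    z = x - x.mean()
    n = len(x)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

def rook_weights(rows, cols):
    """Binary rook-adjacency weights for a rows x cols grid."""
    n = rows * cols
    W = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if c + 1 < cols:               # east neighbour
                W[i, i + 1] = W[i + 1, i] = 1.0
            if r + 1 < rows:               # south neighbour
                W[i, i + cols] = W[i + cols, i] = 1.0
    return W
```

Positive I indicates clustering of similar accessibility values, negative I indicates checkerboard-like dispersion; the paper's Monte Carlo critical values replace the usual normal approximation for testing I.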
Influence of nonlinear effects on statistical properties of the radiation from SASE FEL
NASA Astrophysics Data System (ADS)
Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.
1998-02-01
The paper presents analysis of statistical properties of the radiation from self-amplified spontaneous emission (SASE) free-electron laser operating in nonlinear mode. The present approach allows one to calculate the following statistical properties of the SASE FEL radiation: time and spectral field correlation functions, distribution of the fluctuations of the instantaneous radiation power, distribution of the energy in the electron bunch, distribution of the radiation energy after monochromator installed at the FEL amplifier exit and the radiation spectrum. It has been observed that the statistics of the instantaneous radiation power from SASE FEL operating in the nonlinear regime changes significantly with respect to the linear regime. All numerical results presented in the paper have been calculated for the 70 nm SASE FEL at the TESLA Test Facility under construction at DESY.
Generalized statistical mechanics approaches to earthquakes and tectonics.
Vallianatos, Filippos; Papadakis, Giorgos; Michas, Georgios
2016-12-01
Despite the extreme complexity that characterizes the mechanism of the earthquake generation process, simple empirical scaling relations apply to the collective properties of earthquakes and faults in a variety of tectonic environments and scales. The physical characterization of those properties and the scaling relations that describe them attract a wide scientific interest and are incorporated in the probabilistic forecasting of seismicity in local, regional and planetary scales. Considerable progress has been made in the analysis of the statistical mechanics of earthquakes, which, based on the principle of entropy, can provide a physical rationale to the macroscopic properties frequently observed. The scale-invariant properties, the (multi) fractal structures and the long-range interactions that have been found to characterize fault and earthquake populations have recently led to the consideration of non-extensive statistical mechanics (NESM) as a consistent statistical mechanics framework for the description of seismicity. The consistency between NESM and observations has been demonstrated in a series of publications on seismicity, faulting, rock physics and other fields of geosciences. The aim of this review is to present in a concise manner the fundamental macroscopic properties of earthquakes and faulting and how these can be derived by using the notions of statistical mechanics and NESM, providing further insights into earthquake physics and fault growth processes.
Micellar hexagonal phases in lyotropic liquid crystals
NASA Astrophysics Data System (ADS)
Amaral, L. Q.; Gulik, A.; Itri, R.; Mariani, P.
1992-09-01
The hexagonal cell parameter a of the sodium dodecyl lauryl sulfate and water system as a function of volume concentration c_v in phase Hα shows the functional behavior expected for micelles of finite length: a ~ c_v^(-1/3). The interpretation of x-ray data based on finite micelles leads to an alternative description of the hexagonal phase Hα: spherocylindrical micelles of constant radius with a length that may grow along the range of the Hα phase. Results are compared with recent statistical-mechanical calculations for the isotropic I-Hα transition. The absence of diffraction in the direction perpendicular to the hexagonal plane is ascribed to polydispersity of the micellar length, which is also a necessary condition for the occurrence of direct I-Hα transitions.
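A short note on why an exponent of -1/3 signals finite micelles (a standard packing argument sketched for the reader; it paraphrases rather than reproduces the paper's analysis). For infinite cylinders of radius R on a hexagonal lattice, the volume fraction is fixed by the two-dimensional unit cell, whereas finite micelles dilute in all three dimensions:

```latex
\text{infinite cylinders:}\quad
c_v = \frac{2\pi R^2}{\sqrt{3}\,a^2}
\;\Longrightarrow\; a \propto c_v^{-1/2},
\qquad
\text{finite micelles:}\quad
c_v \propto \frac{V_m}{a^3}
\;\Longrightarrow\; a \propto c_v^{-1/3},
```

where V_m is the (fixed) micellar volume per cell. The observed -1/3 exponent is therefore incompatible with rigid infinite rods and points to spherocylinders of finite, possibly growing, length.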
Validity criteria for Fermi's golden rule scattering rates applied to metallic nanowires.
Moors, Kristof; Sorée, Bart; Magnus, Wim
2016-09-14
Fermi's golden rule underpins the investigation of mobile carriers propagating through various solids, being a standard tool to calculate their scattering rates. As such, it provides a perturbative estimate under the implicit assumption that the effect of the interaction Hamiltonian which causes the scattering events is sufficiently small. To check the validity of this assumption, we present a general framework to derive simple validity criteria in order to assess whether the scattering rates can be trusted for the system under consideration, given its statistical properties such as average size, electron density, impurity density et cetera. We derive concrete validity criteria for metallic nanowires with conduction electrons populating a single parabolic band subjected to different elastic scattering mechanisms: impurities, grain boundaries and surface roughness.
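For reference, the rate whose validity the paper's criteria assess is the textbook golden-rule expression (standard form, not a result of this paper):

```latex
\Gamma_{i \to f} \;=\; \frac{2\pi}{\hbar}\,
\bigl|\langle f|\hat{H}'|i\rangle\bigr|^{2}\,\rho(E_f),
```

with H' the interaction Hamiltonian and ρ(E_f) the density of final states. The derived criteria bound when the perturbative premise, a sufficiently small matrix element of H', actually holds for a given wire geometry and disorder statistics.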
NASA Technical Reports Server (NTRS)
Bougher, S. W.; Gerard, J. C.; Stewart, A. I. F.; Fesen, C. G.
1990-01-01
The mechanism responsible for the Venus nitric oxide (0,1) delta band nightglow observed in the Pioneer Venus Orbiter UV spectrometer (OUVS) images was investigated using the Venus Thermospheric General Circulation Model (Dickinson et al., 1984), modified to include simple odd nitrogen chemistry. Results obtained for the solar maximum conditions indicate that the recently revised dark-disk average NO intensity at 198.0 nm, based on statistically averaged OUVS measurements, can be reproduced with minor modifications in chemical rate coefficients. The results imply a nightside hemispheric downward N flux of (2.5-3) × 10^9 cm^-2 s^-1, corresponding to the dayside net production of N atoms needed for transport.
Cooperativity in Molecular Dynamics Structural Models and the Dielectric Spectra of 1,2-Ethanediol
NASA Astrophysics Data System (ADS)
Usacheva, T. M.
2018-05-01
Linear relationships are established between the experimental equilibrium correlation factor and the molecular dynamics (MD) mean
High-energy transformations of polyfluoroalkanes. IX pyrolysis of 1,1-difluoroethane
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitin, P.V.; Golovin, A.V.; Grigor`eva, T.Yu.
1994-07-10
Kinetics of the unimolecular thermal dehydrofluorination of 1,1-difluoroethane in a flow reactor is reported. The first-order rate constant is determined: log k [1/s] = (-60,000 ± 2,000)/(4.569·T) + 13.33 ± 0.10. 1,1-Difluoroethylene, as a by-product of the pyrolysis of 1,1-difluoroethane, is formed by a radical mechanism, for which a heterogeneous initiation stage is proposed. MNDO calculations show the predominant formation of the CH3-CF2 radical at the initiation stage. For this radical, rate constants of unimolecular 1→2 and 2→1 hydrogen shifts are determined within the framework of RRKM statistical theory. 17 refs., 4 figs., 2 tabs.
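The reported fit, log10 k = -60,000/(4.569·T) + 13.33, is an Arrhenius law with E_a ≈ 60 kcal/mol (4.569 ≈ 2.303·R in cal·mol⁻¹·K⁻¹) and log10 A ≈ 13.33. A small sketch evaluating it follows; treating k as a first-order constant in s⁻¹ and using only the central values of the fit are assumptions here.

```python
def rate_constant(T_kelvin):
    """First-order rate constant from the reported Arrhenius fit
    (central values only; the quoted uncertainties are ignored)."""
    log10_k = -60000.0 / (4.569 * T_kelvin) + 13.33
    return 10.0 ** log10_k
```

At flow-reactor temperatures of several hundred kelvin the rate rises steeply, as expected for a ~60 kcal/mol barrier.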
Giant mesoscopic fluctuations of the elastic cotunneling thermopower of a single-electron transistor
NASA Astrophysics Data System (ADS)
Vasenko, A. S.; Basko, D. M.; Hekking, F. W. J.
2015-02-01
We study the thermoelectric transport of a small metallic island weakly coupled to two electrodes by tunnel junctions. In the Coulomb blockade regime, in the case when the ground state of the system corresponds to an even number of electrons on the island, the main mechanism of electron transport at the lowest temperatures is elastic cotunneling. In this regime, the transport coefficients strongly depend on the realization of the random impurity potential or the shape of the island. Using random-matrix theory, we calculate the thermopower and the thermoelectric kinetic coefficient and study the statistics of their mesoscopic fluctuations in the elastic cotunneling regime. The fluctuations of the thermopower turn out to be much larger than the average value.
Zhao, Pei; Zhao, Xiang; Ehara, Masahiro
2018-03-19
Th@C76 has been studied by density functional theory combined with statistical mechanics calculations. The results reveal that Th@Td(19151)-C76, satisfying the isolated pentagon rule, possesses the lowest energy. Nevertheless, considering the enthalpy-entropy interplay, Th@C1(17418)-C76, with one pair of adjacent pentagons, is thermodynamically favorable at elevated temperatures, which is reported for the first time. The bonding critical points in both isomers were analyzed to disclose covalent interactions between the inner Th and the cages. In addition, the Wiberg bond orders of M-C bonding in different endohedral metallofullerenes (EMFs) were investigated to prove the stronger covalent interactions of Th-C in Th-based EMFs.
Critical parameters of hard-core Yukawa fluids within the structural theory
NASA Astrophysics Data System (ADS)
Bahaa Khedr, M.; Osman, S. M.
2012-10-01
A purely statistical mechanical approach is proposed to account for the liquid-vapor critical point based on the mean density approximation (MDA) of the direct correlation function. The application to hard-core Yukawa (HCY) fluids facilitates the use of the series mean spherical approximation (SMSA). The location of the critical parameters for the HCY fluid with variable intermolecular range is accurately calculated. Good agreement is observed with computer simulation results and with the inverse temperature expansion (ITE) predictions. The influence of the potential range on the critical parameters is demonstrated and the universality of the critical compressibility ratio is discussed. The behavior of the isochoric and isobaric heat capacities along the equilibrium line and in the near vicinity of the critical point is discussed in detail.
Financial instability from local market measures
NASA Astrophysics Data System (ADS)
Bardoscia, Marco; Livan, Giacomo; Marsili, Matteo
2012-08-01
We study the emergence of instabilities in a stylized model of a financial market, when different market actors calculate prices according to different (local) market measures. We derive typical properties for ensembles of large random markets using techniques borrowed from statistical mechanics of disordered systems. We show that, depending on the number of financial instruments available and on the heterogeneity of local measures, the market moves from an arbitrage-free phase to an unstable one, where the complexity of the market—as measured by the diversity of financial instruments—increases, and arbitrage opportunities arise. A sharp transition separates the two phases. Focusing on two different classes of local measures inspired by real market strategies, we are able to analytically compute the critical lines, corroborating our findings with numerical simulations.
Statistical Analysis on the Mechanical Properties of Magnesium Alloys
Liu, Ruoyu; Jiang, Xianquan; Zhang, Hongju; Zhang, Dingfei; Wang, Jingfeng; Pan, Fusheng
2017-01-01
Knowledge of the statistical characteristics of mechanical properties is very important for the practical application of structural materials. Unfortunately, the scatter characteristics of the mechanical performance of magnesium alloys have remained poorly understood. In this study, the mechanical reliability of magnesium alloys is systematically estimated using Weibull statistical analysis. Interestingly, the Weibull modulus, m, of strength for magnesium alloys is as high as that for aluminum alloys and steels, confirming the very high reliability of magnesium alloys. The high predictability of the tensile strength of magnesium alloys represents the capability of preventing catastrophic premature failure during service, which is essential for safety and reliability assessment. PMID:29113116
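A Weibull modulus of the kind reported above is commonly estimated with a linearized least-squares fit of ranked strength data. A minimal sketch, assuming a median-rank probability estimator and synthetic data (none of these choices are taken from the study):

```python
import numpy as np

def weibull_modulus(strengths):
    """Estimate the Weibull modulus m and scale sigma0 from a sample of
    fracture strengths, using the standard linearized fit
    ln(ln(1/(1-F))) = m*ln(sigma) - m*ln(sigma0)."""
    s = np.sort(np.asarray(strengths, dtype=float))
    n = len(s)
    # Median-rank probability estimator F_i = (i - 0.3) / (n + 0.4)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
    x = np.log(s)
    y = np.log(-np.log(1.0 - F))
    m, c = np.polyfit(x, y, 1)
    sigma0 = np.exp(-c / m)
    return m, sigma0

# Synthetic strengths drawn from a Weibull distribution with m = 10,
# scale 250 MPa; the fit should recover both parameters approximately.
rng = np.random.default_rng(0)
sample = 250.0 * rng.weibull(10.0, size=500)
m_est, s0_est = weibull_modulus(sample)
```

A high fitted m (steep slope) corresponds to the narrow strength scatter, and hence high reliability, that the abstract emphasizes.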
Transport Coefficients from Large Deviation Functions
NASA Astrophysics Data System (ADS)
Gao, Chloe; Limmer, David
2017-10-01
We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying only on equilibrium fluctuations, and is statistically efficient, employing trajectory-based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to free energies. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green-Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, the interfacial friction coefficient, and the thermal conductivity.
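The "traditional Green-Kubo based calculations" used as the baseline here integrate an equilibrium current autocorrelation function. A minimal sketch of that baseline, with a synthetic exponentially correlated current standing in for a real molecular current (the prefactor and data are illustrative assumptions):

```python
import numpy as np

def green_kubo(current, dt, max_lag, prefactor=1.0):
    """Transport coefficient from the Green-Kubo relation:
    kappa = prefactor * integral_0^t_max <J(0)J(t)> dt,
    with the one-sided autocorrelation averaged over time origins."""
    J = np.asarray(current) - np.mean(current)
    n = len(J)
    acf = np.array([np.mean(J[: n - k] * J[k:]) for k in range(max_lag)])
    # Trapezoidal integration of the autocorrelation function
    return prefactor * dt * (0.5 * acf[0] + acf[1:].sum())

# Synthetic current: AR(1) noise with unit stationary variance and
# correlation time tau, whose exact Green-Kubo integral is tau.
rng = np.random.default_rng(1)
dt, tau, nsteps = 0.01, 0.5, 200_000
decay = np.exp(-dt / tau)
J = np.zeros(nsteps)
for i in range(1, nsteps):
    J[i] = decay * J[i - 1] + rng.normal(0.0, np.sqrt(1.0 - decay**2))
kappa = green_kubo(J, dt, max_lag=1000)  # expected ≈ tau = 0.5
```

The slow convergence of this time integral for noisy currents is exactly the statistical-error problem the large-deviation approach in the abstract is designed to improve on.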
BIG BANG NUCLEOSYNTHESIS WITH A NON-MAXWELLIAN DISTRIBUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertulani, C. A.; Fuqua, J.; Hussein, M. S.
The abundances of light elements based on the big bang nucleosynthesis model are calculated using the Tsallis non-extensive statistics. The impact of the variation of the non-extensive parameter q from unity is compared to observations and to the abundance yields from the standard big bang model. We find large differences between the reaction rates and the abundance of light elements calculated with the extensive and the non-extensive statistics. We find that the observations are consistent with a non-extensive parameter q = 1 (+0.05/-0.12), indicating that a large deviation from Boltzmann-Gibbs statistics (q = 1) is highly unlikely.
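The Tsallis statistics used here replaces the Boltzmann factor exp(-E/kT) with a q-exponential that recovers the ordinary exponential as q → 1. A small sketch of that building block (purely illustrative; this is not the paper's reaction-rate code, and the sample arguments are made up):

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential: exp_q(x) = [1 + (1-q)x]^(1/(1-q)) when the
    bracket is positive, 0 otherwise; reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0
    return base ** (1.0 / (1.0 - q))

# For q slightly above 1 the distribution tail is fatter than the
# Maxwell-Boltzmann exp(-E/kT) tail, which is what modifies the rates.
boltz = q_exp(-5.0, 1.0)      # ordinary exp(-5)
tsallis = q_exp(-5.0, 1.05)   # enhanced tail for q > 1
```

The enhanced high-energy tail for q > 1 (and suppressed tail for q < 1) is what drives the large reaction-rate differences reported in the abstract.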
NASA Astrophysics Data System (ADS)
Winkel, B. V.
1995-03-01
The purpose of this report is to document the Multi-Function Waste Tank Facility (MWTF) Project position on the concrete mechanical properties needed to perform design/analysis calculations for the MWTF secondary concrete structure. This report provides a position on MWTF concrete properties for the Title 1 and Title 2 calculations. The scope of the report is limited to mechanical properties and does not include the thermophysical properties of concrete needed to perform heat transfer calculations. In the 1970s, a comprehensive series of tests was performed at Construction Technology Laboratories (CTL) on two different Hanford concrete mix designs. Statistical correlations of the CTL data were later generated by Pacific Northwest Laboratories (PNL). These test results and property correlations have been utilized in various design/analysis efforts for Hanford waste tanks. However, due to changes in the concrete design mix and the lower range of MWTF operating temperatures, plus uncertainties in the CTL data and PNL correlations, it was prudent to evaluate the CTL database and PNL correlations relative to the MWTF application and develop a defendable position. The CTL test program for Hanford concrete involved two different mix designs: a 3 kip/sq in mix and a 4.5 kip/sq in mix. The proposed 28-day design strength for the MWTF tanks is 5 kip/sq in. In addition to this design strength difference, there are also differences between the CTL and MWTF mix design details. Also of interest is the appropriate application of the MWTF concrete properties in calculations demonstrating ACI Code compliance. Mix design details and ACI Code issues are addressed in Sections 3.0 and 5.0, respectively. The CTL test program and PNL data correlations focused on a temperature range of 250 to 450 F. The temperature range of interest for the MWTF tank concrete application is 70 to 200 F.
ERIC Educational Resources Information Center
National Centre for Vocational Education Research, Leabrook (Australia).
Statistics regarding Australians participating in apprenticeships and traineeships in the mechanical engineering and fabrication trades in 1995-1999 were reviewed to provide an indication of where skill shortages may be occurring or will likely occur in relation to the following occupations: mechanical engineering trades; fabrication engineering…
A Commercial IOTV Cleaning Study
2010-04-12
manufacturer's list price without taking possible volume discounts into consideration. Equipment depreciation cost was calculated based on... Shrinkage statistical data are tabulated for traditional wet laundering (with and without prewash spot cleaning), computer-controlled wet cleaning (without prewash spot cleaning), and liquid CO2 cleaning.
Using R-Project for Free Statistical Analysis in Extension Research
ERIC Educational Resources Information Center
Mangiafico, Salvatore S.
2013-01-01
One option for Extension professionals wishing to use free statistical software is to use online calculators, which are useful for common, simple analyses. A second option is to use a free computing environment capable of performing statistical analyses, like R-project. R-project is free, cross-platform, powerful, and respected, but may be…
Using Spreadsheets to Teach Statistics in Geography.
ERIC Educational Resources Information Center
Lee, M. P.; Soper, J. B.
1987-01-01
Maintains that teaching methods of statistical calculation in geography may be enhanced by using a computer spreadsheet. The spreadsheet format of rows and columns allows the data to be inspected and altered to demonstrate various statistical properties. The inclusion of graphics and database facilities further adds to the value of a spreadsheet.
Statistical Mechanics Provides Novel Insights into Microtubule Stability and Mechanism of Shrinkage
Jain, Ishutesh; Inamdar, Mandar M.; Padinhateeri, Ranjith
2015-01-01
Microtubules are nano-machines that grow and shrink stochastically, making use of the coupling between chemical kinetics and the mechanics of their constituent protofilaments (PFs). We investigate the stability and shrinkage of microtubules taking into account inter-protofilament interactions and bending interactions of intrinsically curved PFs. Computing the free energy as a function of PF tip position, we show that the competition between curvature energy, inter-PF interaction energy and entropy leads to a rich landscape with a series of minima that repeat over a length-scale determined by the intrinsic curvature. Computing Langevin dynamics of the tip through the landscape and accounting for depolymerization, we calculate the average unzippering and shrinkage velocities of GDP protofilaments and compare them with the experimentally known results. Our analysis predicts that the strength of the inter-PF interaction (E_ms) has to be comparable to the strength of the curvature energy (E_mb), such that E_ms - E_mb ≈ 1 k_BT, and questions the prevalent notion that unzippering results from the domination of the bending energy of curved GDP PFs. Our work demonstrates how the shape of the free energy landscape is crucial in explaining the mechanism of MT shrinkage, where the unzippered PFs fluctuate in a set of partially peeled-off states and subunit dissociation reduces the length. PMID:25692909
Mechanical properties of photomultiplier tube glasses for neutrino detection
Dongol, Ruhil; Chambliss, Kameron; Sundaram, Shanmugavelayutham K.; ...
2015-08-31
Photomultiplier tubes (PMT) are one of the primary components of water Cherenkov neutrino detection for the Long Baseline Neutrino Experiment (LBNE). Thousands of 10- to 12-inch diameter PMT bulbs are placed in the inner wall of a detection tank or a reservoir (e.g., a deep mine) filled with 10,000 gallons of high-purity water with a resistivity of 11-18.24 MΩ-cm. Long-term service of PMTs is vital to the success of neutrino detection projects. We report the results of our investigation of the mechanical properties of PMT glasses from two vendors and the effect of ion exchange on their mechanical strength. Vickers indentation, four-point bend tests, and ring-on-ring biaxial flexural strength tests were used for evaluation of the mechanical strength. Chemical (potassium-sodium ion exchange) strengthening results show a 46% increase in strength in one vendor's glass and a 57% increase in the other's, with no significant reduction in optical transmission in the ultraviolet-visible range of the electromagnetic spectrum that is critical to neutrino detection. Finally, our results also show narrowing of the distribution of strength calculated using Weibull statistics with chemical strengthening, for comparable exchange depths of 22-28 μm.
GLIMMPSE Lite: Calculating Power and Sample Size on Smartphone Devices
Munjal, Aarti; Sakhadeo, Uttara R.; Muller, Keith E.; Glueck, Deborah H.; Kreidler, Sarah M.
2014-01-01
Researchers seeking to develop complex statistical applications for mobile devices face a common set of difficult implementation issues. In this work, we discuss general solutions to the design challenges. We demonstrate the utility of the solutions for a free mobile application designed to provide power and sample size calculations for univariate, one-way analysis of variance (ANOVA), GLIMMPSE Lite. Our design decisions provide a guide for other scientists seeking to produce statistical software for mobile platforms. PMID:25541688
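Power for a one-way ANOVA of the sort GLIMMPSE Lite computes can be checked by brute-force simulation; a closed-form implementation would use the noncentral F distribution instead. A minimal Monte Carlo sketch (the group means, sigma, and sample sizes are made-up example inputs, not values from the paper):

```python
import numpy as np

def f_stat(groups):
    """One-way ANOVA F statistic for a (k, n) array of group samples."""
    k, n = groups.shape
    gm = groups.mean()
    ss_between = n * ((groups.mean(axis=1) - gm) ** 2).sum()
    ss_within = ((groups - groups.mean(axis=1, keepdims=True)) ** 2).sum()
    return (ss_between / (k - 1)) / (ss_within / (k * (n - 1)))

def anova_power_mc(means, sigma, n, alpha=0.05, reps=4000, seed=0):
    """Monte Carlo power of the overall F test: the critical value is
    estimated from null-hypothesis simulations, and power from
    simulations at the stated effect size."""
    rng = np.random.default_rng(seed)
    k = len(means)
    null = np.array([f_stat(rng.normal(0.0, sigma, (k, n)))
                     for _ in range(reps)])
    crit = np.quantile(null, 1.0 - alpha)
    mu = np.array(means, dtype=float)[:, None]
    alt = np.array([f_stat(rng.normal(mu, sigma, (k, n)))
                    for _ in range(reps)])
    return float((alt > crit).mean())

# Three groups, moderate effect: power should land in the mid-0.8s
power = anova_power_mc([10.0, 12.0, 14.0], sigma=4.0, n=20)
```

On a phone-class device the closed-form noncentral-F route is obviously preferable; the simulation above is just a reference check on such a calculator.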
NASA Astrophysics Data System (ADS)
Vavylonis, Dimitrios
2009-03-01
I will describe my experience in developing an interdisciplinary biophysics course addressed to students at the upper undergraduate and graduate level, in collaboration with colleagues in physics and biology. The students had a background in physics, biology and engineering, and for many the course was their first exposure to interdisciplinary topics. The course did not depend on a formal knowledge of equilibrium statistical mechanics. Instead, the approach was based on dynamics. I used diffusion as a universal "long time" law to illustrate scaling concepts. The importance of statistics and proper counting of states/paths was introduced by calculating the maximum accuracy with which bacteria can measure the concentration of diffuse chemicals. The use of quantitative concepts and methods was introduced through specific biological examples, focusing on model organisms and extremes at the cell level. Examples included microtubule dynamic instability, the search and capture model, molecular motor cooperativity in muscle cells, mitotic spindle oscillations in C. elegans, polymerization forces and propulsion of pathogenic bacteria, Brownian ratchets, bacterial cell division and MinD oscillations.
The challenge of mapping between two medical coding systems.
Wojcik, Barbara E; Stein, Catherine R; Devore, Raymond B; Hassell, L Harrison
2006-11-01
Deployable medical systems patient conditions (PCs) designate groups of patients with similar medical conditions and, therefore, similar treatment requirements. PCs are used by the U.S. military to estimate field medical resources needed in combat operations. Information associated with each of the 389 PCs is based on subject matter expert opinion, instead of direct derivation from standard medical codes. Currently, no mechanisms exist to tie current or historical medical data to PCs. Our study objective was to determine whether reliable conversion between PC codes and International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) diagnosis codes is possible. Data were analyzed for three professional coders assigning all applicable ICD-9-CM diagnosis codes to each PC code. Inter-rater reliability was measured by using Cohen's kappa statistic and percent agreement. Methods were developed to calculate kappa statistics when multiple responses could be selected from many possible categories. Overall, we found moderate support for the possibility of reliable conversion between PCs and ICD-9-CM diagnoses (mean kappa = 0.61). Current PCs should be modified into a system that is verifiable with real data.
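For the simple two-rater, single-label case, Cohen's kappa reduces to a short calculation; the multi-response kappa the authors had to develop is more involved. A sketch of the standard form with made-up counts (not the study's data):

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa from a square inter-rater confusion matrix:
    kappa = (p_o - p_e) / (1 - p_e)."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p_o = np.trace(t) / n                       # observed agreement
    p_e = (t.sum(0) * t.sum(1)).sum() / n**2    # chance-expected agreement
    return (p_o - p_e) / (1.0 - p_e)

# Two coders assigning 100 items to 2 categories
k = cohens_kappa([[45, 5],
                  [10, 40]])  # ≈ 0.70
```

A mean kappa of 0.61, as reported, falls in the range conventionally read as "moderate to substantial" agreement.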
Association between sleep difficulties as well as duration and hypertension: is BMI a mediator?
Carrillo-Larco, R M; Bernabe-Ortiz, A; Sacksteder, K A; Diez-Canseco, F; Cárdenas, M K; Gilman, R H; Miranda, J J
2017-01-01
Sleep difficulties and short sleep duration have been associated with hypertension. Though body mass index (BMI) may be a mediator variable, the mediation effect has not been defined. We aimed to assess the association between sleep duration and sleep difficulties with hypertension, to determine if BMI is a mediator variable, and to quantify the mediation effect. We conducted a mediation analysis and calculated prevalence ratios with 95% confidence intervals. The exposure variables were sleep duration and sleep difficulties, and the outcome was hypertension. Sleep difficulties were statistically significantly associated with a 43% higher prevalence of hypertension in multivariable analyses; results were not statistically significant for sleep duration. In these analyses, and in sex-specific subgroup analyses, we found no strong evidence that BMI mediated the association between sleep indices and risk of hypertension. Our findings suggest that BMI does not appear to mediate the association between sleep patterns and hypertension. These results highlight the need to further study the mechanisms underlying the relationship between sleep patterns and cardiovascular risk factors.
Understanding decomposition and encapsulation energies of structure I and II clathrate hydrates
NASA Astrophysics Data System (ADS)
Alavi, Saman; Ohmura, Ryo
2016-10-01
When compressed with water or ice under high pressure and low temperature conditions, some gases form solid gas hydrate inclusion compounds which have higher melting points than ice under those pressures. In this work, we study the balance of the guest-water and water-water interaction energies that lead to the formation of the clathrate hydrate phases. In particular, molecular dynamics simulations with accurate water potentials are used to study the energetics of the formation of structure I (sI) and II (sII) clathrate hydrates of methane, ethane, and propane. The dissociation enthalpy of the clathrate hydrate phases, the encapsulation enthalpy of methane, ethane, and propane guests in the corresponding phases, and the average bonding enthalpy of water molecules are calculated and compared with accurate calorimetric measurements and previous classical and quantum mechanical calculations, when available. The encapsulation energies of methane, ethane, and propane guests stabilize the small and large sI and sII hydrate cages, with the larger molecules giving larger encapsulation energies. The average water-water interactions are weakened in the sI and sII phases compared to ice. The relative magnitudes of the van der Waals potential energy in ice and the hydrate phases are similar, but in the ice phase, the electrostatic interactions are stronger. The stabilizing guest-water "hydrophobic" interactions compensate for the weaker water-water interactions and stabilize the hydrate phases. A number of common assumptions regarding the guest-cage water interactions are used in the van der Waals-Platteeuw statistical mechanical theory to predict the clathrate hydrate phase stability under different pressure-temperature conditions. The present calculations show that some of these assumptions may not accurately reflect the physical nature of the interactions between guest molecules and the lattice waters.
Theoretical study of the kinetics of reactions of the monohalogenated methanes with atomic chlorine.
Brudnik, Katarzyna; Twarda, Maria; Sarzyński, Dariusz; Jodkowski, Jerzy T
2013-04-01
Ab initio calculations at the G2 level were used in a theoretical description of the kinetics and mechanism of the hydrogen abstraction reactions from fluoro-, chloro- and bromomethane by chlorine atoms. The profiles of the potential energy surfaces show that the mechanism of the reactions under investigation is complex, consisting of two elementary steps in the case of CH3F+Cl and of three for CH3Cl+Cl and CH3Br+Cl. The heights of the energy barriers related to H-abstraction are 8-10 kJ mol(-1); the lowest value corresponds to CH3Cl+Cl and the highest to CH3F+Cl. The rate constants were calculated using the theoretical method based on RRKM theory and a simplified version of the statistical adiabatic channel model. The kinetic equations derived in this study, [Formula: see text] and [Formula: see text], allow a description of the kinetics of the reactions under investigation in the temperature range of 200-3000 K. The kinetics of reactions of the entirely deuterated reactants were also included in the kinetic analysis. Results of the ab initio calculations show that the D-abstraction process involves an energy barrier 5 kJ mol(-1) higher than that for H-abstraction from the corresponding non-deuterated reactant molecule. The derived analytical equations for the reactions CH3X+Cl → CH2X+HCl and CD3X+Cl → CD2X+DCl (X = F, Cl and Br) are a substantial supplement to the kinetic data necessary for the description and modeling of processes of importance in atmospheric chemistry.
Pazira, Parvin; Rostami Haji-Abadi, Mahdi; Zolaktaf, Vahid; Sabahi, Mohammadfarzan; Pazira, Toomaj
2016-06-08
In relation to statistical analysis, studies to determine the validity, reliability, objectivity and precision of new measuring devices are usually incomplete, due in part to using only the correlation coefficient and ignoring the data dispersion. The aim of this study was to demonstrate the best way to determine the validity, reliability, objectivity and accuracy of an electro-inclinometer or other measuring devices. Another purpose of this study was to answer the question of whether reliability and objectivity represent the accuracy of measuring devices. The validity of an electro-inclinometer was examined by mechanical and geometric methods. The objectivity and reliability of the device were assessed by calculating Cronbach's alpha for repeated measurements by three raters and for measurements on the same person by a mechanical goniometer and the electro-inclinometer. Measurements were performed on "hip flexion with the extended knee" and "shoulder abduction with the extended elbow." The raters measured every angle three times within an interval of two hours. A three-way ANOVA was used to determine accuracy. The results of the mechanical and geometric analysis showed that the validity of the electro-inclinometer was 1.00 and the level of error was less than one degree. The objectivity and reliability of the electro-inclinometer were 0.999, while the objectivity of the mechanical goniometer was in the range of 0.802 to 0.966 and its reliability was 0.760 to 0.961. For hip flexion, the difference between raters in joint angle measurement by the electro-inclinometer and the mechanical goniometer was 1.74 and 16.33 degrees (P<0.05), respectively. The differences for shoulder abduction measured by the electro-inclinometer and the goniometer were 0.35 and 4.40 degrees (P<0.05). Although both the objectivity and reliability are acceptable, the results showed that measurement error was very high for the mechanical goniometer.
Therefore, it can be concluded that objectivity and reliability alone cannot determine the accuracy of a device and it is preferable to use other statistical methods to compare and evaluate the accuracy of these two devices.
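Cronbach's alpha as used above can be computed directly from the matrix of repeated measurements. A minimal sketch with illustrative angle data (not the study's measurements):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Three repeated angle measurements (columns) on five subjects (rows)
angles = [[120.1, 120.3, 120.2],
          [ 95.4,  95.1,  95.3],
          [110.0, 110.2, 110.1],
          [101.7, 101.5, 101.6],
          [130.2, 130.4, 130.3]]
alpha = cronbach_alpha(angles)  # close to 1 for highly consistent measurements
```

Note the study's own point: alpha rewards consistency across repetitions, so a goniometer with a large but systematic error can still score near 1, which is why alpha alone cannot establish accuracy.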
Blum, T.; Boyle, P. A.; Izubuchi, T.; ...
2016-06-08
Here we report the first lattice QCD calculation of the hadronic vacuum polarization (HVP) disconnected contribution to the muon anomalous magnetic moment at physical pion mass. The calculation uses a refined noise-reduction technique that enables the control of statistical uncertainties at the desired level with modest computational effort. Measurements were performed on the 48^3 × 96 physical-pion-mass lattice generated by the RBC and UKQCD Collaborations. In conclusion, we find the leading-order hadronic vacuum polarization a_mu^{HVP(LO)disc} = -9.6(3.3)(2.3) × 10^{-10}, where the first error is statistical and the second systematic.
Light clusters and pasta phases in warm and dense nuclear matter
NASA Astrophysics Data System (ADS)
Avancini, Sidney S.; Ferreira, Márcio; Pais, Helena; Providência, Constança; Röpke, Gerd
2017-04-01
The pasta phases are calculated for warm stellar matter in a framework of relativistic mean-field models, including the possibility of light cluster formation. Results from three different semiclassical approaches are compared with a quantum statistical calculation. Light clusters are considered as point-like particles, and their abundances are determined from the minimization of the free energy. The couplings of the light clusters to mesons are determined from experimental chemical equilibrium constants and many-body quantum statistical calculations. The effect of these light clusters on the chemical potentials is also discussed. It is shown that, by including heavy clusters, light clusters are present up to larger nucleonic densities, although with smaller mass fractions.
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
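In the phase-known, fully informative case, the two-point LOD score and its maximum over the recombination fraction have a compact form. A toy sketch using only a binomial likelihood, far simpler than the family-data computations in the paper (the counts are made up):

```python
import math

def lod(theta, recombinants, meioses):
    """Two-point LOD score at recombination fraction theta:
    log10[ L(theta) / L(theta = 0.5) ] with a binomial likelihood."""
    r, n = recombinants, meioses
    theta = max(theta, 1e-12)  # guard against log10(0)
    L1 = r * math.log10(theta) + (n - r) * math.log10(1.0 - theta)
    L0 = n * math.log10(0.5)   # no linkage
    return L1 - L0

def max_lod(recombinants, meioses, grid=None):
    """Maximize the LOD score over a grid of theta values -- the
    'maximum LOD score' statistic discussed above."""
    grid = grid or [i / 1000 for i in range(1, 500)]
    return max(lod(t, recombinants, meioses) for t in grid)

# 2 recombinants observed in 20 informative meioses
best = max_lod(2, 20)
```

Maximizing over theta (and, in the paper's setting, over penetrance and phenocopy parameters as well) inflates the statistic under the null, which is exactly why the maximized version needs a higher critical value.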
The Validity and Reliability of an iPhone App for Measuring Running Mechanics.
Balsalobre-Fernández, Carlos; Agopyan, Hovannes; Morin, Jean-Benoit
2017-07-01
The purpose of this investigation was to analyze the validity of an iPhone application (Runmatic) for measuring running mechanics. To do this, 96 steps from 12 different runs at speeds ranging from 2.77-5.55 m·s^-1 were recorded simultaneously with Runmatic, as well as with an opto-electronic device installed on a motorized treadmill to measure the contact and aerial time of each step. Additionally, several running mechanics variables were calculated using the contact and aerial times measured, and previously validated equations. Several statistics were computed to test the validity and reliability of Runmatic in comparison with the opto-electronic device for the measurement of contact time, aerial time, vertical oscillation, leg stiffness, maximum relative force, and step frequency. The running mechanics values obtained with both the app and the opto-electronic device showed a high degree of correlation (r = .94-.99, p < .001). Moreover, there was very close agreement between instruments as revealed by the ICC(2,1) (ICC = 0.965-0.991). Finally, both Runmatic and the opto-electronic device showed almost identical reliability levels when measuring each set of 8 steps for every run recorded. In conclusion, Runmatic has been proven to be a highly reliable tool for measuring the running mechanics studied in this work.
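The ICC(2,1) agreement index reported here comes from a two-way random-effects ANOVA decomposition (the Shrout-Fleiss single-measure form). A minimal sketch with made-up contact-time data, not the study's measurements:

```python
import numpy as np

def icc_2_1(data):
    """Two-way random, single-measure ICC(2,1) for an (n_targets x k_raters)
    matrix: (MSR - MSE) / (MSR + (k-1)MSE + k(MSC - MSE)/n)."""
    Y = np.asarray(data, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    msr = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # targets
    msc = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    sse = ((Y - Y.mean(axis=1, keepdims=True)
              - Y.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Contact times (ms) for five steps measured by two instruments
icc = icc_2_1([[250, 252],
               [231, 230],
               [243, 245],
               [260, 262],
               [238, 239]])
```

Unlike a plain correlation, ICC(2,1) penalizes systematic offsets between instruments, which is why it is the appropriate agreement statistic for an app-versus-reference comparison like this one.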
NASA Astrophysics Data System (ADS)
Gorkunov, M. V.; Osipov, M. A.; Kapernaum, N.; Nonnenmacher, D.; Giesselmann, F.
2011-11-01
A molecular statistical theory of the smectic A phase is developed taking into account specific interactions between different molecular fragments, which enables one to describe different microscopic scenarios of the transition into the smectic phase. The effects of nanoscale segregation are described using molecular models with different combinations of attractive and repulsive sites. These models have been used to calculate numerically the coefficients in the mean-field potential as functions of the molecular model parameters and the period of the smectic structure. The same coefficients are also calculated for a conventional smectic with a standard Gay-Berne interaction potential, which does not promote segregation. The free energy is minimized numerically to calculate the order parameters of the smectic A phases and to study the nature of the smectic transition in both systems. It has been found that in conventional materials the smectic order can be stabilized only when the orientational order is sufficiently high. In contrast, in materials with nanosegregation the smectic order develops mainly in the form of an orientational-translational wave while the nematic order parameter remains relatively small. Microscopic mechanisms of smectic ordering in both systems are discussed in detail, and the results for the smectic order parameters are compared with experimental data for materials of various molecular structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakraborty, Kaushik; Sinha, Sudipta Kumar; Bandyopadhyay, Sanjoy, E-mail: sanjoy@chem.iitkgp.ernet.in
The noncovalent interaction between protein and DNA is responsible for regulating genetic activity in living organisms. The most critical issue in this problem is to understand the underlying driving force for the formation and stability of the complex. To address this issue, we have performed atomistic molecular dynamics simulations of two DNA-binding K homology (KH) domains (KH3 and KH4) of the far upstream element binding protein (FBP) complexed with two single-stranded DNA (ss-DNA) oligomers in aqueous media. Attempts have been made to calculate the individual components of the net entropy change for the complexation process by adopting suitable statistical mechanical approaches. Our calculations reveal that the translational, rotational, and configurational entropy changes of the protein and the DNA components make unfavourable contributions to this protein-DNA association process, and this entropy loss is compensated by the entropy gained due to the release of hydration layer water molecules. The free energy change corresponding to the association process has also been calculated using the Free Energy Perturbation (FEP) method. The free energy gain associated with KH4–DNA complex formation has been found to be noticeably higher than that involving the formation of the KH3–DNA complex.
Quantifying Density Fluctuations in Volumes of All Shapes and Sizes Using Indirect Umbrella Sampling
NASA Astrophysics Data System (ADS)
Patel, Amish J.; Varilly, Patrick; Chandler, David; Garde, Shekhar
2011-10-01
Water density fluctuations are an important statistical mechanical observable and are related to many-body correlations, as well as hydrophobic hydration and interactions. Local water density fluctuations at a solid-water surface have also been proposed as a measure of its hydrophobicity. These fluctuations can be quantified by calculating the probability, P_v(N), of observing N waters in a probe volume of interest v. When v is large, calculating P_v(N) using molecular dynamics simulations is challenging, as the probability of observing very few waters is exponentially small, and the standard procedure for overcoming this problem (umbrella sampling in N) leads to undesirable impulsive forces. Patel et al. (J. Phys. Chem. B 114:1632, 2010) have recently developed an indirect umbrella sampling (INDUS) method that samples a coarse-grained particle number to obtain P_v(N) in cuboidal volumes. Here, we present and demonstrate an extension of that approach to volumes of other basic shapes, like spheres and cylinders, as well as to collections of such volumes. We further describe the implementation of INDUS in the NPT ensemble and calculate P_v(N) distributions over a broad range of pressures. Our method may be of particular interest in characterizing the hydrophobicity of interfaces of proteins, nanotubes and related systems.
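For small probe volumes, P_v(N) can be estimated by direct counting over simulation frames; INDUS is needed precisely when this brute-force route fails, i.e., when the low-N tail is exponentially rare. A sketch of the direct estimate, using random ideal-gas configurations as stand-in data (all parameters are illustrative):

```python
import numpy as np

def pv_n(frames, box, center, radius):
    """Direct (brute-force) estimate of P_v(N): the probability of finding
    N particles inside a spherical probe volume v, averaged over frames.
    INDUS instead biases a coarse-grained N; this sketch only illustrates
    the unbiased observable."""
    counts = []
    for coords in frames:
        d = coords - center
        d -= box * np.round(d / box)  # minimum-image convention
        counts.append(int(np.sum(np.sum(d * d, axis=1) < radius**2)))
    hist = np.bincount(counts, minlength=max(counts) + 1)
    return hist / hist.sum()

# Ideal-gas check: N in a sub-volume of an ideal gas is Poisson distributed,
# so the mean of P_v(N) should equal rho * v.
rng = np.random.default_rng(2)
box = np.array([10.0, 10.0, 10.0])
frames = [rng.uniform(0, 10, size=(300, 3)) for _ in range(2000)]
P = pv_n(frames, box, center=np.array([5.0, 5.0, 5.0]), radius=1.5)
mean_N = sum(n * p for n, p in enumerate(P))  # ≈ 300 * (4/3)π(1.5)³/1000 ≈ 4.24
```

In a real hydrophobicity analysis the frames would come from an MD trajectory, and deviations of the low-N tail of P_v(N) from Gaussian/Poisson behavior are the signal of interest.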
NASA Astrophysics Data System (ADS)
Bianchi, Eugenio; Haggard, Hal M.; Rovelli, Carlo
2017-08-01
We show that in Oeckl's boundary formalism the boundary vectors that do not have a tensor form represent, in a precise sense, statistical states. Therefore the formalism incorporates quantum statistical mechanics naturally. We formulate general-covariant quantum statistical mechanics in this language. We illustrate the formalism by showing how it accounts for the Unruh effect. We observe that the distinction between pure and mixed states weakens in the general covariant context, suggesting that local gravitational processes are naturally statistical without a sharp quantal versus probabilistic distinction.
Introduction to the topical issue: Nonadditive entropy and nonextensive statistical mechanics
NASA Astrophysics Data System (ADS)
Sugiyama, Masaru
Dear CMT readers, it is my pleasure to introduce you to this topical issue dealing with a new research field of great interest, nonextensive statistical mechanics. This theory was initiated by Constantino Tsallis' work in 1988, as a possible generalization of Boltzmann-Gibbs thermostatistics. It is based on a nonadditive entropy, nowadays referred to as the Tsallis entropy. Nonextensive statistical mechanics is expected to be a consistent and unified theoretical framework for describing the macroscopic properties of complex systems that are anomalous in view of ordinary thermostatistics. In such systems, the long-standing problem regarding the relationship between statistical and dynamical laws is highlighted, since ergodicity and mixing may not be well realized in situations such as the edge of chaos. The phase space appears to self-organize into a structure that is not simply Euclidean but (multi)fractal. Due to this nontrivial structure, the concept of homogeneity of the system, which is the basic premise in ordinary thermodynamics, is violated, and accordingly the additivity postulate for thermodynamic quantities such as the internal energy and entropy may not be justified in general. (Physically, nonadditivity is deeply relevant to the nonextensivity of a system, in which the thermodynamic quantities do not scale with size in a simple way. Typical examples are systems with long-range interactions, such as self-gravitating systems and non-neutral charged ones.) A point of crucial importance here is that, phenomenologically, such an exotic phase-space structure has a fairly long lifetime. Therefore, this state, referred to as a metaequilibrium state or a nonequilibrium stationary state, appears to be described by a generalized entropic principle different from the traditional Boltzmann-Gibbs form, even though it may eventually approach the Boltzmann-Gibbs equilibrium state.
The limits t → ∞ and N → ∞ do not commute, where t and N are the time and the number of particles, respectively. The present topical issue is devoted to summarizing the current status of nonextensive statistical mechanics from various perspectives. It is my hope that this issue can inform the reader about one of the foremost research areas in thermostatistics. This issue consists of eight articles. The first one, by Tsallis and Brigatti, presents a general introduction and an overview of nonextensive statistical mechanics. At first glance, generalization of the ordinary Boltzmann-Gibbs-Shannon entropy might appear completely arbitrary, but Abe's article explains how Tsallis' generalization of the statistical entropy can be uniquely characterized by both physical and mathematical principles. Then, the article by Pluchino, Latora, and Rapisarda presents strong evidence that nonextensive statistical mechanics is in fact relevant to nonextensive systems with long-range interactions. The articles by Rajagopal, by Wada, and by Plastino, Miller, and Plastino are concerned with the macroscopic thermodynamic properties of nonextensive statistical mechanics. Rajagopal discusses the first and second laws of thermodynamics. Wada develops a discussion of the conditions under which the nonextensive statistical-mechanical formalism is thermodynamically stable. The work of Plastino, Miller, and Plastino addresses the thermodynamic Legendre-transform structure and its robustness under generalizations of entropy. After these fundamental investigations, Sakagami and Taruya examine the theory for self-gravitating systems. Finally, Beck presents the novel idea of so-called superstatistics, which provides nonextensive statistical mechanics with a physical interpretation based on nonequilibrium concepts including temperature fluctuations. Its applications to hydrodynamic turbulence and pattern formation in thermal convection are also discussed.
Nonextensive statistical mechanics is already a well-studied field, and a large number of works are available in the literature. The interested reader is encouraged to visit the URL http://tsallis.cat.cbpf.br/TEMUCO.pdf. There, one can find a comprehensive list of references to more than one thousand papers, including important results that, for lack of space, could not be mentioned in the present issue. Despite this large body of published work, nonextensive statistical mechanics is still a developing field. This is natural, since the program being undertaken is an extremely ambitious one that makes a serious attempt to enlarge the horizons of statistical mechanics. The possible influence of nonextensive statistical mechanics on continuum mechanics and thermodynamics seems to be wide and deep. I will therefore be happy if this issue contributes to attracting the interest of researchers and stimulates research activities not only in nonextensive statistical mechanics itself but also in continuum mechanics and thermodynamics in a wider context. As the editor of the present topical issue, I would like to express my sincere thanks to all those who contributed to making this issue possible. I cordially thank Professor S. Abe for advising me on the editorial policy. Without his help, the present topical issue would never have been brought out.
"Magnitude-based inference": a statistical review.
Welsh, Alan H; Knight, Emma J
2015-04-01
We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.
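For comparison with the nonstandard probabilities of "magnitude-based inference," the conventional frequentist interval the review recommends is straightforward to compute. A minimal sketch for the two-means problem, using a normal-approximation z interval with Welch's unequal-variance standard error (the exact analysis would use a t quantile; the data are hypothetical):

```python
import math
from statistics import mean, variance

def two_mean_ci(x, y, z=1.96):
    """Normal-approximation 95% confidence interval for the difference
    of two means, with Welch's unequal-variance standard error."""
    d = mean(x) - mean(y)
    se = math.sqrt(variance(x) / len(x) + variance(y) / len(y))
    return d - z * se, d + z * se

# Hypothetical measurements for two groups
lo, hi = two_mean_ci([10.1, 9.8, 10.4, 10.0, 9.9], [9.2, 9.5, 9.1, 9.4, 9.3])
```

The interval is centered on the observed difference of means; the review's point is that MBI's extra "beneficial/trivial/harmful" probabilities are not derived from this interval in any standard way.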
Landes, Reid D.; Lensing, Shelly Y.; Kodell, Ralph L.; Hauer-Jensen, Martin
2014-01-01
The dose of a substance that causes death in P% of a population is called an LDP, where LD stands for lethal dose. In radiation research, a common LDP of interest is the radiation dose that kills 50% of the population by a specified time, i.e., lethal dose 50 or LD50. When comparing LD50 between two populations, relative potency is the parameter of interest. In radiation research, this is commonly known as the dose reduction factor (DRF). Unfortunately, statistical inference on the dose reduction factor is seldom reported. We illustrate how to calculate confidence intervals for the dose reduction factor, which may then be used for statistical inference. Further, most dose reduction factor experiments use hundreds, rather than tens, of animals. Through better dosing strategies and the use of a recently available sample size formula, we also show how animal numbers may be reduced while maintaining high statistical power. The illustrations center on realistic examples comparing LD50 values between a radiation countermeasure group and a radiation-only control. We also provide easy-to-use spreadsheets for sample size calculations and confidence interval calculations, as well as SAS® and R code for the latter. PMID:24164553
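The DRF itself is a ratio of two LD50 estimates. A minimal sketch with hypothetical dose-mortality counts, using a simplified empirical-logit straight-line fit in place of a full probit regression (the paper's confidence-interval and sample-size machinery is not reproduced here):

```python
import numpy as np

def ld50_from_logit_fit(doses, n_dead, n_total):
    """LD50 via a linear fit of empirical logits against dose
    (a simplified stand-in for a full probit/logit regression)."""
    p = np.asarray(n_dead) / np.asarray(n_total)
    p = np.clip(p, 0.05, 0.95)             # keep empirical logits finite
    logit = np.log(p / (1 - p))
    slope, intercept = np.polyfit(doses, logit, 1)
    return -intercept / slope              # dose where logit = 0, i.e. 50% lethality

# Hypothetical data: radiation countermeasure group vs radiation-only control
doses = np.array([7.0, 8.0, 9.0, 10.0, 11.0])            # Gy
control = ld50_from_logit_fit(doses, [1, 3, 5, 8, 9], [10] * 5)
treated = ld50_from_logit_fit(doses, [0, 1, 3, 5, 8], [10] * 5)
drf = treated / control    # DRF > 1 indicates a protective effect
```

A confidence interval for the DRF would then be attached to this ratio, e.g. by the delta method or Fieller's theorem, as the abstract describes.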
Statistical analysis of radioimmunoassay. In comparison with bioassay (in Japanese)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakano, R.
1973-01-01
Using data from RIA (radioimmunoassay), statistical procedures for two problems, linearization of the dose-response curve and calculation of relative potency, were described. Three methods for linearizing the RIA dose-response curve were considered; in each, the following variables are plotted on the horizontal and vertical axes: dose x versus (B/T)^-1; c/(x + c) versus B/T (c: the dose at which B/T is 50%); and log x versus logit B/T. Among them, the last method seems most practical. The statistical procedures of bioassay were employed to calculate the relative potency of unknown samples compared with standard samples, from the dose-response curves of the standard and unknown samples, using the regression coefficient. It is desirable that relative potency be calculated by plotting more than 5 points on the standard curve and more than 2 points for the unknown samples. To examine the statistical limits of precision of the measurement, the LH activity of gonadotropin in urine was measured, and the relative potency, precision coefficient, and the upper and lower 95% confidence limits of the relative potency were calculated. Bioassay (by the ovarian ascorbic acid depletion method and the anterior prostate-lobe weighing method) was performed on the same samples, and its precision was compared with that of RIA. In these examinations, the upper and lower 95% confidence limits of the relative potency were close to each other for RIA, while in bioassay a considerable difference was observed between the upper and lower limits. The necessity of standardizing and systematizing the statistical procedures to increase the precision of RIA was pointed out. (JA)
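The third linearization, logit B/T against log x, amounts to a straight-line calibration from which unknown doses are read back. A sketch with hypothetical standard-curve values (the dose units and B/T readings are illustrative only):

```python
import math

def logit(p):
    return math.log(p / (1 - p))

# Hypothetical RIA standard curve: dose x (arbitrary units) vs bound fraction B/T
standards = [(1.0, 0.82), (2.0, 0.68), (4.0, 0.50), (8.0, 0.32), (16.0, 0.18)]

# Linearize: regress logit(B/T) on log x by least squares
xs = [math.log(x) for x, _ in standards]
ys = [logit(bt) for _, bt in standards]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

def dose_from_bt(bt):
    """Read an unknown sample's dose off the fitted standard curve."""
    return math.exp((logit(bt) - intercept) / slope)
```

With more than one dilution per unknown, relative potency follows from the horizontal offset between parallel fitted lines, as in classical parallel-line bioassay.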
Learning Predictive Statistics: Strategies and Brain Mechanisms.
Wang, Rui; Shen, Yuan; Tino, Peter; Welchman, Andrew E; Kourtzi, Zoe
2017-08-30
When immersed in a new environment, we are challenged to decipher initially incomprehensible streams of sensory information. However, quite rapidly, the brain finds structure and meaning in these incoming signals, helping us to predict and prepare ourselves for future actions. This skill relies on extracting the statistics of event streams in the environment that contain regularities of variable complexity from simple repetitive patterns to complex probabilistic combinations. Here, we test the brain mechanisms that mediate our ability to adapt to the environment's statistics and predict upcoming events. By combining behavioral training and multisession fMRI in human participants (male and female), we track the corticostriatal mechanisms that mediate learning of temporal sequences as they change in structure complexity. We show that learning of predictive structures relates to individual decision strategy; that is, selecting the most probable outcome in a given context (maximizing) versus matching the exact sequence statistics. These strategies engage distinct human brain regions: maximizing engages dorsolateral prefrontal, cingulate, sensory-motor regions, and basal ganglia (dorsal caudate, putamen), whereas matching engages occipitotemporal regions (including the hippocampus) and basal ganglia (ventral caudate). Our findings provide evidence for distinct corticostriatal mechanisms that facilitate our ability to extract behaviorally relevant statistics to make predictions. SIGNIFICANCE STATEMENT Making predictions about future events relies on interpreting streams of information that may initially appear incomprehensible. Past work has studied how humans identify repetitive patterns and associative pairings. However, the natural environment contains regularities that vary in complexity from simple repetition to complex probabilistic combinations. 
Here, we combine behavior and multisession fMRI to track the brain mechanisms that mediate our ability to adapt to changes in the environment's statistics. We provide evidence for an alternate route for learning complex temporal statistics: extracting the most probable outcome in a given context is implemented by interactions between executive and motor corticostriatal mechanisms compared with visual corticostriatal circuits (including hippocampal cortex) that support learning of the exact temporal statistics. Copyright © 2017 Wang et al.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jasper, Ahren
2015-04-14
The appropriateness of treating crossing seams of electronic states of different spins as nonadiabatic transition states in statistical calculations of spin-forbidden reaction rates is considered. We show that the spin-forbidden reaction coordinate, the nuclear coordinate perpendicular to the crossing seam, is coupled to the remaining nuclear degrees of freedom. We found that this coupling gives rise to multidimensional effects that are not typically included in statistical treatments of spin-forbidden kinetics. Three qualitative categories of multidimensional effects may be identified: static multidimensional effects due to the geometry dependence of the local shape of the crossing seam and of the spin–orbit coupling, dynamical multidimensional effects due to energy exchange with the reaction coordinate during the seam crossing, and nonlocal (history-dependent) multidimensional effects due to interference of the electronic variables at second, third, and later seam crossings. Nonlocal multidimensional effects are intimately related to electronic decoherence, where electronic dephasing acts to erase the history of the system. A semiclassical model based on short-time full-dimensional trajectories that includes all three multidimensional effects as well as a model for electronic decoherence is presented. The results of this multidimensional nonadiabatic statistical theory (MNST) for the O(³P) + CO → CO₂ reaction are compared with the results of statistical theories employing one-dimensional (Landau–Zener and weak coupling) models for the transition probability and with those calculated previously using multistate trajectories. The MNST method is shown to accurately reproduce the multistate decay-of-mixing trajectory results, so long as consistent thresholds are used. Furthermore, the MNST approach has several advantages over multistate trajectory approaches and is more suitable in chemical kinetics calculations at low temperatures and for complex systems.
The error in statistical calculations that neglect multidimensional effects is shown to be as large as a factor of 2 for this system, with static multidimensional effects identified as the largest source of error.
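The one-dimensional Landau–Zener model mentioned above gives the single-passage probability of changing spin surface at the crossing seam. A minimal sketch in SI units (the parameter values are illustrative, not taken from the paper):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def spin_change_probability(h12, speed, d_force):
    """Single-passage Landau-Zener probability of changing spin surface
    at a crossing seam: p = 1 - exp(-2*pi*H12^2 / (hbar * v * |dF|)).
    h12:     spin-orbit coupling at the crossing (J)
    speed:   nuclear velocity along the reaction coordinate (m/s)
    d_force: difference of the diabatic surface slopes at the crossing (N)
    """
    return 1.0 - math.exp(-2.0 * math.pi * h12**2 / (HBAR * speed * abs(d_force)))

# Illustrative values: 80 cm^-1 coupling (1 cm^-1 ≈ 1.986e-23 J),
# 1000 m/s crossing speed, slope difference of order 1 eV/Angstrom.
h12 = 80.0 * 1.986e-23
p = spin_change_probability(h12, speed=1000.0, d_force=1.6e-9)
```

In the weak-coupling limit the exponent is small and p reduces to 2π H12²/(ħ v |ΔF|); the paper's point is that any such one-dimensional expression misses the multidimensional effects quantified above.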
Six new mechanics corresponding to further shape theories
NASA Astrophysics Data System (ADS)
Anderson, Edward
2016-02-01
In this paper, a suite of relational notions of shape is presented at the level of configuration space geometry, with corresponding new theories of shape mechanics and shape statistics. These further generalize two quite well known examples: (i) Kendall’s (metric) shape space with his shape statistics and Barbour’s mechanics thereupon. (ii) Leibnizian relational space alias metric scale-and-shape space, to which corresponds Barbour-Bertotti mechanics. This paper’s new theories include, using the invariant and group namings, (iii) Angle alias conformal shape mechanics. (iv) Area ratio alias e shape mechanics. (v) Area alias e scale-and-shape mechanics. (iii)-(v) rest respectively on angle space, area-ratio space, and area space configuration spaces. Probability and statistics applications are also pointed to in outline. (vi) Various supersymmetric counterparts of (i)-(v) are considered. Since supergravity differs considerably from GR-based conceptions of background independence, some of the new supersymmetric shape mechanics are compared with both. These reveal compatibility between supersymmetry and GR-based conceptions of background independence, at least within these simpler model arenas.
NASA Astrophysics Data System (ADS)
Gaskill, Christopher Somers
The use of hard-walled narrow tubes, often called resonance tubes, for the purpose of voice therapy and voice training has a historical precedent and some theoretical support, but the mechanism of any potential benefit from the application of this technique has remained poorly understood. Fifteen vocally untrained male participants produced a series of spoken /a/ vowels at a modal pitch and constant loudness, followed by a minute of repeated phonation into a hard-walled glass tube at the same pitch and loudness targets. The tube parameters and tube phonation task criteria were selected according to theoretical calculations predicting an increase in the acoustic load such that phonation would occur under conditions of near-maximum inertive reactance. Following tube phonation, each participant repeated a similar series of spoken /a/ vowels. Electroglottography (EGG) was used to measure the glottal closed quotient (CQ) during each phase of the experiment. A single-subject, multiple-baseline design with direct replication across subjects was used to identify any changes in CQ across the phases of the experiment. Single-subject analysis using the method of Statistical Process Control (SPC) revealed statistically significant changes in CQ during tube phonation, but with no discernible pattern across the 15 participants. These results indicate that the use of resonance tubes can have a distinct effect on glottal closure, but the mechanism behind this change remains unclear. The implication is that vocal loading techniques such as this need to be studied further with specific attention paid to the underlying mechanism of any measured changes in glottal behavior, and especially to the role of instruction and feedback in the therapeutic and pedagogical application of these techniques.
Drug-Induced Dental Caries: A Disproportionality Analysis Using Data from VigiBase.
de Campaigno, Emilie Patras; Kebir, Inès; Montastruc, Jean-Louis; Rueter, Manuela; Maret, Delphine; Lapeyre-Mestre, Maryse; Sallerin, Brigitte; Despas, Fabien
2017-12-01
Dental caries is defined as a pathological breakdown of the tooth. It is an infectious phenomenon involving a multifactorial aetiology. The impact of drugs on cariogenic risk has been poorly investigated. In this study, we identified drugs suspected to induce dental caries as adverse drug reactions (ADRs) and then studied a possible pathogenic mechanism for each drug that had a statistically significant disproportionality. We extracted individual case safety reports of dental caries associated with drugs from VigiBase® (the World Health Organization global individual case safety report database). We calculated disproportionality for each drug with a reporting odds ratio (ROR) and 99% confidence interval. We analysed the pharmacodynamics of each drug that had a statistically significant disproportionality. In VigiBase®, 5229 safety reports for dental caries concerning 733 drugs were identified. Among these drugs, 88 had a significant ROR, and for 65 of them (73.9%), no information about dental caries was found in the summaries of the product characteristics, the Micromedex® DRUGDEX, or the Martindale databases. Regarding the pharmacological classes of drugs involved in dental caries, we identified bisphosphonates, atropinic drugs, antidepressants, corticoids, immunomodulating drugs, antipsychotics, antiepileptics, opioids and β2-adrenoreceptor agonist drugs. Regarding possible pathogenic mechanisms for these drugs, we identified changes in salivary flow/composition for 54 drugs (61.4%), bone metabolism changes for 31 drugs (35.2%), hyperglycaemia for 32 drugs (36.4%) and/or immunosuppression for 23 drugs (26.1%). For nine drugs (10.2%), the mechanism was unclear. We identified 88 drugs with a significant positive disproportionality for dental caries. Special attention has to be paid to bisphosphonates, atropinic drugs, immunosuppressants and drugs causing hyperglycaemia.
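The reporting odds ratio is computed from a 2x2 table of report counts, with the confidence interval formed on the log scale. A sketch with hypothetical counts, using z = 2.576 for the 99% interval the study uses:

```python
import math

def reporting_odds_ratio(a, b, c, d, z=2.576):
    """Disproportionality: ROR = (a/b) / (c/d) with a 99% CI (z = 2.576).
    a: reports of the event with the drug of interest
    b: reports of other events with the drug
    c: reports of the event with all other drugs
    d: reports of other events with all other drugs"""
    ror = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(ror) - z * se)
    hi = math.exp(math.log(ror) + z * se)
    return ror, lo, hi

# Hypothetical counts; the signal is "positive" when the lower bound exceeds 1
ror, lo, hi = reporting_odds_ratio(30, 970, 5000, 994000)
```

The "significant ROR" criterion in studies of this kind is precisely that the lower confidence bound stays above 1.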
NASA Astrophysics Data System (ADS)
Bernstein, V.; Kolodney, E.
2017-10-01
We have recently observed, both experimentally and computationally, the phenomenon of postcollision multifragmentation in sub-keV surface collisions of a C60 projectile. Namely, delayed multiparticle breakup of a strongly impact deformed and vibrationally excited large cluster collider into several large fragments, after leaving the surface. Molecular dynamics simulations with extensive statistics revealed a nearly simultaneous event, within a sub-psec time window. Here we study, computationally, additional essential aspects of this new delayed collisional fragmentation which were not addressed before. Specifically, we study here the delayed (binary) fission channel for different impact energies both by calculating mass distributions over all fission events and by calculating and analyzing lifetime distributions of the scattered projectile. We observe an asymmetric fission resulting in a most probable fission channel and we find an activated exponential (statistical) decay. Finally, we also calculate and discuss the fragment mass distribution in (triple) multifragmentation over different time windows, in terms of most abundant fragments.
Capture approximations beyond a statistical quantum mechanical method for atom-diatom reactions
NASA Astrophysics Data System (ADS)
Barrios, Lizandra; Rubayo-Soneira, Jesús; González-Lezana, Tomás
2016-03-01
Statistical techniques constitute useful approaches to investigate atom-diatom reactions mediated by insertion dynamics which involves complex-forming mechanisms. Different capture schemes based on energy considerations regarding the specific diatom rovibrational states are suggested to evaluate the corresponding probabilities of formation of such collision species between reactants and products in an attempt to test reliable alternatives for computationally demanding processes. These approximations are tested in combination with a statistical quantum mechanical method for the S + H2(v = 0 ,j = 1) → SH + H and Si + O2(v = 0 ,j = 1) → SiO + O reactions, where this dynamical mechanism plays a significant role, in order to probe their validity.
Upgrade Summer Severe Weather Tool
NASA Technical Reports Server (NTRS)
Watson, Leela
2011-01-01
The goal of this task was to upgrade the existing severe weather database by adding observations from the 2010 warm season, update the verification dataset with results from the 2010 warm season, apply statistical logistic regression analysis to the database, and develop a new forecast tool. The AMU analyzed 7 stability parameters that showed the possibility of providing guidance in forecasting severe weather, calculated verification statistics for the Total Threat Score (TTS), and calculated warm season verification statistics for the 2010 season. The AMU also performed statistical logistic regression analysis on the 22-year severe weather database. The results indicated that the logistic regression equation did not show an increase in skill over the previously developed TTS. The equation showed less accuracy than TTS at predicting severe weather, little ability to distinguish between severe and non-severe weather days, and worse standard categorical accuracy measures and skill scores than TTS.
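Categorical verification statistics of the kind computed for the TTS come from a 2x2 contingency table of forecast versus observed severe-weather days. A sketch with hypothetical counts, showing the probability of detection, false alarm ratio, and the Heidke skill score as one standard skill measure:

```python
def categorical_scores(hits, misses, false_alarms, correct_negatives):
    """Standard categorical verification measures from a 2x2 contingency
    table of forecast vs observed severe-weather days."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    n = a + b + c + d
    pod = a / (a + c)        # probability of detection
    far = b / (a + b)        # false alarm ratio
    # Heidke skill score: accuracy relative to random-chance agreement
    expect = ((a + b) * (a + c) + (c + d) * (b + d)) / n
    hss = (a + d - expect) / (n - expect)
    return pod, far, hss

pod, far, hss = categorical_scores(hits=25, misses=10,
                                   false_alarms=30, correct_negatives=135)
```

Comparing such scores between the regression equation and the TTS is how "worse standard categorical accuracy measures and skill scores" would be established.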
Condensate statistics and thermodynamics of weakly interacting Bose gas: Recursion relation approach
NASA Astrophysics Data System (ADS)
Dorfman, K. E.; Kim, M.; Svidzinsky, A. A.
2011-03-01
We study condensate statistics and thermodynamics of a weakly interacting Bose gas with a fixed total number N of particles in a cubic box. We find the exact recursion relation for the canonical ensemble partition function. Using this relation, we calculate the distribution function of condensate particles for N=200. We also calculate the distribution function based on multinomial expansion of the characteristic function. Similar to the ideal gas, both approaches give exact statistical moments for all temperatures in the framework of the Bogoliubov model. We compare them with the results of the unconstrained canonical ensemble quasiparticle formalism and the hybrid master equation approach. The present recursion relation can be used for any external potential and boundary conditions. We investigate the temperature dependence of the first few statistical moments of condensate fluctuations as well as thermodynamic potentials and heat capacity analytically and numerically in the whole temperature range.
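For the ideal gas, the analogous recursion is standard and compact: Z_N = (1/N) Σ_{k=1}^{N} Z_1(kβ) Z_{N-k}, with Z_0 = 1. A sketch for noninteracting bosons in a cubic box (the paper's recursion for the weakly interacting gas within the Bogoliubov model is not reproduced here):

```python
import numpy as np

def z1(beta, nmax=20):
    """Single-particle partition function for a particle in a cubic box,
    eps(nx,ny,nz) = nx^2 + ny^2 + nz^2 - 3 (ground state shifted to zero),
    in units of hbar^2 pi^2 / (2 m L^2)."""
    n = np.arange(1, nmax + 1)
    eps = n[:, None, None]**2 + n[None, :, None]**2 + n[None, None, :]**2 - 3.0
    return float(np.exp(-beta * eps).sum())

def canonical_partition_functions(n_particles, beta):
    """Exact recursion for the canonical partition function of ideal bosons:
        Z_N = (1/N) * sum_{k=1}^{N} Z_1(k*beta) * Z_{N-k},   Z_0 = 1."""
    z1_k = [None] + [z1(k * beta) for k in range(1, n_particles + 1)]
    Z = [1.0]
    for m in range(1, n_particles + 1):
        Z.append(sum(z1_k[k] * Z[m - k] for k in range(1, m + 1)) / m)
    return Z  # Z[m] is the m-particle canonical partition function

Z = canonical_partition_functions(30, beta=0.5)
```

Once the Z_m are known, the canonical condensate-number distribution and its moments follow from ratios of partition functions of the excited-state subsystem, which is the route the ideal-gas benchmark in such studies takes.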
Densely calculated facial soft tissue thickness for craniofacial reconstruction in Chinese adults.
Shui, Wuyang; Zhou, Mingquan; Deng, Qingqiong; Wu, Zhongke; Ji, Yuan; Li, Kang; He, Taiping; Jiang, Haiyan
2016-09-01
Craniofacial reconstruction (CFR) is used to recreate a likeness of original facial appearance for an unidentified skull; this technique has been applied in both forensics and archeology. Many CFR techniques rely on the average facial soft tissue thickness (FSTT) of anatomical landmarks, related to ethnicity, age, sex, body mass index (BMI), etc. Previous studies typically employed FSTT at sparsely distributed anatomical landmarks, and differing landmark definitions can make results difficult to compare. In the present study, a total of 90,198 one-to-one correspondence skull vertices are established on 171 head CT scans and the FSTT of each corresponding vertex is calculated (hereafter referred to as densely calculated FSTT) for statistical analysis and CFR. Basic descriptive statistics (i.e., mean and standard deviation) for densely calculated FSTT are reported separately according to sex and age. Results show that 76.12% of overall vertices indicate that the FSTT is greater in males than females, with the exception of vertices around the zygoma, zygomatic arch and mid-lateral orbit. Statistically significant sex-related differences are found at 55.12% of all vertices, and statistically significant age-related differences are found between the three age groups at a majority of vertices (73.31% for males and 63.43% for females). Five non-overlapping categories are given and the descriptive statistics (i.e., mean, standard deviation, local standard deviation and percentage) are reported. Multiple appearances are produced using the densely calculated FSTT of various age and sex groups, and a quantitative assessment is provided to examine how relevant the choice of FSTT is to increasing the accuracy of CFR. In conclusion, this study provides a new perspective in understanding the distribution of FSTT and the construction of a new densely calculated FSTT database for craniofacial reconstruction. Copyright © 2016. Published by Elsevier Ireland Ltd.
Chaikh, Abdulhamid; Balosso, Jacques
2016-12-01
To apply statistical bootstrap analysis and dosimetric criteria to assess the change in prescribed dose (PD) for lung cancer needed to maintain the same clinical results when using new generations of dose calculation algorithms. Nine lung cancer cases were studied. For each patient, three treatment plans were generated using exactly the same beam arrangements. In plan 1, the dose was calculated using the pencil beam convolution (PBC) algorithm with heterogeneity correction turned on, using modified Batho (PBC-MB). In plan 2, the dose was calculated using the anisotropic analytical algorithm (AAA) with the same PD as plan 1. In plan 3, the dose was calculated using AAA with the monitor units (MUs) obtained from PBC-MB as input. The dosimetric criteria include MUs, the delivered dose at the isocentre (Diso) and the calculated dose to 95% of the target volume (D95). The bootstrap method was used to assess the significance of the dose differences and to accurately estimate the 95% confidence interval (95% CI). Wilcoxon and Spearman's rank tests were used to calculate P values and the correlation coefficient (ρ). A statistically significant dose difference was found with the point kernel model. A good correlation was observed between both algorithm types, with ρ>0.9. Using AAA instead of PBC-MB, an adjustment of the PD at the isocentre is suggested. For a given set of patients, we assessed the need to readjust the PD for lung cancer using dosimetric indices and the bootstrap statistical method. Thus, if the goal is to maintain the same clinical results, the PD for lung tumors has to be adjusted with AAA. According to our simulation, we suggest readjusting the PD by 5% and optimizing beam arrangements to better protect the organs at risk (OARs).
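The bootstrap estimate of a 95% CI resamples patients with replacement and takes percentiles of the resampled means. A minimal sketch with hypothetical per-patient dose differences (the nine values below are illustrative, not the study's data):

```python
import random

def bootstrap_ci(values, n_boot=10000, alpha=0.05, seed=1):
    """Percentile-bootstrap (1 - alpha) confidence interval for a mean,
    e.g. a mean per-patient dose difference between two algorithms."""
    rng = random.Random(seed)
    n = len(values)
    means = sorted(
        sum(rng.choice(values) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical per-patient dose differences (%), nine lung cases as in the study design
diffs = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.8, 5.2]
lo, hi = bootstrap_ci(diffs)
```

With only nine patients, the bootstrap's appeal is exactly that it avoids a normality assumption the small sample cannot support.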
Calculation of a fluctuating entropic force by phase space sampling.
Waters, James T; Kim, Harold D
2015-07-01
A polymer chain pinned in space exerts a fluctuating force on the pin point in thermal equilibrium. The average of such fluctuating force is well understood from statistical mechanics as an entropic force, but little is known about the underlying force distribution. Here, we introduce two phase space sampling methods that can produce the equilibrium distribution of instantaneous forces exerted by a terminally pinned polymer. In these methods, both the positions and momenta of mass points representing a freely jointed chain are perturbed in accordance with the spatial constraints and the Boltzmann distribution of total energy. The constraint force for each conformation and momentum is calculated using Lagrangian dynamics. Using terminally pinned chains in space and on a surface, we show that the force distribution is highly asymmetric with both tensile and compressive forces. Most importantly, the mean of the distribution, which is equal to the entropic force, is not the most probable force even for long chains. Our work provides insights into the mechanistic origin of entropic forces, and an efficient computational tool for unbiased sampling of the phase space of a constrained system.
The speed-curvature power law of movements: a reappraisal.
Zago, Myrka; Matic, Adam; Flash, Tamar; Gomez-Marin, Alex; Lacquaniti, Francesco
2018-01-01
Several types of curvilinear movements obey approximately the so-called 2/3 power law, according to which the angular speed varies proportionally to the 2/3 power of the curvature. The origin of the law is debated but it is generally thought to depend on physiological mechanisms. However, a recent paper (Marken and Shaffer, Exp Brain Res 88:685-690, 2017) claims that this power law is simply a statistical artifact, being a mathematical consequence of the way speed and curvature are calculated. Here we reject this hypothesis by showing that the speed-curvature power law of biological movements is non-trivial. First, we confirm that the power exponent varies with the shape of human drawing movements and with environmental factors. Second, we report experimental data from Drosophila larvae demonstrating that the power law does not depend on how curvature is calculated. Third, we prove that the law can be violated by means of several mathematical and physical examples. Finally, we discuss biological constraints that may underlie speed-curvature power laws discovered in empirical studies.
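The law is usually assessed by regressing log angular speed on log curvature. For an ellipse traversed at constant parameter rate the exponent is exactly 2/3 (since κ = ab/v³ implies A = vκ ∝ κ^(2/3)), which makes it a convenient synthetic check of the fitting procedure:

```python
import numpy as np

# Synthetic elliptical "drawing": x = a*cos(phi), y = b*sin(phi),
# traversed at constant parameter rate (phi plays the role of time).
a, b = 2.0, 1.0
phi = np.linspace(0, 2 * np.pi, 4000, endpoint=False)
x, y = a * np.cos(phi), b * np.sin(phi)

# Numerical derivatives with respect to the parameter (uniform spacing)
dphi = phi[1] - phi[0]
dx, dy = np.gradient(x, dphi), np.gradient(y, dphi)
ddx, ddy = np.gradient(dx, dphi), np.gradient(dy, dphi)

speed = np.sqrt(dx**2 + dy**2)                       # tangential speed
curvature = np.abs(dx * ddy - dy * ddx) / speed**3   # unsigned curvature
angular_speed = speed * curvature                    # A = v * C

# Fit log A = log k + beta * log C; here beta should recover 2/3
beta, logk = np.polyfit(np.log(curvature), np.log(angular_speed), 1)
```

The debate in the abstract is precisely about whether such fits on real movement data reflect physiology or only the mathematics of how speed and curvature are computed; the synthetic example above only validates the regression itself.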
Ultrasound transmission measurements for tensile strength evaluation of tablets.
Simonaho, Simo-Pekka; Takala, T Aleksi; Kuosmanen, Marko; Ketolainen, Jarkko
2011-05-16
Ultrasound transmission measurements were performed to evaluate the tensile strength of tablets. Tablets consisting of one ingredient were compressed from dibasic calcium phosphate dihydrate, two grades of microcrystalline cellulose and two grades of lactose monohydrate powders. From each powder, tablets with five different tensile strengths were directly compressed. Ultrasound transmission measurements were conducted on every tablet at frequencies of 2.25 MHz, 5 MHz and 10 MHz and the speed of sound was calculated from the acquired waveforms. The tensile strength of the tablets was determined using a diametrical mechanical testing machine and compared to the calculated speed of sound values. It was found that the speed of sound increased with the tensile strength for the tested excipients. There was a good correlation between the speed of sound and tensile strength. Moreover, based on the statistical tests, the groups with different tensile strengths can be differentiated from each other by measuring the speed of sound. Thus, the ultrasound transmission measurement technique is a potentially useful method for non-destructive and fast evaluation of the tensile strength of tablets. Copyright © 2011 Elsevier B.V. All rights reserved.
Performance of a laser microsatellite network with an optical preamplifier.
Arnon, Shlomi
2005-04-01
Laser satellite communication (LSC) uses free space as a propagation medium for various applications, such as intersatellite communication or satellite networking. An LSC system includes a laser transmitter and an optical receiver. For communication to occur, the line of sight of the transmitter and the receiver must be aligned. However, mechanical vibration and electronic noise in the control system reduce alignment between the transmitter laser beam and the receiver field of view (FOV), which results in pointing errors. The outcome of pointing errors is fading of the received signal, which leads to impaired link performance. An LSC system is considered in which the optical preamplifier is incorporated into the receiver, and a bit error probability (BEP) model is derived that takes into account the statistics of the pointing error as well as the optical amplifier and communication system parameters. The model and the numerical calculation results indicate that random pointing errors with normalized variance σ² > 0.05 penalize communication performance dramatically for all combinations of optical amplifier gains and noise figures that were calculated.
Non-solar noble gas abundances in the atmosphere of Jupiter
NASA Technical Reports Server (NTRS)
Lunine, Jonathan I.; Stevenson, David J.
1986-01-01
The thermodynamic stability of clathrate hydrate is calculated to predict the formation conditions corresponding to a range of solar system parameters. The calculations were performed using the statistical mechanical theory developed by van der Waals and Platteeuw (1959) and existing experimental data concerning clathrate hydrate and its components. Dissociation pressures and partition functions (Langmuir constants) are predicted at low pressure for CO clathrate (hydrate) using the properties of chemicals similar to CO. It is argued that nonsolar but well constrained noble gas abundances may be measurable by the Galileo spacecraft in the Jovian atmosphere if the observed carbon enhancement is due to bombardment of the atmosphere by clathrate-bearing planetesimals sometime after planetary formation. The noble gas abundances of the Jovian satellite Titan are predicted, assuming that most of the methane in Titan is accreted as clathrate. It is suggested that under thermodynamically appropriate conditions, complete clathration of water ice could have occurred in high-pressure nebulas around giant planets, but probably not in the outer solar nebula. The stability of clathrate in other pressure ranges is also discussed.
NASA Astrophysics Data System (ADS)
Shiroudi, Abolfazl; Zahedi, Ehsan; Oliaey, Ahmad Reza; Deleuze, Michael S.
2017-03-01
The thermal decomposition kinetics of 2-chloroethylsilane and derivatives in the gas phase has been studied computationally using density functional theory, along with various exchange-correlation functionals (UM06-2x and ωB97XD) and the aug-cc-pVTZ basis set. The calculated energy profile has been supplemented with calculations of kinetic rate constants under atmospheric pressure and in the fall-off regime, using transition state theory (TST) and statistical Rice-Ramsperger-Kassel-Marcus (RRKM) theory. Activation energies and rate constants obtained using the UM06-2x/aug-cc-pVTZ approach are in good agreement with the experimental data. The decomposition of 2-chloroethyltriethylsilane species into the related products [C2H4 + Et3SiCl] is characterized by 6 successive structural stability domains associated with the sequence of catastrophes C8H19SiCl: 6-C†FCC†[FF]-0: C6H15SiCl + C2H4. Breaking of Si-C bonds and formation of Si-Cl bonds occur in the vicinity of the transition state.
NASA Astrophysics Data System (ADS)
Vonta, N.; Souliotis, G. A.; Loveland, W.; Kwon, Y. K.; Tshoo, K.; Jeong, S. C.; Veselsky, M.; Bonasera, A.; Botvina, A.
2016-12-01
We investigate the possibilities of producing neutron-rich nuclides in projectile fission of heavy beams in the energy range of 20 MeV/nucleon expected from low-energy facilities. We report our efforts to theoretically describe the reaction mechanism of projectile fission following a multinucleon transfer collision at this energy range. Our calculations are mainly based on a two-step approach: The dynamical stage of the collision is described with either the phenomenological deep-inelastic transfer model (DIT) or with the microscopic constrained molecular dynamics model (CoMD). The de-excitation or fission of the hot heavy projectile fragments is performed with the statistical multifragmentation model (SMM). We compared our model calculations with our previous experimental projectile-fission data of 238U (20 MeV/nucleon) + 208Pb and 197Au (20 MeV/nucleon) + 197Au and found an overall reasonable agreement. Our study suggests that projectile fission following peripheral heavy-ion collisions at this energy range offers an effective route to access very neutron-rich rare isotopes toward and beyond the astrophysical r-process path.
allantools: Allan deviation calculation
NASA Astrophysics Data System (ADS)
Wallin, Anders E. E.; Price, Danny C.; Carson, Cantwell G.; Meynadier, Frédéric
2018-04-01
allantools calculates Allan deviation and related time & frequency statistics. The library is written in Python and has a GPL v3+ license. It takes as input evenly spaced observations of either fractional frequency, or phase in seconds. Deviations are calculated for given tau values in seconds. Several noise generators for creating synthetic datasets are also included.
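allantools itself implements many estimators; as an illustration of the basic quantity involved, the simplest (non-overlapping) Allan deviation of fractional-frequency data can be sketched with only the standard library. This is a hedged reconstruction of the textbook formula, not allantools' own code:

```python
import math

def adev(y, m):
    """Non-overlapping Allan deviation of fractional-frequency data y
    at averaging factor m (so tau = m * tau0)."""
    # Average the data in contiguous, non-overlapping blocks of length m.
    usable = len(y) - len(y) % m
    blocks = [sum(y[i:i + m]) / m for i in range(0, usable, m)]
    # Allan variance is half the mean squared first difference of the blocks.
    diffs = [b2 - b1 for b1, b2 in zip(blocks, blocks[1:])]
    return math.sqrt(0.5 * sum(d * d for d in diffs) / len(diffs))

# A pure linear frequency drift of 1 unit per sample:
print(adev([0, 1, 2, 3, 4, 5], m=1))  # sqrt(0.5) ≈ 0.707
```

For drifting data the result grows linearly with the averaging factor, which is the classic Allan-deviation signature of frequency drift.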
A Simple Formula for Quantiles on the TI-82/83 Calculator.
ERIC Educational Resources Information Center
Eisner, Milton P.
1997-01-01
The concept of percentile is a fundamental part of every course in basic statistics. Many such courses now require students to have TI-82 or TI-83 calculators. The functions defined in these calculators enable students to easily find the percentiles of a discrete data set. (PVD)
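For readers without a TI-82/83 to hand, the same quartile computation can be sketched with Python's standard library. Note that calculators and textbooks differ in their interpolation convention; the "inclusive" rule below is just one common choice, not necessarily the TI's.

```python
import statistics

data = [2, 4, 4, 5, 7, 9]
# Quartiles under the "inclusive" rule (data treated as a whole population).
q1, q2, q3 = statistics.quantiles(data, n=4, method='inclusive')
print(q1, q2, q3)  # 4.0 4.5 6.5
```

Switching to `method='exclusive'` (the default) gives different cut points for the same data, which is exactly why stating the convention matters in a statistics course.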
NASA Astrophysics Data System (ADS)
Vitali, Lina; Righini, Gaia; Piersanti, Antonio; Cremona, Giuseppe; Pace, Giandomenico; Ciancarella, Luisella
2017-12-01
Air backward trajectory calculations are commonly used in a variety of atmospheric analyses, in particular for source attribution evaluation. The accuracy of backward trajectory analysis is mainly determined by the quality and the spatial and temporal resolution of the underlying meteorological data set, especially in cases of complex terrain. This work describes a new tool for the calculation and the statistical elaboration of backward trajectories. To take advantage of the high-resolution meteorological database of the Italian national air quality model MINNI, a dedicated set of procedures was implemented under the name of M-TraCE (MINNI module for Trajectories Calculation and statistical Elaboration) to calculate and process the backward trajectories of air masses reaching a site of interest. Some outcomes from the application of the developed methodology to the Italian Network of Special Purpose Monitoring Stations are shown to assess its strengths for the meteorological characterization of air quality monitoring stations. M-TraCE has demonstrated its capabilities to provide a detailed statistical assessment of transport patterns and the region of influence of the site under investigation, which is fundamental for correctly interpreting pollutant measurements and ascertaining the official classification of the monitoring site based on meta-data information. Moreover, M-TraCE has shown its usefulness in supporting other assessments, i.e., spatial representativeness of a monitoring site, focusing specifically on the analysis of the effects due to meteorological variables.
ERIC Educational Resources Information Center
Osler, James Edward
2014-01-01
This monograph provides an epistemological rationale for the design of a novel post hoc statistical measure called "Tri-Center Analysis". This new statistic is designed to analyze the post hoc outcomes of the Tri-Squared Test. In Tri-Center Analysis, trichotomous parametric inferential statistical measures are calculated from…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-20
... is calculated from tumor data of the cancer bioassays using a statistical extrapolation procedure... carcinogenic concern currently set forth in Sec. 500.84 utilizes a statistical extrapolation procedure that... procedures did not rely on a statistical extrapolation of the data to a 1 in 1 million risk of cancer to test...
Calculation of recoil implantation profiles using known range statistics
NASA Technical Reports Server (NTRS)
Fung, C. D.; Avila, R. E.
1985-01-01
A method has been developed to calculate the depth distribution of recoil atoms that result from ion implantation onto a substrate covered with a thin surface layer. The calculation includes first order recoils considering projected range straggles, and lateral straggles of recoils but neglecting lateral straggles of projectiles. Projectile range distributions at intermediate energies in the surface layer are deduced from look-up tables of known range statistics. A great saving of computing time and human effort is thus attained in comparison with existing procedures. The method is used to calculate recoil profiles of oxygen from implantation of arsenic through SiO2 and of nitrogen from implantation of phosphorus through Si3N4 films on silicon. The calculated recoil profiles are in good agreement with results obtained by other investigators using the Boltzmann transport equation and they also compare very well with available experimental results in the literature. The deviation between calculated and experimental results is discussed in relation to lateral straggles. From this discussion, a range of surface layer thickness for which the method applies is recommended.
NASA Technical Reports Server (NTRS)
Slobin, S. D.
1982-01-01
The microwave attenuation and noise temperature effects of clouds can result in serious degradation of telecommunications link performance, especially for low-noise systems presently used in deep-space communications. Although cloud effects are generally less than rain effects, the frequent presence of clouds will cause some amount of link degradation a large portion of the time. This paper presents a general review of cloud types and their water particle densities, attenuation and noise temperature calculations, and basic link signal-to-noise ratio calculations. Tabular results of calculations for 12 different cloud models are presented for frequencies in the range 10-50 GHz. Curves of average-year attenuation and noise temperature statistics at frequencies ranging from 10 to 90 GHz, calculated from actual surface and radiosonde observations, are given for 15 climatologically distinct regions in the contiguous United States, Alaska, and Hawaii. Nonuniform sky cover is considered in these calculations.
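The noise-temperature calculations mentioned above commonly follow the standard relation for an absorbing layer, T_sky = T_m·(1 − 10^(−A/10)), where A is the attenuation in dB and T_m the physical temperature of the medium. A minimal sketch; the 280 K medium temperature is an assumed illustrative value, not a figure from the paper:

```python
def noise_temperature(attenuation_db, medium_temp_k=280.0):
    """Noise temperature contributed by an absorbing cloud layer,
    modeled as an emitter at physical temperature medium_temp_k."""
    loss = 10.0 ** (attenuation_db / 10.0)  # linear power loss factor
    return medium_temp_k * (1.0 - 1.0 / loss)

# A 3 dB cloud attenuation at the assumed 280 K medium temperature:
print(round(noise_temperature(3.0), 1))  # ≈ 139.7 K
```

The relation makes the trade-off in the abstract concrete: even modest cloud attenuation injects substantial noise temperature into a low-noise deep-space receiver.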
Carver, Robert L; Sprunger, Conrad P; Hogstrom, Kenneth R; Popple, Richard A; Antolak, John A
2016-05-08
The purpose of this study was to evaluate the accuracy and calculation speed of electron dose distributions calculated by the Eclipse electron Monte Carlo (eMC) algorithm for use with bolus electron conformal therapy (ECT). The recent commercial availability of bolus ECT technology requires further validation of the eMC dose calculation algorithm. eMC-calculated electron dose distributions for bolus ECT have been compared to previously measured TLD-dose points throughout patient-based cylindrical phantoms (retromolar trigone and nose), whose axial cross sections were based on the mid-PTV (planning treatment volume) CT anatomy. The phantoms consisted of SR4 muscle substitute, SR4 bone substitute, and air. The treatment plans were imported into the Eclipse treatment planning system, and electron dose distributions calculated using 1% and < 0.2% statistical uncertainties. The accuracy of the dose calculations using moderate smoothing and no smoothing were evaluated. Dose differences (eMC-calculated less measured dose) were evaluated in terms of absolute dose difference, where 100% equals the given dose, as well as distance to agreement (DTA). Dose calculations were also evaluated for calculation speed. Results from the eMC for the retromolar trigone phantom using 1% statistical uncertainty without smoothing showed calculated dose at 89% (41/46) of the measured TLD-dose points was within 3% dose difference or 3 mm DTA of the measured value. The average dose difference was -0.21%, and the net standard deviation was 2.32%. Differences as large as 3.7% occurred immediately distal to the mandible bone. Results for the nose phantom, using 1% statistical uncertainty without smoothing, showed calculated dose at 93% (53/57) of the measured TLD-dose points within 3% dose difference or 3 mm DTA. The average dose difference was 1.08%, and the net standard deviation was 3.17%. Differences as large as 10% occurred lateral to the nasal air cavities.
Including smoothing had insignificant effects on the accuracy of the retromolar trigone phantom calculations, but reduced the accuracy of the nose phantom calculations in the high-gradient dose areas. Dose calculation times with 1% statistical uncertainty for the retromolar trigone and nose treatment plans were 30 s and 24 s, respectively, using 16 processors (Intel Xeon E5-2690, 2.9 GHz) on a framework agent server (FAS). In comparison, the eMC was significantly more accurate than the pencil beam algorithm (PBA). The eMC has comparable accuracy to the pencil beam redefinition algorithm (PBRA) used for bolus ECT planning and has acceptably low dose calculation times. The eMC accuracy decreased when smoothing was used in high-gradient dose regions. The eMC accuracy was consistent with that previously reported for accuracy of the eMC electron dose algorithm and shows that the algorithm is suitable for clinical implementation of bolus ECT.
Statistical mechanics in the context of special relativity.
Kaniadakis, G
2002-11-01
In Ref. [Physica A 296, 405 (2001)], starting from the one-parameter deformation of the exponential function exp_κ(x) = (√(1 + κ²x²) + κx)^(1/κ), a statistical mechanics has been constructed which reduces to the ordinary Boltzmann-Gibbs statistical mechanics as the deformation parameter κ approaches zero. The distribution f = exp_κ(−βE + βμ) obtained within this statistical mechanics shows a power-law tail and depends on the nonspecified parameter β, containing all the information about the temperature of the system. On the other hand, the entropic form S_κ = ∫ d³p (c_κ f^(1+κ) + c_(−κ) f^(1−κ)), which after maximization produces the distribution f and reduces to the standard Boltzmann-Shannon entropy S₀ as κ → 0, contains the coefficient c_κ whose expression involves, besides the Boltzmann constant, another nonspecified parameter α. In the present effort we show that S_κ is the unique existing entropy obtained by a continuous deformation of S₀ that preserves unaltered its fundamental properties of concavity, additivity, and extensivity. These properties of S_κ permit the values of the above-mentioned parameters β and α to be determined unequivocally. Subsequently, we explain the origin of the deformation mechanism introduced by κ and show that this deformation emerges naturally within Einstein's special relativity. Furthermore, we extend the theory in order to treat statistical systems in a time-dependent and relativistic context. We then show that it is possible to determine, in a self-consistent scheme within special relativity, the value of the free parameter κ, which turns out to depend on the light speed c and reduces to zero as c → ∞, recovering in this way the ordinary statistical mechanics and thermodynamics.
The statistical mechanics presented here contains no free parameters, preserves unaltered the mathematical and epistemological structure of ordinary statistical mechanics, and is suitable for describing a very large class of experimentally observed phenomena in low- and high-energy physics and in the natural, economic, and social sciences. Finally, in order to test the correctness and predictive power of the theory, we consider as a working example the cosmic-ray spectrum, which spans 13 decades in energy and 33 decades in flux, finding high-quality agreement between our predictions and the observed data.
Saffran, Jenny R.; Kirkham, Natasha Z.
2017-01-01
Perception involves making sense of a dynamic, multimodal environment. In the absence of mechanisms capable of exploiting the statistical patterns in the natural world, infants would face an insurmountable computational problem. Infant statistical learning mechanisms facilitate the detection of structure. These abilities allow the infant to compute across elements in their environmental input, extracting patterns for further processing and subsequent learning. In this selective review, we summarize findings that show that statistical learning is both a broad and flexible mechanism (supporting learning from different modalities across many different content areas) and input specific (shifting computations depending on the type of input and goal of learning). We suggest that statistical learning not only provides a framework for studying language development and object knowledge in constrained laboratory settings, but also allows researchers to tackle real-world problems, such as multilingualism, the role of ever-changing learning environments, and differential developmental trajectories. PMID:28793812
Coupling strength assumption in statistical energy analysis
Lafont, T.; Totaro, N.
2017-01-01
This paper is a discussion of the hypothesis of weak coupling in statistical energy analysis (SEA). The examples of coupled oscillators and statistical ensembles of coupled plates excited by broadband random forces are discussed. In each case, a reference calculation is compared with the SEA calculation. First, it is shown that the main SEA relation, the coupling power proportionality, is always valid for two oscillators irrespective of the coupling strength. But the case of three subsystems, consisting of oscillators or ensembles of plates, indicates that the coupling power proportionality fails when the coupling is strong. Strong coupling leads to non-zero indirect coupling loss factors and, sometimes, even to a reversal of the energy flow direction from low to high vibrational temperature. PMID:28484335
New statistical scission-point model to predict fission fragment observables
NASA Astrophysics Data System (ADS)
Lemaître, Jean-François; Panebianco, Stefano; Sida, Jean-Luc; Hilaire, Stéphane; Heinrich, Sophie
2015-09-01
The development of high performance computing facilities makes possible the massive production of nuclear data in a fully microscopic framework. Taking advantage of individual potential calculations for more than 7000 nuclei, a new statistical scission-point model, called SPY, has been developed. It gives access to the absolute available energy at the scission point, which allows the use of a parameter-free microcanonical statistical description to calculate the distributions and the mean values of all fission observables. SPY exploits the richness of the microscopic description within a rather simple theoretical framework, without any parameter except the scission-point definition, to draw clear answers based on perfect knowledge of the ingredients involved in the model, at very limited computing cost.
Managing Online Search Statistics with dBASE III Plus.
ERIC Educational Resources Information Center
Speer, Susan C.
1987-01-01
Describes a computer program designed to manage statistics about online searches which reports the number of searches by vendor, purpose, and librarian; calculates charges to departments and individuals; and prints monthly invoices to users with standing accounts. (CLB)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hidalgo Cardenuto, Marcelo, E-mail: marcelo.hidalgo@unamur.be, E-mail: benoit.champagne@unamur.be; Instituto de Física, Universidade de São Paulo, CP 66318, 05314-970 São Paulo, SP; Champagne, Benoît, E-mail: marcelo.hidalgo@unamur.be, E-mail: benoit.champagne@unamur.be
2014-12-21
A multiscale approach combining quantum mechanics (QM) and molecular mechanics methods has been employed to investigate the effects of solute-solute interactions and therefore of concentration on the first hyperpolarizability of solutions of nitrobenzene in benzene. First, spatial distributions of solute and solvent molecules are generated using Monte Carlo simulations where the intermolecular interactions are described using the Lennard-Jones potentials and Coulomb terms. Then, a reduced number of statistically-uncorrelated configurations are sampled and submitted to time-dependent Hartree-Fock calculations of the first hyperpolarizability. When only one molecule is described quantum-mechanically and is embedded in the electrostatic polarization field of the solution described by point charges, β_HRS and β_// as well as the depolarization ratio increase in parallel with the concentration in nitrobenzene. This effect is attributed to the increase of the polarization field associated with the presence of polar nitrobenzene molecules in the surrounding. Then, the first solvation shell is considered explicitly in the QM calculation to address solute-solute interactions effects. When the number of nitrobenzenes in the first solvation shell increases, β_HRS and β_// normalized to the number of nitrobenzene molecules decrease and this decrease attains roughly 50% when there are 3 nitrobenzene molecules in the first solvation shell. These drastic reductions of the first hyperpolarizability result from (partial) centro-symmetric arrangements between the nitrobenzene molecules, as supported by the relationship between β and the angle between the nitrobenzene charge transfer axes. Moreover, these β decreases originate mostly from the reduction of the dipolar β component, whereas the octupolar one is rather constant as a function of the nitrobenzene concentration.
Influences on lifetime of wire ropes in traction lifts
NASA Astrophysics Data System (ADS)
Vogel, W.
2016-05-01
Traction lifts are complex systems with rotating and translating moving masses, springs and dampers, and several system inputs from the lift and its users. The wire ropes are essential mechanical elements. The mechanical properties of the ropes in use depend on the rope construction, the load situation, nonlinearities and the lift dimensions. These properties are important for proper use in lifts and for ride quality. Above all, however, the wire ropes (like all other suspension means) have to satisfy the safety-relevant requirements: sufficient lifetime, reliable determination of discard, and sufficient but limited traction capacity. The lifetime of the wire ropes, or more precisely the number of trips until rope discard, depends on many parameters of the rope and of the rope application, e.g. the use of plastic deflection sheaves and reverse-bending layouts. New challenges for rope lifetime result from the more or less open D/d-ratio limits made possible by certificates concerning the examination of conformity by notified bodies. This paper highlights the basics of wire rope technology, the endurance and lifetime of wire ropes running over sheaves, and the different influences from the ropes and, increasingly important, from the lift application parameters. Very often underestimated are the influences of transport, storage, installation and maintenance. Against this background we lead over to the calculation methods for wire rope lifetime, considering the current findings of wire rope endurance research. We show new and innovative facts such as the influence of rope length and a size factor in the lifetime formula, the reduction of lifetime caused by traction grooves, the new model for calculation in reverse-bending operations, and the statistically substantiated possibilities for machine-room-less lifts (MRL) under very small bending conditions.
ERIC Educational Resources Information Center
Allswang, John M.
1986-01-01
This article provides two short microcomputer gradebook programs. The programs, written in BASIC for the IBM-PC and Apple II, provide statistical information about class performance and calculate grades either on a normal distribution or based on teacher-defined break points. (JDH)
Information retrieval from wide-band meteorological data - An example
NASA Technical Reports Server (NTRS)
Adelfang, S. I.; Smith, O. E.
1983-01-01
The methods proposed by Smith and Adelfang (1981) and Smith et al. (1982) are used to calculate probabilities over rectangles and sectors of the gust magnitude-gust length plane; probabilities over the same regions are also calculated from the observed distributions, and a comparison is presented to demonstrate the accuracy of the statistical model. These and other statistical results are calculated from samples of Jimsphere wind profiles at Cape Canaveral. The results are presented for a variety of wavelength bands, altitudes, and seasons. It is shown that wind perturbations observed in Jimsphere wind profiles in various wavelength bands can be analyzed by using digital filters. The relationship between gust magnitude and gust length is modeled with the bivariate gamma distribution. It is pointed out that application of the model to calculate probabilities over specific areas of the gust magnitude-gust length plane can be useful in aerospace design.
From Mechanical Motion to Brownian Motion, Thermodynamics and Particle Transport Theory
ERIC Educational Resources Information Center
Bringuier, E.
2008-01-01
The motion of a particle in a medium is dealt with either as a problem of mechanics or as a transport process in non-equilibrium statistical physics. The two kinds of approach are often unrelated as they are taught in different textbooks. The aim of this paper is to highlight the link between the mechanical and statistical treatments of particle…
NASA Astrophysics Data System (ADS)
Rich, Grayson Currie
The COHERENT Collaboration has produced the first-ever observation, with a significance of 6.7σ, of a process consistent with coherent, elastic neutrino-nucleus scattering (CEνNS) as first predicted and described by D.Z. Freedman in 1974. Physics of the CEνNS process are presented along with its relationship to future measurements in the arenas of nuclear physics, fundamental particle physics, and astroparticle physics, where the newly-observed interaction presents a viable tool for investigations into numerous outstanding questions about the nature of the universe. To enable the CEνNS observation with a 14.6-kg CsI[Na] detector, new measurements of the response of CsI[Na] to low-energy nuclear recoils, which is the only mechanism by which CEνNS is detectable, were carried out at Triangle Universities Nuclear Laboratory; these measurements are detailed and an effective nuclear-recoil quenching factor of 8.78 ± 1.66% is established for CsI[Na] in the recoil-energy range of 5-30 keV, based on new and literature data. Following separate analyses of the CEνNS-search data by groups at the University of Chicago and the Moscow Engineering and Physics Institute, information from simulations, calculations, and ancillary measurements was used to inform statistical analyses of the collected data. Based on input from the Chicago analysis, the number of CEνNS events expected from the Standard Model is 173 ± 48; interpretation as a simple counting experiment finds 136 ± 31 CEνNS counts in the data, while a two-dimensional profile likelihood fit yields 134 ± 22 CEνNS counts. Details of the simulations, calculations, and supporting measurements are discussed, in addition to the statistical procedures. Finally, potential improvements to the CsI[Na]-based CEνNS measurement are presented along with future possibilities for the COHERENT Collaboration, including new CEνNS detectors and measurement of the neutrino-induced neutron spallation process.
NASA Astrophysics Data System (ADS)
Deshayes, Yannick; Verdier, Frederic; Bechou, Laurent; Tregon, Bernard; Danto, Yves; Laffitte, Dominique; Goudard, Jean Luc
2004-09-01
High performance and high reliability are two of the most important goals driving the penetration of optical transmission into telecommunication systems ranging from 880 nm to 1550 nm. Lifetime prediction, defined as the time at which a parameter reaches its maximum acceptable shift, remains the main result in terms of reliability estimation for a technology. For optoelectronic emissive components, selection tests and life testing are specifically used for reliability evaluation according to Telcordia GR-468 CORE requirements. This approach is based on extrapolation of degradation laws, based on physics of failure and electrical or optical parameters, allowing both a strong reduction in test time and long-term reliability prediction. Unfortunately, in the case of a mature technology, there is growing complexity in calculating average lifetimes and failure rates (FITs) using ageing tests, in particular due to extremely low failure rates. For present laser diode technologies, times to failure tend to be 10^6 hours aged under typical conditions (Popt = 10 mW and T = 80°C). These ageing tests must be performed on more than 100 components aged during 10000 hours, mixing different temperature and drive current conditions, leading to acceleration factors above 300-400. These conditions are costly and time-consuming and cannot give a complete distribution of times to failure. A new approach consists in using statistical computations to extrapolate lifetime distributions and failure rates under operating conditions from physical parameters of experimental degradation laws. In this paper, Distributed Feedback single mode laser diodes (DFB-LD) used for 1550 nm telecommunication networks working at a 2.5 Gbit/s transfer rate are studied. Electrical and optical parameters have been measured before and after ageing tests, performed at constant current, according to Telcordia GR-468 requirements.
Cumulative failure rates and lifetime distributions are computed using statistical calculations and equations of drift mechanisms versus time fitted from experimental measurements.
Effect of Correlated Rotational Noise
NASA Astrophysics Data System (ADS)
Hancock, Benjamin; Wagner, Caleb; Baskaran, Aparna
The traditional model of a self-propelled particle (SPP) is one where the body axis along which the particle travels reorients itself through rotational diffusion. If the reorientation process is instead driven by colored noise, rather than the standard Gaussian white noise, the resulting statistical mechanics cannot be accessed through conventional methods. In this talk we present results comparing three methods of deriving the statistical mechanics of an SPP with a reorientation process driven by colored noise. We illustrate the differences and similarities in the resulting statistical mechanics by their ability to accurately capture the particle's response to external aligning fields.
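As a minimal illustration of the setup, here is an Euler-Maruyama sketch of a self-propelled particle whose heading is driven by exponentially correlated (Ornstein-Uhlenbeck) noise; the parameter values are arbitrary and the discretization is the simplest possible, not one of the three derivation methods compared in the talk.

```python
import math
import random

def simulate_spp(v0=1.0, tau=1.0, d_rot=0.5, dt=0.01, steps=5000, seed=2):
    """Self-propelled particle whose heading angle is driven by
    Ornstein-Uhlenbeck (exponentially correlated) noise instead of
    the usual white rotational noise."""
    rng = random.Random(seed)
    x = y = theta = eta = 0.0
    traj = []
    for _ in range(steps):
        # OU angular velocity: <eta(t) eta(0)> = (d_rot/tau) * exp(-|t|/tau)
        eta += (-eta / tau) * dt + math.sqrt(2.0 * d_rot * dt) / tau * rng.gauss(0, 1)
        theta += eta * dt
        x += v0 * math.cos(theta) * dt   # constant-speed propulsion
        y += v0 * math.sin(theta) * dt
        traj.append((x, y))
    return traj

traj = simulate_spp()
x_end, y_end = traj[-1]
```

Sending tau to zero at fixed d_rot recovers the white-noise active Brownian particle, which is the limit against which the colored-noise statistical mechanics is compared.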
NASA Astrophysics Data System (ADS)
Jaffke, Patrick; Möller, Peter; Stetcu, Ionel; Talou, Patrick; Schmitt, Christelle
2018-03-01
We implement fission fragment yields, calculated using Brownian shape-motion on a macroscopic-microscopic potential energy surface in six dimensions, into the Hauser-Feshbach statistical decay code CGMF. This combination allows us to test the impact of utilizing theoretically calculated fission fragment yields on the subsequent prompt neutron and γ-ray emission. We draw connections between the fragment yields and the total kinetic energy TKE of the fission fragments and demonstrate that the use of calculated yields can introduce a difference in the 〈TKE〉 and, thus, in the prompt neutron multiplicity.
Assessment of NDE reliability data
NASA Technical Reports Server (NTRS)
Yee, B. G. W.; Couchman, J. C.; Chang, F. H.; Packman, D. F.
1975-01-01
Twenty sets of relevant nondestructive test (NDT) reliability data were identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations was formulated, and a model to grade the quality and validity of the data sets was developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, were formulated for each NDE method. A comprehensive computer program was written and debugged to calculate the probability of flaw detection at several confidence limits by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. An example of the calculated reliability of crack detection in bolt holes by an automatic eddy current method is presented.
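The binomial probability-of-detection calculation described above can be sketched as follows; the bisection inversion and the 29-of-29 example are generic illustrations of the method, not the report's actual program.

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def pod_lower_bound(detections, trials, confidence=0.95):
    """One-sided lower confidence bound on the probability of flaw
    detection, from `detections` hits in `trials` inspections, by
    inverting the binomial tail probability with simple bisection
    (Clopper-Pearson style)."""
    if detections == 0:
        return 0.0
    alpha = 1.0 - confidence
    lo, hi = 0.0, 1.0
    for _ in range(60):  # bisection: interval halves each pass
        mid = 0.5 * (lo + hi)
        # P(X >= detections | p = mid); too small -> true p must be larger
        tail = 1.0 - binom_cdf(detections - 1, trials, mid)
        if tail < alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# 29 detections in 29 trials demonstrates roughly 90 % POD at 95 % confidence
pod95 = pod_lower_bound(29, 29, confidence=0.95)
```

For the all-hits case the bound reduces to the closed form alpha**(1/n), which is a handy check on the bisection.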
Granato, Gregory E.; Ries, Kernell G.; Steeves, Peter A.
2017-10-16
Streamflow statistics are needed by decision makers for many planning, management, and design activities. The U.S. Geological Survey (USGS) StreamStats Web application provides convenient access to streamflow statistics for many streamgages by accessing the underlying StreamStatsDB database. In 2016, non-interpretive streamflow statistics were compiled for streamgages located throughout the Nation and stored in StreamStatsDB for use with StreamStats and other applications. Two previously published USGS computer programs that were designed to help calculate streamflow statistics were updated to better support StreamStats as part of this effort. These programs are named “GNWISQ” (Get National Water Information System Streamflow (Q) files), updated to version 1.1.1, and “QSTATS” (Streamflow (Q) Statistics), updated to version 1.1.2. Statistics for 20,438 streamgages that had 1 or more complete years of record during water years 1901 through 2015 were calculated from daily mean streamflow data; 19,415 of these streamgages were within the conterminous United States. About 89 percent of the 20,438 streamgages had 3 or more years of record, and about 65 percent had 10 or more years of record. Drainage areas of the 20,438 streamgages ranged from 0.01 to 1,144,500 square miles. The magnitude of annual average streamflow yields (streamflow per square mile) for these streamgages varied by almost six orders of magnitude, from 0.000029 to 34 cubic feet per second per square mile. About 64 percent of these streamgages did not have any zero-flow days during their available period of record. The 18,122 streamgages with 3 or more years of record were included in the StreamStatsDB compilation so they would be available via the StreamStats interface for user-selected streamgages. All the statistics are available in a USGS ScienceBase data release.
WASP (Write a Scientific Paper) using Excel - 7: The t-distribution.
Grech, Victor
2018-03-01
The calculation of descriptive statistics after data collection provides researchers with an overview of the shape and nature of their datasets, along with basic descriptors, and may help identify true or incorrect outlier values. This exercise should always precede inferential statistics, when possible. This paper provides some pointers for doing so in Microsoft Excel, both statically and dynamically, with Excel's functions, including the calculation of standard deviation and variance and the relevance of the t-distribution.
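A Python counterpart of the Excel workflow described here can be sketched as follows; the dataset is invented, and the t multiplier is the standard table value Excel would return via T.INV.2T, hard-coded because the t-distribution is not in the Python standard library.

```python
import statistics

# Hypothetical dataset standing in for a collected research variable
data = [4.1, 5.0, 3.8, 4.6, 5.2, 4.4, 4.9, 3.9, 4.7, 4.3]

n = len(data)
mean = statistics.mean(data)        # Excel: =AVERAGE(range)
sd = statistics.stdev(data)         # Excel: =STDEV.S(range)  (sample sd)
var = statistics.variance(data)     # Excel: =VAR.S(range)    (sample var)
sem = sd / n ** 0.5                 # standard error of the mean

# t multiplier for a 95 % CI with n-1 = 9 degrees of freedom
# (Excel: =T.INV.2T(0.05, 9); value from standard tables)
t_crit = 2.262
ci = (mean - t_crit * sem, mean + t_crit * sem)
```

Because n is small, the t multiplier (2.262) is noticeably wider than the normal-theory 1.96, which is exactly the point the paper makes about the relevance of the t-distribution.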
Statistical error propagation in ab initio no-core full configuration calculations of light nuclei
Navarro Pérez, R.; Amaro, J. E.; Ruiz Arriola, E.; ...
2015-12-28
We propagate the statistical uncertainty of experimental NN scattering data into the binding energies of 3H and 4He. We also study the sensitivity of the magnetic moment and proton radius of 3H to changes in the NN interaction. The calculations are made with the no-core full configuration method in a sufficiently large harmonic oscillator basis. For these light nuclei we obtain ΔE_stat(3H) = 0.015 MeV and ΔE_stat(4He) = 0.055 MeV.
Dechartres, Agnes; Bond, Elizabeth G; Scheer, Jordan; Riveros, Carolina; Atal, Ignacio; Ravaud, Philippe
2016-11-30
Publication bias and other reporting bias have been well documented for journal articles, but no study has evaluated the nature of results posted at ClinicalTrials.gov. We aimed to assess how many randomized controlled trials (RCTs) with results posted at ClinicalTrials.gov report statistically significant results and whether the proportion of trials with significant results differs when no treatment effect estimate or p-value is posted. We searched ClinicalTrials.gov in June 2015 for all studies with results posted. We included completed RCTs with a superiority hypothesis and considered results for the first primary outcome with results posted. For each trial, we assessed whether a treatment effect estimate and/or p-value was reported at ClinicalTrials.gov and if yes, whether results were statistically significant. If no treatment effect estimate or p-value was reported, we calculated the treatment effect and corresponding p-value using results per arm posted at ClinicalTrials.gov when sufficient data were reported. From the 17,536 studies with results posted at ClinicalTrials.gov, we identified 2823 completed phase 3 or 4 randomized trials with a superiority hypothesis. Of these, 1400 (50%) reported a treatment effect estimate and/or p-value. Results were statistically significant for 844 trials (60%), with a median p-value of 0.01 (Q1-Q3: 0.001-0.26). For the 1423 trials with no treatment effect estimate or p-value posted, we could calculate the treatment effect and corresponding p-value using results reported per arm for 929 (65%). For 494 trials (35%), p-values could not be calculated, mainly because of insufficient reporting, censored data, or repeated measurements over time. For the 929 trials for which we could calculate p-values, we found statistically significant results for 342 (37%), with a median p-value of 0.19 (Q1-Q3: 0.005-0.59).
Half of the trials with results posted at ClinicalTrials.gov reported a treatment effect estimate and/or p-value, with significant results for 60% of these. p-values could be calculated from results reported per arm at ClinicalTrials.gov for only 65% of the other trials. The proportion of significant results was much lower for these trials, which suggests a selective posting of treatment effect estimates and/or p-values when results are statistically significant.
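For trials reporting only counts per arm, a recalculation of the kind the authors describe can be sketched with a pooled two-proportion z-test; the arm counts below are hypothetical, and a real re-analysis must match each trial's outcome type and design.

```python
import math
from statistics import NormalDist

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided p-value for a difference in event proportions between
    two arms, using the pooled two-proportion z-test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

# Hypothetical arms: 45/120 events in one arm vs 30/118 in the other
p = two_proportion_p_value(45, 120, 30, 118)
```

This is the simplest of the per-arm recalculations; censored outcomes or repeated measurements, which the abstract notes blocked 35 % of recalculations, cannot be handled this way.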
ERIC Educational Resources Information Center
Vaughn, Brandon K.; Wang, Pei-Yu
2009-01-01
The emergence of technology has led to numerous changes in mathematical and statistical teaching and learning which has improved the quality of instruction and teacher/student interactions. The teaching of statistics, for example, has shifted from mathematical calculations to higher level cognitive abilities such as reasoning, interpretation, and…
Statistical dielectronic recombination rates for multielectron ions in plasma
NASA Astrophysics Data System (ADS)
Demura, A. V.; Leont'iev, D. S.; Lisitsa, V. S.; Shurygin, V. A.
2017-10-01
We describe the general analytic derivation of the dielectronic recombination (DR) rate coefficient for multielectron ions in a plasma based on the statistical theory of an atom, formulated in terms of the spatial distribution of the atomic electron density. The dielectronic recombination rates for complex multielectron tungsten ions are calculated numerically over a wide range of plasma temperatures, which is important for modern nuclear fusion studies. The results of the statistical theory are compared with data obtained using the level-by-level codes ADPAK, FAC, and HULLAC, and with experimental results. We consider different statistical DR models based on the Thomas-Fermi distribution, viz., integral and differential with respect to the orbital angular momenta of the ion core and the trapped electron, as well as the Rost model, which is an analog of the Frank-Condon model as applied to atomic structures. In view of its universality and relative simplicity, the statistical approach can be used for express estimates of the dielectronic recombination rate coefficients in complex calculations of the parameters of thermonuclear plasmas. The statistical approach also provides dielectronic recombination rates at much smaller computational cost than the available level-by-level codes.
Hill, Mary C.
2010-01-01
Doherty and Hunt (2009) present important ideas for first-order second-moment sensitivity analysis, but five issues are discussed in this comment. First, considering the composite scaled sensitivity (CSS) jointly with parameter correlation coefficients (PCC) in a CSS/PCC analysis addresses the difficulties with CSS mentioned in the introduction. Second, their new parameter identifiability statistic is actually likely to do a poor job of assessing parameter identifiability in common situations. The statistic instead performs the very useful role of showing how model parameters are included in the estimated singular value decomposition (SVD) parameters; its close relation to CSS is shown. Third, the idea from p. 125 that a suitable truncation point for SVD parameters can be identified using the prediction variance is challenged using results from Moore and Doherty (2005). Fourth, the relative error reduction statistic of Doherty and Hunt is shown to belong to an emerging set of statistics here named perturbed calculated variance statistics. Finally, the perturbed calculated variance statistics OPR and PPR mentioned on p. 121 are shown to explicitly include the parameter null-space component of uncertainty. Indeed, OPR and PPR results that account for null-space uncertainty have appeared in the literature since 2000.
[Sem: suitable statistical software adapted for research in oncology].
Kwiatkowski, F; Girard, M; Hacene, K; Berlie, J
2000-10-01
Many software packages have been adapted for medical use; they rarely offer convenient data management and statistics together. A recent cooperative effort produced a new package, Sem (Statistics Epidemiology Medicine), which allows both trial data management and statistical treatment of the data. Very convenient, it can be used by non-professionals in statistics (biologists, doctors, researchers, data managers), since the software usually (except with multivariate models) performs the most appropriate test by itself, after which complementary tests can be requested if needed. The Sem database manager (DBM) is not compatible with the usual DBMs: this constitutes a first protection against loss of privacy. Other safeguards (passwords, encryption...) strengthen data security, all the more necessary today since Sem can run on computer networks. The data organization allows multiplicity: forms can be duplicated per patient. Dates are treated in a special but transparent manner (sorting, date and delay calculations...). Sem communicates with common desktop software, often through a simple copy/paste. Statistics can thus be easily performed on data stored in external spreadsheets, and slides can be produced by pasting graphs with a single mouse click (survival curves...). Already in daily use at over fifty sites in different hospitals, this product, combining data management and statistics, appears to be a convenient and innovative solution.
1999 Iowa crash facts : a summary of motor vehicle crash statistics on Iowa roadways
DOT National Transportation Integrated Search
1999-01-01
All information concerning Iowa traffic crashes was taken from report forms provided by investigating officers and drivers involved in crashes. All statistics are gathered and calculated by the Iowa Department of Transportation's Office of Driv...
1997 Iowa crash summary : a summary of motor vehicle crash statistics on Iowa roadways
DOT National Transportation Integrated Search
1997-01-01
All information concerning Iowa traffic crashes was taken from report forms provided by investigating officers and drivers involved in crashes. All statistics are gathered and calculated by the Iowa Department of Transportation's Office of Dr...
1996 Iowa crash summary : a summary of motor vehicle crash statistics on Iowa roadways
DOT National Transportation Integrated Search
1996-01-01
All information concerning Iowa traffic crashes was taken from report forms provided by investigating officers and drivers involved in crashes. All statistics are gathered and calculated by the Iowa Department of Transportation's Office of Dr...
1998 Iowa crash summary : a summary of motor vehicle crash statistics on Iowa roadways
DOT National Transportation Integrated Search
1998-01-01
All information concerning Iowa traffic crashes was taken from report forms provided by investigating officers and drivers involved in crashes. All statistics are gathered and calculated by the Iowa Department of Transportation's Office of Dr...
Non-specific low back pain: occupational or lifestyle consequences?
Stričević, Jadranka; Papež, Breda Jesenšek
2015-12-01
Nursing has been identified as a risk occupation for the development of low back pain (LBP). The aim of our study was to find out how much occupational factors influence the development of LBP in hospital nursing personnel. A non-experimental approach with a cross-sectional survey and statistical analysis was used. Nine hundred questionnaires were distributed among nursing personnel; 663 were returned and 659 (73.2%) were considered for the analysis. Univariate and multivariate statistics for LBP risk were calculated by binary logistic regression. The χ², influence factor, 95% confidence interval, and P value were calculated. Multivariate binary logistic regression was calculated by the Wald method to omit insignificant variables. Not performing exercises represented the highest risk for the development of LBP (OR 2.8, 95% CI 1.7-4.4; p < 0.001). The second- and third-ranked risk factors were frequent manual lifting > 10 kg (OR 2.4, 95% CI 1.5-3.8; p < 0.001) and duration of employment ≥ 19 years (OR 2.4, 95% CI 1.6-3.7; p < 0.001). The fourth-ranked factor was better physical condition through frequent recreation and sports, which reduced the risk for the development of LBP (OR 0.4, 95% CI 0.3-0.7; p = 0.001). Work with a computer ≥ 2 h per day, the last significant risk factor, also reduced the risk for the development of LBP (OR 0.6, 95% CI 0.4-0.1; p = 0.049). The risk factors for LBP established in our study (exercises, duration of employment, frequent manual lifting, recreation and sports, and work with the computer) are not specifically linked to the working environment of nursing personnel. Rather than focusing on mechanical causes and direct workload in the development of non-specific LBP, a complex approach to LBP including genetics, psychosocial environment, lifestyle, and quality of life is coming more to the fore.
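The univariate odds ratios reported above can be reproduced from a 2x2 table with a Wald confidence interval, the count-based counterpart of the logistic-regression estimates; the counts below are illustrative, not the study's data.

```python
import math

def odds_ratio_ci(exposed_cases, exposed_controls,
                  unexposed_cases, unexposed_controls, z=1.96):
    """Odds ratio with a 95 % Wald confidence interval from a 2x2 table.

    SE of log(OR) is sqrt(1/a + 1/b + 1/c + 1/d); the CI is computed on
    the log scale and exponentiated back.
    """
    a, b = exposed_cases, exposed_controls
    c, d = unexposed_cases, unexposed_controls
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical table: LBP vs no LBP, split by "no regular exercise"
or_, lo, hi = odds_ratio_ci(120, 80, 60, 110)
```

An interval whose lower bound exceeds 1 (or whose upper bound is below 1, for protective factors such as recreation in the study) corresponds to a significant univariate association.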
Statistical mechanics of the cluster Ising model
NASA Astrophysics Data System (ADS)
Smacchia, Pietro; Amico, Luigi; Facchi, Paolo; Fazio, Rosario; Florio, Giuseppe; Pascazio, Saverio; Vedral, Vlatko
2011-08-01
We study a Hamiltonian system describing a three-spin-1/2 clusterlike interaction competing with an Ising-like antiferromagnetic interaction. We compute free energy, spin-correlation functions, and entanglement both in the ground and in thermal states. The model undergoes a quantum phase transition between an Ising phase with a nonvanishing magnetization and a cluster phase characterized by a string order. Any two-spin entanglement is found to vanish in both quantum phases because of a nontrivial correlation pattern. Nevertheless, the residual multipartite entanglement is maximal in the cluster phase and dependent on the magnetization in the Ising phase. We study the block entropy at the critical point and calculate the central charge of the system, showing that the criticality of the system is beyond the Ising universality class.
Validity criteria for Fermi’s golden rule scattering rates applied to metallic nanowires
NASA Astrophysics Data System (ADS)
Moors, Kristof; Sorée, Bart; Magnus, Wim
2016-09-01
Fermi’s golden rule underpins the investigation of mobile carriers propagating through various solids, being a standard tool to calculate their scattering rates. As such, it provides a perturbative estimate under the implicit assumption that the effect of the interaction Hamiltonian which causes the scattering events is sufficiently small. To check the validity of this assumption, we present a general framework to derive simple validity criteria in order to assess whether the scattering rates can be trusted for the system under consideration, given its statistical properties such as average size, electron density, impurity density et cetera. We derive concrete validity criteria for metallic nanowires with conduction electrons populating a single parabolic band subjected to different elastic scattering mechanisms: impurities, grain boundaries and surface roughness.
Quantal diffusion description of multinucleon transfers in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Ayik, S.; Yilmaz, B.; Yilmaz, O.; Umar, A. S.
2018-05-01
Employing the stochastic mean-field (SMF) approach, we develop a quantal diffusion description of multinucleon transfer in heavy-ion collisions at finite impact parameters. The quantal transport coefficients are determined by the occupied single-particle wave functions of the time-dependent Hartree-Fock equations. As a result, the primary fragment mass and charge distribution functions are determined entirely in terms of the mean-field properties. This powerful description does not involve any adjustable parameter, includes the effects of shell structure, and is consistent with the fluctuation-dissipation theorem of nonequilibrium statistical mechanics. As a first application of the approach, we analyze the fragment mass distribution in 48Ca+238U collisions at the center-of-mass energy Ec.m. = 193 MeV and compare the calculations with the experimental data.
Strelioff, Christopher C; Crutchfield, James P; Hübler, Alfred W
2007-07-01
Markov chains are a natural and well understood tool for describing one-dimensional patterns in time or space. We show how to infer kth-order Markov chains, for arbitrary k, from finite data by applying Bayesian methods to both parameter estimation and model-order selection. Extending existing results for multinomial models of discrete data, we connect inference to statistical mechanics through information-theoretic (type theory) techniques. We establish a direct relationship between Bayesian evidence and the partition function which allows for straightforward calculation of the expectation and variance of the conditional relative entropy and the source entropy rate. Finally, we introduce a method that uses finite data-size scaling with model-order comparison to infer the structure of out-of-class processes.
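The parameter-estimation half of this program can be sketched as counting k-symbol contexts with additive (Dirichlet) smoothing, which is the posterior-mean estimate under a symmetric Dirichlet prior; the evidence-based model-order selection of the paper is not reproduced here.

```python
from collections import defaultdict

def infer_markov_chain(sequence, k, alpha=1.0):
    """Estimate a kth-order Markov chain from a symbol sequence.

    Counts each length-k context and its successor, then smooths with
    pseudo-count alpha (the Dirichlet prior strength).
    Returns {context_tuple: {symbol: probability}}.
    """
    counts = defaultdict(lambda: defaultdict(float))
    for i in range(len(sequence) - k):
        context = tuple(sequence[i:i + k])
        counts[context][sequence[i + k]] += 1.0
    alphabet = sorted(set(sequence))
    model = {}
    for context, nxt in counts.items():
        total = sum(nxt.values()) + alpha * len(alphabet)
        model[context] = {s: (nxt.get(s, 0.0) + alpha) / total
                          for s in alphabet}
    return model

# Period-2 source: a first-order (k = 1) chain captures it sharply
seq = "01" * 50
model = infer_markov_chain(seq, k=1)
```

For this deterministic source the smoothed transition probabilities approach 1 as the data length grows, while the pseudo-counts keep every probability strictly positive, which is what makes the Bayesian evidence comparison across model orders well defined.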
Iancu, Violeta; Hla, Saw-Wai
2006-01-01
Single chlorophyll-a molecules, a vital resource for the sustenance of life on Earth, have been investigated by using scanning tunneling microscope manipulation and spectroscopy on a gold substrate at 4.6 K. Chlorophyll-a binds on Au(111) via its porphyrin unit while the phytyl-chain is elevated from the surface by the support of four CH3 groups. By injecting tunneling electrons from the scanning tunneling microscope tip, we are able to bend the phytyl-chain, which enables the switching of four molecular conformations in a controlled manner. Statistical analyses and structural calculations reveal that all reversible switching mechanisms are initiated by a single tunneling-electron energy-transfer process, which induces bond rotation within the phytyl-chain.
Auger recombination in sodium iodide
NASA Astrophysics Data System (ADS)
McAllister, Andrew; Kioupakis, Emmanouil; Åberg, Daniel; Schleife, André
2014-03-01
Scintillators are an important tool used to detect high energy radiation - both in the interest of national security and in medicine. However, scintillator detectors currently suffer from lower energy resolutions than expected from basic counting statistics. This has been attributed to non-proportional light yield compared to incoming radiation, but the specific mechanism for this non-proportionality has not been identified. Auger recombination is a non-radiative process that could be contributing to the non-proportionality of scintillating materials. Auger recombination comes in two types - direct and phonon-assisted. We have used first-principles calculations to study Auger recombination in sodium iodide, a well characterized scintillating material. Our findings indicate that phonon-assisted Auger recombination is stronger in sodium iodide than direct Auger recombination. Computational resources provided by LLNL and NERSC. Funding provided by NA-22.
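The way a cubic Auger term produces non-proportional light yield can be illustrated with the standard ABC recombination model; the coefficients below are generic order-of-magnitude placeholders, not first-principles values for sodium iodide.

```python
# ABC recombination model: rate = A*n + B*n^2 + C*n^3, where only the
# bimolecular B term is radiative.  Illustrative coefficients:
A = 1e7     # 1/s      defect-assisted (Shockley-Read-Hall) recombination
B = 1e-10   # cm^3/s   radiative recombination
C = 1e-29   # cm^6/s   Auger recombination (non-radiative)

def radiative_fraction(n):
    """Fraction of recombination events that emit light at carrier
    density n (cm^-3)."""
    return B * n**2 / (A * n + B * n**2 + C * n**3)

low, high = radiative_fraction(1e17), radiative_fraction(1e20)
```

Because the Auger term scales as n cubed, the radiative fraction drops sharply in the dense part of an ionization track, so the light yield per unit deposited energy depends on the local excitation density, which is the non-proportionality mechanism the abstract investigates.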
Critical Current Statistics of a Graphene-Based Josephson Junction Infrared Single Photon Detector
NASA Astrophysics Data System (ADS)
Walsh, Evan D.; Lee, Gil-Ho; Efetov, Dmitri K.; Heuck, Mikkel; Crossno, Jesse; Taniguchi, Takashi; Watanabe, Kenji; Ohki, Thomas A.; Kim, Philip; Englund, Dirk; Fong, Kin Chung
Graphene is a promising material for single photon detection due to its broadband absorption and exceptionally low specific heat. We present a photon detector using a graphene sheet as the weak link in a Josephson junction (JJ) to form a threshold detector for single infrared photons. Calculations show that such a device could experience temperature changes of a few hundred percent leading to sub-Hz dark count rates and internal efficiencies approaching unity. We have fabricated the graphene-based JJ (gJJ) detector and measure switching events that are consistent with single photon detection under illumination by an attenuated laser. We study the physical mechanism for these events through the critical current behavior of the gJJ as a function of incident photon flux.
NASA Astrophysics Data System (ADS)
Bina, C. R.
An optimization algorithm based upon the method of simulated annealing is of utility in calculating equilibrium phase assemblages as functions of pressure, temperature, and chemical composition. Operating by analogy to the statistical mechanics of the chemical system, it is applicable both to problems of strict chemical equilibrium and to problems involving metastability. The method reproduces known phase diagrams and illustrates the expected thermal deflection of phase transitions in thermal models of subducting lithospheric slabs and buoyant mantle plumes. It reveals temperature-induced changes in phase transition sharpness and the stability of Fe-rich γ phase within an α+γ field in cold slab thermal models, and it suggests that transitions such as the possible breakdown of silicate perovskite to mixed oxides can amplify velocity anomalies.
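The Metropolis annealing idea, cooling a fictitious statistical-mechanical system so the walker can escape metastable minima, can be sketched on a toy one-dimensional "free energy"; the function and parameters are purely illustrative, not the paper's thermodynamic model.

```python
import math
import random

def anneal(g, x0, bounds=(-1.5, 1.5), t_start=1.0, t_end=1e-3,
           steps=20000, step=0.1, seed=None):
    """Metropolis simulated annealing: minimize g(x) by accepting uphill
    moves with Boltzmann probability exp(-dG/T) while T is cooled, so the
    walker can cross barriers out of metastable local minima."""
    rng = random.Random(seed)
    x, gx = x0, g(x0)
    for i in range(steps):
        t = t_start * (t_end / t_start) ** (i / steps)  # geometric cooling
        x_new = min(bounds[1], max(bounds[0], x + rng.gauss(0.0, step)))
        g_new = g(x_new)
        if g_new < gx or rng.random() < math.exp((gx - g_new) / t):
            x, gx = x_new, g_new
    return x, gx

# Toy double-well energy: metastable minimum near x = +0.70,
# global minimum near x = -0.76 (stand-in for a real G(P, T, X))
g = lambda x: x**4 - x**2 + 0.25 * x

# Start in the metastable well; a few restarts, keep the best result
x_eq, g_eq = min((anneal(g, x0=0.7, seed=s) for s in range(5)),
                 key=lambda r: r[1])
```

Starting the walker in the metastable well and still recovering the global minimum is the point: the same machinery that finds the equilibrium assemblage can, with a faster quench, be made to study metastable persistence.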
Many roads to synchrony: natural time scales and their algorithms.
James, Ryan G; Mahoney, John R; Ellison, Christopher J; Crutchfield, James P
2014-04-01
We consider two important time scales, the Markov and cryptic orders, that monitor how an observer synchronizes to a finitary stochastic process. We show how to compute these orders exactly and that they are most efficiently calculated from the ε-machine, a process's minimal unifilar model. Surprisingly, though the Markov order is a basic concept from stochastic process theory, it is not a probabilistic property of a process. Rather, it is a topological property and, moreover, it is not computable from any finite-state model other than the ε-machine. Via an exhaustive survey, we close by demonstrating that infinite Markov and infinite cryptic orders are a dominant feature in the space of finite-memory processes. We draw out the roles played in statistical mechanical spin systems by these two complementary length scales.
Marigo, Luca; D'Arcangelo, Camillo; De Angelis, Francesco; Cordaro, Massimo; Vadini, Mirco; Lajolo, Carlo
2017-02-01
The purpose of this study was to evaluate the push-out bond strengths of four commercially available adhesive luting systems (two self-adhesive and two etch-and-rinse systems) after mechanical aging. Forty single-rooted anterior teeth were divided into four groups according to the luting cement system used: Cement-One (Group 1); One-Q-adhesive Bond + Axia Core Dual (Group 2); SmartCem® 2 (Group 3); and XP Bond® + Core-X™ Flow (Group 4). The Anatomical Post was cemented in groups 1 and 2, and the D.T. Light-Post Illusion was cemented in groups 3 and 4. All samples were subjected to masticatory stress simulation consisting of 300,000 cycles applied with a computer-controlled chewing simulator. Push-out bond strength values (MPa) were calculated at each level (cervical, middle, and apical), and the total bond strengths were calculated as the averages of the three levels. Statistical analysis was performed with data analysis software and significance was set at P<0.05. Statistically significant differences in total bond strength were detected between the cements (Group 4: 3.28 MPa, Group 1: 2.77 MPa, Group 2: 2.36 MPa, Group 3: 1.13 MPa; P<0.05). Specifically, Group 1 exhibited a lower bond strength in the apical zone, Group 3 exhibited a higher strength in this zone, and groups 2 and 4 exhibited more homogeneous bond strengths across the different anatomical zones. After artificial aging, etch-and-rinse luting systems exhibited more homogeneous bond strengths; nevertheless, Cement-One exhibited a total bond strength second only to Core-X Flow.
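The per-slice strength computation follows the usual push-out convention, maximum debonding force divided by the lateral (frustum) area of the tapered post segment; the slice dimensions and force below are illustrative, not values from this study.

```python
import math

def push_out_strength(force_n, r_top_mm, r_bottom_mm, thickness_mm):
    """Push-out bond strength (MPa) for one root slice: maximum debonding
    force over the lateral area of the conical-frustum post segment,
    A = pi * (r1 + r2) * slant_height.  N / mm^2 equals MPa."""
    slant = math.sqrt((r_top_mm - r_bottom_mm) ** 2 + thickness_mm ** 2)
    area_mm2 = math.pi * (r_top_mm + r_bottom_mm) * slant
    return force_n / area_mm2

# e.g. a 1 mm thick slice, post radii 0.8 and 0.7 mm, debonding at 25 N
mpa = push_out_strength(25.0, 0.8, 0.7, 1.0)
```

For a parallel-sided post the formula reduces to force over pi * diameter * thickness, the cylindrical special case.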