Quadratic adaptive algorithm for solving cardiac action potential models.
Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing
2016-10-01
An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are chosen adaptively by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near the peak region, which sometimes causes the action potential to become distorted. In contrast, the proposed method chooses very fine time steps in the peak region but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid method blows up if the allowable time step is set greater than 0.1 ms, whereas our method adjusts the time step size automatically and remains stable. Overall, the proposed method is more accurate than, and as efficient as, the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and further investigation is needed to determine whether the method is helpful during propagation of the action potential. Copyright © 2016 Elsevier Ltd. All rights reserved.
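A minimal sketch of the quadratic time-step rule this abstract describes: pick the step dt so that the second-order Taylor estimate of the change in membrane potential stays near a tolerance, then clamp the result (a tsr-style restriction). The function name, tolerance, and clamp values here are illustrative assumptions, not the paper's exact formula.

```python
import math

def adaptive_step(dVdt, d2Vdt2, eps=1e-3, dt_min=1e-4, dt_max=0.5):
    """Choose dt so the 2nd-order Taylor estimate of the change in V
    stays near a tolerance eps: solve 0.5*|V''|*dt**2 + |V'|*dt = eps.

    Tolerances and clamps are illustrative, not the paper's values.
    """
    a, b = 0.5 * abs(d2Vdt2), abs(dVdt)
    if a < 1e-12:                      # quadratic term negligible
        dt = eps / b if b > 1e-12 else dt_max
    else:                              # positive root of the quadratic formula
        dt = (-b + math.sqrt(b * b + 4.0 * a * eps)) / (2.0 * a)
    return min(max(dt, dt_min), dt_max)  # tsr-style step clamp
```

During the fast upstroke both derivatives are large, so dt shrinks; in the plateau and resting phases they are small, so dt grows toward the clamp, matching the fine-steps-at-the-peak behavior described above.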
NASA Technical Reports Server (NTRS)
Jones, Henry E.
1997-01-01
A study of the full-potential modeling of a blade-vortex interaction was made. A primary goal of this study was to investigate the effectiveness of the various methods of modeling the vortex. The model problem restricts the interaction to that of an infinite wing with an infinite line vortex moving parallel to its leading edge. This problem provides a convenient testing ground for the various methods of modeling the vortex while retaining the essential physics of the full three-dimensional interaction. A full-potential algorithm specifically tailored to solve the blade-vortex interaction (BVI) was developed to solve this problem. The basic algorithm was modified to include the effect of a vortex passing near the airfoil. Four different methods of modeling the vortex were used: (1) the angle-of-attack method, (2) the lifting-surface method, (3) the branch-cut method, and (4) the split-potential method. A side-by-side comparison of the four models was conducted. These comparisons included comparing generated velocity fields, a subcritical interaction, and a critical interaction. The subcritical and critical interactions are compared with experimentally generated results. The split-potential model was used to make a survey of some of the more critical parameters which affect the BVI.
Comparison of methods for the analysis of relatively simple mediation models.
Rijnhart, Judith J M; Twisk, Jos W R; Chinapaw, Mai J M; de Boer, Michiel R; Heymans, Martijn W
2017-09-01
Statistical mediation analysis is a frequently used method in trials to unravel the pathways underlying the effect of an intervention on a particular outcome variable. Throughout the years, several methods have been proposed, such as ordinary least squares (OLS) regression, structural equation modeling (SEM), and the potential outcomes framework. Most applied researchers do not know that these methods are mathematically equivalent when applied to mediation models with a continuous mediator and outcome variable. Therefore, the aim of this paper was to demonstrate the similarities between OLS regression, SEM, and the potential outcomes framework in three mediation models: 1) a crude model, 2) a confounder-adjusted model, and 3) a model with an interaction term for exposure-mediator interaction. We performed a secondary data analysis of a randomized controlled trial that included 546 schoolchildren. In our data example, the mediator and outcome variable were both continuous. We compared the estimates of the total, direct, and indirect effects, the proportion mediated, and 95% confidence intervals (CIs) for the indirect effect across OLS regression, SEM, and the potential outcomes framework. OLS regression, SEM, and the potential outcomes framework yielded the same effect estimates in the crude mediation model, the confounder-adjusted mediation model, and the mediation model with an interaction term for exposure-mediator interaction. Since OLS regression, SEM, and the potential outcomes framework yield the same results in these three mediation models with a continuous mediator and outcome variable, researchers can continue using the method that is most convenient to them.
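The crude mediation model with a continuous mediator and outcome reduces to two OLS fits and the classic product-of-coefficients decomposition, which is why the three frameworks agree. A sketch (function and variable names are illustrative):

```python
import numpy as np

def mediation_ols(x, m, y):
    """Crude mediation model via two OLS regressions:
    a  = effect of exposure x on mediator m,
    b  = effect of m on outcome y given x,
    c' = direct effect of x on y given m.
    indirect = a*b, total = c' + a*b (exact for OLS with intercepts).
    """
    def ols(cols, z):
        X = np.column_stack([np.ones(len(z))] + cols)
        return np.linalg.lstsq(X, z, rcond=None)[0]
    a = ols([x], m)[1]
    _, c_prime, b = ols([x, m], y)
    return {"indirect": a * b, "direct": c_prime, "total": c_prime + a * b}
```

For linear models the returned total effect equals the slope from regressing y on x alone, the identity that makes OLS, SEM, and the potential outcomes framework coincide in the crude case.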
Advances in visual representation of molecular potentials.
Du, Qi-Shi; Huang, Ri-Bo; Chou, Kuo-Chen
2010-06-01
The recent advances in visual representations of molecular properties in 3D space are summarized, and their applications in molecular modeling and rational drug design are introduced. The visual representation methods provide us with detailed insights into protein-ligand interactions, and hence can play a major role in elucidating the structure or reactivity of a biomolecular system. Three newly developed computation and visualization methods for studying the physical and chemical properties of molecules are introduced, including their electrostatic potential, lipophilicity potential and excess chemical potential. The newest application examples of visual representations in structure-based rational drug design are presented. The 3D electrostatic potentials, calculated using the empirical method (EM-ESP), in which the classical Coulomb equation and traditional atomic partial charges are discarded, are highly consistent with the results of higher-level quantum chemical methods. The 3D lipophilicity potentials, computed by the heuristic molecular lipophilicity potential method based on the principles of quantum mechanics and statistical mechanics, are more accurate and reliable than those obtained using the traditional empirical methods. The 3D excess chemical potentials, derived by the reference interaction site model-hypernetted chain theory, provide a new tool for computational chemistry and molecular modeling. For structure-based drug design, the visual representations of molecular properties will play a significant role in practical applications. It is anticipated that the new advances in computational chemistry will stimulate the development of molecular modeling methods, further enriching the visual representation techniques for rational drug design, as well as other relevant fields in life science.
Derivation of aerodynamic kernel functions
NASA Technical Reports Server (NTRS)
Dowell, E. H.; Ventres, C. S.
1973-01-01
The method of Fourier transforms is used to determine the kernel function which relates the pressure on a lifting surface to the prescribed downwash within the framework of Dowell's (1971) shear flow model. This model is intended to improve upon the potential flow aerodynamic model by allowing for the aerodynamic boundary layer effects neglected in the potential flow model. For simplicity, incompressible, steady flow is considered. The proposed method is illustrated by deriving known results from potential flow theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaenko, Alexander; Windus, Theresa L.; Sosonkina, Masha
2012-10-19
The design and development of scientific software components to provide an interface to the effective fragment potential (EFP) methods are reported. Multiscale modeling of physical and chemical phenomena demands the merging of software packages developed by research groups in significantly different fields. Componentization offers an efficient way to realize new high performance scientific methods by combining the best models available in different software packages without a need for package readaptation after the initial componentization is complete. The EFP method is an efficient electronic structure theory based model potential that is suitable for predictive modeling of intermolecular interactions in large molecular systems, such as liquids, proteins, atmospheric aerosols, and nanoparticles, with an accuracy that is comparable to that of correlated ab initio methods. The developed components make the EFP functionality accessible for any scientific component-aware software package. The performance of the component is demonstrated on a protein interaction model, and its accuracy is compared with results obtained with coupled cluster methods.
Exploring a potential energy surface by machine learning for characterizing atomic transport
NASA Astrophysics Data System (ADS)
Kanamori, Kenta; Toyoura, Kazuaki; Honda, Junya; Hattori, Kazuki; Seko, Atsuto; Karasuyama, Masayuki; Shitara, Kazuki; Shiga, Motoki; Kuwabara, Akihide; Takeuchi, Ichiro
2018-03-01
We propose a machine-learning method for evaluating the potential barrier governing atomic transport based on the preferential selection of dominant points for atomic transport. The proposed method generates numerous random samples of the entire potential energy surface (PES) from a probabilistic Gaussian process model of the PES, which enables defining the likelihood of the dominant points. The robustness and efficiency of the method are demonstrated on a dozen model cases for proton diffusion in oxides, in comparison with a conventional nudged elastic band method.
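The core idea (random PES samples from a Gaussian process posterior, used to characterize the barrier) can be sketched in one dimension. This simplified version draws pointwise (marginal) posterior samples rather than whole correlated surfaces, and omits the paper's selection of dominant points; kernel, length scale, and noise are assumed values.

```python
import numpy as np

def gp_barrier_samples(x_obs, y_obs, x_grid, n_draws=200, ell=0.2, noise=1e-6):
    """Draw plausible PES values from a Gaussian-process posterior (RBF
    kernel, unit prior variance) and record the barrier, i.e. the grid
    maximum, of each draw. Marginal draws only, for simplicity."""
    def kern(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    K = kern(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = kern(x_grid, x_obs)
    mean = Ks @ np.linalg.solve(K, y_obs)
    # posterior variance at each grid point: k(x,x) - k_s K^-1 k_s^T (diagonal)
    var = np.clip(1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T)), 0.0, None)
    rng = np.random.default_rng(0)
    draws = mean[:, None] + np.sqrt(var)[:, None] * rng.standard_normal((len(x_grid), n_draws))
    return draws.max(axis=0)   # distribution of barrier heights
```

The spread of the returned barrier samples is a natural uncertainty measure, and grid points that dominate the maximum across many draws play the role of the "dominant points" described above.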
A constructive model potential method for atomic interactions
NASA Technical Reports Server (NTRS)
Bottcher, C.; Dalgarno, A.
1974-01-01
A model potential method is presented that can be applied to many electron single centre and two centre systems. The development leads to a Hamiltonian with terms arising from core polarization that depend parametrically upon the positions of the valence electrons. Some of the terms have been introduced empirically in previous studies. Their significance is clarified by an analysis of a similar model in classical electrostatics. The explicit forms of the expectation values of operators at large separations of two atoms given by the model potential method are shown to be equivalent to the exact forms when the assumption is made that the energy level differences of one atom are negligible compared to those of the other.
Shen, Lin; Yang, Weitao
2016-04-12
We developed a new multiresolution method that spans three levels of resolution with quantum mechanical, atomistic molecular mechanical, and coarse-grained models. The resolution-adapted all-atom and coarse-grained water model, in which an all-atom structural description of the entire system is maintained during the simulations, is combined with the ab initio quantum mechanics and molecular mechanics method. We apply this model to calculate the redox potentials of aqueous ruthenium and iron complexes by using the fractional number of electrons approach and thermodynamic integration simulations. The redox potentials are recovered in excellent agreement with the experimental data. The speed-up of the hybrid all-atom and coarse-grained water model renders it computationally more attractive. The accuracy depends on the hybrid all-atom and coarse-grained water model used in the combined quantum mechanical and molecular mechanical method. We have also tested another multiresolution model, in which an atomic-level layer of water molecules around the redox center is solvated in supramolecular coarse-grained waters, for the redox potential calculations. Compared with the experimental data, this alternative multilayer model leads to less accurate results when used with the coarse-grained polarizable MARTINI water or big multipole water model for the coarse-grained layer.
Modeling potential evapotranspiration of two forested watersheds in the southern Appalachians
L.Y. Rao; G. Sun; C.R. Ford; J.M. Vose
2011-01-01
Global climate change has direct impacts on watershed hydrology through altering evapotranspiration (ET) processes at multiple scales. There are many methods to estimate forest ET with models, but the most practical and the most popular one is the potential ET (PET) based method. However, the choice of PET methods for AET estimation remains challenging. This study...
Zhou, Hongyi; Skolnick, Jeffrey
2009-01-01
In this work, we develop a fully automated method for quality assessment prediction of protein structural models generated by structure prediction approaches such as fold recognition servers or ab initio methods. The approach is based on fragment comparisons and a consensus Cα contact potential derived from the set of models to be assessed, and was tested on CASP7 server models. The average Pearson linear correlation coefficient between predicted quality and model GDT-score per target is 0.83 for the 98 targets, which is better than those of the other quality assessment methods that participated in CASP7. Our method also outperforms the other methods by about 3% as assessed by the total GDT-score of the selected top models. PMID:18004783
Two-body potential model based on cosine series expansion for ionic materials
Oda, Takuji; Weber, William J.; Tanigawa, Hisashi
2015-09-23
We examine a method to construct a two-body potential model for ionic materials from a Fourier (cosine) series basis. In this method, the coefficients of the cosine basis functions are uniquely determined by solving simultaneous linear equations that minimize the sum of weighted mean square errors in energy, force, and stress, with first-principles calculation results used as the reference data. As a validation test of the method, potential models for magnesium oxide are constructed. The mean square errors converge appropriately with respect to the truncation of the cosine series. This result mathematically indicates that the constructed potential model is sufficiently close to the one that would be achieved with the non-truncated Fourier series, and demonstrates that this potential virtually provides the minimum error from the reference data within the two-body representation. The constructed potential models work appropriately in both molecular statics and dynamics simulations, especially if a two-step correction to revise errors expected in the reference data is performed, and the models clearly outperform the two existing Buckingham potential models that were tested. Moreover, the good agreement with first-principles calculations over a broad range of energies and forces should enable the prediction of materials behavior away from equilibrium conditions, such as a system under irradiation.
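Because the model is linear in the cosine coefficients, the fit reduces to a linear least-squares problem. A minimal sketch for the energy-only case (the paper also weights force and stress errors and applies a two-step correction; the cutoff and term count here are arbitrary):

```python
import numpy as np

def fit_cosine_potential(r, E, n_terms=8, r_cut=6.0):
    """Determine coefficients c_k of V(r) = sum_k c_k cos(k*pi*r/r_cut)
    by linear least squares against reference pair energies E(r).
    Energy-only sketch; force/stress weighting is omitted."""
    k = np.arange(n_terms)
    A = np.cos(np.outer(r, k) * np.pi / r_cut)   # design matrix, one column per k
    c, *_ = np.linalg.lstsq(A, E, rcond=None)
    V = lambda x: np.cos(np.outer(np.atleast_1d(x), k) * np.pi / r_cut) @ c
    return c, V
```

Adding more cosine terms simply appends columns to the design matrix, which is why the truncation error converges monotonically as described above.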
Full-potential modeling of blade-vortex interactions
NASA Technical Reports Server (NTRS)
Jones, H. E.; Caradonna, F. X.
1986-01-01
A comparison is made of four different models for predicting the unsteady loading induced by a vortex passing close to an airfoil. (1) The first model approximates the vortex effect as a change in the airfoil angle of attack. (2) The second model is related to the first but, instead of imposing only a constant velocity on the airfoil, the distributed effect of the vortex is computed and used. This is analogous to a lifting surface method. (3) The third model is to specify a branch cut discontinuity in the potential field. The vortex is modeled as a jump in potential across the branch cut, the edge of which represents the center of the vortex. (4) The fourth method models the vortex expressing the potential as the sum of a known potential due to the vortex and an unknown perturbation due to the airfoil. The purpose of the current study is to investigate the four vortex models described above and to determine their relative merits and suitability for use in large three-dimensional codes.
NASA Astrophysics Data System (ADS)
Nilsson, A.; Suttie, N.
2016-12-01
Sedimentary palaeomagnetic data may exhibit some degree of smoothing of the recorded field due to the gradual processes by which the magnetic signal is `locked-in' over time. Here we present a new Bayesian method to construct age-depth models based on palaeomagnetic data, taking into account and correcting for potential lock-in delay. The age-depth model is built on the widely used "Bacon" dating software by Blaauw and Christen (2011, Bayesian Analysis 6, 457-474) and is designed to combine both radiocarbon and palaeomagnetic measurements. To our knowledge, this is the first palaeomagnetic dating method that addresses the potential problems related to post-depositional remanent magnetisation acquisition in age-depth modelling. Age-depth models, including a site-specific lock-in depth and lock-in filter function, produced with this method are shown to be consistent with independent results based on radiocarbon wiggle-match dated sediment sections. Besides its primary use as a dating tool, our new method can also be used specifically to identify the most likely lock-in parameters for a specific record. We explore the potential to use these results to construct high-resolution geomagnetic field models based on sedimentary palaeomagnetic data, adjusting for smoothing induced by post-depositional remanent magnetisation acquisition. Potentially, this technique could enable reconstructions of the Holocene geomagnetic field with the same amplitude of variability as observed in archaeomagnetic field models for the past three millennia.
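The forward effect being corrected for, smoothing of the field record by a lock-in filter, can be illustrated with a simple convolution. This sketch assumes a uniform (boxcar) filter, whereas the paper infers the filter shape and site-specific lock-in depth from the data.

```python
import numpy as np

def apply_lockin(signal, depth_step, lockin_depth):
    """Smooth a geomagnetic input series (sampled at regular depth
    intervals) with a uniform lock-in filter spanning `lockin_depth`.
    The boxcar shape is an illustrative assumption."""
    n = max(1, int(round(lockin_depth / depth_step)))
    kernel = np.ones(n) / n
    return np.convolve(signal, kernel, mode="same")
```

In an inversion such as the one described above, candidate lock-in parameters are scored by how well the filtered field prediction matches the sediment record alongside the radiocarbon constraints.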
NASA Astrophysics Data System (ADS)
Hale, Lucas M.; Trautt, Zachary T.; Becker, Chandler A.
2018-07-01
Atomistic simulations using classical interatomic potentials are powerful investigative tools linking atomic structures to dynamic properties and behaviors. It is well known that different interatomic potentials produce different results, thus making it necessary to characterize potentials based on how they predict basic properties. Doing so makes it possible to compare existing interatomic models in order to select those best suited for specific use cases, and to identify any limitations of the models that may lead to unrealistic responses. While the methods for obtaining many of these properties are often thought of as simple calculations, there are many underlying aspects that can lead to variability in the reported property values. For instance, multiple methods may exist for computing the same property and values may be sensitive to certain simulation parameters. Here, we introduce a new high-throughput computational framework that encodes various simulation methodologies as Python calculation scripts. Three distinct methods for evaluating the lattice and elastic constants of bulk crystal structures are implemented and used to evaluate the properties across 120 interatomic potentials, 18 crystal prototypes, and all possible combinations of unique lattice site and elemental model pairings. Analysis of the results reveals which potentials and crystal prototypes are sensitive to the calculation methods and parameters, and it assists with the verification of potentials, methods, and molecular dynamics software. The results, calculation scripts, and computational infrastructure are self-contained and openly available to support researchers in performing meaningful simulations.
NASA Astrophysics Data System (ADS)
Pan, Kok-Kwei
We have generalized the linked cluster expansion method to a broader class of many-body quantum systems, such as quantum spin systems with crystal-field potentials and the Hubbard model. The technique sums all connected diagrams to a given order of the perturbative Hamiltonian. The modified multiple-site Wick reduction theorem and the simple tau dependence of the standard basis operators are used to facilitate the evaluation of the integrals in the perturbation expansion. Computational methods are developed to calculate all terms in the series expansion. As a first example, the perturbation series expansion of the thermodynamic quantities of the single-band Hubbard model is obtained using a linked cluster series expansion technique. We have made corrections to all previous results of several papers (up to fourth order). The behaviors of the three-dimensional simple cubic and body-centered cubic systems are discussed from a qualitative analysis of the perturbation series up to fourth order. We have also calculated the sixth-order perturbation series of this model. As a second example, we present the magnetic properties of the spin-one Heisenberg model with arbitrary crystal-field potential using a linked cluster series expansion. The calculation of the thermodynamic properties using this method covers the whole range of temperature, in both the magnetically ordered and disordered phases. The series for the susceptibility and magnetization have been obtained up to fourth order for this model. The method sums all perturbation terms to a certain order and estimates the result using a well-developed and highly successful extrapolation method (the standard ratio method). The dependence of the critical temperature on the crystal-field potential, and the magnetization as a function of temperature and crystal-field potential, are shown. The critical behaviors at zero temperature are also shown.
The range of the crystal-field potential for Ni(2+) compounds is roughly estimated based on this model using known experimental results.
NASA Astrophysics Data System (ADS)
Gonçalves, Ítalo Gomes; Kumaira, Sissa; Guadagnin, Felipe
2017-06-01
Implicit modeling has experienced a rise in popularity over the last decade due to its advantages in terms of speed and reproducibility in comparison with manual digitization of geological structures. The potential-field method consists in interpolating a scalar function that indicates to which side of a geological boundary a given point belongs, based on cokriging of point data and structural orientations. This work proposes a vector potential-field solution from a machine learning perspective, recasting the problem as multi-class classification, which alleviates some of the original method's assumptions. The potentials related to each geological class are interpreted in a compositional data framework. Variogram modeling is avoided by using maximum likelihood to train the model, and an uncertainty measure is introduced. The methodology was applied to the modeling of a sample dataset provided with the software Move™. The calculations were implemented in the R language and 3D visualizations were prepared with the rgl package.
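The multi-class reading of the potential field can be sketched as follows: per-class potential values at a point are mapped to a composition (class probabilities), and the entropy of that composition serves as an uncertainty measure. This is a simplified illustration using a softmax map; the paper works with log-ratio transforms in the compositional framework, and the details here are assumptions.

```python
import numpy as np

def class_composition(potentials):
    """Map per-class scalar potentials at a point to a composition
    (softmax, numerically stabilized) and return it together with its
    entropy, an illustrative uncertainty measure."""
    z = potentials - np.max(potentials, axis=-1, keepdims=True)
    p = np.exp(z)
    p /= p.sum(axis=-1, keepdims=True)
    entropy = -(p * np.log(np.clip(p, 1e-12, None))).sum(axis=-1)
    return p, entropy
```

Points deep inside one geological unit yield a composition dominated by that class and low entropy; points near a boundary yield a balanced composition and high entropy.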
Hanson, R.T.; Flint, L.E.; Flint, A.L.; Dettinger, M.D.; Faunt, C.C.; Cayan, D.; Schmid, W.
2012-01-01
Potential climate change effects on aspects of conjunctive management of water resources can be evaluated by linking climate models with fully integrated groundwater-surface water models. The objective of this study is to develop a modeling system that links global climate models with regional hydrologic models, using the California Central Valley as a case study. The new method is a supply and demand modeling framework that can be used to simulate and analyze potential climate change and conjunctive use. Supply-constrained and demand-driven linkages in the water system in the Central Valley are represented with the linked climate models, precipitation-runoff models, agricultural and native vegetation water use, and hydrologic flow models to demonstrate the feasibility of this method. Simulated precipitation and temperature were used from the GFDL-A2 climate change scenario through the 21st century to drive a regional water balance mountain hydrologic watershed model (MHWM) for the surrounding watersheds in combination with a regional integrated hydrologic model of the Central Valley (CVHM). Application of this method demonstrates the potential transition from predominantly surface water to groundwater supply for agriculture with secondary effects that may limit this transition of conjunctive use. The particular scenario considered includes intermittent climatic droughts in the first half of the 21st century followed by severe persistent droughts in the second half of the 21st century. These climatic droughts do not yield a valley-wide operational drought but do cause reduced surface water deliveries and increased groundwater abstractions that may cause additional land subsidence, reduced water for riparian habitat, or changes in flows at the Sacramento-San Joaquin River Delta. The method developed here can be used to explore conjunctive use adaptation options and hydrologic risk assessments in regional hydrologic systems throughout the world.
NASA Astrophysics Data System (ADS)
Quintero-Chavarria, E.; Ochoa Gutierrez, L. H.
2016-12-01
Applications of the self-potential method in the fields of hydrogeology and environmental sciences have seen significant developments during the last two decades, with strong use in the identification of groundwater flows. Although only a few authors deal with the forward problem's solution, especially in the geophysics literature, different inversion procedures are currently being developed, but in most cases they are compared with unconventional groundwater velocity fields and restricted to structured meshes. This research solves the forward problem based on the finite element method, using St. Venant's principle to transform a point dipole, which is the field generated by a single vector, into a distribution of electrical monopoles. Then, two simple aquifer models were generated with specific boundary conditions, and head potentials, velocity fields, and electric potentials in the medium were computed. With the model's surface electric potential, the inverse problem is solved to retrieve the source of electric potential (the vector field associated with groundwater flow) using deterministic and stochastic approaches. The first approach was carried out by implementing Tikhonov regularization with a stabilizing operator adapted to the finite element mesh, while for the second a hierarchical Bayesian model based on Markov chain Monte Carlo (MCMC) and Markov random fields (MRF) was constructed. For all implemented methods, the results of the direct and inverse models were contrasted in two ways: 1) the shape and distribution of the vector field, and 2) the histogram of magnitudes. Finally, it was concluded that inversion procedures are improved when the velocity field's behavior is considered; thus, the deterministic method is more suitable for unconfined aquifers than confined ones. MCMC has restricted applications and requires a lot of information (particularly in potential fields), while MRF has a remarkable response, especially when dealing with confined aquifers.
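The deterministic branch is standard Tikhonov regularization: minimize the data misfit plus a penalty on the model, solved through the regularized normal equations. A minimal sketch (the stabilizing operator L defaults to the identity here, whereas the study adapts it to the finite element mesh):

```python
import numpy as np

def tikhonov_inverse(A, b, lam=1e-2, L=None):
    """Deterministic source inversion: minimize
        ||A x - b||^2 + lam * ||L x||^2
    via the regularized normal equations
        (A^T A + lam L^T L) x = A^T b.
    A maps source strengths to surface potentials; L stabilizes."""
    n = A.shape[1]
    L = np.eye(n) if L is None else L
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
```

Larger `lam` trades data fit for smoothness/smallness of the recovered source field, which is the knob that encodes prior knowledge about the groundwater velocity field's behavior.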
Identification of informative features for predicting proinflammatory potentials of engine exhausts.
Wang, Chia-Chi; Lin, Ying-Chi; Lin, Yuan-Chung; Jhang, Syu-Ruei; Tung, Chun-Wei
2017-08-18
The immunotoxicity of engine exhausts is of high concern to human health due to the increasing prevalence of immune-related diseases. However, the evaluation of immunotoxicity of engine exhausts is currently based on expensive and time-consuming experiments. It is desirable to develop efficient methods for immunotoxicity assessment. To accelerate the development of safe alternative fuels, this study proposed a computational method for identifying informative features for predicting proinflammatory potentials of engine exhausts. A principal component regression (PCR) algorithm was applied to develop prediction models. The informative features were identified by a sequential backward feature elimination (SBFE) algorithm. A total of 19 informative chemical and biological features were successfully identified by the SBFE algorithm. The informative features were utilized to develop a computational method named FS-CBM for predicting proinflammatory potentials of engine exhausts. The FS-CBM model achieved a high performance, with correlation coefficient values of 0.997 and 0.943 obtained from the training and independent test sets, respectively. The FS-CBM model was developed for predicting proinflammatory potentials of engine exhausts with a large improvement in prediction performance compared with our previous CBM model. The proposed method could be further applied to construct models for bioactivities of mixtures.
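Sequential backward feature elimination follows a simple greedy loop: starting from all features, repeatedly drop the feature whose removal keeps the model score at least as good. A generic sketch with a plug-in scoring function (the R² scorer below stands in for the paper's principal component regression fit; names and tolerances are illustrative):

```python
import numpy as np

def r2_score_ols(Xsub, y):
    """Example score: R^2 of an OLS fit with intercept (a stand-in for
    the principal component regression used in the paper)."""
    X1 = np.column_stack([np.ones(len(y)), Xsub])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

def sbfe(X, y, score, min_features=1, tol=1e-9):
    """Sequential backward feature elimination: drop, one at a time, the
    feature whose removal keeps the score (higher is better) within tol
    of the current best; stop when no removal qualifies."""
    feats = list(range(X.shape[1]))
    best = score(X[:, feats], y)
    while len(feats) > min_features:
        trials = [(score(X[:, [f for f in feats if f != j]], y), j) for j in feats]
        s, j = max(trials)
        if s < best - tol:
            break                      # every removal hurts: keep current set
        best = max(best, s)
        feats.remove(j)
    return feats, best
```

In the study this loop pared the candidate chemical and biological descriptors down to the 19 informative features reported above.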
NASA Astrophysics Data System (ADS)
Li, Wenzhuo; Zhao, Yingying; Huang, Shuaiyu; Zhang, Song; Zhang, Lin
2017-01-01
The goal of this work was to develop a coarse-grained (CG) model of a β-O-4 type lignin polymer, because of the time-consuming process required to achieve equilibrium for its atomistic model. The automatic adjustment method was used to develop the lignin CG model, which enables easy discrimination between chemically varied polymers. In the process of building the lignin CG model, a sum of n Gaussian functions was obtained by an approximation of the corresponding atomistic potentials derived from a simple Boltzmann inversion of the distributions of the structural parameters. This allowed the establishment of the potential functions of the CG bond stretching and angular bending. To obtain the potential function of the CG dihedral angle, an algorithm similar to a Fourier progression form was employed together with a nonlinear curve-fitting method. The numerical potentials of the nonbonded portion of the lignin CG model were obtained using a potential inversion iterative method derived from the corresponding atomistic nonbonded distributions. The study results showed that the proposed CG model of lignin agreed well with its atomistic model in terms of the distributions of bond lengths, bending angles, dihedral angles and nonbonded distances between the CG beads. The lignin CG model also reproduced the static and dynamic properties of the atomistic model. The results of the comparative evaluation of the two models suggested that the designed lignin CG model was efficient and reliable.
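Simple Boltzmann inversion, the starting point for the bonded CG potentials described above, converts a sampled structural distribution into an effective potential via V(q) = -kT ln P(q). A minimal histogram-based sketch (bin count and units are arbitrary choices):

```python
import numpy as np

def boltzmann_invert(samples, bins=40, kT=1.0):
    """Effective CG potential from a sampled structural parameter
    (bond length, angle, ...): V(q) = -kT * ln P(q), defined up to an
    additive constant; here shifted so the minimum is zero."""
    hist, edges = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = hist > 0                 # skip empty bins (log undefined)
    V = -kT * np.log(hist[mask])
    return centers[mask], V - V.min()
```

The resulting tabulated V(q) is what gets approximated by the sum of Gaussians (for bonds and angles) or the Fourier-like series (for dihedrals) mentioned in the abstract.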
Electrical resistance tomography using steel cased boreholes as electrodes
Daily, W.D.; Ramirez, A.L.
1999-06-22
An electrical resistance tomography method is described which uses steel cased boreholes as electrodes. The method enables mapping the electrical resistivity distribution in the subsurface from measurements of electrical potential caused by electrical currents injected into an array of electrodes in the subsurface. By use of current injection and potential measurement electrodes to generate data about the subsurface resistivity distribution, which data is then used in an inverse calculation, a model of the electrical resistivity distribution can be obtained. The inverse model may be constrained by independent data to better define an inverse solution. The method utilizes pairs of electrically conductive (steel) borehole casings as current injection electrodes and as potential measurement electrodes. The greater the number of steel cased boreholes in an array, the greater the amount of data is obtained. The steel cased boreholes may be utilized for either current injection or potential measurement electrodes. The subsurface model produced by this method can be 2 or 3 dimensional in resistivity depending on the detail desired in the calculated resistivity distribution and the amount of data to constrain the models. 2 figs.
Electrical resistance tomography using steel cased boreholes as electrodes
Daily, William D.; Ramirez, Abelardo L.
1999-01-01
An electrical resistance tomography method using steel cased boreholes as electrodes. The method enables mapping the electrical resistivity distribution in the subsurface from measurements of electrical potential caused by electrical currents injected into an array of electrodes in the subsurface. By use of current injection and potential measurement electrodes to generate data about the subsurface resistivity distribution, which data is then used in an inverse calculation, a model of the electrical resistivity distribution can be obtained. The inverse model may be constrained by independent data to better define an inverse solution. The method utilizes pairs of electrically conductive (steel) borehole casings as current injection electrodes and as potential measurement electrodes. The greater the number of steel cased boreholes in an array, the greater the amount of data is obtained. The steel cased boreholes may be utilized for either current injection or potential measurement electrodes. The subsurface model produced by this method can be 2 or 3 dimensional in resistivity depending on the detail desired in the calculated resistivity distribution and the amount of data to constrain the models.
Modeling Wildfire Hazard in the Western Hindu Kush-Himalayas
NASA Astrophysics Data System (ADS)
Bylow, D.
2012-12-01
Wildfire regimes are a leading driver of global environmental change affecting a diverse array of global ecosystems. Particulates and aerosols produced by wildfires are a primary source of air pollution, making the early detection and monitoring of wildfires crucial. The objectives of this study were to model regional wildfire potential and identify environmental, topological, and sociological factors that contribute to the ignition of wildfire events in the Western Hindu Kush-Himalayas of South Asia. These factors were used to model regional wildfire potential through multi-criteria evaluation using a method of weighted linear combination. Moderate Resolution Imaging Spectroradiometer (MODIS) and geographic information systems (GIS) data were integrated to analyze regional wildfires and construct the model. Model validation was performed using a holdout cross-validation method. The study produced a significant model of wildfire potential in the Western Hindu Kush-Himalayas.
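The multi-criteria evaluation by weighted linear combination mentioned above can be sketched as follows. This is an illustrative Python sketch, not the study's code; the criterion layers, weights, and values are hypothetical.

```python
# Illustrative multi-criteria evaluation by weighted linear combination:
# each criterion layer is normalized to [0, 1], and the hazard score is
# the weighted sum of the layers, with weights summing to 1.
import numpy as np

def weighted_linear_combination(criteria, weights):
    """Combine normalized criterion layers into a suitability score."""
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0), "weights must sum to 1"
    stack = np.stack([np.asarray(c, dtype=float) for c in criteria])
    # Contract the weight vector against the layer axis of the stack.
    return np.tensordot(weights, stack, axes=1)

# Hypothetical 2x2 layers: fuel dryness, slope, proximity to settlement.
fuel = np.array([[0.9, 0.2], [0.5, 0.7]])
slope = np.array([[0.4, 0.1], [0.8, 0.3]])
human = np.array([[0.6, 0.9], [0.2, 0.5]])
hazard = weighted_linear_combination([fuel, slope, human], [0.5, 0.3, 0.2])
```

The result is a per-cell hazard score on the same [0, 1] scale as the inputs.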
The current matrix elements from HAL QCD method
NASA Astrophysics Data System (ADS)
Watanabe, Kai; Ishii, Noriyoshi
2018-03-01
The HAL QCD method constructs a potential (the HAL QCD potential) that reproduces the NN scattering phase shift faithfully to QCD. The HAL QCD potential is obtained from QCD by eliminating the degrees of freedom of quarks and gluons and keeping only two particular hadrons. Therefore, in the effective quantum mechanics of two nucleons defined by the HAL QCD potential, the conserved current consists not only of the nucleon current but also of an extra current originating from the potential (the two-body current). Although the form of the two-body current is closely related to the potential, it is not straightforward to extract the former from the latter. In this work, we derive the current matrix element formula in the quantum mechanics defined by the HAL QCD potential. As a first step, we focus on the non-relativistic case. To give an explicit example, we consider a second-quantized non-relativistic two-channel coupling model, which we refer to as the original model. From the original model, the HAL QCD potential for the open channel is constructed by eliminating the closed channel in the elastic two-particle scattering region. The current matrix element formula is derived by demanding that the effective quantum mechanics defined by the HAL QCD potential respond to the external field in the same way as the original two-channel coupling model.
NASA Technical Reports Server (NTRS)
Stern, D. P.
1976-01-01
Several mathematical methods which are available for the description of magnetic fields in space are reviewed. Examples of the application of such methods are given, with particular emphasis on work related to the geomagnetic field, and their individual properties and associated problems are described. The methods are grouped in five main classes: (1) methods based on the current density, (2) methods using the scalar magnetic potential, (3) toroidal and poloidal components of the field and spherical vector harmonics, (4) Euler potentials, and (5) local expansions of the field near a given reference point. Special attention is devoted to models of the magnetosphere, to the uniqueness of the scalar potential as derived from observed data, and to the L parameter.
Simulation of Liquid Droplet in Air and on a Solid Surface
NASA Astrophysics Data System (ADS)
Launglucknavalai, Kevin
Although multiphase gas and liquid phenomena occur widely in engineering problems, many aspects of multiphase interaction, such as droplet dynamics, are still not quantified. This study aims to qualify the Lattice Boltzmann method (LBM) Interparticle Potential multiphase computational method in order to build a foundation for future multiphase research. The study consists of two overall sections. The first section, in Chapter 2, focuses on understanding the LBM and the Interparticle Potential model. It outlines the LBM and how it relates to macroscopic fluid dynamics. The standard form of the LBM is obtained, and the perturbation solution recovering the Navier-Stokes equations from the LBM equation is presented. Finally, the Interparticle Potential model is incorporated into the numerical LBM. The second section, in Chapter 3, presents verification and validation cases to confirm the behavior of the single-phase and multiphase LBM models. Experimental and analytical results are compared with numerical results where possible, using Poiseuille channel flow and flow over a cylinder. While presenting the numerical results, practical considerations such as converting LBM-scale variables to physical-scale variables are addressed. Multiphase results are verified using Laplace's law, and artificial behaviors of the model are explored. In this study, a better understanding of the LBM and the Interparticle Potential model is gained. This allows the numerical method to be used for comparison with experimental results in the future and provides a better understanding of multiphase physics overall.
Ohyu, Shigeharu; Okamoto, Yoshiwo; Kuriki, Shinya
2002-06-01
A novel magnetocardiographic inverse method for reconstructing the action potential amplitude (APA) and the activation time (AT) on the ventricular myocardium is proposed. This method is based on the propagated excitation model, in which the excitation is propagated through the ventricle with nonuniform height of action potential. An assumption of a stepwise waveform of the transmembrane potential was introduced in the model. The spatial gradient of the transmembrane potential, which is defined by APA and AT distributed in the ventricular wall, is used for the computation of a current source distribution. Based on this source model, the distributions of APA and AT are inversely reconstructed from the QRS interval of the magnetocardiogram (MCG) utilizing a maximum a posteriori approach. The proposed reconstruction method was tested through computer simulations. Stability of the method with respect to measurement noise was demonstrated. When the reference APA was provided as a uniform distribution, root-mean-square errors of the estimated APA were below 10 mV for MCG signal-to-noise ratios greater than or equal to 20 dB. Low-amplitude regions located at several sites in reference APA distributions were correctly reproduced in reconstructed APA distributions. The goal of our study is to develop a method for detecting myocardial ischemia through the depression of reconstructed APA distributions.
NASA Technical Reports Server (NTRS)
Kimmel, W. M.; Kuhn, N. S.; Berry, R. F.; Newman, J. A.
2001-01-01
An overview and status of current activities seeking alternatives to 200 grade 18Ni Steel CVM alloy for cryogenic wind tunnel models is presented. Specific improvements in material selection have been researched including availability, strength, fracture toughness and potential for use in transonic wind tunnel testing. Potential benefits from utilizing damage tolerant life-prediction methods, recently developed fatigue crack growth codes and upgraded NDE methods are also investigated. Two candidate alloys are identified and accepted for cryogenic/transonic wind tunnel models and hardware.
2014-02-01
Potential evapotranspiration (PET) is computed using the Thornthwaite Method from temperatures. Actual evapotranspiration (ET) and infiltration are computed from a water balance. A hydrology model was developed to estimate precipitation, rainfall, runoff, evapotranspiration, infiltration, and the number of days with rainfall.
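The Thornthwaite step can be sketched in Python. This is the standard unadjusted monthly formulation, assumed here; it is not the report's code, and it omits the day-length and latitude correction.

```python
# Sketch of the Thornthwaite method (standard unadjusted monthly form):
# monthly PET in mm from mean monthly air temperature in deg C.
# Requires at least one month with mean temperature above 0 deg C.
def thornthwaite_pet(monthly_temps_c):
    heat = [(t / 5.0) ** 1.514 for t in monthly_temps_c if t > 0.0]
    I = sum(heat)  # annual heat index
    a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
    # Months at or below freezing contribute zero PET.
    return [16.0 * (10.0 * t / I) ** a if t > 0.0 else 0.0
            for t in monthly_temps_c]

# Hypothetical mid-latitude monthly mean temperatures:
pet = thornthwaite_pet(
    [0.0, 2.0, 5.0, 10.0, 15.0, 20.0, 22.0, 21.0, 16.0, 10.0, 5.0, 1.0])
```

The returned values are unadjusted monthly PET in mm; operational use would scale each month by its day-length factor.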
Charge-transfer modified embedded atom method dynamic charge potential for Li-Co-O system
NASA Astrophysics Data System (ADS)
Kong, Fantai; Longo, Roberto C.; Liang, Chaoping; Nie, Yifan; Zheng, Yongping; Zhang, Chenxi; Cho, Kyeongjae
2017-11-01
To overcome the limitation of conventional fixed charge potential methods for the study of Li-ion battery cathode materials, a dynamic charge potential method, charge-transfer modified embedded atom method (CT-MEAM), has been developed and applied to the Li-Co-O ternary system. The accuracy of the potential has been tested and validated by reproducing a variety of structural and electrochemical properties of LiCoO2. A detailed analysis on the local charge distribution confirmed the capability of this potential for dynamic charge modeling. The transferability of the potential is also demonstrated by its reliability in describing Li-rich Li2CoO2 and Li-deficient LiCo2O4 compounds, including their phase stability, equilibrium volume, charge states and cathode voltages. These results demonstrate that the CT-MEAM dynamic charge potential could help to overcome the challenge of modeling complex ternary transition metal oxides. This work can promote molecular dynamics studies of Li ion cathode materials and other important transition metal oxides systems that involve complex electrochemical and catalytic reactions.
Charge-transfer modified embedded atom method dynamic charge potential for Li-Co-O system.
Kong, Fantai; Longo, Roberto C; Liang, Chaoping; Nie, Yifan; Zheng, Yongping; Zhang, Chenxi; Cho, Kyeongjae
2017-11-29
To overcome the limitation of conventional fixed charge potential methods for the study of Li-ion battery cathode materials, a dynamic charge potential method, charge-transfer modified embedded atom method (CT-MEAM), has been developed and applied to the Li-Co-O ternary system. The accuracy of the potential has been tested and validated by reproducing a variety of structural and electrochemical properties of LiCoO2. A detailed analysis on the local charge distribution confirmed the capability of this potential for dynamic charge modeling. The transferability of the potential is also demonstrated by its reliability in describing Li-rich Li2CoO2 and Li-deficient LiCo2O4 compounds, including their phase stability, equilibrium volume, charge states and cathode voltages. These results demonstrate that the CT-MEAM dynamic charge potential could help to overcome the challenge of modeling complex ternary transition metal oxides. This work can promote molecular dynamics studies of Li ion cathode materials and other important transition metal oxide systems that involve complex electrochemical and catalytic reactions.
Acoustic Treatment Design Scaling Methods. Phase 2
NASA Technical Reports Server (NTRS)
Clark, L. (Technical Monitor); Parrott, T. (Technical Monitor); Jones, M. (Technical Monitor); Kraft, R. E.; Yu, J.; Kwan, H. W.; Beer, B.; Seybert, A. F.; Tathavadekar, P.
2003-01-01
The ability to design, build and test miniaturized acoustic treatment panels on scale model fan rigs representative of full scale engines provides not only cost savings, but also an opportunity to optimize the treatment by allowing multiple tests. To use scale model treatment as a design tool, the impedance of the sub-scale liner must be known with confidence. This study was aimed at developing impedance measurement methods for high frequencies. A normal incidence impedance tube method that extends the upper frequency range to 25,000 Hz without grazing flow effects was evaluated. The free field method was investigated as a potential high frequency technique. The potential of the two-microphone in-situ impedance measurement method was evaluated in the presence of grazing flow. Difficulties in achieving the high frequency goals were encountered in all methods. Results of developing a time-domain finite difference resonator impedance model indicated that a re-interpretation of the empirical fluid mechanical models used in the frequency domain model for nonlinear resistance and mass reactance may be required. A scale model treatment design that could be tested on the Universal Propulsion Simulator vehicle was proposed.
Linda B. Brubaker; Philip E. Higuera; T. Scott Rupp; Mark A. Olson; Patricia M. Anderson; Feng Sheng. Hu
2009-01-01
Interactions between vegetation and fire have the potential to overshadow direct effects of climate change on fire regimes in boreal forests of North America. We develop methods to compare sediment-charcoal records with fire regimes simulated by an ecological model, ALFRESCO (Alaskan Frame-based Ecosystem Code), and apply these methods to evaluate potential causes of a...
Louis R. Iverson; Anantha M. Prasad; Stephen N. Matthews; Matthew P. Peters
2010-01-01
Climate change will likely cause impacts that are species specific and significant; modeling is critical to better understand potential changes in suitable habitat. We use empirical, abundance-based habitat models utilizing decision tree-based ensemble methods to explore potential changes of 134 tree species habitats in the eastern United States (http://www.nrs.fs.fed....
OWL: A code for the two-center shell model with spherical Woods-Saxon potentials
NASA Astrophysics Data System (ADS)
Diaz-Torres, Alexis
2018-03-01
A Fortran-90 code for solving the two-center nuclear shell model problem is presented. The model is based on two spherical Woods-Saxon potentials and the potential separable expansion method. It describes the single-particle motion in low-energy nuclear collisions, and is useful for characterizing a broad range of phenomena from fusion to nuclear molecular structures.
An entropy correction method for unsteady full potential flows with strong shocks
NASA Technical Reports Server (NTRS)
Whitlow, W., Jr.; Hafez, M. M.; Osher, S. J.
1986-01-01
An entropy correction method for the unsteady full potential equation is presented. The unsteady potential equation is modified to account for entropy jumps across shock waves. The conservative form of the modified equation is solved in generalized coordinates using an implicit, approximate factorization method. A flux-biasing differencing method, which generates the proper amounts of artificial viscosity in supersonic regions, is used to discretize the flow equations in space. Comparisons between the present method and solutions of the Euler equations and between the present method and experimental data are presented. The comparisons show that the present method more accurately models solutions of the Euler equations and experiment than does the isentropic potential formulation.
Fast auto-focus scheme based on optical defocus fitting model
NASA Astrophysics Data System (ADS)
Wang, Yeru; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting; Cen, Min
2018-04-01
An optical defocus fitting model-based (ODFM) auto-focus scheme is proposed. Based on basic optical defocus principles, the optical defocus fitting model is derived to approximate the potential-focus position. With this accurate modeling, the proposed auto-focus scheme can make the stepping motor approach the focal plane more accurately and rapidly. Two fitting positions are first determined for an arbitrary initial stepping-motor position. Three images (the initial image and two fitting images) at these positions are then collected to estimate the potential-focus position based on the proposed ODFM method. Around the estimated potential-focus position, two reference images are recorded. The auto-focus procedure is then completed by processing these two reference images and the potential-focus image to confirm the in-focus position using a contrast-based method. Experimental results show that the proposed scheme can complete auto-focus within only 5 to 7 steps, with good performance even under low-light conditions.
NASA Astrophysics Data System (ADS)
Jougnot, D.; Roubinet, D.; Linde, N.; Irving, J.
2016-12-01
Quantifying fluid flow in fractured media is a critical challenge in a wide variety of research fields and applications. To this end, geophysics offers a variety of tools that can provide important information on subsurface physical properties in a noninvasive manner. Most geophysical techniques infer fluid flow by data or model differencing in time or space (i.e., they are not directly sensitive to flow occurring at the time of the measurements). An exception is the self-potential (SP) method. When water flows in the subsurface, an excess of charge in the pore water that counterbalances electric charges at the mineral-pore water interface gives rise to a streaming current and an associated streaming potential. The latter can be measured with the SP technique, meaning that the method is directly sensitive to fluid flow. Whereas numerous field experiments suggest that the SP method may allow for the detection of hydraulically active fractures, suitable tools for numerically modeling streaming potentials in fractured media do not exist. Here, we present a highly efficient two-dimensional discrete-dual-porosity approach for solving the fluid-flow and associated self-potential problems in fractured domains. Our approach is specifically designed for complex fracture networks that cannot be investigated using standard numerical methods due to computational limitations. We then simulate SP signals associated with pumping conditions for a number of examples to show that (i) accounting for matrix fluid flow is essential for accurate SP modeling and (ii) the sensitivity of SP to hydraulically active fractures is intimately linked with fracture-matrix fluid interactions. This implies that fractures associated with strong SP amplitudes are likely to be hydraulically conductive, attracting fluid flow from the surrounding matrix.
Kwasniok, Frank; Lohmann, Gerrit
2009-12-01
A method for systematically deriving simple nonlinear dynamical models from ice-core data is proposed. It offers a tool to integrate models and theories with paleoclimatic data. The method is based on the unscented Kalman filter, a nonlinear extension of the conventional Kalman filter. Here, we adopt the abstract conceptual model of stochastically driven motion in a potential that allows for two distinctly different states. The parameters of the model, namely the shape of the potential and the noise level, are estimated from a North Greenland ice-core record. For the glacial period from 70 to 20 ky before present, a potential is derived that is asymmetric and almost degenerate. There is a deep well corresponding to a cold stadial state and a very shallow well corresponding to a warm interstadial state.
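The conceptual model of stochastically driven motion in a double-well potential can be sketched as follows. This is an illustrative Euler-Maruyama integration; the potential coefficients and noise level are placeholders, not the values estimated in the paper.

```python
# Overdamped motion in a polynomial potential driven by white noise:
#   dx = -U'(x) dt + sigma dW,  U(x) = a1*x + a2*x^2 + a3*x^3 + a4*x^4.
# A quartic with two wells gives the two-state (stadial/interstadial) picture.
import random

def simulate(a, sigma, x0=0.0, dt=1e-3, n_steps=10000, seed=1):
    a1, a2, a3, a4 = a
    rng = random.Random(seed)  # fixed seed for reproducibility
    x = x0
    path = [x]
    for _ in range(n_steps):
        drift = -(a1 + 2*a2*x + 3*a3*x**2 + 4*a4*x**3)  # -U'(x)
        x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Symmetric double well U = x^4 - 2*x^2, with minima near x = +/- 1.
path = simulate((0.0, -2.0, 0.0, 1.0), sigma=0.3)
```

An asymmetric, almost degenerate potential of the kind found in the paper would use nonzero odd coefficients.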
Estimation of zeta potential of electroosmotic flow in a microchannel using a reduced-order model.
Park, H M; Hong, S M; Lee, J S
2007-10-01
A reduced-order model is derived for electroosmotic flow in a microchannel of nonuniform cross section using the Karhunen-Loève Galerkin (KLG) procedure. The resulting reduced-order model is shown to predict electroosmotic flows accurately with minimal consumption of computer time over a wide range of the zeta potential ζ and the dielectric constant ε. Using the reduced-order model, a practical method is devised to estimate ζ from velocity measurements of the electroosmotic flow in the microchannel. The proposed method is found to estimate ζ with reasonable accuracy even with noisy velocity measurements.
Simulation of electric double-layer capacitors: evaluation of constant potential method
NASA Astrophysics Data System (ADS)
Wang, Zhenxing; Laird, Brian; Yang, Yang; Olmsted, David; Asta, Mark
2014-03-01
Atomistic simulations can play an important role in understanding electric double-layer capacitors (EDLCs) at a molecular level. In such simulations, the electrode surface is typically modeled using fixed surface charges, which ignores the charge fluctuation induced by local fluctuations in the electrolyte solution. In this work we evaluate an explicit treatment of charges, namely the constant potential method (CPM) [1], in which the electrode charges are dynamically updated to maintain a constant electrode potential. We employ a model system with a graphite electrode and a LiClO4/acetonitrile electrolyte, examined as a function of the electrode potential difference. Using various molecular and macroscopic properties as metrics, we compare CPM simulations of this system to results using fixed surface charges. Specifically, results for the predicted capacity, electric potential gradient, and solvent density profile are identical between the two methods; however, the ion density profiles and solvation structure differ significantly.
A potential method for lift evaluation from velocity field data
NASA Astrophysics Data System (ADS)
de Guyon-Crozier, Guillaume; Mulleners, Karen
2017-11-01
Computing forces from velocity field measurements is one of the challenges in experimental aerodynamics. This work focuses on low Reynolds flows, where the dynamics of the leading and trailing edge vortices play a major role in lift production. Recent developments in 2D potential flow theory, using discrete vortex models, have shown good results for unsteady wing motions. A method is presented to calculate lift from experimental velocity field data using a discrete vortex potential flow model. The model continuously adds new point vortices at leading and trailing edges whose circulations are set directly from vorticity measurements. Forces are computed using the unsteady Blasius equation and compared with measured loads.
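The core of such a discrete vortex model is the velocity induced by the point vortices. A minimal sketch follows (illustrative only; the unsteady Blasius force computation used by the authors is omitted).

```python
# Complex conjugate velocity induced at z by point vortices of circulation
# Gamma_k at positions z_k (2D potential flow):
#   u - i v = sum_k  -i Gamma_k / (2 pi (z - z_k))
import math

def induced_velocity(z, vortices):
    """vortices: list of (z_k, Gamma_k) pairs; returns (u, v) at point z."""
    w = sum(-1j * g / (2.0 * math.pi * (z - zk)) for zk, g in vortices)
    return w.real, -w.imag  # w is the conjugate velocity u - i v

# Single counterclockwise vortex of circulation 2*pi at the origin:
# at z = 1 the induced velocity has unit magnitude, directed along +y.
u, v = induced_velocity(1.0 + 0.0j, [(0.0 + 0.0j, 2.0 * math.pi)])
```

In the method of the abstract, new vortices would be appended at the leading and trailing edges each time step, with circulations set from measured vorticity.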
White, Alec F.; Head-Gordon, Martin; McCurdy, C. William
2017-01-30
The computation of Siegert energies by analytic continuation of bound state energies has recently been applied to shape resonances in polyatomic molecules by several authors. Here, we critically evaluate a recently proposed analytic continuation method based on low order (type III) Padé approximants as well as an analytic continuation method based on high order (type II) Padé approximants. We compare three classes of stabilizing potentials: Coulomb potentials, Gaussian potentials, and attenuated Coulomb potentials. These methods are applied to a model potential where the correct answer is known exactly and to the 2Πg shape resonance of N2−, which has been studied extensively by other methods. Both the choice of stabilizing potential and the method of analytic continuation prove to be important to the accuracy of the results. We conclude that an attenuated Coulomb potential is the most effective of the three for bound state analytic continuation methods. With the proper potential, such methods show promise for algorithmic determination of the positions and widths of molecular shape resonances.
NASA Astrophysics Data System (ADS)
Matsumoto, Kensaku; Okada, Takashi; Takeuchi, Atsuo; Yazawa, Masato; Uchibori, Sumio; Shimizu, Yoshihiko
A field measurement using the self-potential method with copper sulfate electrodes was performed at the base of a riverbank on the WATARASE River, where leakage is a known problem, to examine the leakage characteristics. The measured profiles showed the typical S-shape that indicates the presence of flowing groundwater. The results agreed with measurements by the Ministry of Land, Infrastructure and Transport with good accuracy. Results of 1 m depth ground-temperature detection and Chain-Array detection showed good agreement with the self-potential results. The correlation between self-potential value and groundwater velocity was examined in a model experiment, and a clear correlation was found. These results indicate that the self-potential method is an effective means of examining the groundwater characteristics at the base of a riverbank in leakage problems.
IRT Model Selection Methods for Dichotomous Items
ERIC Educational Resources Information Center
Kang, Taehoon; Cohen, Allan S.
2007-01-01
Fit of the model to the data is important if the benefits of item response theory (IRT) are to be obtained. In this study, the authors compared model selection results using the likelihood ratio test, two information-based criteria, and two Bayesian methods. An example illustrated the potential for inconsistency in model selection depending on…
Numerical human models for accident research and safety - potentials and limitations.
Praxl, Norbert; Adamec, Jiri; Muggenthaler, Holger; von Merten, Katja
2008-01-01
The method of numerical simulation is frequently used in the area of automotive safety. Recently, numerical models of the human body have been developed for the numerical simulation of occupants. Different approaches in modelling the human body have been used: the finite-element and the multibody technique. Numerical human models representing the two modelling approaches are introduced and the potentials and limitations of these models are discussed.
Johnson
1999-01-01
The electrokinetic behavior of granular quartz sand in aqueous solution is investigated by both microelectrophoresis and streaming potential methods. zeta potentials of surfaces composed of granular quartz obtained via streaming potential methods are compared to electrophoretic mobility zeta potential values of colloid-sized quartz fragments. The zeta values generated by these alternate methods are in close agreement over a wide pH range and electrolyte concentrations spanning several orders of magnitude. Streaming measurements performed on chemically heterogeneous mixtures of physically homogeneous sand are shown to obey a simple mixing model based on the surface area-weighted average of the streaming potentials associated with the individual end members. These experimental results support the applicability of the streaming potential method as a means of determining the zeta potential of granular porous media surfaces. Copyright 1999 Academic Press.
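The surface-area-weighted mixing model described above can be sketched directly. This is an assumed form for illustration; the fractions and end-member values below are hypothetical, not measurements from the study.

```python
# Mixing model for chemically heterogeneous granular media: the streaming
# potential (or zeta potential) of the mixture is the surface-area-weighted
# average of the end-member values.
def mixed_potential(fractions, end_members):
    """fractions: surface-area fractions summing to 1; end_members: mV."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(f * z for f, z in zip(fractions, end_members))

# Hypothetical 70/30 mixture of clean quartz and a coated sand (mV):
zeta_mix = mixed_potential([0.7, 0.3], [-45.0, 10.0])
```

The linearity in area fraction is what makes the model testable against streaming measurements on prepared mixtures.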
Assessment of risk due to the use of carbon fiber composites in commercial and general aviation
NASA Technical Reports Server (NTRS)
Fiksel, J.; Rosenfield, D.; Kalelkar, A.
1980-01-01
The development of a national risk profile for the total annual aircraft losses due to carbon fiber composite (CFC) usage through 1993 is discussed. The profile was developed using separate simulation methods for commercial and general aviation aircraft. A Monte Carlo method which was used to assess the risk in commercial aircraft is described. The method projects the potential usage of CFC through 1993, investigates the incidence of commercial aircraft fires, models the potential release and dispersion of carbon fibers from a fire, and estimates potential economic losses due to CFC damaging electronic equipment. The simulation model for the general aviation aircraft is described. The model emphasizes variations in facility locations and release conditions, estimates distribution of CFC released in general aviation aircraft accidents, and tabulates the failure probabilities and aggregate economic losses in the accidents.
Minimizing Higgs potentials via numerical polynomial homotopy continuation
NASA Astrophysics Data System (ADS)
Maniatis, M.; Mehta, D.
2012-08-01
The study of models with extended Higgs sectors requires minimizing the corresponding Higgs potentials, which is in general very difficult. Here, we apply a recently developed method, called numerical polynomial homotopy continuation (NPHC), which is guaranteed to find all the stationary points of Higgs potentials with polynomial-like non-linearity. The detection of all stationary points reveals the structure of the potential, with maxima, metastable minima, and saddle points besides the global minimum. We apply the NPHC method to the most general Higgs potential having two complex Higgs-boson doublets and up to five real Higgs-boson singlets; moreover, the method is applicable to even more involved potentials. Hence the NPHC method allows one to go far beyond the limits of the Gröbner basis approach.
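The idea of locating all stationary points of a polynomial potential can be illustrated in one dimension, where ordinary polynomial root-finding already suffices (NPHC itself handles multivariate systems of this kind; this sketch is not NPHC).

```python
# For a 1D polynomial potential V(x), every stationary point is a root of
# V'(x) = 0, and a polynomial root-finder recovers all of them at once.
import numpy as np

def stationary_points(coeffs):
    """coeffs: highest-degree-first coefficients of V(x).
    Returns all real roots of V'(x), i.e. all stationary points."""
    dV = np.polyder(np.poly1d(coeffs))
    return sorted(r.real for r in dV.roots if abs(r.imag) < 1e-9)

# V(x) = x^4 - 2*x^2: two minima and one maximum.
pts = stationary_points([1, 0, -2, 0, 0])
```

Classifying each point (minimum, maximum, saddle in higher dimensions) then follows from the second derivative, mirroring how NPHC reveals the full structure of a Higgs potential.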
Vectorized Jiles-Atherton hysteresis model
NASA Astrophysics Data System (ADS)
Szymański, Grzegorz; Waszak, Michał
2004-01-01
This paper deals with vector hysteresis modeling. A vector model consisting of individual Jiles-Atherton components placed along the principal axes is proposed; cross-axis coupling ensures general vector model properties. Minor loops are obtained using a scaling method. The model is intended for efficient finite element method computations defined in terms of the magnetic vector potential, with numerical efficiency ensured by a differential susceptibility approach.
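A single scalar Jiles-Atherton component, of the kind stacked along the principal axes in the vector model, can be sketched as follows. These are the standard textbook J-A equations with an explicit Euler sweep; the parameter values are illustrative, not taken from the paper.

```python
# Scalar Jiles-Atherton hysteresis:
#   M_an = Ms * L(He / a),  L(x) = coth(x) - 1/x,  He = H + alpha * M_irr
#   dM_irr/dH = (M_an - M_irr) / (delta*k - alpha*(M_an - M_irr))
#   M = c * M_an + (1 - c) * M_irr
import math

def langevin(x):
    """L(x) = coth(x) - 1/x, with the series limit near zero."""
    if abs(x) < 1e-4:
        return x / 3.0
    return 1.0 / math.tanh(x) - 1.0 / x

def ja_sweep(h_path, m0=0.0, Ms=1.7e6, a=1000.0, alpha=1e-3, k=2000.0, c=0.1):
    """Explicit-Euler sweep along h_path (A/m); returns (M list, final M_irr)."""
    m_irr = m0
    out = []
    for h0, h1 in zip(h_path, h_path[1:]):
        dh = h1 - h0
        delta = 1.0 if dh >= 0.0 else -1.0   # sign of dH
        he = h1 + alpha * m_irr              # effective field
        m_an = Ms * langevin(he / a)         # anhysteretic magnetization
        diff = m_an - m_irr
        if delta * diff > 0.0:               # clamp: no negative susceptibility
            m_irr += dh * diff / (delta * k - alpha * diff)
        out.append(c * m_an + (1.0 - c) * m_irr)
    return out, m_irr

# Magnetize 0 -> 5000 A/m, then sweep back to 0 to expose hysteresis.
up = [10.0 * i for i in range(501)]
m_up, state = ja_sweep(up)
m_down, _ = ja_sweep(list(reversed(up)), m0=state)
```

The vector model of the paper couples several such components across axes; the scaling method for minor loops is likewise omitted here.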
Swerts, Ben; Chibotaru, Liviu F; Lindh, Roland; Seijo, Luis; Barandiaran, Zoila; Clima, Sergiu; Pierloot, Kristin; Hendrickx, Marc F A
2008-04-01
In this article, we present a fragment model potential approach for the description of the crystalline environment as an extension of the use of embedding ab initio model potentials (AIMPs). The biggest limitation of the embedding AIMP method is the spherical nature of its model potentials. This poses problems as soon as the method is applied to crystals containing strongly covalently bonded structures with highly nonspherical electron densities. The newly proposed method addresses this problem by keeping the full electron density as its model potential, thus allowing one to group sets of covalently bonded atoms into fragments. The implementation in the MOLCAS 7.0 quantum chemistry package of the new method, which we call the embedding fragment ab initio model potential method (embedding FAIMP), is reported here, together with results of CASSCF/CASPT2 calculations. The developed methodology is applied to two test problems: (i) the investigation of the lowest ligand field states (2)A1 and (2)B1 of the Cr(V) defect in the YVO4 crystal and (ii) the investigation of the lowest ligand field and ligand-metal charge transfer (LMCT) states at the Mn(II) substitutional impurity doped into CaCO3. Comparison with similar calculations involving AIMPs for all environmental atoms, including those from covalently bonded units, shows that the FAIMP treatment of the YVO4 units surrounding the CrO4(3-) cluster increases the excitation energy (2)B1 → (2)A1 by ca. 1000 cm(-1) at the CASSCF level of calculation. In the case of the Mn(CO3)6(10-) cluster, the FAIMP treatment of the CO3(2-) units of the environment gives smaller corrections, of ca. 100 cm(-1), for the ligand-field excitation energies, which is explained by the larger ligands of this cluster. However, the correction for the energy of the lowest LMCT transition is found to be ca. 600 cm(-1) for the CASSCF and ca. 1300 cm(-1) for the CASPT2 calculation.
Electric potential calculation in molecular simulation of electric double layer capacitors
NASA Astrophysics Data System (ADS)
Wang, Zhenxing; Olmsted, David L.; Asta, Mark; Laird, Brian B.
2016-11-01
For the molecular simulation of electric double layer capacitors (EDLCs), a number of methods have been proposed and implemented to determine the one-dimensional electric potential profile between the two electrodes at a fixed potential difference. In this work, we compare several of these methods for a model LiClO4-acetonitrile/graphite EDLC simulated using both the traditional fixed-charge method (FCM), in which a fixed charge is assigned a priori to the electrode atoms, and the recently developed constant potential method (CPM) (2007 J. Chem. Phys. 126 084704), where the electrode charges are allowed to fluctuate to keep the potential fixed. Based on an analysis of the full three-dimensional electric potential field, we suggest a method for determining the averaged one-dimensional electric potential profile that can be applied to both the FCM and CPM simulations. Compared to traditional methods based on numerically solving the one-dimensional Poisson's equation, this method yields better accuracy and requires no supplemental assumptions.
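The traditional one-dimensional Poisson route mentioned above can be sketched as a double integration of the planar-averaged charge density. This is a generic textbook scheme, not the authors' averaging method; the grid, density, and boundary values are illustrative.

```python
# Solve phi''(z) = -rho(z)/eps0 by two trapezoidal integrations, given the
# planar-averaged charge density rho(z) between the electrodes (SI units).
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def potential_profile(z, rho, phi0=0.0, dphi0=0.0):
    """phi0, dphi0: potential and its derivative at z[0]."""
    integrand = -np.asarray(rho, dtype=float) / EPS0
    dz = np.diff(z)
    # First integration gives phi'(z), second gives phi(z).
    dphi = dphi0 + np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dz)))
    phi = phi0 + np.concatenate(
        ([0.0], np.cumsum(0.5 * (dphi[1:] + dphi[:-1]) * dz)))
    return phi

# Sanity check with a known answer: rho = -2*eps0 gives phi(z) = z**2.
z = np.linspace(0.0, 1.0, 101)
phi = potential_profile(z, -2.0 * EPS0 * np.ones_like(z))
```

In a real EDLC analysis, rho would come from binning the simulation's charges over planes parallel to the electrodes.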
Full-Potential Modeling of Blade-Vortex Interactions
1997-12-01
modeled by any arbitrary distribution. Stremel (ref. 23) uses a method in which the vortex is modeled with an area-weighted distribution of vorticity. [Truncated reference snippets: "A... Helicopter Rotor. Ph.D. Thesis, Stanford Univ., 1978"; "23. Stremel, P. M.: Computational Methods for Non-Planar Vortex Wake Flow Fields. M.S. Thesis"]
A Note on Comparing Examinee Classification Methods for Cognitive Diagnosis Models
ERIC Educational Resources Information Center
Huebner, Alan; Wang, Chun
2011-01-01
Cognitive diagnosis models have received much attention in the recent psychometric literature because of their potential to provide examinees with information regarding multiple fine-grained discretely defined skills, or attributes. This article discusses the issue of methods of examinee classification for cognitive diagnosis models, which are…
The analytical transfer matrix method for PT-symmetric complex potential
NASA Astrophysics Data System (ADS)
Naceri, Leila; Hammou, Amine B.
2017-07-01
We have extended the analytical transfer matrix (ATM) method to solve quantum mechanical bound state problems with complex PT-symmetric potentials. Our work focuses on a class of models studied by Bender and Jones; we calculate the energy eigenvalues, discuss the critical values of g, and compare the results with those obtained from other methods, such as exact numerical computation and the WKB approximation method.
Model Predictive Control of LCL Three-level Photovoltaic Grid-connected Inverter
NASA Astrophysics Data System (ADS)
Liang, Cheng; Tian, Engang; Pang, Baobing; Li, Juan; Yang, Yang
2018-05-01
In this paper, the neutral point clamped three-level inverter circuit is analyzed to establish a mathematical model of the three-level inverter in the αβ coordinate system. The causes and harms of the midpoint potential imbalance problem are described. The paper uses model predictive control to control the entire inverter circuit [1]. The simulation model of the inverter system is built in Matlab/Simulink software. By changing the weight coefficients in the cost function, the grid-connected current can be controlled, the midpoint-potential unbalance suppressed, and the switching frequency reduced. The superiority of model predictive control as a control method for the inverter system is verified.
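The finite-set predictive loop described here, predicting the effect of each admissible switching state and scoring it with a cost that weights current tracking against midpoint-potential imbalance, can be caricatured in a few lines. This is a hedged sketch, not the paper's model: the one-step Euler current prediction, the candidate table, and the weight w_n are illustrative assumptions.

```python
def choose_switching_state(i_now, i_ref, candidates, vn_imbalance, dt, L, w_n):
    """One-step finite-set MPC: for each candidate switching state,
    predict the grid current with a forward-Euler step, add a weighted
    penalty on the predicted midpoint-potential imbalance, and return
    the state minimizing the cost."""
    best_state, best_cost = None, float("inf")
    for state, (v_out, d_vn) in candidates.items():
        i_pred = i_now + (dt / L) * v_out              # Euler current prediction
        cost = abs(i_ref - i_pred) + w_n * abs(vn_imbalance + d_vn)
        if cost < best_cost:
            best_state, best_cost = state, cost
    return best_state
```

Raising w_n trades current-tracking accuracy for tighter midpoint-potential balancing, which is the weight-coefficient tuning the abstract refers to.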
A Comparative Study of High and Low Fidelity Fan Models for Turbofan Engine System Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Afjeh, Abdollah A.
1991-01-01
In this paper, a heterogeneous propulsion system simulation method is presented. The method is based on the formulation of a cycle model of a gas turbine engine. The model includes the nonlinear characteristics of the engine components via use of empirical data. The potential to simulate the entire engine operation on a computer without the aid of data is demonstrated by numerically generating "performance maps" for a fan component using two flow models of varying fidelity. The suitability of the fan models was evaluated by comparing the computed performance with experimental data. A discussion of the potential benefits and/or difficulties in connecting simulation solutions of differing fidelity is given.
A Stellar Dynamical Black Hole Mass for the Reverberation Mapped AGN NGC 5273
NASA Astrophysics Data System (ADS)
Batiste, Merida; Bentz, Misty C.; Valluri, Monica; Onken, Christopher A.
2018-01-01
We present preliminary results from stellar dynamical modeling of the mass of the central super-massive black hole (MBH) in the active galaxy NGC 5273. NGC 5273 is one of the few AGN with a secure MBH measurement from reverberation-mapping that is also nearby enough to measure MBH with stellar dynamical modeling. Dynamical modeling and reverberation-mapping are the two most heavily favored methods of direct MBH determination in the literature; however, the specific limitations of each method mean that there are very few galaxies for which both can be used. To date, only two such galaxies, NGC 3227 and NGC 4151, have MBH determinations from both methods. Given this small sample size, it is not yet clear that the two methods give consistent results. Moreover, given the inherent uncertainties and potential systematic biases in each method, it is likewise unclear whether one method should be preferred over the other. This study is part of an ongoing project to increase the sample of galaxies with secure MBH measurements from both methods, so that a direct comparison may be made. NGC 5273 provides a particularly valuable comparison because it is free of kinematic substructure (e.g. the presence of a bar, as is the case for NGC 4151) which can complicate and potentially bias results from stellar dynamical modeling. I will discuss our current results as well as the advantages and limitations of each method, and the potential sources of systematic bias that may affect comparison between results.
Zaretzki, Jed; Bergeron, Charles; Rydberg, Patrik; Huang, Tao-wei; Bennett, Kristin P; Breneman, Curt M
2011-07-25
This article describes RegioSelectivity-Predictor (RS-Predictor), a new in silico method for generating predictive models of P450-mediated metabolism for drug-like compounds. Within this method, potential sites of metabolism (SOMs) are represented as "metabolophores": a concept that describes the hierarchical combination of topological and quantum chemical descriptors needed to represent the reactivity of potential metabolic reaction sites. RS-Predictor modeling involves the use of metabolophore descriptors together with multiple-instance ranking (MIRank) to generate an optimized descriptor weight vector that encodes regioselectivity trends across all cases in a training set. The resulting pathway-independent (O-dealkylation vs N-oxidation vs Csp(3) hydroxylation, etc.), isozyme-specific regioselectivity model may be used to predict potential metabolic liabilities. In the present work, cross-validated RS-Predictor models were generated for a set of 394 substrates of CYP 3A4 as a proof-of-principle for the method. Rank aggregation was then employed to merge independently generated predictions for each substrate into a single consensus prediction. The resulting consensus RS-Predictor models were shown to reliably identify at least one observed site of metabolism in the top two rank-positions for 78% of the substrates. Comparisons between RS-Predictor and previously described regioselectivity prediction methods reveal new insights into how in silico metabolite prediction methods should be compared.
Data Assimilation in the Solar Wind: Challenges and First Results
NASA Astrophysics Data System (ADS)
Lang, Matthew; Browne, Phil; van Leeuwen, Peter Jan; Owens, Matt
2017-04-01
Data assimilation (DA) is currently underused in the solar wind field as a means of improving modelled variables using observations. DA has been used in Numerical Weather Prediction (NWP) models with great success, where improvements in DA methods have led to improvements in forecasting skill over the past 20-30 years. The state-of-the-art DA methods developed for NWP modelling have never been applied to space weather models, so it is important to bring the gains available from these methods to bear on our understanding of the solar wind and how to model it. The ENLIL solar wind model has been coupled to the EMPIRE data assimilation library in order to apply these advanced data assimilation methods to a space weather model. This coupling allows multiple data assimilation methods to be applied to ENLIL with relative ease. I shall discuss twin experiments that have been undertaken, applying the LETKF to the ENLIL model when a CME occurs in the observations and when it does not. These experiments show that there is potential in the application of advanced data assimilation methods to the solar wind field; however, there is still a long way to go before they can be applied effectively. I shall discuss these issues and suggest potential avenues for future research in this area.
Tensor renormalization group methods for spin and gauge models
NASA Astrophysics Data System (ADS)
Zou, Haiyuan
The analysis of the error of a perturbative series, by comparing it to the exact solution, is an important tool for understanding the non-perturbative physics of statistical models. For some toy models, a new method can be used to calculate higher-order weak coupling expansions, and a modified perturbation theory can be constructed. However, it is nontrivial to generalize the new method to understand the critical behavior of high-dimensional spin and gauge models. Indeed, it is a big challenge in both high energy physics and condensed matter physics to develop accurate and efficient numerical algorithms to solve these problems. In this thesis, one systematic approach, the tensor renormalization group method, is discussed. The applications of the method to several spin and gauge models on a lattice are investigated. Theoretically, the new method allows one to write an exact representation of the partition function of models with local interactions, e.g., O(N) models, Z2 gauge models, and U(1) gauge models. Practically, by using controllable approximations, results in both finite volume and the thermodynamic limit can be obtained. Another advantage of the new method is that it is insensitive to sign problems for models with complex coupling and chemical potential. Through the new approach, the Fisher zeros of the 2D O(2) model in the complex coupling plane can be calculated, and the finite-size scaling of the results agrees well with the Kosterlitz-Thouless assumption. Applying the method to the O(2) model with a chemical potential, a new phase diagram of the model can be obtained. The structure of the tensor language may provide a new tool to understand phase transition properties in general.
An assessment of two methods for identifying undocumented levees using remotely sensed data
Czuba, Christiana R.; Williams, Byron K.; Westman, Jack; LeClaire, Keith
2015-01-01
Many undocumented and commonly unmaintained levees exist in the landscape, complicating flood forecasting, risk management, and emergency response. This report describes a pilot study completed by the U.S. Geological Survey in cooperation with the U.S. Army Corps of Engineers to assess two methods of identifying undocumented levees by using remotely sensed, high-resolution topographic data. For the first method, the U.S. Army Corps of Engineers examined hillshades computed from a digital elevation model that was derived from light detection and ranging (lidar) to visually identify potential levees and then used detailed site visits to assess the validity of the identifications. For the second method, the U.S. Geological Survey applied a wavelet transform to a lidar-derived digital elevation model to identify potential levees. The hillshade method was applied to Delano, Minnesota, and the wavelet-transform method was applied to Delano and Springfield, Minnesota. Both methods were successful in identifying levees but also identified other features, such as constructed barriers, high banks, and bluffs, that required interpretation to differentiate from levees. The methods are complementary to each other, and a potential conjunctive method for testing in the future includes (1) use of the wavelet-transform method to rapidly identify slope-break features in high-resolution topographic data, (2) further examination of topographic data using hillshades and aerial photographs to classify features and map potential levees, and (3) a verification check of each identified potential levee with local officials and field visits.
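The wavelet-transform idea, that a narrow ridge in a lidar elevation profile produces a strong response at a matching scale, can be illustrated with a 1D Ricker ("Mexican hat") filter. This is a toy sketch on a single profile; the report's actual scales, thresholds, and 2D processing are not reproduced here.

```python
import math

def ricker(k, a):
    """Ricker (Mexican-hat) wavelet sampled at integer offset k, scale a."""
    return (1.0 - (k / a) ** 2) * math.exp(-k * k / (2.0 * a * a))

def wavelet_response(elev, a=3.0, half_width=10):
    """Slide the wavelet along an elevation profile; ridge-like
    slope-break features (candidate levees) of width comparable to
    the scale a produce peaks in the response."""
    out = []
    for i in range(len(elev)):
        s = 0.0
        for k in range(-half_width, half_width + 1):
            j = i + k
            if 0 <= j < len(elev):
                s += elev[j] * ricker(k, a)
        out.append(s)
    return out
```

As the report notes, response peaks still need interpretation: banks, bluffs, and other constructed barriers trigger the same signature as levees.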
Optimized Free Energies from Bidirectional Single-Molecule Force Spectroscopy
NASA Astrophysics Data System (ADS)
Minh, David D. L.; Adib, Artur B.
2008-05-01
An optimized method for estimating path-ensemble averages using data from processes driven in opposite directions is presented. Based on this estimator, bidirectional expressions for reconstructing free energies and potentials of mean force from single-molecule force spectroscopy—valid for biasing potentials of arbitrary stiffness—are developed. Numerical simulations on a model potential indicate that these methods perform better than unidirectional strategies.
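For context, the simplest bidirectional free-energy estimator in this family is Bennett's acceptance ratio, which combines forward and reverse work values; the sketch below solves its implicit equation by bisection for equal sample sizes. This is the generic textbook estimator, not the optimized path-ensemble estimator of Minh and Adib, and the bracketing interval is an assumption.

```python
import math

def bar_free_energy(w_fwd, w_rev, beta=1.0, tol=1e-10):
    """Bennett acceptance ratio (equal sample sizes): find dF solving
    sum_F f(beta*(W - dF)) = sum_R f(beta*(W + dF)),
    where f(x) = 1/(1 + exp(x)) and w_rev holds reverse-direction work."""
    def fermi(x):
        return 1.0 / (1.0 + math.exp(x))

    def imbalance(dF):
        # Monotonically increasing in dF, so bisection is safe.
        return (sum(fermi(beta * (w - dF)) for w in w_fwd)
                - sum(fermi(beta * (w + dF)) for w in w_rev))

    lo, hi = -100.0, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if imbalance(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For symmetric dissipation the estimate lands midway between the forward work and the negated reverse work, consistent with the Crooks relation the estimator is built on.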
NASA Astrophysics Data System (ADS)
Ghiglieri, Giorgio; Barbieri, Giulio; Vernier, Antonio; Carletti, Alberto; Demurtas, Nicola; Pinna, Rosanna; Pittalis, Daniele
2009-12-01
The paper describes a methodological and innovative approach that aims to evaluate the potential risk of nitrate pollution in aquifers from agricultural practices by combining intrinsic aquifer vulnerability to contamination, according to the SINTACS R5 method, with agricultural nitrate hazard assessment, according to the IPNOA index. The proposed parametric model adopts a geographically based integrated evaluation system, comprising qualitative and semi-quantitative indicators. In some cases, the authors have modified this model, revising and adjusting the scores and weights of the parameters to account for the different environmental conditions, and calibrating accordingly. The method has been successfully implemented and validated in the pilot area of the Alghero coastal plain (northwestern Sardinia, Italy), where aquifers with high productivity are present. The classes with a major score (high potential risk) are in the central part of the plain, in correspondence with the most productive aquifers, where most actual or potential pollution sources are concentrated. These are mainly represented by intensive agricultural activities, by industrial agglomerates and by diffuse urbanisation. For calibrating the model and optimizing and/or weighting the examined factors, the modelling results were validated by comparison with groundwater quality data, in particular nitrate content, and with census data on potential pollution sources. The parametric method is a popular approach to groundwater vulnerability assessment, in contrast to groundwater-flow-model and statistical approaches: it is relatively inexpensive and straightforward, and uses data that are commonly available or can be estimated. The zoning of nitrate-vulnerable areas provides regional authorities with a useful decision support tool for planning land use, properly managing groundwater, and combating and/or mitigating desertification processes.
However, a careful validation of the results is indispensable for reliable application.
Vrabel, Joseph; Teeple, Andrew; Kress, Wade H.
2009-01-01
With increasing demands for reliable water supplies and availability estimates, groundwater flow models often are developed to enhance understanding of surface-water and groundwater systems. Specific hydraulic variables must be known or calibrated for the groundwater-flow model to accurately simulate current or future conditions. Surface geophysical surveys, along with selected test-hole information, can provide an integrated framework for quantifying hydrogeologic conditions within a defined area. In 2004, the U.S. Geological Survey, in cooperation with the North Platte Natural Resources District, performed a surface geophysical survey using a capacitively coupled resistivity technique to map the lithology within the top 8 meters of the near-surface for 110 kilometers of the Interstate and Tri-State Canals in western Nebraska and eastern Wyoming. Assuming that leakage between the surface-water and groundwater systems is affected primarily by the sediment directly underlying the canal bed, leakage potential was estimated from the simple vertical mean of inverse-model resistivity values over depth levels whose layer thickness increased geometrically with depth, which biased the mean-resistivity values towards the surface. This method generally produced reliable results, but an improved analysis method was needed to account for situations where confining units, composed of less permeable material, underlie units with greater permeability. In this report, prepared by the U.S. Geological Survey in cooperation with the North Platte Natural Resources District, the authors use geostatistical analysis to develop the minimum-unadjusted method to compute a relative leakage potential based on the minimum resistivity value in a vertical column of the resistivity model. The minimum-unadjusted method considers the effects of homogeneous confining units.
The minimum-adjusted method also is developed to incorporate the effect of local lithologic heterogeneity on water transmission. Seven sites with differing geologic contexts were selected following review of the capacitively coupled resistivity data collected in 2004. A reevaluation of these sites using the mean, minimum-unadjusted, and minimum-adjusted methods was performed to compare the different approaches for estimating leakage potential. Five of the seven sites contained underlying confining units, for which the minimum-unadjusted and minimum-adjusted methods accounted for the confining-unit effect. Estimates of overall leakage potential were lower for the minimum-unadjusted and minimum-adjusted methods than those estimated by the mean method. For most sites, the local heterogeneity adjustment procedure of the minimum-adjusted method resulted in slightly larger overall leakage-potential estimates. In contrast to the mean method, the two minimum-based methods allowed the least permeable areas to control the overall vertical permeability of the subsurface. The minimum-adjusted method refined leakage-potential estimation by additionally including local lithologic heterogeneity effects.
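The difference between the mean and minimum-based scores is easy to see on a synthetic resistivity column containing a confining unit. The numbers below are invented for illustration, and the minimum-adjusted heterogeneity correction is not reproduced here.

```python
def mean_method(column):
    """Vertical mean of resistivity values in one model column."""
    return sum(column) / len(column)

def minimum_unadjusted(column):
    """Minimum-unadjusted method: the least resistive (least permeable)
    layer controls the relative leakage-potential estimate for the column."""
    return min(column)
```

For a column like [120, 8, 110] (a thin clay-rich confining unit between permeable sands, in arbitrary units), the mean (about 79) overstates the relative leakage potential compared with the minimum (8), which is the confining-unit effect described above.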
NASA Astrophysics Data System (ADS)
Orozco Cortés, Luis Fernando; Fernández García, Nicolás
2014-05-01
A method to obtain the general solution for any piecewise-constant potential is presented; this is achieved through analysis of the transfer matrices at each cutoff. The resonance phenomenon, together with the supersymmetric quantum mechanics technique, allows us to construct a wide family of complex potentials which can be used as theoretical models for optical systems. The method is applied to the particular case in which the potential function has six cutoff points.
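A minimal real-potential version of the transfer-matrix machinery can be written directly: plane-wave coefficients are matched at each cutoff by a 2x2 matrix, and the matrices are chained. The complex-potential and SUSY constructions of the paper build on the same matrices. The conventions below (units hbar^2/2m = 1, coefficient ordering) are my own assumptions, not the paper's.

```python
import cmath

def interface_matrix(k1, k2, x):
    """2x2 transfer matrix matching psi and psi' at cutoff x between
    regions with wavevectors k1 (left) and k2 (right)."""
    r = k1 / k2
    e = cmath.exp
    return [[0.5 * (1 + r) * e(1j * (k1 - k2) * x),
             0.5 * (1 - r) * e(-1j * (k1 + k2) * x)],
            [0.5 * (1 - r) * e(1j * (k1 + k2) * x),
             0.5 * (1 + r) * e(-1j * (k1 - k2) * x)]]

def matmul2(A, B):
    return [[A[0][0] * B[0][0] + A[0][1] * B[1][0],
             A[0][0] * B[0][1] + A[0][1] * B[1][1]],
            [A[1][0] * B[0][0] + A[1][1] * B[1][0],
             A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

def transmission(E, potentials, boundaries):
    """Transmission coefficient for a piecewise-constant potential;
    len(potentials) == len(boundaries) + 1."""
    ks = [cmath.sqrt(E - V) for V in potentials]
    M = [[1.0, 0.0], [0.0, 1.0]]
    for i, x in enumerate(boundaries):
        M = matmul2(interface_matrix(ks[i], ks[i + 1], x), M)
    refl = -M[1][0] / M[1][1]          # incoming (1, r) maps to outgoing (t, 0)
    t = M[0][0] + M[0][1] * refl
    return (ks[-1] / ks[0]).real * abs(t) ** 2
```

For a single rectangular barrier this reproduces the textbook tunneling formula; adding more (V, cutoff) pairs handles any number of cutoff points in the same way.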
Estimating soil matric potential in Owens Valley, California
Sorenson, Stephen K.; Miller, R.F.; Welch, M.R.; Groeneveld, D.P.; Branson, F.A.
1988-01-01
Much of the floor of the Owens Valley, California, is covered with alkaline scrub and alkaline meadow plant communities, whose existence depends partly on precipitation and partly on water infiltrated into the rooting zone from the shallow water table. The extent to which these plant communities are capable of adapting to and surviving fluctuations in the water table depends on physiological adaptations of the plants and on the water-content and matric-potential characteristics of the soils. Two methods were used to estimate soil matric potential at test sites in Owens Valley. The first was the filter-paper method, which uses the water content of filter papers equilibrated to the water content of soil samples taken with a hand auger. The other method of estimating soil matric potential was a modeling approach based on data from this and previous investigations. These data indicate that the base-10 logarithm of soil matric potential is a linear function of gravimetric soil water content for a particular soil. Estimates of soil water characteristic curves were made at two sites by averaging the gravimetric soil water content and soil matric potential values from multiple samples at 0.1 m depth intervals, derived by using the hand auger and the filter-paper method, and entering these values in the soil water model. The characteristic curves then were used to estimate soil matric potential from estimates of volumetric soil water content derived from neutron-probe readings. Evaluation of the modeling technique at two study sites indicated that estimates of soil matric potential within 0.5 pF units of the value derived by using the filter-paper method could be obtained 90 to 95% of the time in soils where water content was less than field capacity. The greatest errors occurred at depths where there was a distinct transition between soils of different textures.
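The reported relation, log10 of matric potential linear in gravimetric water content, amounts to a one-variable least-squares fit per soil, which can be sketched directly. The example numbers used below are invented for illustration, not Owens Valley data.

```python
def fit_pf_line(water_content, pf):
    """Least-squares fit of pF = log10(matric potential) = a + b*w,
    the linear relation the study assumes for a given soil.
    Returns (a, b)."""
    n = len(water_content)
    mw = sum(water_content) / n
    mp = sum(pf) / n
    b = (sum((w - mw) * (p - mp) for w, p in zip(water_content, pf))
         / sum((w - mw) ** 2 for w in water_content))
    return mp - b * mw, b

def matric_potential(a, b, w):
    """Predict matric potential from water content via the fitted line."""
    return 10.0 ** (a + b * w)
```

Once (a, b) are fixed for a soil, matric potential can be estimated from any independent water-content measurement, such as the neutron-probe readings mentioned above.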
SELF-BLM: Prediction of drug-target interactions via self-training SVM.
Keum, Jongsoo; Nam, Hojung
2017-01-01
Predicting drug-target interactions is important for the development of novel drugs and the repositioning of drugs. To predict such interactions, there are a number of methods based on drug and target protein similarity. Although these methods, such as the bipartite local model (BLM), show promise, they often categorize unknown interactions as negative interactions. Therefore, these methods are not ideal for finding potential drug-target interactions that have not yet been validated as positive interactions. Thus, here we propose a method that integrates machine learning techniques, such as self-training support vector machine (SVM) and BLM, to develop a self-training bipartite local model (SELF-BLM) that facilitates the identification of potential interactions. The method first categorizes unlabeled interactions and negative interactions among unknown interactions using a clustering method. Then, using the BLM method and self-training SVM, the unlabeled interactions are self-trained and final local classification models are constructed. When applied to four classes of proteins that include enzymes, G-protein coupled receptors (GPCRs), ion channels, and nuclear receptors, SELF-BLM showed the best performance for predicting not only known interactions but also potential interactions in three protein classes compared to other related studies. The implemented software and supporting data are available at https://github.com/GIST-CSBL/SELF-BLM.
A method of solid-solid phase equilibrium calculation by molecular dynamics
NASA Astrophysics Data System (ADS)
Karavaev, A. V.; Dremov, V. V.
2016-12-01
A method for evaluating solid-solid phase equilibrium curves in molecular dynamics simulation for a given model of interatomic interaction is proposed. The method allows one to calculate entropies of crystal phases and provides an accuracy comparable with that of the thermodynamic integration method of Frenkel and Ladd, while being much simpler to implement and less computationally intensive. The accuracy of the proposed method was demonstrated in MD calculations of entropies for an EAM potential for iron and for a MEAM potential for beryllium. The bcc-hcp equilibrium curves for iron calculated for the EAM potential by the thermodynamic integration method and by the proposed one agree quite well.
Fully Bayesian tests of neutrality using genealogical summary statistics.
Drummond, Alexei J; Suchard, Marc A
2008-10-31
Many data summary statistics have been developed to detect departures from neutral expectations of evolutionary models. However, questions about the neutrality of the evolution of genetic loci within natural populations remain difficult to assess. One critical cause of this difficulty is that most methods for testing neutrality make simplifying assumptions simultaneously about the mutational model and the population size model. Consequently, rejecting the null hypothesis of neutrality under these methods could result from violations of either or both assumptions, making interpretation troublesome. Here we harness posterior predictive simulation to exploit summary statistics of both the data and model parameters to test the goodness-of-fit of standard models of evolution. We apply the method to test the selective neutrality of molecular evolution in non-recombining gene genealogies, and we demonstrate the utility of our method on four real data sets, identifying significant departures from neutrality in human influenza A virus, even after controlling for variation in population size. Importantly, by employing a full model-based Bayesian analysis, our method separates the effects of demography from the effects of selection. The method also allows multiple summary statistics to be used in concert, thus potentially increasing sensitivity. Furthermore, our method remains useful in situations where analytical expectations and variances of summary statistics are not available. This aspect has great potential for the analysis of temporally spaced data, an expanding area previously neglected owing to the limited availability of theory and methods.
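The core of posterior predictive simulation is small enough to sketch: draw from the posterior, simulate a replicate data set under the fitted model, and compare a summary statistic against its observed value. The simulator and statistic below are placeholders I have invented; the paper's genealogical summary statistics and coalescent simulator are far richer.

```python
def posterior_predictive_pvalue(observed_stat, posterior_draws, simulate, statistic):
    """Posterior predictive check: fraction of model replicates whose
    summary statistic is at least as extreme as the observed value.
    Values near 0 or 1 flag poor model fit."""
    extreme = 0
    for theta in posterior_draws:
        replicate = simulate(theta)          # simulate data under theta
        if statistic(replicate) >= observed_stat:
            extreme += 1
    return extreme / len(posterior_draws)
```

Because the statistic is recomputed on each simulated replicate, no analytical expectation or variance is needed, which is the property the abstract highlights for temporally spaced data.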
NASA Astrophysics Data System (ADS)
Roubinet, D.; Linde, N.; Jougnot, D.; Irving, J.
2016-05-01
Numerous field experiments suggest that the self-potential (SP) geophysical method may allow for the detection of hydraulically active fractures and provide information about fracture properties. However, a lack of suitable numerical tools for modeling streaming potentials in fractured media prevents quantitative interpretation and limits our understanding of how the SP method can be used in this regard. To address this issue, we present a highly efficient two-dimensional discrete-dual-porosity approach for solving the fluid flow and associated self-potential problems in fractured rock. Our approach is specifically designed for complex fracture networks that cannot be investigated using standard numerical methods. We then simulate SP signals associated with pumping conditions for a number of examples to show that (i) accounting for matrix fluid flow is essential for accurate SP modeling and (ii) the sensitivity of SP to hydraulically active fractures is intimately linked with fracture-matrix fluid interactions. This implies that fractures associated with strong SP amplitudes are likely to be hydraulically conductive, attracting fluid flow from the surrounding matrix.
Seven lessons from manyfield inflation in random potentials
NASA Astrophysics Data System (ADS)
Dias, Mafalda; Frazer, Jonathan; Marsh, M. C. David
2018-01-01
We study inflation in models with many interacting fields subject to randomly generated scalar potentials. We use methods from non-equilibrium random matrix theory to construct the potentials and an adaptation of the `transport method' to evolve the two-point correlators during inflation. This construction allows, for the first time, for an explicit study of models with up to 100 interacting fields supporting a period of `approximately saddle-point' inflation. We determine the statistical predictions for observables by generating over 30,000 models with 2–100 fields supporting at least 60 efolds of inflation. These studies lead us to seven lessons: i) Manyfield inflation is not single-field inflation, ii) The larger the number of fields, the simpler and sharper the predictions, iii) Planck compatibility is not rare, but future experiments may rule out this class of models, iv) The smoother the potentials, the sharper the predictions, v) Hyperparameters can transition from stiff to sloppy, vi) Despite tachyons, isocurvature can decay, vii) Eigenvalue repulsion drives the predictions. We conclude that many of the `generic predictions' of single-field inflation can be emergent features of complex inflation models.
Comparison of the Melting Temperatures of Classical and Quantum Water Potential Models
NASA Astrophysics Data System (ADS)
Du, Sen; Yoo, Soohaeng; Li, Jinjin
2017-08-01
As theoretical approaches and technical methods improve over time, the field of computer simulations for water has greatly progressed. Water potential models become much more complex when additional interactions and advanced theories are considered. Macroscopic properties of water predicted by computer simulations using water potential models are expected to be consistent with experimental outcomes. As such, discrepancies between computer simulations and experiments can serve as a criterion for assessing the performance of various water potential models. Notably, water occurs not only in the liquid phase but also in solid and vapor phases. Therefore, the melting temperature, related to the solid-liquid phase equilibrium, is an effective parameter for judging the performance of different water potential models. As a mini review, our purpose is to introduce some water models developed in recent years and the melting temperatures obtained through simulations with such models. Moreover, some explanations referred to in the literature are described for the additional evaluation of the water potential models.
Modeling specific action potentials in the human atria based on a minimal single-cell model.
Richter, Yvonne; Lind, Pedro G; Maass, Philipp
2018-01-01
We present an effective method to model empirical action potentials of specific patients in the human atria, based on the minimal model of Bueno-Orovio, Cherry and Fenton adapted to atrial electrophysiology. In this model, three ionic currents are introduced, each governed by a characteristic time scale. By applying a nonlinear optimization procedure, a best combination of the respective time scales is determined, which allows one to reproduce specific action potentials with a given amplitude, width and shape. Possible applications for supporting clinical diagnosis are pointed out.
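The optimization step, choosing the combination of per-current time scales whose simulated action potential best matches measured amplitude, width, and shape, can be caricatured as a search over a candidate grid scored by squared feature mismatch. The forward model below is a stand-in lambda, not the minimal model itself, and brute force replaces the paper's nonlinear optimizer.

```python
import itertools

def fit_time_scales(target_features, forward_model, candidate_scales):
    """Score every combination of candidate time scales by the squared
    mismatch between simulated and target action-potential features,
    and return the best combination."""
    best, best_err = None, float("inf")
    for taus in itertools.product(*candidate_scales):
        feats = forward_model(taus)
        err = sum((f - t) ** 2 for f, t in zip(feats, target_features))
        if err < best_err:
            best, best_err = taus, err
    return best
```

In a real fit, forward_model would integrate the minimal model's ODEs with the trial time scales and extract amplitude, width, and shape descriptors from the resulting trace.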
Vasil'ev, G F
2013-01-01
Owing to methodological shortcomings, control theory has not yet realized its potential for the analysis of biological systems. To obtain the full benefit of the method, a parametric model of control is proposed for use in addition to the algorithmic model of control (to date, the only model used in control theory), and the reasoning behind it is explained. The suggested approach makes it possible to apply the full potential of modern control theory to the analysis of biological systems. The cybernetic approach is illustrated using the system governing the rise of glucose concentration in blood as an example.
Conditioning of Model Identification Task in Immune Inspired Optimizer SILO
NASA Astrophysics Data System (ADS)
Wojdan, K.; Swirski, K.; Warchol, M.; Maciorowski, M.
2009-10-01
Methods which provide good conditioning of the model identification task in the immune inspired, steady-state controller SILO (Stochastic Immune Layer Optimizer) are presented in this paper. These methods are implemented in a model-based optimization algorithm. The first method uses a safe model to ensure that the gains of the process model can be estimated. The second method is responsible for eliminating potential linear dependences between columns of the observation matrix. Moreover, new results from one SILO implementation in a Polish power plant are presented; they confirm the high efficiency of the presented solution in solving technical problems.
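The second mechanism, removing linearly dependent columns of the observation matrix before identification, can be sketched with a greedy Gram-Schmidt pass. This is a generic illustration of the idea under my own assumptions; SILO's actual implementation is not described in the abstract.

```python
def independent_columns(X, tol=1e-9):
    """Return indices of a maximal set of linearly independent columns
    of matrix X (given as a list of rows). Greedy Gram-Schmidt: a
    column is kept only if its residual, after projecting out the
    already-kept columns, has non-negligible norm."""
    columns = list(zip(*X))                      # column-major view
    kept, basis = [], []
    for j, col in enumerate(columns):
        v = list(col)
        for b in basis:
            dot = sum(vi * bi for vi, bi in zip(v, b))
            v = [vi - dot * bi for vi, bi in zip(v, b)]
        norm = sum(vi * vi for vi in v) ** 0.5
        if norm > tol:
            basis.append([vi / norm for vi in v])
            kept.append(j)
    return kept
```

Dropping the dependent columns keeps the normal equations of the identification step well conditioned, which is the point of the method described above.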
Zhang, Z; Jewett, D L
1994-01-01
Due to model misspecification, currently used Dipole Source Localization (DSL) methods may contain Multiple-Generator Errors (MulGenErrs) when fitting simultaneously-active dipoles. The size of the MulGenErr is a function of both the model used and the dipole parameters, including the dipoles' waveforms (time-varying magnitudes). For a given fitting model, by examining the variation of the MulGenErrs (or the fit parameters) under different waveforms for the same generating dipoles, the accuracy of the fitting model for this set of dipoles can be determined. This method of testing model misspecification can be applied to evoked potential maps even when the parameters of the generating dipoles are unknown. The dipole parameters fitted in a model should only be accepted if the model can be shown to be sufficiently accurate.
Development of a machine learning potential for graphene
NASA Astrophysics Data System (ADS)
Rowe, Patrick; Csányi, Gábor; Alfè, Dario; Michaelides, Angelos
2018-02-01
We present an accurate interatomic potential for graphene, constructed using the Gaussian approximation potential (GAP) machine learning methodology. This GAP model obtains a faithful representation of a density functional theory (DFT) potential energy surface, facilitating highly accurate (approaching the accuracy of ab initio methods) molecular dynamics simulations. This is achieved at a computational cost which is orders of magnitude lower than that of comparable calculations which directly invoke electronic structure methods. We evaluate the accuracy of our machine learning model alongside that of a number of popular empirical and bond-order potentials, using both experimental and ab initio data as references. We find that whilst significant discrepancies exist between the empirical interatomic potentials and the reference data—and amongst the empirical potentials themselves—the machine learning model introduced here provides exemplary performance in all of the tested areas. The calculated properties include: graphene phonon dispersion curves at 0 K (which we predict with sub-meV accuracy), phonon spectra at finite temperature, in-plane thermal expansion up to 2500 K as compared to NPT ab initio molecular dynamics simulations and a comparison of the thermally induced dispersion of graphene Raman bands to experimental observations. We have made our potential freely available online at [http://www.libatoms.org].
Decoding spike timing: the differential reverse correlation method
Tkačik, Gašper; Magnasco, Marcelo O.
2009-01-01
It is widely acknowledged that detailed timing of action potentials is used to encode information, for example in auditory pathways; however, the computational tools required to analyze encoding through timing are still in their infancy. We present a simple example of encoding, based on a recent model of time-frequency analysis, in which units fire action potentials when a certain condition is met, but the timing of the action potential also depends on other features of the stimulus. We show that, as a result, spike-triggered averages are smoothed so much that they do not represent the true features of the encoding. Inspired by this example, we present a simple method, differential reverse correlation, that separates the analysis of what causes a neuron to spike from what controls its timing. We apply this method to the leaky integrate-and-fire neuron and show that it accurately reconstructs the model's kernel. PMID:18597928
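The spike-triggered average that the paper critiques is simple to compute. A hedged toy (a leaky-integrator threshold-crossing encoder, standing in for the paper's time-frequency model) shows the basic reverse-correlation step:

```python
import numpy as np

rng = np.random.default_rng(0)
n, threshold, win = 200_000, 3.0, 20
stimulus = rng.standard_normal(n)

# Toy encoder: "spike" when a leaky integral of the stimulus crosses
# the threshold from below (hypothetical stand-in for a real neuron model).
v = np.zeros(n)
for t in range(1, n):
    v[t] = 0.9 * v[t - 1] + stimulus[t]
spikes = np.flatnonzero((v[1:] > threshold) & (v[:-1] <= threshold)) + 1

# Spike-triggered average: mean stimulus window preceding each spike.
sta = np.mean([stimulus[s - win:s] for s in spikes if s >= win], axis=0)
```

The STA rises toward the spike time; the paper's point is that variable spike timing smears exactly this kind of average, which motivates differentiating with respect to timing.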
NASA Astrophysics Data System (ADS)
Ozdemir, Adnan
2011-07-01
The purpose of this study is to produce a groundwater spring potential map of the Sultan Mountains in central Turkey, based on a logistic regression method within a Geographic Information System (GIS) environment. Using field surveys, the locations of the springs (440 springs) were determined in the study area. In this study, 17 spring-related factors were used in the analysis: geology, relative permeability, land use/land cover, precipitation, elevation, slope, aspect, total curvature, plan curvature, profile curvature, wetness index, stream power index, sediment transport capacity index, distance to drainage, distance to fault, drainage density, and fault density. The coefficients of the predictor variables were estimated using binary logistic regression analysis and were used to calculate the groundwater spring potential for the entire study area. The accuracy of the final spring potential map was evaluated against the observed springs by calculating the relative operating characteristic; the area under the relative operating characteristic curve was found to be 0.82. These results indicate that the model is a good estimator of the spring potential in the study area. The spring potential map shows that the areas of the very low, low, moderate and high groundwater spring potential classes are 105.586 km² (28.99%), 74.271 km² (19.906%), 101.203 km² (27.14%), and 90.05 km² (24.671%), respectively. Interpretation of the potential map showed that stream power index, relative permeability of lithologies, geology, elevation, aspect, wetness index, plan curvature, and drainage density play major roles in spring occurrence and distribution in the Sultan Mountains. The logistic regression approach had not previously been used to delineate groundwater potential zones; in this study, it was used to locate potential zones for groundwater springs in the Sultan Mountains.
The evolved model was found to be in strong agreement with the available groundwater spring test data. Hence, this method can be used routinely in groundwater exploration under favourable conditions.
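The statistical core of this workflow is a binary logistic regression on conditioning factors, scored by the area under the relative (receiver) operating characteristic curve. A self-contained sketch on synthetic data (two hypothetical standardized factors, not the study's 17 GIS layers):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
# Two hypothetical conditioning factors (e.g. standardized slope, wetness index).
X = rng.standard_normal((n, 2))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)  # spring / no spring

# Fit binary logistic regression by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / n
    b -= 0.1 * np.mean(p - y)

# Area under the ROC curve via the rank-sum (Mann-Whitney) identity.
scores = X @ w + b
pos, neg = scores[y == 1], scores[y == 0]
auc = np.mean(pos[:, None] > neg[None, :])
```

The fitted coefficients recover the signs of the generating model, and the AUC plays the same validation role as the 0.82 area value reported in the abstract.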
NASA Astrophysics Data System (ADS)
Mogaji, Kehinde Anthony; Lim, Hwee San
2018-06-01
The application of a GIS-based Dempster-Shafer data-driven model, the evidential belief function (EBF) methodology, to groundwater potential conditioning factors (GPCFs) derived from geophysical and hydrogeological data sets for assessing groundwater potentiality is presented in this study. The proposed method's efficacy in managing the degree of uncertainty in spatial predictive models motivated this research. The procedure entails, first, construction of a database containing groundwater data records (bore well location inventory, hydrogeological data records, etc.) and geophysical measurement data. From the database, influencing groundwater occurrence factors, namely aquifer layer thickness, aquifer layer resistivity, overburden material resistivity, overburden material thickness, aquifer hydraulic conductivity and aquifer transmissivity, were extracted and prepared. Further, the bore well location inventories were partitioned randomly in a ratio of 70% (19 wells) for model training and 30% (9 wells) for model testing. Synthesis of the GPCFs via the DS-EBF model algorithms produced the groundwater productivity potential index (GPPI) map, which demarcated the area into low-medium, medium, medium-high and high potential zones. The analyzed percentage degree of uncertainty is >10% for the predicted low potential zone classes and <10% for the medium and high potential zone classes. Validation of the DS theory model-based GPPI map through the ROC approach established a prediction rate accuracy of 88.8%. Successively, transverse resistance (TR) values in the range of 1280 to 30,000 Ω m², determined through Dar-Zarrouk parameter analysis for the geoelectrically delineated aquifer units of the predicted potential zones, quantitatively confirm the DS theory modeling prediction results.
These results extend the capability of the DS-EBF model in predictive modeling through effective uncertainty management. Thus, the produced map could form part of a reliable decision support system for use by local authorities in groundwater exploitation and management in the area.
Milky Way mass and potential recovery using tidal streams in a realistic halo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonaca, Ana; Geha, Marla; Küpper, Andreas H. W.
2014-11-01
We present a new method for determining the Galactic gravitational potential based on forward modeling of tidal stellar streams. We use this method to test the performance of smooth and static analytic potentials in representing realistic dark matter halos, which have substructure and are continually evolving by accretion. Our FAST-FORWARD method uses a Markov Chain Monte Carlo algorithm to compare, in six-dimensional phase space, an 'observed' stream to models created in trial analytic potentials. We analyze a large sample of streams that evolved in the Via Lactea II (VL2) simulation, which represents a realistic Galactic halo potential. The recovered potential parameters are in agreement with the best fit to the global, present-day VL2 potential. However, merely assuming an analytic potential limits the dark matter halo mass measurement to an accuracy of 5%-20%, depending on the choice of analytic parameterization. Collectively, the mass estimates using streams from our sample reach this fundamental limit, but individually they can be highly biased. Individual streams can both under- and overestimate the mass, and the bias is progressively worse for those with smaller perigalacticons, motivating the search for tidal streams at galactocentric distances larger than 70 kpc. We estimate that the assumption of a static and smooth dark matter potential in modeling of the GD-1- and Pal5-like streams introduces an error of up to 50% in the Milky Way mass estimates.
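The statistical engine of such forward modeling is a Markov Chain Monte Carlo comparison of observations with predictions in trial potentials. A toy Metropolis-Hastings sketch with a single mass parameter and a hypothetical v = sqrt(M/r) rotation law (far simpler than the paper's six-dimensional stream likelihood):

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy "observations": circular velocities v = sqrt(M/r) + noise, true M = 4.0.
r = np.linspace(1.0, 10.0, 30)
v_obs = np.sqrt(4.0 / r) + 0.02 * rng.standard_normal(r.size)

def log_like(M):
    # Gaussian likelihood of the data given trial potential parameter M.
    if M <= 0:
        return -np.inf
    resid = v_obs - np.sqrt(M / r)
    return -0.5 * np.sum((resid / 0.02) ** 2)

# Metropolis-Hastings random walk over the single potential parameter M.
chain, M = [], 1.0
ll = log_like(M)
for _ in range(20_000):
    M_new = M + 0.05 * rng.standard_normal()
    ll_new = log_like(M_new)
    if np.log(rng.random()) < ll_new - ll:   # accept with prob min(1, L_new/L)
        M, ll = M_new, ll_new
    chain.append(M)
posterior = np.array(chain[5000:])           # discard burn-in
```

The posterior mean recovers the generating mass; in the paper the same machinery compares whole simulated streams to the observed one in phase space.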
Consistent use of the standard model effective potential.
Andreassen, Anders; Frost, William; Schwartz, Matthew D
2014-12-12
The stability of the standard model is determined by the true minimum of the effective Higgs potential. We show that the potential at its minimum, when computed by the traditional method, is strongly dependent on the gauge parameter. It moreover depends on the scale at which the potential is calculated. We provide a consistent method for determining absolute stability independent of both gauge and calculation scale, order by order in perturbation theory. This leads to revised stability bounds: m_h^pole > (129.4 ± 2.3) GeV and m_t^pole < (171.2 ± 0.3) GeV. We also show how to evaluate the effect of new physics on the stability bound without resorting to unphysical field values.
NASA Astrophysics Data System (ADS)
Tuckness, D. G.; Jost, B.
1995-08-01
Current knowledge of the lunar gravity field is presented. The various methods used in determining these gravity fields are investigated and analyzed. It will be shown that weaknesses exist in the current models of the lunar gravity field. The dominant part of this weakness is caused by the lack of lunar tracking data information (farside, polar areas), which makes modeling the total lunar potential difficult. Comparisons of the various lunar models reveal an agreement in the low-order coefficients of the Legendre polynomials expansions. However, substantial differences in the models can exist in the higher-order harmonics. The main purpose of this study is to assess today's lunar gravity field models for use in tomorrow's lunar mission designs and operations.
1982-08-01
[Figure 4: Properties of Singularity Sheets.] Source, doublet, and vortex singularity sheets exhibit differing behaviors, so they may be used to model different types of flow. Using Green's theorem, it is clear that the problem of potential flow over a body can be modeled using source, doublet, or vortex singularities, and that the doublet and vortex sheet representations are equivalent.
Hill, Mary C.; Tiedeman, Claire
2007-01-01
Methods and guidelines for developing and using mathematical models. Turn to Effective Groundwater Model Calibration for a set of methods and guidelines that can help produce more accurate and transparent mathematical models. The models can represent groundwater flow and transport and other natural and engineered systems. Use this book and its extensive exercises to learn methods to fully exploit the data on hand, maximize the model's potential, and troubleshoot any problems that arise. Use the methods to perform: sensitivity analysis to evaluate the information content of data; data assessment to identify (a) existing measurements that dominate model development and predictions and (b) potential measurements likely to improve the reliability of predictions; calibration to develop models that are consistent with the data in an optimal manner; and uncertainty evaluation to quantify and communicate errors in simulated results that are often used to make important societal decisions. Most of the methods are based on linear and nonlinear regression theory. Fourteen guidelines show the reader how to use the methods advantageously in practical situations. Exercises focus on a groundwater flow system and management problem, enabling readers to apply all the methods presented in the text. The exercises can be completed using the material provided in the book, or as hands-on computer exercises using instructions and files available on the text's accompanying Web site. Throughout the book, the authors stress the need for valid statistical concepts and easily understood presentation methods required to achieve well-tested, transparent models. Most of the examples and all of the exercises focus on simulating groundwater systems; other examples come from surface-water hydrology and geophysics. The methods and guidelines in the text are broadly applicable and can be used by students, researchers, and engineers to simulate many kinds of systems.
New Methods for Estimating Seasonal Potential Climate Predictability
NASA Astrophysics Data System (ADS)
Feng, Xia
This study develops two new statistical approaches to assess the seasonal potential predictability of observed climate variables. One is the univariate analysis of covariance (ANOCOVA) model, a combination of an autoregressive (AR) model and analysis of variance (ANOVA). It has the advantage of taking into account the uncertainty of the estimated parameters due to sampling errors in the statistical test, which is often neglected in AR-based methods, and of accounting for daily autocorrelation, which is not considered in traditional ANOVA. In the ANOCOVA model, the seasonal signals arising from external forcing are tested for equality across years, to assess whether any interannual variability that may exist is potentially predictable. The bootstrap is an attractive alternative that requires no model hypothesis and is available no matter how mathematically complicated the parameter estimator is. This method builds up the empirical distribution of the interannual variance from resamplings drawn with replacement from the given sample, in which the only predictability in seasonal means arises from weather noise. These two methods are applied to temperature and water cycle components, including precipitation and evaporation, to measure the extent to which the interannual variance of seasonal means exceeds the unpredictable weather noise, and the results are compared with previous methods, including Leith-Shukla-Gutzler (LSG), Madden, and Katz. The potential predictability of temperature from the ANOCOVA model, bootstrap, LSG and Madden exhibits a pronounced tropical-extratropical contrast, with much larger predictability in the tropics, dominated by El Niño/Southern Oscillation (ENSO), than in higher latitudes, where strong internal variability lowers predictability. Bootstrap tends to display the highest predictability of the four methods, ANOCOVA lies in the middle, while LSG and Madden appear to generate lower predictability.
Seasonal precipitation predictability from ANOCOVA, bootstrap, and Katz resembles that of temperature: it is higher over the tropical regions and lower in the extratropics. Bootstrap and ANOCOVA are in good agreement with each other, both methods generating larger predictability than Katz. The seasonal predictability of evaporation over land bears considerable similarity to that of temperature using ANOCOVA, bootstrap, LSG and Madden. The remote SST forcing and soil moisture reveal substantial seasonality in their relations with the potentially predictable seasonal signals. For selected regions, either SST or soil moisture or both show significant relationships with predictable signals, hence providing indirect insight into the slowly varying boundary processes that enable useful seasonal climate prediction. A multivariate analysis of covariance (MANOCOVA) model is established to identify distinctive predictable patterns, which are uncorrelated with each other. Generally speaking, the seasonal predictability from the multivariate model is consistent with that from ANOCOVA. Besides unveiling the spatial variability of predictability, the MANOCOVA model also reveals the temporal variability of each predictable pattern, which could be linked to periodic oscillations.
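The bootstrap test described above can be sketched in a few lines: pool the daily data, resample seasons with replacement so that seasonal means contain only weather noise, and compare the observed interannual variance against this null distribution. All numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n_years, n_days = 30, 90
# Daily "temperature" anomalies: weather noise plus a real interannual signal.
signal = rng.standard_normal(n_years) * 1.0          # slowly varying forcing
daily = signal[:, None] + rng.standard_normal((n_years, n_days)) * 3.0

seasonal_means = daily.mean(axis=1)
obs_var = seasonal_means.var(ddof=1)

# Bootstrap null: pool all days and resample seasons with replacement, so
# the only interannual variance in seasonal means comes from weather noise.
pool = daily.ravel()
null = np.empty(2000)
for i in range(2000):
    resampled = rng.choice(pool, size=(n_years, n_days))
    null[i] = resampled.mean(axis=1).var(ddof=1)

p_value = np.mean(null >= obs_var)
```

With an injected interannual signal the observed variance far exceeds the bootstrap null, so the seasonal means are flagged as potentially predictable.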
DOE Office of Scientific and Technical Information (OSTI.GOV)
McLaughlin, E.; Gupta, S.
This project mainly involves a molecular dynamics and Monte Carlo study of the effect of molecular shape on thermophysical properties of bulk fluids, with an emphasis on aromatic hydrocarbon liquids. In this regard we have studied the modeling, simulation methodologies, and predictive and correlating methods for thermodynamic properties of fluids of nonspherical molecules. In connection with modeling, we have studied the use of anisotropic site-site potentials, through a modification of the Gay-Berne Gaussian overlap potential, to successfully model the aromatic rings after adding the necessary electrostatic moments. We have also shown that these interaction sites should be located at the geometric centers of the chemical groups. In connection with predictive methods, we have shown two perturbation-type theories to work well for fluids modeled using one-center anisotropic potentials, and the possibility exists for extending these to anisotropic site-site models. In connection with correlation methods, we have studied, through simulations, the effect of molecular shape on the attraction term in the generalized van der Waals equation of state for fluids of nonspherical molecules and proposed a possible form which is to be studied further. We have successfully studied the vector and parallel processing aspects of molecular simulations for fluids of nonspherical molecules.
NASA Astrophysics Data System (ADS)
Fraser, S. A.; Wood, N. J.; Johnston, D. M.; Leonard, G. S.; Greening, P. D.; Rossetto, T.
2014-11-01
Evacuation of the population from a tsunami hazard zone is vital to reduce life-loss due to inundation. Geospatial least-cost distance modelling provides one approach to assessing tsunami evacuation potential. Previous models have generally used two static exposure scenarios and fixed travel speeds to represent population movement. Some analyses have assumed immediate departure or a common evacuation departure time for all exposed population. Here, a method is proposed to incorporate time-variable exposure, distributed travel speeds, and uncertain evacuation departure time into an existing anisotropic least-cost path distance framework. The method is demonstrated for hypothetical local-source tsunami evacuation in Napier City, Hawke's Bay, New Zealand. There is significant diurnal variation in pedestrian evacuation potential at the suburb level, although the total number of people unable to evacuate is stable across all scenarios. Whilst some fixed travel speeds approximate a distributed speed approach, others may overestimate evacuation potential. The impact of evacuation departure time is a significant contributor to total evacuation time. This method improves least-cost modelling of evacuation dynamics for evacuation planning, casualty modelling, and development of emergency response training scenarios. However, it requires detailed exposure data, which may preclude its use in many situations.
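The geospatial core of this approach is a least-cost (Dijkstra) distance computation from every cell to the nearest safe exit. A minimal grid sketch, assuming a hypothetical 10 m cell size, a single fixed travel speed, and a per-cell terrain cost multiplier (the paper's method additionally distributes speeds and departure times):

```python
import heapq

def evacuation_times(grid, exits, speed=1.1):
    # Dijkstra least-cost distance (in seconds) from every cell to the
    # nearest exit.  grid[r][c] is a terrain cost multiplier (1.0 = open
    # ground); a step into a cell costs cell_size * multiplier / speed.
    rows, cols = len(grid), len(grid[0])
    dist = {e: 0.0 for e in exits}
    heap = [(0.0, e) for e in exits]
    heapq.heapify(heap)
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 10.0 * grid[nr][nc] / speed   # 10 m cell size
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

# 3x3 toy hazard zone, one safe exit at a corner; centre cell is "sand" (2x cost).
grid = [[1.0, 1.0, 1.0],
        [1.0, 2.0, 1.0],
        [1.0, 1.0, 1.0]]
times = evacuation_times(grid, exits=[(0, 0)])
```

Note how the high-cost centre cell is routed around: the far corner is reached in four cheap steps rather than through the sand.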
Modeling of thin film GaAs growth
NASA Technical Reports Server (NTRS)
Heinbockel, J. H.
1982-01-01
A potential scaling Monte Carlo model of crystal growth is developed. The model is a modification of the solid-on-solid method for studying crystal growth in that potentials at surface sites are continuously updated on a time scale reflecting the surface events of migration, incorporation and evaporation. The model allows for B on A type of crystal growth and lattice disregistry by the assignment of potential values at various surface sites. The surface adatoms are periodically assigned a random energy from a Boltzmann distribution and this energy determines whether the adatoms evaporate, migrate or remain stationary during the sampling interval. For each addition or migration of an adatom, the surface potentials are adjusted to reflect the adsorption, migration or desorption potential changes.
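The event loop of such a model can be sketched as follows. This is a deliberately simplified 1-D solid-on-solid toy with a hypothetical bond-counting energy, constant deposition, and Boltzmann-sampled evaporation/migration decisions; the actual model continuously updates site potentials and handles B-on-A growth and lattice disregistry:

```python
import math
import random

random.seed(4)
# Toy solid-on-solid surface: column heights on a periodic 1-D lattice.
L, beta = 20, 1.0
heights = [0] * L

def bond_energy(i):
    # Hypothetical bond counting: more lateral neighbours at equal-or-higher
    # columns means stronger binding, hence a rarer thermal event.
    left = heights[(i - 1) % L] >= heights[i]
    right = heights[(i + 1) % L] >= heights[i]
    return 1.0 + left + right

for step in range(500):
    i = random.randrange(L)
    if heights[i] > 0 and random.random() < math.exp(-beta * bond_energy(i)):
        # Thermally activated event: migrate half the time, evaporate otherwise.
        heights[i] -= 1
        if random.random() < 0.5:
            heights[random.choice([(i - 1) % L, (i + 1) % L])] += 1
    heights[random.randrange(L)] += 1   # constant deposition flux
```

Migration conserves atoms while evaporation removes them, so the deposited total bounds the final coverage from above.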
Stoykov, Nikolay S; Kuiken, Todd A; Lowery, Madeleine M; Taflove, Allen
2003-09-01
We present what we believe to be the first algorithms that use a simple scalar-potential formulation to model linear Debye and Lorentz dielectric dispersions at low frequencies in the context of finite-element time-domain (FETD) numerical solutions of electric potential. The new algorithms, which permit treatment of multiple-pole dielectric relaxations, are based on the auxiliary differential equation method and are unconditionally stable. We validate the algorithms by comparison with the results of a previously reported method based on the Fourier transform. The new algorithms should be useful in calculating the transient response of biological materials subject to impulsive excitation. Potential applications include FETD modeling of electromyography, functional electrical stimulation, defibrillation, and effects of lightning and impulsive electric shock.
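The auxiliary-differential-equation idea can be shown in a zero-dimensional toy: a single-pole Debye polarization obeys tau*dP/dt + P = eps0*d_eps*E, and a backward-Euler update of this ODE is unconditionally stable. The parameter values below are hypothetical tissue-like numbers, and this sketch omits the finite-element spatial discretization entirely:

```python
eps0 = 8.854e-12                      # vacuum permittivity (F/m)
d_eps, tau, dt = 50.0, 1e-6, 1e-7     # hypothetical Debye strength, relaxation, step
E = 1.0                               # step electric field (V/m)

# Auxiliary differential equation  tau*dP/dt + P = eps0*d_eps*E,
# discretized with backward Euler (unconditionally stable for any dt).
P = 0.0
history = []
for _ in range(200):
    P = (P + (dt / tau) * eps0 * d_eps * E) / (1 + dt / tau)
    history.append(P)

P_inf = eps0 * d_eps * E              # expected steady-state polarization
```

The polarization relaxes monotonically to the static limit eps0*d_eps*E, which is the behavior a multi-pole FETD implementation must reproduce per pole.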
NASA Astrophysics Data System (ADS)
Palmer, Troy A.; Alexay, Christopher C.
2006-05-01
This paper addresses the variety and impact of dispersive model variations for infrared materials and, in particular, the level to which certain optical designs are affected by this potential variation in germanium. This work offers a method for anticipating and/or minimizing the pitfalls such potential model variations may have on a candidate optical design.
Modelisations et inversions tri-dimensionnelles en prospections gravimetrique et electrique
NASA Astrophysics Data System (ADS)
Boulanger, Olivier
The aim of this thesis is the application of gravity and resistivity methods to mining prospecting. The objectives of the present study are: (1) to build a fast gravity inversion method to interpret surface data; (2) to develop a tool for modelling the electrical potential acquired at the surface and in boreholes when the resistivity distribution is heterogeneous; and (3) to define and implement a stochastic inversion scheme allowing the estimation of the subsurface resistivity from electrical data. The first technique is a three-dimensional (3D) inversion program allowing the interpretation of gravity data using a selection of constraints such as minimum distance, flatness, smoothness and compactness. These constraints are integrated in a Lagrangian formulation. A multi-grid technique is also implemented to resolve large and short gravity wavelengths separately. The subsurface in the survey area is divided into juxtaposed rectangular prismatic blocks, and the problem is solved by calculating the model parameters, i.e. the densities of each block. Weights are given to each block depending on depth, a priori information on density, and the density range allowed for the region under investigation. The code is tested on synthetic data, and the advantages and behaviour of each method are compared in the 3D reconstruction. Recovery of the geometry (depth, size) and density distribution of the original model depends on the set of constraints used; the best combination found for multiple bodies appears to be flatness and minimum volume. The inversion method is also tested on real gravity data. The second tool developed in this thesis is a three-dimensional electrical resistivity modelling code to interpret surface and subsurface data. Based on the integral equation, it calculates the charge density caused by conductivity gradients at each interface of the mesh, allowing an exact estimation of the potential.
Modelling generates a huge matrix of Green's functions, which is stored using a pyramidal compression method. The third method consists of interpreting electrical potential measurements with a non-linear geostatistical approach including new constraints; it estimates an analytical covariance model for the resistivity parameters from the potential data. (Abstract shortened by UMI.)
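The constrained inversion step can be illustrated with a scalar stand-in: a 1-D linear forward operator and a single Tikhonov damping term in place of the thesis's full Lagrangian with flatness and compactness constraints. All geometry, kernel shapes, and noise levels below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy 1-D "gravity" inversion: surface stations sense buried cells through
# a smoothing (Lorentzian-like) kernel; recover cell densities by
# regularized least squares.
x_cells = np.linspace(0.0, 1.0, 25)            # subsurface block centres
x_obs = np.linspace(0.0, 1.0, 40)              # surface station positions
depth = 0.1
A = depth / ((x_obs[:, None] - x_cells[None, :]) ** 2 + depth ** 2)

m_true = np.exp(-((x_cells - 0.5) / 0.2) ** 2)  # compact density anomaly
d = A @ m_true + 0.01 * rng.standard_normal(x_obs.size)

lam = 1e-2                                      # Tikhonov damping weight
m_est = np.linalg.solve(A.T @ A + lam * np.eye(x_cells.size), A.T @ d)
rel_err = np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true)
```

The damping term stabilizes the ill-conditioned normal equations, which is the same role the thesis's constraint terms play inside the Lagrangian.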
Qualitative model-based diagnosis using possibility theory
NASA Technical Reports Server (NTRS)
Joslyn, Cliff
1994-01-01
The potential for the use of possibility in the qualitative model-based diagnosis of spacecraft systems is described. The first sections of the paper briefly introduce the Model-Based Diagnostic (MBD) approach to spacecraft fault diagnosis; Qualitative Modeling (QM) methodologies; and the concepts of possibilistic modeling in the context of Generalized Information Theory (GIT). Then the necessary conditions for the applicability of possibilistic methods to qualitative MBD, and a number of potential directions for such an application, are described.
Dong, Xingjian; Peng, Zhike; Hua, Hongxing; Meng, Guang
2014-01-01
An efficient spectral element (SE) with electric potential degrees of freedom (DOF) is proposed to investigate the static electromechanical responses of a piezoelectric bimorph for its actuator and sensor functions. A sublayer model based on the piecewise linear approximation for the electric potential is used to describe the nonlinear distribution of electric potential through the thickness of the piezoelectric layers. An equivalent single layer (ESL) model based on first-order shear deformation theory (FSDT) is used to describe the displacement field. The Legendre orthogonal polynomials of order 5 are used in the element interpolation functions. The validity and the capability of the present SE model for investigation of global and local responses of the piezoelectric bimorph are confirmed by comparing the present solutions with those obtained from coupled 3-D finite element (FE) analysis. It is shown that, without introducing any higher-order electric potential assumptions, the current method can accurately describe the distribution of the electric potential across the thickness even for a rather thick bimorph. It is revealed that the effect of electric potential is significant when the bimorph is used as sensor while the effect is insignificant when the bimorph is used as actuator, and therefore, the present study may provide a better understanding of the nonlinear induced electric potential for bimorph sensor and actuator. PMID:24561399
Novel scheme to compute chemical potentials of chain molecules on a lattice
NASA Astrophysics Data System (ADS)
Mooij, G. C. A. M.; Frenkel, D.
We present a novel method that allows efficient computation of the total number of allowed conformations of a chain molecule in a dense phase. Using this method, it is possible to estimate the chemical potential of such a chain molecule. We have tested the present method in simulations of a two-dimensional monolayer of chain molecules on a lattice (Whittington-Chapman model) and compared it with existing schemes to compute the chemical potential. We find that the present approach is two to three orders of magnitude faster than the most efficient of the existing methods.
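A closely related and easily reproduced scheme is Rosenbluth chain growth, where the product of free-neighbour counts is an unbiased estimator of the number of allowed conformations (and hence, via -kT ln of its average, of the chain's excess chemical potential). A sketch on an otherwise empty square lattice (the paper treats a dense monolayer, and its scheme differs in detail):

```python
import random

random.seed(6)
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def rosenbluth_weight(n_steps):
    # Grow one self-avoiding walk; the Rosenbluth weight W = prod of
    # free-neighbour counts is an unbiased estimator of the number of
    # n-step self-avoiding walks.
    visited = {(0, 0)}
    pos, W = (0, 0), 1.0
    for _ in range(n_steps):
        free = [(pos[0] + dx, pos[1] + dy) for dx, dy in MOVES
                if (pos[0] + dx, pos[1] + dy) not in visited]
        if not free:
            return 0.0              # trapped: dead configuration
        W *= len(free)
        pos = random.choice(free)
        visited.add(pos)
    return W

samples = 20_000
estimate = sum(rosenbluth_weight(4) for _ in range(samples)) / samples
# The exact number of 4-step self-avoiding walks on the square lattice is 100.
```

Averaging the weights converges on the exact conformation count, which is the quantity whose logarithm enters the chemical potential.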
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hua Chiaho; Shukla, Hemant I.; Merchant, Thomas E.
2007-02-01
Purpose: To estimate potential differences in volumetric bone growth in children with sarcoma treated with intensity-modulated (IMRT) and conformal (CRT) radiation therapy using an empiric dose-effect model. Methods and Materials: A random coefficient model was used to estimate potential volumetric bone growth of 36 pelvic bones (ischiopubis and ilium) from 11 patients 4 years after radiotherapy. The model incorporated patient age, pretreatment bone volume, integral dose >35 Gy, and time since completion of radiation therapy. Three dosimetry plans were entered into the model: the actual CRT/IMRT plan, a nontreated comparable IMRT/CRT plan, and an idealized plan in which dose was delivered only to the planning target volume. The results were compared with modeled normal bone growth. Results: The model predicted that by using the idealized, IMRT, and CRT approaches, patients would maintain 93%, 87%, and 84%, respectively (p = 0.06), of their expected normal growth. Patients older than 10 years would maintain 98% of normal growth, regardless of treatment method. Those younger than 10 years would maintain 87% (idealized), 76% (IMRT), or 70% (CRT) of their expected growth (p = 0.015). Post hoc testing (Tukey) revealed that the CRT and IMRT approaches differed significantly from the idealized one but not from each other. Conclusions: Dose-effect models facilitate the comparison of treatment methods and potential interventions. Although treatment methods do not alter the growth of flat bones in older pediatric patients, they may significantly impact bone growth in children younger than age 10 years, especially as we move toward techniques with high conformity and sharper dose gradients.
The electrical self-potential method is a non-intrusive snow-hydrological sensor
NASA Astrophysics Data System (ADS)
Thompson, S. S.; Kulessa, B.; Essery, R. L. H.; Lüthi, M. P.
2015-08-01
Our ability to measure, quantify and assimilate hydrological properties and processes of snow in operational models is disproportionately poor compared to the significance of seasonal snowmelt as a global water resource and major risk factor in flood and avalanche forecasting. Encouraged by recent theoretical, modelling and laboratory work, we show here that the diurnal evolution of areally distributed self-potential magnitudes closely tracks that of bulk meltwater fluxes in melting in-situ snowpacks at Rhone and Jungfraujoch glaciers, Switzerland. Numerical modelling infers temporally evolving liquid water contents in the snowpacks on successive days in close agreement with snow-pit measurements. Allaying previous concerns, the governing physical and chemical properties of snow and meltwater proved temporally invariant for modelling purposes. Because the measurement procedure is straightforward and readily automated for continuous monitoring over significant spatial scales, we conclude that the self-potential geophysical method is a highly promising non-intrusive snow-hydrological sensor for measurement practice, modelling and operational snow forecasting.
Sundararaman, Ravishankar; Goddard, William A; Arias, Tomas A
2017-03-21
First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Finally, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.
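The fixed-potential condition can be caricatured in one dimension: at a given electron chemical potential mu, the electron count N adjusts to minimize the grand free energy Phi(N) = E(N) - mu*N. The quadratic energy model below is purely hypothetical; the paper minimizes the Kohn-Sham grand free energy self-consistently:

```python
# Hypothetical quadratic energy model E(N) = E0 + a*N + b*N^2 (b > 0).
E0, a, b = 1.0, -5.0, 0.5
mu = -4.0                      # electron chemical potential set by the electrode

def grand_free_energy(N):
    # Grand free energy Phi(N) = E(N) - mu*N, convex in N for b > 0.
    return E0 + a * N + b * N ** 2 - mu * N

# Ternary search for the minimizing electron count (Phi is convex).
lo, hi = -10.0, 10.0
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if grand_free_energy(m1) < grand_free_energy(m2):
        hi = m2
    else:
        lo = m1
N_star = (lo + hi) / 2
# Stationarity dE/dN = mu gives N* = (mu - a) / (2*b) = 1.0 for these numbers.
```

The minimizer satisfies dE/dN = mu, the defining condition of the grand-canonical ensemble that both algorithms in the paper enforce.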
Sundararaman, Ravishankar; Goddard, III, William A.; Arias, Tomas A.
2017-03-16
First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solvemore » the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Lastly, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.« less
A phantom axon setup for validating models of action potential recordings.
Rossel, Olivier; Soulier, Fabien; Bernard, Serge; Guiraud, David; Cathébras, Guy
2016-08-01
Electrode designs and strategies for electroneurogram recordings are often tested first by computer simulations and then by animal models, but they are rarely implanted for long-term evaluation in humans. The models show that the amplitude of the potential at the surface of an axon is higher in front of the nodes of Ranvier than at the internodes; however, this has not been investigated through in vivo measurements. An original experimental method is presented to emulate a single fiber action potential in an infinite conductive volume, allowing the potential of an axon to be recorded at both the nodes of Ranvier and the internodes, for a wide range of electrode-to-fiber radial distances. The paper particularly investigates the differences in the action potential amplitude along the longitudinal axis of an axon. At a short radial distance, the action potential amplitude measured in front of a node of Ranvier is two times larger than in the middle of two nodes. Moreover, farther from the phantom axon, the measured action potential amplitude is almost constant along the longitudinal axis. The results of this new method confirm the computer simulations, with a correlation of 97.6 %.
Estimating soil matric potential in Owens Valley, California
Sorenson, Stephen K.; Miller, Reuben F.; Welch, Michael R.; Groeneveld, David P.; Branson, Farrel A.
1989-01-01
Much of the floor of Owens Valley, California, is covered with alkaline scrub and alkaline meadow plant communities, whose existence is dependent partly on precipitation and partly on water infiltrated into the rooting zone from the shallow water table. The extent to which these plant communities are capable of adapting to and surviving fluctuations in the water table depends on physiological adaptations of the plants and on the relation between water content and matric potential in the soils. Two methods were used to estimate soil matric potential in test sites in Owens Valley. The first, the filter-paper method, uses water content of filter papers equilibrated to water content of soil samples taken with a hand auger. The previously published calibration relations used to estimate soil matric potential from the water content of the filter papers were modified on the basis of current laboratory data. The other method of estimating soil matric potential was a modeling approach based on data from this and previous investigations. These data indicate that the base-10 logarithm of soil matric potential is a linear function of gravimetric soil water content for a particular soil. The slope and intercept of this function vary with the texture and saturation capacity of the soil. Estimates of soil water characteristic curves were made at two sites by averaging the gravimetric soil water content and soil matric potential values from multiple samples at 0.1-m depth intervals derived by using the hand auger and filter-paper method and entering these values in the soil water model. The characteristic curves then were used to estimate soil matric potential from estimates of volumetric soil water content derived from neutron-probe readings.
Evaluation of the modeling technique at two study sites indicated that estimates of soil matric potential within 0.5 pF units of the soil matric potential value derived by using the filter-paper method could be obtained 90 to 95 percent of the time in soils where water content was less than field capacity. The greatest errors occurred at depths where there was a distinct transition between soils of different textures.
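The log-linear relation described in the abstract above (pF linear in gravimetric water content) can be sketched as a simple calibration; the sample values below are invented for illustration, not data from the Owens Valley study.

```python
import numpy as np

# Illustrative sketch of the log-linear soil water model described above:
# log10 of matric potential (pF) is assumed linear in gravimetric water
# content. The sample values below are hypothetical, not measured data.
theta = np.array([0.05, 0.10, 0.15, 0.20, 0.25])  # gravimetric water content (g/g)
pF    = np.array([4.2, 3.6, 3.0, 2.4, 1.8])       # log10 of matric potential (pF)

# Fit the slope and intercept of the characteristic line for this soil.
slope, intercept = np.polyfit(theta, pF, 1)

def matric_potential_pF(water_content):
    """Estimate pF from gravimetric water content via the fitted line."""
    return slope * water_content + intercept
```

A neutron-probe water-content reading could then be converted to pF with this function, mirroring the characteristic-curve step in the abstract.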
Outcome modelling strategies in epidemiology: traditional methods and basic alternatives
Greenland, Sander; Daniel, Rhian; Pearce, Neil
2016-01-01
Controlling for too many potential confounders can lead to or aggravate problems of data sparsity or multicollinearity, particularly when the number of covariates is large in relation to the study size. As a result, methods to reduce the number of modelled covariates are often deployed. We review several traditional modelling strategies, including stepwise regression and the ‘change-in-estimate’ (CIE) approach to deciding which potential confounders to include in an outcome-regression model for estimating effects of a targeted exposure. We discuss their shortcomings, and then provide some basic alternatives and refinements that do not require special macros or programming. Throughout, we assume the main goal is to derive the most accurate effect estimates obtainable from the data and commercial software. Allowing that most users must stay within standard software packages, this goal can be roughly approximated using basic methods to assess, and thereby minimize, mean squared error (MSE). PMID:27097747
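The change-in-estimate (CIE) check reviewed above can be sketched as follows; the simulated data and the conventional 10% cutoff are illustrative assumptions, not values from the paper.

```python
import numpy as np

def cie_keep_confounder(y, exposure, covariate, cutoff=0.10):
    """Change-in-estimate check: refit the outcome model with and without
    the candidate confounder, and keep it if the exposure coefficient moves
    by more than `cutoff` (relative change). Linear model via ordinary
    least squares; cutoff=0.10 is a common but arbitrary convention."""
    X_full = np.column_stack([np.ones_like(exposure), exposure, covariate])
    X_red  = np.column_stack([np.ones_like(exposure), exposure])
    b_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)
    b_red,  *_ = np.linalg.lstsq(X_red,  y, rcond=None)
    change = abs(b_red[1] - b_full[1]) / abs(b_full[1])
    return change > cutoff

# Hypothetical data in which the covariate truly confounds the exposure:
rng = np.random.default_rng(0)
z = rng.normal(size=500)                        # confounder
x = z + rng.normal(size=500)                    # exposure depends on confounder
y = 1.0 * x + 2.0 * z + rng.normal(size=500)    # outcome depends on both
```

Here `cie_keep_confounder(y, x, z)` returns True: dropping z biases the exposure coefficient well beyond the 10% threshold, which is exactly the situation the CIE rule is meant to flag.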
Hukkerikar, Amol Shivajirao; Kalakul, Sawitree; Sarup, Bent; Young, Douglas M; Sin, Gürkan; Gani, Rafiqul
2012-11-26
The aim of this work is to develop group-contribution(+) (GC(+)) method based property models (combining the group-contribution (GC) method and the atom connectivity index (CI) method) to provide reliable estimations of environment-related properties of organic chemicals together with uncertainties of the estimated property values. For this purpose, a systematic methodology for property modeling and uncertainty analysis is used. The methodology includes a parameter estimation step to determine parameters of property models and an uncertainty analysis step to establish statistical information about the quality of parameter estimation, such as the parameter covariance, the standard errors in predicted properties, and the confidence intervals. For parameter estimation, large data sets of experimentally measured property values of a wide range of chemicals (hydrocarbons, oxygenated chemicals, nitrogenated chemicals, polyfunctional chemicals, etc.) taken from the database of the US Environmental Protection Agency (EPA) and from the USEtox database are used. For property modeling and uncertainty analysis, the Marrero and Gani GC method and the atom connectivity index method have been considered. In total, 22 environment-related properties have been modeled and analyzed, including the fathead minnow 96-h LC(50), Daphnia magna 48-h LC(50), oral rat LD(50), aqueous solubility, bioconcentration factor, permissible exposure limit (OSHA-TWA), photochemical oxidation potential, global warming potential, ozone depletion potential, acidification potential, and emissions (carcinogenic and noncarcinogenic) to urban air, continental rural air, continental fresh water, continental seawater, continental natural soil, and continental agricultural soil.
The application of the developed property models for the estimation of environment-related properties and uncertainties of the estimated property values is highlighted through an illustrative example. The developed property models provide reliable estimates of environment-related properties needed to perform process synthesis, design, and analysis of sustainable chemical processes and allow one to evaluate the effect of uncertainties of estimated property values on the calculated performance of processes giving useful insights into quality and reliability of the design of sustainable processes.
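The basic group-contribution summation underlying the GC(+) models can be sketched as below; the group set and contribution values are invented for illustration and are not parameters of the Marrero-Gani method.

```python
# Minimal group-contribution sketch: a property is estimated as a sum of
# contributions from the structural groups present in a molecule.
# Group names and contribution values below are purely illustrative.
contributions = {"CH3": -0.35, "CH2": -0.12, "OH": 1.60}

def estimate_property(groups):
    """groups: dict mapping group name -> occurrence count.
    Returns the GC estimate: sum over groups of count * contribution."""
    return sum(n * contributions[g] for g, n in groups.items())

# e.g. a 1-propanol-like fragment set: 1x CH3, 2x CH2, 1x OH
value = estimate_property({"CH3": 1, "CH2": 2, "OH": 1})
```

In the real methodology, each contribution would carry a covariance estimated in the parameter-estimation step, so the same summation also propagates an uncertainty for the predicted property.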
Solar energy market penetration models - Science or number mysticism
NASA Technical Reports Server (NTRS)
Warren, E. H., Jr.
1980-01-01
The forecast market potential of a solar technology is an important factor determining its R&D funding. Since solar energy market penetration models are the method used to forecast market potential, they have a pivotal role in a solar technology's development. This paper critiques the applicability of the most common solar energy market penetration models. It is argued that the assumptions underlying the foundations of rigorously developed models, or the absence of a reasonable foundation for the remaining models, restrict their applicability.
Frappier, Vincent; Najmanovich, Rafael J.
2014-01-01
Normal mode analysis (NMA) methods are widely used to study dynamic aspects of protein structures. Two critical components of NMA methods are the level of coarse-graining used to represent protein structures and the choice of potential energy functional form. There is a trade-off between speed and accuracy among the different choices. At one extreme one finds accurate but slow molecular-dynamics based methods with all-atom representations and detailed atomic potentials. At the other extreme are fast elastic network model (ENM) methods with Cα-only representations and simplified potentials based on geometry alone, and thus oblivious to protein sequence. Here we present ENCoM, an Elastic Network Contact Model that employs a potential energy function including a pairwise atom-type non-bonded interaction term, and thus makes it possible to consider the effect of the specific nature of amino acids on dynamics within the context of NMA. ENCoM is as fast as existing ENM methods and outperforms such methods in the generation of conformational ensembles. Here we introduce a new application for NMA methods with the use of ENCoM in the prediction of the effect of mutations on protein stability. While existing methods are based on machine learning or enthalpic considerations, ENCoM, built on vibrational normal modes, rests on entropic considerations. This represents a novel area of application for NMA methods and a novel approach for the prediction of the effect of mutations. We compare ENCoM to a large number of methods in terms of accuracy and self-consistency. We show that the accuracy of ENCoM is comparable to that of the best existing methods. We show that existing methods are biased towards the prediction of destabilizing mutations and that ENCoM is less biased at predicting stabilizing mutations. PMID:24762569
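The contrast drawn above between detailed potentials and geometry-only elastic networks can be illustrated with a minimal Gaussian-network-style sketch (not ENCoM itself); the toy coordinates and the 7 Å cutoff are illustrative assumptions.

```python
import numpy as np

def gnm_modes(coords, cutoff=7.0):
    """Gaussian-network-style sketch of an ENM: build the Kirchhoff
    (connectivity) matrix from Cα positions within `cutoff` angstroms,
    then diagonalize it. Eigenvalues relate to squared mode frequencies;
    the zero eigenvalue corresponds to a rigid-body mode."""
    n = len(coords)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(coords[i] - coords[j]) <= cutoff:
                K[i, j] = K[j, i] = -1.0
    np.fill_diagonal(K, -K.sum(axis=1))   # diagonal = contact degree
    evals, evecs = np.linalg.eigh(K)      # ascending eigenvalues
    return evals, evecs

# Toy "structure": a short chain of pseudo-Cα atoms spaced 3.8 Å apart.
coords = np.array([[3.8 * i, 0.0, 0.0] for i in range(6)])
evals, evecs = gnm_modes(coords)
```

Note how the potential depends only on which residues are in contact, never on residue identity; ENCoM's contribution, per the abstract, is to add an atom-type-dependent term on top of this purely geometric picture.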
A brain-region-based meta-analysis method utilizing the Apriori algorithm.
Niu, Zhendong; Nie, Yaoxin; Zhou, Qian; Zhu, Linlin; Wei, Jieyao
2016-05-18
Brain network connectivity modeling is a crucial method for studying the brain's cognitive functions, and meta-analyses can unearth reliable results from individual studies. Meta-analytic connectivity modeling is a connectivity analysis method based on regions of interest (ROIs) which showed that meta-analyses could be used to discover brain network connectivity. In this paper, we propose a new meta-analysis method based on the Apriori algorithm that can be used to find network connectivity models; it has the potential to derive brain network connectivity models from activation information in the literature without requiring ROIs. This method first extracts activation information from experimental studies that use cognitive tasks of the same category, and then maps the activation information to corresponding brain areas by using the Automated Anatomical Labeling (AAL) atlas, after which the activation rate of these brain areas is calculated. Finally, using these brain areas, a potential brain network connectivity model is calculated based on the Apriori algorithm. The present study used this method to conduct a mining analysis on the citations in a language review article by Price (Neuroimage 62(2):816-847, 2012). The results showed that the obtained network connectivity model was consistent with that reported by Price. The proposed method is helpful for finding brain network connectivity by mining the co-activation relationships among brain regions. Furthermore, results of the co-activation relationship analysis can be used as a priori knowledge for a corresponding dynamic causal modeling analysis, possibly achieving a significant dimension-reducing effect and thus increasing the efficiency of the dynamic causal modeling analysis.
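The co-activation mining step can be sketched as Apriori-style frequent-pair counting over per-study sets of activated regions; the region labels and support threshold below are illustrative, not taken from the study.

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(studies, min_support=0.5):
    """Apriori-style first pass: find region pairs co-activated in at
    least `min_support` fraction of studies. Each study is a set of
    activated brain areas (labels here are illustrative)."""
    counts = Counter()
    for regions in studies:
        for pair in combinations(sorted(regions), 2):
            counts[pair] += 1
    n = len(studies)
    return {p: c / n for p, c in counts.items() if c / n >= min_support}

# Hypothetical per-study activation sets for one task category:
studies = [
    {"IFG", "STG", "MTG"},
    {"IFG", "STG"},
    {"IFG", "MTG", "SMA"},
    {"STG", "MTG"},
]
pairs = frequent_pairs(studies, min_support=0.5)
```

Pairs that clear the support threshold become candidate edges of the connectivity model; a full Apriori run would extend the surviving pairs to larger co-activated sets in the same way.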
Whole Protein Native Fitness Potentials
NASA Astrophysics Data System (ADS)
Faraggi, Eshel; Kloczkowski, Andrzej
2013-03-01
Protein structure prediction can be separated into two tasks: sampling the configuration space of the protein chain, and assigning a fitness between these hypothetical models and the native structure of the protein. One of the more promising developments in this area is that of knowledge-based energy functions. However, standard approaches using pair-wise interactions have shown shortcomings, demonstrated by the superiority of multi-body potentials. These shortcomings arise because residue pair-wise interactions depend on other residues along the chain. We developed a method that uses whole-protein information filtered through machine learners to score protein models based on their likeness to native structures. For all models we calculated parameters associated with the distance to the solvent and with distances between residues. These parameters, in addition to energy estimates obtained by using a four-body potential, DFIRE, and RWPlus, were used as training for machine learners to predict the fitness of the models. Testing on CASP 9 targets showed that our method is superior to DFIRE, RWPlus, and the four-body potential, which are considered standards in the field.
Relaxation and approximate factorization methods for the unsteady full potential equation
NASA Technical Reports Server (NTRS)
Shankar, V.; Ide, H.; Gorski, J.
1984-01-01
The unsteady form of the full potential equation is solved in conservation form, using implicit methods based on approximate factorization and relaxation schemes. A local time linearization for density is introduced to enable solution of the equation in terms of phi, the velocity potential. A novel flux-biasing technique is applied to generate proper forms of the artificial viscosity, to treat hyperbolic regions with shocks and sonic lines present. The wake is properly modeled by accounting not only for jumps in phi, but also for jumps in higher derivatives of phi, obtained from requirements of density continuity. The far field is modeled using the Riemann invariants to simulate nonreflecting boundary conditions. Results are presented for flows over airfoils, cylinders, and spheres. Comparisons are made with available Euler and full potential results.
A Comparison of Neural Networks and Fuzzy Logic Methods for Process Modeling
NASA Technical Reports Server (NTRS)
Cios, Krzysztof J.; Sala, Dorel M.; Berke, Laszlo
1996-01-01
The goal of this work was to analyze the potential of neural networks and fuzzy logic methods to develop approximate response surfaces as process modeling, that is for mapping of input into output. Structural response was chosen as an example. Each of the many methods surveyed are explained and the results are presented. Future research directions are also discussed.
NASA Astrophysics Data System (ADS)
Guillen, George; Rainey, Gail; Morin, Michelle
2004-04-01
Currently, the Minerals Management Service uses the Oil Spill Risk Analysis model (OSRAM) to predict the movement of potential oil spills greater than 1000 bbl originating from offshore oil and gas facilities. OSRAM generates oil spill trajectories using meteorological and hydrological data input from either actual physical measurements or estimates generated from other hydrological models. OSRAM and many other models produce output matrices of average, maximum and minimum contact probabilities from oil spills at specific points (rows) to specific landfall or target segments (columns). Analysts and managers are often interested in identifying geographic areas or groups of facilities that pose similar risks to specific targets or groups of targets if a spill occurred. Unfortunately, due to the potentially large matrix generated by many spill models, this question is difficult to answer without the use of data reduction and visualization methods. In our study we utilized a multivariate statistical method called cluster analysis to group areas of similar risk based on the potential distribution of landfall target trajectory probabilities. We also utilized ArcView™ GIS to display spill launch point groupings. The combination of GIS and multivariate statistical techniques in the post-processing of trajectory model output is a powerful tool for identifying and delineating areas of similar risk from multiple spill sources. We strongly encourage modelers and statistical and GIS software programmers to closely collaborate to produce a more seamless integration of these technologies and approaches to analyzing data. They are complementary methods that strengthen the overall assessment of spill risks.
Editorial: Mathematical Methods and Modeling in Machine Fault Diagnosis
Yan, Ruqiang; Chen, Xuefeng; Li, Weihua; ...
2014-12-18
Modern mathematics has commonly been utilized as an effective tool to model mechanical equipment so that their dynamic characteristics can be studied analytically. This will help identify potential failures of mechanical equipment by observing change in the equipment’s dynamic parameters. On the other hand, dynamic signals are also important and provide reliable information about the equipment’s working status. Modern mathematics has also provided us with a systematic way to design and implement various signal processing methods, which are used to analyze these dynamic signals, and to enhance intrinsic signal components that are directly related to machine failures. This special issue is aimed at stimulating not only new insights on mathematical methods for modeling but also recently developed signal processing methods, such as sparse decomposition with potential applications in machine fault diagnosis. Finally, the papers included in this special issue provide a glimpse into some of the research and applications in the field of machine fault diagnosis through applications of the modern mathematical methods.
Observational constraints on Tachyon and DBI inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Sheng; Liddle, Andrew R., E-mail: sl277@sussex.ac.uk, E-mail: arl@roe.ac.uk
2014-03-01
We present a systematic method for evaluation of perturbation observables in non-canonical single-field inflation models within the slow-roll approximation, which allied with field redefinitions enables predictions to be established for a wide range of models. We use this to investigate various non-canonical inflation models, including Tachyon inflation and DBI inflation. The Lambert W function will be used extensively in our method for the evaluation of observables. In the Tachyon case, in the slow-roll approximation the model can be approximated by a canonical field with a redefined potential, which yields predictions in better agreement with observations than the canonical equivalents. For DBI inflation models we consider contributions from both the scalar potential and the warp geometry. In the case of a quartic potential, we find a formula for the observables under both non-relativistic (sound speed c_s^2 ∼ 1) and relativistic (c_s^2 ≪ 1) behaviour of the scalar DBI inflaton. For a quadratic potential we find two branches in the non-relativistic c_s^2 ∼ 1 case, determined by the competition of model parameters, while for the relativistic case c_s^2 → 0, we find consistency with results already in the literature. We present a comparison to the latest Planck satellite observations. Most of the non-canonical models we investigate, including the Tachyon, are better fits to data than canonical models with the same potential, but we find that DBI models in the slow-roll regime have difficulty in matching the data.
NASA Astrophysics Data System (ADS)
Pochampally, Kishore K.; Gupta, Surendra M.; Kamarthi, Sagar V.
2004-02-01
Although there are many quantitative models in the literature to design a reverse supply chain, every model assumes that all the recovery facilities that are engaged in the supply chain have enough potential to efficiently re-process the incoming used products. Motivated by the risk of re-processing used products in facilities of insufficient potentiality, this paper proposes a method to identify potential facilities in a set of candidate recovery facilities operating in a region where a reverse supply chain is to be established. In this paper, the problem is solved using a newly developed method called physical programming. The most significant advantage of using physical programming is that it allows a decision maker to express his preferences for values of criteria (for comparing the alternatives), not in the traditional form of weights but in terms of ranges of different degrees of desirability, such as ideal range, desirable range, highly desirable range, undesirable range, and unacceptable range. A numerical example is considered to illustrate the proposed method.
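The ranges-of-desirability idea can be sketched as classifying a criterion value into preference ranges; the labels and boundaries below are illustrative, and real physical programming aggregates smooth class functions rather than discrete labels.

```python
def desirability_class(value, boundaries):
    """Classify a criterion value into preference ranges, in the spirit
    of physical programming. `boundaries` are illustrative upper limits
    of: ideal, desirable, tolerable, undesirable; anything beyond the
    last boundary is unacceptable. (Real physical programming uses
    smooth class functions over such ranges, not discrete labels.)"""
    labels = ["ideal", "desirable", "tolerable", "undesirable"]
    for label, upper in zip(labels, boundaries):
        if value <= upper:
            return label
    return "unacceptable"

# e.g. a hypothetical re-processing cost per unit for a candidate
# recovery facility, with invented range boundaries:
cls = desirability_class(7.5, boundaries=[2.0, 5.0, 8.0, 12.0])
```

A facility whose criteria all fall in acceptable ranges would survive the screening; the advantage claimed in the abstract is that the decision maker states these ranges directly instead of abstract weights.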
An efficient soil water balance model based on hybrid numerical and statistical methods
NASA Astrophysics Data System (ADS)
Mao, Wei; Yang, Jinzhong; Zhu, Yan; Ye, Ming; Liu, Zhao; Wu, Jingwei
2018-04-01
Most soil water balance models only consider downward soil water movement driven by gravitational potential, and thus cannot simulate upward soil water movement driven by evapotranspiration, especially in agricultural areas. In addition, the models cannot be used for simulating soil water movement in heterogeneous soils, and usually require many empirical parameters. To resolve these problems, this study derives a new one-dimensional water balance model for simulating both downward and upward soil water movement in heterogeneous unsaturated zones. The new model is based on a hybrid of numerical and statistical methods, and only requires four physical parameters. The model uses three governing equations to consider three terms that impact soil water movement, including the advective term driven by gravitational potential, the source/sink term driven by external forces (e.g., evapotranspiration), and the diffusive term driven by matric potential. The three governing equations are solved separately by using the hybrid numerical and statistical methods (e.g., linear regression method) that consider soil heterogeneity. The four soil hydraulic parameters required by the new model are as follows: saturated hydraulic conductivity, saturated water content, field capacity, and residual water content. The strengths and weaknesses of the new model are evaluated by using two published studies, three hypothetical examples, and a real-world application. The evaluation is performed by comparing the simulation results of the new model with corresponding results presented in the published studies, obtained using HYDRUS-1D and observation data. The evaluation indicates that the new model is accurate and efficient for simulating upward soil water flow in heterogeneous soils with complex boundary conditions. The new model is used for evaluating different drainage functions, and the square drainage function and the power drainage function are recommended.
Computational efficiency of the new model makes it particularly suitable for large-scale simulation of soil water movement, because the new model can be used with coarse discretization in space and time.
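The three-term splitting described above can be sketched for a single time step of a layered column; the drainage rule and parameter values are illustrative simplifications, not the model's calibrated equations, and the diffusive matric-potential term is omitted here for brevity.

```python
import numpy as np

def water_balance_step(theta, ks, theta_s, fc, theta_r, et_sink, dt=1.0):
    """One illustrative operator-split step over a 1-D soil column.
    theta: water content per layer (top first). Terms treated sequentially:
    1) advective (gravity) drainage passes excess over field capacity fc
       downward, limited by saturated hydraulic conductivity ks,
    2) sink term removes evapotranspiration demand from the top layer.
    The diffusive (matric-potential) term of the model is not sketched."""
    theta = theta.copy()
    for i in range(len(theta) - 1):
        drain = min(max(theta[i] - fc, 0.0), ks * dt)
        theta[i] -= drain
        theta[i + 1] = min(theta[i + 1] + drain, theta_s)  # cap at saturation
    theta[0] = max(theta[0] - et_sink * dt, theta_r)       # never below residual
    return theta

# Hypothetical column: a wet top layer drains toward field capacity
# while evapotranspiration dries the surface.
theta = np.array([0.40, 0.25, 0.20])
out = water_balance_step(theta, ks=0.05, theta_s=0.45, fc=0.30,
                         theta_r=0.05, et_sink=0.02)
```

Note that all four parameters of the sketch (ks, theta_s, fc, theta_r) are exactly the four physical parameters the abstract says the model requires.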
Gao, Jiali; Major, Dan T; Fan, Yao; Lin, Yen-Lin; Ma, Shuhua; Wong, Kin-Yiu
2008-01-01
A method for incorporating quantum mechanics into enzyme kinetics modeling is presented. Three aspects are emphasized: 1) combined quantum mechanical and molecular mechanical methods are used to represent the potential energy surface for modeling bond forming and breaking processes, 2) instantaneous normal mode analyses are used to incorporate quantum vibrational free energies into the classical potential of mean force, and 3) multidimensional tunneling methods are used to estimate quantum effects on the reaction coordinate motion. Centroid path integral simulations are described to make quantum corrections to the classical potential of mean force. In this method, the nuclear quantum vibrational and tunneling contributions are not separable. An integrated centroid path integral-free energy perturbation and umbrella sampling (PI-FEP/UM) method along with a bisection sampling procedure is summarized, which provides an accurate, easily convergent method for computing kinetic isotope effects for chemical reactions in solution and in enzymes. In the ensemble-averaged variational transition state theory with multidimensional tunneling (EA-VTST/MT), these three aspects of quantum mechanical effects can be individually treated, providing useful insights into the mechanism of enzymatic reactions. These methods are illustrated by applications to a model process in the gas phase, the decarboxylation reaction of N-methyl picolinate in water, and the proton abstraction and reprotonation process catalyzed by alanine racemase. These examples show that the incorporation of quantum mechanical effects is essential for enzyme kinetics simulations.
NASA Astrophysics Data System (ADS)
Errington, Jeffrey Richard
This work focuses on the development of intermolecular potential models for real fluids. United-atom models have been developed for both non-polar and polar fluids. The models have been optimized to the vapor-liquid coexistence properties. Histogram reweighting techniques were used to calculate phase behavior. The Hamiltonian scaling grand canonical Monte Carlo method was developed to enable the determination of thermodynamic properties of several related Hamiltonians from a single simulation. With this method, the phase behavior of variations of the Buckingham exponential-6 potential was determined. Reservoir grand canonical Monte Carlo simulations were developed to simulate molecules with complex architectures and/or stiff intramolecular constraints. The scheme is based on the creation of a reservoir of ideal chains from which structures are selected for insertion during a simulation. New intermolecular potential models have been developed for water, the n-alkane homologous series, benzene, cyclohexane, carbon dioxide, ammonia and methanol. The models utilize the Buckingham exponential-6 potential to model non-polar interactions and point charges to describe polar interactions. With the exception of water, the new models reproduce experimental saturated densities, vapor pressures and critical parameters to within a few percent. In the case of water, we found a set of parameters that describes the phase behavior better than other available point charge models while giving a reasonable description of the liquid structure. The mixture behavior of water-hydrocarbon mixtures has also been examined. The Henry's law constants of methane, ethane, benzene and cyclohexane in water were determined using Widom insertion and expanded ensemble techniques. In addition the high-pressure phase behavior of water-methane and water-ethane systems was studied using the Gibbs ensemble method. 
The results from this study indicate that it is possible to obtain a good description of the phase behavior of pure components using united-atom models. The mixture behavior of non-polar systems, including highly asymmetric components, was in good agreement with experiment. The calculations for the highly non-ideal water-hydrocarbon mixtures reproduced experimental behavior with varying degrees of success. The results indicate that multibody effects, such as polarizability, must be taken into account when modeling mixtures of polar and non-polar components.
Lattice hydrodynamic model based traffic control: A transportation cyber-physical system approach
NASA Astrophysics Data System (ADS)
Liu, Hui; Sun, Dihua; Liu, Weining
2016-11-01
Lattice hydrodynamic model is a typical continuum traffic flow model, which describes the jamming transition of traffic flow properly. Previous studies of the lattice hydrodynamic model have shown that the use of control methods has the potential to improve traffic conditions. In this paper, a new control method is applied to the lattice hydrodynamic model from a transportation cyber-physical system approach, in which only one lattice site needs to be controlled. The simulation verifies the feasibility and validity of this method, which can ensure efficient and smooth operation of the traffic flow.
NASA Astrophysics Data System (ADS)
Kaftan, Ilknur; Sindirgi, Petek
2013-04-01
Self-potential (SP) is one of the oldest geophysical methods and provides important information about near-surface structures. Several methods have been developed to interpret SP data using simple geometries. This study investigated the inverse solution of a buried, polarized sphere-shaped self-potential (SP) anomaly via Multilayer Perceptron Neural Networks (MLPNN). The polarization angle (α) and depth to the centre of the sphere (h) were estimated. The MLPNN was applied to synthetic and field SP data. In order to assess the capability of the method in detecting the number of sources, MLPNN was applied to different spherical models at different depths and locations. Additionally, the performance of MLPNN was tested by adding random noise to the same synthetic test data. The sphere model parameters were successfully recovered under different S/N ratios. The MLPNN method was then applied to two field examples. The first is a cross section taken from the SP anomaly map of the Ergani-Süleymanköy (Turkey) copper mine. MLPNN was also applied to SP data from the Seferihisar, Izmir (western Turkey) geothermal field. The MLPNN results showed good agreement with the original synthetic data set, and the technique gave satisfactory results after the addition of 5% and 10% Gaussian noise. The MLPNN results were compared to other SP interpretation techniques, such as Normalized Full Gradient (NFG), inverse solution, and nomogram methods, and all of the techniques showed strong similarity. Consequently, the synthetic and field applications of this study show that MLPNN provides reliable evaluation of self-potential data modelled by a sphere.
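The sphere model used above has a standard closed form, which makes a small forward-plus-inversion sketch possible; here a plain grid search stands in for the paper's MLPNN, and the amplitude factor K is assumed known for simplicity.

```python
import numpy as np

def sp_sphere(x, K, h, alpha):
    """SP anomaly of a buried polarized sphere along a profile, using the
    standard closed form for this geometry:
        V(x) = K * (x*cos(alpha) + h*sin(alpha)) / (x^2 + h^2)**1.5
    with depth h, polarization angle alpha, and amplitude factor K."""
    return K * (x * np.cos(alpha) + h * np.sin(alpha)) / (x**2 + h**2) ** 1.5

# Synthetic profile with known parameters, then a grid-search inversion
# for (h, alpha) as a simple stand-in for the MLPNN estimator.
x = np.linspace(-50, 50, 201)
v_obs = sp_sphere(x, K=-1000.0, h=10.0, alpha=np.deg2rad(30))

best = min(
    ((h, a) for h in np.arange(5.0, 20.5, 0.5)
            for a in np.deg2rad(np.arange(10, 81, 5))),
    key=lambda p: np.sum((sp_sphere(x, -1000.0, p[0], p[1]) - v_obs) ** 2),
)
```

On noise-free data the search recovers h = 10 and α = 30° exactly; the paper's contribution is that a trained network does this much faster per anomaly and degrades gracefully under the 5-10% noise levels tested.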
A mean spherical model for soft potentials: The hard core revealed as a perturbation
NASA Technical Reports Server (NTRS)
Rosenfeld, Y.; Ashcroft, N. W.
1978-01-01
The mean spherical approximation for fluids is extended to treat the case of dense systems interacting via soft potentials. The extension takes the form of a generalized statement concerning the behavior of the direct correlation function c(r) and the radial distribution function g(r). From a detailed analysis that views the hard core portion of a potential as a perturbation on the whole, a specific model is proposed which possesses analytic solutions for both Coulomb and Yukawa potentials, in addition to certain other remarkable properties. A variational principle for the model leads to a relatively simple method for obtaining numerical solutions.
NASA Technical Reports Server (NTRS)
Shu, J. Y.
1983-01-01
Two different singularity methods have been used to calculate the potential flow past a three-dimensional non-lifting body. Two separate FORTRAN computer programs have been developed to implement these theoretical models, which will in the future allow inclusion of the fuselage effect in a pair of existing subcritical wing design computer programs. The first method uses higher-order axial singularity distributions to model axisymmetric bodies of revolution in either axial or inclined uniform potential flow. Insetting the singularity line away from the body for blunt noses, together with cosine-type element distributions, was applied to obtain optimal results. Agreement to five significant figures with the exact pressure coefficient has been found for a series of ellipsoids at different angles of attack. Solutions obtained for other axisymmetric bodies compare well with available experimental data. The second method uses distributions of singularities on the body surface, in the form of a discrete vortex lattice. This program is capable of modeling arbitrary three-dimensional non-lifting bodies. Much effort has been devoted to finding the optimal method of calculating the tangential velocity on the body surface, extending techniques previously developed by other workers.
An introduction to using Bayesian linear regression with clinical data.
Baldwin, Scott A; Larson, Michael J
2017-11-01
Statistical training in psychology focuses on frequentist methods. Bayesian methods are an alternative to standard frequentist methods. This article provides researchers with an introduction to fundamental ideas in Bayesian modeling. We use data from an electroencephalogram (EEG) and anxiety study to illustrate Bayesian models. Specifically, the models examine the relationship between error-related negativity (ERN), a particular event-related potential, and trait anxiety. Methodological topics covered include: how to set up a regression model in a Bayesian framework, specifying priors, examining convergence of the model, visualizing and interpreting posterior distributions, interval estimates, expected and predicted values, and model comparison tools. We also discuss situations where Bayesian methods can outperform frequentist methods, as well as how to specify more complicated regression models. Finally, we conclude with recommendations about reporting guidelines for those using Bayesian methods in their own research. We provide data and R code for replicating our analyses. Copyright © 2017 Elsevier Ltd. All rights reserved.
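The core of a Bayesian linear regression with a conjugate normal prior can be written in a few lines. The sketch below uses simulated stand-in data for the ERN/anxiety relationship (the true study uses R and MCMC; the closed-form posterior here assumes a known noise variance, which keeps the example exact and short):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for the ERN/anxiety data (effect sizes are illustrative)
n = 100
anxiety = rng.normal(size=n)
ern = 1.5 - 0.8 * anxiety + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), anxiety])   # design matrix: intercept + predictor
sigma2, tau2 = 0.25, 10.0                    # assumed noise variance, prior variance

# Conjugate posterior for beta ~ N(0, tau2 I) is N(m, S)
S = np.linalg.inv(X.T @ X / sigma2 + np.eye(2) / tau2)
m = S @ X.T @ ern / sigma2

print(np.round(m, 2))  # posterior mean [intercept, slope], near the simulated values
```

With a weak prior and 100 observations, the posterior mean is close to the ordinary least-squares estimate; shrinking tau2 pulls it toward zero, which is the practical effect of a more informative prior.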
Statistical properties of nonlinear one-dimensional wave fields
NASA Astrophysics Data System (ADS)
Chalikov, D.
2005-06-01
A numerical model for long-term simulation of gravity surface waves is described. The model is designed as a component of a coupled Wave Boundary Layer/Sea Waves model for the investigation of small-scale dynamic and thermodynamic interactions between the ocean and atmosphere. Statistical properties of nonlinear wave fields are investigated on the basis of direct hydrodynamic modeling of 1-D potential periodic surface waves. The method is based on a nonstationary conformal surface-following coordinate transformation; this approach reduces the principal equations of potential waves to two simple evolutionary equations for the elevation and the velocity potential on the surface. The numerical scheme is based on a Fourier transform method. High accuracy was confirmed by validation of the nonstationary model against known solutions and by comparison between the results obtained with different horizontal resolutions. The scheme allows reproduction of the propagation of steep Stokes waves for thousands of periods with very high accuracy. The method developed here is applied to simulation of the evolution of wave fields with a large number of modes over many periods of the dominant waves. The statistical characteristics of nonlinear wave fields for waves of different steepness were investigated: spectra, kurtosis and skewness, dispersion relation, and lifetime. The prime result is that the representation of a wave field as a superposition of linear waves is valid only for small amplitudes. It is also shown that nonlinear wave fields are better described as a superposition of Stokes waves than of linear waves. Keywords: potential flow, free surface, conformal mapping, numerical modeling of waves, gravity waves, Stokes waves, breaking waves, freak waves, wind-wave interaction.
Modeling potential habitats for alien species Dreissena polymorpha in continental USA
Mingyang, Li; Yunwei, Ju; Kumar, Sunil; Stohlgren, Thomas J.
2008-01-01
An effective way to minimize the damage caused by invasive species is to block potential invaders from entering suitable areas. A total of 1864 occurrence points with GPS coordinates and 34 environmental variables from Daymet datasets were gathered, and 4 modeling methods, i.e., Logistic Regression (LR), Classification and Regression Trees (CART), Genetic Algorithm for Rule-Set Prediction (GARP), and the maximum entropy method (Maxent), were used to generate potential geographic distributions for the invasive species Dreissena polymorpha in the continental USA. Three statistical criteria, the area under the Receiver Operating Characteristic curve (AUC), Pearson correlation (COR) and the Kappa value, were then calculated to evaluate the performance of the models, followed by analyses of the major contributing variables. Results showed that, in terms of the 3 statistical criteria, the predictions of the 4 ecological niche models were either excellent or outstanding, and Maxent outperformed the others in 3 respects: predicting current distribution habitats, selecting major contributing factors, and quantifying the influence of environmental variables on habitats. Distance to water, elevation, frequency of precipitation and solar radiation were the 4 main environmental forcing factors. The method suggested in this paper can serve as a reference for modeling habitats of alien species in China and provide a direction for preventing Mytilopsis sallei on the Chinese coastline.
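The AUC criterion used to rank these models has a simple rank-based definition: the probability that a randomly chosen presence point receives a higher suitability score than a randomly chosen background point. A minimal sketch (toy scores, not the study's data):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    fraction of presence/background pairs ranked correctly, ties count 0.5."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Toy habitat-suitability scores at presence vs. background points (illustrative)
presence = [0.9, 0.8, 0.75, 0.6]
background = [0.7, 0.4, 0.3, 0.2]
print(auc(presence, background))  # → 0.9375
```

An AUC of 0.5 is random ranking and 1.0 is perfect separation; the "excellent or outstanding" ratings in the abstract correspond to values in the upper part of that range.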
Moller, Peter; Ichikawa, Takatoshi
2015-12-23
In this study, we propose a method to calculate the two-dimensional (2D) fission-fragment yield Y(Z,N) versus both proton and neutron number, with inclusion of odd-even staggering effects in both variables. The approach is to use Brownian shape motion on a macroscopic-microscopic potential-energy surface which, for a particular compound system, is calculated versus four shape variables: elongation (quadrupole moment Q2), neck diameter d, left nascent-fragment spheroidal deformation εf1, and right nascent-fragment deformation εf2, plus two asymmetry variables, namely the proton and neutron numbers in each of the two fragments. The extension of previous models (1) introduces a method to calculate this generalized potential-energy function and (2) allows the correlated transfer of nucleon pairs in one step, in addition to sequential transfer. In the previous version the potential energy was calculated as a function of Z and N of the compound system and its shape, including the asymmetry of the shape. We outline here how to generalize the model from the "compound-system" model to a model where the emerging fragment proton and neutron numbers also enter, over and above the compound-system composition.
NASA Astrophysics Data System (ADS)
Akhoondzadeh, M.
2013-04-01
In this paper, a number of classical and intelligent methods, including interquartile, autoregressive integrated moving average (ARIMA), artificial neural network (ANN) and support vector machine (SVM) methods, have been proposed to quantify potential thermal anomalies around the time of the 11 August 2012 Varzeghan, Iran, earthquake (Mw = 6.4). The data set, comprising Aqua-MODIS land surface temperature (LST) night-time snapshot images, spans 62 days. In order to quantify variations of LST data obtained from satellite images, the air temperature (AT) data derived from the meteorological station close to the earthquake epicenter have been taken into account. For the models examined here, results indicate the following: (i) ARIMA models, which are the most widely used in the time series community for short-term forecasting, are quickly and easily implemented, and can efficiently act through linear solutions. (ii) A multilayer perceptron (MLP) feed-forward neural network can be a suitable non-parametric method to detect the anomalous changes of a non-linear time series such as variations of LST. (iii) Since SVMs are often used due to their many advantages for classification and regression tasks, it can be shown that, if the difference between the value predicted by the SVM method and the observed value exceeds a pre-defined threshold, then the observed value can be regarded as an anomaly. (iv) ANN and SVM methods can be powerful tools for modeling complex phenomena such as earthquake precursor time series, where the underlying data-generating process may be unknown. There is good agreement in the results obtained from the different methods for quantifying potential anomalies in a given LST time series. The detection of the potential thermal anomalies derives credibility from the overall efficiency and potential of the four integrated methods.
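The predict-then-threshold rule in point (iii) is method-agnostic: any one-step-ahead predictor can be plugged in. The sketch below uses a moving-average predictor as a stand-in for the ARIMA/SVM models, flagging an observation as anomalous when its residual exceeds k standard deviations of recent residuals (the temperature values are illustrative):

```python
def detect_anomalies(series, window=5, k=2.0):
    """Flag points where |observed - predicted| exceeds k standard deviations
    of recent residuals. A moving-average predictor stands in here for the
    ARIMA/SVM predictors of the paper."""
    flags = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        pred = sum(past) / window
        resid_sd = (sum((x - pred) ** 2 for x in past) / window) ** 0.5
        flags.append(abs(series[i] - pred) > k * resid_sd)
    return flags

temps = [20.0, 20.5, 20.2, 19.8, 20.1, 20.0, 27.0, 20.3]  # spike at index 6
print(detect_anomalies(temps))  # → [False, True, False]
```

The threshold k and window length control the trade-off between missed anomalies and false alarms, which is the same tuning question the paper addresses with its pre-defined threshold.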
Evaluation of Contamination Inspection and Analysis Methods through Modeling System Performance
NASA Technical Reports Server (NTRS)
Seasly, Elaine; Dever, Jason; Stuban, Steven M. F.
2016-01-01
Contamination is usually identified as a risk on the risk register for sensitive space systems hardware. Despite detailed, time-consuming, and costly contamination control efforts during assembly, integration, and test of space systems, contaminants are still found during visual inspections of hardware. Improved methods are needed to gather information during systems integration to catch potential contamination issues earlier and manage contamination risks better. This research explores evaluation of contamination inspection and analysis methods to determine optical system sensitivity to minimum detectable molecular contamination levels based on IEST-STD-CC1246E non-volatile residue (NVR) cleanliness levels. Potential future degradation of the system is modeled given chosen modules representative of optical elements in an optical system, minimum detectable molecular contamination levels for a chosen inspection and analysis method, and determining the effect of contamination on the system. By modeling system performance based on when molecular contamination is detected during systems integration and at what cleanliness level, the decision maker can perform trades amongst different inspection and analysis methods and determine if a planned method is adequate to meet system requirements and manage contamination risk.
Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M. N.
2018-01-09
Generalized extended Lagrangian Born-Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground-state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate "shadow" potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing them in practice has previously been ad hoc. In this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.
Prediction model of sinoatrial node field potential using high order partial least squares.
Feng, Yu; Cao, Hui; Zhang, Yanbin
2015-01-01
High order partial least squares (HOPLS) is a novel data processing method that is highly suitable for building prediction models with tensor inputs and outputs. The objective of this study is to build a prediction model of the relationship between the sinoatrial node field potential and high glucose using HOPLS. The three sub-signals of the sinoatrial node field potential made up the model's input. The concentration and actuation duration of high glucose made up the model's output. The results showed that, when predicting two-dimensional variables, HOPLS had the same predictive ability as, and a lower dispersion degree than, partial least squares (PLS).
Neville, Timothy J; Salmon, Paul M
2016-07-01
As sport becomes more complex, there is potential for ergonomics concepts to help enhance the performance of sports officials. The concept of Situation Awareness (SA) appears pertinent given the requirement for officials to understand what is going on in order to make decisions. Although numerous models exist, none have been applied to examine officials, and only several recent examples have been applied to sport. This paper examines SA models and methods to identify if any have applicability to officials in sport (OiS). Evaluation of the models and methods identified potential applications of individual, team and systems models of SA. The paper further demonstrates that the Distributed Situation Awareness model is suitable for studying officials in fastball sports. It is concluded that the study of SA represents a key area of multidisciplinary research for both ergonomics and sports science in the context of OiS. Practitioner Summary: Despite obvious synergies, applications of cognitive ergonomics concepts in sport are sparse. This is especially so for Officials in Sport (OiS). This article presents an evaluation of Situation Awareness models and methods, providing practitioners with guidance on which are the most suitable for OiS system design and evaluation.
NASA Astrophysics Data System (ADS)
Xiao, Wenbin; Dong, Wencai
2016-06-01
In the framework of 3D potential flow theory, the Bessho-form translating-pulsating source Green's function in the frequency domain is chosen as the integral kernel in this study, and a hybrid source-and-dipole distribution model of the boundary element method is applied to directly solve the velocity potential for an advancing ship in regular waves. Numerical characteristics of the Green's function show that the contribution of local-flow components to the velocity potential is concentrated in the area near the source point, and the wave component dominates the magnitude of the velocity potential in the far field. Two kinds of mathematical models, with or without local-flow components taken into account, are adopted to numerically calculate the longitudinal motions of Wigley hulls, which demonstrates the applicability of the translating-pulsating source Green's function method for various ship forms. In addition, a mesh analysis of the discrete surface is carried out from the perspective of ship-form characteristics. The study shows that the longitudinal motion results from the simplified model are somewhat greater than the experimental data in the resonant zone, and the model can be used as an effective tool to predict ship seakeeping properties. However, the translating-pulsating source Green's function method is appropriate only for qualitative analysis of motion response in waves if the ship's geometry fails to satisfy the slender-body assumption.
Limited view angle iterative CT reconstruction
NASA Astrophysics Data System (ADS)
Kisner, Sherman J.; Haneda, Eri; Bouman, Charles A.; Skatter, Sondre; Kourinny, Mikhail; Bedford, Simon
2012-03-01
Computed Tomography (CT) is widely used for transportation security to screen baggage for potential threats. For example, many airports use X-ray CT to scan the checked baggage of airline passengers. The resulting reconstructions are then used for both automated and human detection of threats. Recently, there has been growing interest in the use of model-based reconstruction techniques for application in CT security systems. Model-based reconstruction offers a number of potential advantages over more traditional direct reconstruction such as filtered backprojection (FBP). Perhaps one of the greatest advantages is the potential to reduce reconstruction artifacts when non-traditional scan geometries are used. For example, FBP tends to produce very severe streaking artifacts when applied to limited-view data, which can adversely affect subsequent processing such as segmentation and detection. In this paper, we investigate the use of model-based reconstruction in conjunction with limited-view scanning architectures, and we illustrate the value of these methods using transportation security examples. The advantage of limited-view architectures is that they have the potential to reduce the cost and complexity of a scanning system, but their disadvantage is that limited-view data can result in structured artifacts in reconstructed images. Our method of reconstruction depends on the formulation of both a forward projection model for the system and a prior model that accounts for the contents and densities of typical baggage. In order to evaluate our new method, we use realistic models of baggage with randomly inserted simple simulated objects. Using this approach, we show that model-based reconstruction can substantially reduce artifacts and improve important metrics of image quality such as the accuracy of the estimated CT numbers.
Validating a Coarse-Grained Potential Energy Function through Protein Loop Modelling
MacDonald, James T.; Kelley, Lawrence A.; Freemont, Paul S.
2013-01-01
Coarse-grained (CG) methods for sampling protein conformational space have the potential to increase computational efficiency by reducing the degrees of freedom. The gain in computational efficiency of CG methods often comes at the expense of non-protein-like local conformational features. This could cause problems when transitioning to full-atom models in a hierarchical framework. Here, a CG potential energy function was validated by applying it to the problem of loop prediction. A novel method to sample the conformational space of backbone atoms was benchmarked using a standard test set consisting of 351 distinct loops. This method used a sequence-independent CG potential energy function representing the protein using Cα positions only and sampled conformations with a Monte Carlo simulated annealing based protocol. Backbone atoms were added using a method previously described and then gradient-minimised in the Rosetta force field. Despite the CG potential energy function being sequence-independent, the method performed similarly to methods that explicitly use either fragments of known protein backbones with similar sequences or residue-specific φ/ψ-maps to restrict the search space. The method was also able to predict with sub-Angstrom accuracy two out of seven loops from recently solved crystal structures of proteins with low sequence and structure similarity to previously deposited structures in the PDB. The ability to sample realistic loop conformations directly from a potential energy function enables the incorporation of additional geometric restraints and the use of more advanced sampling methods in a way that is not easily possible with fragment replacement methods, and also enables multi-scale simulations for protein design and protein structure prediction. These restraints could be derived from experimental data or could be design restraints in the case of computational protein design.
C++ source code is available for download from http://www.sbg.bio.ic.ac.uk/phyre2/PD2/. PMID:23824634
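The Monte Carlo simulated annealing protocol at the heart of this sampling method follows a standard pattern: propose a perturbation, accept it with the Metropolis probability, and slowly lower the temperature. A minimal sketch on a toy one-dimensional energy function (a stand-in for the CG loop energy; all parameters are illustrative):

```python
import math
import random

random.seed(1)

def energy(x):
    """Toy 1-D double-well stand-in for the coarse-grained loop energy."""
    return (x * x - 1.0) ** 2 + 0.3 * x

def anneal(x, t0=2.0, cooling=0.995, steps=2000):
    """Metropolis simulated annealing: accept uphill moves with
    probability exp(-dE/T) while the temperature T decays geometrically."""
    t = t0
    for _ in range(steps):
        trial = x + random.uniform(-0.2, 0.2)
        de = energy(trial) - energy(x)
        if de < 0 or random.random() < math.exp(-de / t):
            x = trial
        t *= cooling
    return x

x_min = anneal(x=2.0)
print(round(x_min, 2))  # settles near one of the two minima at x ≈ ±1
```

In the actual method the state is a vector of Cα positions and the energy is the CG potential, but the acceptance rule and cooling schedule are the same idea.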
Electronic field emission models beyond the Fowler-Nordheim one
NASA Astrophysics Data System (ADS)
Lepetit, Bruno
2017-12-01
We propose several quantum mechanical models to describe electronic field emission from first principles. These models allow us to correlate quantitatively the electronic emission current with the electrode surface details at the atomic scale. They all rely on electronic potential energy surfaces obtained from three-dimensional density functional theory calculations. They differ in the quantum mechanical methods (exact or perturbative, time dependent or time independent) used to describe tunneling through the electronic potential energy barrier. Comparison of these models with one another and with the standard Fowler-Nordheim model in the context of one-dimensional tunneling allows us to assess the impact of the approximations made in each model on the accuracy of the computed current. Among these methods, the time-dependent perturbative one provides a well-balanced trade-off between accuracy and computational cost.
Karr, Jonathan R; Williams, Alex H; Zucker, Jeremy D; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A; Bot, Brian M; Hoff, Bruce R; Kellen, Michael R; Covert, Markus W; Stolovitzky, Gustavo A; Meyer, Pablo
2015-05-01
Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.
Godin, Bruno; Mayer, Frédéric; Agneessens, Richard; Gerin, Patrick; Dardenne, Pierre; Delfosse, Philippe; Delcarte, Jérôme
2015-01-01
The reliability of different models to predict the biochemical methane potential (BMP) of various plant biomasses was compared using a multispecies dataset. The most reliable BMP prediction models were those based on the near-infrared (NIR) spectrum rather than on the chemical composition. Local (specific regression and non-linear) NIR models were able to estimate the BMP quantitatively, rapidly, cheaply and easily. Such a model could be further used for biomethanation plant management and optimization. The predictions of non-linear models were more reliable than those of linear models. The presentation form (green-dried, silage-dried and silage-wet) of biomasses to the NIR spectrometer did not influence the performance of the NIR prediction models. The accuracy of the BMP method should be improved to further enhance the BMP prediction models. Copyright © 2014 Elsevier Ltd. All rights reserved.
Ionization potential depression in an atomic-solid-plasma picture
NASA Astrophysics Data System (ADS)
Rosmej, F. B.
2018-05-01
Exotic solid-density matter such as heated hollow crystals allows extended material studies, while its physical properties and models, such as the famous ionization potential depression, are presently the subject of renewed controversy. Here we develop an atomic-solid-plasma (ASP) model that permits ionization potential depression studies also for single and multiple core-hole states. Numerical calculations show very good agreement with recently available data, not only in absolute values but also for Z-scaled properties, where currently employed methods fail. For compression much above solid density, the ASP model predicts increased K-edge energies that are related to a rising Fermi surface. This is in good agreement with recent quantum molecular dynamics simulations. For hot dense matter, a quantum-number-dependent finite-temperature ion-sphere model for the optical electron is developed that fits well with line-shift and line-disappearance data from dense laser-produced plasma experiments. Finally, the physical transparency of the ASP picture allows a critical discussion of current methods.
The Electrical Self-Potential Method as a Non-Intrusive Snow-Hydrological Sensor
NASA Astrophysics Data System (ADS)
Kulessa, B.; Thompson, S. S.; Luethi, M. P.; Essery, R.
2015-12-01
Building on growing momentum in the application of geophysical techniques to snow problems and, specifically, on new theory and an electrical geophysical snow-hydrological model published recently, we demonstrate for the first time that the electrical self-potential geophysical technique can sense in-situ bulk meltwater fluxes. This has broad and immediate implications for snow measurement practice, modelling and operational snow forecasting. Our ability to measure, quantify and assimilate hydrological properties and processes of snow in operational models is disproportionately poor compared to the significance of seasonal snowmelt as a global water resource and major risk factor in flood and avalanche forecasting. Encouraged by recent theoretical, modelling and laboratory work, we show here that the diurnal evolution of areally distributed self-potential magnitudes closely tracks that of bulk meltwater fluxes in melting in-situ snowpacks at the Rhone and Jungfraujoch glaciers, Switzerland. Numerical modelling infers temporally evolving liquid water contents in the snowpacks on successive days in close agreement with snow-pit measurements. Allaying previous concerns, the governing physical and chemical properties of snow and meltwater became temporally invariant for modelling purposes. Because the measurement procedure is straightforward and readily automated for continuous monitoring over significant spatial scales, we conclude that the self-potential geophysical method is a highly promising non-intrusive snow-hydrological sensor for measurement practice, modelling and operational snow forecasting.
NASA Astrophysics Data System (ADS)
Jabbari, Ali
2018-01-01
Surface-inset permanent-magnet DC machines can be used as an alternative in automation systems due to their high efficiency and robustness. Magnet segmentation is a common technique for mitigating pulsating torque components in permanent-magnet machines. An accurate computation of the air-gap magnetic field distribution is necessary in order to calculate machine performance. An exact analytical method for magnetic vector potential calculation in surface-inset permanent-magnet machines considering magnet segmentation is proposed in this paper. The analytical method is based on the resolution of the Laplace and Poisson equations, as well as Maxwell's equations, in polar coordinates using the sub-domain method. One of the main contributions of the paper is to derive an expression for the magnetic vector potential in the segmented PM region by using hyperbolic functions. The developed method is applied to the performance computation of two prototype surface-inset segmented-magnet motors under open-circuit and on-load conditions. The results of these models are validated against the finite element method (FEM).
An improved reaction path optimization method using a chain of conformations
NASA Astrophysics Data System (ADS)
Asada, Toshio; Sawada, Nozomi; Nishikawa, Takuya; Koseki, Shiro
2018-05-01
The efficient fast path optimization (FPO) method is proposed to optimize reaction paths on energy surfaces by using chains of conformations. No artificial spring force is used in the FPO method to ensure the equal spacing of adjacent conformations. The FPO method is applied to optimize the reaction path on two model potential surfaces. The use of this method enabled the optimization of the reaction paths with a drastically reduced number of optimization cycles for both potentials. It was also successfully used to define the minimum energy path of the isomerization of the glycine molecule in water.
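A chain-of-conformations optimizer that keeps images equally spaced without spring forces can be sketched in the string-method style: relax each interior image down the gradient, then re-interpolate the chain at equal arc length. This is a generic sketch on a 2-D double-well model potential, not the FPO algorithm itself, whose update details differ:

```python
import numpy as np

def potential_grad(p):
    """Gradient of the model potential V(x, y) = (x^2 - 1)^2 + y^2,
    with minima at (-1, 0) and (1, 0) and a saddle at (0, 0)."""
    x, y = p
    return np.array([4.0 * x * (x * x - 1.0), 2.0 * y])

def redistribute(path):
    """Re-interpolate images at equal arc length: no spring forces needed."""
    d = np.linalg.norm(np.diff(path, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(d)])
    t = np.linspace(0.0, s[-1], len(path))
    return np.column_stack([np.interp(t, s, path[:, k]) for k in range(2)])

# Chain of 11 conformations between the two minima, displaced off the axis
path = np.linspace([-1.0, 0.0], [1.0, 0.0], 11) + [0.0, 0.5]
path[0], path[-1] = [-1.0, 0.0], [1.0, 0.0]

for _ in range(500):
    for i in range(1, len(path) - 1):      # relax interior images
        path[i] -= 0.01 * potential_grad(path[i])
    path = redistribute(path)              # restore equal spacing

print(np.round(path[5], 2))  # middle image relaxes toward the saddle at (0, 0)
```

Alternating relaxation and reparametrization converges the chain to the minimum energy path while keeping adjacent conformations equally spaced, the same requirement FPO meets without artificial springs.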
A sediment graph model based on SCS-CN method
NASA Astrophysics Data System (ADS)
Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.
2008-01-01
This paper proposes new conceptual sediment graph models based on the coupling of popular and extensively used methods, viz., the Nash-model-based instantaneous unit sediment graph (IUSG), the soil conservation service curve number (SCS-CN) method, and the power law. These models vary in their complexity, and this paper tests their performance using data from the Nagwan watershed (area = 92.46 km2) (India). The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the power law, β, is more sensitive than the other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distributions) as well as total sediment yield.
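The SCS-CN component of these models is the standard curve-number runoff equation, which can be stated in a few lines (the rainfall depth and curve number below are illustrative inputs, not the Nagwan data):

```python
def scs_cn_runoff(p_mm, cn, lam=0.2):
    """SCS-CN direct runoff depth Q (mm) from rainfall depth P (mm).
    S is the potential maximum retention (mm) for curve number cn,
    and Ia = lam * S is the initial abstraction (lam = 0.2 is conventional)."""
    s = 25400.0 / cn - 254.0
    ia = lam * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(round(scs_cn_runoff(100.0, 80), 1))  # → 50.5 mm runoff for a 100 mm storm, CN = 80
```

In the sediment graph models this runoff depth drives the IUSG convolution, with the power law relating runoff to sediment mobilization.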
A MISO-ARX-Based Method for Single-Trial Evoked Potential Extraction.
Yu, Nannan; Wu, Lingling; Zou, Dexuan; Chen, Ying; Lu, Hanbing
2017-01-01
In this paper, we propose a novel method for solving the single-trial evoked potential (EP) estimation problem. In this method, the single-trial EP is considered as a complex containing many components, which may originate from different functional brain sites; these components can be distinguished according to their respective latencies and amplitudes and are extracted simultaneously by multiple-input single-output autoregressive modeling with exogenous input (MISO-ARX). The extraction process is performed in three stages: first, we use a reference EP as a template and decompose it into a set of components, which serve as subtemplates for the remaining steps. Then, a dictionary is constructed with these subtemplates, and EPs are preliminarily extracted by sparse coding in order to roughly estimate the latency of each component. Finally, the single-trial measurement is parametrically modeled by MISO-ARX while characterizing spontaneous electroencephalographic activity as an autoregression model driven by white noise and with each component of the EP modeled by autoregressive-moving-average filtering of the subtemplates. Once optimized, all components of the EP can be extracted. Compared with ARX, our method has greater tracking capabilities of specific components of the EP complex as each component is modeled individually in MISO-ARX. We provide exhaustive experimental results to show the effectiveness and feasibility of our method.
Emergent kink statistics at finite temperature
Lopez-Ruiz, Miguel Angel; Yepez-Martinez, Tochtli; Szczepaniak, Adam; ...
2017-07-25
In this paper we use 1D quantum mechanical systems with a Higgs-like interaction potential to study the emergence of topological objects at finite temperature. Two different model systems are studied: the standard double-well potential model and a newly introduced discrete kink model. Using Monte Carlo simulations as well as analytic methods, we demonstrate how kinks become abundant at low temperatures. These results may offer useful insights into how topological phenomena occur in QCD.
Surrogate Based Uni/Multi-Objective Optimization and Distribution Estimation Methods
NASA Astrophysics Data System (ADS)
Gong, W.; Duan, Q.; Huo, X.
2017-12-01
Parameter calibration has been demonstrated to be an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, and weather and climate models. Traditional optimization algorithms usually require a huge number of model evaluations, making dynamic model calibration very difficult, or even computationally prohibitive. With the help of a series of recently developed adaptive surrogate-modelling-based optimization methods (the uni-objective optimization method ASMO, the multi-objective optimization method MO-ASMO, and the probability distribution estimation method ASMO-PODE), the number of model evaluations can be significantly reduced to several hundred, making it possible to calibrate very expensive dynamic models, such as regional high-resolution land surface models, weather forecast models such as WRF, and intermediate-complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based optimization algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook on potential applications of surrogate-based optimization methods.
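The common framework of adaptive surrogate-based optimization mentioned above (fit a cheap surrogate, optimize it, evaluate the expensive model at the proposed point, refit) can be sketched as follows. This is a generic illustration with a radial-basis-function surrogate and a toy one-dimensional "expensive" model; it is not the ASMO implementation, and all parameter values are assumptions.

```python
import numpy as np

def expensive_model(x):
    # Stand-in for a costly dynamic-model evaluation
    return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

rng = np.random.default_rng(1)
X = list(rng.uniform(0, 1, 5))            # small initial design
y = [expensive_model(x) for x in X]

for _ in range(15):                       # adaptive refinement loop
    Xa, ya = np.array(X), np.array(y)
    eps = 10.0                            # RBF shape parameter (assumed)
    K = np.exp(-(eps * (Xa[:, None] - Xa[None, :])) ** 2)
    w = np.linalg.solve(K + 1e-8 * np.eye(len(Xa)), ya)
    grid = np.linspace(0, 1, 401)
    s = np.exp(-(eps * (grid[:, None] - Xa[None, :])) ** 2) @ w
    x_new = grid[np.argmin(s)]            # minimize the cheap surrogate
    X.append(x_new)
    y.append(expensive_model(x_new))      # one true evaluation per iteration

best = X[int(np.argmin(y))]
print(best)
```

The point is the budget: only 20 true-model evaluations are spent in total, which is why this family of methods makes expensive-model calibration feasible.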
Roach, Shane M.; Song, Dong; Berger, Theodore W.
2012-01-01
Activity-dependent variation of neuronal thresholds for action potential (AP) generation is one of the key determinants of spike-train temporal-pattern transformations from presynaptic to postsynaptic spike trains. In this study, we model the nonlinear dynamics of the threshold variation during synaptically driven broadband intracellular activity. First, membrane potentials of single CA1 pyramidal cells were recorded under physiologically plausible broadband stimulation conditions. Second, a method was developed to measure AP thresholds from the continuous recordings of membrane potentials. It involves measuring the turning points of APs by analyzing the third-order derivatives of the membrane potentials. Four stimulation paradigms with different temporal patterns were applied to validate this method by comparing the measured AP turning points and the actual AP thresholds estimated with varying stimulation intensities. Results show that the AP turning points provide consistent measurement of the AP thresholds, except for a constant offset. It indicates that 1) the variation of AP turning points represents the nonlinearities of threshold dynamics; and 2) an optimization of the constant offset is required to achieve accurate spike prediction. Third, a nonlinear dynamical third-order Volterra model was built to describe the relations between the threshold dynamics and the AP activities. Results show that the model can predict threshold accurately based on the preceding APs. Finally, the dynamic threshold model was integrated into a previously developed single neuron model and resulted in a 33% improvement in spike prediction. PMID:22156947
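The turning-point measurement described above (locating the extremum of the third-order derivative of the membrane potential ahead of the upstroke) can be sketched on a synthetic sigmoidal action-potential onset. The waveform, sampling step, and search window here are illustrative assumptions, not the recorded CA1 data.

```python
import numpy as np

t = np.arange(0.0, 100.0, 1.0)             # ms, assumed 1 kHz sampling
# Synthetic AP upstroke: sigmoidal rise from -65 mV, midpoint at 50 ms
V = -65.0 + 100.0 / (1.0 + np.exp(-(t - 50.0) / 2.0))

dV = np.gradient(V, t)                      # first derivative
d3V = np.gradient(np.gradient(dV, t), t)    # third derivative

upstroke = int(np.argmax(dV))               # steepest rise (AP midpoint)
turning = int(np.argmax(d3V[:upstroke]))    # turning point before the upstroke
print(turning, upstroke)
```

On this noise-free sigmoid the third-derivative peak lands a few milliseconds before the steepest rise, which is the behaviour the study exploits (up to the constant offset the authors mention).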
Hindriks, Rikkert; Schmiedt, Joscha; Arsiwalla, Xerxes D; Peter, Alina; Verschure, Paul F M J; Fries, Pascal; Schmid, Michael C; Deco, Gustavo
2017-01-01
Planar intra-cortical electrode (Utah) arrays provide a unique window into the spatial organization of cortical activity. Reconstruction of the current source density (CSD) underlying such recordings, however, requires "inverting" Poisson's equation. For inter-laminar recordings, this is commonly done by the CSD method, which consists in taking the second-order spatial derivative of the recorded local field potentials (LFPs). Although the CSD method has been tremendously successful in mapping the current generators underlying inter-laminar LFPs, its application to planar recordings is more challenging. While for inter-laminar recordings the CSD method seems reasonably robust against violations of its assumptions, it is unclear to what extent this holds for planar recordings. One of the objectives of this study is to characterize the conditions under which the CSD method can be successfully applied to Utah array data. Using forward modeling, we find that for spatially coherent CSDs, the CSD method yields inaccurate reconstructions due to volume-conducted contamination from currents in deeper cortical layers. An alternative approach is to "invert" a constructed forward model. The advantage of this approach is that any a priori knowledge about the geometrical and electrical properties of the tissue can be taken into account. Although several inverse methods have been proposed for LFP data, the applicability of existing electroencephalographic (EEG) and magnetoencephalographic (MEG) inverse methods to LFP data is largely unexplored. Another objective of our study, therefore, is to assess the applicability of the most commonly used EEG/MEG inverse methods to Utah array data. Our main conclusion is that these inverse methods provide more accurate CSD reconstructions than the CSD method. We illustrate the inverse methods using event-related potentials recorded from primary visual cortex of a macaque monkey during a motion discrimination task.
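A minimal sketch of the inter-laminar CSD method referenced above: the negative second-order spatial finite difference of an LFP depth profile, scaled by the conductivity. The conductivity, contact spacing, and synthetic Gaussian profile are assumed values for illustration only.

```python
import numpy as np

sigma = 0.3             # S/m, assumed homogeneous tissue conductivity
h = 100e-6              # m, assumed inter-contact spacing
z = np.arange(16) * h   # 16 laminar contacts

# Synthetic LFP depth profile (arbitrary units): a smooth bump mid-column
phi = np.exp(-((z - z.mean()) / (3 * h)) ** 2)

# CSD method: negative second spatial difference of the LFP
csd = -sigma * (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / h**2
print(csd.shape)   # one estimate per interior contact
```

This one-line estimator is exactly what becomes fragile for planar arrays, where currents below the recording plane contaminate the second derivative.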
New formulation feed method in tariff model of solar PV in Indonesia
NASA Astrophysics Data System (ADS)
Djamal, Muchlishah Hadi; Setiawan, Eko Adhi; Setiawan, Aiman
2017-03-01
Geographically, Indonesia spans 18 latitudes, which correlate strongly with the potential of solar radiation for the implementation of solar photovoltaic (PV) technologies. This is the basic assumption for developing a proportional model of the Feed In Tariff (FIT); consequently, the FIT will vary according to latitude across Indonesia. This paper proposes a new formulation of the solar PV FIT based on the potential of solar radiation and several independent variables, such as latitude, longitude, Levelized Cost of Electricity (LCOE), and socio-economic factors. The Principal Component Regression (PCR) method is used to analyse the correlation of the six independent variables C1-C6, and then three models of FIT are presented. Model FIT-2 is chosen because it has a small residual value and a higher financial benefit compared to the other models. This study reveals that a variable FIT associated with the solar energy potential of each region can reduce the total FIT to be paid by the state by around 80 billion rupiahs over 10 years of 1 MW photovoltaic operation in each of the 34 provinces in Indonesia.
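Principal Component Regression itself, as used above, can be sketched with plain linear algebra: standardize the predictors, project them onto the leading principal components, regress on the scores, and map the coefficients back to predictor space. The data below are synthetic stand-ins for the six variables C1-C6, not the study's dataset.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, k = 34, 6, 3                      # 34 cases, 6 predictors, keep k components
X = rng.standard_normal((n, p))
X[:, 3:] = X[:, :3] + 0.1 * rng.standard_normal((n, 3))   # induce collinearity
beta_true = np.array([1.0, -0.5, 0.2, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.05 * rng.standard_normal(n)

Xc = (X - X.mean(0)) / X.std(0)         # standardize predictors
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
T = Xc @ Vt[:k].T                       # scores on the k leading components
gamma, *_ = np.linalg.lstsq(T, y - y.mean(), rcond=None)
beta_pcr = Vt[:k].T @ gamma             # coefficients back in predictor space
print(beta_pcr.shape)
```

Truncating to the leading components is what makes the regression stable when predictors such as latitude, longitude, and LCOE are strongly correlated.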
A cross-species analysis method to analyze animal models' similarity to human's disease state.
Yu, Shuhao; Zheng, Lulu; Li, Yun; Li, Chunyan; Ma, Chenchen; Li, Yixue; Li, Xuan; Hao, Pei
2012-01-01
Animal models are indispensable tools in studying the causes of human diseases and searching for treatments. The scientific value of an animal model depends on how accurately it mimics the human disease. The primary goal of the current study was to develop a cross-species method that uses animal models' expression data to evaluate their similarity to human diseases and to assess drug molecules' efficiency in drug research. We thereby hoped to show that it is feasible and useful to compare gene expression profiles across species in studies of pathology, toxicology, drug repositioning, and drug action mechanisms. We developed a cross-species analysis method to analyze animal models' similarity to human diseases and their effectiveness in drug research by utilizing existing animal gene expression data in public databases, and mined meaningful information to support drug research, such as potential drug candidates, possible drug repositioning, side effects, and pharmacological analysis. New animal models could be evaluated by our method before they are used in drug discovery. We applied the method to several cases of known animal model expression profiles and obtained useful information for drug research. We found that trichostatin A and some other HDAC inhibitors could elicit very similar responses across cell lines and species at the gene expression level. The mouse hypoxia model could accurately mimic human hypoxia, while the mouse diabetes drug model might have some limitations. The transgenic Alzheimer's mouse was a useful model, and we analyzed in depth the biological mechanisms of some drugs in this case. In addition, all the cases could provide ideas for drug discovery and drug repositioning. We developed a new cross-species gene expression module comparison method that uses animal models' expression data to analyse the effectiveness of animal models in drug research.
Moreover, through data integration, our method could be applied to drug research, for example to identify potential drug candidates, possible drug repositioning opportunities, side effects, and pharmacological information.
Modeling of nanoscale liquid mixture transport by density functional hydrodynamics
NASA Astrophysics Data System (ADS)
Dinariev, Oleg Yu.; Evseev, Nikolay V.
2017-06-01
Modeling of multiphase compositional hydrodynamics at the nanoscale is performed by means of density functional hydrodynamics (DFH). DFH is a method based on density functional theory and continuum mechanics. The method has been developed by the authors over 20 years and used for modeling in various multiphase hydrodynamic applications. In this paper, DFH was further extended to encompass phenomena inherent in liquids at the nanoscale. The new DFH extension is based on the introduction of external potentials for the chemical components. These potentials are localized in the vicinity of solid surfaces and account for the van der Waals forces. A set of numerical examples, including disjoining pressure, film precursors, anomalous rheology, liquid in contact with a heterogeneous surface, capillary condensation, and forward and reverse osmosis, is presented to demonstrate the modeling capabilities.
Advantages of multigrid methods for certifying the accuracy of PDE modeling
NASA Technical Reports Server (NTRS)
Forester, C. K.
1981-01-01
Numerical techniques for assessing and certifying the accuracy of the modeling of partial differential equations (PDEs) to the user's specifications are analyzed. Examples of the certification process with conventional techniques are summarized for the three-dimensional steady-state full potential equation and the two-dimensional steady Navier-Stokes equations using fixed grid (FG) methods. The advantages of the Full Approximation Storage (FAS) scheme of the multigrid (MG) technique of A. Brandt, compared with the conventional certification process of modeling PDEs, are illustrated in one dimension with the transformed potential equation. Inferences are drawn as to how MG will improve the certification process for the numerical modeling of two- and three-dimensional PDE systems. Elements of the error assessment process that are common to FG and MG are analyzed.
BOOTSTRAPPING THE CORONAL MAGNETIC FIELD WITH STEREO: UNIPOLAR POTENTIAL FIELD MODELING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aschwanden, Markus J.; Sandman, Anne W., E-mail: aschwanden@lmsal.co
We investigate the recently quantified misalignment of α_mis ≈ 20°-40° between the three-dimensional geometry of stereoscopically triangulated coronal loops observed with STEREO/EUVI (in four active regions (ARs)) and theoretical (potential or nonlinear force-free) magnetic field models extrapolated from photospheric magnetograms. We develop an efficient method of bootstrapping the coronal magnetic field by forward fitting a parameterized potential field model to the STEREO-observed loops. The potential field model consists of a number of unipolar magnetic charges that are parameterized by decomposing a photospheric magnetogram from the Michelson Doppler Imager. The forward-fitting method yields a best-fit magnetic field model with a reduced misalignment of α_PF ≈ 13°-20°. We also evaluate stereoscopic measurement errors and find a contribution of α_SE ≈ 7°-12°, which constrains the residual misalignment to α_NP ≈ 11°-17°, which is likely due to the nonpotentiality of the ARs. The residual misalignment angle, α_NP, of the potential field due to nonpotentiality is found to correlate with the soft X-ray flux of the AR, which implies a relationship between electric currents and plasma heating.
An analytical drain current model for symmetric double-gate MOSFETs
NASA Astrophysics Data System (ADS)
Yu, Fei; Huang, Gongyi; Lin, Wei; Xu, Chuanzhong
2018-04-01
An analytical surface-potential-based drain current model of symmetric double-gate (sDG) MOSFETs is described as a SPICE-compatible model in this paper. The continuous surface and central potentials from the accumulation to the strong inversion regions are solved from the 1-D Poisson's equation in sDG MOSFETs. Furthermore, the drain current is derived from the charge sheet model as a function of the surface potential. Over a wide range of terminal voltages, doping concentrations, and device geometries, the surface potential calculation scheme and the drain current model are verified by solving the 1-D Poisson's equation with the least squares method and by comparison with Silvaco Atlas simulation results and experimental data, respectively. Such a model can be adopted as a useful platform for developing circuit simulators and provides a clear understanding of sDG MOSFET device physics.
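For a flavor of the implicit surface-potential calculation involved, the sketch below applies Newton iteration to the classic single-gate depletion-approximation relation Vg = Vfb + ψs + γ√ψs. This is a much simpler stand-in for the paper's coupled sDG equations, and the parameter values are illustrative assumptions.

```python
import numpy as np

# Classic depletion-approximation surface-potential relation (a stand-in
# for the paper's sDG system): Vg = Vfb + psi + gamma * sqrt(psi)
Vfb, gamma = -0.9, 0.5    # V and V^(1/2); assumed illustrative values
Vg = 1.2                  # applied gate voltage (V)

psi = 0.5                 # initial guess for the surface potential (V)
for _ in range(50):       # Newton iteration on f(psi) = 0
    f = Vfb + psi + gamma * np.sqrt(psi) - Vg
    df = 1.0 + gamma / (2.0 * np.sqrt(psi))
    psi -= f / df
print(psi)
```

Compact models solve an equation of this kind at every bias point, which is why a continuous, non-iterative (or rapidly converging) surface-potential scheme matters for SPICE compatibility.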
NASA Astrophysics Data System (ADS)
Igarashi, Akito; Tsukamoto, Shinji
2000-02-01
Biological molecular motors drive unidirectional transport and transduce chemical energy to mechanical work. In order to understand this energy conversion, which is a common feature of molecular motors, many workers have studied various physical models consisting of Brownian particles in spatially periodic potentials. Most of these models are, however, based on "single-particle" dynamics and are too simple as models for biological motors, especially for actin-myosin motors, which cause muscle contraction. In this paper, particles coupled by elastic strings in an asymmetric periodic potential are considered as a model for the motors. We investigate the dynamics of the model and calculate the efficiency of energy conversion using a molecular dynamics method. In particular, we find that the velocity and efficiency of the elastically coupled particles, when the natural length of the springs is incommensurate with the period of the periodic potential, are larger than those of the corresponding single-particle model.
NASA Astrophysics Data System (ADS)
Nochi, Kazuki; Kawanai, Taichi; Sasaki, Shoichi
2018-03-01
The quark potential models with an energy-independent central potential have been successful for understanding the conventional charmonium states, especially below the open charm threshold. As one might consider, however, the interquark potential is in general energy-dependent, and this tendency gets stronger in higher-lying states. Confirmation of whether the interquark potential is energy-independent is also important to verify the validity of the quark potential models. In this talk, we examine the energy dependence of the charmonium potential, which can be determined from the Bethe-Salpeter (BS) amplitudes of cc̅ mesons in lattice QCD. We first calculate the BS amplitudes of radially excited charmonium states, the ηc(2S) and ψ(2S) states, using the variational method, and then determine both the quark kinetic mass and the charmonium potential within the HAL QCD method. Through a direct comparison of charmonium potentials determined from both the 1S and 2S states, we confirm that neither the central nor the spin-spin potential shows visible energy dependence at least up to the 2S state.
CONSTRUCTION OF EDUCATIONAL THEORY MODELS.
ERIC Educational Resources Information Center
MACCIA, ELIZABETH S.; AND OTHERS
THIS STUDY DELINEATED MODELS WHICH HAVE POTENTIAL USE IN GENERATING EDUCATIONAL THEORY. A THEORY MODELS METHOD WAS FORMULATED. BY SELECTING AND ORDERING CONCEPTS FROM OTHER DISCIPLINES, THE INVESTIGATORS FORMULATED SEVEN THEORY MODELS. THE FINAL STEP OF DEVISING EDUCATIONAL THEORY FROM THE THEORY MODELS WAS PERFORMED ONLY TO THE EXTENT REQUIRED TO…
Karr, Jonathan R.; Williams, Alex H.; Zucker, Jeremy D.; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A.; Bot, Brian M.; Hoff, Bruce R.; Kellen, Michael R.; Covert, Markus W.; Stolovitzky, Gustavo A.; Meyer, Pablo
2015-01-01
Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model’s structure and in silico “experimental” data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation. PMID:26020786
Made-to-measure modelling of observed galaxy dynamics
NASA Astrophysics Data System (ADS)
Bovy, Jo; Kawata, Daisuke; Hunt, Jason A. S.
2018-01-01
Amongst dynamical modelling techniques, the made-to-measure (M2M) method for modelling steady-state systems is one of the most flexible, allowing non-parametric distribution functions in complex gravitational potentials to be modelled efficiently using N-body particles. Here, we propose and test various improvements to the standard M2M method for modelling observed data, illustrated using the simple set-up of a one-dimensional harmonic oscillator. We demonstrate that nuisance parameters describing the modelled system's orientation with respect to the observer (e.g. an external galaxy's inclination or the Sun's position in the Milky Way), as well as the parameters of an external gravitational field, can be optimized simultaneously with the particle weights. We develop a method for sampling from the high-dimensional uncertainty distribution of the particle weights. We combine this in a Gibbs sampler with samplers for the nuisance and potential parameters to explore the uncertainty distribution of the full set of parameters. We illustrate our M2M improvements by modelling the vertical density and kinematics of F-type stars in Gaia DR1. The novel M2M method proposed here allows full probabilistic modelling of steady-state dynamical systems, allowing uncertainties on the non-parametric distribution function and on nuisance parameters to be taken into account when constraining the dark and baryonic masses of stellar systems.
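The core M2M idea above (iteratively nudging N-body particle weights so the model's observables match the data) can be sketched in one dimension with fixed particle positions and a binned density as the sole observable. The multiplicative "force-of-change" update, step size, and target density here are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-3.0, 3.0, 2000)           # particle positions (held fixed here)
w = np.full(x.size, 1.0 / x.size)          # M2M particle weights

bins = np.linspace(-3.0, 3.0, 13)
centers = 0.5 * (bins[1:] + bins[:-1])
target = np.exp(-centers**2 / 2.0)         # toy "observed" density profile
target /= target.sum()

idx = np.clip(np.digitize(x, bins) - 1, 0, len(centers) - 1)

def model_density(w):
    # Weighted particle counts per bin: the model's observable
    return np.bincount(idx, weights=w, minlength=len(centers))

chi2_init = np.sum((model_density(w) - target) ** 2)
for _ in range(300):                       # weight update toward the data
    delta = (model_density(w) - target) / target
    w *= np.exp(-0.05 * delta[idx])        # shrink weights where model is too high
    w /= w.sum()
chi2_final = np.sum((model_density(w) - target) ** 2)
print(chi2_init, chi2_final)
```

The paper's contributions sit on top of this loop: sampling the weight uncertainties and optimizing nuisance and potential parameters alongside them.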
3D Modelling and Printing Technology to Produce Patient-Specific 3D Models.
Birbara, Nicolette S; Otton, James M; Pather, Nalini
2017-11-10
A comprehensive knowledge of mitral valve (MV) anatomy is crucial in the assessment of MV disease. While the use of three-dimensional (3D) modelling and printing in MV assessment has undergone early clinical evaluation, the precision and usefulness of this technology requires further investigation. This study aimed to assess and validate 3D modelling and printing technology to produce patient-specific 3D MV models. A prototype method for MV 3D modelling and printing was developed from computed tomography (CT) scans of a plastinated human heart. Mitral valve models were printed using four 3D printing methods and validated to assess precision. Cardiac CT and 3D echocardiography imaging data of four MV disease patients was used to produce patient-specific 3D printed models, and 40 cardiac health professionals (CHPs) were surveyed on the perceived value and potential uses of 3D models in a clinical setting. The prototype method demonstrated submillimetre precision for all four 3D printing methods used, and statistical analysis showed a significant difference (p<0.05) in precision between these methods. Patient-specific 3D printed models, particularly using multiple print materials, were considered useful by CHPs for preoperative planning, as well as other applications such as teaching and training. This study suggests that, with further advances in 3D modelling and printing technology, patient-specific 3D MV models could serve as a useful clinical tool. The findings also highlight the potential of this technology to be applied in a variety of medical areas within both clinical and educational settings. Copyright © 2017 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). Published by Elsevier B.V. All rights reserved.
Analysis on Potential of Electric Energy Market based on Large Industrial Consumer
NASA Astrophysics Data System (ADS)
Lin, Jingyi; Zhu, Xinzhi; Yang, Shuo; Xia, Huaijian; Yang, Di; Li, Hao; Lin, Haiying
2018-01-01
The implementation of electric energy substitution by enterprises plays an important role in promoting energy conservation and emission reduction in China. To explore the energy substitution potential of industrial enterprises, this paper simulates and analyses industrial processes, identifies high-energy-consumption processes and equipment, prioritises alternative energy technologies, and determines a predictive value for an enterprise's electric energy substitution potential. The paper constructs an evaluation model of the factors influencing the electric energy substitution potential of industrial enterprises, and uses the combined weight method to determine the weight of each evaluation factor and calculate the target value of the electric energy substitution potential. Taking the iron and steel industry as an example, this method is used to assess the potential. The results show that the method can effectively tap the electric energy substitution potential of the industry.
Properties of a Formal Method to Model Emergence in Swarm-Based Systems
NASA Technical Reports Server (NTRS)
Rouff, Christopher; Vanderbilt, Amy; Truszkowski, Walt; Rash, James; Hinchey, Mike
2004-01-01
Future space missions will require cooperation between multiple satellites and/or rovers. Developers are proposing intelligent autonomous swarms for these missions, but swarm-based systems are difficult or impossible to test with current techniques. This viewgraph presentation examines the use of formal methods in testing swarm-based systems. The potential usefulness of formal methods in modeling the ANTS asteroid encounter mission is also examined.
Inferring the gravitational potential of the Milky Way with a few precisely measured stars
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price-Whelan, Adrian M.; Johnston, Kathryn V.; Hendel, David
2014-10-10
The dark matter halo of the Milky Way is expected to be triaxial and filled with substructure. It is hoped that streams or shells of stars produced by tidal disruption of stellar systems will provide precise measures of the gravitational potential to test these predictions. We develop a method for inferring the Galactic potential with tidal streams based on the idea that the stream stars were once close in phase space. Our method can flexibly adapt to any form for the Galactic potential: it works in phase-space rather than action-space and hence relies neither on our ability to derive actions nor on the integrability of the potential. Our model is probabilistic, with a likelihood function and priors on the parameters. The method can properly account for finite observational uncertainties and missing data dimensions. We test our method on synthetic data sets generated from N-body simulations of satellite disruption in a static, multi-component Milky Way, including a triaxial dark matter halo with observational uncertainties chosen to mimic current and near-future surveys of various stars. We find that with just eight well-measured stream stars, we can infer properties of a triaxial potential with precisions of the order of 5%-7%. Without proper motions, we obtain 10% constraints on most potential parameters and precisions around 5%-10% for recovering missing phase-space coordinates. These results are encouraging for the goal of using flexible, time-dependent potential models combined with larger data sets to unravel the detailed shape of the dark matter distribution around the Milky Way.
Phenomenological and molecular-level Petri net modeling and simulation of long-term potentiation.
Hardy, S; Robillard, P N
2005-10-01
Petri net-based modeling methods have been used in many research projects to represent biological systems. Among these, the hybrid functional Petri net (HFPN) was developed especially for biological modeling, in order to provide biologists with a more intuitive Petri net-based method. In the literature, HFPNs are used to represent kinetic models at the molecular level. We present two models of long-term potentiation, previously represented by differential equations, which we have transformed into HFPN models: a phenomenological synapse model and a molecular-level model of the CaMKII regulation pathway. Through simulation, we obtained results similar to those of previous studies using these models. Our results open the way to a new type of modeling for systems biology in which HFPNs are used to combine different levels of abstraction within one model. This approach can be useful when fully modeling a system at the molecular level is not feasible, either because kinetic data is missing or because a full molecular-level study is not within the scope of the research.
Low-dimensional, morphologically accurate models of subthreshold membrane potential
Kellems, Anthony R.; Roos, Derrick; Xiao, Nan; Cox, Steven J.
2009-01-01
The accurate simulation of a neuron's ability to integrate distributed synaptic input typically requires the simultaneous solution of tens of thousands of ordinary differential equations: to understand how a cell distinguishes between input patterns, we apparently need a model that is biophysically accurate down to the space scale of a single spine, i.e., 1 μm. We argue here that one can retain this highly detailed input structure while dramatically reducing the overall system dimension if one is content to accurately reproduce the associated membrane potential at a small number of places, e.g., at the site of action potential initiation, under subthreshold stimulation. The latter hypothesis permits us to approximate the active cell model with an associated quasi-active model, which in turn we reduce by both time-domain (Balanced Truncation) and frequency-domain (ℋ2 approximation of the transfer function) methods. We apply and contrast these methods on a suite of typical cells, achieving up to four orders of magnitude in dimension reduction and an associated speed-up in the simulation of dendritic democratization and resonance. We also append a threshold mechanism and indicate that this reduction has the potential to deliver an accurate quasi-integrate-and-fire model. PMID:19172386
Global optical model potential for A=3 projectiles
NASA Astrophysics Data System (ADS)
Pang, D. Y.; Roussel-Chomaz, P.; Savajols, H.; Varner, R. L.; Wolski, R.
2009-02-01
A global optical model potential (GDP08) for He3 projectiles has been obtained by simultaneously fitting the elastic scattering data of He3 from targets of 40⩽AT⩽209 at incident energies of 30⩽Einc⩽217 MeV. Uncertainties and correlation coefficients between the global potential parameters were obtained by using the bootstrap statistical method. GDP08 was found to satisfactorily account for the elastic scattering of H3 as well, which makes it a global optical potential for the A=3 nuclei. Optical model calculations using the GDP08 global potential are compared with the experimental angular distributions of differential cross sections for He3-nucleus and H3-nucleus scattering from different targets of 6⩽AT⩽232 at incident energies of 4⩽Einc⩽450 MeV. The optical potential for the doubly-magic nucleus Ca40, the low-energy correction to the real potential for nuclei with 58≲AT≲120 at Einc<30 MeV, the comparison with double-folding model calculations and the CH89 potential, and the spin-orbit potential parameters are discussed.
Symmetries for Light-Front Quantization of Yukawa Model with Renormalization
NASA Astrophysics Data System (ADS)
Żochowski, Jan; Przeszowski, Jerzy A.
2017-12-01
In this work we discuss the Yukawa model with an extra self-interacting scalar field term in D=1+3 dimensions. We present a method for deriving the light-front commutators and anti-commutators from the Heisenberg equations induced by the kinematical generating operator of the translation P+. These Heisenberg equations are the starting point for obtaining the algebra of the (anti-)commutators. Some discrepancies between the existing and the proposed method of quantization are revealed. The Lorentz and CPT symmetries, together with some features of the quantum theory, were applied to obtain the two-point Wightman function for the free fermions. Notably, these Wightman functions were computed without referring to the Fock expansion. The Gaussian effective potential for the Yukawa model was found in terms of the Wightman functions. It was regularized by the space-like point-splitting method. The coupling constants within the model were redefined. The optimum mass parameters remained regularization independent. Finally, the Gaussian effective potential was renormalized.
Improved Cell Culture Method for Growing Contracting Skeletal Muscle Models
NASA Technical Reports Server (NTRS)
Marquette, Michele L.; Sognier, Marguerite A.
2013-01-01
An improved method for culturing immature muscle cells (myoblasts) into a mature skeletal muscle overcomes some of the notable limitations of prior culture methods. The development of the method is a major advance in tissue engineering in that, for the first time, a cell-based model spontaneously fuses and differentiates into masses of highly aligned, contracting myotubes. This method enables (1) the construction of improved two-dimensional (monolayer) skeletal muscle test beds; (2) development of contracting three-dimensional tissue models; and (3) improved transplantable tissues for biomedical and regenerative medicine applications. With adaptation, this method also offers potential application for production of other tissue types (i.e., bone and cardiac) from corresponding precursor cells.
NASA Astrophysics Data System (ADS)
Chen, Xin; Sánchez-Arriaga, Gonzalo
2018-02-01
To model the sheath structure around an emissive probe with cylindrical geometry, the Orbital-Motion theory takes advantage of three conserved quantities (distribution function, transverse energy, and angular momentum) to transform the stationary Vlasov-Poisson system into a single integro-differential equation. For a stationary collisionless unmagnetized plasma, this equation describes self-consistently the probe characteristics. By solving such an equation numerically, parametric analyses for the current-voltage (IV) and floating-potential (FP) characteristics can be performed, which show that: (a) for strong emission, the space-charge effects increase with probe radius; (b) the probe can float at a positive potential relative to the plasma; (c) a smaller probe radius is preferred for the FP method to determine the plasma potential; (d) the work function of the emitting material and the plasma-ion properties do not influence the reliability of the floating-potential method. Analytical analysis demonstrates that the inflection point of an IV curve for non-emitting probes occurs at the plasma potential. The flat potential is not a self-consistent solution for emissive probes.
Shear viscosity of binary mixtures: The Gay-Berne potential
NASA Astrophysics Data System (ADS)
Khordad, R.
2012-05-01
The Gay-Berne (GB) potential model is an interesting and useful model for studying real systems. Using this potential model, we examine the thermodynamic properties of some anisotropic binary mixtures in two different phases, liquid and gas. For this purpose, we apply the integral equation method and numerically solve the Percus-Yevick (PY) integral equation. We then obtain the expansion coefficients of the correlation functions to calculate the thermodynamic properties. Finally, we compare our results with the available experimental data (e.g., HFC-125 + propane, R-125/143a, methanol + toluene, benzene + methanol, cyclohexane + ethanol, benzene + ethanol, carbon tetrachloride + ethyl acetate, and methanol + ethanol). The results show that the GB potential model is capable of predicting the thermodynamic properties of binary mixtures with acceptable accuracy.
A zonal method for modeling powered-lift aircraft flow fields
NASA Technical Reports Server (NTRS)
Roberts, D. W.
1989-01-01
A zonal method for modeling powered-lift aircraft flow fields is based on the coupling of a three-dimensional Navier-Stokes code to a potential flow code. By minimizing the extent of the viscous Navier-Stokes zones, the zonal method can be a cost-effective flow analysis tool. The successful coupling of the zonal solutions provides the viscous/inviscid interactions that are necessary to achieve convergent and unique overall solutions. The feasibility of coupling the two vastly different codes is demonstrated. The interzone boundaries were overlapped to facilitate the passing of boundary condition information between the codes. Routines were developed to extract the normal velocity boundary conditions for the potential flow zone from the viscous zone solution. Similarly, the velocity vector direction along with the total conditions were obtained from the potential flow solution to provide boundary conditions for the Navier-Stokes solution. Studies were conducted to determine the influence of the overlap of the interzone boundaries and the convergence of the zonal solutions on the convergence of the overall solution. The zonal method was applied to a jet impingement problem to model the suckdown effect that results from the entrainment of the inviscid zone flow by the viscous zone jet. The resultant potential flow solution created a lower pressure on the base of the vehicle, which produces the suckdown load. The feasibility of the zonal method was demonstrated. By enhancing the Navier-Stokes code for powered-lift flow fields and optimizing the convergence of the coupled analysis, a practical flow analysis tool will result.
NASA Astrophysics Data System (ADS)
Fraser, S. A.; Wood, N. J.; Johnston, D. M.; Leonard, G. S.; Greening, P. D.; Rossetto, T.
2014-06-01
Evacuation of the population from a tsunami hazard zone is vital to reduce life-loss due to inundation. Geospatial least-cost distance modelling provides one approach to assessing tsunami evacuation potential. Previous models have generally used two static exposure scenarios and fixed travel speeds to represent population movement. Some analyses have assumed immediate evacuation departure time or assumed a common departure time for all exposed population. In this paper, a method is proposed to incorporate time-variable exposure, distributed travel speeds, and uncertain evacuation departure time into an existing anisotropic least-cost path distance framework. The model is demonstrated for a case study of local-source tsunami evacuation in Napier City, Hawke's Bay, New Zealand. There is significant diurnal variation in pedestrian evacuation potential at the suburb-level, although the total number of people unable to evacuate is stable across all scenarios. Whilst some fixed travel speeds can approximate a distributed speed approach, others may overestimate evacuation potential. The impact of evacuation departure time is a significant contributor to total evacuation time. This method improves least-cost modelling of evacuation dynamics for evacuation planning, casualty modelling, and development of emergency response training scenarios.
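The least-cost evacuation framing above can be sketched as a multi-source Dijkstra search over a walking-speed grid, with total evacuation time split into a departure delay plus travel time. The grid, speeds, safe zone, and the 120 s departure delay below are invented illustrative values, not the Napier case-study data.

```python
import heapq

def evacuation_times(speed, safe, cell=10.0):
    """Multi-source Dijkstra: minimum travel time (s) from every cell to the
    nearest safe cell, moving between 4-neighbours at the local walking speed."""
    rows, cols = len(speed), len(speed[0])
    INF = float("inf")
    t = [[INF] * cols for _ in range(rows)]
    pq = [(0.0, r, c) for r in range(rows) for c in range(cols) if safe[r][c]]
    for _, r, c in pq:
        t[r][c] = 0.0
    heapq.heapify(pq)
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > t[r][c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and speed[nr][nc] > 0:
                nd = d + cell / speed[nr][nc]  # time to cross into neighbour
                if nd < t[nr][nc]:
                    t[nr][nc] = nd
                    heapq.heappush(pq, (nd, nr, nc))
    return t

# Uniform 1 m/s walking speed on 10 m cells; safe zone at the top-left cell.
speed = [[1.0] * 3 for _ in range(3)]
safe = [[r == 0 and c == 0 for c in range(3)] for r in range(3)]
t = evacuation_times(speed, safe)

# Total evacuation time = departure delay + travel time (delay is uncertain
# in the paper; a fixed 120 s is assumed here purely for illustration).
total = 120.0 + t[2][2]
```

Distributed travel speeds and time-variable exposure would enter by sampling `speed` and the delay from distributions rather than fixing them.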
Yang, Fei; Xu, Zhencheng; Zhu, Yunqiang; He, Chansheng; Wu, Genyi; Qiu, Jin Rong; Fu, Qiang; Liu, Qingsong
2013-01-01
Agricultural nonpoint source (NPS) pollution has become the most important threat to water environment quality. Understanding the spatial distribution of NPS pollution potential risk is important for taking effective measures to control and reduce NPS pollution. In this study, a Transformed-Agricultural Nonpoint Pollution Potential Index (T-APPI) model was constructed for evaluating the national NPS pollution potential risk; it was combined with remote sensing and geographic information system techniques for evaluation on the large scale and at 1 km2 spatial resolution. This model considers the many factors contributing to NPS pollution, as in the original APPI model, summarized as four indicators: runoff, sediment production, chemical use, and the people and animal load. These four indicators were analysed in detail at 1 km2 spatial resolution throughout China. The T-APPI model separates the four indicators into pollution source factors and transport process factors, and takes their relationship into consideration. The results showed that T-APPI is a credible and convenient method for NPS pollution potential risk evaluation. They also indicated that the highest NPS pollution potential risk is found in middle-southern Jiangsu province. Several other regions, including the North China Plain, the Chengdu Basin Plain, the Jianghan Plain, and cultivated lands in Guangdong and Guangxi provinces, also showed serious NPS pollution potential. This study can provide a scientific reference for predicting future NPS pollution risk throughout China and may be helpful for taking reasonable and effective measures to prevent and control NPS pollution.
Kandel, Saugat; Salomon-Ferrer, Romelia; Larsen, Adrien B; Jain, Abhinandan; Vaidehi, Nagarajan
2016-01-28
The Internal Coordinate Molecular Dynamics (ICMD) method is an attractive molecular dynamics (MD) method for studying the dynamics of bonded systems such as proteins and polymers. It offers a simple venue for coarsening the dynamics model of a system at multiple hierarchical levels. For example, large scale protein dynamics can be studied using torsional dynamics, where large domains or helical structures can be treated as rigid bodies and the loops connecting them as flexible torsions. ICMD with such a dynamic model of the protein, combined with enhanced conformational sampling methods such as temperature replica exchange, allows the sampling of large scale domain motion involving high energy barrier transitions. Once these large scale conformational transitions are sampled, all-torsion, or even all-atom, MD simulations can be carried out for the low energy conformations sampled via coarse grained ICMD to calculate the energetics of distinct conformations. Such hierarchical MD simulations can be carried out with standard all-atom forcefields without the need for compromising on the accuracy of the forces. Using constraints to treat bond lengths and bond angles as rigid can, however, distort the potential energy landscape of the system and reduce the number of dihedral transitions as well as conformational sampling. We present here a two-part solution to overcome such distortions of the potential energy landscape with ICMD models. To alleviate the intrinsic distortion that stems from the reduced phase space in torsional MD, we use the Fixman compensating potential. To additionally alleviate the extrinsic distortion that arises from the coupling between the dihedral angles and bond angles within a force field, we propose a hybrid ICMD method that allows the selective relaxing of bond angles. This hybrid ICMD method bridges the gap between all-atom MD and torsional MD.
We demonstrate with examples that these methods together offer a solution to eliminate the potential energy distortions encountered in constrained ICMD simulations of peptide molecules.
Using self-potential housing technique to model water seepage at the UNHAS housing Antang area
NASA Astrophysics Data System (ADS)
Syahruddin, Muhammad Hamzah
2017-01-01
The earth's surface has an electric potential known as self-potential (SP). One of the causes of the electrical potential at the earth's surface is water seepage into the ground; the electrical potential caused by the velocity of water seeping into the ground is known as streaming potential. This study was conducted to model water seepage into the ground at the Unhas Antang housing area. The self-potential measurements were performed using a simple Sanwa PC500 digital voltmeter with a precision of 0.01 mV, while the coordinates of the measurement points were recorded using a Global Positioning System. The measurement results were plotted using Surfer to image the self-potential distribution at the Unhas Antang housing area. The self-potential data were then processed by forward modeling to obtain a model of water infiltration into the soil. The self-potential at the Unhas Antang housing area ranges from 5 to 23 mV. The measurements were carried out in the rainy season, so it can be assumed that the results are caused by the velocity of water seepage into the ground. The modeled seepage velocity decreases from 2.4 cm/s at the surface to 0.2 cm/s at a depth of 3 meters; that is, the seepage velocity becomes smaller with depth.
Time delayed Ensemble Nudging Method
NASA Astrophysics Data System (ADS)
An, Zhe; Abarbanel, Henry
Optimal nudging based on time-delay embedding theory has shown potential for analysis and data assimilation in the previous literature. To extend its application and promote practical implementation, a new nudging assimilation method based on the time-delay embedding space is presented, and its connection with other standard assimilation methods is studied. Results show that incorporating information from the time series of data can reduce the number of observations needed to preserve the quality of numerical prediction, making the method a potential alternative in the field of data assimilation for large geophysical models.
Resolution and contrast in Kelvin probe force microscopy
NASA Astrophysics Data System (ADS)
Jacobs, H. O.; Leuchtmann, P.; Homan, O. J.; Stemmer, A.
1998-08-01
The combination of atomic force microscopy and Kelvin probe technology is a powerful tool to obtain high-resolution maps of the surface potential distribution on conducting and nonconducting samples. However, the resolution and contrast transfer of this method have not yet been fully understood. To obtain a better quantitative understanding, we introduce a model which correlates the measured potential with the actual surface potential distribution, and we compare numerical simulations of the three-dimensional tip-specimen model with experimental data from test structures. The observed potential is a locally weighted average over all potentials present on the sample surface. The model allows us to calculate these weighting factors and, furthermore, leads to the conclusion that good resolution in potential maps is obtained by long and slender but slightly blunt tips on cantilevers of minimal width and surface area.
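The statement that the observed potential is a locally weighted average over all surface potentials can be illustrated with a 1-D convolution sketch: the measured map is the true potential convolved with a normalized weighting function whose width reflects the tip geometry. The Gaussian weighting, step height, and widths below are assumptions for illustration, not the paper's tip-specimen model.

```python
import numpy as np

# True surface potential: a 0.5 V step at x = 0, on a 10 nm pixel grid.
x = np.arange(-50, 51) * 10e-9
v_true = np.where(x < 0, 0.0, 0.5)

def measure(v, width_px):
    """Measured potential = true potential convolved with normalized weights."""
    w = np.exp(-0.5 * (np.arange(-30, 31) / width_px) ** 2)
    w /= w.sum()  # weighting factors sum to 1
    return np.convolve(v, w, mode="same")

sharp = measure(v_true, 2.0)   # slender tip: narrow weighting, crisp step
blunt = measure(v_true, 10.0)  # wide cantilever: broad weighting, blurred step
```

The measured values always lie between the extreme surface potentials (a weighted average cannot overshoot), and the broader weighting washes out the step, mimicking the loss of lateral resolution.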
Biologically-based pharmacokinetic models are being increasingly used in the risk assessment of environmental chemicals. These models are based on biological, mathematical, statistical and engineering principles. Their potential uses in risk assessment include extrapolation betwe...
Atomicrex—a general purpose tool for the construction of atomic interaction models
NASA Astrophysics Data System (ADS)
Stukowski, Alexander; Fransson, Erik; Mock, Markus; Erhart, Paul
2017-07-01
We introduce atomicrex, an open-source code for constructing interatomic potentials as well as more general types of atomic-scale models. Such effective models are required to simulate extended materials structures comprising many thousands of atoms or more, because electronic structure methods become computationally too expensive at this scale. atomicrex covers a wide range of interatomic potential types and fulfills many needs in atomistic model development. As inputs, it supports experimental property values as well as ab initio energies and forces, to which models can be fitted using various optimization algorithms. The open architecture of atomicrex allows it to be used in custom model development scenarios beyond classical interatomic potentials while thanks to its Python interface it can be readily integrated e.g., with electronic structure calculations or machine learning algorithms.
Accurate and Efficient Approximation to the Optimized Effective Potential for Exchange
NASA Astrophysics Data System (ADS)
Ryabinkin, Ilya G.; Kananenka, Alexei A.; Staroverov, Viktor N.
2013-07-01
We devise an efficient practical method for computing the Kohn-Sham exchange-correlation potential corresponding to a Hartree-Fock electron density. This potential is almost indistinguishable from the exact-exchange optimized effective potential (OEP) and, when used as an approximation to the OEP, is vastly better than all existing models. Using our method one can obtain unambiguous, nearly exact OEPs for any reasonable finite one-electron basis set at the same low cost as the Krieger-Li-Iafrate and Becke-Johnson potentials. For all practical purposes, this solves the long-standing problem of black-box construction of OEPs in exact-exchange calculations.
Chaos in pseudo-Newtonian black holes with halos
NASA Astrophysics Data System (ADS)
Guéron, E.; Letelier, P. S.
2001-03-01
Newtonian as well as special relativistic dynamics are used to study the stability of orbits of a test particle moving around a black hole with a dipolar halo. The black hole is modeled by either the usual monopole potential or the Paczyński-Wiita pseudo-Newtonian potential. The analogous full general relativistic case is also considered. The Poincaré section method and the Lyapunov characteristic exponents show that the orbits for the pseudo-Newtonian potential models are more unstable than the corresponding general relativistic geodesics.
Screening and Evaluation of Medications for Treating Cannabis Use Disorder
Panlilio, Leigh V.; Justinova, Zuzana; Trigo, Jose M.; Le Foll, Bernard
2016-01-01
Cannabis use has been increasingly accepted legally and in public opinion. However, cannabis has the potential to produce adverse physical and mental health effects and can result in cannabis use disorder (CUD) in a substantial percentage of both occasional and daily cannabis users. Many people have difficulty discontinuing use. Therefore, it would be beneficial to develop safe and effective medications for treating CUD. To achieve this, methods have been developed for screening and evaluating potential medications using animal models and controlled experimental protocols in human volunteers. In this chapter we describe: 1) animal models available for assessing the effect of potential medications on specific aspects of CUD; 2) the main findings obtained so far with these animal models; 3) the approaches used to assess potential medications in humans in laboratory experiments and clinical trials; and 4) the effectiveness of several potential pharmacotherapies on the particular aspects of CUD modeled in these human studies. PMID:27055612
Virtual Antiparticle Pairs, the Unit of Charge Epsilon and the QCD Coupling Alpha(sub s)
NASA Technical Reports Server (NTRS)
Batchelor, David
2001-01-01
New semi-classical models of virtual antiparticle pairs are used to compute the pair lifetimes, and good agreement with the Heisenberg lifetimes from quantum field theory (QFT) is found. When the results of the new models and QFT are combined, formulae for e and alpha(sub s)(q) are derived in terms of only h and c. The modeling method applies to both the electromagnetic and color forces. Evaluation of the action integral of the potential field fluctuation for each interaction potential yields approximately h/2 for both electromagnetic and color fluctuations, in agreement with QFT. Thus each model is, to good approximation, a quantized semi-classical representation of such virtual antiparticle pairs. This work reduces the number of arbitrary parameters of the Standard Model by two, from 18 to 16. These are remarkable, unexpected results from a basically classical method.
NASA Astrophysics Data System (ADS)
Zahid, F.; Paulsson, M.; Polizzi, E.; Ghosh, A. W.; Siddiqui, L.; Datta, S.
2005-08-01
We present a transport model for molecular conduction involving an extended Hückel theoretical treatment of the molecular chemistry combined with a nonequilibrium Green's function treatment of quantum transport. The self-consistent potential is approximated by CNDO (complete neglect of differential overlap) method and the electrostatic effects of metallic leads (bias and image charges) are included through a three-dimensional finite element method. This allows us to capture spatial details of the electrostatic potential profile, including effects of charging, screening, and complicated electrode configurations employing only a single adjustable parameter to locate the Fermi energy. As this model is based on semiempirical methods it is computationally inexpensive and flexible compared to ab initio models, yet at the same time it is able to capture salient qualitative features as well as several relevant quantitative details of transport. We apply our model to investigate recent experimental data on alkane dithiol molecules obtained in a nanopore setup. We also present a comparison study of single molecule transistors and identify electronic properties that control their performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hao; Yang, Weitao, E-mail: weitao.yang@duke.edu; Department of Physics, Duke University, Durham, North Carolina 27708
We developed a new method to calculate atomic polarizabilities by fitting to the electrostatic potentials (ESPs) obtained from quantum mechanical (QM) calculations within linear response theory. This parallels the conventional approach of fitting atomic charges based on electrostatic potentials from the electron density. Our ESP fitting is combined with the induced dipole model under the perturbation of uniform external electric fields of all orientations. QM calculations for the linear response to the external electric fields are used as input, fully consistent with the induced dipole model, which itself is a linear response model. The orientation of the uniform external electric fields is integrated over all directions. The integration over orientations, together with the QM linear response calculations, makes the fitting results independent of the orientations and magnitudes of the applied uniform external electric fields. Another advantage of our method is that the QM calculation is needed only once, in contrast to the conventional approach, where many QM calculations are needed for many different applied electric fields. The molecular polarizabilities obtained from our method show comparable accuracy with those from fitting directly to the experimental or theoretical molecular polarizabilities. Since the ESP is fitted directly, atomic polarizabilities obtained from our method are expected to better reproduce the electrostatic interactions. Our method was used to calculate both transferable atomic polarizabilities for polarizable molecular mechanics force fields and nontransferable molecule-specific atomic polarizabilities.
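The fitting idea can be sketched as a linear least-squares problem: under a uniform field E, each atom i carries an induced dipole alpha_i * E, and the induced ESP sampled on a grid is linear in the unknown alpha_i. The toy geometry and "true" polarizabilities below are invented, the reference induced ESP is synthesized rather than taken from a QM linear-response calculation, and mutual induction and the paper's continuous orientation integration are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-atom molecule with known (target) isotropic polarizabilities.
atoms = np.array([[0.0, 0.0, 0.0], [1.1, 0.2, 0.0], [0.3, 1.4, 0.5]])
alpha_true = np.array([1.2, 0.8, 1.9])

# ESP sample points on a shell of radius 4 around the molecule (off the nuclei).
pts = rng.normal(size=(400, 3))
pts = atoms.mean(0) + 4.0 * pts / np.linalg.norm(pts, axis=1, keepdims=True)

def design(E):
    """Induced-ESP design matrix: column i is (E . (r - R_i)) / |r - R_i|^3,
    i.e. the ESP at each sample point per unit polarizability of atom i."""
    d = pts[:, None, :] - atoms[None, :, :]          # (npts, natoms, 3)
    return (d @ E) / np.linalg.norm(d, axis=2) ** 3  # (npts, natoms)

# Stack three field orientations; the "QM" induced ESP is synthesized here
# from alpha_true, so the least-squares fit should recover it exactly.
fields = np.eye(3)
A = np.vstack([design(E) for E in fields])
b = A @ alpha_true
alpha_fit, *_ = np.linalg.lstsq(A, b, rcond=None)
```

In a real application, `b` would come from the QM linear response of the ESP, and the field orientations would be integrated over all directions as the abstract describes.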
Participatory scenario modeling – an interactive method for visualizing the future – is one of the most promising tools for achieving sustainable land use agreements amongst diverse stakeholder groups. The method has the potential to bridge the gap between the high...
Concentration analysis of breast tissue phantoms with terahertz spectroscopy
Truong, Bao C. Q.; Fitzgerald, Anthony J.; Fan, Shuting; Wallace, Vincent P.
2018-01-01
Terahertz imaging has been previously shown to be capable of distinguishing normal breast tissue from its cancerous form, indicating its applicability to breast conserving surgery. The heterogeneous composition of breast tissue is among the main challenges to progressing this potential research towards a practical application. In this paper, two concentration analysis methods are proposed for analyzing phantoms mimicking breast tissue. The dielectric properties and the double Debye parameters were used to determine the phantom composition. The first method is wholly based on the conventional effective medium theory while the second one combines this theoretical model with empirical polynomial models. Through assessing the accuracy of these methods, their potential for application to quantifying breast tissue pathology was confirmed. PMID:29541525
Fulminant liver failure: clinical and experimental study.
Slapak, M.
1975-01-01
Clinical experience of some newer methods of hepatic support is described. The results are unpredictable and far from satisfactory. The need for an animal model in which potential therapeutic methods can be studied is emphasized. Such a model, based on a carefully imposed ischaemic insult to the liver in the absence of portacaval shunting, is described. It is suggested that bacterial presence in the bowel, together with a depression of the liver reticuloendothelial function, plays an important part in the early and rapid mortality of acute liver failure. Temporary auxiliary liver transplantation using an allograft or a closely related primate heterograft seem to be the two best available methods of hepatic support for potentially reversible acute liver failure. PMID:812415
Method for Predicting Thermal Buckling in Rails
DOT National Transportation Integrated Search
2018-01-01
A method is proposed herein for predicting the onset of thermal buckling in rails in such a way as to provide a means of avoiding this type of potentially devastating failure. The method consists of the development of a thermomechanical model of rail...
SemaTyP: a knowledge graph based literature mining method for drug discovery.
Sang, Shengtian; Yang, Zhihao; Wang, Lei; Liu, Xiaoxia; Lin, Hongfei; Wang, Jian
2018-05-30
Drug discovery is the process through which potential new medicines are identified. High-throughput screening and computer-aided drug discovery/design are the two main drug discovery methods for now, and they have successfully discovered a series of drugs. However, development of new drugs is still an extremely time-consuming and expensive process. Biomedical literature contains important clues for the identification of potential treatments and could support experts in biomedicine on their way towards new discoveries. Here, we propose a biomedical knowledge graph-based drug discovery method called SemaTyP, which discovers candidate drugs for diseases by mining published biomedical literature. We first construct a biomedical knowledge graph with the relations extracted from biomedical abstracts; a logistic regression model is then trained by learning the semantic types of paths of known drug therapies existing in the biomedical knowledge graph; finally, the learned model is used to discover drug therapies for new diseases. The experimental results show that our method can not only effectively discover new drug therapies for new diseases, but can also suggest the potential mechanism of action of the candidate drugs. In this paper we propose a novel knowledge graph-based literature mining method for drug discovery. It could be a supplementary method for current drug discovery methods.
Contento, Nicholas M.; Bohn, Paul W.
2014-05-23
While electrochemical methods are well suited for lab-on-a-chip applications, reliably coupling multiple electrode-controlled processes in a single microfluidic channel remains a considerable challenge, because the electric fields driving electrokinetic flow make it difficult to establish a precisely known potential at the working electrode(s). The challenge of coupling electrochemical detection with microchip electrophoresis is well known; however, the problem is general, arising in other multielectrode arrangements with applications in enhanced detection and chemical processing. Here, we study the effects of induced electric fields on voltammetric behavior in a microchannel containing multiple in-channel electrodes, using a Fe(CN)6(3-/4-) model system. When an electric field is induced by applying a cathodic potential at one in-channel electrode, the half-wave potential (E1/2) for the oxidation of ferrocyanide at an adjacent electrode shifts to more negative potentials. The E1/2 value depends linearly on the electric field current at a separate in-channel electrode. The observed shift in E1/2 is quantitatively described by a model which accounts for the change in solution potential caused by the iR drop along the length of the microchannel. The model, which reliably captures changes in electrode location and solution conductivity, apportions the electric field potential between iR drop and electrochemical potential components, enabling the study of microchannel electric field magnitudes at low applied potentials. In the system studied, the iR component of the electric field potential increases exponentially with applied current before reaching an asymptotic value near 80% of the total applied potential. The methods described will aid in the development and interpretation of future microchip electrochemistry methods, particularly those that benefit from the coupling of electrokinetic and electrochemical phenomena at low voltages.
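The linear dependence of the apparent E1/2 on the coupled current can be written down directly: the iR drop offsets the local solution potential, shifting the half-wave potential by -iR. The numbers below (half-wave potential, channel resistance, currents) are illustrative, not values from the paper, and this first-order sketch ignores the saturation of the iR component at high applied current.

```python
# Illustrative ferro/ferricyanide half-wave potential (V vs. reference).
E_HALF_TRUE = 0.23

def apparent_e_half(i_amp, r_ohm, e_half=E_HALF_TRUE):
    """Apparent E1/2 when the solution potential is offset by the iR drop
    produced by current i_amp flowing through channel resistance r_ohm."""
    return e_half - i_amp * r_ohm

# Each microamp through an assumed 100 kOhm channel shifts E1/2 by -0.1 V.
shifts = [apparent_e_half(i, 1.0e5) for i in (0.0, 1e-6, 2e-6)]
```

Equal current increments give equal E1/2 shifts, reproducing the linear i-vs-E1/2 relationship reported in the abstract.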
A Multidimensional B-Spline Correction for Accurate Modeling of Sugar Puckering in QM/MM Simulations.
Huang, Ming; Dissanayake, Thakshila; Kuechler, Erich; Radak, Brian K; Lee, Tai-Sung; Giese, Timothy J; York, Darrin M
2017-09-12
The computational efficiency of approximate quantum mechanical methods allows their use for the construction of multidimensional reaction free energy profiles. It has recently been demonstrated that quantum models based on the neglect of diatomic differential overlap (NDDO) approximation have difficulty modeling deoxyribose and ribose sugar ring puckers, which limits their predictive value in the study of RNA and DNA systems. A method has been introduced in our previous work to improve the description of the sugar puckering conformational landscape that uses a multidimensional B-spline correction map (BMAP correction) for systems involving intrinsically coupled torsion angles. This method greatly improved the adiabatic potential energy surface profiles of DNA and RNA sugar rings relative to high-level ab initio methods, even for highly problematic NDDO-based models. In the present work, a BMAP correction is developed, implemented, and tested in molecular dynamics simulations using the AM1/d-PhoT semiempirical Hamiltonian for biological phosphoryl transfer reactions. Results are presented for gas-phase adiabatic potential energy surfaces of RNA transesterification model reactions and condensed-phase QM/MM free energy surfaces for nonenzymatic and RNase A-catalyzed transesterification reactions. The results show that the BMAP correction is stable, efficient, and leads to improvement in both the potential energy and free energy profiles for the reactions studied, as compared with ab initio and experimental reference data. Exploration of the effect of the size of the quantum mechanical region indicates the best agreement with experimental reaction barriers occurs when the full CpA dinucleotide substrate is treated quantum mechanically with the sugar pucker correction.
Conductivity map from scanning tunneling potentiometry.
Zhang, Hao; Li, Xianqi; Chen, Yunmei; Durand, Corentin; Li, An-Ping; Zhang, X-G
2016-08-01
We present a novel method for extracting two-dimensional (2D) conductivity profiles from large electrochemical potential datasets acquired by scanning tunneling potentiometry of a 2D conductor. The method consists of a data preprocessing procedure to reduce or eliminate noise and a numerical conductivity reconstruction. The preprocessing procedure employs an inverse consistent image registration method to align the forward and backward scans of the same line for each image line, followed by a total variation (TV) based image restoration method to obtain a (nearly) noise-free potential from the aligned scans. The preprocessed potential is then used for numerical conductivity reconstruction, based on a TV model solved by an accelerated alternating direction method of multipliers. The method is demonstrated on a measurement of the grain boundary of monolayer graphene, yielding a nearly 10:1 ratio for the grain boundary resistivity over bulk resistivity.
α-induced reactions on 115In: Cross section measurements and statistical model analysis
NASA Astrophysics Data System (ADS)
Kiss, G. G.; Szücs, T.; Mohr, P.; Török, Zs.; Huszánk, R.; Gyürky, Gy.; Fülöp, Zs.
2018-05-01
Background: α-nucleus optical potentials are basic ingredients of statistical model calculations used in nucleosynthesis simulations. While the nucleon+nucleus optical potential is fairly well known, for the α+nucleus optical potential several different parameter sets exist and large deviations, reaching sometimes even an order of magnitude, are found between the cross section predictions calculated using different parameter sets. Purpose: A measurement of the radiative α-capture and the α-induced reaction cross sections on the nucleus 115In at low energies allows a stringent test of statistical model predictions. Since experimental data are scarce in this mass region, this measurement can be an important input to test the global applicability of α+nucleus optical model potentials and further ingredients of the statistical model. Methods: The reaction cross sections were measured by means of the activation method. The produced activities were determined by off-line detection of the γ rays and characteristic x rays emitted during the electron capture decay of the produced Sb isotopes. The 115In(α,γ)119Sb and 115In(α,n)118mSb reaction cross sections were measured between Ec.m. = 8.83 and 15.58 MeV, and the 115In(α,n)118gSb reaction was studied between Ec.m. = 11.10 and 15.58 MeV. The theoretical analysis was performed within the statistical model. Results: The simultaneous measurement of the (α,γ) and (α,n) cross sections allowed us to determine a best-fit combination of all parameters for the statistical model. The α+nucleus optical potential is identified as the most important input for the statistical model. The best fit is obtained for the new Atomki-V1 potential, and good reproduction of the experimental data is also achieved for the first version of the Demetriou potentials and the simple McFadden-Satchler potential. The nucleon optical potential, the γ-ray strength function, and the level density parametrization are also constrained by the data although there is no unique best-fit combination. Conclusions: The best-fit calculations allow us to extrapolate the low-energy (α,γ) cross section of 115In to the astrophysical Gamow window with reasonable uncertainties. However, still further improvements of the α-nucleus potential are required for a global description of elastic (α,α) scattering and α-induced reactions in a wide range of masses and energies.
NASA Technical Reports Server (NTRS)
Strutzenberg, L. L.; Dougherty, N. S.; Liever, P. A.; West, J. S.; Smith, S. D.
2007-01-01
This paper details advances being made in the development of Reynolds-Averaged Navier-Stokes numerical simulation tools, models, and methods for the integrated Space Shuttle Vehicle at launch. The conceptual model and modeling approach described includes the development of multiple computational models to appropriately analyze the potential debris transport for critical debris sources at Lift-Off. The conceptual model described herein involves the integration of propulsion analysis for the nozzle/plume flow with the overall 3D vehicle flowfield at Lift-Off. Debris Transport Analyses are being performed using the Shuttle Lift-Off models to assess the risk to the vehicle from Lift-Off debris and appropriately prioritized mitigation of potential debris sources to continue to reduce vehicle risk. These integrated simulations are being used to evaluate plume-induced debris environments where the multi-plume interactions with the launch facility can potentially accelerate debris particles toward the vehicle.
SCS-CN based time-distributed sediment yield model
NASA Astrophysics Data System (ADS)
Tyagi, J. V.; Mishra, S. K.; Singh, Ranvir; Singh, V. P.
2008-05-01
A sediment yield model is developed to estimate the temporal rates of sediment yield from rainfall events on natural watersheds. The model utilizes the SCS-CN based infiltration model for computation of rainfall-excess rate, and the SCS-CN-inspired proportionality concept for computation of sediment-excess. For computation of sedimentographs, the sediment-excess is routed to the watershed outlet using a single linear reservoir technique. Analytical development of the model shows that the ratio of the potential maximum erosion (A) to the potential maximum retention (S) of the SCS-CN method is constant for a watershed. The model is calibrated and validated on a number of events using the data of seven watersheds from India and the USA. Representative values of the A/S ratio computed for the watersheds from calibration are used for the validation of the model. The encouraging results of the proposed simple four-parameter model exhibit its potential for field application.
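The rainfall-excess computation at the core of this model can be sketched as follows. This is the standard SCS curve-number formula with the conventional initial-abstraction ratio λ = 0.2 assumed; it is a generic illustration, not the authors' calibrated routing code:

```python
def scs_cn_runoff(P, CN, lam=0.2):
    """Rainfall excess Q (mm) from event rainfall P (mm) via the SCS-CN method.

    S is the potential maximum retention and Ia = lam * S the initial
    abstraction (lam = 0.2 is the conventional default)."""
    S = 25400.0 / CN - 254.0          # retention (mm) for CN on the 0-100 scale
    Ia = lam * S
    if P <= Ia:
        return 0.0                    # all rainfall abstracted, no excess
    return (P - Ia) ** 2 / (P - Ia + S)

# Example: a 50 mm storm on a watershed with CN = 80
q = scs_cn_runoff(50.0, 80.0)
```

For CN = 80 this gives S = 63.5 mm, Ia = 12.7 mm, and about 13.8 mm of rainfall excess; the sediment-excess proportionality and linear-reservoir routing steps of the paper would build on this quantity.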
NASA Astrophysics Data System (ADS)
Du, Jinsong; Chen, Chao; Lesur, Vincent; Lane, Richard; Wang, Huilin
2015-06-01
We examined the mathematical and computational aspects of the magnetic potential, vector and gradient tensor fields of a tesseroid in a geocentric spherical coordinate system (SCS). This work is relevant for 3-D modelling that is performed with lithospheric vertical scales and global, continent or large regional horizontal scales. The curvature of the Earth is significant at these scales and hence, a SCS is more appropriate than the usual Cartesian coordinate system (CCS). The 3-D arrays of spherical prisms (SP; `tesseroids') can be used to model the response of volumes with variable magnetic properties. Analytical solutions do not exist for these model elements and numerical or mixed numerical and analytical solutions must be employed. We compared various methods for calculating the response in terms of accuracy and computational efficiency. The methods were (1) the spherical coordinate magnetic dipole method (MD), (2) variants of the 3-D Gauss-Legendre quadrature integration method (3-D GLQI) with (i) different numbers of nodes in each of the three directions, and (ii) models where we subdivided each SP into a number of smaller tesseroid volume elements, (3) a procedure that we term revised Gauss-Legendre quadrature integration (3-D RGLQI) where the magnetization direction which is constant in a SCS is assumed to be constant in a CCS and equal to the direction at the geometric centre of each tesseroid, (4) the Taylor's series expansion method (TSE) and (5) the rectangular prism method (RP). In any realistic application, both the accuracy and the computational efficiency factors must be considered to determine the optimum approach to employ. In all instances, accuracy improves with increasing distance from the source. It is higher in the percentage terms for potential than the vector or tensor response. The tensor errors are the largest, but they decrease more quickly with distance from the source. 
In our comparisons of relative computational efficiency, we found that the magnetic potential takes less time to compute than the vector response, which in turn takes less time to compute than the tensor gradient response. The MD method takes less time to compute than either the TSE or RP methods. The efficiency of the GLQI and RGLQI methods depends on the polynomial order, but the response typically takes longer to compute than it does for the other methods. The optimum method is a complex function of the desired accuracy, the size of the volume elements, the element latitude and the distance between the source and the observation. For a model of global extent with typical model element size (e.g. 1 degree horizontally and 10 km radially) and observations at altitudes of 10s to 100s of km, a mixture of methods based on the horizontal separation between source and observation would be the optimum approach. To demonstrate the RGLQI method described within this paper, we applied it to the computation of the response for a global magnetization model for observations at 300 and 30 km altitude.
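The 3-D Gauss-Legendre quadrature integration idea behind the GLQI-family methods can be sketched as below: an n-node rule per axis integrates a function over a tesseroid (spherical prism), including the r² cos(lat) volume Jacobian. The integrand here is a trivial constant (so the integral is the tesseroid volume), not the paper's magnetic kernel:

```python
import numpy as np

def glq_tesseroid(f, r_lim, lat_lim, lon_lim, n=3):
    """Integrate f(r, lat, lon) over a tesseroid with an n-node
    Gauss-Legendre rule per axis; angles in radians."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]

    def scale(lim):
        a, b = lim                              # map [-1, 1] -> [a, b]
        return 0.5 * (b - a) * x + 0.5 * (b + a), 0.5 * (b - a) * w

    r, wr = scale(r_lim)
    la, wla = scale(lat_lim)
    lo, wlo = scale(lon_lim)
    total = 0.0
    for ri, wri in zip(r, wr):
        for laj, wlaj in zip(la, wla):
            for lok, wlok in zip(lo, wlo):
                # r^2 * cos(lat) is the spherical volume Jacobian
                total += wri * wlaj * wlok * f(ri, laj, lok) * ri**2 * np.cos(laj)
    return total

# Volume of a 1 deg x 1 deg x 30 km tesseroid near the Earth's surface
vol = glq_tesseroid(lambda r, la, lo: 1.0,
                    (6341e3, 6371e3), (0.0, np.radians(1)), (0.0, np.radians(1)))
```

Subdividing each tesseroid or raising n trades computation time for accuracy, which is the trade-off the paper quantifies for the actual magnetic kernels.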
Chen, X.; Ashcroft, I. A.; Wildman, R. D.; Tuck, C. J.
2015-01-01
A method using experimental nanoindentation and inverse finite-element analysis (FEA) has been developed that enables the spatial variation of material constitutive properties to be accurately determined. The method was used to measure property variation in a three-dimensional printed (3DP) polymeric material. The accuracy of the method is dependent on the applicability of the constitutive model used in the inverse FEA, hence four potential material models: viscoelastic, viscoelastic–viscoplastic, nonlinear viscoelastic and nonlinear viscoelastic–viscoplastic were evaluated, with the latter enabling the best fit to experimental data. Significant changes in material properties were seen in the depth direction of the 3DP sample, which could be linked to the degree of cross-linking within the material, a feature inherent in a UV-cured layer-by-layer construction method. It is proposed that the method is a powerful tool in the analysis of manufacturing processes with potential spatial property variation that will also enable the accurate prediction of final manufactured part performance. PMID:26730216
[Series: Utilization of Differential Equations and Methods for Solving Them in Medical Physics (2)].
Murase, Kenya
2015-01-01
In this issue, symbolic methods for solving differential equations are first introduced. Among them, the Laplace transform method is presented together with some examples, in which it is applied to solving the differential equations derived from a two-compartment kinetic model and an equivalent circuit model for membrane potential. Second, series expansion methods for solving differential equations are introduced together with some examples, in which these methods are used to solve Bessel's and Legendre's differential equations. In the next issue, simultaneous differential equations and various methods for solving them will be introduced together with some examples in medical physics.
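As an example of the Laplace-transform approach mentioned above, the sketch below evaluates the bi-exponential solution of a generic two-compartment kinetic model, obtained by transforming the coupled ODEs, factoring the denominator in the s-domain, and inverting by partial fractions. The rate-constant names k10, k12, k21 are illustrative, not taken from the article:

```python
import math

def two_compartment_c1(t, C0, k10, k12, k21):
    """Central-compartment concentration C1(t) for
        dC1/dt = -(k10 + k12)*C1 + k21*C2
        dC2/dt =  k12*C1 - k21*C2,   C1(0) = C0, C2(0) = 0.
    Laplace transform gives C1(s) = C0*(s + k21) / ((s + a)(s + b)),
    where a, b are roots of s^2 + (k10+k12+k21)*s + k10*k21 = 0;
    inverting yields a bi-exponential."""
    s_sum = k10 + k12 + k21          # a + b
    s_prod = k10 * k21               # a * b
    disc = math.sqrt(s_sum**2 - 4.0 * s_prod)
    a = 0.5 * (s_sum + disc)
    b = 0.5 * (s_sum - disc)
    A = C0 * (k21 - a) / (b - a)     # partial-fraction residues
    B = C0 * (k21 - b) / (a - b)
    return A * math.exp(-a * t) + B * math.exp(-b * t)

c0 = two_compartment_c1(0.0, 1.0, 0.5, 0.3, 0.2)   # recovers C0 at t = 0
```

At t = 0 the two exponential terms sum to C0, and the initial slope equals -(k10 + k12)·C0, both of which follow directly from the original ODEs and serve as quick consistency checks on the inversion.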
A Spectral Method for Spatial Downscaling
Reich, Brian J.; Chang, Howard H.; Foley, Kristen M.
2014-01-01
Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrate model output with monitoring data to adjust for model biases and improve spatial prediction. In this article, we propose a new spectral method to study and exploit complex relationships between model output and monitoring data. Spectral methods allow us to estimate the relationship between model output and monitoring data separately at different spatial scales, and to use model output for prediction only at the appropriate scales. The proposed method is computationally efficient and can be implemented using standard software. We apply the method to compare Community Multiscale Air Quality (CMAQ) model output with ozone measurements in the United States in July 2005. We find that CMAQ captures large-scale spatial trends, but has low correlation with the monitoring data at small spatial scales. PMID:24965037
Black hole algorithm for determining model parameter in self-potential data
NASA Astrophysics Data System (ADS)
Sungkono; Warnana, Dwa Desa
2018-01-01
Analysis of self-potential (SP) data is increasingly popular in geophysics due to its relevance in many applications. However, the inversion of SP data is often highly nonlinear. Consequently, local search algorithms, commonly based on gradient approaches, have often failed to find the global optimum solution in nonlinear problems. The black hole algorithm (BHA) was proposed as a solution to such problems. As the name suggests, the algorithm was constructed based on the black hole phenomenon. This paper investigates the application of BHA to inversions of field and synthetic self-potential (SP) data. The inversion results show that BHA accurately determines model parameters and model uncertainty. This indicates that BHA has high potential as an innovative approach for SP data inversion.
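A minimal sketch of the black hole algorithm in its standard formulation: candidate "stars" drift toward the current best solution (the black hole), and any star crossing the event horizon is replaced by a fresh random one. The misfit function below is a toy quadratic, not an SP forward model, and all population/iteration settings are illustrative:

```python
import random

def black_hole_minimize(f, bounds, n_stars=20, iters=200, seed=1):
    """Minimal black hole algorithm sketch for minimizing f over box bounds."""
    rng = random.Random(seed)
    rand_star = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    stars = [rand_star() for _ in range(n_stars)]
    bh = min(stars, key=f)                         # black hole = best star so far
    for _ in range(iters):
        for i, s in enumerate(stars):
            # move each star a random fraction toward the black hole
            new = [x + rng.random() * (b - x) for x, b in zip(s, bh)]
            new = [min(max(x, lo), hi) for x, (lo, hi) in zip(new, bounds)]
            stars[i] = new
            if f(new) < f(bh):
                bh = new
        fitness = [f(s) for s in stars]
        radius = f(bh) / (sum(fitness) + 1e-12)    # event-horizon radius
        for i, s in enumerate(stars):
            dist = sum((x - b) ** 2 for x, b in zip(s, bh)) ** 0.5
            if dist < radius and s != bh:
                stars[i] = rand_star()             # swallowed: re-randomize
    return bh, f(bh)

# Toy "inversion": recover the minimum of a smooth quadratic misfit
best, misfit = black_hole_minimize(lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2,
                                   [(-5, 5), (-5, 5)])
```

The event-horizon re-randomization is what distinguishes BHA from a plain attraction-to-best search: it re-injects diversity once the swarm collapses onto the black hole, which is the property the abstract credits for escaping local optima in nonlinear SP inversion.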
NASA Astrophysics Data System (ADS)
Rudzinski, Joseph F.
Atomically-detailed molecular dynamics simulations have emerged as one of the most powerful theoretic tools for studying complex, condensed-phase systems. Despite their ability to provide incredible molecular insight, these simulations are insufficient for investigating complex biological processes, e.g., protein folding or molecular aggregation, on relevant length and time scales. The increasing scope and sophistication of atomically-detailed models has motivated the development of "hierarchical" approaches, which parameterize a low resolution, coarse-grained (CG) model based on simulations of an atomically-detailed model. The utility of hierarchical CG models depends on their ability to accurately incorporate the correct physics of the underlying model. One approach for ensuring this "consistency" between the models is to parameterize the CG model to reproduce the structural ensemble generated by the high resolution model. The many-body potential of mean force is the proper CG energy function for reproducing all structural distributions of the atomically-detailed model, at the CG level of resolution. However, this CG potential is a configuration-dependent free energy function that is generally too complicated to represent or simulate. The multiscale coarse-graining (MS-CG) method employs a generalized Yvon-Born-Green (g-YBG) relation to directly determine a variationally optimal approximation to the many-body potential of mean force. The MS-CG/g-YBG method provides a convenient and transparent framework for investigating the equilibrium structure of the system, at the CG level of resolution. In this work, we investigate the fundamental limitations and approximations of the MS-CG/g-YBG method. Throughout the work, we propose several theoretic constructs to directly relate the MS-CG/g-YBG method to other popular structure-based CG approaches. 
We investigate the physical interpretation of the MS-CG/g-YBG correlation matrix, the quantity responsible for disentangling the various contributions to the average force on a CG site. We then employ an iterative extension of the MS-CG/g-YBG method that improves the accuracy of a particular set of low order correlation functions relative to the original MS-CG/g-YBG model. We demonstrate that this method provides a powerful framework for identifying the precise source of error in an MS-CG/g-YBG model. We then propose a method for identifying an optimal CG representation, prior to the development of the CG model. We employ these techniques together to demonstrate that in the cases where the MS-CG/g-YBG method fails to determine an accurate model, a fundamental problem likely exists with the chosen CG representation or interaction set. Additionally, we explicitly demonstrate that while the iterative model successfully improves the accuracy of the low order structure, it does so by distorting the higher order structural correlations relative to the underlying model. Finally, we apply these methods to investigate the utility of the MS-CG/g-YBG method for developing models for systems with complex intramolecular structure. Overall, our results demonstrate the power of the g-YBG framework for developing accurate CG models and for investigating the driving forces of equilibrium structures for complex condensed-phase systems. This work also explicitly motivates future development of bottom-up CG methods and highlights some outstanding problems in the field.
Terribile, L C; Diniz-Filho, J A F; De Marco, P
2010-05-01
The use of ecological niche models (ENM) to generate potential geographic distributions of species has rapidly increased in ecology, conservation and evolutionary biology. Many methods are available, and the most used are the Maximum Entropy Method (MAXENT) and the Genetic Algorithm for Rule Set Production (GARP). Recent studies have shown that MAXENT performs better than GARP. Here we used the statistical methods of ROC-AUC (area under the Receiver Operating Characteristics curve) and bootstrap to evaluate the performance of GARP and MAXENT in generating potential distribution models for 39 species of New World coral snakes. We found that values of AUC for GARP ranged from 0.923 to 0.999, whereas those for MAXENT ranged from 0.877 to 0.999. On the whole, the differences in AUC were very small, but for 10 species GARP outperformed MAXENT. Means and standard deviations for 100 bootstrapped samples with sample sizes ranging from 3 to 30 species did not show any trends towards deviations from a zero difference in AUC values of GARP minus AUC values of MAXENT. Our results suggest that further studies are still necessary to establish under which circumstances the statistical performance of the methods varies. However, it is also important to consider the possibility that this empirical inductive reasoning may fail in the end, because we almost certainly could not establish all potential scenarios generating variation in the relative performance of the models.
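The AUC and paired-bootstrap comparison described above can be sketched as follows: a Mann-Whitney form of the AUC plus a bootstrap of per-species AUC differences (GARP minus MAXENT). The sample values are illustrative, not the study's data:

```python
import random

def auc(scores_pos, scores_neg):
    """AUC = P(score_pos > score_neg), Mann-Whitney form with tie correction."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def bootstrap_auc_diff(auc_a, auc_b, n_boot=1000, seed=0):
    """Mean and standard deviation of the mean paired AUC difference over
    bootstrap resamples of the per-species AUC values."""
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(auc_a, auc_b)]
    means = []
    for _ in range(n_boot):
        sample = [diffs[rng.randrange(len(diffs))] for _ in diffs]
        means.append(sum(sample) / len(sample))
    mu = sum(means) / n_boot
    sd = (sum((m - mu) ** 2 for m in means) / n_boot) ** 0.5
    return mu, sd
```

A bootstrap distribution of mean differences centred on zero, as the abstract reports, indicates no systematic advantage of one method over the other across species.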
Novel Method to Assess Arterial Insufficiency in Rodent Hindlimb
Ziegler, Matthew A.; DiStasi, Matthew R.; Miller, Steven J.; Dalsing, Michael C.; Unthank, Joseph L.
2015-01-01
Background: Lack of techniques to assess maximal blood flow capacity thwarts the use of rodent models of arterial insufficiency to evaluate therapies for intermittent claudication. We evaluated femoral vein outflow (VO) in combination with stimulated muscle contraction as a potential method to assess functional hindlimb arterial reserve and therapeutic efficacy in a rodent model of subcritical limb ischemia. Materials and methods: VO was measured with perivascular flow probes at rest and during stimulated calf muscle contraction in young healthy rats (Wistar Kyoto, WKY; lean Zucker, LZR) and rats with cardiovascular risk factors (Spontaneously Hypertensive, SHR; Obese Zucker, OZR) with acute and/or chronic femoral arterial occlusion. Therapeutic efficacy was assessed by administration of Ramipril or Losartan to SHR after femoral artery excision. Results: VO measurement in WKY demonstrated the utility of this method to assess hindlimb perfusion at rest and during calf muscle contraction. While application to diseased models (OZR, SHR) demonstrated normal resting perfusion compared to contralateral limbs, a significant reduction in reserve capacity was uncovered with muscle stimulation. Administration of Ramipril and Losartan demonstrated significant improvement in functional arterial reserve. Conclusion: The results demonstrate that this novel method to assess distal limb perfusion in small rodents with subcritical limb ischemia is sufficient to unmask perfusion deficits not apparent at rest, detect impaired compensation in diseased animal models with risk factors, and assess therapeutic efficacy. The approach provides a significant advance in methods to investigate potential mechanisms and novel therapies for subcritical limb ischemia in pre-clinical rodent models. PMID:26850199
Neural Energy Supply-Consumption Properties Based on Hodgkin-Huxley Model
2017-01-01
Electrical activity is the foundation of the neural system. Coding theories that describe neural electrical activity in terms of action potential timing or frequency have been thoroughly studied. However, an alternative way to study coding questions is the energy method, which is more global and economical. In this study, we clearly defined and calculated neural energy supply and consumption based on the Hodgkin-Huxley model, during the firing of action potentials and during subthreshold activities, using ion-counting and power-integral models. Furthermore, we analyzed the energy properties of each ion channel and found that, under the two circumstances, the power synchronization of ion channels and the energy utilization ratio differ significantly. This is particularly true of the energy utilization ratio, which can rise above 100% during subthreshold activity, revealing an overdraft property of energy use. These findings demonstrate the distinct status of the energy properties during neuronal firing and subthreshold activities. Meanwhile, after introducing a synapse energy model, this research can be generalized to energy calculation of a neural network. This is potentially important for understanding the relationship between dynamical network activities and cognitive behaviors. PMID:28316842
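The per-channel power computation underlying such an energy analysis can be sketched with the classic Hodgkin-Huxley conductances, using P = I·(V − E) = g·(V − E)² for each ionic pathway. The parameters below are the standard textbook HH values (mS/cm², mV), not necessarily the exact values used in the study:

```python
def channel_powers(V, m, h, n,
                   gNa=120.0, gK=36.0, gL=0.3,
                   ENa=50.0, EK=-77.0, EL=-54.4):
    """Instantaneous power dissipated by each Hodgkin-Huxley conductance,
    P = I*(V - E) = g_eff*(V - E)^2, given membrane potential V (mV) and
    gating variables m, h, n. Units: uW/cm^2 for the classic parameters."""
    p_na = gNa * m**3 * h * (V - ENa) ** 2   # sodium channel power
    p_k = gK * n**4 * (V - EK) ** 2          # potassium channel power
    p_l = gL * (V - EL) ** 2                 # leak power
    return p_na, p_k, p_l

# Approximate resting-state gating values for V = -65 mV
p_na, p_k, p_l = channel_powers(-65.0, 0.053, 0.596, 0.318)
```

Integrating these instantaneous powers over time (the power-integral model mentioned above) and comparing against ion-counting estimates of pump workload is how supply and consumption can be put on a common energetic footing.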
NASA Astrophysics Data System (ADS)
Jilinski, Pavel; Meju, Max A.; Fontes, Sergio L.
2013-10-01
The commonest technique for determination of the continental-oceanic crustal boundary or transition (COB) zone is based on locating and visually correlating bathymetric and potential field anomalies and constructing crustal models constrained by seismic data. In this paper, we present a simple method for spatial correlation of bathymetric and potential field geophysical anomalies. Angular differences between gradient directions are used to determine different types of correlation between gravity and bathymetric or magnetic data. It is found that the relationship between bathymetry and gravity anomalies can be correctly identified using this method. It is demonstrated, by comparison with previously published models for the southwest African margin, that this method enables the demarcation of the zone of transition from oceanic to continental crust, assuming that it is associated with geophysical anomalies, which can be correlated using gradient directions rather than magnitudes. We also applied this method, supported by 2-D gravity modelling, to the more complex Liberia and Cote d'Ivoire-Ghana sectors of the West African transform margin and obtained results that are in remarkable agreement with past predictions of the COB in that region. We suggest the use of this method for a first-pass interpretation as a prelude to rigorous modelling of the COB in frontier areas.
How Qualitative Methods Can be Used to Inform Model Development.
Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna
2017-06-01
Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second, using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches that identifies where problems are occurring and further guidance is needed. It can also be applied within model development to facilitate the input of experts to structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.
A non-linear data mining parameter selection algorithm for continuous variables
Razavi, Marianne; Brady, Sean
2017-01-01
In this article, we propose a new data mining algorithm, by which one can both capture the non-linearity in data and also find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that would capture complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method that we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. This algorithm introduces interpretable parameters by transforming the original inputs and also provides a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least square regression framework. This new automatic variable transformation and model selection method could offer an optimal and stable model that minimizes the mean square error and variability, while combining all possible subset selection methodology with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829
UAV path planning using artificial potential field method updated by optimal control theory
NASA Astrophysics Data System (ADS)
Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long
2016-04-01
The unmanned aerial vehicle (UAV) path planning problem is an important assignment in UAV mission planning. Starting from the artificial potential field (APF) UAV path planning method, the problem is reconstructed as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is translated into an unconstrained optimisation problem with the help of slack variables in this paper. The functional optimisation method is applied to reform this problem into an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. Then, the path planning problem is solved with the help of the optimal control method. A path following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of this method. Finally, the simulation results show that the improved method is more effective in path planning. In the planning space, the calculated path is shorter and smoother than that obtained using the traditional APF method. In addition, the improved method solves the dead point problem effectively.
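For reference, the classic artificial-potential-field force that this formulation takes as its starting point can be sketched as follows: linear attraction to the goal plus repulsion from obstacles inside an influence radius d0 (the standard Khatib-style APF). The gain values are illustrative, and this sketch omits the paper's additional control force and optimal-control correction:

```python
import math

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0):
    """Resultant 2-D APF force on the vehicle at pos: attraction toward
    goal plus repulsion from each obstacle closer than d0."""
    fx = k_att * (goal[0] - pos[0])          # attractive component
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < d0:                     # repulsion only inside d0
            mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 3
            fx += mag * dx                   # push away from the obstacle
            fy += mag * dy
    return fx, fy
```

The "dead point" problem mentioned above arises exactly when these attractive and repulsive terms cancel before the goal is reached; the paper's additional control force is introduced to escape such equilibria.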
Zhang, Jian-Hua; Böhme, Johann F
2007-11-01
In this paper we report an adaptive regularization network (ARN) approach to realizing fast blind separation of cerebral evoked potentials (EPs) from background electroencephalogram (EEG) activity with no need to make any explicit assumption on the statistical (or deterministic) signal model. The ARNs are proposed to construct nonlinear EEG and EP signal models. A novel adaptive regularization training (ART) algorithm is proposed to improve the generalization performance of the ARN. Two adaptive neural modeling methods based on the ARN are developed and their implementation and performance analysis are also presented. The computer experiments using simulated and measured visual evoked potential (VEP) data have shown that the proposed ARN modeling paradigm yields computationally efficient and more accurate VEP signal estimation owing to its intrinsic model-free and nonlinear processing characteristics.
Benchmark results in the 2D lattice Thirring model with a chemical potential
NASA Astrophysics Data System (ADS)
Ayyar, Venkitesh; Chandrasekharan, Shailesh; Rantaharju, Jarno
2018-03-01
We study the two-dimensional lattice Thirring model in the presence of a fermion chemical potential. Our model is asymptotically free and contains massive fermions that mimic a baryon and light bosons that mimic pions. Hence, it is a useful toy model for QCD, especially since it, too, suffers from a sign problem in the auxiliary field formulation in the presence of a fermion chemical potential. In this work, we formulate the model in both the world line and fermion-bag representations and show that the sign problem can be completely eliminated with open boundary conditions when the fermions are massless. Hence, we are able to accurately compute a variety of interesting quantities in the model, and these results could provide benchmarks for other methods that are being developed to solve the sign problem in QCD.
A method to assess the potential effects of air pollution mitigation on healthcare costs.
Sætterstrøm, Bjørn; Kruse, Marie; Brønnum-Hansen, Henrik; Bønløkke, Jakob Hjort; Flachs, Esben Meulengracht; Sørensen, Jan
2012-01-01
The aim of this study was to develop a method to assess the potential effects of air pollution mitigation on healthcare costs and to apply this method to assess the potential savings related to a reduction in fine particulate matter in Denmark. The effects of air pollution on health were used to identify "exposed" individuals (i.e., cases). Coronary heart disease, stroke, chronic obstructive pulmonary disease, and lung cancer were considered to be associated with air pollution. We used propensity score matching, two-part estimation, and Lin's method to estimate healthcare costs. Subsequently, we multiplied the number of cases saved due to mitigation by the healthcare costs to arrive at an expression for healthcare cost savings. The potential cost saving in the healthcare system arising from a modelled reduction in air pollution was estimated at €0.1-2.6 million per 100,000 inhabitants for the four diseases. We have illustrated an application of a method to assess the potential changes in healthcare costs due to a reduction in air pollution. The method relies on a large volume of administrative data and combines a number of established methods for epidemiological analysis.
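The final aggregation step (avoided cases multiplied by matched excess healthcare costs, summed over diseases) reduces to a weighted sum; the disease names and figures below are purely illustrative, not the study's Danish estimates:

```python
def healthcare_savings(saved_cases_per_100k, cost_per_case):
    """Potential saving per 100,000 inhabitants: for each disease,
    multiply the avoided cases by the matched-cohort excess
    healthcare cost per case, then sum over diseases."""
    return sum(saved_cases_per_100k[d] * cost_per_case[d]
               for d in saved_cases_per_100k)
```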
Burkitt, A N
2006-08-01
The integrate-and-fire neuron model describes the state of a neuron in terms of its membrane potential, which is determined by the synaptic inputs and the injected current that the neuron receives. When the membrane potential reaches a threshold, an action potential (spike) is generated. This review considers the model in which the synaptic input varies periodically and is described by an inhomogeneous Poisson process, with both current and conductance synapses. The focus is on the mathematical methods that allow the output spike distribution to be analyzed, including first passage time methods and the Fokker-Planck equation. Recent interest in the response of neurons to periodic input has in part arisen from the study of stochastic resonance, which is the noise-induced enhancement of the signal-to-noise ratio. Networks of integrate-and-fire neurons behave in a wide variety of ways and have been used to model a variety of neural, physiological, and psychological phenomena. The properties of the integrate-and-fire neuron model with synaptic input described as a temporally homogeneous Poisson process are reviewed in an accompanying paper (Burkitt in Biol Cybern, 2006).
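A minimal sketch of the model under discussion, assuming a current-synapse leaky integrate-and-fire neuron driven by a sinusoidally modulated (inhomogeneous) Poisson input; all parameter values are illustrative, not drawn from the review:

```python
import math
import random

def poisson(rng, lam):
    """Knuth's algorithm for one Poisson(lam) draw (lam small per step)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_lif(t_end=1.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0,
                 rate0=800.0, rate1=400.0, freq=10.0, w=0.1, seed=0):
    """Leaky integrate-and-fire neuron with periodically modulated
    Poisson input rate (Hz). Returns the list of output spike times."""
    rng = random.Random(seed)
    v, spikes, t = 0.0, [], 0.0
    while t < t_end:
        rate = rate0 + rate1 * math.sin(2.0 * math.pi * freq * t)
        n_in = poisson(rng, rate * dt)      # input spikes this step
        v += dt * (-v / tau) + w * n_in     # leak plus synaptic kicks
        if v >= v_th:                       # threshold crossing: spike
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes
```

The output spike train's phase distribution relative to the input modulation is the kind of quantity the first-passage-time and Fokker-Planck analyses in the review characterize analytically.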
Zhang, Xue-Ying; Wen, Zong-Guo
2014-11-01
To evaluate the reduction potential of industrial water pollutant emissions and to study the application of technology simulation in pollutant control and environmental management, an Industrial Reduction Potential Analysis and Environment Management (IRPAEM) model was developed based on a coupling of "material-process-technology-product". The model integrated bottom-up modeling and scenario analysis methods and was applied to China's paper industry. Results showed that under the CM scenario, the reduction potentials of wastewater, COD, and ammonia nitrogen would reach 7 x 10(8) t, 39 x 10(4) t, and 0.3 x 10(4) t, respectively, in 2015, and 13.8 x 10(8) t, 56 x 10(4) t, and 0.5 x 10(4) t, respectively, in 2020. Strengthening end-of-pipe treatment would still be the key method for reducing emissions during 2010-2020, while the reduction effect of structural adjustment would be more obvious during 2015-2020. Pollutant generation could basically reach the domestic or international advanced level of cleaner production in 2015 and 2020; wastewater and ammonia nitrogen would basically meet the emission standards in 2015 and 2020, while COD would not.
Multiscale modeling of a rectifying bipolar nanopore: Comparing Poisson-Nernst-Planck to Monte Carlo
NASA Astrophysics Data System (ADS)
Matejczyk, Bartłomiej; Valiskó, Mónika; Wolfram, Marie-Therese; Pietschmann, Jan-Frederik; Boda, Dezső
2017-03-01
In the framework of a multiscale modeling approach, we present a systematic study of a bipolar rectifying nanopore using a continuum and a particle simulation method. The common ground in the two methods is the application of the Nernst-Planck (NP) equation to compute ion transport in the framework of the implicit-water electrolyte model. The difference is that the Poisson-Boltzmann theory is used in the Poisson-Nernst-Planck (PNP) approach, while the Local Equilibrium Monte Carlo (LEMC) method is used in the particle simulation approach (NP+LEMC) to relate the concentration profile to the electrochemical potential profile. Since we consider a bipolar pore which is short and narrow, we perform simulations using two-dimensional PNP. In addition, results of a non-linear version of PNP that takes crowding of ions into account are shown. We observe that the mean field approximation applied in PNP is appropriate to reproduce the basic behavior of the bipolar nanopore (e.g., rectification) for varying parameters of the system (voltage, surface charge, electrolyte concentration, and pore radius). We present current data that characterize the nanopore's behavior as a device, as well as concentration, electrical potential, and electrochemical potential profiles.
ANI-1, A data set of 20 million calculated off-equilibrium conformations for organic molecules
NASA Astrophysics Data System (ADS)
Smith, Justin S.; Isayev, Olexandr; Roitberg, Adrian E.
2017-12-01
One of the grand challenges in modern theoretical chemistry is designing and implementing approximations that expedite ab initio methods without loss of accuracy. Machine learning (ML) methods are emerging as a powerful approach to constructing various forms of transferable atomistic potentials. They have been successfully applied in a variety of applications in chemistry, biology, catalysis, and solid-state physics. However, these models are heavily dependent on the quality and quantity of data used in their fitting. Fitting highly flexible ML potentials, such as neural networks, comes at a cost: a vast amount of reference data is required to properly train these models. We address this need by providing access to a large computational DFT database, which consists of more than 20 M off-equilibrium conformations for 57,462 small organic molecules. We believe it will become a new standard benchmark for comparison of current and future methods in the ML potential community.
Hartman, Matthew E; Dai, Dao-Fu; Laflamme, Michael A
2016-01-15
Human pluripotent stem cells (PSCs) represent an attractive source of cardiomyocytes with potential applications including disease modeling, drug discovery and safety screening, and novel cell-based cardiac therapies. Insights from embryology have contributed to the development of efficient, reliable methods capable of generating large quantities of human PSC-cardiomyocytes with cardiac purities ranging up to 90%. However, for human PSCs to meet their full potential, the field must identify methods to generate cardiomyocyte populations that are uniform in subtype (e.g. homogeneous ventricular cardiomyocytes) and have more mature structural and functional properties. For in vivo applications, cardiomyocyte production must be highly scalable and clinical grade, and we will need to overcome challenges including graft cell death, immune rejection, arrhythmogenesis, and tumorigenic potential. Here we discuss the types of human PSCs, commonly used methods to guide their differentiation into cardiomyocytes, the phenotype of the resultant cardiomyocytes, and the remaining obstacles to their successful translation. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Gianotti, Rebecca L.; Bomblies, Arne; Eltahir, Elfatih A. B.
2009-08-01
This paper describes the first use of Hydrology-Entomology and Malaria Transmission Simulator (HYDREMATS), a physically based distributed hydrology model, to investigate environmental management methods for malaria vector control in the Sahelian village of Banizoumbou, Niger. The investigation showed that leveling of topographic depressions where temporary breeding habitats form during the rainy season, by altering pool basin microtopography, could reduce the pool persistence time to less than the time needed for establishment of mosquito breeding, approximately 7 days. Undertaking soil surface plowing can also reduce pool persistence time by increasing the infiltration rate through an existing pool basin. Reduction of the pool persistence time to less than the rainfall interstorm period increases the frequency of pool drying events, removing habitat for subadult mosquitoes. Both management approaches could potentially be considered within a given context. This investigation demonstrates that management methods that modify the hydrologic environment have significant potential to contribute to malaria vector control in water-limited, Sahelian Africa.
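The pool-persistence argument can be captured by a toy daily water balance; only the loss mechanisms (infiltration and evaporation) and the roughly 7-day breeding-establishment threshold come from the abstract, and all numeric values in the sketch are invented:

```python
def pool_persistence(depth_mm, infiltration_mm_per_day, evap_mm_per_day,
                     rain_mm=()):
    """Days until a breeding pool dries out under a simple daily water
    balance: losses to infiltration and evaporation, gains from rain.
    Mosquito breeding is assumed to establish only if the pool persists
    about 7 days or longer (threshold noted in the abstract)."""
    days = 0
    rain = list(rain_mm)
    while depth_mm > 0.0:
        days += 1
        gain = rain[days - 1] if days - 1 < len(rain) else 0.0
        depth_mm += gain - infiltration_mm_per_day - evap_mm_per_day
        if days > 365:  # safety cap for this sketch
            break
    return days
```

Increasing the infiltration rate, as soil surface plowing would, shortens persistence below the establishment window, which is the mechanism the study exploits.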
Using GIS-based methods and lidar data to estimate rooftop solar technical potential in US cities
NASA Astrophysics Data System (ADS)
Margolis, Robert; Gagnon, Pieter; Melius, Jennifer; Phillips, Caleb; Elmore, Ryan
2017-07-01
We estimate the technical potential of rooftop solar photovoltaics (PV) for select US cities by combining light detection and ranging (lidar) data, a validated analytical method for determining rooftop PV suitability employing geographic information systems, and modeling of PV electricity generation. We find that rooftop PV’s ability to meet estimated city electricity consumption varies widely—from meeting 16% of annual consumption (in Washington, DC) to meeting 88% (in Mission Viejo, CA). Important drivers include average rooftop suitability, household footprint/per-capita roof space, the quality of the solar resource, and the city’s estimated electricity consumption. In addition to city-wide results, we also estimate the ability of aggregations of households to offset their electricity consumption with PV. In a companion article, we will use statistical modeling to extend our results and estimate national rooftop PV technical potential. In addition, our publicly available data and methods may help policy makers, utilities, researchers, and others perform customized analyses to meet their specific needs.
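The headline quantity, the share of a city's consumption that rooftop PV could meet, follows from a product of the drivers named in the abstract; the function below is a back-of-envelope sketch with invented inputs, not the validated lidar-based method:

```python
def rooftop_pv_fraction(roof_area_m2, suitable_frac, insolation_kwh_m2_yr,
                        performance_ratio, module_eff, consumption_kwh_yr):
    """Fraction of a city's annual electricity consumption that rooftop
    PV could meet: suitable roof area x module efficiency x annual
    insolation x system performance ratio, divided by consumption."""
    gen_kwh_yr = (roof_area_m2 * suitable_frac * module_eff
                  * insolation_kwh_m2_yr * performance_ratio)
    return gen_kwh_yr / consumption_kwh_yr
```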
A simple vitrification method for cryobanking avian testicular tissue
USDA-ARS?s Scientific Manuscript database
Cryopreservation of testicular tissue is a promising method of preserving male reproductive potential for avian species. This study was conducted to assess whether a vitrification method can be used to preserve avian testicular tissue, using the Japanese quail (Coturnix japonica) as a model. A sim...
Minimum stiffness criteria for ring frame stiffeners of space launch vehicles
NASA Astrophysics Data System (ADS)
Friedrich, Linus; Schröder, Kai-Uwe
2016-12-01
Frame stringer-stiffened shell structures show high load-carrying capacity in conjunction with low structural mass and are for this reason frequently used as primary structures in aerospace applications. Due to the great number of design variables, deriving suitable stiffening configurations is a demanding task and needs to be realized using efficient analysis methods. The structural design of ring frame stringer-stiffened shells can be subdivided into two steps: first, the design of a shell section between two ring frames; second, the structural design of the ring frames such that a general instability mode is avoided. For sizing stringer-stiffened shell sections, several methods were recently developed, but existing ring frame sizing methods are mainly based on empirical relations or on smeared models. These methods do not necessarily lead to reliable designs, and in some cases the lightweight design potential of stiffened shell structures thus cannot be exploited. In this paper, the explicit physical behaviour of ring frame stiffeners of space launch vehicles at the onset of panel instability is described using mechanical substitute models. Ring frame stiffeners of a stiffened shell structure are sized applying existing methods and the method suggested in this paper. To verify the suggested method and to demonstrate its potential, geometrically non-linear finite element analyses are performed using detailed finite element models.
Programmable Potentials: Approximate N-body potentials from coarse-level logic.
Thakur, Gunjan S; Mohr, Ryan; Mezić, Igor
2016-09-27
This paper gives a systematic method for constructing an N-body potential, approximating the true potential, that accurately captures meso-scale behavior of the chemical or biological system using pairwise potentials coming from experimental data or ab initio methods. The meso-scale behavior is translated into logic rules for the dynamics. Each pairwise potential has an associated logic function that is constructed using the logic rules, a class of elementary logic functions, and AND, OR, and NOT gates. The effect of each logic function is to turn its associated potential on and off. The N-body potential is constructed as linear combination of the pairwise potentials, where the "coefficients" of the potentials are smoothed versions of the associated logic functions. These potentials allow a potentially low-dimensional description of complex processes while still accurately capturing the relevant physics at the meso-scale. We present the proposed formalism to construct coarse-grained potential models for three examples: an inhibitor molecular system, bond breaking in chemical reactions, and DNA transcription from biology. The method can potentially be used in reverse for design of molecular processes by specifying properties of molecules that can carry them out.
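A minimal sketch of the formalism, assuming a Lennard-Jones pair term stands in for the experimentally derived pairwise potentials and a logistic sigmoid provides the smoothed logic functions; the gate structure and all parameters are illustrative:

```python
import math

def sigmoid(x, k=10.0):
    """Smooth 0/1 switch used to soften a Boolean condition."""
    return 1.0 / (1.0 + math.exp(-k * x))

def smooth_and(a, b):
    return a * b          # soft AND of two values in [0, 1]

def smooth_not(a):
    return 1.0 - a        # soft NOT

def lj(r, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones potential standing in for tabulated data."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

def gated_energy(r12, r13, r_cut=2.5):
    """Toy 3-body energy: the 1-2 interaction is switched on only when
    particles 1-2 are within the cutoff AND particle 3 is NOT nearby,
    i.e. the logic function multiplies ("gates") the pair potential."""
    gate = smooth_and(sigmoid(r_cut - r12),
                      smooth_not(sigmoid(r_cut - r13)))
    return gate * lj(r12)
```

With particle 3 far away the 1-2 pair sits in its usual well; bringing particle 3 close turns the interaction off smoothly, which is the on/off behavior the logic functions encode.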
2016-08-10
USARIEM Technical Report T16-14: Method for Estimating Evaporative Potential (IM/CLO) from ASTM Standard Single Wind Velocity Measures. Adam W. Potter, Biophysics and Biomedical Modeling Division, U.S. Army Research Institute of Environmental Medicine. Disclaimer: "The opinions or assertions contained herein are the private views of the…"
Analysis of enamel development using murine model systems: approaches and limitations
Pugach, Megan K.; Gibson, Carolyn W.
2014-01-01
A primary goal of enamel research is to understand and potentially treat or prevent enamel defects related to amelogenesis imperfecta (AI). Rodents are ideal models to assist our understanding of how enamel is formed because they are easily genetically modified, and their continuously erupting incisors display all stages of enamel development and mineralization. While numerous methods have been developed to generate and analyze genetically modified rodent enamel, it is crucial to understand the limitations and challenges associated with these methods in order to draw appropriate conclusions that can be applied translationally to AI patient care. We have highlighted methods involved in generating and analyzing rodent enamel and potential approaches to overcoming limitations of these methods: (1) generating transgenic, knockout, and knockin mouse models, and (2) analyzing rodent enamel mineral density and functional properties (structure and mechanics) of mature enamel. There is a need for a standardized workflow to analyze enamel phenotypes in rodent models so that investigators can compare data from different studies. These methods include analyses of gene and protein expression, developing enamel histology, enamel pigment, degree of mineralization, enamel structure, and mechanical properties. Standardization of these methods with regard to stage of enamel development and sample preparation is crucial, and ideally investigators can use correlative and complementary techniques with the understanding that developing mouse enamel is dynamic and complex. PMID:25278900
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertolus, Marjorie; Krack, Matthias; Freyss, Michel
Multiscale approaches are developed to build more physically based kinetic and mechanical mesoscale models, to enhance the predictive capability of fuel performance codes, and to increase the efficiency of developing the safer and more innovative nuclear materials needed in the future. Atomic-scale methods, in particular electronic structure and empirical potential methods, form the basis of this multiscale approach. It is therefore essential to know the accuracy of the results computed at this scale if we want to feed them into higher-scale models. We focus here on assessing the description of interatomic interactions in uranium dioxide using, on the one hand, electronic structure methods, in particular in the density functional theory (DFT) framework, and on the other hand empirical potential methods. These two types of methods are complementary: the former make it possible to obtain results from a minimal amount of input data and provide further insight into electronic and magnetic properties, while the latter are irreplaceable for studies where a large number of atoms needs to be considered. We consider basic properties as well as specific ones that are important for the description of nuclear fuel under irradiation. These are especially energies, which are the main data passed to higher-scale models. We limit ourselves to uranium dioxide.
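For concreteness, the rigid-ion empirical form commonly used for oxides, a Buckingham short-range term plus a Coulomb interaction, can be sketched as below; the parameter values in the usage test are invented for illustration, not fitted UO2 potentials:

```python
import math

def buckingham_coulomb(r, A, rho, C, qi, qj):
    """Pair energy (eV) at separation r (Angstrom) in the rigid-ion
    Buckingham-plus-Coulomb form: A*exp(-r/rho) - C/r^6 + ke*qi*qj/r,
    with ke = 14.3996 eV*Angstrom/e^2 the Coulomb constant."""
    ke = 14.3996
    return A * math.exp(-r / rho) - C / r**6 + ke * qi * qj / r
```

For oppositely charged ions the energy is attractive at typical bond lengths and repulsive at short range, the qualitative behavior an empirical potential must reproduce before energies can be passed to higher-scale models.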
A fast mass spring model solver for high-resolution elastic objects
NASA Astrophysics Data System (ADS)
Zheng, Mianlun; Yuan, Zhiyong; Zhu, Weixu; Zhang, Guian
2017-03-01
Real-time simulation of elastic objects is of great importance for computer graphics and virtual reality applications. The fast mass spring model solver can achieve visually realistic simulation in an efficient way. Unfortunately, this method suffers from resolution limitations and a lack of mechanical realism for a surface geometry model, which greatly restricts its application. To tackle these problems, in this paper we propose a fast mass spring model solver for high-resolution elastic objects. First, we project the complex surface geometry model into a set of uniform grid cells serving as cages via the mean value coordinates method, to reflect its internal structure and mechanical properties. Then, we replace the original Cholesky decomposition method in the fast mass spring model solver with a conjugate gradient method, which makes the fast mass spring model solver more efficient for detailed surface geometry models. Finally, we propose a graphics processing unit accelerated parallel algorithm for the conjugate gradient method. Experimental results show that our method can realize efficient deformation simulation of 3D elastic objects with visual realism and physical fidelity, which has great potential for applications in computer animation.
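The Cholesky-to-conjugate-gradient substitution relies on the system matrix being symmetric positive definite; a matrix-free CG sketch (plain Python, without the paper's GPU parallelism) looks like this:

```python
def conjugate_gradient(matvec, b, x0=None, tol=1e-10, max_iter=1000):
    """Matrix-free conjugate gradient for an SPD system A x = b.
    `matvec` returns A v for a vector v, so A never needs to be
    factorized or even stored, unlike with Cholesky decomposition."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    r = [bi - ai for bi, ai in zip(b, matvec(x))]   # residual b - A x
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        if rs ** 0.5 < tol:
            break
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

Each iteration is dominated by one matrix-vector product and a few vector operations, which map naturally onto the GPU parallelization the paper proposes.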
Evaluation of the constant potential method in simulating electric double-layer capacitors
NASA Astrophysics Data System (ADS)
Wang, Zhenxing; Yang, Yang; Olmsted, David L.; Asta, Mark; Laird, Brian B.
2014-11-01
A major challenge in the molecular simulation of electric double layer capacitors (EDLCs) is the choice of an appropriate model for the electrode. Typically, in such simulations the electrode surface is modeled using a uniform fixed charge on each of the electrode atoms, which ignores the electrode response to local charge fluctuations in the electrolyte solution. In this work, we evaluate and compare this Fixed Charge Method (FCM) with the more realistic Constant Potential Method (CPM), [S. K. Reed et al., J. Chem. Phys. 126, 084704 (2007)], in which the electrode charges fluctuate in order to maintain constant electric potential in each electrode. For this comparison, we utilize a simplified LiClO4-acetonitrile/graphite EDLC. At low potential difference (ΔΨ ⩽ 2 V), the two methods yield essentially identical results for ion and solvent density profiles; however, significant differences appear at higher ΔΨ. At ΔΨ ⩾ 4 V, the CPM ion density profiles show significant enhancement (over FCM) of "inner-sphere adsorbed" Li+ ions very close to the electrode surface. The ability of the CPM electrode to respond to local charge fluctuations in the electrolyte is seen to significantly lower the energy (and barrier) for the approach of Li+ ions to the electrode surface.
NASA Technical Reports Server (NTRS)
Murch, Austin M.; Foster, John V.
2007-01-01
A simulation study was conducted to investigate aerodynamic modeling methods for prediction of post-stall flight dynamics of large transport airplanes. The research approach involved integrating dynamic wind tunnel data from rotary balance and forced oscillation testing with static wind tunnel data to predict aerodynamic forces and moments during highly dynamic departure and spin motions. Several state-of-the-art aerodynamic modeling methods were evaluated and predicted flight dynamics using these various approaches were compared. Results showed the different modeling methods had varying effects on the predicted flight dynamics and the differences were most significant during uncoordinated maneuvers. Preliminary wind tunnel validation data indicated the potential of the various methods for predicting steady spin motions.
Parallelized Three-Dimensional Resistivity Inversion Using Finite Elements And Adjoint State Methods
NASA Astrophysics Data System (ADS)
Schaa, Ralf; Gross, Lutz; Du Plessis, Jaco
2015-04-01
The resistivity method is one of the oldest geophysical exploration methods, which employs one pair of electrodes to inject current into the ground and one or more pairs of electrodes to measure the electrical potential difference. The potential difference is a non-linear function of the subsurface resistivity distribution described by an elliptic partial differential equation (PDE) of the Poisson type. Inversion of measured potentials solves for the subsurface resistivity represented by PDE coefficients. With increasing advances in multichannel resistivity acquisition systems (systems with more than 60 channels and full waveform recording are now emerging), inversion software requires efficient storage and solver algorithms. We developed the finite element solver Escript, which provides a user-friendly programming environment in Python to solve large-scale PDE-based problems (see https://launchpad.net/escript-finley). Using finite elements, highly irregular shaped geology and topography can readily be taken into account. For the 3D resistivity problem, we have implemented the secondary potential approach, where the PDE is decomposed into a primary potential caused by the source current and the secondary potential caused by changes in subsurface resistivity. The primary potential is calculated analytically, and the boundary value problem for the secondary potential is solved using nodal finite elements. This approach removes the singularity caused by the source currents and provides more accurate 3D resistivity models. To solve the inversion problem we apply a 'first optimize then discretize' approach using the quasi-Newton scheme in form of the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method (see Gross & Kemp 2013).
The evaluation of the cost function requires the solution of the secondary potential PDE for each source current and the solution of the corresponding adjoint-state PDE for the cost function gradients with respect to the subsurface resistivity. The Hessian of the regularization term is used as a preconditioner, which requires an additional PDE solution in each iteration step. As it turns out, the relevant PDEs are naturally formulated in the finite element framework. Using the domain decomposition method provided in Escript, the inversion scheme has been parallelized for distributed memory computers with multi-core shared memory nodes. We show numerical examples from simple layered models to complex 3D models and compare with the results from other methods. The inversion scheme is furthermore tested on a field data example to characterise localised freshwater discharge in a coastal environment. References: L. Gross and C. Kemp (2013) Large Scale Joint Inversion of Geophysical Data using the Finite Element Method in escript. ASEG Extended Abstracts 2013, http://dx.doi.org/10.1071/ASEG2013ab306
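The "primary potential" computed analytically in the secondary-potential approach is the homogeneous half-space solution phi(r) = rho*I/(2*pi*r); a sketch for a Wenner array (equal electrode spacing a) shows that this forward model and the standard apparent-resistivity formula are mutually consistent:

```python
import math

def wenner_dV(rho, I, a):
    """Forward model: potential difference between the two inner
    electrodes of a Wenner array over a homogeneous half-space of
    resistivity rho, by superposing the point-source potentials
    phi(r) = rho*I/(2*pi*r) of the +I and -I current electrodes."""
    k = rho * I / (2.0 * math.pi)
    phi1 = k * (1.0 / a - 1.0 / (2.0 * a))   # at the first inner electrode
    phi2 = k * (1.0 / (2.0 * a) - 1.0 / a)   # at the second inner electrode
    return phi1 - phi2

def wenner_apparent_resistivity(dV, I, a):
    """Standard Wenner apparent resistivity: rho_a = 2*pi*a*dV/I."""
    return 2.0 * math.pi * a * dV / I
```

Round-tripping the forward model through the apparent-resistivity formula recovers the true resistivity exactly for a homogeneous half-space, the baseline any numerical secondary-potential scheme should reproduce.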
2013-01-01
[Figure residue] Gravity wave test case: a slice of the potential temperature perturbation (at y = 50 km) after 700 s, for 30 × 30 × 5 elements with 4th-order polynomials. Key words: cloud-resolving model; compressible flow; element-based Galerkin methods; Euler; global model; IMEX; Lagrange; Legendre. The report compares these methods in terms of accuracy and efficiency for two types of geophysical fluid dynamics problems: buoyant convection and inertia-gravity waves.
NASA Astrophysics Data System (ADS)
Frey, Holger; Haeberli, Wilfried; Huggel, Christian; Linsbauer, Andreas
2010-05-01
Due to the expected atmospheric warming, mountain glaciers will retreat, potentially collapse, or even vanish completely during the 21st century. When overdeepened parts of the glacier bed are exposed in the course of glacier retreat, glacier lakes can form. Such lakes have a potential for hydropower production, which is an important source of renewable energy. Furthermore, they are important elements in the perception of high-mountain landscapes, and they can compensate for the loss of landscape attractiveness from glacier shrinkage to a certain degree. However, glacier lakes are also a potential source of serious flood and debris flow hazards, especially in densely populated mountain ranges. Thus, methods for early detection of sites with potential lake formation are important for early planning and development of protection concepts. In this contribution we present a multi-scale approach to detect sites with potential future lake formation on four different levels of detail. The methods are developed, tested, and, as far as possible, verified in the Swiss Alps, but they can be applied to mountain regions all over the world. On a first level, potential overdeepenings are estimated by selecting flat parts (slope < 5°) of the current glacier surface based on a digital elevation model (DEM) and digital glacier outlines. The same input data are used on the second level for a manual detection of overdeepenings, which are expected at locations where the following three criteria apply: (a) a distinct increase of the glacier surface slope in the down-glacier direction; (b) an enlarged width followed by a narrow glacier part; and (c) regions with compressive flow (no crevasses) followed by extending flow (heavily crevassed). On the third level, more sophisticated approaches to model the glacier bed topography are applied to obtain more quantitative information on potential future lakes.
Based on the results of this level, scenarios of future lake outbursts can be modeled with simple flow routing models. Finally, for potentially critical or dangerous situations, on-site geophysical measurements such as ground penetrating radar applied on different sections of a glacier can be performed on the fourth level to investigate the overdeepenings in more detail. These methods are verified based on historical data from the Trift glacier in the Bernese Alps, where a lake formed in front of the glacier since the 1990s up to the present. Potential future lake scenarios are presented for two regions in the Swiss Alps and the outburst potential of such future lakes is investigated for the Bernina region. The proposed method is an important step towards early detection of new potential flood hazards related to rapid glacier retreat. At the same time, it can form a basis for an integrative risk and benefit management relating to new glacier lakes.
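The level-1 detection step, flagging glacier cells whose surface slope is below 5°, can be sketched directly from a DEM grid and a glacier mask; the central-difference slope formula here is a standard choice, not necessarily the authors' exact implementation:

```python
import math

def flat_glacier_cells(dem, glacier_mask, cell_size, max_slope_deg=5.0):
    """Flag interior glacier cells whose DEM surface slope, from
    central differences, is below the 5-degree threshold used for
    level-1 detection of potential overdeepenings."""
    ny, nx = len(dem), len(dem[0])
    flagged = set()
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            if not glacier_mask[j][i]:
                continue
            dzdx = (dem[j][i + 1] - dem[j][i - 1]) / (2.0 * cell_size)
            dzdy = (dem[j + 1][i] - dem[j - 1][i]) / (2.0 * cell_size)
            slope = math.degrees(math.atan(math.hypot(dzdx, dzdy)))
            if slope < max_slope_deg:
                flagged.add((j, i))
    return flagged
```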
Macroscopic modeling and simulations of supercoiled DNA with bound proteins
NASA Astrophysics Data System (ADS)
Huang, Jing; Schlick, Tamar
2002-11-01
General methods are presented for modeling and simulating DNA molecules with bound proteins at the macromolecular level. These new approaches are motivated by the need for accurate and affordable methods to simulate slow processes (on the millisecond time scale) in DNA/protein systems, such as the large-scale motions involved in the Hin-mediated inversion process. Our approaches, based on the wormlike chain model of long DNA molecules, introduce inhomogeneous potentials for DNA/protein complexes based on available atomic-level structures. Electrostatically, we treat those DNA/protein complexes as sets of effective charges, optimized by our discrete surface charge optimization package, in which the charges are distributed on an excluded-volume surface that represents the macromolecular complex. We also introduce directional bending potentials as well as a non-identical-bead hydrodynamics algorithm to further mimic the inhomogeneous effects caused by protein binding. These models thus account for basic elements of protein-binding effects on DNA local structure but remain computationally tractable. To validate these models and methods, we reproduce various properties measured by both Monte Carlo methods and experiments. We then apply the developed models to study the Hin-mediated inversion system in long DNA. By simulating supercoiled, circular DNA with or without bound proteins, we observe significant effects of protein binding on the global conformations and long-time dynamics of the DNA at the kilobasepair length scale.
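A discrete bending potential of the kind described, with a nonzero preferred angle standing in for a directional, protein-induced bend, can be sketched as follows; the harmonic form and the constants are illustrative, not the paper's parametrization:

```python
import math

def bending_energy(beads, kb=1.0, theta0=0.0):
    """Discrete wormlike-chain bending energy: sum over interior beads
    of (kb/2)*(theta - theta0)^2, where theta is the bend angle between
    successive segments. Setting theta0 != 0 at beads covered by a
    bound protein mimics a directional bending preference."""
    e = 0.0
    for i in range(1, len(beads) - 1):
        a = [beads[i][k] - beads[i - 1][k] for k in range(3)]
        b = [beads[i + 1][k] - beads[i][k] for k in range(3)]
        dot = sum(ak * bk for ak, bk in zip(a, b))
        na = math.sqrt(sum(ak * ak for ak in a))
        nb = math.sqrt(sum(bk * bk for bk in b))
        cos_theta = max(-1.0, min(1.0, dot / (na * nb)))
        e += 0.5 * kb * (math.acos(cos_theta) - theta0) ** 2
    return e
```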
Peressutti, Devis; Penney, Graeme P; Housden, R James; Kolbitsch, Christoph; Gomez, Alberto; Rijkhorst, Erik-Jan; Barratt, Dean C; Rhode, Kawal S; King, Andrew P
2013-05-01
In image-guided cardiac interventions, respiratory motion causes misalignments between the pre-procedure roadmap of the heart used for guidance and the intra-procedure position of the heart, reducing the accuracy of the guidance information and leading to potentially dangerous consequences. We propose a novel technique for motion-correcting the pre-procedural information that combines a probabilistic MRI-derived affine motion model with intra-procedure real-time 3D echocardiography (echo) images in a Bayesian framework. The probabilistic model incorporates a measure of confidence in its motion estimates which enables resolution of the potentially conflicting information supplied by the model and the echo data. Unlike models proposed so far, our method allows the final motion estimate to deviate from the model-produced estimate according to the information provided by the echo images, so adapting to the complex variability of respiratory motion. The proposed method is evaluated using gold-standard MRI-derived motion fields and simulated 3D echo data for nine volunteers and real 3D live echo images for four volunteers. The Bayesian method is compared to 5 other motion estimation techniques and results show mean/max improvements in estimation accuracy of 10.6%/18.9% for simulated echo images and 20.8%/41.5% for real 3D live echo data, over the best comparative estimation method. Copyright © 2013 Elsevier B.V. All rights reserved.
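The core idea of such a Bayesian combination, weighting a model prediction against a measurement according to confidence, can be illustrated with a one-dimensional precision-weighted fusion of two Gaussian estimates. This is a generic sketch, not the paper's actual motion model; all numbers and names are hypothetical:

```python
import numpy as np

def fuse(model_mean, model_var, meas_mean, meas_var):
    """Precision-weighted (Gaussian) fusion of a model prior and a measurement."""
    w_model = 1.0 / model_var
    w_meas = 1.0 / meas_var
    mean = (w_model * model_mean + w_meas * meas_mean) / (w_model + w_meas)
    var = 1.0 / (w_model + w_meas)
    return mean, var

# A confident model dominates; an uncertain model defers to the echo-derived estimate.
m1, _ = fuse(2.0, 0.01, 3.0, 1.0)   # model confident: result stays near 2.0
m2, _ = fuse(2.0, 1.0, 3.0, 0.01)   # model uncertain: result moves near 3.0
```

The confidence measure in the abstract plays the role of the variances here: it decides how far the final estimate may deviate from the model-produced one.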
Waterlander, Wilma E; Blakely, Tony; Nghiem, Nhung; Cleghorn, Christine L; Eyles, Helen; Genc, Murat; Wilson, Nick; Jiang, Yannan; Swinburn, Boyd; Jacobi, Liana; Michie, Jo; Ni Mhurchu, Cliona
2016-07-19
There is a need for accurate and precise food price elasticities (PE, change in consumer demand in response to change in price) to better inform policy on health-related food taxes and subsidies. The Price Experiment and Modelling (Price ExaM) study aims to: I) derive accurate and precise food PE values; II) quantify the impact of price changes on quantity and quality of discrete food group purchases; and III) model the potential health and disease impacts of a range of food taxes and subsidies. To achieve this, we will use a novel method that includes a randomised Virtual Supermarket experiment and econometric methods. Findings will be applied in simulation models to estimate population health impact (quality-adjusted life-years [QALYs]) using a multi-state life-table model. The study will consist of four sequential steps: 1. We generate 5000 price sets with random price variation for all 1412 Virtual Supermarket food and beverage products. Then we add systematic price variation for foods to simulate five taxes and subsidies: a fruit and vegetable subsidy and taxes on sugar, saturated fat, salt, and sugar-sweetened beverages. 2. Using an experimental design, 1000 adult New Zealand shoppers complete five household grocery shops in the Virtual Supermarket, where they are randomly assigned to one of the 5000 price sets each time. 3. Output data (i.e., multiple observations of price configurations and purchased amounts) are used as inputs to econometric models (using Bayesian methods) to estimate accurate PE values. 4. A disease simulation model will be run with the new PE values as inputs to estimate QALYs gained and health costs saved for the five policy interventions. The Price ExaM study has the potential to enhance public health and economic disciplines by introducing internationally novel scientific methods to estimate accurate and precise food PE values.
These values will be used to model the potential health and disease impacts of various food pricing policy options. Findings will inform policy on health-related food taxes and subsidies. Australian New Zealand Clinical Trials Registry ACTRN12616000122459 (registered 3 February 2016).
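The basic econometric step, recovering an elasticity from randomised price variation, can be sketched as a log-log regression on simulated purchase data. The study's Bayesian machinery is not reproduced here; the elasticity value and functional form below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
true_elasticity = -0.8                        # assumed value for illustration
prices = rng.uniform(1.0, 5.0, 5000)          # 5000 randomised price draws
# Constant-elasticity demand with multiplicative noise.
quantity = 10.0 * prices**true_elasticity * rng.lognormal(0.0, 0.05, 5000)

# Regress log(quantity) on log(price): the slope estimates the price elasticity.
X = np.column_stack([np.ones_like(prices), np.log(prices)])
coef, *_ = np.linalg.lstsq(X, np.log(quantity), rcond=None)
estimated_elasticity = coef[1]
```

With enough price variation (as the 5000 generated price sets provide), the slope recovers the underlying elasticity closely, which is the precision the study is after.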
Ignition of Hydrogen Balloons by Model-Rocket-Engine Igniters.
ERIC Educational Resources Information Center
Hartman, Nicholas T.
2003-01-01
Describes an alternative method for exploding hydrogen balloons as a classroom demonstration. Uses the method of igniting the balloons via an electronic match. Includes necessary materials to conduct the demonstration and discusses potential hazards. (SOE)
NASA Technical Reports Server (NTRS)
Beatty, T. D.
1975-01-01
A theoretical method is presented for the computation of the flow field about an axisymmetric body operating in a viscous, incompressible fluid. A potential flow method was used to determine the inviscid flow field and to yield the boundary conditions for the boundary layer solutions. Boundary layer effects in the forces of displacement thickness and empirically modeled separation streamlines are accounted for in subsequent potential flow solutions. This procedure is repeated until the solutions converge. An empirical method was used to determine base drag allowing configuration drag to be computed.
Generation of Cardiomyocytes from Pluripotent Stem Cells.
Nakahama, Hiroko; Di Pasquale, Elisa
2016-01-01
The advent of pluripotent stem cells (PSCs) enabled a multitude of studies for modeling the development of diseases and testing pharmaceutical therapeutic potential in vitro. These PSCs have been differentiated to multiple cell types, including cardiomyocytes (CMs), to demonstrate their pluripotent potential. However, the efficiency and efficacy of differentiation vary greatly between different cell lines and methods. Here, we describe two different methods for acquiring CMs from human pluripotent lines. One method involves the generation of embryoid bodies, which emulates the natural developmental process, while the other chemically activates the canonical Wnt signaling pathway to induce cardiac differentiation in a monolayer culture.
NASA Astrophysics Data System (ADS)
Poursina, Mohammad; Anderson, Kurt S.
2014-08-01
This paper presents a novel algorithm to approximate the long-range electrostatic potential field in the Cartesian coordinates applicable to 3D coarse-grained simulations of biopolymers. In such models, coarse-grained clusters are formed via treating groups of atoms as rigid and/or flexible bodies connected together via kinematic joints. Therefore, multibody dynamic techniques are used to form and solve the equations of motion of such coarse-grained systems. In this article, the approximations for the potential fields due to the interaction between a highly negatively/positively charged pseudo-atom and charged particles, as well as the interaction between clusters of charged particles, are presented. These approximations are expressed in terms of physical and geometrical properties of the bodies such as the entire charge, the location of the center of charge, and the pseudo-inertia tensor about the center of charge of the clusters. Further, a novel substructuring scheme is introduced to implement the presented far-field potential evaluations in a binary tree framework as opposed to the existing quadtree and octree strategies of implementing the fast multipole method. Using the presented Lagrangian grids, the electrostatic potential is recursively calculated via sweeping two passes: assembly and disassembly. In the assembly pass, adjacent charged bodies are combined together to form new clusters. Then, the potential field of each cluster due to its interaction with faraway resulting clusters is recursively calculated in the disassembly pass. The method is highly compatible with multibody dynamic schemes to model coarse-grained biopolymers. Since the proposed method takes advantage of constant physical and geometrical properties of rigid clusters, an improvement in the overall computational cost is observed compared to the traditional application of the fast multipole method.
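The lowest-order version of such a far-field approximation, replacing a cluster by its total charge placed at its center of charge, can be checked against direct summation. This is a generic sketch (Coulomb constant and units omitted, values illustrative), not the paper's full expansion with the pseudo-inertia tensor:

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.normal(0.0, 0.5, (20, 3))      # a compact cluster of point charges
q = rng.uniform(0.5, 1.5, 20)            # all positive: strong monopole term

obs = np.array([50.0, 0.0, 0.0])         # far-away observation point

# Exact potential (up to a constant): sum over individual charges.
exact = np.sum(q / np.linalg.norm(obs - pos, axis=1))

# Far-field approximation: total charge at the center of charge.
Q = q.sum()
coc = (q[:, None] * pos).sum(axis=0) / Q
approx = Q / np.linalg.norm(obs - coc)

rel_err = abs(approx - exact) / exact
```

Expanding about the center of charge makes the dipole term vanish, so the leading error is quadrupolar and falls off quickly with distance; the paper's pseudo-inertia-tensor terms capture that next order.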
2011-01-01
Background Electrotherapy is a relatively well established and efficient method of tumor treatment. In this paper we focus on analytical and numerical calculations of the potential and electric field distributions inside a tumor tissue in a two-dimensional model (2D-model) generated by means of electrode arrays with shapes of different conic sections (ellipse, parabola and hyperbola). Methods Analytical calculations of the potential and electric field distributions based on 2D-models for different electrode arrays are performed by solving the Laplace equation, while the numerical solution is obtained by means of the finite-element method in two dimensions. Results Both analytical and numerical solutions reveal significant differences between the electric field distributions generated by electrode arrays with circular shapes and those with shapes of different conic sections (elliptical, parabolic and hyperbolic). Electrode arrays with circular, elliptical and hyperbolic shapes have the advantage of concentrating the electric field lines in the tumor. Conclusion The mathematical approach presented in this study provides a useful tool for the design of electrode arrays with different shapes of conic sections by means of the use of the unifying principle. At the same time, we verify the good correspondence between the analytical and numerical solutions for the potential and electric field distributions generated by the electrode array with different conic sections. PMID:21943385
Four-body extension of the continuum-discretized coupled-channels method
NASA Astrophysics Data System (ADS)
Descouvemont, P.
2018-06-01
I develop an extension of the continuum-discretized coupled-channels (CDCC) method to reactions where both nuclei present a low breakup threshold. This leads to a four-body model, where the only inputs are the interactions describing the colliding nuclei, and the four optical potentials between the fragments. Once these potentials are chosen, the model does not contain any additional parameter. First I briefly discuss the general formalism, and emphasize the need for dealing with large coupled-channel systems. The method is tested with existing benchmarks on 4 α bound states with the Ali-Bodmer potential. Then I apply the four-body CDCC to the 11Be+d system, where I consider the 10Be(0+,2+)+n configuration for 11Be. I show that breakup channels are crucial to reproduce the elastic cross section, but that core excitation plays a weak role. The 7Li+d system is investigated with an α +t cluster model for 7Li. I show that breakup channels significantly improve the agreement with the experimental cross section, but an additional imaginary term, simulating missing transfer channels, is necessary. The full CDCC results can be interpreted by equivalent potentials. For both systems, the real part is weakly affected by breakup channels, but the imaginary part is strongly modified. I suggest that the present wave functions could be used in future DWBA calculations.
A new field method to characterise the runoff generation potential of burned hillslopes
NASA Astrophysics Data System (ADS)
Sheridan, Gary; Lane, Patrick; Langhans, Christoph
2016-04-01
The prediction of post-fire runoff generation is critical for the estimation of post-fire erosion processes and rates. Typical field measures for determining infiltration model parameters include ring infiltrometers, tension infiltrometers, rainfall simulators and natural runoff plots. However, predicting the runoff generating potential of post-fire hillslopes is difficult due to the high spatial variability of soil properties relative to the size of the measurement method, the poorly understood relationship between water repellence and runoff generation, known scaling issues with all the above hydraulic measurements, and logistical limitations for measurements in remote environments. In this study we tested a new field method for characterizing surface runoff generation potential that overcomes these limitations and is quick, simple and cheap to apply in the field. The new field method involves the manual application of a 40 mm depth of Brilliant Blue FCF food dye along a 10 cm wide and 5 m long transect along the contour under slightly-ponded conditions. After 24 hours the transect is excavated to a depth of 10 cm and the percentage dyed area within the soil profile recorded manually. The dyed area is an index of infiltration potential of the soil during intense rainfall events, and captures both spatial variability and water repellence effects. The dye measurements were made adjacent to long-term instrumented post-fire rainfall-runoff plots on 7 contrasting soil types over a 6 month period, and the results show surprisingly strong correlations (r² = 0.9) between the runoff-ratio from the plots and the dyed area. The results are used to develop an initial conceptual model that links the dye index with an infiltration model and parameters suited to burnt hillslopes. The capacity of this method to provide a simple and reliable indicator of post-fire runoff potential from different fire severities, soil types and treatments is explored in this presentation.
Interatomic potentials in condensed matter via the maximum-entropy principle
NASA Astrophysics Data System (ADS)
Carlsson, A. E.
1987-09-01
A general method is described for the calculation of interatomic potentials in condensed-matter systems by use of a maximum-entropy Ansatz for the interatomic correlation functions. The interatomic potentials are given explicitly in terms of statistical correlation functions involving the potential energy and the structure factor of a "reference medium." Illustrations are given for Al-Cu alloys and a model transition metal.
Local vs. volume conductance activity of field potentials in the human subthalamic nucleus
Marmor, Odeya; Valsky, Dan; Joshua, Mati; Bick, Atira S; Arkadir, David; Tamir, Idit; Bergman, Hagai; Israel, Zvi
2017-01-01
Subthalamic nucleus field potentials have attracted growing research and clinical interest over the last few decades. However, it is unclear whether subthalamic field potentials represent locally generated neuronal subthreshold activity or volume conductance of the organized neuronal activity generated in the cortex. This study aimed at understanding the physiological origin of subthalamic field potentials and determining the most accurate method for recording them. We compared different methods of recordings in the human subthalamic nucleus: spikes (300–9,000 Hz) and field potentials (3–100 Hz) recorded by monopolar micro- and macroelectrodes, as well as by differential-bipolar macroelectrodes. The recordings were done outside and inside the subthalamic nucleus during electrophysiological navigation for deep brain stimulation procedures (150 electrode trajectories) in 41 Parkinson’s disease patients. We modeled the signal and estimated the contribution of nearby/independent vs. remote/common activity in each recording configuration and area. Monopolar micro- and macroelectrode recordings detect field potentials that are considerably affected by common (probably cortical) activity. However, bipolar macroelectrode recordings inside the subthalamic nucleus can detect locally generated potentials. These results are confirmed by the high correspondence between the model predictions and the actual correlation of neuronal activity recorded by electrode pairs. Differential bipolar macroelectrode subthalamic field potentials can overcome volume conductance effects and reflect locally generated neuronal activity. Bipolar macroelectrode local field potential recordings might be used as a biological marker of normal and pathological brain functions for future electrophysiological studies and navigation systems as well as for closed-loop deep brain stimulation paradigms.
NEW & NOTEWORTHY Our results integrate a new method for human subthalamic recordings with a development of an advanced mathematical model. We found that while monopolar microelectrode and macroelectrode recordings detect field potentials that are considerably affected by common (probably cortical) activity, bipolar macroelectrode recordings inside the subthalamic nucleus (STN) detect locally generated potentials that are significantly different than those recorded outside the STN. Differential bipolar subthalamic field potentials can be used in navigation and closed-loop deep brain stimulation paradigms. PMID:28202569
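The common-mode rejection at the heart of the bipolar configuration can be illustrated with synthetic traces, where a shared sinusoid stands in for volume-conducted (cortical) activity and independent noise stands in for locally generated activity. Amplitudes and frequencies are arbitrary choices for illustration:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
common = np.sin(2 * np.pi * 10 * t)          # volume-conducted signal seen by both contacts
rng = np.random.default_rng(2)
local1 = 0.2 * rng.standard_normal(1000)     # locally generated activity at contact 1
local2 = 0.2 * rng.standard_normal(1000)     # locally generated activity at contact 2

mono1 = common + local1                      # monopolar recordings carry the common signal
mono2 = common + local2
bipolar = mono1 - mono2                      # differential recording cancels it

# Correlation with the common signal collapses in the bipolar trace.
r_mono = np.corrcoef(mono1, common)[0, 1]
r_bipolar = np.corrcoef(bipolar, common)[0, 1]
```

The subtraction removes whatever the two contacts share, leaving only the difference of the local components, which is why the bipolar signal can reflect locally generated potentials.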
Rinnan, Asmund; Bruun, Sander; Lindedam, Jane; ...
2017-02-07
Here, the combination of NIR spectroscopy and chemometrics is a powerful correlation method for predicting the chemical constituents in biological matrices, such as the glucose and xylose content of straw. However, difficulties arise when it comes to predicting enzymatic glucose and xylose release potential, which is matrix dependent. Further complications are caused by xylose and glucose release potential being highly intercorrelated. This study emphasizes the importance of understanding the causal relationship between the model and the constituent of interest. It investigates the possibility of using near-infrared spectroscopy to evaluate the ethanol potential of wheat straw by analyzing more than 1000 samples from different wheat varieties and growth conditions. During the calibration model development, the prime emphasis was to investigate the correlation structure between the two major quality traits for saccharification of wheat straw: glucose and xylose release. The large sample set enabled a versatile and robust calibration model to be developed, showing that the prediction model for xylose release is based on a causal relationship with the NIR spectral data. In contrast, the prediction of glucose release was found to be highly dependent on the intercorrelation with xylose release. If this correlation is broken, the model performance breaks down. A simple method was devised for avoiding this breakdown and can be applied to any large dataset for investigating the causality or lack of causality of a prediction model.
Regularized wave equation migration for imaging and data reconstruction
NASA Astrophysics Data System (ADS)
Kaplan, Sam T.
The reflection seismic experiment results in a measurement (reflection seismic data) of the seismic wavefield. The linear Born approximation to the seismic wavefield leads to a forward modelling operator that we use to approximate reflection seismic data in terms of a scattering potential. We consider approximations to the scattering potential using two methods: the adjoint of the forward modelling operator (migration), and regularized numerical inversion using the forward and adjoint operators. We implement two parameterizations of the forward modelling and migration operators: source-receiver and shot-profile. For both parameterizations, we find the requisite Green's functions using the split-step approximation. We first develop the forward modelling operator, and then find the adjoint (migration) operator by recognizing a Fredholm integral equation of the first kind. The resulting numerical system is generally under-determined, requiring prior information to find a solution. In source-receiver migration, the parameterization of the scattering potential is understood using the migration imaging condition, and this encourages us to apply sparse prior models to the scattering potential. To that end, we use both a Cauchy prior and a mixed Cauchy-Gaussian prior, finding better resolved estimates of the scattering potential than are given by the adjoint. In shot-profile migration, the parameterization of the scattering potential has its redundancy in multiple active energy sources (i.e. shots). We find that a smallest-model regularized inverse representation of the scattering potential gives a more resolved picture of the earth, as compared to the simpler adjoint representation. The shot-profile parameterization allows us to introduce a joint inversion to further improve the estimate of the scattering potential. Moreover, it allows us to introduce a novel data reconstruction algorithm so that limited data can be interpolated/extrapolated.
The linearized operators are expensive, encouraging their parallel implementation. For the source-receiver parameterization of the scattering potential this parallelization is non-trivial. Seismic data is typically corrupted by various types of noise. Sparse coding can be used to suppress noise prior to migration. It is a method that stems from information theory and that we apply to noise suppression in seismic data.
Time domain simulation of novel photovoltaic materials
NASA Astrophysics Data System (ADS)
Chung, Haejun
Thin-film silicon-based solar cells have operated far from the Shockley-Queisser limit in all experiments to date. Novel light-trapping structures, however, may help address this limitation. Finite-difference time domain simulation methods offer the potential to accurately determine the light-trapping potential of arbitrary dielectric structures, but suffer from materials modeling problems. In this thesis, existing dispersion models for novel photovoltaic materials will be reviewed, and a novel dispersion model, known as the quadratic complex rational function (QCRF), will be proposed. It has the advantage of accurately fitting experimental semiconductor dielectric values over a wide bandwidth in a numerically stable fashion. Applying the proposed dispersion model, a statistically correlated surface-texturing method will be suggested, and its light absorption rates will be explained. In future work, these designs will be combined with other structures and optimized to help guide future experiments.
Customer-Specific Transaction Risk Management in E-Commerce
NASA Astrophysics Data System (ADS)
Ruch, Markus; Sackmann, Stefan
Increasing potential for turnover in e-commerce is inextricably linked with an increase in risk. Online retailers (e-tailers) aiming for a company-wide value orientation should manage this risk. However, current approaches to risk management either use average retail prices elevated by an overall risk premium or restrict the payment methods offered to customers. Thus, they neglect customer-specific value and risk attributes and leave turnover potentials unconsidered. To close this gap, an innovative valuation model is proposed in this contribution that integrates customer-specific risk and potential turnover. The approach presented evaluates different payment methods using their risk-turnover characteristic, provides a risk-adjusted decision basis for selecting payment methods and allows e-tailers to derive automated risk management decisions per customer and transaction without reducing turnover potential.
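A minimal sketch of such a risk-adjusted decision rule: choose, per transaction, the payment method that maximizes expected profit given a conversion rate (turnover potential) and a customer-specific default probability. All figures and method names below are hypothetical, not taken from the paper:

```python
# Hypothetical per-method parameters for one customer and one order.
methods = {
    "prepayment":  {"margin": 20.0, "default_prob": 0.00, "conversion": 0.70},
    "credit_card": {"margin": 20.0, "default_prob": 0.01, "conversion": 0.85},
    "invoice":     {"margin": 20.0, "default_prob": 0.08, "conversion": 0.95},
}
order_value = 100.0

def expected_profit(method):
    p = methods[method]
    # Turnover potential (conversion) traded off against expected default loss.
    return p["conversion"] * (p["margin"] - p["default_prob"] * order_value)

best = max(methods, key=expected_profit)
```

For these illustrative numbers, the safe method loses too much conversion and the convenient one loses too much to defaults, so the intermediate option wins; with different customer-specific risk attributes the ranking changes, which is the point of deciding per customer and transaction.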
A total variation diminishing finite difference algorithm for sonic boom propagation models
NASA Technical Reports Server (NTRS)
Sparrow, Victor W.
1993-01-01
It is difficult to accurately model the rise phases of sonic boom waveforms with traditional finite difference algorithms because of finite difference phase dispersion. This paper introduces the concept of a total variation diminishing (TVD) finite difference method as a tool for accurately modeling the rise phases of sonic booms. A standard second order finite difference algorithm and its TVD modified counterpart are both applied to the one-way propagation of a square pulse. The TVD method clearly outperforms the non-TVD method, showing great potential as a new computational tool in the analysis of sonic boom propagation.
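The defining TVD property, that the total variation of the solution never grows, can be demonstrated on the same test problem (one-way advection of a square pulse) with a generic minmod-limited upwind scheme; this is a standard TVD method used for illustration, not necessarily the paper's exact algorithm:

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: smallest-magnitude slope, zero at extrema."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def step_tvd(u, c):
    """One step of a minmod-limited upwind scheme for u_t + a u_x = 0 (0 < c <= 1)."""
    slopes = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))
    face = u + 0.5 * (1.0 - c) * slopes      # reconstructed right-face value, a > 0
    return u - c * (face - np.roll(face, 1))

def total_variation(u):
    return np.abs(np.diff(u)).sum()

u = np.where((np.arange(200) > 50) & (np.arange(200) < 100), 1.0, 0.0)  # square pulse
tv0 = total_variation(u)
for _ in range(100):
    u = step_tvd(u, 0.5)                     # Courant number c = 0.5
tv1 = total_variation(u)
```

A non-limited second-order scheme would produce dispersive oscillations at the pulse edges and increase the total variation; the limiter suppresses exactly those oscillations, which is why TVD methods resolve the steep rise phases of sonic booms cleanly.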
Inverse modeling methods for indoor airborne pollutant tracking: literature review and fundamentals.
Liu, X; Zhai, Z
2007-12-01
Reduction in indoor environment quality calls for effective control and improvement measures. Accurate and prompt identification of contaminant sources ensures that they can be quickly removed and contaminated spaces isolated and cleaned. This paper discusses the use of inverse modeling to identify potential indoor pollutant sources with limited pollutant sensor data. The study reviews various inverse modeling methods for advection-dispersion problems and summarizes the methods into three major categories: forward, backward, and probability inverse modeling methods. The adjoint probability inverse modeling method is indicated as an appropriate model for indoor air pollutant tracking because it can quickly find source location, strength and release time without prior information. The paper introduces the principles of the adjoint probability method and establishes the corresponding adjoint equations for both multi-zone airflow models and computational fluid dynamics (CFD) models. The study proposes a two-stage inverse modeling approach integrating both multi-zone and CFD models, which can provide a rapid estimate of indoor pollution status and history for a whole building. Preliminary case study results indicate that the adjoint probability method is feasible for indoor pollutant inverse modeling. The proposed method can help identify contaminant source characteristics (location and release time) with limited sensor outputs. This will ensure an effective and prompt execution of building management strategies and thus achieve a healthy and safe indoor environment. The method can also help design optimal sensor networks.
NASA Astrophysics Data System (ADS)
Jensen, Daniel; Wasserman, Adam; Baczewski, Andrew
The construction of approximations to the exchange-correlation potential for warm dense matter (WDM) is a topic of significant recent interest. In this work, we study the inverse problem of Kohn-Sham (KS) DFT as a means of guiding functional design at zero temperature and in WDM. Whereas the forward problem solves the KS equations to produce a density from a specified exchange-correlation potential, the inverse problem seeks to construct the exchange-correlation potential from specified densities. These two problems require different computational methods and convergence criteria despite sharing the same mathematical equations. We present two new inversion methods based on constrained variational and PDE-constrained optimization methods. We adapt these methods to finite temperature calculations to reveal the exchange-correlation potential's temperature dependence in WDM-relevant conditions. The different inversion methods presented are applied to both non-interacting and interacting model systems for comparison. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Security Administration under contract DE-AC04-94.
NASA Astrophysics Data System (ADS)
Chen, Hui; Deng, Ju-Zhi; Yin, Min; Yin, Chang-Chun; Tang, Wen-Wu
2017-03-01
To speed up three-dimensional (3D) DC resistivity modeling, we present a new multigrid method, the aggregation-based algebraic multigrid method (AGMG). We first discretize the differential equation of the secondary potential field with mixed boundary conditions by using a seven-point finite-difference method to obtain a large sparse system of linear equations. Then, we introduce the theory behind the pairwise aggregation algorithms for AGMG and use the conjugate-gradient method with the V-cycle AGMG preconditioner (AGMG-CG) to solve the linear equations. We use typical geoelectrical models to test the proposed AGMG-CG method and compare the results with analytical solutions and the 3DDCXH algorithm for 3D DC modeling (3DDCXH). In addition, we apply the AGMG-CG method to different grid sizes and geoelectrical models and compare it to different iterative methods, such as ILU-BICGSTAB, ILU-GCR, and SSOR-CG. The AGMG-CG method yields nearly linearly decreasing errors, whereas the number of iterations increases slowly with increasing grid size. The AGMG-CG method is precise and converges fast, and thus can improve the computational efficiency in forward modeling of three-dimensional DC resistivity.
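The overall structure of the solver, a conjugate-gradient iteration accelerated by a preconditioner, can be sketched with SciPy. As a stand-in for the AGMG V-cycle and the 7-point DC-resistivity operator, the sketch below uses a simple Jacobi preconditioner and a 2D 5-point Laplacian; it shows the plumbing, not AGMG itself:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# 2D 5-point Laplacian as a stand-in for the 7-point finite-difference operator.
n = 40
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()   # sparse, symmetric positive definite
b = np.ones(n * n)

# Jacobi (diagonal) preconditioner as a simple placeholder for the AGMG V-cycle.
d = A.diagonal()
M = LinearOperator(A.shape, matvec=lambda r: r / d)

x, info = cg(A, b, M=M)
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

An aggregation-based multigrid preconditioner would replace `M` with a V-cycle built from pairwise-aggregated coarse grids, which is what keeps the iteration count nearly flat as the grid grows.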
Darboux partners of pseudoscalar Dirac potentials associated with exceptional orthogonal polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schulze-Halberg, Axel, E-mail: xbataxel@gmail.com; Department of Physics, Indiana University Northwest, 3400 Broadway, Gary, IN 46408; Roy, Barnana, E-mail: barnana@isical.ac.in
2014-10-15
We introduce a method for constructing Darboux (or supersymmetric) pairs of pseudoscalar and scalar Dirac potentials that are associated with exceptional orthogonal polynomials. Properties of the transformed potentials and regularity conditions are discussed. As an application, we consider a pseudoscalar Dirac potential related to the Schrödinger model for the rationally extended radial oscillator. The pseudoscalar partner potentials are constructed under the first- and second-order Darboux transformations.
Does money matter in inflation forecasting?
NASA Astrophysics Data System (ADS)
Binner, J. M.; Tino, P.; Tepper, J.; Anderson, R.; Jones, B.; Kendall, G.
2010-11-01
This paper provides the most fully comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two nonlinear techniques, namely recurrent neural networks and kernel recursive least squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite memory predictor. The two methodologies compete to find the best fitting US inflation forecasting models and are then compared to forecasts from a naïve random walk model. The best models were nonlinear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. Beyond its economic findings, our study is in the tradition of physicists’ long-standing interest in the interconnections among statistical mechanics, neural networks, and related nonparametric statistical methods, and suggests potential avenues of extension for such studies.
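The flavour of such kernel regression can be conveyed by a batch kernel ridge fit on a toy nonlinear series; the paper's recursive (online, finite-memory) variant is not reproduced, and all data, kernel parameters and the query point below are synthetic choices for illustration:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=10.0):
    """Gaussian (RBF) kernel matrix between row-vector sample sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, (200, 1))                  # toy predictor values
y = np.sin(3 * x[:, 0]) + 0.05 * rng.standard_normal(200)  # nonlinear target + noise

lam = 1e-3                                        # ridge regularisation
K = rbf_kernel(x, x)
alpha = np.linalg.solve(K + lam * np.eye(200), y) # dual weights

def predict(xq):
    return rbf_kernel(np.atleast_2d(xq), x) @ alpha

err = abs(predict(np.array([[0.2]]))[0] - np.sin(0.6))
```

The recursive least squares version updates `alpha` one observation at a time instead of solving the full linear system, which is what makes it usable as a streaming forecaster.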
Karimi, Davood; Ward, Rabab K
2016-10-01
Image models are central to all image processing tasks. The great advancements in digital image processing would not have been made possible without powerful models which, themselves, have evolved over time. In the past decade, "patch-based" models have emerged as one of the most effective models for natural images. Patch-based methods have outperformed other competing methods in many image processing tasks. These developments have come at a time when greater availability of powerful computational resources and growing concerns over the health risks of ionizing radiation encourage research on image processing algorithms for computed tomography (CT). The goal of this paper is to explain the principles of patch-based methods and to review some of their recent applications in CT. We first review the central concepts in patch-based image processing and explain some of the state-of-the-art algorithms, with a focus on aspects that are more relevant to CT. Then, we review some of the recent applications of patch-based methods in CT. Patch-based methods have already transformed the field of image processing, leading to state-of-the-art results in many applications. More recently, several studies have proposed patch-based algorithms for various image processing tasks in CT, from denoising and restoration to iterative reconstruction. Although these studies have reported good results, the true potential of patch-based methods for CT has not yet been fully appreciated. Patch-based methods can play a central role in image reconstruction and processing for CT. They have the potential to lead to substantial improvements in the current state of the art.
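The core patch-based idea can be illustrated with a minimal non-local-means style denoiser: each pixel is replaced by a weighted average of pixels whose surrounding patches look similar. This is a bare-bones sketch with illustrative, untuned parameters, far simpler than the CT algorithms the review covers:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.3):
    """Minimal non-local means: similarity-weighted averaging over nearby patches."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    n = img.shape[0]
    for i in range(n):
        for j in range(n):
            p = padded[i:i + patch, j:j + patch]          # reference patch
            i0, i1 = max(0, i - search), min(n, i + search + 1)
            j0, j1 = max(0, j - search), min(n, j + search + 1)
            acc, wsum = 0.0, 0.0
            for a in range(i0, i1):
                for b in range(j0, j1):
                    q = padded[a:a + patch, b:b + patch]  # candidate patch
                    w = np.exp(-((p - q) ** 2).mean() / h ** 2)
                    acc += w * img[a, b]
                    wsum += w
            out[i, j] = acc / wsum
    return out

rng = np.random.default_rng(4)
clean = np.zeros((24, 24)); clean[:, 12:] = 1.0          # piecewise-constant phantom
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = nlm_denoise(noisy)
rmse_noisy = np.sqrt(((noisy - clean) ** 2).mean())
rmse_denoised = np.sqrt(((denoised - clean) ** 2).mean())
```

Because patches straddling the edge look very different from patches on either side, the weights keep the edge sharp while flat regions are averaged, which is the property that makes patch priors attractive for low-dose CT denoising and reconstruction.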
NASA Astrophysics Data System (ADS)
Toropov, Andrey A.; Toropova, Alla P.
2018-06-01
A predictive model of logP for Pt(II) and Pt(IV) complexes, built with the Monte Carlo method using the CORAL software, has been validated with six different splits into training and validation sets. The improvement in the predictive potential of the models for the six splits was obtained using the so-called index of ideality of correlation. The suggested models make it possible to extract the molecular features that cause logP to increase or decrease.
2012-01-01
our own work for this discussion. DoD Instruction 5000.61 defines model validation as “the process of determining the degree to which a model and its... determined that RMAT is highly concrete code, potentially leading to redundancies in the code itself and making RMAT more difficult to maintain...system conceptual models valid, and are the data used to support them adequate? (Chapters Two and Three) 2. Are the sources and methods for populating
Analogue based design of MMP-13 (Collagenase-3) inhibitors.
Sarma, J A R P; Rambabu, G; Srikanth, K; Raveendra, D; Vithal, M
2002-10-07
3D-QSAR studies using MFA and RSA methods were performed on a series of 39 MMP-13 inhibitors. The model developed by the MFA method has a cross-validated r(2)(cv) of 0.616, while its conventional r(2) value is 0.822. For the RSA model, r(2)(cv) and r(2) are 0.681 and 0.847, respectively. Both models indicate good internal as well as external predictive ability. These models provide crucial information about the field descriptors for the design of potential inhibitors of MMP-13.
A 2D forward and inverse code for streaming potential problems
NASA Astrophysics Data System (ADS)
Soueid Ahmed, A.; Jardani, A.; Revil, A.
2013-12-01
The self-potential method corresponds to the passive measurement of the electrical field in response to the occurrence of natural sources of current in the ground. One of these sources corresponds to the streaming current associated with the flow of the groundwater. We can therefore apply the self-potential method to recover non-intrusively some information regarding the groundwater flow. We first solve the forward problem starting with the solution of the groundwater flow problem, then computing the source current density, and finally solving a Poisson equation for the electrical potential. We use the finite-element method to solve the relevant partial differential equations. In order to reduce the number of (petrophysical) model parameters required to solve the forward problem, we introduce an effective charge density tensor of the pore water, which can be determined directly from the permeability tensor for neutral pore waters. The second aspect of our work concerns the inversion of the self-potential data using Tikhonov regularization with smoothness and weighting depth constraints. This approach accounts for the distribution of the electrical resistivity, which can be independently and approximately determined from electrical resistivity tomography. A numerical code, SP2DINV, has been implemented in Matlab to perform both the forward and inverse modeling. Three synthetic case studies are discussed.
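The inversion step described above reduces, in its simplest form, to a Tikhonov-regularized linear least-squares problem. The sketch below (not the SP2DINV code; the kernel, regularization weight, and 1D geometry are assumptions) shows a smoothness-regularized inversion on a toy problem.

```python
import numpy as np

def tikhonov_invert(G, d, lam):
    """Solve min ||G m - d||^2 + lam^2 ||L m||^2, L = first differences."""
    n = G.shape[1]
    L = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # smoothness operator
    return np.linalg.solve(G.T @ G + lam**2 * (L.T @ L), G.T @ d)

# Toy 1D problem: smooth exponential kernel, smooth source, 1% noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
G = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)   # assumed forward kernel
m_true = np.exp(-((x - 0.5) ** 2) / 0.01)            # smooth "source" model
d = G @ m_true + 0.01 * rng.standard_normal(40)
m_est = tikhonov_invert(G, d, lam=0.1)
print(np.linalg.norm(G @ m_est - d))   # data misfit near the noise level
```

In the paper's setting, G would come from the finite-element discretization of the Poisson equation and L would also carry the depth-weighting constraint.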
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winkler, David A., E-mail: dave.winkler@csiro.au
2016-05-15
Nanomaterials research is one of the fastest growing contemporary research areas. The unprecedented properties of these materials have meant that they are being incorporated into products very quickly. Regulatory agencies are concerned that they cannot adequately assess the potential hazards of these materials, as data on the biological properties of nanomaterials are still relatively limited and expensive to acquire. Computational modelling methods have much to offer in helping understand the mechanisms by which toxicity may occur, and in predicting the likelihood of adverse biological impacts of materials not yet tested experimentally. This paper reviews the progress these methods, particularly those that are QSAR-based, have made in understanding and predicting potentially adverse biological effects of nanomaterials, and also the limitations and pitfalls of these methods. - Highlights: • Nanomaterials regulators need good information to make good decisions. • Nanomaterials and their interactions with biology are very complex. • Computational methods use existing data to predict properties of new nanomaterials. • Statistical, data driven modelling methods have been successfully applied to this task. • Much more must be learnt before robust toolkits will be widely usable by regulators.
Baxter, S; Killoran, A; Kelly, M P; Goyder, E
2010-02-01
The nature of public health evidence presents challenges for conventional systematic review processes, with increasing recognition of the need to include a broader range of work including observational studies and qualitative research, yet with methods to combine diverse sources remaining underdeveloped. The objective of this paper is to report the application of a new approach for review of evidence in the public health sphere. The method enables a diverse range of evidence types to be synthesized in order to examine potential relationships between a public health environment and outcomes. The study drew on previous work by the National Institute for Health and Clinical Excellence on conceptual frameworks. It applied and further extended this work to the synthesis of evidence relating to one particular public health area: the enhancement of employee mental well-being in the workplace. The approach utilized thematic analysis techniques from primary research, together with conceptual modelling, to explore potential relationships between factors and outcomes. The method enabled a logic framework to be built from a diverse document set that illustrates how elements and associations between elements may impact on the well-being of employees. Whilst recognizing potential criticisms of the approach, it is suggested that logic models can be a useful way of examining the complexity of relationships between factors and outcomes in public health, and of highlighting potential areas for interventions and further research. The use of techniques from primary qualitative research may also be helpful in synthesizing diverse document types. Copyright 2010 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
Temperature dependent effective potential method for accurate free energy calculations of solids
NASA Astrophysics Data System (ADS)
Hellman, Olle; Steneteg, Peter; Abrikosov, I. A.; Simak, S. I.
2013-03-01
We have developed a thorough and accurate method of determining anharmonic free energies, the temperature dependent effective potential technique (TDEP). It is based on ab initio molecular dynamics followed by a mapping onto a model Hamiltonian that describes the lattice dynamics. The formalism and the numerical aspects of the technique are described in detail. A number of practical examples are given, and results are presented, which confirm the usefulness of TDEP within ab initio and classical molecular dynamics frameworks. In particular, we examine from first principles the behavior of force constants upon the dynamical stabilization of the body centered phase of Zr, and show that they become more localized. We also calculate the phase diagram for 4He modeled with the Aziz potential and obtain results that are in favorable agreement with both experiment and established techniques.
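The core TDEP idea, mapping sampled displacement/force pairs onto an effective harmonic model by least squares, can be sketched in a few lines. This is a toy 1D illustration, not the authors' code: the "MD" data here are synthetic forces from an assumed anharmonic chain.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k2, k3 = 8, 4.0, 0.5   # chain length; assumed harmonic and cubic constants

def forces(u):
    """Anharmonic nearest-neighbour forces on a periodic 1D chain."""
    d_right = np.roll(u, -1) - u
    d_left = u - np.roll(u, 1)
    return k2 * (d_right - d_left) + k3 * (d_right**2 - d_left**2)

# "MD" snapshots: random small displacements and the forces they produce.
U = 0.05 * rng.standard_normal((200, n))
F = np.array([forces(u) for u in U])

# Map onto a model Hamiltonian: fit force constants Phi from F ≈ -U Phi^T.
Phi = -np.linalg.lstsq(U, F, rcond=None)[0].T
print(Phi[0, 0], Phi[0, 1])   # close to 2*k2 and -k2 for small displacements
```

In the real method the displacements come from ab initio MD at a given temperature, so the fitted force constants are temperature dependent effective ones rather than bare second derivatives.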
Detecting Moving Targets by Use of Soliton Resonances
NASA Technical Reports Server (NTRS)
Zak, Michael; Kulikov, Igor
2003-01-01
A proposed method of detecting moving targets in scenes that include cluttered or noisy backgrounds is based on a soliton-resonance mathematical model. The model is derived from asymptotic solutions of the cubic Schroedinger equation for a one-dimensional system excited by a position-and-time-dependent externally applied potential. The cubic Schroedinger equation has general significance for time-dependent dispersive waves. It has been used to approximate several phenomena in classical as well as quantum physics, including modulated beams in nonlinear optics, and superfluids (in particular, Bose-Einstein condensates). In the proposed method, one would take advantage of resonant interactions between (1) a soliton excited by the position-and-time-dependent potential associated with a moving target and (2) eigen-solitons, which represent dispersive waves and are solutions of the cubic Schroedinger equation for a time-independent potential.
NASA Technical Reports Server (NTRS)
Jekeli, C.
1979-01-01
Through the method of truncation functions, the oceanic geoid undulation is divided into two constituents: an inner zone contribution expressed as an integral of surface gravity disturbances over a spherical cap; and an outer zone contribution derived from a finite set of potential harmonic coefficients. Global, average error estimates are formulated for undulation differences, thereby providing accuracies for a relative geoid. The error analysis focuses on the outer zone contribution for which the potential coefficient errors are modeled. The method of computing undulations based on gravity disturbance data for the inner zone is compared to the similar, conventional method which presupposes gravity anomaly data within this zone.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joshi, Subodh, E-mail: subodhssgk@gmail.com; Chand, Manesh, E-mail: maneshchand@gmail.com; Dabral, Krishna, E-mail: kmkrishna.dabral@gmail.com
2016-05-06
A modified embedded atom method (MEAM) potential model extending up to second neighbours has been used to calculate the phonon dispersions for Ni{sub 0.55}Pd{sub 0.45} alloy, in which Pd is introduced as a substitutional impurity. Using the force constants obtained from the MEAM potential, the local vibrational density of states at the host Ni and substitutional Pd atoms has been calculated using the Green's function method. The calculated phonon dispersions of the NiPd alloy show good agreement with the experimental results. The condition for a resonance mode has also been investigated, and a resonance mode at low frequency is observed in the frequency spectrum of the impurity atom.
The acceptance of in silico models for REACH: Requirements, barriers, and perspectives
2011-01-01
In silico models have prompted considerable interest and debate because of their potential value in predicting the properties of chemical substances for regulatory purposes. The European REACH legislation promotes innovation and encourages the use of alternative methods, but in practice the use of in silico models is still very limited. There are many stakeholders influencing the regulatory trajectory of quantitative structure-activity relationships (QSAR) models, including regulators, industry, model developers and consultants. Here we outline some of the issues and challenges involved in the acceptance of these methods for regulatory purposes. PMID:21982269
Density-functional expansion methods: Grand challenges.
Giese, Timothy J; York, Darrin M
2012-03-01
We discuss the source of errors in semiempirical density functional expansion (VE) methods. In particular, we show that VE methods are capable of reproducing their standard Kohn-Sham density functional counterparts well, but suffer from large errors upon using one or more of these approximations: the limited size of the atomic orbital basis, the Slater monopole auxiliary basis description of the response density, and the one- and two-body treatment of the core-Hamiltonian matrix elements. In the process of discussing these approximations and highlighting their symptoms, we introduce a new model that supplements the second-order density-functional tight-binding model with a self-consistent charge-dependent chemical potential equalization correction; we review our recently reported method for generalizing the auxiliary basis description of the atomic orbital response density; and we decompose the first-order potential into a summation of additive atomic components and many-body corrections, and from this examination, we provide new insights and preliminary results that motivate and inspire new approximate treatments of the core-Hamiltonian.
Updated users' guide for TAWFIVE with multigrid
NASA Technical Reports Server (NTRS)
Melson, N. Duane; Streett, Craig L.
1989-01-01
A program for the Transonic Analysis of a Wing and Fuselage with Interacted Viscous Effects (TAWFIVE) was improved by the incorporation of multigrid and a method to specify lift coefficient rather than angle-of-attack. A finite volume full potential multigrid method is used to model the outer inviscid flow field. First order viscous effects are modeled by a 3-D integral boundary layer method. Both turbulent and laminar boundary layers are treated. Wake thickness effects are modeled using a 2-D strip method. A brief discussion of the engineering aspects of the program is given. The input, output, and use of the program are covered in detail. Sample results are given showing the effects of boundary layer corrections and the capability of the lift specification method.
Optimization Design of Minimum Total Resistance Hull Form Based on CFD Method
NASA Astrophysics Data System (ADS)
Zhang, Bao-ji; Zhang, Sheng-long; Zhang, Hui
2018-06-01
In order to reduce the resistance and improve the hydrodynamic performance of a ship, two hull form design methods are proposed based on potential flow theory and viscous flow theory. The flow fields are meshed using body-fitted mesh and structured grids. The parameters of the hull modification function are the design variables. A three-dimensional modeling method is used to alter the geometry. The Non-Linear Programming (NLP) method is utilized to optimize a David Taylor Model Basin (DTMB) model 5415 ship under constraints, including a displacement constraint. The optimization results show an effective reduction of the resistance. The two hull form design methods developed in this study can provide technical support and a theoretical basis for designing green ships.
Marques, J M C; Pais, A A C C; Abreu, P E
2012-02-05
The efficiency of the so-called big-bang method for the optimization of atomic clusters is analysed in detail for Morse pair potentials with different ranges; here, we have used Morse potentials with four different ranges, from long-ranged (ρ = 3) to short-ranged (ρ = 14) interactions. Specifically, we study the efficacy of the method in discovering low-energy structures, including the putative global minimum, as a function of the potential range and the cluster size. A new global minimum structure for the long-ranged (ρ = 3) Morse potential at the cluster size of n = 240 is reported. The present results are useful to assess the maximum cluster size for each type of interaction where the global minimum can be discovered with a limited number of big-bang trials. Copyright © 2011 Wiley Periodicals, Inc.
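The objective function behind this kind of study is the total Morse energy of a cluster. A minimal sketch in the usual reduced units (unit well depth and unit pair equilibrium distance; ρ controls the range, as above):

```python
import numpy as np
from itertools import combinations

def morse_energy(coords, rho):
    """Total Morse energy of a cluster; V(r) = x(x - 2) with x = exp(rho(1 - r))."""
    e = 0.0
    for i, j in combinations(range(len(coords)), 2):
        r = np.linalg.norm(coords[i] - coords[j])
        x = np.exp(rho * (1.0 - r))
        e += x * (x - 2.0)
    return e

# A dimer at the equilibrium separation r = 1 sits at the bottom of one
# pair bond, so its energy is -1 in reduced units.
dimer = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(morse_energy(dimer, rho=3.0))   # -1.0
```

In a big-bang trial, coordinates would start compressed into a small volume and then be relaxed by a local minimizer of this energy; that relaxation step is omitted here.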
Predicting Drug-Target Interactions With Multi-Information Fusion.
Peng, Lihong; Liao, Bo; Zhu, Wen; Li, Zejun; Li, Keqin
2017-03-01
Identifying potential associations between drugs and targets is a critical prerequisite for modern drug discovery and repurposing. However, predicting these associations is difficult because of the limitations of existing computational methods. Most models only consider chemical structures and protein sequences, and other models are oversimplified. Moreover, datasets used for analysis contain only true-positive interactions, and experimentally validated negative samples are unavailable. To overcome these limitations, we developed a semi-supervised learning framework called NormMulInf based on collaborative filtering theory, using labeled and unlabeled interaction information. The proposed method initially determines similarity measures, such as similarities among samples and local correlations among the labels of the samples, by integrating biological information. The similarity information is then integrated into a robust principal component analysis model, which is solved using augmented Lagrange multipliers. Experimental results on four classes of drug-target interaction networks suggest that the proposed approach can accurately classify and predict drug-target interactions. Some of the predicted interactions have been reported in public databases. The proposed method can also predict possible targets for new drugs and can be used to determine whether atropine may interact with alpha1B- and beta1- adrenergic receptors. Furthermore, the developed technique identifies potential drugs for new targets and can be used to assess whether olanzapine and propiomazine may target 5HT2B. Finally, the proposed method can potentially address limitations on studies of multitarget drugs and multidrug targets.
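The optimization core mentioned here, a robust PCA solved with augmented Lagrange multipliers, can be sketched with the standard inexact ALM iteration. This is a generic low-rank-plus-sparse decomposition on synthetic data, not NormMulInf itself; the similarity terms the paper integrates are omitted.

```python
import numpy as np

def shrink(X, tau):
    """Soft thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, n_iter=100):
    """Inexact ALM for min ||L||_* + lam ||S||_1 subject to L + S = M."""
    lam = 1.0 / np.sqrt(max(M.shape))
    norm2 = np.linalg.norm(M, 2)
    Y = M / max(norm2, np.abs(M).max() / lam)   # dual variable init
    mu = 1.25 / norm2
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # Singular value thresholding gives the low-rank part L.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt
        # Soft thresholding gives the sparse part S.
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
        mu = min(mu * 1.5, 1e7)
    return L, S

# Synthetic check: rank-2 matrix corrupted by 5% sparse outliers.
rng = np.random.default_rng(2)
L0 = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
S0 = np.where(rng.random((50, 50)) < 0.05, 5.0, 0.0)
L, S = rpca(L0 + S0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))
```

In the interaction-prediction setting, M would be the (partially observed) drug-target association matrix augmented with similarity information rather than a dense synthetic matrix.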
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunn, Nicholas J. H.; Noid, W. G., E-mail: wnoid@chem.psu.edu
2015-12-28
The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1, 2, and 3-site CG models for heptane, as well as 1 and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed “pressure-matching” variational principle to determine a volume-dependent contribution to the potential, U{sub V}(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing U{sub V}, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that U{sub V} accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the “simplicity” of the model.
Quantifying spatial distribution of spurious mixing in ocean models.
Ilıcak, Mehmet
2016-12-01
Numerical mixing is inevitable in ocean models due to tracer advection schemes. Until now, there has been no robust way to identify the regions of spurious mixing in ocean models. We propose a new method to compute the spatial distribution of spurious diapycnal mixing in an ocean model. This new method is an extension of the available potential energy density method proposed by Winters and Barkan (2013). We test the new method in lock-exchange and baroclinic-eddies test cases, and can quantify both the amount and the location of numerical mixing. We find that high-shear areas are the main regions susceptible to numerical truncation errors. We also use the new method to quantify the numerical mixing under different horizontal momentum closures, and conclude that the Smagorinsky viscosity produces less numerical mixing than the Leith viscosity with the same non-dimensional constant.
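The available-potential-energy idea behind this diagnostic can be sketched in one dimension: adiabatically re-sort the density field into its stably stratified reference state, and take the difference between the actual and re-sorted potential energies. This is a minimal discrete illustration (equal-volume cells, z ascending), not the authors' spatially resolved method.

```python
import numpy as np

def ape(rho, z, g=9.81):
    """Available potential energy of a column of equal-volume cells.

    z must be ascending; the sorted reference state puts the densest
    fluid at the lowest z, which minimizes the potential energy.
    """
    rho = np.asarray(rho, float)
    z = np.asarray(z, float)
    rho_sorted = np.sort(rho)[::-1]   # densest at the bottom
    return g * (np.sum(rho * z) - np.sum(rho_sorted * z))

# Two-layer column with dense water on top (a lock-exchange-like state):
# APE = g * (1020*1.5 + 1000*0.5 - 1020*0.5 - 1000*1.5) = g * 20.
print(ape([1000.0, 1020.0], [0.5, 1.5]))
```

Spurious mixing shows up as an unphysical drift of the sorted (background) potential energy over time in a closed, adiabatic simulation.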
NASA Astrophysics Data System (ADS)
Fan, T. S.; Wang, Z. M.; Zhu, X.; Zhu, W. J.; Zhong, C. L.
2017-09-01
In this work, the nuclear potential energy of deformed nuclei as a function of shape coordinates is calculated in a five-dimensional (5D) parameter space of the axially symmetric generalized Lawrence shapes, on the basis of the macroscopic-microscopic method. The liquid-drop part of the nuclear energy is calculated according to the Myers-Swiatecki model and the Lublin-Strasbourg-drop (LSD) formula. The Woods-Saxon and the folded-Yukawa potentials for deformed nuclei are used for the Strutinsky-type shell and pairing corrections. The pairing corrections are calculated at zero temperature, T, related to the excitation energy. The eigenvalues of the Hamiltonians for protons and neutrons are found by expanding the eigenfunctions in terms of harmonic-oscillator wave functions of a spheroid. BCS pairing is then applied to the smeared-out single-particle spectrum. By comparing the results obtained with different models, the most favorable macroscopic-microscopic combination is found to be the LSD formula with the folded-Yukawa potential. Potential-energy landscapes for actinide isotopes are investigated based on a grid of more than 4,000,000 deformation points, and the heights of static fission barriers are obtained in terms of a double-humped structure on the full 5D parameter space. In order to locate the ground state shapes, saddle points, scission points and the optimal fission path on the calculated 5D potential-energy surface, the falling rain algorithm and immersion method are designed and implemented. The comparison of our results with available experimental data and others' theoretical results confirms the reliability of our calculations.
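The immersion method for locating saddle points can be illustrated on a 2D toy surface: flood the landscape from its lowest values upward and record the first level at which two given minima become connected; that level is the barrier top. This is a generic flooding sketch (union-find on a grid), not the authors' 5D implementation.

```python
import numpy as np

def barrier_height(pes, a, b):
    """Lowest flooding level connecting grid cells a and b (4-neighbour)."""
    ny, nx = pes.shape
    order = sorted(range(pes.size), key=lambda i: pes.flat[i])
    parent = list(range(pes.size))

    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    ia, ib = a[0] * nx + a[1], b[0] * nx + b[1]
    flooded = [False] * pes.size
    for i in order:                    # immerse cells from low to high energy
        flooded[i] = True
        y, x = divmod(i, nx)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            yy, xx = y + dy, x + dx
            if 0 <= yy < ny and 0 <= xx < nx and flooded[yy * nx + xx]:
                parent[find(i)] = find(yy * nx + xx)
        if find(ia) == find(ib):
            return pes.flat[i]         # first level at which a and b connect
    return None

# Toy double-well surface: minima near x = 0.25 and x = 0.75, pass at x = 0.5
# with barrier height 16 * 0.0625^2 = 0.0625 along the y = 0.5 valley.
y, x = np.mgrid[0:40, 0:40] / 39.0
pes = 16.0 * (x - 0.25) ** 2 * (x - 0.75) ** 2 + 0.5 * (y - 0.5) ** 2
h = barrier_height(pes, (20, 10), (20, 30))
print(h)
```

On the real 5D grid the same flooding logic yields the saddle configurations separating the ground-state minimum from scission.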
Madsen, Kristoffer H; Ewald, Lars; Siebner, Hartwig R; Thielscher, Axel
2015-01-01
Field calculations for transcranial magnetic stimulation (TMS) are increasingly implemented online in neuronavigation systems and in more realistic offline approaches based on finite-element methods. They are often based on simplified and/or non-validated models of the magnetic vector potential of the TMS coils. To develop an approach to reconstruct the magnetic vector potential based on automated measurements. We implemented a setup that simultaneously measures the three components of the magnetic field with high spatial resolution. This is complemented by a novel approach to determine the magnetic vector potential via volume integration of the measured field. The integration approach reproduces the vector potential with very good accuracy. The vector potential distribution of a standard figure-of-eight shaped coil determined with our setup corresponds well with that calculated using a model reconstructed from x-ray images. The setup can supply validated models for existing and newly appearing TMS coils. Copyright © 2015 Elsevier Inc. All rights reserved.
Particle models for discrete element modeling of bulk grain properties of wheat kernels
USDA-ARS?s Scientific Manuscript database
Recent research has shown the potential of discrete element method (DEM) in simulating grain flow in bulk handling systems. Research has also revealed that simulation of grain flow with DEM requires establishment of appropriate particle models for each grain type. This research completes the three-p...
The convolutional differentiator method for numerical modelling of acoustic and elastic wavefields
NASA Astrophysics Data System (ADS)
Zhang, Zhong-Jie; Teng, Ji-Wen; Yang, Ding-Hui
1996-02-01
Based on the techniques of forward and inverse Fourier transformation, the authors discuss the design of the differentiator used in the simulation of acoustic and elastic wavefields in isotropic media. To effectively suppress the Gibbs effects caused by truncation, a Hanning window is introduced. The model computations show that the convolutional differentiator method has the advantages of speed, low memory requirements, and high precision, making it a promising method for numerical simulation.
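A windowed spectral differentiator of this kind can be sketched directly: differentiate in the wavenumber domain (multiply by ik) and taper the high wavenumbers with a Hanning window to damp the Gibbs ringing caused by truncation. The taper placement and test field are illustrative assumptions.

```python
import numpy as np

def spectral_derivative(u, dx):
    """d/dx via FFT, with a Hanning taper on wavenumber to damp Gibbs ringing."""
    n = len(u)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)               # wavenumbers
    taper = 0.5 * (1.0 + np.cos(2.0 * np.pi * np.fft.fftfreq(n)))  # Hanning
    return np.real(np.fft.ifft(1j * k * taper * np.fft.fft(u)))

# Check on a smooth periodic field: d/dx sin(x) = cos(x).
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
du = spectral_derivative(np.sin(x), x[1] - x[0])
print(np.max(np.abs(du - np.cos(x))))   # small: mode 1 is barely tapered
```

The taper is 1 at zero wavenumber and 0 at the Nyquist wavenumber, so well-resolved modes are differentiated almost exactly while the poorly resolved ones are suppressed.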
Plenary: Progress in Regional Landslide Hazard Assessment—Examples from the USA
Baum, Rex L.; Schulz, William; Brien, Dianne L.; Burns, William J.; Reid, Mark E.; Godt, Jonathan W.
2014-01-01
Landslide hazard assessment at local and regional scales contributes to mitigation of landslides in developing and densely populated areas by providing information for (1) land development and redevelopment plans and regulations, (2) emergency preparedness plans, and (3) economic analysis to (a) set priorities for engineered mitigation projects and (b) define areas of similar levels of hazard for insurance purposes. US Geological Survey (USGS) research on landslide hazard assessment has explored a range of methods that can be used to estimate temporal and spatial landslide potential and probability for various scales and purposes. Cases taken primarily from our work in the U.S. Pacific Northwest illustrate and compare a sampling of methods, approaches, and progress. For example, landform mapping using high-resolution topographic data resulted in identification of about four times more landslides in Seattle, Washington, than previous efforts using aerial photography. Susceptibility classes based on the landforms captured 93 % of all historical landslides (all types) throughout the city. A deterministic model for rainfall infiltration and shallow landslide initiation, TRIGRS, was able to identify locations of 92 % of historical shallow landslides in southwest Seattle. The potentially unstable areas identified by TRIGRS occupied only 26 % of the slope areas steeper than 20°. Addition of an unsaturated infiltration model to TRIGRS expands the applicability of the model to areas of highly permeable soils. Replacement of the single cell, 1D factor of safety with a simple 3D method of columns improves accuracy of factor of safety predictions for both saturated and unsaturated infiltration models. A 3D deterministic model for large, deep landslides, SCOOPS, combined with a three-dimensional model for groundwater flow, successfully predicted instability in steep areas of permeable outwash sand and topographic reentrants. 
These locations are consistent with locations of large, deep, historically active landslides. For an area in Seattle, a composite of the three maps illustrates how maps produced by different approaches might be combined to assess overall landslide potential. Examples from Oregon, USA, illustrate how landform mapping and deterministic analysis for shallow landslide potential have been adapted into standardized methods for efficiently producing detailed landslide inventory and shallow landslide susceptibility maps that have consistent content and format statewide.
Hastings, K L
2001-02-02
Immune-based systemic hypersensitivities account for a significant number of adverse drug reactions. There appear to be no adequate nonclinical models to predict systemic hypersensitivity to small molecular weight drugs. Although there are very good methods for detecting drugs that can induce contact sensitization, these have not been successfully adapted for prediction of systemic hypersensitivity. Several factors have made the development of adequate models difficult. The term systemic hypersensitivity encompasses many discrete immunopathologies. Each type of immunopathology presumably is the result of a specific cluster of immunologic and biochemical phenomena. Certainly other factors, such as genetic predisposition, metabolic idiosyncrasies, and concomitant diseases, further complicate the problem. Therefore, it may be difficult to find common mechanisms upon which to construct adequate models to predict specific types of systemic hypersensitivity reactions. There is some reason to hope, however, that adequate methods could be developed for at least identifying drugs that have the potential to produce signs indicative of a general hazard for immune-based reactions.
NASA Astrophysics Data System (ADS)
Gianotti, R. L.; Bomblies, A.; Eltahir, E. A.
2008-12-01
This study describes the use of HYDREMATS, a physically-based distributed hydrology model, to investigate environmental management methods for malaria vector control in the Sahelian village of Banizoumbou, Niger. The model operates at fine spatial and temporal scales to enable explicit simulation of individual pool dynamics and isolation of mosquito breeding habitats. The results showed that leveling of topographic depressions where temporary breeding habitats form during the rainy season could reduce the persistence time of a pool to less than the time needed for establishment of mosquito breeding, approximately 7 days. Increasing the surface soil permeability by ploughing could also reduce the persistence time of a pool but this technique was not as effective as leveling. Therefore it is considered that leveling should be the preferred of the two options where possible. This investigation demonstrates that management methods that modify the hydrologic environment have significant potential to contribute to malaria vector control and human health improvement in Sahelian Africa.
Multiscale Modeling of UHTC: Thermal Conductivity
NASA Technical Reports Server (NTRS)
Lawson, John W.; Murry, Daw; Squire, Thomas; Bauschlicher, Charles W.
2012-01-01
We are developing a multiscale framework in computational modeling for the ultra high temperature ceramics (UHTC) ZrB2 and HfB2. These materials are characterized by high melting point, good strength, and reasonable oxidation resistance. They are candidate materials for a number of applications in extreme environments including sharp leading edges of hypersonic aircraft. In particular, we used a combination of ab initio methods, atomistic simulations and continuum computations to obtain insights into fundamental properties of these materials. Ab initio methods were used to compute basic structural, mechanical and thermal properties. From these results, a database was constructed to fit a Tersoff style interatomic potential suitable for atomistic simulations. These potentials were used to evaluate the lattice thermal conductivity of single crystals and the thermal resistance of simple grain boundaries. Finite element method (FEM) computations using atomistic results as inputs were performed with meshes constructed on SEM images thereby modeling the realistic microstructure. These continuum computations showed the reduction in thermal conductivity due to the grain boundary network.
Automatic Parametrization of Somatosensory Evoked Potentials With Chirp Modeling.
Vayrynen, Eero; Noponen, Kai; Vipin, Ashwati; Thow, X Y; Al-Nashash, Hasan; Kortelainen, Jukka; All, Angelo
2016-09-01
In this paper, an approach using polynomial-phase chirp signals to model somatosensory evoked potentials (SEPs) is proposed. SEP waveforms are assumed to be impulses undergoing group velocity dispersion while propagating along a multipath neural connection. Mathematical analysis of pulse dispersion resulting in chirp signals is performed. An automatic parameterization of SEPs using chirp models is proposed. A Particle Swarm Optimization algorithm is used to optimize the model parameters. Features describing the latencies and amplitudes of SEPs are automatically derived. A rat model is then used to evaluate the automatic parameterization of SEPs in two experimental cases, i.e., anesthesia level and spinal cord injury (SCI). Experimental results show that chirp-based model parameters and the derived SEP features are significant in describing both anesthesia level and SCI changes. The proposed automatic optimization-based approach for extracting chirp parameters offers potential for detailed SEP analysis in future studies. An implementation of the method in the Matlab technical computing language is provided online.
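The parameter-fitting step can be sketched with a minimal Particle Swarm Optimization that fits a linear-chirp model to a waveform by least squares. The chirp model, search bounds, and PSO constants below are illustrative assumptions, not the authors' settings, and the "SEP" is a synthetic noiseless chirp.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 400)
data = np.cos(2.0 * np.pi * (5.0 * t + 2.0 * t**2))   # linear chirp: f0=5, c=2

def cost(p):
    """Sum-of-squares misfit between the chirp model and the waveform."""
    f0, c = p
    return np.sum((np.cos(2.0 * np.pi * (f0 * t + c * t**2)) - data) ** 2)

lo, hi = np.array([0.0, 0.0]), np.array([10.0, 5.0])  # search box for (f0, c)
n_particles, n_iter = 40, 80
pos = rng.uniform(lo, hi, (n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    # Inertia plus cognitive (pbest) and social (gbest) attraction terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    c_now = np.array([cost(p) for p in pos])
    better = c_now < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], c_now[better]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print(gbest, pbest_cost.min())
```

In the paper's setting the model would be a polynomial-phase chirp with an amplitude envelope, and the fitted parameters would then be turned into latency and amplitude features.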
Potential barge transportation for inbound corn and grain
DOT National Transportation Integrated Search
1997-12-31
This research develops a model for estimating future barge and rail rates for decision making. The Box-Jenkins and the Regression Analysis with ARIMA errors forecasting methods were used to develop appropriate models for determining future rates. A s...
Sommerfeld, Thomas; Ehara, Masahiro
2015-01-21
The energy of a temporary anion can be computed by adding a stabilizing potential to the molecular Hamiltonian, increasing the stabilization until the temporary state is turned into a bound state, and then further increasing the stabilization until enough bound state energies have been collected so that these can be extrapolated back to vanishing stabilization. The lifetime can be obtained from the same data, but only if the extrapolation is done through analytic continuation of the momentum as a function of the square root of a shifted stabilizing parameter. This method is known as analytic continuation of the coupling constant, and it requires--at least in principle--that the bound-state input data are computed with a short-range stabilizing potential. In the context of molecules and ab initio packages, long-range Coulomb stabilizing potentials are, however, far more convenient and have been used in the past with some success, although the error introduced by the long-range nature of the stabilizing potential remains unknown. Here, we introduce a soft-Voronoi box potential that can serve as a short-range stabilizing potential. The difference between a Coulomb and the new stabilization is analyzed in detail for a one-dimensional model system as well as for the ²Πu resonance of CO₂⁻, and in both cases, the extrapolation results are compared to independently computed resonance parameters, from complex scaling for the model, and from complex absorbing potential calculations for CO₂⁻. It is important to emphasize that for both the model and for CO₂⁻, all three sets of results have, respectively, been obtained with the same electronic structure method and basis set so that the theoretical description of the continuum can be directly compared. The new soft-Voronoi-box-based extrapolation is then used to study the influence of the size of the diffuse and valence basis sets on the computed resonance parameters.
Gauge-independent decoherence models for solids in external fields
NASA Astrophysics Data System (ADS)
Wismer, Michael S.; Yakovlev, Vladislav S.
2018-04-01
We demonstrate gauge-invariant modeling of an open system of electrons in a periodic potential interacting with an optical field. For this purpose, we adapt the covariant derivative to the case of mixed states and put forward a decoherence model that has simple analytical forms in the length and velocity gauges. We demonstrate our methods by calculating harmonic spectra in the strong-field regime and numerically verifying the equivalence of the deterministic master equation to the stochastic Monte Carlo wave-function method.
NASA Astrophysics Data System (ADS)
Aji Hapsoro, Cahyo; Purqon, Acep; Srigutomo, Wahyu
2017-07-01
2-D Time Domain Electromagnetic (TDEM) modeling has been successfully conducted to illustrate the distribution of the electric field beneath the Earth's surface. The electric field, compared with the magnetic field, is used to analyze resistivity, a physical property that is very important for determining potential reservoir areas of geothermal systems, one of the renewable energy sources. We used the Time Domain Electromagnetic method in this modeling because it can solve EM field interaction problems with complex geometry and analyze transient problems. TDEM methods are used to model the electric and magnetic fields as functions of time, distance, and depth. The result of this modeling is the electric field intensity, which is capable of describing the structure of the Earth's subsurface. The result can be applied to describe the Earth's subsurface resistivity values and thus determine the reservoir potential of geothermal systems.
Deep Potential Molecular Dynamics: A Scalable Model with the Accuracy of Quantum Mechanics
NASA Astrophysics Data System (ADS)
Zhang, Linfeng; Han, Jiequn; Wang, Han; Car, Roberto; E, Weinan
2018-04-01
We introduce a scheme for molecular simulations, the deep potential molecular dynamics (DPMD) method, based on a many-body potential and interatomic forces generated by a carefully crafted deep neural network trained with ab initio data. The neural network model preserves all the natural symmetries in the problem. It is first-principles based in the sense that there are no ad hoc components aside from the network model. We show that the proposed scheme provides an efficient and accurate protocol in a variety of systems, including bulk materials and molecules. In all these cases, DPMD gives results that are essentially indistinguishable from the original data, at a cost that scales linearly with system size.
Review on solving the forward problem in EEG source analysis
Hallez, Hans; Vanrumste, Bart; Grech, Roberta; Muscat, Joseph; De Clercq, Wim; Vergult, Anneleen; D'Asseler, Yves; Camilleri, Kenneth P; Fabri, Simon G; Van Huffel, Sabine; Lemahieu, Ignace
2007-01-01
Background The aim of electroencephalogram (EEG) source localization is to find the brain areas responsible for EEG waves of interest. It consists of solving forward and inverse problems. The forward problem is solved by starting from a given electrical source and calculating the potentials at the electrodes. These evaluations are necessary to solve the inverse problem, which is defined as finding brain sources which are responsible for the measured potentials at the EEG electrodes. Methods While other reviews give an extensive summary of both the forward and inverse problems, this review article focuses on different aspects of solving the forward problem and it is intended for newcomers in this research field. Results It starts with focusing on the generators of the EEG: the post-synaptic potentials in the apical dendrites of pyramidal neurons. These cells generate an extracellular current which can be modeled by Poisson's differential equation, and Neumann and Dirichlet boundary conditions. The compartments in which these currents flow can be anisotropic (e.g. skull and white matter). In a three-shell spherical head model an analytical expression exists to solve the forward problem. During the last two decades researchers have tried to solve Poisson's equation in a realistically shaped head model obtained from 3D medical images, which requires numerical methods. The following methods are compared with each other: the boundary element method (BEM), the finite element method (FEM) and the finite difference method (FDM). In the last two methods anisotropic conducting compartments can conveniently be introduced. Then the focus will be set on the use of reciprocity in EEG source localization. It is introduced to speed up the forward calculations, which are here performed for each electrode position rather than for each dipole position. Solving Poisson's equation utilizing FEM and FDM corresponds to solving a large sparse linear system.
Iterative methods are required to solve these sparse linear systems. The following iterative methods are discussed: successive over-relaxation, the conjugate gradients method and the algebraic multigrid method. Conclusion Solving the forward problem has been well documented in the past decades. In the past, simplified spherical head models were used, whereas nowadays a combination of imaging modalities is used to accurately describe the geometry of the head model. Efforts have been made to realistically describe the shape of the head model, as well as the heterogeneity of the tissue types, and to realistically determine the conductivity. However, the determination and validation of the in vivo conductivity values is still an important topic in this field. In addition, more studies have to be done on the influence of all the parameters of the head model and of the numerical techniques on the solution of the forward problem. PMID:18053144
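The FDM route the review describes can be sketched in miniature: Poisson's equation with a point current source, solved by successive over-relaxation (one of the iterative methods listed above). This is a toy sketch, assuming unit conductivity, unit grid spacing, and grounded (Dirichlet) boundaries; a realistic head model would carry anisotropic conductivity tensors.

```python
def solve_poisson_sor(n=16, omega=1.7, iters=2000):
    """Solve laplacian(phi) = f on an n x n interior grid (unit spacing),
    with phi = 0 on the boundary, using successive over-relaxation (SOR)."""
    # Unit point "current source" near the grid centre (f = -1 there).
    f = [[0.0] * (n + 2) for _ in range(n + 2)]
    f[n // 2][n // 2] = -1.0
    phi = [[0.0] * (n + 2) for _ in range(n + 2)]
    for _ in range(iters):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                # Gauss-Seidel value, then over-relax toward it.
                gs = 0.25 * (phi[i-1][j] + phi[i+1][j]
                             + phi[i][j-1] + phi[i][j+1] - f[i][j])
                phi[i][j] += omega * (gs - phi[i][j])
    # Maximum residual |laplacian(phi) - f| over the interior.
    res = max(abs(phi[i-1][j] + phi[i+1][j] + phi[i][j-1] + phi[i][j+1]
                  - 4.0 * phi[i][j] - f[i][j])
              for i in range(1, n + 1) for j in range(1, n + 1))
    return phi, res

phi, res = solve_poisson_sor()
```

With omega near the optimal value for this grid size, the iteration converges to the discrete solution; the potential peaks at the source node, as expected for a current injection.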
Beyond Corroboration: Strengthening Model Validation by Looking for Unexpected Patterns
Chérel, Guillaume; Cottineau, Clémentine; Reuillon, Romain
2015-01-01
Models of emergent phenomena are designed to provide an explanation to global-scale phenomena from local-scale processes. Model validation is commonly done by verifying that the model is able to reproduce the patterns to be explained. We argue that robust validation must not only be based on corroboration, but also on attempting to falsify the model, i.e. making sure that the model behaves soundly for any reasonable input and parameter values. We propose an open-ended evolutionary method based on Novelty Search to look for the diverse patterns a model can produce. The Pattern Space Exploration method was tested on a model of collective motion and compared to three common a priori sampling experiment designs. The method successfully discovered all known qualitatively different kinds of collective motion, and performed much better than the a priori sampling methods. The method was then applied to a case study of city system dynamics to explore the model’s predicted values of city hierarchisation and population growth. This case study showed that the method can provide insights on potential predictive scenarios as well as falsifiers of the model when the simulated dynamics are highly unrealistic. PMID:26368917
Flood, Nicola; Page, Andrew; Hooke, Geoff
2018-05-03
Routine outcome monitoring benefits treatment by identifying patients at risk of no change or deterioration. The present study compared two methods of identifying early change and their ability to predict negative outcomes on self-report symptom and wellbeing measures. 1467 voluntary day patients participated in a 10-day group Cognitive Behaviour Therapy (CBT) program and completed the symptom and wellbeing measures daily. Early change, as defined by (a) the clinical significance method and (b) longitudinal modelling, was compared on each measure. Early change as defined by the simpler clinical significance method was superior to longitudinal modelling at predicting negative outcomes. The longitudinal modelling method failed to detect a group of deteriorated patients, and agreement between the early change methods and the final unchanged outcome was higher for the clinical significance method. Therapists could use the clinical significance early change method during treatment to alert them to patients at risk of negative outcomes, which in turn could allow therapists to prevent those negative outcomes from occurring.
Thermal transport in the Falicov-Kimball model
NASA Astrophysics Data System (ADS)
Freericks, J. K.; Zlatić, V.
2001-12-01
We prove the Jonson-Mahan theorem for the thermopower of the Falicov-Kimball model by solving explicitly for correlation functions in the large dimensional limit. We prove a similar result for the thermal conductivity. We separate the results for thermal transport into the pieces of the heat current that arise from the kinetic energy and those that arise from the potential energy. Our method of proof is specific to the Falicov-Kimball model, but illustrates the near cancellations between the kinetic- and potential-energy pieces of the heat current implied by the Jonson-Mahan theorem.
Bujkiewicz, Sylwia; Thompson, John R; Riley, Richard D; Abrams, Keith R
2016-03-30
A number of meta-analytical methods have been proposed that aim to evaluate surrogate endpoints. Bivariate meta-analytical methods can be used to predict the treatment effect for the final outcome from the treatment effect estimate measured on the surrogate endpoint, while taking into account the uncertainty around the effect estimate for the surrogate endpoint. In this paper, extensions to multivariate models are developed aiming to include multiple surrogate endpoints, with the potential benefit of reducing the uncertainty when making predictions. In this Bayesian multivariate meta-analytic framework, the between-study variability is modelled as a product of univariate normal distributions. This formulation is particularly convenient for including multiple surrogate endpoints and flexible for modelling the outcomes, which can be surrogate endpoints to the final outcome and potentially to one another. Two models are proposed: first, using an unstructured between-study covariance matrix, by assuming the treatment effects on all outcomes are correlated; and second, using a structured between-study covariance matrix, by assuming treatment effects on some of the outcomes are conditionally independent. While the two models are developed for summary data on a study level, the individual-level association is taken into account by the use of Prentice's criteria (obtained from individual patient data) to inform the within-study correlations in the models. The modelling techniques are investigated using an example in relapsing remitting multiple sclerosis, where disability worsening is the final outcome, while relapse rate and MRI lesions are potential surrogates to the disability progression. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Using Visual Analysis to Evaluate and Refine Multilevel Models of Single-Case Studies
ERIC Educational Resources Information Center
Baek, Eun Kyeng; Petit-Bois, Merlande; Van den Noortgate, Wim; Beretvas, S. Natasha; Ferron, John M.
2016-01-01
In special education, multilevel models of single-case research have been used as a method of estimating treatment effects over time and across individuals. Although multilevel models can accurately summarize the effect, it is known that if the model is misspecified, inferences about the effects can be biased. Concern with the potential for model…
Alan K. Swanson; Solomon Z. Dobrowski; Andrew O. Finley; James H. Thorne; Michael K. Schwartz
2013-01-01
The uncertainty associated with species distribution model (SDM) projections is poorly characterized, despite its potential value to decision makers. Error estimates from most modelling techniques have been shown to be biased due to their failure to account for spatial autocorrelation (SAC) of residual error. Generalized linear mixed models (GLMM) have the ability to...
Neuronal models for evaluation of proliferation in vitro using high content screening
In vitro test methods can provide a rapid approach for the screening of large numbers of chemicals for their potential to produce toxicity (hazard identification). In order to identify potential developmental neurotoxicants, a battery of in vitro tests for neurodevelopmental proc...
Model uncertainties do not affect observed patterns of species richness in the Amazon
Sales, Lilian Patrícia; Neves, Olívia Viana; De Marco, Paulo
2017-01-01
Background Climate change is arguably a major threat to biodiversity conservation and there are several methods to assess its impacts on species potential distribution. Yet the extent to which different approaches to species distribution modeling affect species richness patterns at the biogeographical scale remains unaddressed in the literature. In this paper, we verified whether the expected responses to climate change at the biogeographical scale—patterns of species richness and species vulnerability to climate change—are affected by the inputs used to model and project species distribution. Methods We modeled the distribution of 288 vertebrate species (amphibians, birds and mammals), all endemic to the Amazon basin, using different combinations of the following inputs known to affect the outcome of species distribution models (SDMs): 1) biological data type, 2) modeling methods, 3) greenhouse gas emission scenarios and 4) climate forecasts. We calculated uncertainty with a hierarchical ANOVA in which those different inputs were considered factors. Results The greatest source of variation was the modeling method. Model performance interacted with data type and modeling method. Absolute values of variation in suitable climate area were not equal among predictions, but some biological patterns were still consistent. All models predicted losses in the area that is climatically suitable for species, especially for amphibians and primates. All models also indicated a current east-west gradient in endemic species richness, from the Andes foothills downstream along the Amazon river. Again, all models predicted future movements of species upward into the Andes mountains and overall species richness losses. Conclusions From a methodological perspective, our work highlights that SDMs are a useful tool for assessing impacts of climate change on biodiversity. Uncertainty exists but biological patterns are still evident at large spatial scales.
As modeling methods are the greatest source of variation, choosing the appropriate statistics according to the study objective is also essential for estimating the impacts of climate change on species distribution. Yet from a conservation perspective, we show that Amazon endemic fauna is potentially vulnerable to climate change, due to expected reductions on suitable climate area. Climate-driven faunal movements are predicted towards the Andes mountains, which might work as climate refugia for migrating species. PMID:29023503
Real-Time Kinetic Modeling of Voltage-Gated Ion Channels Using Dynamic Clamp
Milescu, Lorin S.; Yamanishi, Tadashi; Ptak, Krzysztof; Mogri, Murtaza Z.; Smith, Jeffrey C.
2008-01-01
We propose what to our knowledge is a new technique for modeling the kinetics of voltage-gated ion channels in a functional context, in neurons or other excitable cells. The principle is to pharmacologically block the studied channel type, and to functionally replace it with dynamic clamp, on the basis of a computational model. Then, the parameters of the model are modified in real time (manually or automatically), with the objective of matching the dynamical behavior of the cell (e.g., action potential shape and spiking frequency), but also the transient and steady-state properties of the model (e.g., those derived from voltage-clamp recordings). Through this approach, one may find a model and parameter values that explain both the observed cellular dynamics and the biophysical properties of the channel. We extensively tested the method, focusing on Nav models. Complex Markov models (10–12 states or more) could be accurately integrated in real time at >50 kHz using the transition probability matrix, but not the explicit Euler method. The practicality of the technique was tested with experiments in raphe pacemaker neurons. Through automated real-time fitting, a Hodgkin-Huxley model could be found that reproduced well the action potential shape and the spiking frequency. Adding a virtual axonal compartment with a high density of Nav channels further improved the action potential shape. The computational procedure was implemented in the free QuB software, running under Microsoft Windows and featuring a friendly graphical user interface. PMID:18375511
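The numerical contrast the abstract reports (transition-probability-matrix integration remains stable where explicit Euler fails) can be illustrated on the simplest possible case, a two-state channel with constant rates. The rate values here are hypothetical and the model is a deliberate simplification of the 10-12-state Markov models discussed above; for constant rates the transition-probability update is exact in closed form.

```python
import math

# Two-state channel C <-> O with constant rates (hypothetical values, 1/ms):
#   d p_open/dt = alpha*(1 - p_open) - beta*p_open
alpha, beta = 100.0, 50.0
p_inf = alpha / (alpha + beta)     # steady-state open probability

def step_euler(p, dt):
    """Explicit (forward) Euler update; unstable when dt*(alpha+beta) > 2."""
    return p + dt * (alpha * (1.0 - p) - beta * p)

def step_tpm(p, dt):
    """Transition-probability update (exact for constant rates):
    the deviation from steady state decays by exp(-(alpha+beta)*dt)."""
    return p_inf + (p - p_inf) * math.exp(-(alpha + beta) * dt)

dt = 0.05                          # ms; dt*(alpha+beta) = 7.5, far above the limit
p_euler = p_tpm = 0.0
for _ in range(100):
    p_euler = step_euler(p_euler, dt)
    p_tpm = step_tpm(p_tpm, dt)
```

With this step size the Euler deviation from steady state is multiplied by (1 - 7.5) = -6.5 each step and blows up, while the transition-probability update stays a valid probability at any step size.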
NASA Astrophysics Data System (ADS)
Šprlák, M.; Han, S.-C.; Featherstone, W. E.
2017-12-01
Rigorous modelling of the spherical gravitational potential spectra from the volumetric density and geometry of an attracting body is discussed. Firstly, we derive mathematical formulas for the spatial analysis of spherical harmonic coefficients. Secondly, we present a numerically efficient algorithm for rigorous forward modelling. We consider the finite-amplitude topographic modelling methods as special cases, with additional postulates on the volumetric density and geometry. Thirdly, we implement our algorithm in the form of computer programs and test their correctness with respect to the finite-amplitude topography routines. For this purpose, synthetic and realistic numerical experiments, applied to the gravitational field and geometry of the Moon, are performed. We also investigate the optimal choice of input parameters for the finite-amplitude modelling methods. Fourthly, we exploit the rigorous forward modelling for the determination of the spherical gravitational potential spectra inferred by lunar crustal models with uniform, laterally variable, radially variable, and spatially (3D) variable bulk density. Also, we analyse these four different crustal models in terms of their spectral characteristics and band-limited radial gravitation. We demonstrate the applicability of the rigorous forward modelling using currently available computational resources up to degree and order 2519 of the spherical harmonic expansion, which corresponds to a resolution of 2.2 km on the surface of the Moon. Computer codes, a user manual and scripts developed for the purposes of this study are publicly available to potential users.
NASA Astrophysics Data System (ADS)
Lütgebaucks, Cornelis; Gonella, Grazia; Roke, Sylvie
2016-11-01
The electrostatic environment of aqueous systems is an essential ingredient for the function of any living system. To understand the electrostatic properties and their molecular foundation in soft, living, and three-dimensional systems, we developed a table-top model-free method to determine the surface potential of nano- and microscopic objects in aqueous solutions. Angle-resolved nonresonant second harmonic (SH) scattering measurements contain enough information to determine the surface potential unambiguously, without making assumptions on the structure of the interfacial region. The scattered SH light that is emitted from both the particle interface and the diffuse double layer can be detected in two different polarization states that have independent scattering patterns. The angular shape and intensity are determined by the surface potential and the second-order surface susceptibility. Calibrating the response with the SH intensity of bulk water, a single, unique surface potential value can be extracted. We demonstrate the method with 80 nm bare oil droplets in water and ˜50 nm dioleoylphosphatidylcholine (DOPC) and dioleoylphosphatidylserine (DOPS) liposomes at various ionic strengths.
A Predictive Model for Medical Events Based on Contextual Embedding of Temporal Sequences
Wang, Zhimu; Huang, Yingxiang; Wang, Shuang; Wang, Fei; Jiang, Xiaoqian
2016-01-01
Background Medical concepts are inherently ambiguous and error-prone due to human fallibility, which makes it hard for them to be fully used by classical machine learning methods (e.g., for tasks like early stage disease prediction). Objective Our work was to create a new machine-friendly representation that resembles the semantics of medical concepts. We then developed a sequential predictive model for medical events based on this new representation. Methods We developed novel contextual embedding techniques to combine different medical events (e.g., diagnoses, prescriptions, and lab tests). Each medical event is converted into a numerical vector that resembles its “semantics,” via which the similarity between medical events can be easily measured. We developed simple and effective predictive models based on these vectors to predict novel diagnoses. Results We evaluated our sequential prediction model (and standard learning methods) in estimating the risk of potential diseases based on our contextual embedding representation. Our model achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 0.79 on chronic systolic heart failure and an average AUC of 0.67 (over the 80 most common diagnoses) using the Medical Information Mart for Intensive Care III (MIMIC-III) dataset. Conclusions We propose a general early prognosis predictor for 80 different diagnoses. Our method computes a numeric representation for each medical event to uncover the potential meaning of those events. Our results demonstrate the efficiency of the proposed method, which will benefit patients and physicians by offering more accurate diagnoses. PMID:27888170
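The underlying idea, events that occur in similar contexts receive similar vectors, can be sketched with a plain co-occurrence embedding. This is a generic stand-in, not the authors' contextual embedding technique, and the event codes and timelines below are invented for the toy example.

```python
import math
from collections import defaultdict

# Toy "patient timelines" of medical event codes (hypothetical).
timelines = [
    ["dx:hf", "rx:furosemide", "lab:bnp_high"],
    ["dx:hf", "rx:furosemide", "lab:bnp_high", "dx:ckd"],
    ["dx:flu", "rx:oseltamivir"],
    ["dx:flu", "rx:oseltamivir", "lab:temp_high"],
]

def cooccurrence_vectors(seqs):
    """Embed each event as its vector of co-occurrence counts with all events."""
    vocab = sorted({e for s in seqs for e in s})
    index = {e: i for i, e in enumerate(vocab)}
    vec = defaultdict(lambda: [0.0] * len(vocab))
    for s in seqs:
        for a in s:
            for b in s:
                if a != b:
                    vec[a][index[b]] += 1.0
    return dict(vec)

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

vecs = cooccurrence_vectors(timelines)
same_context = cosine(vecs["dx:hf"], vecs["rx:furosemide"])   # shared context
diff_context = cosine(vecs["dx:hf"], vecs["rx:oseltamivir"])  # disjoint context
```

Events from the heart-failure timelines end up with similar vectors, while events that never share a timeline score zero; a predictive model can then operate on these vectors instead of raw codes.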
Ovchinnikov, Victor; Nam, Kwangho; Karplus, Martin
2016-08-25
A method is developed to obtain simultaneously free energy profiles and diffusion constants from restrained molecular simulations in diffusive systems. The method is based on low-order expansions of the free energy and diffusivity as functions of the reaction coordinate. These expansions lead to simple analytical relationships between simulation statistics and model parameters. The method is tested on 1D and 2D model systems; its accuracy is found to be comparable to or better than that of the existing alternatives, which are briefly discussed. An important aspect of the method is that the free energy is constructed by integrating its derivatives, which can be computed without need for overlapping sampling windows. The implementation of the method in any molecular simulation program that supports external umbrella potentials (e.g., CHARMM) requires modification of only a few lines of code. As a demonstration of its applicability to realistic biomolecular systems, the method is applied to model the α-helix ↔ β-sheet transition in a 16-residue peptide in implicit solvent, with the reaction coordinate provided by the string method. Possible modifications of the method are briefly discussed; they include generalization to multidimensional reaction coordinates [in the spirit of the model of Ermak and McCammon (Ermak, D. L.; McCammon, J. A. J. Chem. Phys. 1978, 69, 1352-1360)], a higher-order expansion of the free energy surface, applicability in nonequilibrium systems, and a simple test for Markovianity. In view of the small overhead of the method relative to standard umbrella sampling, we suggest its routine application in the cases where umbrella potential simulations are appropriate.
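The reconstruction step highlighted above, building the free energy by integrating its derivative without overlapping windows, can be sketched numerically. This is not the authors' estimator: analytic mean-force values for an assumed double-well profile stand in for the per-window simulation statistics, so only the integration idea is illustrated.

```python
def integrate_profile(xs, dAdx):
    """Reconstruct A(x) (up to an additive constant) from derivative samples
    at the window centres xs, by trapezoidal integration with A(xs[0]) = 0."""
    A = [0.0]
    for i in range(1, len(xs)):
        h = xs[i] - xs[i - 1]
        A.append(A[-1] + 0.5 * h * (dAdx[i - 1] + dAdx[i]))
    return A

# Window centres along the reaction coordinate, and mean-force estimates
# taken here analytically from the assumed free energy A(x) = (x^2 - 1)^2.
xs = [i * 0.05 - 2.0 for i in range(81)]          # -2.0 .. 2.0
dAdx = [4.0 * x * (x * x - 1.0) for x in xs]      # dA/dx at each window
A = integrate_profile(xs, dAdx)
```

No window needs to overlap its neighbours: each contributes only a local derivative estimate, and the trapezoid rule stitches them into a profile whose wells at x = ±1 sit about 9 units below the anchored endpoint.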
Improved accuracy for finite element structural analysis via an integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
Davidson, Shaun M; Docherty, Paul D; Murray, Rua
2017-03-01
Parameter identification is an important and widely used process across the field of biomedical engineering. However, it is susceptible to a number of potential difficulties, such as parameter trade-off, causing premature convergence at non-optimal parameter values. The proposed Dimensional Reduction Method (DRM) addresses this issue by iteratively reducing the dimension of hyperplanes where trade off occurs, and running subsequent identification processes within these hyperplanes. The DRM was validated using clinical data to optimize 4 parameters of the widely used Bergman Minimal Model of glucose and insulin kinetics, as well as in-silico data to optimize 5 parameters of the Pulmonary Recruitment (PR) Model. Results were compared with the popular Levenberg-Marquardt (LMQ) Algorithm using a Monte-Carlo methodology, with both methods afforded equivalent computational resources. The DRM converged to a lower or equal residual value in all tests run using the Bergman Minimal Model and actual patient data. For the PR model, the DRM attained significantly lower overall median parameter error values and lower residuals in the vast majority of tests. This shows the DRM has potential to provide better resolution of optimum parameter values for the variety of biomedical models in which significant levels of parameter trade-off occur. Copyright © 2017 Elsevier Inc. All rights reserved.
Integrating language models into classifiers for BCI communication: a review
NASA Astrophysics Data System (ADS)
Speier, W.; Arnold, C.; Pouratian, N.
2016-06-01
Objective. The present review systematically examines the integration of language models to improve classifier performance in brain-computer interface (BCI) communication systems. Approach. The domain of natural language has been studied extensively in linguistics and has been used in the natural language processing field in applications including information extraction, machine translation, and speech recognition. While these methods have been used for years in traditional augmentative and assistive communication devices, information about the output domain has largely been ignored in BCI communication systems. Over the last few years, BCI communication systems have started to leverage this information through the inclusion of language models. Main results. Although this movement began only recently, studies have already shown the potential of language integration in BCI communication and it has become a growing field in BCI research. BCI communication systems using language models in their classifiers have progressed down several parallel paths, including: word completion; signal classification; integration of process models; dynamic stopping; unsupervised learning; error correction; and evaluation. Significance. Each of these methods has shown significant progress, but they have largely been addressed separately. Combining these methods could exploit the full potential of language models, yielding further performance improvements. This integration should be a priority as the field works to create a BCI system that meets the needs of the amyotrophic lateral sclerosis population.
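One of the integration routes listed above, combining signal classification with a language model, reduces to a Bayes-rule fusion of classifier likelihoods with language-model priors. The sketch below is a toy of that idea only; every probability, letter, and bigram value is a made-up assumption, not data from any BCI study.

```python
# Hypothetical per-letter likelihoods from an EEG classifier, P(signal | letter).
likelihood = {"a": 0.30, "b": 0.35, "c": 0.35}

# Hypothetical bigram language model, P(next letter | previous letter).
bigram = {"c": {"a": 0.70, "b": 0.10, "c": 0.20}}

def fuse(prev_letter, likelihood, bigram):
    """Posterior over the next letter:
    P(letter | signal, prev) is proportional to P(signal | letter) * P(letter | prev)."""
    prior = bigram[prev_letter]
    unnorm = {l: likelihood[l] * prior.get(l, 0.0) for l in likelihood}
    z = sum(unnorm.values())
    return {l: p / z for l, p in unnorm.items()}

posterior = fuse("c", likelihood, bigram)
best = max(posterior, key=posterior.get)
```

Here the classifier alone slightly prefers "b" or "c", but the language-model prior after "c" tips the fused posterior decisively toward "a", which is exactly the behavior that improves noisy-signal spelling accuracy.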
An analytical model for hydraulic fracturing in shallow bedrock formations.
dos Santos, José Sérgio; Ballestero, Thomas Paul; Pitombeira, Ernesto da Silva
2011-01-01
A theoretical method is proposed to estimate post-fracturing fracture size and transmissivity, and as a test of the methodology, data collected from two wells were used for verification. This method can be employed before hydrofracturing in order to obtain estimates of the potential hydraulic benefits of hydraulic fracturing. Five different pumping test analysis methods were used to evaluate the well hydraulic data. The most effective methods were the Papadopulos-Cooper model (1967), which includes wellbore storage effects, and the Gringarten-Ramey model (1974), known as the single horizontal fracture model. The hydraulic parameters resulting from fitting these models to the field data revealed that as a result of hydraulic fracturing, the transmissivity increased more than 46 times in one well and increased 285 times in the other well. The model developed by dos Santos (2008), which considers horizontal radial fracture propagation from the hydraulically fractured well, was used to estimate potential fracture geometry after hydrofracturing. For the two studied wells, their fractures could have propagated to distances of almost 175 m or more and developed maximum apertures of about 2.20 mm and hydraulic apertures close to 0.30 mm. Fracturing at this site appears to have expanded and propagated existing fractures and not created new fractures. Hydraulic apertures calculated from pumping test analyses closely matched the results obtained from the hydraulic fracturing model. As a result of this model, post-fracturing geometry and resulting post-fracturing well yield can be estimated before the actual hydrofracturing. Copyright © 2010 The Author(s). Journal compilation © 2010 National Ground Water Association.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ringler, Todd; Ju, Lili; Gunzburger, Max
2008-11-14
During the next decade and beyond, climate system models will be challenged to resolve scales and processes that are far beyond their current scope. Each climate system component has its prototypical example of an unresolved process that may strongly influence the global climate system, ranging from eddy activity within ocean models, to ice streams within ice sheet models, to surface hydrological processes within land system models, to cloud processes within atmosphere models. These new demands will almost certainly result in the development of multiresolution schemes that are able, at least regionally, to faithfully simulate these fine-scale processes. Spherical centroidal Voronoi tessellations (SCVTs) offer one potential path toward the development of robust, multiresolution climate system model components. SCVTs allow for the generation of high-quality Voronoi diagrams and Delaunay triangulations through the use of an intuitive, user-defined density function. In each of the examples provided, this method results in high-quality meshes where the quality measures are guaranteed to improve as the number of nodes is increased. Real-world examples are developed for the Greenland ice sheet and the North Atlantic ocean. Idealized examples are developed for ocean–ice shelf interaction and for regional atmospheric modeling. In addition to defining, developing, and exhibiting SCVTs, we pair this mesh generation technique with a previously developed finite-volume method. Our numerical example is based on the nonlinear shallow water equations spanning the entire surface of the sphere. This example is used to elucidate both the potential benefits of this multiresolution method and the challenges ahead.
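The density-function-driven mesh grading behind centroidal Voronoi tessellations rests on Lloyd-style iteration: generators repeatedly move to the density-weighted centroids of their Voronoi cells. A 1-D toy sketch of that idea (the interval domain, sample counts, and linear density are invented for illustration; real SCVTs operate on the sphere):

```python
import random

# Toy sketch of the core SCVT idea: density-weighted Lloyd iteration moves
# generators to the density-weighted centroids of their Voronoi cells, so
# cells shrink where the user-defined density is high. This 1-D version on
# [0, 1] is an illustration only; real SCVTs operate on the sphere.

def density(x):
    return 1.0 + 9.0 * x          # request finer resolution near x = 1

random.seed(1)
gens = sorted(random.random() for _ in range(8))
samples = [i / 2000 for i in range(2000)]

for _ in range(100):              # Lloyd iterations
    sums = [0.0] * len(gens)
    wts  = [0.0] * len(gens)
    for x in samples:
        # each sample belongs to the Voronoi cell of the nearest generator
        i = min(range(len(gens)), key=lambda j: abs(gens[j] - x))
        w = density(x)
        sums[i] += w * x
        wts[i]  += w
    gens = sorted(s / w for s, w in zip(sums, wts) if w > 0)

spacings = [b - a for a, b in zip(gens, gens[1:])]
print(spacings[0] > spacings[-1])  # → True: cells are finer where density is high
```

The same mechanism, with the density function peaked over Greenland or the North Atlantic, is what lets a single mesh refine regionally.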
Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization
Zhao, Qiangfu; Liu, Yong
2015-01-01
A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not provide sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established on this paradigm principle. In feature space, we design a linear classifier as a human model to capture user preference knowledge that cannot be separated linearly in the original discrete search space. The human model established by this method predicts the potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation results with a pseudo-IEC user show that our proposed model and method can enhance IEC search significantly. PMID:25879050
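The gist of a kernelized preference model, linear in feature space but nonlinear in the original space, can be illustrated with a toy kernel perceptron. The RBF kernel, the 1-D data, and the like/dislike labels below are assumptions for illustration, not the paper's actual classifier:

```python
import math

# Toy kernel perceptron standing in for a "human model" that labels
# individuals as liked (+1) or disliked (-1). The RBF kernel and the 1-D
# data are invented: liked designs cluster in the middle, so no linear
# separation exists on the line, but one exists in the kernel feature space.

def rbf(x, y, gamma=1.0):
    return math.exp(-gamma * (x - y) ** 2)

X = [-2.0, -1.5, -0.2, 0.0, 0.3, 1.6, 2.1]
y = [-1, -1, +1, +1, +1, -1, -1]

# kernel perceptron: alpha[i] counts mistakes made on training point i
alpha = [0] * len(X)
for _ in range(20):                      # training epochs
    for i, xi in enumerate(X):
        s = sum(a * yj * rbf(xj, xi) for a, yj, xj in zip(alpha, y, X))
        if y[i] * s <= 0:
            alpha[i] += 1

def predict(x):
    s = sum(a * yj * rbf(xj, x) for a, yj, xj in zip(alpha, y, X))
    return 1 if s > 0 else -1

print(predict(0.1), predict(-1.8))  # → 1 -1
```

In an IEC loop, a model like this would pre-screen candidate individuals so the human user evaluates fewer of them.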
Robust global identifiability theory using potentials--Application to compartmental models.
Wongvanich, N; Hann, C E; Sirisena, H R
2015-04-01
This paper presents a global practical identifiability theory for analyzing and identifying linear and nonlinear compartmental models. The compartmental system is prolonged onto the potential jet space to formulate a set of input-output equations that are integrals in terms of the measured data, which allows for robust identification of parameters without requiring any simulation of the model differential equations. Two classes of linear and non-linear compartmental models are considered. The theory is first applied to analyze the linear nitrous oxide (N2O) uptake model. The fitting accuracy of the identified models from differential jet space and potential jet space identifiability theories is compared with a realistic noise level of 3% which is derived from sensor noise data in the literature. The potential jet space approach gave a match that was well within the coefficient of variation. The differential jet space formulation was unstable and not suitable for parameter identification. The proposed theory is then applied to a nonlinear immunological model for mastitis in cows. In addition, the model formulation is extended to include an iterative method which allows initial conditions to be accurately identified. With up to 10% noise, the potential jet space theory predicts the normalized population concentration infected with pathogens, to within 9% of the true curve. Copyright © 2015 Elsevier Inc. All rights reserved.
Comparison Study of Three Different Image Reconstruction Algorithms for MAT-MI
Xia, Rongmin; Li, Xu
2010-01-01
We report a theoretical study on magnetoacoustic tomography with magnetic induction (MAT-MI). Based on a description of the signal generation mechanism using Green's function, an acoustic dipole model is proposed to describe the acoustic source excited by the Lorentz force. Using Green's function, three reconstruction algorithms based on different models of the acoustic source (potential energy, vectored acoustic pressure, and divergence of the Lorentz force) are derived, and corresponding numerical simulations were conducted to compare them. The computer simulation results indicate that the potential energy method and the vectored pressure method can directly reconstruct the Lorentz force distribution and give a more accurate reconstruction of electrical conductivity. PMID:19846363
NASA Astrophysics Data System (ADS)
Zhao, C. S.; Yang, S. T.; Liu, C. M.; Dou, T. W.; Yang, Z. L.; Yang, Z. Y.; Liu, X. L.; Xiang, H.; Nie, S. Y.; Zhang, J. L.; Mitrovic, S. M.; Yu, Q.; Lim, R. P.
2015-04-01
Aquatic ecological rehabilitation is attracting increasing public and research attention. An effective method that requires less data and expertise would help in the assessment of rehabilitation potential and in the monitoring of rehabilitation activities, since complicated theories and excessive data requirements on assemblage information make many current assessment models expensive and limit their wide use. This paper presents an assessment model for restoration potential that links hydrologic, physical and chemical habitat factors to fish assemblage attributes drawn from monitoring datasets on hydrology, water quality and fish assemblages at a total of 144 sites, where 5084 fish were sampled and tested. In this model, three newly developed sub-models, the integrated habitat suitability index (IHSI), integrated ecological niche breadth (INB) and integrated ecological niche overlap (INO), are established to study the spatial heterogeneity of the restoration potential of fish assemblages, based on gradient methods of the habitat suitability index and ecological niche models. To reduce uncertainties in the model, as many fish species as possible, including important native fish, were selected as dominant species, with monitoring occurring over several seasons to comprehensively select key habitat factors. Furthermore, a detrended correspondence analysis (DCA) was employed prior to a canonical correspondence analysis (CCA) of the data to avoid the "arc effect" in the selection of key habitat factors. Application of the model to data collected at Jinan City, China proved effective, revealing three lower-potential regions that should be targeted in future aquatic ecosystem rehabilitation programs. These regions were well validated by the distribution of two habitat parameters: river width and transparency. River width positively influenced and transparency negatively influenced fish assemblages.
The model can be applied for monitoring the effects of fish assemblage restoration. This has large ramifications for the restoration of aquatic ecosystems and spatial heterogeneity of fish assemblages all over the world.
Modeling material interfaces with hybrid adhesion method
Brown, Nicholas Taylor; Qu, Jianmin; Martinez, Enrique
2017-01-27
A molecular dynamics simulation approach is presented to approximate layered material structures using discrete interatomic potentials through classical mechanics and the underlying principles of quantum mechanics. This method isolates the energetic contributions of the system into two pure material layers and an interfacial region used to simulate the adhesive properties of the diffused interface. The strength relationship of the adhesion contribution is calculated through small-scale separation calculations and applied to the molecular surfaces through an inter-layer bond criterion. By segregating the contributions into three regions and accounting for the interfacial excess energies through the adhesive surface bonds, it is possible to model each material with an independent potential while maintaining an acceptable level of accuracy in the calculation of mechanical properties. This method is intended for the atomistic study of delamination mechanics, typically observed in thin-film applications. Therefore, the work presented in this paper focuses on mechanical tensile behaviors, with observations of the elastic modulus and the delamination failure mode. To introduce the hybrid adhesion method, we apply the approach to an ideal bulk copper sample, where an interface is created by disassociating the force potential in the middle of the structure. Various mechanical behaviors are compared to a standard EAM control model to demonstrate the adequacy of this approach in a simple setting. In addition, we demonstrate the robustness of this approach by applying it to (1) a Cu-Cu 2O interface with interactions between two atom types, and (2) an Al-Cu interface with two dissimilar FCC lattices. These additional examples are verified against EAM and COMB control models to demonstrate the accurate simulation of failure through delamination, and the formation and propagation of dislocations under loads.
Finally, we conclude that modeling the energy contributions of an interface with hybrid adhesion bonds provides an accurate approximation method for studying large-scale mechanical properties, as well as for representing various delamination phenomena at the atomic scale.
One-Dimensional Harmonic Model for Biomolecules
Krizan, John E.
1973-01-01
Following in spirit a paper by Rosen, we propose a one-dimensional harmonic model for biomolecules. Energy bands with gaps of the order of semi-conductor gaps are found. The method is discussed for general symmetric and periodic potential functions. PMID:4709518
Nondestructive testing methods to predict effect of degradation on wood : a critical assessment
J. Kaiserlik
1978-01-01
Results are reported for an assessment of methods for predicting strength of wood, wood-based, or related material. Research directly applicable to nondestructive strength prediction was very limited. In wood, strength prediction research is limited to vibration decay, wave attenuation, and multiparameter "degradation models." Nonwood methods with potential...
Fathead minnows are used as a model fish species for the characterization of the endocrine-disrupting potential of environmental contaminants. This research describes the development of a PCR method that can determine the genetic sex in this species. This method, when incorpora...
An analytical method for designing low noise helicopter transmissions
NASA Technical Reports Server (NTRS)
Bossler, R. B., Jr.; Bowes, M. A.; Royal, A. C.
1978-01-01
The development and experimental validation of a method for analytically modeling the noise mechanism in the helicopter geared power transmission systems is described. This method can be used within the design process to predict interior noise levels and to investigate the noise reducing potential of alternative transmission design details. Examples are discussed.
Development and application of accurate analytical models for single active electron potentials
NASA Astrophysics Data System (ADS)
Miller, Michelle; Jaron-Becker, Agnieszka; Becker, Andreas
2015-05-01
The single active electron (SAE) approximation is a theoretical model frequently employed to study scenarios in which inner-shell electrons may productively be treated as frozen spectators to a physical process of interest, and accurate analytical approximations for these potentials are sought as a useful simulation tool. Density functional theory is often used to construct a SAE potential, requiring that a further approximation for the exchange-correlation functional be made. In this study, we employ the Krieger, Li, and Iafrate (KLI) modification to the optimized-effective-potential (OEP) method to reduce the complexity of the problem to the straightforward solution of a system of linear equations, through simple arguments regarding the behavior of the exchange-correlation potential in regions where a single orbital dominates. We employ this method for the solution of atomic and molecular potentials, and use the resultant curve to devise a systematic construction of highly accurate and useful analytical approximations for several systems. Supported by the U.S. Department of Energy (Grant No. DE-FG02-09ER16103), and the U.S. National Science Foundation (Graduate Research Fellowship, Grants No. PHY-1125844 and No. PHY-1068706).
Artificial intelligence in the diagnosis of low back pain.
Mann, N H; Brown, M D
1991-04-01
Computerized methods are used to recognize the characteristics of patient pain drawings. Artificial neural network (ANN) models are compared with expert predictions and traditional statistical classification methods when placing the pain drawings of low back pain patients into one of five clinically significant categories. A discussion is undertaken outlining the differences in these classifiers and the potential benefits of the ANN model as an artificial intelligence technique.
Evaluating simplified methods for liquefaction assessment for loss estimation
NASA Astrophysics Data System (ADS)
Kongar, Indranil; Rossetto, Tiziana; Giovinazzi, Sonia
2017-06-01
Currently, some catastrophe models used by the insurance industry account for liquefaction by applying a simple factor to shaking-induced losses. The factor is based only on local liquefaction susceptibility and this highlights the need for a more sophisticated approach to incorporating the effects of liquefaction in loss models. This study compares 11 unique models, each based on one of three principal simplified liquefaction assessment methods: liquefaction potential index (LPI) calculated from shear-wave velocity, the HAZUS software method and a method created specifically to make use of USGS remote sensing data. Data from the September 2010 Darfield and February 2011 Christchurch earthquakes in New Zealand are used to compare observed liquefaction occurrences to forecasts from these models using binary classification performance measures. The analysis shows that the best-performing model is the LPI calculated using known shear-wave velocity profiles, which correctly forecasts 78 % of sites where liquefaction occurred and 80 % of sites where liquefaction did not occur, when the threshold is set at 7. However, these data may not always be available to insurers. The next best model is also based on LPI but uses shear-wave velocity profiles simulated from the combination of USGS VS30 data and empirical functions that relate VS30 to average shear-wave velocities at shallower depths. This model correctly forecasts 58 % of sites where liquefaction occurred and 84 % of sites where liquefaction did not occur, when the threshold is set at 4. These scores increase to 78 and 86 %, respectively, when forecasts are based on liquefaction probabilities that are empirically related to the same values of LPI. This model is potentially more useful for insurance since the input data are publicly available. 
HAZUS models, which are commonly used in studies where no local model is available, perform poorly and incorrectly forecast 87 % of sites where liquefaction occurred, even at optimal thresholds. This paper also considers two models (HAZUS and EPOLLS) for estimation of the scale of liquefaction in terms of permanent ground deformation but finds that both models perform poorly, with correlations between observations and forecasts lower than 0.4 in all cases. Therefore these models potentially provide negligible additional value to loss estimation analysis outside of the regions for which they have been developed.
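The binary classification measures used for comparisons like the ones above can be sketched in a few lines: forecast liquefaction wherever LPI reaches the threshold, then score hit rates separately at sites that did and did not liquefy. The LPI values and observations below are fabricated for illustration:

```python
# Sketch of the binary-classification scoring behind comparisons like the
# ones above: forecast "liquefaction" wherever LPI reaches the threshold,
# then measure hit rates separately at sites that did and did not liquefy.
# The LPI values and observations below are fabricated for illustration.

def score(lpi, observed, threshold):
    tp = sum(1 for v, o in zip(lpi, observed) if v >= threshold and o)
    tn = sum(1 for v, o in zip(lpi, observed) if v < threshold and not o)
    pos = sum(observed)
    neg = len(observed) - pos
    return tp / pos, tn / neg  # hit rate at liquefied / non-liquefied sites

lpi      = [12.0, 8.5, 7.5, 3.2, 1.0, 5.0]
observed = [True, True, False, False, False, True]

tpr, tnr = score(lpi, observed, threshold=7)
print(f"correct at liquefied sites: {tpr:.0%}, at non-liquefied sites: {tnr:.0%}")
```

Sweeping the threshold and picking the best trade-off between the two rates is how an "optimal threshold" such as the 7 or 4 quoted above would be selected.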
A kernel regression approach to gene-gene interaction detection for case-control studies.
Larson, Nicholas B; Schaid, Daniel J
2013-11-01
Gene-gene interactions are increasingly being addressed as a potentially important contributor to the variability of complex traits. Consequently, attention has moved beyond single-locus analysis of association to more complex genetic models. Although several single-marker approaches toward interaction analysis have been developed, such methods suffer from very high testing dimensionality and do not take advantage of existing information, notably the definition of genes as functional units. Here, we propose a comprehensive family of gene-level score tests for identifying genetic elements of disease risk, in particular pairwise gene-gene interactions. Using kernel machine methods, we devise score-based variance component tests under a generalized linear mixed model framework. We conducted simulations based upon coalescent genetic models to evaluate the performance of our approach under a variety of disease models. These simulations indicate that our methods are generally higher powered than alternative gene-level approaches and at worst competitive with exhaustive SNP-level (where SNP is single-nucleotide polymorphism) analyses. Furthermore, we observe that simulated epistatic effects resulted in significant marginal testing results for the involved genes regardless of whether or not true main effects were present. We detail the benefits of our methods and discuss potential genome-wide analysis strategies for gene-gene interaction analysis in a case-control study design. © 2013 WILEY PERIODICALS, INC.
NASA Astrophysics Data System (ADS)
Galanti, Eli; Durante, Daniele; Finocchiaro, Stefano; Iess, Luciano; Kaspi, Yohai
2017-07-01
The upcoming Juno spacecraft measurements have the potential of improving our knowledge of Jupiter’s gravity field. The analysis of the Juno Doppler data will provide a very accurate reconstruction of spatial gravity variations, but these measurements will be very accurate only over a limited latitudinal range. In order to deduce the full gravity field of Jupiter, additional information needs to be incorporated into the analysis, especially regarding the Jovian flow structure and its depth, which can influence the measured gravity field. In this study we propose a new iterative method for the estimation of the Jupiter gravity field, using a simulated Juno trajectory, a trajectory estimation model, and an adjoint-based inverse model for the flow dynamics. We test this method both for zonal harmonics only and with a full gravity field including tesseral harmonics. The results show that this method can fit some of the gravitational harmonics better to the “measured” harmonics, mainly because of the added information from the dynamical model, which includes the flow structure. Thus, it is suggested that the method presented here has the potential of improving the accuracy of the expected gravity harmonics estimated from the Juno and Cassini radio science experiments.
2012-01-01
The ORCHESTRA online questionnaire on "benefits and barriers to the use of QSAR methods" addressed the academic, consultant, regulatory and industry communities potentially interested in QSAR methods in the context of REACH. Replies from more than 60 stakeholders produced some insights on the actual application of QSAR methods and how to improve their use. A majority of respondents state that they have used QSAR methods, and all have some future plans to test or use QSAR methods in accordance with their stakeholder role. The stakeholder respondents cited a total of 28 models, methods or software that they have actually applied. The three most frequently cited suites, used moreover by all the stakeholder categories, are the OECD Toolbox, EPISuite and CAESAR; all are free tools. Results suggest that stereotyped assumptions about the barriers to application of QSAR may be incorrect. Economic costs (including potential delays) are not found to be a major barrier, and only one respondent "prefers" traditional, well-known and accepted toxicological assessment methods. Information and guidance may be the keys to reinforcing use of QSAR models. Regulators appear most interested in obtaining a clear explanation of the basis of the models, to provide a solid basis for decisions. Scientists appear most interested in the exploration of the scientific capabilities of the QSAR approach. Industry shows interest in obtaining reassurance that appropriate uses of QSAR will be accepted by regulators. PMID:23244245
The electrostatic interaction is a critical component of intermolecular interactions in biological processes. Rapid methods for the computation and characterization of the molecular electrostatic potential (MEP) that segment the molecular charge distribution and replace this cont...
NASA Astrophysics Data System (ADS)
Moghim, S.; Hsu, K.; Bras, R. L.
2013-12-01
General Circulation Models (GCMs) are used to predict circulation and energy transfers between the atmosphere and the land. It is known that these models produce biased results that will have an impact on their use. This work proposes a new method for bias correction: the equidistant cumulative distribution function-artificial neural network (EDCDFANN) procedure. The method uses artificial neural networks (ANNs) as a surrogate model to estimate bias-corrected temperature, given an identification of the system derived from GCM output variables. A two-layer feed-forward neural network is trained with observations during a historical period, and the adjusted network can then be used to predict bias-corrected temperature for future periods. To capture extreme values, this method is combined with the equidistant CDF matching method (EDCDF; Li et al. 2010). The proposed method is tested with Community Climate System Model (CCSM3) outputs, using air and skin temperature, specific humidity, and shortwave and longwave radiation as inputs to the ANN. The method decreases the mean square error and increases the spatial correlation between the modeled temperature and the observed one. The results indicate that EDCDFANN has the potential to remove the biases of the model outputs.
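The equidistant CDF matching step can be sketched in a few lines. The Gaussian toy series below are synthetic stand-ins (not CCSM3 output), and the function follows the Li et al. 2010 form: shift a future model value by the observed-minus-modeled quantile difference, evaluated at that value's quantile in the future-model CDF.

```python
import numpy as np

# Minimal sketch of equidistant CDF matching (after Li et al. 2010):
# a future model value is shifted by the observed-minus-modeled quantile
# difference, evaluated at that value's quantile in the future-model CDF.
# All three series are synthetic stand-ins, not CCSM3 output.

rng = np.random.default_rng(0)
obs_hist   = rng.normal(15.0, 2.0, 1000)   # observed historical temperature
model_hist = rng.normal(13.0, 3.0, 1000)   # biased model, historical run
model_fut  = rng.normal(14.0, 3.0, 1000)   # biased model, future run

def edcdf(x):
    q = (model_fut < x).mean()             # quantile of x in the future CDF
    return x + np.quantile(obs_hist, q) - np.quantile(model_hist, q)

corrected = np.array([edcdf(x) for x in model_fut])

# the raw cold bias is removed; what remains relative to the observations
# is the model's own projected change (about +1 degree in this toy setup)
print(f"raw future-minus-obs offset:       {model_fut.mean() - obs_hist.mean():+.2f}")
print(f"corrected future-minus-obs offset: {corrected.mean() - obs_hist.mean():+.2f}")
```

In the full EDCDFANN procedure, an ANN surrogate would supply the temperature estimate that this matching step then adjusts at the extremes.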
Modelling Coastal Cliff Recession Based on the GIM-DDD Method
NASA Astrophysics Data System (ADS)
Gong, Bin; Wang, Shanyong; Sloan, Scott William; Sheng, Daichao; Tang, Chun'an
2018-04-01
The unpredictable and instantaneous collapse behaviour of coastal rocky cliffs may cause damage that extends significantly beyond the area of failure. Gravitational movements that occur during coastal cliff recession involve two major stages: the small deformation stage and the large displacement stage. In this paper, a method of simulating the entire progressive failure process of coastal rocky cliffs is developed based on the gravity increase method (GIM), the rock failure process analysis method and the discontinuous deformation analysis method, and it is referred to as the GIM-DDD method. The small deformation stage, which includes crack initiation, propagation and coalescence processes, and the large displacement stage, which includes block translation and rotation processes during the rocky cliff collapse, are modelled using the GIM-DDD method. In addition, acoustic emissions, stress field variations, crack propagation and failure mode characteristics are further analysed to provide insights that can be used to predict, prevent and minimize potential economic losses and casualties. The calculation and analytical results are consistent with previous studies, which indicate that the developed method provides an effective and reliable approach for performing rocky cliff stability evaluations and coastal cliff recession analyses and has considerable potential for improving the safety and protection of seaside cliff areas.
Uncertainty quantification for environmental models
Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming
2012-01-01
Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10]. 
There are also bootstrapping and cross-validation approaches. Sometimes analyses are conducted using surrogate models [12]. The availability of so many options can be confusing. Categorizing methods based on fundamental questions assists in communicating the essential results of uncertainty analyses to stakeholders. Such questions can focus on model adequacy (e.g., How well does the model reproduce observed system characteristics and dynamics?) and sensitivity analysis (e.g., What parameters can be estimated with available data? What observations are important to parameters and predictions? What parameters are important to predictions?), as well as on uncertainty quantification (e.g., How accurate and precise are the predictions?). The methods can also be classified by the number of model runs required: few (10s to 1000s) or many (10,000s to 1,000,000s). Of the methods listed above, the most computationally frugal are generally those based on local derivatives; MCMC methods tend to be among the most computationally demanding. Surrogate models (emulators) do not necessarily produce computational frugality because many runs of the full model are generally needed to create a meaningful surrogate model. With this categorization, we can, in general, address all the fundamental questions mentioned above using either computationally frugal or demanding methods. Model development and analysis can thus be conducted consistently using either computationally frugal or demanding methods; alternatively, different fundamental questions can be addressed using methods that require different levels of effort. Based on this perspective, we pose the question: Can computationally frugal methods be useful companions to computationally demanding methods? The reliability of computationally frugal methods generally depends on the model being reasonably linear, which usually means smooth nonlinearities and the assumption of Gaussian errors; both tend to be more valid with more linear models.
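A computationally frugal screen of the OAT flavor mentioned above can be sketched as follows. The three-parameter toy model is invented, and the sensitivity measure (a scaled finite-difference elasticity) is one simple choice among many:

```python
# Sketch of a computationally frugal one-at-a-time (OAT) sensitivity
# screen: perturb one parameter at a time and record a scaled change in the
# model output. The three-parameter "model" is an invented stand-in for an
# expensive environmental simulation.

def model(p):
    return 3.0 * p[0] + 0.05 * p[1] ** 2 + 0.01 * p[2]

def oat_sensitivities(model, p0, rel_step=0.01):
    base = model(p0)
    sens = []
    for i, v in enumerate(p0):
        perturbed = list(p0)
        perturbed[i] = v * (1.0 + rel_step)   # assumes nonzero parameters
        # scaled elasticity: fractional output change per fractional input change
        sens.append(abs((model(perturbed) - base) / base) / rel_step)
    return sens

p0 = [2.0, 4.0, 10.0]
s = oat_sensitivities(model, p0)
ranking = sorted(range(len(s)), key=lambda i: -s[i])
print(ranking)  # → [0, 1, 2]
```

A handful of model runs (one base run plus one per parameter) ranks the parameters, which is exactly why local-derivative methods sit at the frugal end of the spectrum, while an MCMC characterization of the same model would need orders of magnitude more runs.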
Sharma, Nripen S.; Jindal, Rohit; Mitra, Bhaskar; Lee, Serom; Li, Lulu; Maguire, Tim J.; Schloss, Rene; Yarmush, Martin L.
2014-01-01
Skin sensitization remains a major environmental and occupational health hazard. Animal models have been used as the gold standard method of choice for estimating chemical sensitization potential. However, a growing international drive and consensus for minimizing animal usage have prompted the development of in vitro methods to assess chemical sensitivity. In this paper, we examine existing approaches including in silico models, cell and tissue based assays for distinguishing between sensitizers and irritants. The in silico approaches that have been discussed include Quantitative Structure Activity Relationships (QSAR) and QSAR based expert models that correlate chemical molecular structure with biological activity and mechanism based read-across models that incorporate compound electrophilicity. The cell and tissue based assays rely on an assortment of mono and co-culture cell systems in conjunction with 3D skin models. Given the complexity of allergen induced immune responses, and the limited ability of existing systems to capture the entire gamut of cellular and molecular events associated with these responses, we also introduce a microfabricated platform that can capture all the key steps involved in allergic contact sensitivity. Finally, we describe the development of an integrated testing strategy comprised of two or three tier systems for evaluating sensitization potential of chemicals. PMID:24741377
NASA Astrophysics Data System (ADS)
Lane, E. M.; Gillibrand, P. A.; Wang, X.; Power, W.
2013-09-01
Regional source tsunamis pose a potentially devastating hazard to communities and infrastructure on the New Zealand coast. But major events are very uncommon. This dichotomy of infrequent but potentially devastating hazards makes realistic assessment of the risk challenging. Here, we describe a method to determine a probabilistic assessment of the tsunami hazard by regional source tsunamis with an "Average Recurrence Interval" of 2,500-years. The method is applied to the east Auckland region of New Zealand. From an assessment of potential regional tsunamigenic events over 100,000 years, the inundation of the Auckland region from the worst 100 events is modelled using a hydrodynamic model and probabilistic inundation depths on a 2,500-year time scale were determined. Tidal effects on the potential inundation were included by coupling the predicted wave heights with the probability density function of tidal heights at the inundation site. Results show that the more exposed northern section of the east coast and outer islands in the Hauraki Gulf face the greatest hazard from regional tsunamis in the Auckland region. Incorporating tidal effects into predictions of inundation reduced the predicted hazard compared to modelling all the tsunamis arriving at high tide giving a more accurate hazard assessment on the specified time scale. This study presents the first probabilistic analysis of dynamic modelling of tsunami inundation for the New Zealand coast and as such provides the most comprehensive assessment of tsunami inundation of the Auckland region from regional source tsunamis available to date.
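The coupling of a predicted wave height with the tidal-stage probability density can be illustrated with a small discrete sketch; all levels, probabilities, and site values below are invented:

```python
# Sketch of coupling a modelled tsunami wave height with the tidal-stage
# probability density function: the inundation probability at a site is the
# tide-stage probability mass for which wave + tide tops the ground
# elevation. All numbers below are invented for illustration.

tide_levels = [-1.0, -0.5, 0.0, 0.5, 1.0]     # tide stage (m above mean sea level)
tide_prob   = [0.15, 0.20, 0.30, 0.20, 0.15]  # discretised tidal PDF, sums to 1

wave_height = 2.2   # modelled tsunami height above ambient sea level (m)
ground_elev = 2.5   # elevation of the site (m)

# P(inundation) = sum over tide stages of P(tide) * 1[wave + tide > elevation]
p_inundation = sum(p for t, p in zip(tide_levels, tide_prob)
                   if wave_height + t > ground_elev)
print(round(p_inundation, 2))  # → 0.35: only the two highest stages flood the site
```

Assuming every event arrives at high tide would instead use only the worst stage (probability 1 of flooding here), which is the overstated hazard the study's tide-coupled approach avoids.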
Modeling of diatomic molecule using the Morse potential and the Verlet algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fidiani, Elok
Molecular modeling is usually performed with dedicated molecular dynamics (MD) software such as GROMACS, NAMD, or JMOL. Molecular dynamics is a computational method for calculating the time-dependent behavior of a molecular system. In this work, MATLAB was used as a numerical tool for simple modeling of some diatomic molecules: HCl, H2, and O2. MATLAB is matrix-based numerical software, so to carry out the numerical analysis, all the functions and equations describing the properties of the atoms and molecules had to be implemented manually. A Morse potential was generated to describe the bond interaction between the two atoms. To analyze the motion of the molecules, the Verlet algorithm, derived from Newton's equations of motion (classical mechanics), was applied. Both the Morse potential and the Verlet algorithm were integrated in MATLAB to derive physical properties and the trajectories of the molecules. The data computed by MATLAB are always in matrix form; to visualize them, Visual Molecular Dynamics (VMD) was used. Such a method is useful for developing and testing types of interaction on a molecular scale, and can be very helpful for teaching basic principles of molecular interaction.
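The Morse-plus-Verlet scheme above can be sketched in a few lines (Python here rather than the paper's MATLAB). The parameters below are rough HCl-like values chosen for illustration, not taken from the paper, and velocity Verlet is used as the integrator.

```python
import math

# Morse potential V(r) = D*(1 - exp(-a*(r - r0)))**2 for a diatomic bond.
# Illustrative HCl-like parameters (assumed, not from the paper):
D = 4.6      # well depth (eV)
a = 1.9      # width parameter (1/Angstrom)
r0 = 1.27    # equilibrium bond length (Angstrom)
mu = 0.98    # reduced mass (amu), roughly m_H*m_Cl/(m_H + m_Cl)

def force(r):
    """Force on the bond coordinate, -dV/dr for the Morse potential."""
    e = math.exp(-a * (r - r0))
    return -2.0 * D * a * e * (1.0 - e)

# Velocity Verlet integration of the relative (bond-length) coordinate.
dt = 0.001
r, v = 1.4, 0.0              # start slightly stretched, at rest
acc = force(r) / mu
for _ in range(5000):
    r += v * dt + 0.5 * acc * dt * dt
    new_acc = force(r) / mu
    v += 0.5 * (acc + new_acc) * dt
    acc = new_acc
```

Because velocity Verlet is symplectic, the total energy 0.5*mu*v**2 + V(r) stays close to its initial value and the bond length oscillates between its two turning points, which is a quick sanity check on any such implementation.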
[Discovery of potential LXRβ agonists from Chinese herbs using molecular simulation methods].
Luo, Gang-Gang; Lu, Fang; Qiao, Lian-Sheng; Li, Yong; Zhang, Yan-Ling
2016-08-01
Liver X receptor β (LXRβ) has become a new target in the treatment of hyperlipemia, being related to cholesterol homeostasis. In this study, quantitative pharmacophores were constructed by the 3D-QSAR pharmacophore (HypoGen) method based on known LXRβ agonists. The optimal pharmacophore model, containing one hydrogen bond acceptor, two hydrophobic features and one ring aromatic feature, was selected based on five assessment indicators: the correlation between the predicted and experimental values of the compounds in the training set (correlation), the Δcost of the models (Δcost), the hit rate of active compounds (HRA), the identification-of-effectiveness index (IEI) and the comprehensive evaluation index (CAI). The values of the five assessment indicators were 0.95, 128.65, 84.44%, 2.58 and 2.18, respectively. Using the best model as a query to screen the traditional Chinese medicine database (TCMD), a list of 309 compounds was obtained; these were then refined using the LibDock program. Finally, based on screening rules involving the LibDock score of the initial compound and the key interactions between the initial compound and the receptor, four compounds, demethoxycurcumin, isolicoflavonol, licochalcone E and silydianin, were selected as potential LXRβ agonists. These molecular simulation methods are efficient and time-saving for identifying potential LXRβ agonists, and could assist further research on novel anti-hyperlipidemia drugs. Copyright© by the Chinese Pharmaceutical Association.
NASA Astrophysics Data System (ADS)
Sun, Shoutian; Ramu Ramachandran, Bala; Wick, Collin D.
2018-02-01
New interatomic potentials for pure Ti and Al, and binary TiAl were developed utilizing the second nearest neighbour modified embedded-atom method (MEAM) formalism. The potentials were parameterized to reproduce multiple properties spanning bulk solids, solid surfaces, solid/liquid phase changes, and liquid interfacial properties. This was carried out using a newly developed optimization procedure that combined the simple minimization of a fitness function with a genetic algorithm to efficiently span the parameter space. The resulting MEAM potentials gave good agreement with experimental and DFT solid and liquid properties, and reproduced the melting points for Ti, Al, and TiAl. However, the surface tensions from the model consistently underestimated experimental values. Liquid TiAl’s surface was found to be mostly covered with Al atoms, showing that Al has a significant propensity for the liquid/air interface.
Sun, Shoutian; Ramachandran, Bala Ramu; Wick, Collin D
2018-02-21
New interatomic potentials for pure Ti and Al, and binary TiAl were developed utilizing the second nearest neighbour modified embedded-atom method (MEAM) formalism. The potentials were parameterized to reproduce multiple properties spanning bulk solids, solid surfaces, solid/liquid phase changes, and liquid interfacial properties. This was carried out using a newly developed optimization procedure that combined the simple minimization of a fitness function with a genetic algorithm to efficiently span the parameter space. The resulting MEAM potentials gave good agreement with experimental and DFT solid and liquid properties, and reproduced the melting points for Ti, Al, and TiAl. However, the surface tensions from the model consistently underestimated experimental values. Liquid TiAl's surface was found to be mostly covered with Al atoms, showing that Al has a significant propensity for the liquid/air interface.
Background / Question / Methods As part of the ongoing northern spotted owl recovery planning effort, we evaluated a series of alternative potential critical habitat scenarios using a species-distribution model (MaxEnt), a conservation-planning model (Zonation), and an individua...
Redesign of Library Workflows: Experimental Models for Electronic Resource Description.
ERIC Educational Resources Information Center
Calhoun, Karen
This paper explores the potential for and progress of a gradual transition from a highly centralized model for cataloging to an iterative, collaborative, and broadly distributed model for electronic resource description. The purpose is to alert library managers to some experiments underway and to help them conceptualize new methods for defining,…
Smoothed Residual Plots for Generalized Linear Models. Technical Report #450.
ERIC Educational Resources Information Center
Brant, Rollin
Methods for examining the viability of assumptions underlying generalized linear models are considered. By appealing to the likelihood, a natural generalization of the raw residual plot for normal theory models is derived and is applied to investigating potential misspecification of the linear predictor. A smooth version of the plot is also…
New interatomic potential for Mg–Al–Zn alloys with specific application to dilute Mg-based alloys
NASA Astrophysics Data System (ADS)
Dickel, Doyl E.; Baskes, Michael I.; Aslam, Imran; Barrett, Christopher D.
2018-06-01
Because of its very large c/a ratio, zinc has proven to be a difficult element to model using semi-empirical classical potentials. It has been shown, in particular, that for the modified embedded atom method (MEAM), a potential cannot simultaneously have an hcp ground state and a c/a ratio greater than ideal. As an alloying element, however, useful zinc potentials can be generated by relaxing the condition that hcp be the lowest energy structure. In this paper, we present a MEAM zinc potential, which gives accurate material properties for the pure state, as well as a MEAM ternary potential for the Mg–Al–Zn system which will allow the atomistic modeling of a wide class of alloys containing zinc. The effect of zinc in simple Mg–Zn alloys is demonstrated for this potential, and the results verify the accuracy of the new potential in these systems.
Analysis of an optimization-based atomistic-to-continuum coupling method for point defects
Olson, Derek; Shapeev, Alexander V.; Bochev, Pavel B.; ...
2015-11-16
Here, we formulate and analyze an optimization-based Atomistic-to-Continuum (AtC) coupling method for problems with point defects. Application of a potential-based atomistic model near the defect core enables accurate simulation of the defect. Away from the core, where site energies become nearly independent of the lattice position, the method switches to a more efficient continuum model. The two models are merged by minimizing the mismatch of their states on an overlap region, subject to the atomistic and continuum force balance equations acting independently in their domains. We prove that the optimization problem is well-posed and establish error estimates.
Dierker, Lisa; Rose, Jennifer; Tan, Xianming; Li, Runze
2010-12-01
This paper describes and compares a selection of available modeling techniques for identifying homogeneous population subgroups in the interest of informing targeted substance use intervention. We present a nontechnical review of the common and unique features of three methods: (a) trajectory analysis, (b) functional hierarchical linear modeling (FHLM), and (c) decision tree methods. Differences among the techniques are described, including required data features, strengths and limitations in terms of the flexibility with which outcomes and predictors can be modeled, and the potential of each technique for helping to inform the selection of targets and timing of substance intervention programs.
NASA Astrophysics Data System (ADS)
Teodor, V. G.; Baroiu, N.; Susac, F.; Oancea, N.
2016-11-01
Modelling the family of surfaces associated with a pair of rolling centrodes, when the profile of the rack-gear's teeth is known by direct measurement as a coordinate matrix, aims to determine the generating quality for an imposed kinematics of the relative motion of the tool with respect to the blank. In this way, the geometrical generating error can be determined as a component of the total error. Modelling the generation process highlights the potential errors of the generating tool, so that its profile can be corrected before the tool is used in the machining process. A method developed in CATIA is proposed, based on a new approach, namely the method of "relative generating trajectories". The analytical foundations are presented, together with applications to known models of rack-gear-type tools used on Maag gear-cutting machines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denbleyker, Alan; Liu, Yuzhi; Meurice, Y.
We consider the sign problem for classical spin models at complex $\beta = 1/g_0^2$ on $L \times L$ lattices. We show that the tensor renormalization group (TRG) method allows reliable calculations for larger Im $\beta$ than the reweighting Monte Carlo method. For the Ising model with complex $\beta$ we compare our results with the exact Onsager-Kaufman solution at finite volume. The Fisher zeros can be determined precisely with the TRG method. We check the convergence of the TRG method for the O(2) model on $L \times L$ lattices when the number of states $D_s$ increases. We show that the finite size scaling of the calculated Fisher zeros agrees very well with the Kosterlitz-Thouless transition assumption and predict the locations for larger volume. The location of these zeros agrees with reweighting Monte Carlo calculations for small volume. The application of the method to the O(2) model with a chemical potential is briefly discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avonto, Cristina; Chittiboyina, Amar G.; Rua, Diego
2015-12-01
Skin sensitization is an important toxicological end-point in the risk assessment of chemical allergens. Because of the complexity of the biological mechanisms associated with skin sensitization, integrated approaches combining different chemical, biological and in silico methods are recommended to replace conventional animal tests. Chemical methods are intended to characterize the potential of a sensitizer to induce earlier molecular initiating events. The presence of an electrophilic mechanistic domain is considered one of the essential chemical features to covalently bind to the biological target and induce further haptenation processes. Current in chemico assays rely on the quantification of unreacted model nucleophiles after incubation with the candidate sensitizer. In the current study, a new fluorescence-based method, the 'HTS-DCYA assay', is proposed. The assay aims at the identification of reactive electrophiles based on their chemical reactivity toward a model fluorescent thiol. The reaction workflow enabled the development of a High Throughput Screening (HTS) method to directly quantify the reaction adducts. The reaction conditions have been optimized to minimize solubility issues and oxidative side reactions and to increase the throughput of the assay while minimizing the reaction time, which are common issues with existing methods. Thirty-six chemicals previously classified with LLNA, DPRA or KeratinoSens™ were tested as a proof of concept. Preliminary results gave an estimated 82% accuracy, 78% sensitivity and 90% specificity, comparable to other in chemico methods such as Cys-DPRA. In addition to validated chemicals, six natural products were analyzed and a prediction of their sensitization potential is presented for the first time. Highlights: • A novel fluorescence-based method to detect electrophilic sensitizers is proposed. • A model fluorescent thiol was used to directly quantify the reaction products. • A discussion of the reaction workflow and critical parameters is presented. • The method could provide a useful tool to complement existing chemical assays.
Diller, David J
2017-01-10
Here we present a new method for point charge calculation which we call QET (charges by electron transfer). The intent of this work is to develop a method that can be useful for studying charge transfer in large biological systems. It is based on the intuitive framework of the QEQ method, with the key difference being that the QET method tracks all pairwise electron transfers by augmenting the QEQ pseudoenergy function with a distance dependent cost function for each electron transfer. This approach solves the key limitation of the QEQ method, which is its handling of formally charged groups. First, we parametrize the QET method by fitting to electrostatic potentials calculated using ab initio quantum mechanics on over 11,000 small molecules. On an external test set of over 2500 small molecules the QET method achieves a mean absolute error of 1.37 kcal/mol/electron when compared to the ab initio electrostatic potentials. Second, we examine the conformational dependence of the charges on over 2700 tripeptides. With the tripeptide data set, we show that the conformational effects account for approximately 0.4 kcal/mol/electron on the electrostatic potentials. Third, we test the QET method for its ability to reproduce the effects of polarization and electron transfer on 1000 water clusters. For the water clusters, we show that the QET method captures about 50% of the polarization and electron transfer effects. Finally, we examine the effects of electron transfer and polarizability on the electrostatic interaction between p38 and 94 small molecule ligands. When used in conjunction with the Generalized-Born continuum solvent model, polarization and electron transfer with the QET model lead to an average change of 17 kcal/mol on the calculated electrostatic component of ΔG.
Evaluating the compatibility of multi-functional and intensive urban land uses
NASA Astrophysics Data System (ADS)
Taleai, M.; Sharifi, A.; Sliuzas, R.; Mesgari, M.
2007-12-01
This research is aimed at developing a model for assessing land use compatibility in densely built-up urban areas. A new model was developed by combining a suite of existing methods and tools: a geographical information system, Delphi methods, and spatial decision support tools, namely multi-criteria evaluation analysis, the analytic hierarchy process and the ordered weighted averaging method. The developed model can calculate land use compatibility in both horizontal and vertical directions. Furthermore, the compatibility between the use of each floor in a building and its neighboring land uses can be evaluated. The method was tested in a built-up urban area located in Tehran, the capital city of Iran. The results show that the model is robust in clarifying different levels of physical compatibility between neighboring land uses. This paper describes the various steps and processes of developing the proposed land use compatibility evaluation model (CEM).
Multi-chain Markov chain Monte Carlo methods for computationally expensive models
NASA Astrophysics Data System (ADS)
Huang, M.; Ray, J.; Ren, H.; Hou, Z.; Bao, J.
2017-12-01
Markov chain Monte Carlo (MCMC) methods are used to infer model parameters from observational data. The parameters are inferred as probability densities, thus capturing estimation error due to sparsity of the data and the shortcomings of the model. Multiple communicating chains executing the MCMC method have the potential to explore the parameter space better, and conceivably accelerate the convergence to the final distribution. We present results from tests conducted with the multi-chain method to show how the acceleration occurs; for loose convergence tolerances, for example, the multiple chains do not make much of a difference. The ensemble of chains also seems to have the ability to accelerate the convergence of a few chains that might start from suboptimal starting points. Finally, we show the performance of the chains in the estimation of O(10) parameters using computationally expensive forward models such as the Community Land Model, where the sampling burden is distributed over multiple chains.
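The multi-chain idea, several Metropolis chains launched from dispersed (possibly suboptimal) starting points whose samples are pooled after burn-in, can be sketched on a toy one-dimensional posterior. Everything here is illustrative: a standard normal target stands in for the expensive forward model, and the step size and burn-in length are assumptions.

```python
import math
import random

random.seed(1)

def log_target(x):
    # Unnormalized log-density of a standard normal "posterior"
    # (a stand-in for an expensive forward-model likelihood).
    return -0.5 * x * x

def metropolis_chain(x0, n, step=1.0):
    """Run one Metropolis random-walk chain of length n from x0."""
    xs, x = [], x0
    lp = log_target(x)
    for _ in range(n):
        prop = x + random.gauss(0.0, step)
        lp_prop = log_target(prop)
        diff = lp_prop - lp
        if diff >= 0 or random.random() < math.exp(diff):
            x, lp = prop, lp_prop
        xs.append(x)
    return xs

# Several chains from widely dispersed starting points.
starts = [-10.0, -3.0, 0.0, 3.0, 10.0]
chains = [metropolis_chain(s, 5000) for s in starts]
pooled = [x for c in chains for x in c[1000:]]   # discard burn-in
mean = sum(pooled) / len(pooled)
var = sum((x - mean) ** 2 for x in pooled) / len(pooled)
```

Even the chains started far out at ±10 drift into the high-probability region within the burn-in, so the pooled mean and variance recover the target's moments; comparing per-chain statistics against the pooled ones is the usual convergence diagnostic for such ensembles.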
Computer simulation of surface and film processes
NASA Technical Reports Server (NTRS)
Tiller, W. A.; Halicioglu, M. T.
1983-01-01
Adequate computer methods, based on interactions between discrete particles, provide information leading to an atomic level understanding of various physical processes. The success of these simulation methods, however, is related to the accuracy of the potential energy function representing the interactions among the particles. The development of a potential energy function for crystalline SiO2 forms that can be employed in lengthy computer modelling procedures was investigated. In many of the simulation methods which deal with discrete particles, semiempirical two body potentials were employed to analyze energy and structure related properties of the system. Many body interactions are required for a proper representation of the total energy for many systems. Many body interactions for simulations based on discrete particles are discussed.
Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework
Talluto, Matthew V.; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C. Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A.; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique
2016-01-01
Aim: Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. Location: Eastern North America (as an example). Methods: Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. Results: For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. Main conclusions: We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software.
The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making. PMID:27499698
Fowler, Nicholas J.; Blanford, Christopher F.
2017-01-01
Abstract Blue copper proteins, such as azurin, show dramatic changes in Cu2+/Cu+ reduction potential upon mutation over the full physiological range. Hence, they have important functions in electron transfer and oxidation chemistry and have applications in industrial biotechnology. The details of what determines these reduction potential changes upon mutation are still unclear. Moreover, it has been difficult to model and predict the reduction potential of azurin mutants and currently no unique procedure or workflow pattern exists. Furthermore, high‐level computational methods can be accurate but are too time consuming for practical use. In this work, a novel approach for calculating reduction potentials of azurin mutants is shown, based on a combination of continuum electrostatics, density functional theory and empirical hydrophobicity factors. Our method accurately reproduces experimental reduction potential changes of 30 mutants with respect to wildtype within experimental error and highlights the factors contributing to the reduction potential change. Finally, reduction potentials are predicted for a series of 124 new mutants that have not yet been investigated experimentally. Several mutants are identified that are located well over 10 Å from the copper center that change the reduction potential by more than 85 mV. The work shows that secondary coordination sphere mutations mostly lead to long‐range electrostatic changes and hence can be modeled accurately with continuum electrostatics. PMID:28815759
Yu, Ling; Yang, Zhong-Zhi
2010-05-07
Structures, binding energies, and vibrational frequencies of (NH3)n (n = 2-5) isomers and dynamical properties of liquid ammonia have been explored using a transferable intermolecular potential eight-point model including fluctuating charges and flexible bodies, based on a combination of atom-bond electronegativity equalization and molecular mechanics (ABEEM ammonia-8P), in this paper. The important feature of this model is to divide the charge sites of one ammonia molecule into an eight-point region containing four atoms, three sigma bonds, and a lone pair, and to allow the charges in the system to fluctuate in response to the ambient environment. Due to the explicit description of charges and the special treatment of hydrogen bonds, the results of equilibrium geometries, dipole moments, cluster interaction energies, and vibrational frequencies for the gas phase of small ammonia clusters, and the radial distribution function for liquid ammonia, calculated with the ABEEM ammonia-8P potential model are in good agreement with those measured by available experiments and those obtained from high level ab initio calculations. The properties of the ammonia dimer are studied in detail, including the structure and the one-dimensional and two-dimensional potential energy surfaces. As for the interaction energies, the root mean square deviation is 0.27 kcal/mol, and the linear correlation coefficient reaches 0.994.
Oliveri, Paolo; López, M Isabel; Casolino, M Chiara; Ruisánchez, Itziar; Callao, M Pilar; Medini, Luca; Lanteri, Silvia
2014-12-03
A new class-modeling method, referred to as partial least squares density modeling (PLS-DM), is presented. The method is based on partial least squares (PLS), using a distance-based sample density measurement as the response variable. Potential function probability density is subsequently calculated on the PLS scores and used, jointly with residual Q statistics, to develop efficient class models. The influence of adjustable model parameters on the resulting performance has been critically studied by means of cross-validation and application of the Pareto optimality criterion. The method has been applied to verify the authenticity of olives in brine from the cultivar Taggiasca, based on near-infrared (NIR) spectra recorded on homogenized solid samples. Two independent test sets were used for model validation. The final optimal model was characterized by high efficiency and a well-balanced compromise between sensitivity and specificity, compared with the values obtained by applying well-established class-modeling methods such as soft independent modeling of class analogy (SIMCA) and unequal dispersed classes (UNEQ). Copyright © 2014 Elsevier B.V. All rights reserved.
Common Data Model for Neuroscience Data and Data Model Exchange
Gardner, Daniel; Knuth, Kevin H.; Abato, Michael; Erde, Steven M.; White, Thomas; DeBellis, Robert; Gardner, Esther P.
2001-01-01
Objective: Generalizing the data models underlying two prototype neurophysiology databases, the authors describe and propose the Common Data Model (CDM) as a framework for federating a broad spectrum of disparate neuroscience information resources. Design: Each component of the CDM derives from one of five superclasses—data, site, method, model, and reference—or from relations defined between them. A hierarchic attribute-value scheme for metadata enables interoperability with variable tree depth to serve specific intra- or broad inter-domain queries. To mediate data exchange between disparate systems, the authors propose a set of XML-derived schema for describing not only data sets but data models. These include biophysical description markup language (BDML), which mediates interoperability between data resources by providing a meta-description for the CDM. Results: The set of superclasses potentially spans data needs of contemporary neuroscience. Data elements abstracted from neurophysiology time series and histogram data represent data sets that differ in dimension and concordance. Site elements transcend neurons to describe subcellular compartments, circuits, regions, or slices; non-neuroanatomic sites include sequences to patients. Methods and models are highly domain-dependent. Conclusions: True federation of data resources requires explicit public description, in a metalanguage, of the contents, query methods, data formats, and data models of each data resource. Any data model that can be derived from the defined superclasses is potentially conformant and interoperability can be enabled by recognition of BDML-described compatibilities. Such metadescriptions can buffer technologic changes. PMID:11141510
Critical study of the dispersive n- 90Zr mean field by means of a new variational method
NASA Astrophysics Data System (ADS)
Mahaux, C.; Sartor, R.
1994-02-01
A new variational method is developed for the construction of the dispersive nucleon-nucleus mean field at negative and positive energies. Like the variational moment approach that we had previously proposed, the new method only uses phenomenological optical-model potentials as input. It is simpler and more flexible than the previous approach. It is applied to a critical investigation of the n- 90Zr mean field between -25 and +25 MeV. This system is of particular interest because conflicting results had recently been obtained by two different groups. While the imaginary parts of the phenomenological optical-model potentials provided by these two groups are similar, their real parts are quite different. Nevertheless, we demonstrate that these two sets of phenomenological optical-model potentials are both compatible with the dispersion relation which connects the real and imaginary parts of the mean field. Previous hints to the contrary, by one of the two other groups, are shown to be due to unjustified approximations. A striking outcome of the present study is that it is important to explicitly introduce volume absorption in the dispersion relation, although volume absorption is negligible in the energy domain investigated here. Because of the existence of two sets of phenomenological optical-model potentials, our variational method yields two dispersive mean fields whose real parts are quite different at small or negative energies. No preference for one of the two dispersive mean fields can be expressed on purely empirical grounds since they both yield fair agreement with the experimental cross sections as well as with the observed energies of the bound single-particle states. However, we argue that one of these two mean fields is physically more meaningful, because the radial shape of its Hartree-Fock type component is independent of energy, as expected on theoretical grounds. 
This preferred mean field is very close to the one which had been obtained by the Ohio University group by means of fits to experimental cross sections. It is also in good agreement with a recent determination of the p- 90Zr average potential.
General methods for sensitivity analysis of equilibrium dynamics in patch occupancy models
Miller, David A.W.
2012-01-01
Sensitivity analysis is a useful tool for the study of ecological models that has many potential applications for patch occupancy modeling. Drawing from the rich foundation of existing methods for Markov chain models, I demonstrate new methods for sensitivity analysis of the equilibrium state dynamics of occupancy models. Estimates from three previous studies are used to illustrate the utility of the sensitivity calculations: a joint occupancy model for a prey species, its predators, and habitat used by both; occurrence dynamics from a well-known metapopulation study of three butterfly species; and Golden Eagle occupancy and reproductive dynamics. I show how to deal efficiently with multistate models and how to calculate sensitivities involving derived state variables and lower-level parameters. In addition, I extend methods to incorporate environmental variation by allowing for spatial and temporal variability in transition probabilities. The approach used here is concise and general and can fully account for environmental variability in transition parameters. The methods can be used to improve inferences in occupancy studies by quantifying the effects of underlying parameters, aiding prediction of future system states, and identifying priorities for sampling effort.
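The sensitivity calculations described above can be illustrated on the simplest possible patch occupancy chain, a two-state model with colonization probability c and extinction probability e, where the equilibrium occupancy is psi* = c/(c + e). This is a generic textbook example, not one of the paper's three case studies; the finite-difference check below mirrors how equilibrium sensitivities can be verified numerically.

```python
# Two-state patch occupancy chain: colonization c, extinction e.
# Equilibrium occupancy is psi* = c / (c + e); its sensitivity to c
# is d(psi*)/dc = e / (c + e)**2, checked here by finite differences.
c, e = 0.3, 0.1   # illustrative transition probabilities

def equilibrium(c, e, steps=2000):
    """Iterate psi' = psi*(1-e) + (1-psi)*c to its fixed point."""
    psi = 0.5
    for _ in range(steps):
        psi = psi * (1 - e) + (1 - psi) * c
    return psi

h = 1e-6
sens_num = (equilibrium(c + h, e) - equilibrium(c - h, e)) / (2 * h)
sens_exact = e / (c + e) ** 2   # analytic sensitivity of psi* to c
```

The same perturb-and-iterate pattern extends to multistate models, where the analytic derivative is harder to write down and numerical or matrix-calculus sensitivities of the stationary distribution become genuinely useful.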
Potential impact of initialization on decadal predictions as assessed for CMIP5 models
NASA Astrophysics Data System (ADS)
Branstator, Grant; Teng, Haiyan
2012-06-01
To investigate the potential for initialization to improve decadal range predictions, we quantify the initial value predictability of upper 300 m temperature in the two northern ocean basins for 12 models from Coupled Model Intercomparison Project phase 5 (CMIP5), and we contrast it with the forced predictability in Representative Concentration Pathways (RCP) 4.5 climate change projections. We use a recently introduced method that produces predictability estimates from long control runs. Many initial states are considered, and we find on average 1) initialization has the potential to improve skill in the first 5 years in the North Pacific and the first 9 years in the North Atlantic, and 2) the impact from initialization becomes secondary compared to the impact of RCP4.5 forcing after 6 1/2 and 8 years in the two basins, respectively. Model-to-model and spatial variations in these limits are, however, substantial.
NASA Technical Reports Server (NTRS)
Karpoukhin, Mikhii G.; Kogan, Boris Y.; Karplus, Walter J.
1995-01-01
The simulation of heart arrhythmia and fibrillation is an important and challenging task. The solution of these problems using sophisticated mathematical models is beyond the capabilities of modern supercomputers. To overcome these difficulties, it is proposed to break the whole simulation problem into two tightly coupled stages: generation of the action potential using sophisticated models, and propagation of the action potential using simplified models. The well-known simplified models are compared and modified to bring the rate of depolarization and the action potential duration restitution closer to reality. The modified method of lines is used to parallelize the computational process. The conditions for the appearance of 2D spiral waves after the application of a premature beat, and the subsequent traveling of the spiral wave inside the simulated tissue, are studied.
Evaluating Process Improvement Courses of Action Through Modeling and Simulation
2017-09-16
changes to a process is time consuming and has the potential to overlook stochastic effects. By modeling a process as a Numerical Design Structure Matrix...
[Only table-of-contents fragments remain: Methods to Evaluate Process Performance; The Design Structure Matrix; Numerical Design Structure Matrix.]
A spectral method for spatial downscaling
Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrat...
Multimodal electromechanical model of piezoelectric transformers by Hamilton's principle.
Nadal, Clement; Pigache, Francois
2009-11-01
This work deals with a general energetic approach to establish an accurate electromechanical model of a piezoelectric transformer (PT). Hamilton's principle is used to obtain the equations of motion for free vibrations. The modal characteristics (mass, stiffness, primary and secondary electromechanical conversion factors) are also deduced. Then, to illustrate this general electromechanical method, the variational principle is applied to both homogeneous and nonhomogeneous Rosen-type PT models. A comparison of modal parameters, mechanical displacements, and electrical potentials is presented for both models. Finally, the validity of the electrodynamical model of the nonhomogeneous Rosen-type PT is confirmed by a numerical comparison based on the finite element method and an experimental identification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cleary, M.P.
This paper provides comments on a companion journal paper on predictive modeling of hydraulic fracturing patterns (N.R. Warpinski et al., 1994). The earlier paper was designed to compare various modeling methods to demonstrate the most accurate methods under various geologic constraints. The comments of this paper center on potential deficiencies in that paper: the limited number of actual comparisons offered between models, the lacking or undocumented matching of predictive data with data from related field operations, and the unaddressed relevance/impact of accurate modeling on overall hydraulic fracturing cost and production.
Using GIS-based methods and lidar data to estimate rooftop solar technical potential in US cities
Margolis, Robert; Gagnon, Pieter; Melius, Jennifer; ...
2017-07-06
Here, we estimate the technical potential of rooftop solar photovoltaics (PV) for select US cities by combining light detection and ranging (lidar) data, a validated analytical method for determining rooftop PV suitability employing geographic information systems, and modeling of PV electricity generation. We find that rooftop PV's ability to meet estimated city electricity consumption varies widely, from meeting 16% of annual consumption (in Washington, DC) to meeting 88% (in Mission Viejo, CA). Important drivers include average rooftop suitability, household footprint/per-capita roof space, the quality of the solar resource, and the city's estimated electricity consumption. In addition to city-wide results, we also estimate the ability of aggregations of households to offset their electricity consumption with PV. In a companion article, we will use statistical modeling to extend our results and estimate national rooftop PV technical potential. In addition, our publicly available data and methods may help policy makers, utilities, researchers, and others perform customized analyses to meet their specific needs.
Transferable atomistic model to describe the energetics of zirconia
NASA Astrophysics Data System (ADS)
Wilson, Mark; Schönberger, Uwe; Finnis, Michael W.
1996-10-01
We have investigated the energies of a number of phases of ZrO2 using models of an increasing degree of sophistication: the simple ionic model, the polarizable ion model, the compressible ion model, and finally a model including quadrupole polarizability of the oxygen ions. The three structures which are observed with increasing temperature are monoclinic, tetragonal, and cubic (fluorite). Besides these, we have studied some hypothetical structures which certain potentials erroneously predict or which occur in other oxides with this stoichiometry, e.g., the α-PbO2 structure and rutile. We have also performed ab initio density functional calculations with the full-potential linear combination of muffin-tin orbitals method to investigate the cubic-tetragonal distortion. A detailed comparison is made between the results using classical potentials, the experimental data, and our own and other ab initio results. The factors which stabilize the various structures are analyzed. We find the only genuinely transferable model is the one including compressible ions and anion polarizability to the quadrupole level.
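As a rough illustration of the simplest model in this hierarchy, a rigid-ion pair energy can be written as a full-charge Coulomb term plus a Buckingham short-range term. The parameters below are invented for illustration, not fitted ZrO2 values:

```python
import math

# Illustrative Buckingham parameters (NOT fitted ZrO2 values)
A, rho, C = 1500.0, 0.35, 30.0       # eV, angstrom, eV*angstrom^6
COULOMB = 14.3996                    # e^2 / (4*pi*eps0) in eV*angstrom

def pair_energy(r, q1, q2):
    """Simple rigid-ion model pair energy: Coulomb attraction/repulsion
    between formal charges plus a Buckingham repulsion-dispersion term."""
    return COULOMB * q1 * q2 / r + A * math.exp(-r / rho) - C / r ** 6

# Zr(4+)-O(2-) pair: Coulomb attraction balanced by short-range repulsion
energies = [pair_energy(i / 100.0, 4.0, -2.0) for i in range(120, 400)]
r_min = (120 + min(range(len(energies)), key=energies.__getitem__)) / 100.0
```

The more sophisticated models in the abstract add polarization and compressibility terms on top of this skeleton.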
Dynamics in entangled polyethylene melts using coarse-grained models
NASA Astrophysics Data System (ADS)
Peters, Brandon L.; Grest, Gary S.; Salerno, K. Michael; Agrawal, Anupriya; Perahia, Dvora
Polymer dynamics creates distinctive viscoelastic behavior as a result of a coupled interplay of motion on multiple length scales. Capturing the broad time and length scales of polymeric motion, however, remains a challenge. Using polyethylene (PE) as a model system, we probe the effects of the degree of coarse graining on polymer dynamics. Coarse-grained (CG) potentials are derived using iterative Boltzmann inversion (IBI), with 2-6 methyl groups per CG bead, from fully atomistic melt simulations of short chains. While the IBI method produces non-bonded potentials that give excellent agreement between the atomistic and CG pair correlation functions, the pressure of the CG model is P = 100-500 MPa. Correcting the potential so that P ≈ 0 leads to non-bonded models with a slightly smaller effective diameter and a much deeper minimum. However, both the pressure-corrected and uncorrected CG models give similar results for the mean squared displacement (MSD) and the stress autocorrelation function G(t) for PE melts above the melting point. The time rescaling factor between the CG and atomistic models is found to be nearly the same for both CG models. Transferability of the potentials across temperatures was tested by comparing the MSD and G(t) for potentials generated at different temperatures.
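The core iterative Boltzmann inversion update can be sketched in a few lines. The radial distribution functions below are synthetic stand-ins, not RDFs from the PE simulations:

```python
import numpy as np

kT = 2.494  # kJ/mol at 300 K

def ibi_update(V, g_cg, g_target, kT=kT):
    """One iterative Boltzmann inversion step:
    V_new(r) = V(r) + kT * ln( g_cg(r) / g_target(r) ).
    Where the CG model is over-structured (g_cg > g_target), the potential
    is raised, pushing beads apart on the next iteration."""
    with np.errstate(divide="ignore", invalid="ignore"):
        dV = kT * np.log(g_cg / g_target)
    return V + np.nan_to_num(dV, nan=0.0, posinf=0.0, neginf=0.0)

# Initial guess: potential of mean force from the target (atomistic) RDF
r = np.linspace(0.3, 1.5, 25)
g_target = 1.0 + 0.3 * np.exp(-((r - 0.5) / 0.1) ** 2)
V0 = -kT * np.log(g_target)
g_cg = 1.0 + 0.4 * np.exp(-((r - 0.5) / 0.1) ** 2)  # over-structured CG run
V1 = ibi_update(V0, g_cg, g_target)
```

The pressure correction mentioned above is typically added as a separate weak linear tail on V(r), leaving the short-range structure (and hence the RDF match) nearly unchanged.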
Reconstruction of the action potential of ventricular myocardial fibres
Beeler, G. W.; Reuter, H.
1977-01-01
1. A mathematical model of membrane action potentials of mammalian ventricular myocardial fibres is described. The reconstruction model is based as closely as possible on ionic currents which have been measured by the voltage-clamp method. 2. Four individual components of ionic current were formulated mathematically in terms of Hodgkin-Huxley type equations. The model incorporates two voltage- and time-dependent inward currents: the excitatory inward sodium current, iNa, and a secondary or slow inward current, is, primarily carried by calcium ions. A time-independent outward potassium current, iK1, exhibiting inward-going rectification, and a voltage- and time-dependent outward current, ix1, primarily carried by potassium ions, are further elements of the model. 3. The iNa is primarily responsible for the rapid upstroke of the action potential, while the other current components determine the configuration of the plateau and the repolarization phase. The relative importance of inactivation of is and of activation of ix1 for termination of the plateau is evaluated by the model. 4. Experimental phenomena such as slow recovery of the sodium system from inactivation, frequency dependence of the action potential duration, all-or-nothing repolarization, and membrane oscillations are adequately described by the model. 5. Possible inadequacies and shortcomings of the model are discussed. PMID:874889
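The Hodgkin-Huxley-type gating equations underlying such models can be illustrated with a single gate. The rate constants below are invented for the example (the model's actual rates depend on membrane potential), and the exponential update is the Rush-Larsen scheme commonly applied to equations of this form:

```python
import math

def step_gate(y, alpha, beta, dt):
    """Exact exponential update of a Hodgkin-Huxley-type gate obeying
    dy/dt = alpha * (1 - y) - beta * y,
    advancing toward y_inf = alpha/(alpha+beta) with tau = 1/(alpha+beta)."""
    y_inf = alpha / (alpha + beta)
    tau = 1.0 / (alpha + beta)
    return y_inf + (y - y_inf) * math.exp(-dt / tau)

# Illustrative voltage-independent rates in 1/ms (NOT the Beeler-Reuter
# rate constants, which are functions of the membrane potential)
alpha, beta = 0.5, 0.1
y = 0.0
for _ in range(1000):            # 100 ms at dt = 0.1 ms
    y = step_gate(y, alpha, beta, dt=0.1)
# y relaxes to the steady state y_inf = alpha / (alpha + beta)
```

In the full model, one such gate exists per activation/inactivation variable (m, h, d, f, x1, ...), each with its own voltage-dependent alpha and beta.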
Back-Projection Cortical Potential Imaging: Theory and Results.
Haor, Dror; Shavit, Reuven; Shapiro, Moshe; Geva, Amir B
2017-07-01
Electroencephalography (EEG) is the only brain monitoring technique that is non-invasive, portable, and passive, exhibits high temporal resolution, and gives a direct measurement of the scalp electrical potential. A major disadvantage of the EEG is its low spatial resolution, which is the result of the low-conductive skull that "smears" the currents coming from within the brain. Recording brain activity with both high temporal and spatial resolution is crucial for the localization of confined brain activations and the study of brain mechanism functionality, which is then followed by diagnosis of brain-related diseases. In this paper, a new cortical potential imaging (CPI) method is presented. The new method gives an estimation of the electrical activity on the cortex surface and thus removes the "smearing effect" caused by the skull. The scalp potentials are back-projected onto the cortex surface (back-projected CPI, BP-CPI) by posing a well-posed problem for the Laplace equation that is solved by means of the finite element method on a realistic head model. A unique solution to the CPI problem is obtained by introducing a cortical normal current estimation technique. The technique is based on the same mechanism used in the well-known surface Laplacian calculation, followed by a scalp-cortex back-projection routine. The BP-CPI passed four stages of validation, including validation on spherical and realistic head models, probabilistic analysis (Monte Carlo simulation), and noise sensitivity tests. In addition, the BP-CPI was compared with the minimum norm estimate CPI approach and found superior for multi-source cortical potential distributions, with very good estimation results (CC > 0.97) on a realistic head model in the regions of interest for two representative cases.
The BP-CPI can be easily incorporated into different monitoring tools and can help researchers maintain an accurate estimate of the cortical potential of ongoing or event-related activity, in order to draw better neurological inferences from the EEG.
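The Laplace boundary-value problem at the heart of such back-projection can be illustrated with a toy 2D relaxation. The grid, boundary data, and Jacobi solver below are a generic sketch, not the realistic-head finite element formulation of BP-CPI:

```python
import numpy as np

def solve_laplace(grid, mask, iters=5000):
    """Jacobi relaxation for the Laplace equation on a 2D grid.
    `mask` marks fixed (Dirichlet) boundary nodes, which are re-pinned
    every sweep; interior values relax to the average of their four
    neighbours (the discrete harmonic condition)."""
    u = grid.copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(mask, grid, avg)
    return u

# Toy "measured" potentials fixed on the border, interior unknown
n = 20
grid = np.zeros((n, n))
grid[0, :] = 1.0                      # nonzero potential on one edge
mask = np.zeros((n, n), dtype=bool)
mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = True
u = solve_laplace(grid, mask)
```

The solution obeys the maximum principle (interior values stay between the boundary extremes) and decays away from the driven edge, a 2D caricature of the potential falling off through a conductive medium.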
The MIMIC Model as a Tool for Differential Bundle Functioning Detection
ERIC Educational Resources Information Center
Finch, W. Holmes
2012-01-01
Increasingly, researchers interested in identifying potentially biased test items are encouraged to use a confirmatory, rather than exploratory, approach. One such method for confirmatory testing is rooted in differential bundle functioning (DBF), where hypotheses regarding potential differential item functioning (DIF) for sets of items (bundles)…
Vandament, Lyndsey; Chintu, Naminga; Yano, Nanako; Mugurungi, Owen; Tambatamba, Bushimbwa; Ncube, Gertrude; Xaba, Sinokuthemba; Mpasela, Felton; Muguza, Edward; Mangono, Tichakunda; Madidi, Ngonidzashe; Samona, Alick; Tagar, Elva; Hatzold, Karin
2016-06-01
Results from recent costing studies have put into question the potential Voluntary Medical Male Circumcision (VMMC) cost savings from introducing the PrePex device. We evaluated the cost drivers and the overall unit cost of VMMC for a variety of service delivery models providing either surgical VMMC or both PrePex and surgery, using current program data in Zimbabwe and Zambia. In Zimbabwe, 3 hypothetical PrePex-only models were also included. For all models, clients aged 18 years and older were assumed to be medically eligible for PrePex, and uptake was based on current program data from sites providing both methods. Direct costs included costs for consumables, including surgical VMMC kits for the forceps-guided method, the device (US $12), human resources, demand creation, supply chain, waste management, training, and transport. Results for both countries suggest limited potential for PrePex to generate cost savings when adding the device to current surgical service delivery models. However, results for the hypothetical rural Integrated PrePex model in Zimbabwe suggest the potential for material unit cost savings (US $35 per VMMC vs. US $65-69 for existing surgical models). This analysis illustrates that models designed to leverage PrePex's advantages, namely the potential for integrating services in rural clinics and less stringent infrastructure requirements, may present opportunities for improved cost efficiency and service integration. Countries seeking to scale up VMMC in rural settings might consider integrating PrePex-only MC services at the primary health care level to reduce costs while also increasing VMMC access and coverage.
Estimation of potential impacts and natural resource damages of oil.
McCay, Deborah French; Rowe, Jill Jennings; Whittier, Nicole; Sankaranarayanan, Sankar; Etkin, Dagmar Schmidt
2004-02-27
Methods were developed to estimate the potential impacts and natural resource damages resulting from oil spills using probabilistic modeling techniques. The oil fates model uses wind data, current data, and transport and weathering algorithms to calculate the mass balance of fuel components in various environmental compartments (water surface, shoreline, water column, atmosphere, sediments, etc.), the oil pathway over time (trajectory), surface distribution, shoreline oiling, and concentrations of the fuel components in water and sediments. Exposure of aquatic habitats and organisms to whole oil and toxic components is estimated in the biological model, followed by estimation of resulting acute mortality and ecological losses. Natural resource damages are based on estimated costs to restore equivalent resources and/or ecological services, using Habitat Equivalency Analysis (HEA) and Resource Equivalency Analysis (REA) methods. Oil spill modeling was performed for two spill sites in central San Francisco Bay, three spill sizes (20th, 50th, and 95th percentile volumes from tankers and larger freight vessels, based on an analysis of likely spill volumes given a spill has occurred) and four oil types (gasoline, diesel, heavy fuel oil, and crude oil). The scenarios were run in stochastic mode to determine the frequency distribution, mean, and standard deviation of fates, impacts, and damages. This work is significant as it demonstrates a statistically quantifiable method for estimating potential impacts and financial consequences that may be used in ecological risk assessment and cost-benefit analyses. The statistically defined spill volumes and consequences provide an objective measure of the magnitude, range, and variability of impacts to wildlife, aquatic organisms, and shorelines for potential spills of four oil/fuel types, each having distinct environmental fates and effects.
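The stochastic mode amounts to repeated model runs over sampled environmental conditions. The response function and spill volumes below are invented placeholders standing in for the full fates model:

```python
import numpy as np

rng = np.random.default_rng(7)

def shoreline_oiling(wind_dir_deg, volume):
    """Hypothetical response surface standing in for a full oil-fates
    model run: oiling is worst for directly onshore (90 degree) winds
    and scales with spill volume."""
    onshore = max(0.0, np.cos(np.radians(wind_dir_deg - 90.0)))
    return 0.4 * volume * onshore

# Stochastic mode: many runs over randomly sampled wind directions,
# for 20th/50th/95th percentile spill volumes (illustrative, m^3)
results = {}
for label, vol in {"20th": 10.0, "50th": 100.0, "95th": 1000.0}.items():
    runs = [shoreline_oiling(rng.uniform(0.0, 360.0), vol) for _ in range(2000)]
    results[label] = (float(np.mean(runs)), float(np.std(runs)))
```

In the real model each "run" is a full trajectory/weathering simulation driven by historical wind and current records, and the ensemble yields the frequency distribution of impacts rather than a single deterministic outcome.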
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kato, Tsuyoshi; Ide, Yoshihiro; Yamanouchi, Kaoru
We first calculate the ground-state molecular wave function of 1D model H{sub 2} molecule by solving the coupled equations of motion formulated in the extended multi-configuration time-dependent Hartree-Fock (MCTDHF) method by the imaginary time propagation. From the comparisons with the results obtained by the Born-Huang (BH) expansion method as well as with the exact wave function, we observe that the memory size required in the extended MCTDHF method is about two orders of magnitude smaller than in the BH expansion method to achieve the same accuracy for the total energy. Second, in order to provide a theoretical means to understandmore » dynamical behavior of the wave function, we propose to define effective adiabatic potential functions and compare them with the conventional adiabatic electronic potentials, although the notion of the adiabatic potentials is not used in the extended MCTDHF approach. From the comparison, we conclude that by calculating the effective potentials we may be able to predict the energy differences among electronic states even for a time-dependent system, e.g., time-dependent excitation energies, which would be difficult to be estimated within the BH expansion approach.« less
Wood, Jonathan S; Donnell, Eric T; Porter, Richard J
2015-02-01
A variety of different study designs and analysis methods have been used to evaluate the performance of traffic safety countermeasures. The most common study designs and methods include observational before-after studies using the empirical Bayes method and cross-sectional studies using regression models. The propensity scores-potential outcomes framework has recently been proposed as an alternative traffic safety countermeasure evaluation method to address the challenges associated with selection biases that can be part of cross-sectional studies. Crash modification factors derived from the application of all three methods have not yet been compared. This paper compares the results of retrospective, observational evaluations of a traffic safety countermeasure using both before-after and cross-sectional study designs. The paper describes the strengths and limitations of each method, focusing primarily on how each addresses site selection bias, which is a common issue in observational safety studies. The Safety Edge paving technique, which seeks to mitigate crashes related to roadway departure events, is the countermeasure used in the present study to compare the alternative evaluation methods. The results indicated that all three methods yielded results that were consistent with each other and with previous research. The empirical Bayes results had the smallest standard errors. It is concluded that the propensity scores with potential outcomes framework is a viable alternative analysis method to the empirical Bayes before-after study. It should be considered whenever a before-after study is not possible or practical. Copyright © 2014 Elsevier Ltd. All rights reserved.
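The empirical Bayes estimate used in before-after studies can be sketched as follows. The safety-performance-function (SPF) prediction, overdispersion parameter, and counts are illustrative, not values from the Safety Edge evaluation:

```python
def eb_expected(mu_spf, observed, overdispersion):
    """Empirical Bayes estimate of a site's expected crash frequency:
    a weighted average of the SPF prediction (mu_spf) and the observed
    count, with the usual negative-binomial weight w = 1/(1 + k*mu_spf).
    Shrinking toward the SPF guards against regression-to-the-mean bias
    at sites selected for treatment because of high observed counts."""
    w = 1.0 / (1.0 + overdispersion * mu_spf)
    return w * mu_spf + (1.0 - w) * observed

# Illustrative site: SPF predicts 4 crashes/period, 9 observed, k = 0.5
before = eb_expected(mu_spf=4.0, observed=9.0, overdispersion=0.5)
# A naive before-after comparison would use 9.0; EB shrinks toward 4.0
```

The crash modification factor then compares observed after-period crashes to this EB-based expectation projected into the after period.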
Using HEC-RAS to Enhance Interpretive Capabilities of Geomorphic Assessments
NASA Astrophysics Data System (ADS)
Keefer, L. L.
2005-12-01
The purpose of a geomorphic assessment is to characterize and evaluate a fluvial system for determining the past watershed and channel conditions, current geomorphic character, and potential future channel adjustments. The geomorphic assessment approach utilized by the Illinois State Water Survey assesses channel response to disturbance at multiple temporal and spatial scales to help identify the underlying factors and events which led to the existing channel morphology. This is accomplished through two phases of investigation that involve a historical and physical analysis of the watershed, disturbance history, and field work at increasing levels of detail. To infer future channel adjustments, the geomorphic assessment protocol combines two methods of analysis that depend on the quantity and detail of the available data. The first method is the compilation of multiple lines of evidence using qualitative information related to the dominant fluvial environment, channel gradient, stream power thresholds, and channel evolution models. The second method is the use of hydraulic models, which provide additional interpretive capability for evaluating potential channel adjustments. The structured data collection framework of the geomorphic assessment approach is used for the development of a HEC-RAS model. The model results are then used to determine the influence of bridges and control structures on channel stability, to identify potential channel bed degradation zones from stream power profiles, and to provide data for physically based bank stability models. This poster will demonstrate the advantages of using a hydraulic model, such as HEC-RAS, to expand the interpretive capabilities of geomorphic assessments. The results from applying this approach will be demonstrated for the Big Creek watershed of the Cache River Basin in southern Illinois.
The orbital PDF: general inference of the gravitational potential from steady-state tracers
NASA Astrophysics Data System (ADS)
Han, Jiaxin; Wang, Wenting; Cole, Shaun; Frenk, Carlos S.
2016-02-01
We develop two general methods to infer the gravitational potential of a system using steady-state tracers, i.e. tracers with a time-independent phase-space distribution. Combined with the phase-space continuity equation, the time independence implies a universal orbital probability density function (oPDF) dP(λ|orbit) ∝ dt, where λ is the coordinate of the particle along the orbit. The oPDF is equivalent to Jeans theorem, and is the key physical ingredient behind most dynamical modelling of steady-state tracers. In the case of a spherical potential, we develop a likelihood estimator that fits analytical potentials to the system and a non-parametric method (`phase-mark') that reconstructs the potential profile, both assuming only the oPDF. The methods involve no extra assumptions about the tracer distribution function and can be applied to tracers with any arbitrary distribution of orbits, with possible extension to non-spherical potentials. The methods are tested on Monte Carlo samples of steady-state tracers in dark matter haloes to show that they are unbiased as well as efficient. A fully documented C/Python code implementing our method is freely available at a GitHub repository linked from http://icc.dur.ac.uk/data/#oPDF.
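The oPDF itself reduces to a time-weighting along the orbit: the probability of finding the tracer near radius r is proportional to the time spent there, dP/dr ∝ 1/|v_r|. A minimal sketch for a radial orbit in an assumed point-mass potential (not the halo potentials or estimators of the paper):

```python
import numpy as np

def radial_time_pdf(r, E, L, phi):
    """oPDF along one bound orbit: dP/dr ∝ 1/|v_r(r)|, with the radial
    velocity from energy conservation,
    v_r^2 = 2*(E - phi(r)) - L^2 / r^2,
    and zero probability outside the orbit's radial range."""
    vr2 = 2.0 * (E - phi(r)) - (L / r) ** 2
    p = np.where(vr2 > 0.0, 1.0 / np.sqrt(np.where(vr2 > 0.0, vr2, 1.0)), 0.0)
    return p / (p.sum() * (r[1] - r[0]))   # normalize on the grid

phi = lambda r: -1.0 / r                   # point-mass potential, G*M = 1
r = np.linspace(0.01, 2.0, 4000)
pdf = radial_time_pdf(r, E=-0.5, L=0.3, phi=phi)
# The tracer lingers near the turning points, where v_r -> 0
```

Fitting a potential then amounts to asking which phi(r) makes an observed tracer sample consistent with this time-weighting, which is the essence of the paper's likelihood estimator.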
Controlling sign problems in spin models using tensor renormalization
NASA Astrophysics Data System (ADS)
Denbleyker, Alan; Liu, Yuzhi; Meurice, Y.; Qin, M. P.; Xiang, T.; Xie, Z. Y.; Yu, J. F.; Zou, Haiyuan
2014-01-01
We consider the sign problem for classical spin models at complex β = 1/g₀² on L×L lattices. We show that the tensor renormalization group method allows reliable calculations for larger Im β than the reweighting Monte Carlo method. For the Ising model with complex β, we compare our results with the exact Onsager-Kaufman solution at finite volume. The Fisher zeros can be determined precisely with the tensor renormalization group method. We check the convergence of the tensor renormalization group method for the O(2) model on L×L lattices as the number of states Ds increases. We show that the finite-size scaling of the calculated Fisher zeros agrees very well with the Kosterlitz-Thouless transition assumption, and we predict the locations for larger volumes. The locations of these zeros agree with reweighting Monte Carlo calculations for small volume. The application of the method to the O(2) model with a chemical potential is briefly discussed.
NASA Technical Reports Server (NTRS)
Noor, A. K.
1983-01-01
Advances in continuum modeling, progress in reduction methods, and analysis and modeling needs for large space structures are covered, with specific attention given to repetitive lattice trusses. As far as continuum modeling is concerned, an effective and verified analysis capability exists for linear thermoelastic stress, bifurcation buckling, and free vibration problems of repetitive lattices. However, application of continuum modeling to nonlinear analysis needs more development. Reduction methods are very effective for bifurcation buckling and static (steady-state) nonlinear analysis. However, more work is needed to realize their full potential for nonlinear dynamic and time-dependent problems. As far as analysis and modeling needs are concerned, three areas are identified: loads determination, modeling and nonclassical behavior characteristics, and computational algorithms. The impact of new advances in computer hardware, software, integrated analysis, CAD/CAM systems, and materials technology is also discussed.
Prototyping of cerebral vasculature physical models
Khan, Imad S.; Kelly, Patrick D.; Singer, Robert J.
2014-01-01
Background: Prototyping of cerebral vasculature models through stereolithographic methods can depict the 3D structures of complicated aneurysms with high accuracy. We describe the method used to manufacture such a model and review some of its uses in the context of treatment planning, research, and surgical training. Methods: We prospectively used data from the rotational angiography of a 40-year-old female who presented with an unruptured right paraclinoid aneurysm. The 3D virtual model was then converted to a physical, life-sized model. Results: The model constructed was shown to be a very accurate depiction of the aneurysm and its associated vasculature. It was found to be useful, among other things, for surgical training and as a patient education tool. Conclusion: With improving and more widespread printing options, these models have the potential to become an important part of research and training modalities. PMID:24678427
Dorman, Emily; Perry, Brian; Polis, Chelsea B; Campo-Engelstein, Lisa; Shattuck, Dominick; Hamlin, Aaron; Aiken, Abigail; Trussell, James; Sokal, David
2018-01-01
We modeled the potential impact of novel male contraceptive methods on averting unintended pregnancies in the United States, South Africa, and Nigeria. We used an established methodology for calculating the number of couple-years of protection provided by a given contraceptive method mix. We compared a "current scenario" (reflecting current use of existing methods in each country) against "future scenarios" (in which a male oral pill or a reversible vas occlusion is introduced) in order to estimate the impact on unintended pregnancies averted. Where possible, we based our assumptions on acceptability data from studies on uptake of novel male contraceptive methods. Assuming that only 10% of interested men would take up a novel male method and that users would comprise both switchers (from existing methods) and brand-new users of contraception, the model estimated that introducing the male pill or reversible vas occlusion would decrease unintended pregnancies by 3.5% to 5.2% in the United States, by 3.2% to 5% in South Africa, and by 30.4% to 38% in Nigeria. Alternative model scenarios are presented assuming uptake as high as 15% and as low as 5% in each location. Model results were sensitive to assumptions regarding novel method uptake and the proportion of switchers vs. new users. Even under conservative assumptions, the introduction of a male pill or temporary vas occlusion could meaningfully contribute to averting unintended pregnancies in a variety of contexts; the potential impact is especially great in settings where current use of contraception is low and where novel methods can attract new contraceptive users. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
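The switchers-versus-new-users accounting can be sketched with a toy calculation. The uptake numbers, effectiveness values, and baseline pregnancy rate below are invented for illustration, not the country-specific inputs of the model:

```python
def pregnancies_averted(users_new, users_switched, eff_new_method,
                        eff_old_method, pregnancies_per_unprotected_year=0.85):
    """Toy impact accounting (illustrative rates, not the paper's inputs).

    Brand-new contraceptive users avert pregnancies relative to using no
    method at all; switchers only change risk by the *difference* in
    effectiveness between the new and old methods."""
    base = pregnancies_per_unprotected_year
    averted_new = users_new * base * eff_new_method
    averted_switch = users_switched * base * (eff_new_method - eff_old_method)
    return averted_new + averted_switch

impact = pregnancies_averted(users_new=50_000, users_switched=150_000,
                             eff_new_method=0.95, eff_old_method=0.90)
```

Note that switchers from a *more* effective existing method contribute negatively, which is why the model results are sensitive to the assumed mix of switchers versus new users.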
The Volatility of Data Space: Topology Oriented Sensitivity Analysis
Du, Jing; Ligmann-Zielinska, Arika
2015-01-01
Despite the differences among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, potentially richer information about the model lies in the topological difference between the pre-model data space and the post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA to a deeper level that lies in the topology of data. PMID:26368929
Energetics of protein-DNA interactions.
Donald, Jason E; Chen, William W; Shakhnovich, Eugene I
2007-01-01
Protein-DNA interactions are vital for many processes in living cells, especially transcriptional regulation and DNA modification. To further our understanding of these important processes on the microscopic level, it is necessary that theoretical models describe the macromolecular interaction energetics accurately. While several methods have been proposed, there has not been a careful comparison of how well the different methods are able to predict biologically important quantities such as the correct DNA binding sequence, total binding free energy and free energy changes caused by DNA mutation. In addition to carrying out the comparison, we present two important theoretical models developed initially in protein folding that have not yet been tried on protein-DNA interactions. In the process, we find that the results of these knowledge-based potentials show a strong dependence on the interaction distance and the derivation method. Finally, we present a knowledge-based potential that gives comparable or superior results to the best of the other methods, including the molecular mechanics force field AMBER99.
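A minimal knowledge-based potential of the Boltzmann-inversion type can be sketched from contact counts. The counts, reference state, and function below are toy assumptions, not any of the potentials compared in the paper:

```python
import math
from collections import Counter

def knowledge_based_potential(contacts, kT=0.593):  # ~kcal/mol at 300 K
    """Statistical potential by Boltzmann inversion of contact counts:
    E(a, b) = -kT * ln( P_obs(a, b) / P_ref(a, b) ),
    with a composition-based reference state P_ref = p(a) * p(b).
    Pairs seen more often than chance get favorable (negative) energies."""
    total = sum(contacts.values())
    pa, pb = Counter(), Counter()
    for (a, b), n in contacts.items():
        pa[a] += n
        pb[b] += n
    E = {}
    for (a, b), n in contacts.items():
        p_obs = n / total
        p_ref = (pa[a] / total) * (pb[b] / total)
        E[(a, b)] = -kT * math.log(p_obs / p_ref)
    return E

# Toy counts: ARG-G contacts observed more often than expected by chance
contacts = {("ARG", "G"): 60, ("ARG", "A"): 20,
            ("ALA", "G"): 20, ("ALA", "A"): 40}
E = knowledge_based_potential(contacts)
```

As the abstract notes, real derivations of such potentials are sensitive to the interaction distance cutoff used to define a "contact" and to the choice of reference state.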
NASA Astrophysics Data System (ADS)
Choi, Woo Young; Woo, Dong-Soo; Choi, Byung Yong; Lee, Jong Duk; Park, Byung-Gook
2004-04-01
We propose a stable extraction algorithm for the threshold voltage using the transconductance change method by optimizing the node interval. With the algorithm, noise-free gm2 (= dgm/dVGS) profiles can be extracted within one-percent error, which leads to a more physically meaningful threshold voltage calculation by the transconductance change method. The extracted threshold voltage predicts the gate-to-source voltage at which the surface potential is within kT/q of φs = 2φf + VSB. Our algorithm makes the transconductance change method more practical by overcoming the noise problem. This threshold voltage extraction algorithm yields the threshold roll-off behavior of nanoscale metal-oxide-semiconductor field-effect transistors (MOSFETs) accurately and makes it possible to calculate the surface potential φs at any other point on the drain-to-source current (IDS) versus gate-to-source voltage (VGS) curve. It will provide a useful analysis tool in the field of device modeling, simulation, and characterization.
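The transconductance change method locates the threshold voltage at the peak of gm2. Below is a sketch with a synthetic I-V curve and a tunable node interval h; the device parameters and the softplus turn-on are invented for illustration, not a physical MOSFET model:

```python
import numpy as np

def gm2(vgs, ids, h):
    """Second derivative d^2(I_DS)/d(V_GS)^2 by central differences with
    node interval h (in grid samples). A wider interval trades voltage
    resolution for noise suppression, which is the trade-off the
    node-interval optimization addresses."""
    dv = vgs[1] - vgs[0]
    i = np.arange(h, len(ids) - h)
    return i, (ids[i + h] - 2 * ids[i] + ids[i - h]) / (h * dv) ** 2

# Synthetic I-V: smooth turn-on around Vt = 0.4 V (illustrative device)
vgs = np.linspace(0.0, 1.0, 501)
vt_true, n_kT = 0.4, 0.05
ids = np.log(1.0 + np.exp((vgs - vt_true) / n_kT))   # softplus turn-on
idx, g2 = gm2(vgs, ids, h=5)
vt_extracted = vgs[idx[np.argmax(g2)]]               # gm2 peak -> Vt
```

On measured data, choosing h too small amplifies measurement noise in the second difference, while choosing it too large smears the gm2 peak; the optimized node interval sits between these extremes.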
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Ruqiang; Chen, Xuefeng; Li, Weihua
Modern mathematics has commonly been utilized as an effective tool to model mechanical equipment so that its dynamic characteristics can be studied analytically. This helps identify potential failures of mechanical equipment by observing changes in the equipment's dynamic parameters. Dynamic signals are also important, providing reliable information about the equipment's working status. Modern mathematics has likewise provided a systematic way to design and implement various signal processing methods, which are used to analyze these dynamic signals and to enhance intrinsic signal components that are directly related to machine failures. This special issue is aimed at stimulating not only new insights on mathematical methods for modeling but also recently developed signal processing methods, such as sparse decomposition, with potential applications in machine fault diagnosis. Finally, the papers included in this special issue provide a glimpse into some of the research and applications in the field of machine fault diagnosis through applications of modern mathematical methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larriba, Carlos, E-mail: clarriba@umn.edu; Hogan, Christopher J.
2013-10-15
The structures of nanoparticles, macromolecules, and molecular clusters in gas phase environments are often studied via measurement of collision cross sections. To directly compare structure models to measurements, it is hence necessary to have computational techniques available to calculate the collision cross sections of structural models under conditions matching measurements. However, presently available collision cross section methods contain the underlying assumption that collisions between gas molecules and structures are completely elastic (gas molecule translational energy conserving) and specular, while experimental evidence suggests that in the most commonly used background gases for measurements, air and molecular nitrogen, gas molecule reemission is largely inelastic (with exchange of energy between vibrational, rotational, and translational modes) and should be treated as diffuse in computations with fixed structural models. In this work, we describe computational techniques to predict the free molecular collision cross sections for fixed structural models of gas phase entities where inelastic and non-specular gas molecule reemission rules can be invoked, and the long range ion-induced dipole (polarization) potential between gas molecules and a charged entity can be considered. Specifically, two calculation procedures are described in detail: a diffuse hard sphere scattering (DHSS) method, in which structures are modeled as hard spheres and collision cross sections are calculated for rectilinear trajectories of gas molecules, and a diffuse trajectory method (DTM), in which the assumption of rectilinear trajectories is relaxed and the ion-induced dipole potential is considered. Collision cross section calculations using the DHSS and DTM methods are performed on spheres, models of quasifractal aggregates of varying fractal dimension, and fullerene-like structures.
Techniques to accelerate DTM calculations by assessing the contribution of grazing gas molecule collisions (gas molecules with altered trajectories by the potential interaction) without tracking grazing trajectories are further discussed. The presented calculation techniques should enable more accurate collision cross section predictions under experimentally relevant conditions than pre-existing approaches, and should enhance the ability of collision cross section measurement schemes to discern the structures of gas phase entities.
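A minimal sketch of the geometric core shared by such hard-sphere methods: Monte Carlo ray shooting along rectilinear trajectories against a fixed multi-sphere structural model. It estimates only an orientation-fixed projected collision area, not the full DHSS momentum-transfer cross section with diffuse reemission; all names and parameters are illustrative.

```python
import math
import random

def projected_area(spheres, n=100_000, seed=1):
    """spheres: list of (x, y, z, r). Rays travel along +z, so a hit reduces
    to a 2-D point-in-disc test in the x-y plane."""
    rng = random.Random(seed)
    # bounding box of the model in the x-y plane
    xs = [x - r for x, _, _, r in spheres] + [x + r for x, _, _, r in spheres]
    ys = [y - r for _, y, _, r in spheres] + [y + r for _, y, _, r in spheres]
    lo_x, hi_x, lo_y, hi_y = min(xs), max(xs), min(ys), max(ys)
    box = (hi_x - lo_x) * (hi_y - lo_y)
    hits = 0
    for _ in range(n):
        px = rng.uniform(lo_x, hi_x)
        py = rng.uniform(lo_y, hi_y)
        if any((px - x) ** 2 + (py - y) ** 2 <= r * r for x, y, _, r in spheres):
            hits += 1
    return box * hits / n

# single unit sphere: the exact projected area is pi
area = projected_area([(0.0, 0.0, 0.0, 1.0)])
```

For aggregates, the same hit test over many spheres (averaged over random orientations) gives the orientationally averaged projected area that trajectory methods refine.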
Terza, Joseph V; Bradford, W David; Dismuke, Clara E
2008-01-01
Objective To investigate potential bias in the use of the conventional linear instrumental variables (IV) method for the estimation of causal effects in inherently nonlinear regression settings. Data Sources Smoking Supplement to the 1979 National Health Interview Survey, National Longitudinal Alcohol Epidemiologic Survey, and simulated data. Study Design Potential bias from the use of the linear IV method in nonlinear models is assessed via simulation studies and real world data analyses in two commonly encountered regression settings: (1) models with a nonnegative outcome (e.g., a count) and a continuous endogenous regressor; and (2) models with a binary outcome and a binary endogenous regressor. Principal Findings The simulation analyses show that substantial bias in the estimation of causal effects can result from applying the conventional IV method in inherently nonlinear regression settings. Moreover, the bias is not attenuated as the sample size increases. This point is further illustrated in the survey data analyses, in which IV-based estimates of the relevant causal effects diverge substantially from those obtained with appropriate nonlinear estimation methods. Conclusions We offer this research as a cautionary note to those who would opt for the use of linear specifications in inherently nonlinear settings involving endogeneity. PMID:18546544
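To make the estimators concrete, the sketch below implements ordinary least squares and the simple linear IV (Wald) estimator on a synthetic linear data-generating process with an unobserved confounder, where IV is consistent and OLS is biased. In the nonlinear settings the study examines, the same linear IV estimator can itself be substantially biased; the DGP and variable names here are illustrative, not the study's data.

```python
import random

def mean(a):
    return sum(a) / len(a)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

rng = random.Random(0)
n = 50_000
z = [rng.random() for _ in range(n)]                  # instrument (exogenous)
u = [rng.gauss(0, 1) for _ in range(n)]               # unobserved confounder
x = [zi + ui + rng.gauss(0, 0.5) for zi, ui in zip(z, u)]        # endogenous regressor
y = [2.0 * xi + ui + rng.gauss(0, 0.5) for xi, ui in zip(x, u)]  # true effect = 2

b_ols = cov(x, y) / cov(x, x)   # biased upward: x is correlated with u
b_iv  = cov(z, y) / cov(z, x)   # linear IV (Wald) estimate, consistent here
```

Replacing the linear outcome equation with, say, a Poisson (count) outcome is exactly the situation where this linear IV ratio no longer targets the causal effect, which is the bias the study documents.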
Method for confining the magnetic field of the cross-tail current inside the magnetopause
NASA Technical Reports Server (NTRS)
Sotirelis, T.; Tsyganenko, N. A.; Stern, D. P.
1994-01-01
A method is presented for analytically representing the magnetic field due to the cross-tail current and its closure on the magnetopause. It is an extension of a method used by Tsyganenko (1989b) to confine the dipole field inside an ellipsoidal magnetopause using a scalar potential. Given a model of the cross-tail current, the implied net magnetic field is obtained by adding to the cross-tail current field a potential field B = -del gamma, which makes all field lines divide into two disjoint groups separated by the magnetopause (i.e., the combined field is made to have zero normal component at the magnetopause). The magnetopause is assumed to be an ellipsoid of revolution (a prolate spheroid) as an approximation to observations (Sibeck et al., 1991). This assumption permits the potential gamma to be expressed in spheroidal coordinates, expanded in spheroidal harmonics, and its terms evaluated by performing inversion integrals. Finally, the field outside the magnetopause is replaced by zero, resulting in a consistent current closure along the magnetopause. This procedure can also be used to confine the modeled field of any other interior magnetic source, though the model current must always flow in closed circuits. The method is demonstrated on the T87 cross-tail current; examples illustrate the effect of changing the size and shape of the prescribed magnetopause, and a comparison is made to an independent numerical scheme based on the Biot-Savart equation.
An Extreme-Value Approach to Anomaly Vulnerability Identification
NASA Technical Reports Server (NTRS)
Everett, Chris; Maggio, Gaspare; Groen, Frank
2010-01-01
The objective of this paper is to present a method for importance analysis in parametric probabilistic modeling where the result of interest is the identification of potential engineering vulnerabilities associated with postulated anomalies in system behavior. In the context of Accident Precursor Analysis (APA), under which this method has been developed, these vulnerabilities, designated as anomaly vulnerabilities, are conditions that produce high risk in the presence of anomalous system behavior. The method defines a parameter-specific Parameter Vulnerability Importance measure (PVI), which identifies anomaly risk-model parameter values that indicate the potential presence of anomaly vulnerabilities, and allows them to be prioritized for further investigation. This entails analyzing each uncertain risk-model parameter over its credible range of values to determine where it produces the maximum risk. A parameter that produces high system risk for a particular range of values suggests that the system is vulnerable to the modeled anomalous conditions, if indeed the true parameter value lies in that range. Thus, PVI analysis provides a means of identifying and prioritizing anomaly-related engineering issues that at the very least warrant improved understanding to reduce uncertainty, such that true vulnerabilities may be identified and proper corrective actions taken.
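A schematic of the parameter sweep underlying a PVI-style measure, with a hypothetical two-parameter risk model (the function `risk`, the parameter names, and the credible ranges are invented for illustration): hold parameters at nominal values, sweep each across its credible range, record the maximum risk it can produce, and rank parameters by that maximum.

```python
def risk(params):
    # toy anomaly risk model: risk rises with leak_rate and falls as
    # sensor_coverage improves (purely illustrative)
    leak, cover = params["leak_rate"], params["sensor_coverage"]
    return leak * (1.0 - 0.8 * cover)

nominal = {"leak_rate": 0.1, "sensor_coverage": 0.9}
credible = {"leak_rate": (0.0, 0.5), "sensor_coverage": (0.5, 1.0)}

def pvi(name, steps=101):
    """Sweep one parameter over its credible range with all others nominal;
    return (maximum risk, parameter value at which it occurs)."""
    lo, hi = credible[name]
    best_risk, best_val = -1.0, None
    for i in range(steps):
        v = lo + (hi - lo) * i / (steps - 1)
        r = risk(dict(nominal, **{name: v}))
        if r > best_risk:
            best_risk, best_val = r, v
    return best_risk, best_val

# prioritize parameters by the worst-case risk they can produce
ranked = sorted(credible, key=lambda k: pvi(k)[0], reverse=True)
```

A parameter that tops this ranking flags a potential anomaly vulnerability only if its true value could plausibly lie in the high-risk region, which is why the abstract emphasizes follow-up investigation rather than immediate corrective action.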
Klaas, Dua K S Y; Imteaz, Monzur Alam; Arulrajah, Arul
2017-10-01
Delineation of groundwater vulnerability zones based on a valid groundwater model is crucial for an accurate design of management strategies. However, limited data often restrain the development of a robust groundwater model. This study presents a methodology to develop groundwater vulnerability zones in a data-scarce area. The Head-Guided Zonation (HGZ) method was applied to the recharge area of Oemau Spring in Rote Island, Indonesia, which is under potential risk of contamination from rapid land use changes. In this method the model domain is divided into zones of piecewise-constant properties, to which values of the subsurface parameters are assigned in the parameterisation step. Using reverse particle-tracking simulation on the calibrated and validated groundwater model, the simulation results (travel time and pathline trajectory) were combined with the potential groundwater contamination risk from human activities (land use type and current practice) to develop three vulnerability zones. Corresponding preventive management strategies were proposed to protect the spring from contamination and to ensure provision of safe, good-quality water from the spring. Copyright © 2017 Elsevier Ltd. All rights reserved.
Recent developments in skin mimic systems to predict transdermal permeation.
Waters, Laura J
2015-01-01
In recent years there has been a drive to create experimental techniques that can facilitate the accurate and precise prediction of transdermal permeation without the use of in vivo studies. This review considers why permeation data is essential, provides a brief summary as to how skin acts as a natural barrier to permeation and discusses why in vivo studies are undesirable. This is followed by an in-depth discussion on the extensive range of alternative methods that have been developed in recent years. All of the major 'skin mimic systems' are considered including: in vitro models using synthetic membranes, mathematical models including quantitative structure-permeability relationships (QSPRs), human skin equivalents and chromatographic based methods. All of these model based systems are ideally trying to achieve the same end-point, namely a reliable in vitro-in vivo correlation, i.e. matching non-in vivo obtained data with that from human clinical trials. It is only by achieving this aim that any new method of obtaining permeation data can be acknowledged as a potential replacement for animal studies, for the determination of transdermal permeation. In this review, the relevance and potential applicability of the various model systems will also be discussed.
Enhanced angular overlap model for nonmetallic f -electron systems
NASA Astrophysics Data System (ADS)
Gajek, Z.
2005-07-01
An efficient method of interpretation of the crystal field effect in nonmetallic f-electron systems, the enhanced angular overlap model (EAOM), is presented. The method is established on the ground of a perturbation expansion of the effective Hamiltonian for localized electrons and first-principles calculations related to available experimental data. The series of actinide compounds AO2, oxychalcogenides AOX, and dichalcogenides UX2, where X = S, Se, Te and A = U, Np, serve as probes of the effectiveness of the proposed method. The idea is to enhance the usual angular overlap model (AOM) with ab initio calculations of those contributions to the crystal field potential which cannot be represented by the usual AOM. The enhancement leads to an improved fitting and makes the approach intrinsically coherent. In addition, the ab initio calculations of the main, AOM-consistent part of the crystal field potential allow one to fix the material-specific relations for the EAOM parameters in the effective Hamiltonian. Consequently, the electronic structure interpretation based on EAOM can be extended to systems of the lowest point symmetries and/or deficient experimental data. Several examples illustrating the promising capabilities of EAOM are given.
Strength-based Supervision: Frameworks, Current Practice, and Future Directions A Wu-wei Method.
ERIC Educational Resources Information Center
Edwards, Jeffrey K.; Chen, Mei-Whei
1999-01-01
Discusses a method of counseling supervision similar to the wu-wei practice in Zen and Taoism. Suggests that this strength-based method and an understanding of isomorphy in supervisory relationships are the preferred practice for the supervision of family counselors. States that this model of supervision potentiates the person-of-the-counselor.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cronin, Keith R.; Runge, Troy M.; Zhang, Xuesong
2016-07-13
Modeling the life cycle of fuel pathways for cellulosic ethanol (CE) can help identify logistical barriers and anticipated impacts for the emerging commercial CE industry. Such models contain high amounts of variability, primarily due to the varying nature of agricultural production but also because of limitations in the availability of data at the local scale, resulting in the typical practice of using average values. In this study, 12 spatially explicit, cradle-to-refinery-gate CE pathways were developed that vary by feedstock (corn stover, switchgrass, and Miscanthus), nitrogen application rate (higher, lower), pretreatment method (ammonia fiber expansion [AFEX], dilute acid), and co-product treatment method (mass allocation, sub-division), in which feedstock production was modeled at the watershed scale over a nine-county area in Southwestern Michigan. When comparing feedstocks, the model showed that corn stover yielded higher global warming potential (GWP), acidification potential (AP), and eutrophication potential (EP) than the perennial feedstocks of switchgrass and Miscanthus, on an average per-area basis. Full life cycle results per MJ of produced ethanol demonstrated more mixed results, with corn stover-derived CE scenarios that use sub-division as a co-product treatment method yielding similarly favorable outcomes as switchgrass- and Miscanthus-derived CE scenarios. Variability was found to be greater between feedstocks than between watersheds. Additionally, scenarios using dilute acid pretreatment had more favorable results than those using AFEX pretreatment.
HYSOGs250m, global gridded hydrologic soil groups for curve-number-based runoff modeling.
Ross, C Wade; Prihodko, Lara; Anchang, Julius; Kumar, Sanath; Ji, Wenjie; Hanan, Niall P
2018-05-15
Hydrologic soil groups (HSGs) are a fundamental component of the USDA curve-number (CN) method for estimation of rainfall runoff; yet these data are not readily available in a format or spatial resolution suitable for regional- and global-scale modeling applications. We developed a globally consistent, gridded dataset defining HSGs from soil texture, bedrock depth, and groundwater. The resulting data product, HYSOGs250m, represents runoff potential at 250 m spatial resolution. Our analysis indicates that the global distribution of soil is dominated by moderately high runoff potential, followed by moderately low, high, and low runoff potential. Low runoff potential, sandy soils are found primarily in parts of the Sahara and Arabian Deserts. High runoff potential soils occur predominantly within tropical and sub-tropical regions. No clear pattern could be discerned for moderately low runoff potential soils, as they occur in arid and humid environments and at both high and low elevations. Potential applications of this data include CN-based runoff modeling, flood risk assessment, and use as a covariate for biogeographical analysis of vegetation distributions.
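For context, this is the standard SCS curve-number runoff calculation that HSG-derived CN values feed into; the CN-by-HSG table below is illustrative only, since actual CNs depend on land use and published lookup tables.

```python
def runoff_depth(p_in, cn):
    """SCS-CN direct runoff Q (inches) for rainfall depth p_in (inches):
    S = 1000/CN - 10, Ia = 0.2*S, Q = (P - Ia)^2 / (P + 0.8*S) for P > Ia."""
    s = 1000.0 / cn - 10.0        # potential maximum retention
    ia = 0.2 * s                  # initial abstraction
    if p_in <= ia:
        return 0.0
    return (p_in - ia) ** 2 / (p_in + 0.8 * s)

# hypothetical CNs for one land-use class across the four HSGs
# (A: low runoff potential ... D: high runoff potential)
cn_by_hsg = {"A": 49, "B": 69, "C": 79, "D": 84}
q = {g: runoff_depth(3.0, cn) for g, cn in cn_by_hsg.items()}
```

For a 3-inch storm, runoff depth increases monotonically from group A to group D, which is exactly the gradient the gridded HSG product encodes.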
Organizational Culture and the Deployment of Agile Methods: The Competing Values Model View
NASA Astrophysics Data System (ADS)
Iivari, Juhani; Iivari, Netta
A number of researchers have identified organizational culture as a factor that potentially affects the deployment of agile systems development methods. Inspired by the study of Iivari and Huisman (2007), which focused on the deployment of traditional systems development methods, the present paper proposes a number of hypotheses about the influence of organizational culture on the deployment of agile methods.
Study of thermodynamic properties of liquid binary alloys by a pseudopotential method
NASA Astrophysics Data System (ADS)
Vora, Aditya M.
2010-11-01
On the basis of the Percus-Yevick hard-sphere model as a reference system and the Gibbs-Bogoliubov inequality, a thermodynamic perturbation method is applied with the use of the well-known model potential. By applying a variational method, the hard-core diameters are found which correspond to a minimum free energy. With this procedure, the thermodynamic properties such as the internal energy, entropy, Helmholtz free energy, entropy of mixing, and heat of mixing are computed for liquid NaK binary systems. The influence of the local-field correction functions of Hartree, Taylor, Ichimaru-Utsumi, Farid-Heine-Engel-Robertson, and Sarkar-Sen-Haldar-Roy is also investigated. The computed excess entropy is in agreement with available experimental data in the case of liquid alloys, whereas the agreement for the heat of mixing is poor. This may be due to the sensitivity of the latter to the potential parameters and dielectric function.
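The variational step can be sketched generically: treat the hard-core diameter as the variational parameter and minimize the Gibbs-Bogoliubov free-energy bound. The free-energy function below is a toy stand-in (the real expression involves the hard-sphere reference free energy plus pseudopotential corrections), so only the minimization pattern is meaningful.

```python
def free_energy(sigma):
    # illustrative convex stand-in for F_HS(sigma) + perturbation correction;
    # not the actual pseudopotential free-energy expression
    return (sigma - 2.6) ** 2 + 0.1 * sigma

def minimize(f, lo, hi, steps=10_001):
    """Grid search for the minimizer of f on [lo, hi]."""
    best = min(range(steps), key=lambda i: f(lo + (hi - lo) * i / (steps - 1)))
    return lo + (hi - lo) * best / (steps - 1)

# hard-core diameter (in arbitrary units) that minimizes the bound
sigma_star = minimize(free_energy, 2.0, 3.2)
```

Once the minimizing diameter is fixed, all reference-system thermodynamics (internal energy, entropy, and so on) are evaluated at that diameter, as in the procedure described above.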
Role of the supersymmetric semiclassical approach in barrier penetration and heavy-ion fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sil, T.; Dutt, R.; Varshni, Y.P.
1994-11-01
The problem of heavy-ion fusion reactions in the one-dimensional barrier penetration model (BPM) has been reexamined in light of the supersymmetry-inspired WKB (SWKB) method. Motivated by our recent work [Phys. Lett. A 184, 209 (1994)] describing the SWKB method for the computation of the transmission coefficient T(E), we have performed similar calculations for a potential barrier that mimics the proximity potential obtained by fitting experimentally measured fusion cross sections σF
Complex absorbing potential based Lorentzian fitting scheme and time dependent quantum transport.
Xie, Hang; Kwok, Yanho; Jiang, Feng; Zheng, Xiao; Chen, GuanHua
2014-10-28
Based on the complex absorbing potential (CAP) method, a Lorentzian expansion scheme is developed to express the self-energy. The CAP-based Lorentzian expansion of the self-energy is employed to solve efficiently the Liouville-von Neumann equation for the one-electron density matrix. The resulting method is applicable to both tight-binding and first-principles models and is used to simulate transient currents through graphene nanoribbons and a benzene molecule sandwiched between two carbon-atom chains.
Computational Design of DNA-Binding Proteins.
Thyme, Summer; Song, Yifan
2016-01-01
Predicting the outcome of engineered and naturally occurring sequence perturbations to protein-DNA interfaces requires accurate computational modeling technologies. It has been well established that computational design to accommodate small numbers of DNA target site substitutions is possible. This chapter details the basic method of design used in the Rosetta macromolecular modeling program that has been successfully used to modulate the specificity of DNA-binding proteins. More recently, combining computational design and directed evolution has become a common approach for increasing the success rate of protein engineering projects. The power of such high-throughput screening depends on computational methods producing multiple potential solutions. Therefore, this chapter describes several protocols for increasing the diversity of designed output. Lastly, we describe an approach for building comparative models of protein-DNA complexes in order to utilize information from homologous sequences. These models can be used to explore how nature modulates specificity of protein-DNA interfaces and potentially can even be used as starting templates for further engineering.
Cathodic Protection Measurement Through Inline Inspection Technology Uses and Observations
NASA Astrophysics Data System (ADS)
Ferguson, Briana Ley
This research supports the evaluation of an impressed current cathodic protection (CP) system of a buried coated steel pipeline through alternative technology and methods, via an inline inspection device (ILI, CP ILI tool, or tool), in order to prevent and mitigate external corrosion. This thesis investigates the ability to measure the current density of a pipeline's CP system from inside the pipeline rather than manually from outside, and then to convert that CP ILI tool reading into a pipe-to-soil potential as required by regulations and standards. This was demonstrated through a mathematical model that applies Ohm's law, circuit concepts, and attenuation principles to match the ILI sample data by varying parameters of the model (i.e., values for over-potential and coating resistivity). Such research has not been conducted previously to determine whether the protected potential range can be achieved with respect to the current density predicted by the CP ILI device. Kirchhoff's method was explored, but certain principles could not be used in the model because manual measurements were required. The analysis was based on circuit concepts that indirectly reflect the underlying electrochemical processes. Through Ohm's law, the results show that a constant current density is possible in the protected potential range, which indicates polarization of the pipeline and leads to calcareous deposit development. Calcareous deposit is desirable in industry since it increases the resistance of the pipeline coating and lowers current, thus slowing the oxygen diffusion process. This research conveys that an alternative method for CP evaluation from inside the pipeline is possible, in which the pipe-to-soil potential can be estimated (as required by regulations) from the ILI tool's current density measurement.
X ray absorption fine structure of systems in the anharmonic limit
NASA Astrophysics Data System (ADS)
Mustre de Leon, J.; Conradson, S. D.; Batistic, I.; Bishop, A. R.; Raistrick, I.; Jackson, W. E.; Brown, G. E.
A new approach to the analysis of x-ray absorption fine structure (XAFS) data is presented. It is based on the use of radial distribution functions directly calculated from a single-particle ion Hamiltonian containing model potentials. The starting point of this approach is the statistical average of the XAFS for an atomic pair. This average can be computed using a radial distribution function (RDF), which can be expressed in terms of the eigenvalues and wavefunctions associated with the model potential. The pair potential describing the ionic motion is then expressed in terms of parameters that are determined by fitting this statistical average to the experimental XAFS spectrum. This approach allows the use of XAFS as a tool for mapping near-neighbor interatomic potentials, and allows the treatment of systems which exhibit strongly anharmonic potentials and cannot be treated by perturbative methods. Using this method we have analyzed the high temperature behavior of the oxygen contributions to the Fe K-edge XAFS in the ferrosilicate minerals andradite (Ca3Fe2Si3O12) and magnesiowustite (Mg(0.9)Fe(0.1)O). Using a temperature dependent anharmonic correction derived from these model compounds, we have found evidence for a local structural change in the Fe-O coordination environment upon melting of the geologically important mineral fayalite (Fe2SiO4). We have also applied this method to the study of the axial oxygen contributions to the polarized Cu K-edge XAFS on oriented samples of YBa2Cu3O7 and related compounds. From this study we find evidence for an axial oxygen-centered lattice distortion accompanying the superconducting phase transition and a correlation between this distortion and Tc. The relation of the observed lattice distortion to mechanisms of superconductivity is discussed.
NASA Astrophysics Data System (ADS)
Naghibi, Seyed Amir; Moghaddam, Davood Davoodi; Kalantar, Bahareh; Pradhan, Biswajeet; Kisi, Ozgur
2017-05-01
In recent years, application of ensemble models has increased tremendously in various types of natural hazard assessment, such as landslides and floods. However, application of this kind of robust model in groundwater potential mapping is relatively new. This study applied four data mining algorithms, including AdaBoost, Bagging, generalized additive model (GAM), and Naive Bayes (NB) models, to map groundwater potential. Then, a novel frequency ratio data mining ensemble model (FREM) was introduced and evaluated. For this purpose, eleven groundwater conditioning factors (GCFs), including altitude, slope aspect, slope angle, plan curvature, stream power index (SPI), river density, distance from rivers, topographic wetness index (TWI), land use, normalized difference vegetation index (NDVI), and lithology, were mapped. A total of 281 well locations with high potential were selected. Wells were randomly partitioned into two classes for training the models (70% or 197) and validating them (30% or 84). AdaBoost, Bagging, GAM, and NB algorithms were employed to produce groundwater potential maps (GPMs). The GPMs were categorized into potential classes using the natural breaks classification scheme. In the next stage, frequency ratio (FR) values were calculated for the outputs of the four aforementioned models and summed, and finally a GPM was produced using FREM. For validating the models, the area under the receiver operating characteristic (ROC) curve was calculated. The area under the curve for the prediction dataset was 94.8, 93.5, 92.6, 92.0, and 84.4% for the FREM, Bagging, AdaBoost, GAM, and NB models, respectively. The results indicated that FREM had the best performance among all the models. The better performance of the FREM model could be related to reduced overfitting and possible errors. Other models such as AdaBoost, Bagging, GAM, and NB also produced acceptable performance in groundwater modelling.
The GPMs produced in the current study may facilitate groundwater exploitation by determining high and very high groundwater potential zones.
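A toy version of the FREM combination step (the maps, classes, and well locations below are invented for illustration): each base model yields a categorical potential map, the frequency ratio of each class is computed as the share of known wells in the class divided by the share of cells in the class, and the FR-coded maps are summed cell by cell.

```python
from collections import Counter

def frequency_ratio(class_map, well_cells):
    """FR per class = (fraction of wells in class) / (fraction of cells in class)."""
    n = len(class_map)
    area = Counter(class_map)                        # cells per class
    wells = Counter(class_map[i] for i in well_cells)
    return {c: (wells.get(c, 0) / len(well_cells)) / (area[c] / n)
            for c in area}

def frem(maps, well_cells):
    """Sum the FR-coded maps of all base models, cell by cell."""
    frs = [frequency_ratio(m, well_cells) for m in maps]
    return [sum(fr[m[i]] for fr, m in zip(frs, maps))
            for i in range(len(maps[0]))]

# toy 8-cell maps from two base models, classes "low"/"high";
# known productive wells sit at cells 0-2
m1 = ["high", "high", "high", "low", "low", "low", "low", "low"]
m2 = ["high", "high", "low", "high", "low", "low", "low", "low"]
score = frem([m1, m2], well_cells=[0, 1, 2])
```

Cells rated "high" by models that concentrate the known wells receive the largest summed FR, so the ensemble score rewards agreement between well occurrence and predicted class.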
A Bayesian Approach to Surrogacy Assessment Using Principal Stratification in Clinical Trials
Li, Yun; Taylor, Jeremy M.G.; Elliott, Michael R.
2011-01-01
Summary A surrogate marker (S) is a variable that can be measured earlier and often easier than the true endpoint (T) in a clinical trial. Most previous research has been devoted to developing surrogacy measures to quantify how well S can replace T or examining the use of S in predicting the effect of a treatment (Z). However, the research often requires one to fit models for the distribution of T given S and Z. It is well known that such models do not have causal interpretations because the models condition on a post-randomization variable S. In this paper, we directly model the relationship among T, S and Z using a potential outcomes framework introduced by Frangakis and Rubin (2002). We propose a Bayesian estimation method to evaluate the causal probabilities associated with the cross-classification of the potential outcomes of S and T when S and T are both binary. We use a log-linear model to directly model the association between the potential outcomes of S and T through the odds ratios. The quantities derived from this approach always have causal interpretations. However, this causal model is not identifiable from the data without additional assumptions. To reduce the non-identifiability problem and increase the precision of statistical inferences, we assume monotonicity and incorporate prior belief that is plausible in the surrogate context by using prior distributions. We also explore the relationship among the surrogacy measures based on traditional models and this counterfactual model. The method is applied to the data from a glaucoma treatment study. PMID:19673864
Ensemble ecosystem modeling for predicting ecosystem response to predator reintroduction.
Baker, Christopher M; Gordon, Ascelin; Bode, Michael
2017-04-01
Introducing a new or extirpated species to an ecosystem is risky, and managers need quantitative methods that can predict the consequences for the recipient ecosystem. Proponents of keystone predator reintroductions commonly argue that the presence of the predator will restore ecosystem function, but this has not always been the case, and mathematical modeling has an important role to play in predicting how reintroductions will likely play out. We devised an ensemble modeling method that integrates species interaction networks and dynamic community simulations and used it to describe the range of plausible consequences of 2 keystone-predator reintroductions: wolves (Canis lupus) to Yellowstone National Park and dingoes (Canis dingo) to a national park in Australia. Although previous methods for predicting ecosystem responses to such interventions focused on predicting changes around a given equilibrium, we used Lotka-Volterra equations to predict changing abundances through time. We applied our method to interaction networks for wolves in Yellowstone National Park and for dingoes in Australia. Our model replicated the observed dynamics in Yellowstone National Park and produced a larger range of potential outcomes for the dingo network. However, we also found that changes in small vertebrates or invertebrates gave a good indication about the potential future state of the system. Our method allowed us to predict when the systems were far from equilibrium. Our results showed that the method can also be used to predict which species may increase or decrease following a reintroduction and can identify species that are important to monitor (i.e., species whose changes in abundance give extra insight into broad changes in the system). Ensemble ecosystem modeling can also be applied to assess the ecosystem-wide implications of other types of interventions including assisted migration, biocontrol, and invasive species eradication. © 2016 Society for Conservation Biology.
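In the spirit of the ensemble approach described above, the sketch below samples uncertain interaction strengths for a toy two-species Lotka-Volterra predator-prey system, simulates each sampled system with forward Euler, and summarizes the spread of predicted prey abundances. The network, parameter ranges, and integrator are illustrative assumptions, not the study's Yellowstone or dingo models.

```python
import random

def simulate(a, b, c, d, x0=1.0, y0=0.5, dt=0.001, t_end=20.0):
    """Prey x and predator y: dx/dt = x(a - b*y), dy/dt = y(-c + d*x);
    integrated by forward Euler."""
    x, y = x0, y0
    for _ in range(int(t_end / dt)):
        x, y = x + dt * x * (a - b * y), y + dt * y * (-c + d * x)
    return x, y

rng = random.Random(42)
ensemble = []
for _ in range(50):
    b = rng.uniform(0.8, 1.2)   # uncertain predation strength
    d = rng.uniform(0.4, 0.6)   # uncertain conversion efficiency
    ensemble.append(simulate(1.0, b, 1.0, d))

prey_final = [x for x, _ in ensemble]
lo, hi = min(prey_final), max(prey_final)   # plausible range of outcomes
```

Reporting the ensemble range rather than a single trajectory is what lets this style of model express "the range of plausible consequences" of a reintroduction.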
Tomio, Ryosuke; Akiyama, Takenori; Ohira, Takayuki; Yoshida, Kazunari
2016-01-01
Intraoperative monitoring of motor evoked potentials by transcranial electric stimulation is widely used in neurosurgery to monitor preservation of motor function. Some authors have reported that peg-screw electrodes screwed into the skull conduct current to the brain more effectively than subdermal cork-screw electrodes screwed into the skin. The aim of this study was to investigate the influence of electrode design on transcranial motor evoked potential monitoring. We estimated differences in effectiveness between the cork-screw electrode, peg-screw electrode, and cortical electrode in producing electric fields in the brain. We used the finite element method to visualize electric fields in the brain generated by transcranial electric stimulation, using realistic three-dimensional head models developed from T1-weighted images. Surfaces of five layers of the head were separated as accurately as possible. We created the "cork-screws model," "1 peg-screw model," "peg-screws model," and "cortical electrode model". In coronal sections, electric fields in the brain diffused radially from the brain surface, with a maximum just below the electrodes. The coronal sections and surface views of the brain showed higher electric field distributions under the peg-screw compared to the cork-screw. An extremely high electric field was observed under cortical electrodes. Our main finding was that the intensity of electric fields in the brain is higher in the peg-screw model than in the cork-screw model.
Haueisen, J; Ramon, C; Eiselt, M; Brauer, H; Nowak, H
1997-08-01
Modeling in magnetoencephalography (MEG) and electroencephalography (EEG) requires knowledge of the in vivo tissue resistivities of the head. The aim of this paper is to examine the influence of tissue resistivity changes on the neuromagnetic field and the electric scalp potential. A high-resolution finite element method (FEM) model (452,162 elements, 2-mm resolution) of the human head with 13 different tissue types is employed for this purpose. Our main finding was that the magnetic fields are sensitive to changes in the tissue resistivity in the vicinity of the source. In comparison, the electric surface potentials are sensitive to changes in the tissue resistivity in the vicinity of the source and in the vicinity of the position of the electrodes. The magnitude (strength) of magnetic fields and electric surface potentials is strongly influenced by tissue resistivity changes, while the topography is not as strongly influenced. Therefore, an accurate modeling of magnetic field and electric potential strength requires accurate knowledge of tissue resistivities, while for source localization procedures this knowledge might not be a necessity.
NASA Astrophysics Data System (ADS)
Hancock, G. R.; Webb, A. A.; Turner, L.
2017-11-01
Sediment transport and soil erosion can be determined by a variety of field and modelling approaches. Computer based soil erosion and landscape evolution models (LEMs) offer the potential to be reliable assessment and prediction tools. An advantage of such models is that they provide both erosion and deposition patterns as well as total catchment sediment output. However, before use, like all models they require calibration and validation. In recent years LEMs have been used for a variety of both natural and disturbed landscape assessment. However, these models have not been evaluated for their reliability in steep forested catchments. Here, the SIBERIA LEM is calibrated and evaluated for its reliability for two steep forested catchments in south-eastern Australia. The model is independently calibrated using two methods. Firstly, hydrology and sediment transport parameters are inferred from catchment geomorphology and soil properties and secondly from catchment sediment transport and discharge data. The results demonstrate that both calibration methods provide similar parameters and reliable modelled sediment transport output. A sensitivity study of the input parameters demonstrates the model's sensitivity to correct parameterisation and also how the model could be used to assess potential timber harvesting as well as the removal of vegetation by fire.
Modeling of Hydraulic Fracturing on the Basis of the Particle Method
NASA Astrophysics Data System (ADS)
Berezhnoi, D. V.; Gabsalikova, N. F.; Izotov, V. G.; Miheev, V. V.
2018-01-01
A technique for calculating the deformation of the soil environment as it interacts with a liquid is realized on the basis of the particle method. To describe the behavior of the solid and liquid phases of the soil, the classical two-parameter Lennard-Jones interaction potential and a modified version proposed by the authors were chosen. The model problem of deformation and partial destruction of a soil massif under strong pressure from the liquid pumped into it is solved. Analysis of the results shows that the use of the modified Lennard-Jones potential for describing the solid phase of the soil environment makes it possible to describe the process of crack formation in the soil during hydraulic fracturing of the formation.
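The classical two-parameter Lennard-Jones potential referred to above has the standard form V(r) = 4ε[(σ/r)^12 − (σ/r)^6]. A minimal sketch in reduced units (the authors' modified version is not specified in the abstract, so only the classical form is shown):

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Classical two-parameter Lennard-Jones pair potential:
    V(r) = 4*epsilon*((sigma/r)**12 - (sigma/r)**6).
    Strongly repulsive at short range, weakly attractive at long range."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# The potential crosses zero at r = sigma and has its minimum of depth
# -epsilon at r = 2**(1/6) * sigma.
r_min = 2.0 ** (1.0 / 6.0)
```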
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tabacchi, G; Hutter, J; Mundy, C
2005-04-07
A combined linear response-frozen electron density model has been implemented in a molecular dynamics scheme derived from an extended Lagrangian formalism. This approach is based on a partition of the electronic charge distribution into a frozen region described by Kim-Gordon theory, and a response contribution determined by the instantaneous ionic configuration of the system. The method is free from empirical pair potentials and the parameterization protocol involves only calculations on properly chosen subsystems. We apply this method to a series of alkali halides in different physical phases and are able to reproduce experimental structural and thermodynamic properties with an accuracy comparable to Kohn-Sham density functional calculations.
Acute and chronic animal models for the evaluation of anti-diabetic agents
2012-01-01
Diabetes mellitus is a potentially morbid condition with high prevalence worldwide thus being a major medical concern. Experimental induction of diabetes mellitus in animal models is essential for the advancement of our knowledge and understanding of the various aspects of its pathogenesis and ultimately finding new therapies and cure. Experimental diabetes mellitus is generally induced in laboratory animals by several methods that include: chemical, surgical and genetic (immunological) manipulations. Most of the experiments in diabetes are carried out in rodents, although some studies are still performed in larger animals. The present review highlights the various methods of inducing diabetes in experimental animals in order to test the newer drugs for their anti-diabetic potential. PMID:22257465
NASA Astrophysics Data System (ADS)
Sakai, Yoshiko; Miyoshi, Eisaku
1987-09-01
Electronic structures of MF6, MF6^-, and MF6^2- (M = Cr, Mo, and W) were calculated using a model potential method in the Hartree-Fock-Roothaan scheme. Major relativistic effects were taken into account for the calculations on MoF6^q and WF6^q (q = 0, -1, and -2). It is shown that the calculated electron affinities (EAs) are extremely high for all the MF6 molecules, and that the CrF6^- and MoF6^- anions also have positive EAs, whereas the WF6^- anion has a slightly negative EA. The behaviors of the EAs are interpreted with reference to the electronic structures of the MF6^q systems.
VoroMQA: Assessment of protein structure quality using interatomic contact areas.
Olechnovič, Kliment; Venclovas, Česlovas
2017-06-01
In the absence of experimentally determined protein structure many biological questions can be addressed using computational structural models. However, the utility of protein structural models depends on their quality. Therefore, the estimation of the quality of predicted structures is an important problem. One of the approaches to this problem is the use of knowledge-based statistical potentials. Such methods typically rely on the statistics of distances and angles of residue-residue or atom-atom interactions collected from experimentally determined structures. Here, we present VoroMQA (Voronoi tessellation-based Model Quality Assessment), a new method for the estimation of protein structure quality. Our method combines the idea of statistical potentials with the use of interatomic contact areas instead of distances. Contact areas, derived using Voronoi tessellation of protein structure, are used to describe and seamlessly integrate both explicit interactions between protein atoms and implicit interactions of protein atoms with solvent. VoroMQA produces scores at atomic, residue, and global levels, all in the fixed range from 0 to 1. The method was tested on the CASP data and compared to several other single-model quality assessment methods. VoroMQA showed strong performance in the recognition of the native structure and in the structural model selection tests, thus demonstrating the efficacy of interatomic contact areas in estimating protein structure quality. The software implementation of VoroMQA is freely available as a standalone application and as a web server at http://bioinformatics.lt/software/voromqa. Proteins 2017; 85:1131-1145. © 2017 Wiley Periodicals, Inc.
Computational models for predicting interactions with membrane transporters.
Xu, Y; Shen, Q; Liu, X; Lu, J; Li, S; Luo, C; Gong, L; Luo, X; Zheng, M; Jiang, H
2013-01-01
Membrane transporters, comprising two families, the ATP-binding cassette (ABC) transporters and the solute carrier (SLC) transporters, are proteins that facilitate the movement of molecules into and out of cells. Consequently, these transporters can be major determinants of the therapeutic efficacy, toxicity and pharmacokinetics of a variety of drugs. Given the time and expense of biological experiments, computational methods have emerged as a complementary choice for evaluating efficacy and safety. In this article, we provide an overview of the contributions that computational methods have made to the transporter field in the past decades. We begin with a brief introduction to the structure and function of the major members of the two transporter families. In the second part, we focus on widely used computational methods in different aspects of transporter research. In the absence of a high-resolution structure for most transporters, homology modeling is a useful tool to interpret experimental data and potentially guide experimental studies. We summarize reported homology models in this review. Computational studies cover the major members of the transporters and a variety of topics, including the classification of substrates and/or inhibitors, prediction of protein-ligand interactions, constitution of the binding pocket, phenotypes of non-synonymous single-nucleotide polymorphisms, and conformational analyses that attempt to explain the mechanism of action. As an example, one of the most important transporters, P-gp, is discussed in detail to illustrate the differences and advantages of various computational models. In the third part, the challenges of developing computational methods that give reliable predictions, as well as potential future directions in transporter-related modeling, are discussed.
Ab Initio Large-Basis No-Core Shell Model and its Application to Light Nuclei
NASA Astrophysics Data System (ADS)
Barrett, Bruce R.; Navratil, Petr; Ormand, W. E.; Vary, James P.
2002-01-01
We discuss the ab initio No-Core Shell Model (NCSM). In this method the effective Hamiltonians are derived microscopically from realistic nucleon-nucleon (NN) potentials, such as the CD-Bonn and the Argonne AV18 NN potentials, as a function of the finite Harmonic Oscillator (HO) basis space. We present converged results, i.e., up to 50ℏΩ and 18ℏΩ HO excitations, respectively, for the A=3 and 4 nucleon systems. Our results for these light systems are in agreement with results obtained by other exact methods. We also calculate properties of 6Li and 6He in model spaces up to 10ℏΩ and of 12C up to 6ℏΩ. Binding energies, rms radii, excitation spectra and electromagnetic properties are discussed. The favorable comparison with available data is a consequence of the underlying NN interaction rather than a phenomenological fit.
Efficient two-component relativistic method for large systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakai, Hiromi; Research Institute for Science and Engineering, Waseda University, Tokyo 169-8555; CREST, Japan Science and Technology Agency, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012
This paper reviews a series of theoretical studies by the author's group to develop an efficient two-component (2c) relativistic method for large systems. The basic theory is the infinite-order Douglas-Kroll-Hess (IODKH) method for the many-electron Dirac-Coulomb Hamiltonian. The local unitary transformation (LUT) scheme can effectively produce the 2c relativistic Hamiltonian, and the divide-and-conquer (DC) method can achieve linear scaling of Hartree-Fock and electron correlation methods. The frozen core potential (FCP) theoretically connects model potential calculations with all-electron ones. The accompanying coordinate expansion with a transfer recurrence relation (ACE-TRR) scheme accelerates the computation of electron repulsion integrals with high angular momenta and long contractions.
Wang, Qilin; Sun, Jing; Zhang, Chang; Xie, Guo-Jun; Zhou, Xu; Qian, Jin; Yang, Guojing; Zeng, Guangming; Liu, Yiqi; Wang, Dongbo
2016-01-21
Anaerobic sludge digestion is the main technology for sludge reduction and stabilization prior to sludge disposal. Nevertheless, methane production from anaerobic digestion of waste activated sludge (WAS) is often restricted by the poor biochemical methane potential and slow hydrolysis rate of WAS. This work systematically investigated the effect of PHA levels of WAS on anaerobic methane production, using both experimental and mathematical modeling approaches. Biochemical methane potential tests showed that methane production increased with increased PHA levels in WAS. Model-based analysis suggested that the PHA-based method enhanced methane production by improving biochemical methane potential of WAS, with the highest enhancement being around 40% (from 192 to 274 L CH4/kg VS added; VS: volatile solid) when the PHA levels increased from 21 to 143 mg/g VS. In contrast, the hydrolysis rate (approximately 0.10 d(-1)) was not significantly affected by the PHA levels. Economic analysis suggested that the PHA-based method could save $1.2/PE/y (PE: population equivalent) in a typical wastewater treatment plant (WWTP). The PHA-based method can be easily integrated into the current WWTP to enhance methane production, thereby providing a strong support to the on-going paradigm shift in wastewater management from pollutant removal to resource recovery.
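The figures in the abstract fit the first-order kinetic form commonly used to analyse biochemical methane potential (BMP) tests, B(t) = B0(1 − e^(−kt)). The sketch below is an illustration of that standard form using the reported numbers, not the authors' full model-based analysis:

```python
import math

def cumulative_methane(t_days, b0, k=0.10):
    """First-order BMP kinetics: B(t) = B0 * (1 - exp(-k * t)), with
    B0 the biochemical methane potential (L CH4 / kg VS added) and
    k the hydrolysis rate constant (d^-1)."""
    return b0 * (1.0 - math.exp(-k * t_days))

# Reported values: B0 rises from 192 to 274 L CH4/kg VS as PHA levels
# increase, while k stays near 0.10 d^-1.
low_pha = cumulative_methane(30, 192)   # ~30-day test, low-PHA sludge
high_pha = cumulative_methane(30, 274)  # ~30-day test, high-PHA sludge
enhancement = (274 - 192) / 192         # the "around 40%" in the text
```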
Lessening the Effects of Projection for Line-of-Sight Magnetic Field Data
NASA Astrophysics Data System (ADS)
Leka, K. D.; Barnes, Graham; Wagner, Eric
2016-05-01
A method for treating line-of-sight magnetic field data (Blos) is developed with the goal of reconstructing the radially directed component (Br) of the solar photospheric magnetic field. The latter is generally the desired quantity for use as a boundary for modeling efforts and observational interpretation of the surface field, but the two are only equivalent where the viewing angle is exactly zero (μ=1.0). A common approximation known as the "μ-correction", which assumes all photospheric field to be radial, is compared to a method which invokes a potential field constructed to match the observed Blos (Alissandrakis 1981; Sakurai 1982), from which the potential-field radial component (Brpot) is recovered. We compare this treatment of Blos data to the radial component derived from SDO/HMI full-disk vector magnetograms as the "ground truth", and discuss the implications for data analysis and modeling efforts. In regions that are truly dominated by radial field, the μ-correction performs acceptably, if not better than the potential-field approach. However, for any solar structure which includes horizontal fields, i.e. active regions, the potential-field method better recovers the magnetic neutral line location and the inferred strength of the radial field. This work was made possible through contracts with NASA, NSF, and NOAA/SBIR.
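The μ-correction being compared is a one-line transformation; a minimal sketch, assuming μ is the cosine of the heliocentric viewing angle:

```python
import numpy as np

def mu_correction(b_los, mu):
    """The "mu-correction": assume the photospheric field is purely
    radial, so B_r is approximated by B_los / mu, where
    mu = cos(heliocentric viewing angle). Exact at disk centre (mu = 1);
    increasingly wrong wherever the true field has a horizontal part."""
    return np.asarray(b_los, dtype=float) / np.asarray(mu, dtype=float)
```

The potential-field alternative (Alissandrakis 1981; Sakurai 1982) requires a Fourier-based solve over the full magnetogram and is not reproduced here.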
Guiding Conformation Space Search with an All-Atom Energy Potential
Brunette, TJ; Brock, Oliver
2009-01-01
The most significant impediment for protein structure prediction is the inadequacy of conformation space search. Conformation space is too large and the energy landscape too rugged for existing search methods to consistently find near-optimal minima. To alleviate this problem, we present model-based search, a novel conformation space search method. Model-based search uses highly accurate information obtained during search to build an approximate, partial model of the energy landscape. Model-based search aggregates information in the model as it progresses, and in turn uses this information to guide exploration towards regions most likely to contain a near-optimal minimum. We validate our method by predicting the structure of 32 proteins, ranging in length from 49 to 213 amino acids. Our results demonstrate that model-based search is more effective at finding low-energy conformations in high-dimensional conformation spaces than existing search methods. The reduction in energy translates into structure predictions of increased accuracy. PMID:18536015
Spatial modelling of disease using data- and knowledge-driven approaches.
Stevens, Kim B; Pfeiffer, Dirk U
2011-09-01
The purpose of spatial modelling in animal and public health is three-fold: describing existing spatial patterns of risk, attempting to understand the biological mechanisms that lead to disease occurrence and predicting what will happen in the medium to long-term future (temporal prediction) or in different geographical areas (spatial prediction). Traditional methods for temporal and spatial predictions include general and generalized linear models (GLM), generalized additive models (GAM) and Bayesian estimation methods. However, such models require both disease presence and absence data which are not always easy to obtain. Novel spatial modelling methods such as maximum entropy (MAXENT) and the genetic algorithm for rule set production (GARP) require only disease presence data and have been used extensively in the fields of ecology and conservation, to model species distribution and habitat suitability. Other methods, such as multicriteria decision analysis (MCDA), use knowledge of the causal factors of disease occurrence to identify areas potentially suitable for disease. In addition to their less restrictive data requirements, some of these novel methods have been shown to outperform traditional statistical methods in predictive ability (Elith et al., 2006). This review paper provides details of some of these novel methods for mapping disease distribution, highlights their advantages and limitations, and identifies studies which have used the methods to model various aspects of disease distribution. Copyright © 2011. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Monteiller, Vadim; Chevrot, Sébastien; Komatitsch, Dimitri; Wang, Yi
2015-08-01
We present a method for high-resolution imaging of lithospheric structures based on full waveform inversion of teleseismic waveforms. We model the propagation of seismic waves using our recently developed direct solution method/spectral-element method hybrid technique, which allows us to simulate the propagation of short-period teleseismic waves through a regional 3-D model. We implement an iterative quasi-Newton method based upon the L-BFGS algorithm, where the gradient of the misfit function is computed using the adjoint-state method. Compared to gradient or conjugate-gradient methods, the L-BFGS algorithm has a much faster convergence rate. We illustrate the potential of this method on a synthetic test case that consists of a crustal model with a crustal discontinuity at 25 km depth and a sharp Moho jump. This model contains short- and long-wavelength heterogeneities along the lateral and vertical directions. The iterative inversion starts from a smooth 1-D model derived from the IASP91 reference Earth model. We invert both radial and vertical component waveforms, starting from long-period signals filtered at 10 s and gradually decreasing the cut-off period down to 1.25 s. This multiscale algorithm quickly converges towards a model that is very close to the true model, in contrast to inversions involving short-period waveforms only, which always get trapped into a local minimum of the cost function.
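The L-BFGS update that gives the algorithm its fast convergence at low memory cost is the classic two-loop recursion. The sketch below is a generic implementation applied to a toy quadratic misfit, not the seismic inversion itself; the memory size m and the Armijo line-search constants are arbitrary choices:

```python
import numpy as np

def lbfgs_direction(g, s_hist, y_hist):
    """Two-loop recursion: apply the inverse-Hessian approximation built
    from the m most recent (s, y) pairs to the gradient g."""
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_hist), reversed(y_hist)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if s_hist:  # initial scaling H0 = gamma * I
        q *= (s_hist[-1] @ y_hist[-1]) / (y_hist[-1] @ y_hist[-1])
    for (s, y), a in zip(zip(s_hist, y_hist), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return -q

def minimize_lbfgs(f_grad, x0, iters=100, m=5):
    """Quasi-Newton minimisation with limited memory and a crude
    backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    fx, g = f_grad(x)
    s_hist, y_hist = [], []
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-9:
            break
        d = lbfgs_direction(g, s_hist, y_hist)
        t = 1.0
        while f_grad(x + t * d)[0] > fx + 1e-4 * t * (g @ d) and t > 1e-10:
            t *= 0.5
        x_new = x + t * d
        f_new, g_new = f_grad(x_new)
        s_hist.append(x_new - x)
        y_hist.append(g_new - g)
        if len(s_hist) > m:
            s_hist.pop(0)
            y_hist.pop(0)
        x, fx, g = x_new, f_new, g_new
    return x

# Toy misfit: f(x) = 0.5 (x - x_true)^T A (x - x_true), gradient A (x - x_true).
A = np.array([[3.0, 0.4], [0.4, 1.0]])
x_true = np.array([1.0, -2.0])

def f_grad(x):
    r = x - x_true
    return 0.5 * r @ A @ r, A @ r

x_est = minimize_lbfgs(f_grad, np.zeros(2))
```

In the full waveform inversion of the abstract, `f_grad` would instead return the data misfit and its gradient computed via the adjoint-state method.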
Model uncertainties do not affect observed patterns of species richness in the Amazon.
Sales, Lilian Patrícia; Neves, Olívia Viana; De Marco, Paulo; Loyola, Rafael
2017-01-01
Climate change is arguably a major threat to biodiversity conservation and there are several methods to assess its impacts on species potential distribution. Yet the extent to which different approaches on species distribution modeling affect species richness patterns at biogeographical scale is however unaddressed in literature. In this paper, we verified if the expected responses to climate change in biogeographical scale-patterns of species richness and species vulnerability to climate change-are affected by the inputs used to model and project species distribution. We modeled the distribution of 288 vertebrate species (amphibians, birds and mammals), all endemic to the Amazon basin, using different combinations of the following inputs known to affect the outcome of species distribution models (SDMs): 1) biological data type, 2) modeling methods, 3) greenhouse gas emission scenarios and 4) climate forecasts. We calculated uncertainty with a hierarchical ANOVA in which those different inputs were considered factors. The greatest source of variation was the modeling method. Model performance interacted with data type and modeling method. Absolute values of variation on suitable climate area were not equal among predictions, but some biological patterns were still consistent. All models predicted losses on the area that is climatically suitable for species, especially for amphibians and primates. All models also indicated a current East-western gradient on endemic species richness, from the Andes foot downstream the Amazon river. Again, all models predicted future movements of species upwards the Andes mountains and overall species richness losses. From a methodological perspective, our work highlights that SDMs are a useful tool for assessing impacts of climate change on biodiversity. Uncertainty exists but biological patterns are still evident at large spatial scales. 
As modeling methods are the greatest source of variation, choosing the appropriate statistics according to the study objective is also essential for estimating the impacts of climate change on species distribution. Yet from a conservation perspective, we show that Amazon endemic fauna is potentially vulnerable to climate change, due to expected reductions on suitable climate area. Climate-driven faunal movements are predicted towards the Andes mountains, which might work as climate refugia for migrating species.
Hamiltonian and potentials in derivative pricing models: exact results and lattice simulations
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Corianò, Claudio; Srikant, Marakani
2004-03-01
The pricing of options, warrants and other derivative securities is one of the great successes of financial economics. These financial products can be modeled and simulated using quantum-mechanical methods based on a Hamiltonian formulation. We show here some applications of these methods for various potentials, which we have simulated via lattice Langevin and Monte Carlo algorithms, to the pricing of options. We focus on barrier and path-dependent options, showing in some detail the computational strategies involved.
Logic-Based Models for the Analysis of Cell Signaling Networks†
2010-01-01
Computational models are increasingly used to analyze the operation of complex biochemical networks, including those involved in cell signaling networks. Here we review recent advances in applying logic-based modeling to mammalian cell biology. Logic-based models represent biomolecular networks in a simple and intuitive manner without describing the detailed biochemistry of each interaction. A brief description of several logic-based modeling methods is followed by six case studies that demonstrate biological questions recently addressed using logic-based models and point to potential advances in model formalisms and training procedures that promise to enhance the utility of logic-based methods for studying the relationship between environmental inputs and phenotypic or signaling state outputs of complex signaling networks. PMID:20225868
2013-01-01
Animal models of disease states are valuable tools for developing new treatments and investigating underlying mechanisms. They should mimic the symptoms and pathology of the disease and importantly be predictive of effective treatments. Fibromyalgia is characterized by chronic widespread pain with associated co-morbid symptoms that include fatigue, depression, anxiety and sleep dysfunction. In this review, we present different animal models that mimic the signs and symptoms of fibromyalgia. These models are induced by a wide variety of methods that include repeated muscle insults, depletion of biogenic amines, and stress. All potential models produce widespread and long-lasting hyperalgesia without overt peripheral tissue damage and thus mimic the clinical presentation of fibromyalgia. We describe the methods for induction of the model, pathophysiological mechanisms for each model, and treatment profiles. PMID:24314231
Automated Decomposition of Model-based Learning Problems
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Millar, Bill
1996-01-01
A new generation of sensor-rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large-scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.
Bergues Pupo, Ana E; Reyes, Juan Bory; Bergues Cabrales, Luis E; Bergues Cabrales, Jesús M
2011-09-24
Electrotherapy is a relatively well-established and efficient method of tumor treatment. In this paper we focus on analytical and numerical calculations of the potential and electric field distributions inside a tumor tissue in a two-dimensional model (2D-model) generated by means of electrode arrays with shapes of different conic sections (ellipse, parabola and hyperbola). Analytical calculations of the potential and electric field distributions based on 2D-models for different electrode arrays are performed by solving the Laplace equation, while the numerical solution is obtained by means of the finite element method in two dimensions. Both analytical and numerical solutions reveal significant differences between the electric field distributions generated by electrode arrays shaped as circles and as different conic sections (elliptic, parabolic and hyperbolic). Electrode arrays with circular, elliptical and hyperbolic shapes have the advantage of concentrating the electric field lines in the tumor. The mathematical approach presented in this study provides a useful tool for the design of electrode arrays with different shapes of conic sections by means of the unifying principle. At the same time, we verify the good correspondence between the analytical and numerical solutions for the potential and electric field distributions generated by the electrode arrays with different conic sections.
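For readers without a finite element package, the same kind of Laplace boundary-value problem can be approximated by a simple finite-difference (Jacobi) relaxation. The electrode geometry below (two point electrodes in a grounded square) is a hypothetical stand-in for the conic-section arrays of the paper:

```python
import numpy as np

def solve_laplace(phi0, fixed, iters=4000):
    """Jacobi relaxation of the 2-D Laplace equation. `fixed` is a boolean
    mask of Dirichlet nodes (electrodes and outer boundary) whose values
    in `phi0` are held constant throughout the iteration."""
    phi = phi0.copy()
    for _ in range(iters):
        new = phi.copy()
        new[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                                  phi[1:-1, :-2] + phi[1:-1, 2:])
        new[fixed] = phi0[fixed]
        phi = new
    return phi

# Hypothetical geometry: electrodes at +1 V and -1 V inside a grounded
# 41 x 41 box.
n = 41
phi0 = np.zeros((n, n))
fixed = np.zeros((n, n), dtype=bool)
fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True  # grounded walls
fixed[20, 10] = fixed[20, 30] = True
phi0[20, 10], phi0[20, 30] = 1.0, -1.0
phi = solve_laplace(phi0, fixed)
Ey, Ex = np.gradient(-phi)  # electric field E = -grad(phi)
```

Swapping the two point electrodes for nodes placed along an ellipse, parabola or hyperbola reproduces, in a crude discretised way, the array shapes compared in the study.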
A case study for the integration of predictive mineral potential maps
NASA Astrophysics Data System (ADS)
Lee, Saro; Oh, Hyun-Joo; Heo, Chul-Ho; Park, Inhye
2014-09-01
This study aims to produce mineral potential maps using various models and to verify their accuracy for epithermal gold-silver (Au-Ag) deposits in a Geographic Information System (GIS) environment, assuming that all deposits share a common genesis. The maps of potential Au and Ag deposits were produced from geological data in the Taebaeksan mineralized area, Korea. The methodological framework consists of three main steps: 1) identification of spatial relationships, 2) quantification of such relationships, and 3) combination of multiple quantified relationships. A spatial database containing 46 Au-Ag deposits was constructed using GIS. The spatial associations between training deposits and 26 related factors were identified and quantified by probabilistic and statistical modelling. The mineral potential maps were generated by integrating all factors using the overlay method and were recombined afterwards using the likelihood ratio model. They were verified by comparison with test mineral deposit locations. The verification revealed that the combined mineral potential map had the greatest accuracy (83.97%), whereas the accuracy was 72.24%, 65.85%, 72.23% and 71.02% for the likelihood ratio, weight of evidence, logistic regression and artificial neural network models, respectively. The mineral potential map can provide useful information for mineral resource development.
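The likelihood (frequency) ratio step can be sketched as follows: for one evidential factor, the ratio of the deposit rate inside each factor class to the deposit rate over the whole study area. The toy factor classes and deposit locations below are invented for illustration and are not the Taebaeksan data.

```python
import numpy as np

def likelihood_ratio(factor_class, deposit_mask):
    """Per-class likelihood ratio: P(deposit | class) / P(deposit).
    factor_class: integer class id per grid cell.
    deposit_mask: boolean per grid cell (True where a deposit occurs)."""
    overall = deposit_mask.mean()
    return {c: deposit_mask[factor_class == c].mean() / overall
            for c in np.unique(factor_class)}

# synthetic study area: 10,000 cells, 3 factor classes, deposits favour class 2
rng = np.random.default_rng(0)
cls = rng.integers(0, 3, size=10_000)
dep = (cls == 2) & (rng.random(10_000) < 0.01)
lr = likelihood_ratio(cls, dep)
```

Values above 1 flag classes spatially associated with deposits; in the paper these per-factor ratios are then combined across all 26 factors to build the potential map.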
Retrieving hydrological connectivity from empirical causality in karst systems
NASA Astrophysics Data System (ADS)
Delforge, Damien; Vanclooster, Marnik; Van Camp, Michel; Poulain, Amaël; Watlet, Arnaud; Hallet, Vincent; Kaufmann, Olivier; Francis, Olivier
2017-04-01
Because of their complexity, karst systems exhibit nonlinear dynamics. Moreover, if one attempts to model a karst, the hidden behavior complicates the choice of the most suitable model. Therefore, both intense investigation methods and nonlinear data analysis are needed to reveal the underlying hydrological connectivity as a prior for a consistent physically based modelling approach. Convergent Cross Mapping (CCM), a recent method, promises to identify causal relationships between time series belonging to the same dynamical systems. The method is based on phase space reconstruction and is suitable for nonlinear dynamics. As an empirical causation detection method, it could be used to highlight the hidden complexity of a karst system by revealing its inner hydrological and dynamical connectivity. Hence, if one can link causal relationships to physical processes, the method should show great potential to support physically based model structure selection. We present the results of numerical experiments using karst model blocks combined in different structures to generate time series from actual rainfall series. CCM is applied between the time series to investigate if the empirical causation detection is consistent with the hydrological connectivity suggested by the karst model.
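A minimal CCM sketch under the usual assumptions (delay embedding of the candidate responder, simplex-style nearest-neighbour cross mapping) might look like this; the coupled logistic maps stand in for karst time series, and the embedding dimension, coupling strength, and series length are all illustrative choices.

```python
import numpy as np

def embed(x, E=3, tau=1):
    """Time-delay embedding: rows are E-dimensional state vectors."""
    n = len(x) - (E - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(E)])

def ccm_skill(source, target, E=3, tau=1):
    """Cross-map estimate of `source` from the shadow manifold of `target`;
    returns the correlation between estimates and the true source."""
    M = embed(target, E, tau)
    s = source[(E - 1) * tau:]
    preds = np.empty(len(s))
    for i in range(len(s)):
        d = np.linalg.norm(M - M[i], axis=1)
        d[i] = np.inf                         # exclude self-match
        nn = np.argsort(d)[: E + 1]           # E+1 nearest neighbours
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
        preds[i] = np.dot(w / w.sum(), s[nn])
    return np.corrcoef(preds, s)[0, 1]

# x unidirectionally drives y (coupled logistic maps)
x, y = np.empty(1000), np.empty(1000)
x[0], y[0] = 0.4, 0.2
for t in range(999):
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.2 * x[t])
skill = ccm_skill(x, y)   # reconstruct the driver x from y's manifold
```

High cross-map skill when reconstructing the driver from the responder's shadow manifold is the signature of causation that CCM exploits; in the karst setting this points to hydrological connectivity between the corresponding compartments.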
Risk Classification with an Adaptive Naive Bayes Kernel Machine Model.
Minnier, Jessica; Yuan, Ming; Liu, Jun S; Cai, Tianxi
2015-04-22
Genetic studies of complex traits have uncovered only a small number of risk markers explaining a small fraction of heritability and adding little improvement to disease risk prediction. Standard single marker methods may lack power in selecting informative markers or estimating effects. Most existing methods also typically do not account for non-linearity. Identifying markers with weak signals and estimating their joint effects among many non-informative markers remains challenging. One potential approach is to group markers based on biological knowledge such as gene structure. If markers in a group tend to have similar effects, proper usage of the group structure could improve power and efficiency in estimation. We propose a two-stage method relating markers to disease risk by taking advantage of known gene-set structures. Imposing a naive Bayes kernel machine (KM) model, we estimate gene-set specific risk models that relate each gene-set to the outcome in stage I. The KM framework efficiently models potentially non-linear effects of predictors without requiring explicit specification of functional forms. In stage II, we aggregate information across gene-sets via a regularization procedure. Estimation and computational efficiency are further improved with kernel principal component analysis. Asymptotic results for model estimation and gene set selection are derived and numerical studies suggest that the proposed procedure could outperform existing procedures for constructing genetic risk models.
Lattice Boltzmann methods for global linear instability analysis
NASA Astrophysics Data System (ADS)
Pérez, José Miguel; Aguilar, Alfonso; Theofilis, Vassilis
2017-12-01
Modal global linear instability analysis is performed using, for the first time ever, the lattice Boltzmann method (LBM) to analyze incompressible flows with two and three inhomogeneous spatial directions. Four linearization models have been implemented in order to recover the linearized Navier-Stokes equations in the incompressible limit. Two of those models employ the single relaxation time (SRT) and have been proposed previously in the literature as linearizations of the collision operator of the lattice Boltzmann equation. Two additional models are derived herein for the first time by linearizing the local equilibrium probability distribution function. Instability analysis results are obtained in three benchmark problems, two in closed geometries and one in open flow, namely the square and cubic lid-driven cavity flow and flow in the wake of the circular cylinder. Comparisons with results delivered by classic spectral element methods verify the accuracy of the proposed new methodologies and point out potential limitations particular to the LBM approach. The known issue of numerical instabilities appearing when the SRT model is used in direct numerical simulations employing the LBM is shown to be reflected in a spurious global eigenmode when the SRT model is used in the instability analysis. Although this mode is absent in the multiple relaxation times model, other spurious instabilities can also arise and are documented herein. Areas of potential improvement to make the proposed methodology competitive with established approaches for global instability analysis are discussed.
Fire spread estimation on forest wildfire using ensemble kalman filter
NASA Astrophysics Data System (ADS)
Syarifah, Wardatus; Apriliani, Erna
2018-04-01
Wildfire is one of the most frequent disasters in the world; forest wildfires, for example, reduce forest populations. Forest wildfires, whether naturally occurring or prescribed, pose potential risks to ecosystems and human settlements. These risks can be managed by monitoring the weather, prescribing fires to limit available fuel, and creating firebreaks. With computer simulations we can predict and explore how fires may spread. A model of fire spread on forest wildfire was established to determine the fire properties; the model is based on a reaction-diffusion equation. There are many methods to estimate the spread of fire. The Ensemble Kalman Filter is a modification of the Kalman Filter algorithm that can be used to estimate both linear and nonlinear system models. In this research, the Ensemble Kalman Filter (EnKF) method is applied to estimate the spread of fire in a forest wildfire. Before the EnKF method is applied, the fire spread model is discretized using the finite difference method. Finally, the analysis is illustrated by numerical simulation using software. The simulation results show that the EnKF estimate approaches the system model more closely as the ensemble size increases, while the error covariance of the estimate becomes smaller.
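The EnKF analysis step described above can be sketched as follows; the observation operator, noise levels, and toy three-component state are illustrative assumptions, not the paper's discretized fire-spread model.

```python
import numpy as np

def enkf_update(ensemble, y_obs, H, r_var, rng):
    """Stochastic EnKF analysis step.
    ensemble: (n_state, n_ens); y_obs: (n_obs,); H: (n_obs, n_state)."""
    n_obs, n_ens = y_obs.size, ensemble.shape[1]
    A = ensemble - ensemble.mean(axis=1, keepdims=True)   # anomalies
    P = A @ A.T / (n_ens - 1)                             # sample covariance
    R = r_var * np.eye(n_obs)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # Kalman gain
    # perturbed observations, one per ensemble member
    Y = y_obs[:, None] + rng.normal(0.0, np.sqrt(r_var), (n_obs, n_ens))
    return ensemble + K @ (Y - H @ ensemble)

rng = np.random.default_rng(1)
truth = np.array([5.0, 2.0, 1.0])            # stand-in for a fire-state vector
H = np.array([[1.0, 0.0, 0.0]])              # observe the first component only
ens = truth[:, None] + rng.normal(0.0, 2.0, (3, 50))   # prior ensemble
analysis = enkf_update(ens, np.array([truth[0]]), H, 0.01, rng)
```

In the paper's setting the state vector holds the discretized fire-front field, but the update algebra is identical.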
Improved accuracy for finite element structural analysis via a new integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
Three-dimensional electrical impedance tomography: a topology optimization approach.
Mello, Luís Augusto Motta; de Lima, Cícero Ribeiro; Amato, Marcelo Britto Passos; Lima, Raul Gonzalez; Silva, Emílio Carlos Nelli
2008-02-01
Electrical impedance tomography is a technique to estimate the impedance distribution within a domain, based on measurements on its boundary. In other words, given the mathematical model of the domain, its geometry and boundary conditions, a nonlinear inverse problem of estimating the electric impedance distribution can be solved. Several impedance estimation algorithms have been proposed to solve this problem. In this paper, we present a three-dimensional algorithm, based on the topology optimization method, as an alternative. A sequence of linear programming problems, allowing for constraints, is solved utilizing this method. In each iteration, the finite element method provides the electric potential field within the model of the domain. An electrode model is also proposed (thus, increasing the accuracy of the finite element results). The algorithm is tested using numerically simulated data and also experimental data, and absolute resistivity values are obtained. These results, corresponding to phantoms with two different conductive materials, exhibit relatively well-defined boundaries between them, and show that this is a practical and potentially useful technique to be applied to monitor lung aeration, including the possibility of imaging a pneumothorax.
Simplified modelling and analysis of a rotating Euler-Bernoulli beam with a single cracked edge
NASA Astrophysics Data System (ADS)
Yashar, Ahmed; Ferguson, Neil; Ghandchi-Tehrani, Maryam
2018-04-01
The natural frequencies and mode shapes of the flapwise and chordwise vibrations of a rotating cracked Euler-Bernoulli beam are investigated using a simplified method. This approach is based on obtaining the lateral deflection of the cracked rotating beam by subtracting the potential energy of a rotating massless spring, which represents the crack, from the total potential energy of the intact rotating beam. With this new method, it is assumed that the admissible function which satisfies the geometric boundary conditions of an intact beam is valid even in the presence of a crack. Furthermore, the centrifugal stiffness due to rotation is considered as an additional stiffness, which is obtained from the rotational speed and the geometry of the beam. Finally, the Rayleigh-Ritz method is utilised to solve the eigenvalue problem. The validity of the results is confirmed at different rotational speeds, crack depth and location by comparison with solid and beam finite element model simulations. Furthermore, the mode shapes are compared with those obtained from finite element models using a Modal Assurance Criterion (MAC).
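The Rayleigh-Ritz step can be illustrated on the intact, non-rotating limit of the model (a clamped-free Euler-Bernoulli beam with EI = rho*A = L = 1). The polynomial admissible functions below are a common textbook choice satisfying the clamped geometric boundary conditions; in the paper's method, the crack spring and the centrifugal stiffening would enter as additional terms in the stiffness matrix.

```python
import numpy as np
from scipy.linalg import eigh

# Admissible functions phi_p(x) = x^p, p = 2, 3, 4 (clamped at x = 0).
powers = [2, 3, 4]
n = len(powers)
K = np.empty((n, n))    # bending stiffness: int_0^1 phi_p'' phi_q'' dx
M = np.empty((n, n))    # mass matrix:       int_0^1 phi_p  phi_q  dx
for a, p in enumerate(powers):
    for b, q in enumerate(powers):
        K[a, b] = p * (p - 1) * q * (q - 1) / (p + q - 3)
        M[a, b] = 1.0 / (p + q + 1)

# Generalized eigenvalue problem K v = omega^2 M v (ascending eigenvalues)
omega2 = eigh(K, M, eigvals_only=True)
w1 = np.sqrt(omega2[0])    # first nondimensional natural frequency
```

The first Ritz frequency converges from above toward the exact clamped-free value of about 3.516, which is the usual sanity check before adding the crack and rotation terms.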
NASA Technical Reports Server (NTRS)
Lawson, John W.; Daw, Murray S.; Squire, Thomas H.; Bauschlicher, Charles W.
2012-01-01
We are developing a multiscale framework in computational modeling for the ultra high temperature ceramics (UHTC) ZrB2 and HfB2. These materials are characterized by high melting point, good strength, and reasonable oxidation resistance. They are candidate materials for a number of applications in extreme environments including sharp leading edges of hypersonic aircraft. In particular, we used a combination of ab initio methods, atomistic simulations and continuum computations to obtain insights into fundamental properties of these materials. Ab initio methods were used to compute basic structural, mechanical and thermal properties. From these results, a database was constructed to fit a Tersoff style interatomic potential suitable for atomistic simulations. These potentials were used to evaluate the lattice thermal conductivity of single crystals and the thermal resistance of simple grain boundaries. Finite element method (FEM) computations using atomistic results as inputs were performed with meshes constructed on SEM images thereby modeling the realistic microstructure. These continuum computations showed the reduction in thermal conductivity due to the grain boundary network.
Computational State Space Models for Activity and Intention Recognition. A Feasibility Study
Krüger, Frank; Nyolt, Martin; Yordanova, Kristina; Hein, Albert; Kirste, Thomas
2014-01-01
Background Computational state space models (CSSMs) enable the knowledge-based construction of Bayesian filters for recognizing intentions and reconstructing activities of human protagonists in application domains such as smart environments, assisted living, or security. Computational (i.e., algorithmic) representations allow the construction of increasingly complex human behaviour models. However, the symbolic models used in CSSMs potentially suffer from combinatorial explosion, rendering inference intractable outside of the limited experimental settings investigated in present research. The objective of this study was to obtain data on the feasibility of CSSM-based inference in domains of realistic complexity. Methods A typical instrumental activity of daily living was used as a trial scenario. As primary sensor modality, wearable inertial measurement units were employed. The results achievable by CSSM methods were evaluated by comparison with those obtained from established training-based methods (hidden Markov models, HMMs) using Wilcoxon signed rank tests. The influence of modeling factors on CSSM performance was analyzed via repeated measures analysis of variance. Results The symbolic domain model was found to have more than states, exceeding the complexity of models considered in previous research by at least three orders of magnitude. Nevertheless, if factors and procedures governing the inference process were suitably chosen, CSSMs outperformed HMMs. Specifically, inference methods used in previous studies (particle filters) were found to perform substantially inferior in comparison to a marginal filtering procedure. Conclusions Our results suggest that the combinatorial explosion caused by rich CSSM models does not inevitably lead to intractable inference or inferior performance.
This means that the potential benefits of CSSM models (knowledge-based model construction, model reusability, reduced need for training data) are available without performance penalty. However, our results also show that research on CSSMs needs to consider sufficiently complex domains in order to understand the effects of design decisions such as choice of heuristics or inference procedure on performance. PMID:25372138
Improved Model Fitting for the Empirical Green's Function Approach Using Hierarchical Models
NASA Astrophysics Data System (ADS)
Van Houtte, Chris; Denolle, Marine
2018-04-01
Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study examines a variety of model-fitting methods and shows that the choice of method can explain some of the discrepancy. The preferred method is Bayesian hierarchical modeling, which can reduce bias, better quantify uncertainties, and allow additional effects to be resolved. Two case study earthquakes are examined, the 2016 MW7.1 Kumamoto, Japan earthquake and a MW5.3 aftershock of the 2016 MW7.8 Kaikōura earthquake. By using hierarchical models, the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be retrieved without overfitting the data. Other methods commonly used to calculate corner frequencies may give substantial biases. In particular, if fc was calculated for the Kumamoto earthquake using an ω-square model, the obtained fc could be twice as large as a realistic value.
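A least-squares ω-square (Brune-type) spectral fit of the kind the authors argue can bias corner-frequency estimates might look like the sketch below; the synthetic spectrum, noise level, and corner frequency are invented for illustration and are not the Kumamoto or Kaikōura data.

```python
import numpy as np
from scipy.optimize import curve_fit

def omega_square(f, omega0, fc):
    """Omega-square source spectrum: low-frequency plateau omega0 and
    corner frequency fc, with a fixed falloff rate n = 2."""
    return omega0 / (1.0 + (f / fc) ** 2)

# synthetic displacement spectrum with multiplicative lognormal noise
f = np.logspace(-1, 1.5, 200)                 # 0.1 to ~31.6 Hz
true_omega0, true_fc = 1.0e3, 0.5
rng = np.random.default_rng(2)
spec = omega_square(f, true_omega0, true_fc) * rng.lognormal(0.0, 0.1, f.size)

popt, _ = curve_fit(omega_square, f, spec, p0=(spec[0], 1.0))
omega0_hat, fc_hat = popt
```

The hierarchical Bayesian approach the paper prefers replaces this single pointwise fit with partially pooled estimates across stations, which is what lets fc and the falloff rate n vary over the focal sphere without overfitting.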
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peloquin, R.A.; McKenzie, D.H.
1994-10-01
A compartmental model has been implemented on a microcomputer as an aid in the analysis of alternative solutions to a problem. The model, entitled Smolt Survival Simulator, simulates the survival of juvenile salmon during their downstream migration and passage of hydroelectric dams in the Columbia River. The model is designed to function in a workshop environment where resource managers and fisheries biologists can study alternative measures that may potentially increase juvenile anadromous fish survival during downriver migration. The potential application of the model has placed several requirements on the implementing software. It must be available for use in workshop settings. The software must be easy to use with minimal computer knowledge. Scenarios must be created and executed quickly and efficiently. Results must be immediately available. Software design emphasis was placed on the user interface because of these requirements. The discussion focuses on methods used in the development of the SSS software user interface. These methods should reduce user stress and allow thorough and easy parameter modification.
Animal models of post-ischemic forced use rehabilitation: methods, considerations, and limitations
2013-01-01
Many survivors of stroke experience arm impairments, which can severely impact their quality of life. Forcing use of the impaired arm appears to improve functional recovery in post-stroke hemiplegic patients, however the mechanisms underlying improved recovery remain unclear. Animal models of post-stroke rehabilitation could prove critical to investigating such mechanisms, however modeling forced use in animals has proven challenging. Potential problems associated with reported experimental models include variability between stroke methods, rehabilitation paradigms, and reported outcome measures. Herein, we provide an overview of commonly used stroke models, including advantages and disadvantages of each with respect to studying rehabilitation. We then review various forced use rehabilitation paradigms, and highlight potential difficulties and translational problems. Lastly, we discuss the variety of functional outcome measures described by experimental researchers. To conclude, we outline ongoing challenges faced by researchers, and the importance of translational communication. Many stroke patients rely critically on rehabilitation of post-stroke impairments, and continued effort toward progression of rehabilitative techniques is warranted to ensure best possible treatment of the devastating effects of stroke. PMID:23343500
An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1994-01-01
The primary accomplishments of the project are as follows: (1) Using the transonic small perturbation equation as a flowfield model, the project demonstrated that the quasi-analytical method could be used to obtain aerodynamic sensitivity coefficients for airfoils at subsonic, transonic, and supersonic conditions for design variables such as Mach number, airfoil thickness, maximum camber, angle of attack, and location of maximum camber. It was established that the quasi-analytical approach was an accurate method for obtaining aerodynamic sensitivity derivatives for airfoils at transonic conditions and usually more efficient than the finite difference approach. (2) The usage of symbolic manipulation software to determine the appropriate expressions and computer coding associated with the quasi-analytical method for sensitivity derivatives was investigated. Using the three dimensional fully conservative full potential flowfield model, it was determined that symbolic manipulation along with a chain rule approach was extremely useful in developing a combined flowfield and quasi-analytical sensitivity derivative code capable of considering a large number of realistic design variables. (3) Using the three dimensional fully conservative full potential flowfield model, the quasi-analytical method was applied to swept wings (i.e. three dimensional) at transonic flow conditions. (4) The incremental iterative technique has been applied to the three dimensional transonic nonlinear small perturbation flowfield formulation, an equivalent plate deflection model, and the associated aerodynamic and structural discipline sensitivity equations; and coupled aeroelastic results for an aspect ratio three wing in transonic flow have been obtained.
The Effects of Hydrogen on the Potential-Energy Surface of Amorphous Silicon
NASA Astrophysics Data System (ADS)
Joly, Jean-Francois; Mousseau, Normand
2012-02-01
Hydrogenated amorphous silicon (a-Si:H) is an important semiconducting material used in many applications from solar cells to transistors. In 2010, Houssem et al. [1], using the open-ended saddle-point search method, ART nouveau, studied the characteristics of the potential energy landscape of a-Si as a function of relaxation. Here, we extend this study and follow the impact of hydrogen doping on the same a-Si models as a function of doping level. Hydrogen atoms are first attached to dangling bonds, then are positioned to relieve strained bonds of fivefold coordinated silicon atoms. Once these sites are saturated, further doping is achieved with a Monte-Carlo bond switching method that preserves coordination and reduces stress [2]. Bonded interactions are described with a modified Stillinger-Weber potential and non-bonded Si-H and H-H interactions with an adapted Slater-Buckingham potential. Large series of ART nouveau searches are initiated on each model, resulting in an extended catalogue of events that characterize the evolution of the potential energy surface as a function of H-doping. [1] Houssem et al., Phys. Rev. Lett. 105, 045503 (2010). [2] Mousseau et al., Phys. Rev. B 41, 3702 (1990).
A full potential inverse method based on a density linearization scheme for wing design
NASA Technical Reports Server (NTRS)
Shankar, V.
1982-01-01
A mixed analysis inverse procedure based on the full potential equation in conservation form was developed to recontour a given base wing to produce a prescribed pressure distribution, using a density linearization scheme in applying the pressure boundary condition in terms of the velocity potential. The FLO30 finite volume analysis code was modified to include the inverse option. The new surface shape information, associated with the modified pressure boundary condition, is calculated at a constant span station based on a mass flux integration. The inverse method is shown to recover the original shape when the analysis pressure is not altered. Inverse calculations for weakening of a strong shock system and for a laminar flow control (LFC) pressure distribution are presented. Two methods for a trailing edge closure model are proposed for further study.
Ensemble modelling and structured decision-making to support Emergency Disease Management.
Webb, Colleen T; Ferrari, Matthew; Lindström, Tom; Carpenter, Tim; Dürr, Salome; Garner, Graeme; Jewell, Chris; Stevenson, Mark; Ward, Michael P; Werkman, Marleen; Backer, Jantien; Tildesley, Michael
2017-03-01
Epidemiological models in animal health are commonly used as decision-support tools to understand the impact of various control actions on infection spread in susceptible populations. Different models contain different assumptions and parameterizations, and policy decisions might be improved by considering outputs from multiple models. However, a transparent decision-support framework to integrate outputs from multiple models is nascent in epidemiology. Ensemble modelling and structured decision-making integrate the outputs of multiple models, compare policy actions and support policy decision-making. We briefly review the epidemiological application of ensemble modelling and structured decision-making and illustrate the potential of these methods using foot and mouth disease (FMD) models. In case study one, we apply structured decision-making to compare five possible control actions across three FMD models and show which control actions and outbreak costs are robustly supported and which are impacted by model uncertainty. In case study two, we develop a methodology for weighting the outputs of different models and show how different weighting schemes may impact the choice of control action. Using these case studies, we broadly illustrate the potential of ensemble modelling and structured decision-making in epidemiology to provide better information for decision-making and outline necessary development of these methods for their further application. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
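One simple weighting scheme of the sort explored in case study two is inverse-error weighting of model outputs; the model names, predicted outbreak costs, and historical errors below are illustrative placeholders, not the paper's FMD models.

```python
# Combine predicted outbreak costs from several models, weighting each
# model by the inverse of its historical prediction error (weights are
# normalized to sum to one). All numbers are hypothetical.
pred = {"modelA": 120.0, "modelB": 150.0, "modelC": 90.0}   # predicted cost
err = {"modelA": 10.0, "modelB": 30.0, "modelC": 20.0}      # past error

w = {m: 1.0 / e for m, e in err.items()}
total = sum(w.values())
ensemble_cost = sum(w[m] / total * pred[m] for m in pred)
```

As the abstract notes, the choice of weighting scheme (equal weights, inverse error, performance-based scores) can itself change which control action the ensemble favours.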
Optimal Experimental Design for Model Discrimination
Myung, Jay I.; Pitt, Mark A.
2009-01-01
Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983
High-Throughput Thermodynamic Modeling and Uncertainty Quantification for ICME
NASA Astrophysics Data System (ADS)
Otis, Richard A.; Liu, Zi-Kui
2017-05-01
One foundational component of the integrated computational materials engineering (ICME) and Materials Genome Initiative is the computational thermodynamics based on the calculation of phase diagrams (CALPHAD) method. The CALPHAD method pioneered by Kaufman has enabled the development of thermodynamic, atomic mobility, and molar volume databases of individual phases in the full space of temperature, composition, and sometimes pressure for technologically important multicomponent engineering materials, along with sophisticated computational tools for using the databases. In this article, our recent efforts will be presented in terms of developing new computational tools for high-throughput modeling and uncertainty quantification based on high-throughput, first-principles calculations and the CALPHAD method along with their potential propagations to downstream ICME modeling and simulations.
Reconstruction of electrocardiogram using ionic current models for heart muscles.
Yamanaka, A; Okazaki, K; Urushibara, S; Kawato, M; Suzuki, R
1986-11-01
A digital computer model is presented for the simulation of the electrocardiogram during ventricular activation and repolarization (QRS-T waves). Part of the ventricular septum and the left ventricular free wall of the heart are represented by a two-dimensional array of 730 homogeneous functional units. Ionic current models are used to determine the spatial distribution of the electrical activities of these units at each instant of time during a simulated cardiac cycle. In order to reconstruct the electrocardiogram, the model is expanded three-dimensionally with an equipotential assumption along the third axis, and the surface potentials are then calculated using the solid angle method. Our digital computer model can be used to improve the understanding of the relationship between body surface potentials and intracellular electrical events.
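The solid angle step can be sketched as follows, using the Van Oosterom-Strackee formula for the solid angle of a plane triangle subtended at a field point; the triangle geometry and the transmembrane potential value are illustrative, not the paper's 730-unit model.

```python
import numpy as np

def solid_angle(tri, p):
    """Solid angle of triangle `tri` (three 3-vectors) seen from point `p`,
    via the Van Oosterom-Strackee formula."""
    r1, r2, r3 = (v - p for v in tri)
    n1, n2, n3 = (np.linalg.norm(r) for r in (r1, r2, r3))
    num = np.dot(r1, np.cross(r2, r3))
    den = (n1 * n2 * n3 + np.dot(r1, r2) * n3
           + np.dot(r1, r3) * n2 + np.dot(r2, r3) * n1)
    return 2.0 * np.arctan2(num, den)

# triangle with vertices on the unit axes, seen from the origin:
# it subtends one octant of the sphere, i.e. Omega = pi/2
tri = [np.array([1.0, 0.0, 0.0]),
       np.array([0.0, 1.0, 0.0]),
       np.array([0.0, 0.0, 1.0])]
omega = solid_angle(tri, np.zeros(3))

v_m = 100.0                       # uniform double-layer strength, mV (illustrative)
V = v_m * omega / (4.0 * np.pi)   # potential at the field point
```

Summing such contributions over the triangulated activation boundary gives the body surface potential at each instant, which is how the model maps intracellular events to the ECG.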
NASA Technical Reports Server (NTRS)
Batchelor, David; Zukor, Dorothy (Technical Monitor)
2001-01-01
New semiclassical models of virtual antiparticle pairs are used to compute the pair lifetimes, and good agreement with the Heisenberg lifetimes from quantum field theory (QFT) is found. The modeling method applies to both the electromagnetic and color forces. Evaluation of the action integral of potential field fluctuation for each interaction potential yields approximately Planck's constant/2 for both electromagnetic and color fluctuations, in agreement with QFT. Thus each model is a quantized semiclassical representation for such virtual antiparticle pairs, to good approximation. When the results of the new models and QFT are combined, formulae for e and alpha(sub s)(q) are derived in terms of only Planck's constant and c.
NASA Astrophysics Data System (ADS)
Pascuet, M. I.; Castin, N.; Becquart, C. S.; Malerba, L.
2011-05-01
An atomistic kinetic Monte Carlo (AKMC) method has been applied to study the stability and mobility of copper-vacancy clusters in Fe. This information, which cannot be obtained directly from experimental measurements, is needed to parameterise models describing the nanostructure evolution under irradiation of Fe alloys (e.g. model alloys for reactor pressure vessel steels). The physical reliability of the AKMC method has been improved by employing artificial intelligence techniques for the regression of the activation energies required by the model as input. These energies are calculated allowing for the effects of local chemistry and relaxation, using an interatomic potential fitted to reproduce them as accurately as possible and the nudged-elastic-band method. The model validation was based on comparison with available ab initio calculations for verification of the used cohesive model, as well as with other models and theories.
Development of DPD coarse-grained models: From bulk to interfacial properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solano Canchaya, José G.; Dequidt, Alain, E-mail: alain.dequidt@univ-bpclermont.fr; Goujon, Florent
2016-08-07
A new Bayesian method was recently introduced for developing coarse-grain (CG) force fields for molecular dynamics. The CG models designed for dissipative particle dynamics (DPD) are optimized based on trajectory matching. Here we extend this method to improve transferability across thermodynamic conditions. We demonstrate the capability of the method by developing a CG model of n-pentane from constant-NPT atomistic simulations of bulk liquid phases, and we apply the CG-DPD model to the calculation of the surface tension of the liquid-vapor interface over a large range of temperatures. The coexisting densities, vapor pressures, and surface tensions calculated with different CG and atomistic models are compared to experiments. Depending on the database used for the development of the potentials, it is possible to build a CG model which performs very well in the reproduction of the surface tension on the orthobaric curve.
How much can we trust a geological model underlying a subsurface hydrological investigation?
NASA Astrophysics Data System (ADS)
Wellmann, Florian; de la Varga, Miguel; Schaaf, Alexander; Burs, David
2017-04-01
Geological models often provide an important basis for subsequent hydrological investigations. As these models are generally built with a limited amount of information, they can contain significant uncertainties - and it is reasonable to assume that these uncertainties can potentially influence subsequent hydrological simulations. However, the investigation of uncertainties in geological models is not straightforward - and, even though recent advances have been made in the field, there is no out-of-the-box implementation to analyze uncertainties in a standard geological modeling package. We present here results of recent developments to address this problem with an efficient implementation of a geological modeling method for complex structural models, integrated in a Bayesian inference framework. The implemented geological modeling approach is based on a full 3-D implicit interpolation that directly respects interface positions and orientation measurements, as well as the influence of faults. In combination, the approach allows us to generate ensembles of geological model realizations, constrained by additional information in the form of likelihood functions to ensure consistency with additional geological aspects (e.g. sequence continuity, topology, fault network consistency), and we demonstrate the potential of the method in an example case study. With this approach, we aim to contribute to a better understanding of the influence of geological uncertainties on subsurface hydrological investigations.
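A minimal sketch of the ensemble idea, with the full 3-D implicit interpolation replaced by plain interface depths and the likelihood reduced to a stratigraphic-ordering check (all names and numbers are hypothetical, not from the study):

```python
import random

def sample_ensemble(n_draws, prior_means, prior_sd, seed=0):
    """Draw candidate interface depths from Gaussian priors and keep only
    realizations consistent with a simple geological constraint (here:
    stratigraphic ordering, standing in for topology/sequence checks)."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        depths = [rng.gauss(m, prior_sd) for m in prior_means]
        # Likelihood-style consistency check: layers must stay in order.
        if all(a < b for a, b in zip(depths, depths[1:])):
            accepted.append(depths)
    return accepted

# Hypothetical prior interface depths (m below surface) for three horizons.
ensemble = sample_ensemble(1000, prior_means=[100.0, 150.0, 210.0], prior_sd=15.0)
```

Each accepted realization could then be passed to a hydrological simulator, so the geological uncertainty propagates into the hydrological results.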
Animal Models of Resistance Exercise and their Application to Neuroscience Research
Strickland, Justin C.; Smith, Mark A.
2016-01-01
Background Numerous studies have demonstrated that participation in regular resistance exercise (e.g., strength training) is associated with improvements in mental health, memory, and cognition. However, less is known about the neurobiological mechanisms mediating these effects. The goal of this mini-review is to describe and evaluate the available animal models of resistance exercise that may prove useful for examining CNS activity. New Method Various models have been developed to examine resistance exercise in laboratory animals. Comparison with Existing Methods Resistance exercise models vary in how the resistance manipulation is applied, either through direct stimulation of the muscle (e.g., in situ models) or through behavior maintained by operant contingencies (e.g., whole organism models). Each model presents distinct advantages and disadvantages for examining central nervous system (CNS) activity, and consideration of these attributes is essential for the future investigation of underlying neurobiological substrates. Results Potential neurobiological mechanisms mediating the effects of resistance exercise on pain, anxiety, memory, and drug use have been efficiently and effectively investigated using resistance exercise models that minimize stress and maximize the relative contribution of resistance over aerobic factors. Conclusions Whole organism resistance exercise models that (1) limit the use of potentially stressful stimuli and (2) minimize the contribution of aerobic factors will be critical for examining resistance exercise and CNS function. PMID:27498037
Saddle point localization of molecular wavefunctions.
Mellau, Georg Ch; Kyuberis, Alexandra A; Polyansky, Oleg L; Zobov, Nikolai; Field, Robert W
2016-09-15
The quantum mechanical description of isomerization is based on bound eigenstates of the molecular potential energy surface. For the near-minimum regions there is a textbook-based relationship between the potential and eigenenergies. Here we show how the saddle point region that connects the two minima is encoded in the eigenstates of the model quartic potential and in the energy levels of the [H, C, N] potential energy surface. We model the spacing of the eigenenergies with the energy dependent classical oscillation frequency decreasing to zero at the saddle point. The eigenstates with the smallest spacing are localized at the saddle point. The analysis of the HCN ↔ HNC isomerization states shows that the eigenstates with small energy spacing relative to the effective (v1, v3, ℓ) bending potentials are highly localized in the bending coordinate at the transition state. These spectroscopically detectable states represent a chemical marker of the transition state in the eigenenergy spectrum. The method developed here provides a basis for modeling characteristic patterns in the eigenenergy spectrum of bound states.
Devine, Helen; Patani, Rickie
2017-04-01
The induced pluripotent state represents a decade-old Nobel prize-winning discovery. Human-induced pluripotent stem cells (hiPSCs) are generated by the nuclear reprogramming of any somatic cell using a variety of established but evolving methods. This approach offers medical science unparalleled experimental opportunity to model an individual patient's disease "in a dish." HiPSCs permit developmentally rationalized directed differentiation into any cell type, which express donor cell mutation(s) at pathophysiological levels and thus hold considerable potential for disease modeling, drug discovery, and potentially cell-based therapies. This review will focus on the translational potential of hiPSCs in clinical neurology and the importance of integrating this approach with complementary model systems to increase the translational yield of preclinical testing for the benefit of patients. This strategy is particularly important given the expected increase in prevalence of neurodegenerative disease, which poses a major burden to global health over the coming decades.
Biomass utilization modeling on the Bitterroot National Forest
Robin P. Silverstein; Dan Loeffler; J. Greg Jones; Dave E. Calkin; Hans R. Zuuring; Martin Twer
2006-01-01
Utilization of small-sized wood (biomass) from forests as a potential source of renewable energy is an increasingly important aspect of fuels management on public lands as an alternative to traditional disposal methods (open burning). The potential for biomass utilization to enhance the economics of treating hazardous forest fuels was examined on the Bitterroot...
The potential influence of rain on airfoil performance
NASA Technical Reports Server (NTRS)
Dunham, R. Earl, Jr.
1987-01-01
The potential influence of heavy rain on airfoil performance is discussed. Experimental methods for evaluating rain effects are reviewed. Important scaling considerations for extrapolating model data are presented. It is shown that considerable additional effort, both analytical and experimental, is necessary to understand the degree of hazard associated with flight operations in rain.
NASA Astrophysics Data System (ADS)
Andrianov, A. A.; Cannata, F.; Kamenshchik, A. Yu.
2012-11-01
We show that the simple extension of the method of obtaining the general exact solution for the cosmological model with the exponential scalar-field potential to the case when the dust is present fails, and we discuss the reasons of this puzzling phenomenon.
OBJECTIVE: Fine particulate matter <2.5 μm (PM2.5) has been implicated in vasoconstriction and potentiation of hypertension in humans. We investigated the effects of short-term exposure to PM2.5 in the angiotensin II (AII) infusion model. METHODS AND RESULTS: Sprague-Dawley r...
Kernel-imbedded Gaussian processes for disease classification using microarray gene expression data
Zhao, Xin; Cheung, Leo Wang-Kit
2007-01-01
Background Designing appropriate machine learning methods for identifying genes that have a significant discriminating power for disease outcomes has become more and more important for our understanding of diseases at genomic level. Although many machine learning methods have been developed and applied to the area of microarray gene expression data analysis, the majority of them are based on linear models, which however are not necessarily appropriate for the underlying connection between the target disease and its associated explanatory genes. Linear model based methods usually also bring in false positive significant features more easily. Furthermore, linear model based algorithms often involve calculating the inverse of a matrix that is possibly singular when the number of potentially important genes is relatively large. This leads to problems of numerical instability. To overcome these limitations, a few non-linear methods have recently been introduced to the area. Many of the existing non-linear methods have a couple of critical problems, the model selection problem and the model parameter tuning problem, that remain unsolved or even untouched. In general, a unified framework that allows model parameters of both linear and non-linear models to be easily tuned is always preferred in real-world applications. Kernel-induced learning methods form a class of approaches that show promising potentials to achieve this goal. Results A hierarchical statistical model named kernel-imbedded Gaussian process (KIGP) is developed under a unified Bayesian framework for binary disease classification problems using microarray gene expression data. In particular, based on a probit regression setting, an adaptive algorithm with a cascading structure is designed to find the appropriate kernel, to discover the potentially significant genes, and to make the optimal class prediction accordingly. A Gibbs sampler is built as the core of the algorithm to make Bayesian inferences. 
Simulation studies showed that, even without any knowledge of the underlying generative model, the KIGP performed very close to the theoretical Bayesian bound not only in the case with a linear Bayesian classifier but also in the case with a very non-linear Bayesian classifier. This sheds light on its broader usability for microarray data analysis problems, especially those for which linear methods work awkwardly. The KIGP was also applied to four published microarray datasets, and the results showed that the KIGP performed better than, or at least as well as, the referenced state-of-the-art methods in all of these cases. Conclusion Mathematically built on the kernel-induced feature space concept under a Bayesian framework, the KIGP method presented in this paper provides a unified machine learning approach to explore both the linear and the possibly non-linear underlying relationship between the target features of a given binary disease classification problem and the related explanatory gene expression data. More importantly, it incorporates the model parameter tuning into the framework. The model selection problem is addressed in the form of selecting a proper kernel type. The KIGP method also gives Bayesian probabilistic predictions for disease classification. These properties and features are beneficial to most real-world applications. The algorithm is naturally robust in numerical computation. The simulation studies and the published data studies demonstrated that the proposed KIGP performs satisfactorily and consistently. PMID:17328811
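The full KIGP involves a probit link and a Gibbs sampler; purely as an illustration of the kernel-induced feature-space idea, here is a minimal kernel ridge classifier with an RBF kernel on synthetic two-gene data (this is not the authors' algorithm, and all data are invented):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def fit_kernel_classifier(X, y, gamma=1.0, ridge=1e-3):
    """Kernel ridge fit of +/-1 labels; the sign of the kernel expansion predicts."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + ridge * np.eye(len(X)), y)
    return lambda Xnew: np.sign(rbf_kernel(Xnew, X, gamma) @ alpha)

# Toy "expression profiles": two genes, two well-separated classes.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 0.3, (20, 2))               # class -1 near origin
X1 = rng.normal(0.0, 0.3, (20, 2)) + [2.0, 2.0]  # class +1 shifted
X = np.vstack([X0, X1])
y = np.array([-1.0] * 20 + [1.0] * 20)
predict = fit_kernel_classifier(X, y)
```

The kernel choice here plays the role of the "model selection" step the abstract describes: swapping the RBF kernel for a linear one recovers a linear model within the same framework.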
MOVES-Matrix and distributed computing for microscale line source dispersion analysis.
Liu, Haobing; Xu, Xiaodan; Rodgers, Michael O; Xu, Yanzhi Ann; Guensler, Randall L
2017-07-01
MOVES and AERMOD are the U.S. Environmental Protection Agency's recommended models for use in project-level transportation conformity and hot-spot analysis. However, the structure and algorithms involved in running MOVES make analyses cumbersome and time-consuming. Likewise, the AERMOD modeling setup process, with its extensive data requirements and required input formats, leads to a high potential for analysis error in dispersion modeling. This study presents a distributed computing method for line source dispersion modeling that integrates MOVES-Matrix, a high-performance emission modeling tool, with the microscale dispersion models CALINE4 and AERMOD. MOVES-Matrix was prepared by iteratively running MOVES across all possible iterations of vehicle source-type, fuel, operating conditions, and environmental parameters to create a huge multi-dimensional emission rate lookup matrix. AERMOD and CALINE4 are connected with MOVES-Matrix in a distributed computing cluster using a series of Python scripts. This streamlined system built on MOVES-Matrix generates exactly the same emission rates and concentration results as using MOVES with AERMOD and CALINE4, but the approach is more than 200 times faster than using the MOVES graphical user interface. Because AERMOD requires detailed meteorological input, which is difficult to obtain, this study also recommends using CALINE4 as a screening tool for identifying the potential areas that may exceed air quality standards before using AERMOD (and identifying areas that are exceedingly unlikely to exceed air quality standards). The CALINE4 worst-case method yields consistently higher concentration results than AERMOD for all comparisons in this paper, as expected given the nature of the meteorological data employed. The paper demonstrates a distributed computing method for line source dispersion modeling that integrates MOVES-Matrix with CALINE4 and AERMOD.
This streamlined system generates exactly the same emission rates and concentration results as the traditional approach of running MOVES with AERMOD and CALINE4, which are regulatory models approved by the U.S. EPA for conformity analysis, but it is more than 200 times faster than implementing the MOVES model. We highlight the potentially significant benefit of using CALINE4 as a screening tool for identifying potential areas that may exceed air quality standards before using AERMOD, which requires much more meteorological input than CALINE4.
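The core precompute-then-lookup idea behind MOVES-Matrix can be sketched as follows; the bin structure and the toy emission formula are hypothetical stand-ins for actual MOVES runs:

```python
import itertools

def slow_emission_model(vehicle, speed_bin, temp_bin):
    """Stand-in for a full MOVES run (a hypothetical toy formula)."""
    base = {"car": 0.12, "truck": 0.45}[vehicle]
    return base * (1 + 0.05 * speed_bin) * (1 + 0.02 * temp_bin)

# Precompute every combination of input bins once into a lookup matrix...
VEHICLES, SPEEDS, TEMPS = ["car", "truck"], range(16), range(10)
matrix = {
    key: slow_emission_model(*key)
    for key in itertools.product(VEHICLES, SPEEDS, TEMPS)
}

def lookup(vehicle, speed_bin, temp_bin):
    """...so each dispersion run fetches a rate in O(1) instead of re-running
    the emission model. Results are identical by construction."""
    return matrix[(vehicle, speed_bin, temp_bin)]
```

The speedup reported in the abstract comes from exactly this trade: the expensive model is executed once per bin combination up front, and every subsequent CALINE4/AERMOD run only queries the matrix.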
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schulze-Halberg, Axel, E-mail: axgeschu@iun.edu; Department of Physics, Indiana University Northwest, 3400 Broadway, Gary IN 46408; Roy, Pinaki, E-mail: pinaki@isical.ac.in
We construct energy-dependent potentials for which the Schrödinger equations admit solutions in terms of exceptional orthogonal polynomials. Our method of construction is based on certain point transformations, applied to the equations of exceptional Hermite, Jacobi and Laguerre polynomials. We present several examples of boundary-value problems with energy-dependent potentials that admit a discrete spectrum and the corresponding normalizable solutions in closed form.
Maxwell's second- and third-order equations of transfer for non-Maxwellian gases
NASA Technical Reports Server (NTRS)
Baganoff, D.
1992-01-01
Condensed algebraic forms for Maxwell's second- and third-order equations of transfer are developed for the case of molecules described by either elastic hard spheres, inverse-power potentials, or by Bird's variable hard-sphere model. These greatly reduced, yet exact, equations provide a new point of origin, when using the moment method, for seeking approximate solutions in the kinetic theory of gases for molecular models that are physically more realistic than that provided by the Maxwell model. An important by-product of the analysis when using these second- and third-order relations is that a clear mathematical connection develops between Bird's variable hard-sphere model and that for the inverse-power potential.
Wu, Sheng-Nan
2004-03-31
The purpose of this study was to develop a method to simulate the cardiac action potential using a Microsoft Excel spreadsheet. The mathematical model contained voltage-gated ionic currents that were modeled using either Beeler-Reuter (B-R) or Luo-Rudy (L-R) phase 1 kinetics. The simulation protocol involves the use of in-cell formulas directly typed into a spreadsheet. The capability of spreadsheet iteration was used in these simulations. It does not require any prior knowledge of computer programming, although the use of the macro language can speed up the calculation. The normal configuration of the cardiac ventricular action potential can be well simulated in the B-R model that is defined by four individual ionic currents, each representing the diffusion of ions through channels in the membrane. The contribution of Na+ inward current to the rate of depolarization is reproduced in this model. After removal of Na+ current from the model, a constant current stimulus elicits an oscillatory change in membrane potential. In the L-R phase 1 model where six types of ionic currents were defined, the effect of extracellular K+ concentration on changes both in the time course of repolarization and in the time-independent K+ current can be demonstrated, when the solutions are implemented in Excel. Using the simulation protocols described here, the users can readily study and graphically display the underlying properties of ionic currents to see how changes in these properties determine the behavior of the heart cell. The method employed in these simulation protocols may also be extended or modified to other biological simulation programs.
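The row-by-row spreadsheet iteration described above is a forward-Euler scheme. A minimal stand-in is sketched below, with FitzHugh-Nagumo kinetics substituting for the Beeler-Reuter or Luo-Rudy currents of the original protocol (the parameters are the textbook FHN values, not the paper's):

```python
def simulate_ap(n_steps=3000, dt=0.05, i_stim=0.5, stim_steps=40):
    """Forward-Euler update, the same scheme a spreadsheet's in-cell
    formulas apply row by row. FitzHugh-Nagumo kinetics stand in for the
    Beeler-Reuter / Luo-Rudy ionic currents of the original protocol."""
    v, w = -1.2, -0.6  # membrane variable and recovery variable, at rest
    trace = []
    for step in range(n_steps):
        stim = i_stim if step < stim_steps else 0.0
        dv = v - v**3 / 3 - w + stim          # fast "membrane" equation
        dw = 0.08 * (v + 0.7 - 0.8 * w)       # slow recovery equation
        v, w = v + dt * dv, w + dt * dw       # one "spreadsheet row" per step
        trace.append(v)
    return trace

trace = simulate_ap()
```

A brief stimulus pushes the system past threshold, producing a single regenerative spike and return to rest, which is the qualitative behavior the spreadsheet protocol reproduces with the full ionic-current models.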
The role of modelling in prioritising and planning clinical trials.
Chilcott, J; Brennan, A; Booth, A; Karnon, J; Tappenden, P
2003-01-01
To identify the role of modelling in planning and prioritising trials. The review focuses on modelling methods used in the construction of disease models and on methods for their analysis and interpretation. Searches were initially developed in MEDLINE and then translated into other databases. Systematic reviews of the methodological and case study literature were undertaken. Search strategies focused on the intersection between three domains: modelling, health technology assessment and prioritisation. The review found that modelling can extend the validity of trials by: generalising from trial populations to specific target groups; generalising to other settings and countries; extrapolating trial outcomes to the longer term; linking intermediate outcome measures to final outcomes; extending analysis to the relevant comparators; adjusting for prognostic factors in trials; and synthesising research results. The review suggested that modelling may offer greatest benefits where the impact of a technology occurs over a long duration, where disease/technology characteristics are not observable, where there are long lead times in research, or for rapidly changing technologies. It was also found that modelling can inform the key parameters for research: sample size, trial duration and population characteristics. One-way, multi-way and threshold sensitivity analyses have been used to inform these aspects, but all are flawed. The payback approach has been piloted, and while there have been weaknesses in its implementation, the approach does have potential. Expected value of information analysis is the only existing methodology that has been applied in practice and can address all these issues. The potential benefit of this methodology is that the value of research is directly related to its impact on technology commissioning decisions, and is demonstrated in real and absolute rather than relative terms; it assesses the technical efficiency of different types of research.
Modelling is not a substitute for data collection. However, modelling can identify trial designs of low priority in informing health technology commissioning decisions. Good practice in undertaking and reporting economic modelling studies requires further dissemination and support, specifically in sensitivity analyses, model validation and the reporting of assumptions. Case studies of the payback approach using stochastic sensitivity analyses should be developed. Use of overall expected value of perfect information should be encouraged in modelling studies seeking to inform prioritisation and planning of health technology assessments. Research is required to assess if the potential benefits of value of information analysis can be realised in practice; on the definition of an adequate objective function; on methods for analysing computationally expensive models; and on methods for updating prior probability distributions.
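The expected value of perfect information the review recommends is the gap between choosing the best option separately for each parameter draw and committing once on expectations. A toy Monte Carlo sketch with a hypothetical two-option net-benefit model (all figures invented):

```python
import random

def evpi(n_samples=20000, seed=42):
    """Monte Carlo expected value of perfect information (EVPI).

    Two hypothetical technologies: one with a fixed net benefit, one whose
    net benefit depends on an uncertain effectiveness parameter theta.
    """
    rng = random.Random(seed)
    nb_current, nb_new = [], []
    for _ in range(n_samples):
        theta = rng.gauss(0.5, 0.2)
        nb_current.append(1000.0)              # net benefit of current care
        nb_new.append(3000.0 * theta - 500.0)  # net benefit of new technology
    n = float(n_samples)
    # Decide now: commit to the option with the higher expected net benefit.
    value_now = max(sum(nb_current) / n, sum(nb_new) / n)
    # Perfect information: pick the best option separately for each draw.
    value_perfect = sum(max(a, b) for a, b in zip(nb_current, nb_new)) / n
    return value_perfect - value_now

val = evpi()
```

EVPI is non-negative by construction, and a large value signals that further research (e.g. a trial that pins down theta) could change the commissioning decision enough to be worth funding.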
Cheminformatics-aided pharmacovigilance: application to Stevens-Johnson Syndrome
Low, Yen S; Caster, Ola; Bergvall, Tomas; Fourches, Denis; Zang, Xiaoling; Norén, G Niklas; Rusyn, Ivan; Edwards, Ralph
2016-01-01
Objective Quantitative Structure-Activity Relationship (QSAR) models can predict adverse drug reactions (ADRs), and thus provide early warnings of potential hazards. Timely identification of potential safety concerns could protect patients and aid early diagnosis of ADRs among the exposed. Our objective was to determine whether global spontaneous reporting patterns might allow chemical substructures associated with Stevens-Johnson Syndrome (SJS) to be identified and utilized for ADR prediction by QSAR models. Materials and Methods Using a reference set of 364 drugs having positive or negative reporting correlations with SJS in the VigiBase global repository of individual case safety reports (Uppsala Monitoring Center, Uppsala, Sweden), chemical descriptors were computed from drug molecular structures. Random Forest and Support Vector Machines methods were used to develop QSAR models, which were validated by external 5-fold cross validation. Models were employed for virtual screening of DrugBank to predict SJS actives and inactives, which were corroborated using knowledge bases like VigiBase, ChemoText, and MicroMedex (Truven Health Analytics Inc, Ann Arbor, Michigan). Results We developed QSAR models that could accurately predict if drugs were associated with SJS (area under the curve of 75%–81%). Our 10 most active and inactive predictions were substantiated by SJS reports (or lack thereof) in the literature. Discussion Interpretation of QSAR models in terms of significant chemical descriptors suggested novel SJS structural alerts. Conclusions We have demonstrated that QSAR models can accurately identify SJS active and inactive drugs. Requiring chemical structures only, QSAR models provide effective computational means to flag potentially harmful drugs for subsequent targeted surveillance and pharmacoepidemiologic investigations. PMID:26499102
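The external 5-fold cross-validation used to validate the QSAR models can be sketched generically; a nearest-centroid rule stands in for the Random Forest / SVM learners, and the "descriptors" are synthetic, so this illustrates the validation protocol rather than the study's models:

```python
import random

def five_fold_cv(X, y, seed=0):
    """External 5-fold cross-validation with a nearest-centroid stand-in
    for the Random Forest / SVM models used in the study."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[k::5] for k in range(5)]
    correct = 0
    for k in range(5):
        held_out = set(folds[k])
        train = [i for i in idx if i not in held_out]
        # Fit on the training folds only: per-class mean descriptor vectors.
        cents = {}
        for label in set(y[i] for i in train):
            members = [X[i] for i in train if y[i] == label]
            cents[label] = [sum(col) / len(members) for col in zip(*members)]
        for i in folds[k]:
            pred = min(cents, key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(X[i], cents[c])))
            correct += (pred == y[i])
    return correct / len(X)

# Synthetic "descriptors": actives shifted away from inactives.
rng = random.Random(1)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(40)] + \
    [[rng.gauss(4, 1), rng.gauss(4, 1)] for _ in range(40)]
y = [0] * 40 + [1] * 40
acc = five_fold_cv(X, y)
```

The essential point is that each fold is scored by a model that never saw it during fitting, which is what makes the reported AUCs an external rather than a resubstitution estimate.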
System and Method for Providing Model-Based Alerting of Spatial Disorientation to a Pilot
NASA Technical Reports Server (NTRS)
Johnson, Steve (Inventor); Conner, Kevin J (Inventor); Mathan, Santosh (Inventor)
2015-01-01
A system and method monitor aircraft state parameters, for example, aircraft movement and flight parameters, apply those inputs to a spatial disorientation model, and predict when a pilot may become spatially disoriented. Once the system predicts a potentially disoriented pilot, the sensitivity for alerting the pilot to conditions exceeding a threshold can be increased, allowing an earlier alert to mitigate the possibility of an incorrect control input.
Control of spiral waves and turbulent states in a cardiac model by travelling-wave perturbations
NASA Astrophysics Data System (ADS)
Wang, Peng-Ye; Xie, Ping; Yin, Hua-Wei
2003-06-01
We propose a travelling-wave perturbation method to control the spatiotemporal dynamics in a cardiac model. It is numerically demonstrated that the method can successfully suppress the wave instability (alternans in action potential duration) in the one-dimensional case and convert spiral waves and turbulent states to the normal travelling wave states in the two-dimensional case. An experimental scheme is suggested which may provide a new design for a cardiac defibrillator.
Why learning and development can lead to poorer recognition memory.
Hayes, Brett K; Heit, Evan
2004-08-01
Current models of inductive reasoning in children and adults assume a central role for categorical knowledge. A recent paper by Sloutsky and Fisher challenges this assumption, showing that children are more likely than adults to rely on perceptual similarity as a basis for induction, and introduces a more direct method for examining the representations activated during induction. This method has the potential to constrain models of induction in novel ways, although there are still important challenges.
Realistic Gamow shell model for resonance and continuum in atomic nuclei
NASA Astrophysics Data System (ADS)
Xu, F. R.; Sun, Z. H.; Wu, Q.; Hu, B. S.; Dai, S. J.
2018-02-01
The Gamow shell model can describe resonance and continuum for atomic nuclei. The model is established in the complex-momentum (complex-k) plane of the Berggren coordinates, in which bound, resonant and continuum states are treated on an equal footing self-consistently. In the present work, the realistic nuclear force, CD Bonn, has been used. We have developed the full \hat{Q}-box folded-diagram method to derive the realistic effective interaction in the model space, which is nondegenerate and contains resonance and continuum channels. The CD-Bonn potential is renormalized using the V low-k method. Choosing 16O as the inert core, we have applied the Gamow shell model to oxygen isotopes.
Quaternion-valued single-phase model for three-phase power system
NASA Astrophysics Data System (ADS)
Gou, Xiaoming; Liu, Zhiwen; Liu, Wei; Xu, Yougen; Wang, Jiabin
2018-03-01
In this work, a quaternion-valued model is proposed in lieu of Clarke's αβ transformation to convert three-phase quantities to a hypercomplex single-phase signal. The concatenated signal can be used for harmonic distortion detection in three-phase power systems. In particular, the proposed model maps all the harmonic frequencies into frequencies in the quaternion domain, whereas methods based on Clarke's transformation fail to detect the zero-sequence voltages. Based on the quaternion-valued model, the Fourier transform, the minimum variance distortionless response (MVDR) algorithm and the multiple signal classification (MUSIC) algorithm are presented as examples to detect harmonic distortion. Simulations are provided to demonstrate the potential of this new modeling method.
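The zero-sequence point can be checked directly: for a pure zero-sequence sample (a = b = c), Clarke's αβ components vanish, while a pure-quaternion encoding of the three phases does not. A minimal sketch, assuming the amplitude-invariant Clarke form (the paper's exact quaternion processing is not reproduced here):

```python
import math

def clarke_alpha_beta(a, b, c):
    """Amplitude-invariant Clarke transform of one three-phase sample."""
    alpha = (2 * a - b - c) / 3
    beta = (b - c) / math.sqrt(3)
    return alpha, beta

def quaternion_sample(a, b, c):
    """Encode one three-phase sample as a pure quaternion (0, a, b, c)."""
    return (0.0, a, b, c)

# Pure zero-sequence condition: all three phases identical.
a = b = c = 0.7
alpha, beta = clarke_alpha_beta(a, b, c)
q = quaternion_sample(a, b, c)
q_norm = math.sqrt(sum(x * x for x in q))
```

Because the αβ pair is identically zero here, any detector built on Clarke's transform is blind to this component, whereas the quaternion signal retains it in its vector part.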
Proposed method for hazard mapping of landslide propagation zone
NASA Astrophysics Data System (ADS)
Serbulea, Manole-Stelian; Gogu, Radu; Manoli, Daniel-Marcel; Gaitanaru, Dragos Stefan; Priceputu, Adrian; Andronic, Adrian; Anghel, Alexandra; Liviu Bugea, Adrian; Ungureanu, Constantin; Niculescu, Alexandru
2013-04-01
Sustainable development of communities situated in areas with landslide potential requires a full understanding of the mechanisms that govern the triggering of the phenomenon as well as the propagation of the sliding mass, with its catastrophic consequences for nearby inhabitants and the environment. Modern analysis methods for areas affected by the movement of soil bodies are presented in this work, as well as a new procedure to assess the landslide hazard. Classical soil mechanics offers sufficient numeric models to assess the landslide triggering zone, such as Limit Equilibrium Methods (Fellenius, Janbu, Morgenstern-Price, Bishop, Spencer etc.), block models or progressive mobilization models, the Lagrange-based finite element method etc. The computation methods for assessing the propagation zones are quite recent and have high computational requirements, and are thus not yet used widely enough in practice to confirm their feasibility. The proposed procedure aims to assess not only the landslide hazard factor, but also the affected areas, by means of simple mathematical operations. The method can easily be employed in GIS software, without requiring engineering training. The result is obtained by computing the first and second derivatives of the digital terrain model (slope and curvature maps). Using the curvature maps, it is shown that one can assess the areas most likely to be affected by the propagation of the sliding masses. The procedure is first applied on a simple theoretical model and then used on a representative section of a high-exposure area in Romania. The method is described by comparison with Romanian legislation for risk and vulnerability assessment, which specifies that the landslide hazard is to be assessed using an average hazard factor Km obtained from various other factors.
In the example presented, it is observed that using the Km factor leads to an inconsistent distribution of the polygonal surfaces corresponding to different landslide potentials. For small values of Km (0.00-0.10) the polygonal surfaces have reduced dimensions along the slopes belonging to the main rivers. This can be corrected by including in the analysis the potential areas to be affected by soil instability. Finally, it is shown that the proposed procedure can be used to better assess these areas and to produce more reliable landslide hazard maps. This work was supported by a grant of the Romanian National Authority for Scientific Research, Program for research - Space Technology and Advanced Research - STAR, project number 30/2012.
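The slope and curvature maps at the heart of the procedure are simply first and second derivatives of the DEM. A minimal sketch on a synthetic bowl-shaped terrain (the cell size and test surface are illustrative, and Laplacian curvature stands in for whichever curvature measure the procedure specifies):

```python
import numpy as np

def slope_and_curvature(dem, cell=1.0):
    """First derivative (slope magnitude) and second derivative (Laplacian
    curvature) of a digital terrain model: the two maps the procedure needs."""
    dzdy, dzdx = np.gradient(dem, cell)
    slope = np.hypot(dzdx, dzdy)
    d2y = np.gradient(dzdy, cell, axis=0)
    d2x = np.gradient(dzdx, cell, axis=1)
    curvature = d2x + d2y
    return slope, curvature

# Synthetic bowl-shaped terrain: z = x^2 + y^2 (a hollow that collects material).
x = np.linspace(-5, 5, 101)
xx, yy = np.meshgrid(x, x)
dem = xx**2 + yy**2
slope, curv = slope_and_curvature(dem, cell=x[1] - x[0])
```

Concave (positive-curvature) cells like the bowl's interior are the candidate accumulation zones for a propagating mass, which is the information the hazard map adds on top of the Km factor.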
Optical model potentials for 6He+64Zn from 63Cu(7Li,6He)64Zn reactions
NASA Astrophysics Data System (ADS)
Yang, L.; Lin, C. J.; Jia, H. M.; Wang, D. X.; Sun, L. J.; Ma, N. R.; Yang, F.; Wu, Z. D.; Xu, X. X.; Zhang, H. Q.; Liu, Z. H.; Bao, P. F.
2017-03-01
Angular distributions of the transfer reaction 63Cu(7Li,6He)64Zn were measured at Elab(7Li) = 12.67, 15.21, 16.33, 23.30, 27.30, and 30.96 MeV. With the interaction potentials of the entrance channel 7Li+63Cu obtained from elastic scattering data as input, the optical potentials of the halo nuclear system 6He+64Zn in the exit channel were extracted by fitting the experimental data with the distorted-wave Born approximation (DWBA) and coupled reaction channels (CRC) methods, respectively. The results show that the threshold anomaly is present in the weakly bound system of 7Li+63Cu and the dispersion relation can be adopted to describe the connection between the real and imaginary potentials, while both the real and imaginary potentials remain nearly constant within the investigated energy region for the halo system of 6He+64Zn. Moreover, calculations with the potentials extracted from the CRC method can reproduce the experimental elastic scattering of the 6He+64Zn system rather well, whereas those with the potentials from the DWBA method, in which the couplings between 7Li and 6He are absent, cannot. This work verifies the validity of the transfer method in the medium-mass target region and lays a solid foundation for the further study of optical potentials for exotic nuclear systems.
Generalized second-order slip boundary condition for nonequilibrium gas flows
NASA Astrophysics Data System (ADS)
Guo, Zhaoli; Qin, Jishun; Zheng, Chuguang
2014-01-01
It is a challenging task to model nonequilibrium gas flows within a continuum-fluid framework. Recently some extended hydrodynamic models in the Navier-Stokes formulation have been developed for such flows. A key problem in the application of such models is that suitable boundary conditions must be specified. In the present work, a generalized second-order slip boundary condition is developed in which an effective mean free path that accounts for the wall effect is used. By combining this slip scheme with certain extended Navier-Stokes constitutive relation models, we obtain a method for nonequilibrium gas flows with solid boundaries. The method is applied to several rarefied gas flows involving planar or curved walls, including Kramers' problem, the planar Poiseuille flow, the cylindrical Couette flow, and the low-speed flow over a sphere. The results show that the proposed method gives satisfactory predictions, indicating the good potential of the method for nonequilibrium flows.
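Second-order slip conditions share the generic form u_slip = C1*l*(du/dn) - C2*l^2*(d2u/dn2), where l is an effective mean free path. The coefficients and mean free path below are generic placeholders, not the paper's wall-effect model:

```python
def slip_velocity(dudn, d2udn2, mean_free_path, c1=1.0, c2=0.5):
    """Generic second-order slip: u_slip = C1*l*du/dn - C2*l^2*d2u/dn2.

    C1, C2 and the (wall-effect corrected) mean free path l are
    placeholders; specific slip models assign them different values."""
    lam = mean_free_path
    return c1 * lam * dudn - c2 * lam**2 * d2udn2
```

The first-derivative term alone recovers the classical first-order slip condition; the second-derivative correction is what extends the usable Knudsen-number range of the Navier-Stokes description.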
Modeling Complex Biological Flows in Multi-Scale Systems using the APDEC Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trebotich, D
We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA-laden fluids as ''bead-rod'' polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-sets method within the APDEC Framework for extracting surfaces from volume renderings of medical image data and used to simulate cardio-vascular and pulmonary flows in critical anatomies.
Modeling complex biological flows in multi-scale systems using the APDEC framework
NASA Astrophysics Data System (ADS)
Trebotich, David
2006-09-01
Approach to Modeling Boundary Layer Ingestion Using a Fully Coupled Propulsion-RANS Model
NASA Technical Reports Server (NTRS)
Gray, Justin S.; Mader, Charles A.; Kenway, Gaetan K. W.; Martins, Joaquim R. R. A.
2017-01-01
Airframe-propulsion integration concepts that use boundary layer ingestion have the potential to reduce aircraft fuel burn. One concept that has been recently explored is NASA's Starc-ABL aircraft configuration, which offers the potential for a 12% mission fuel burn reduction by using a turbo-electric propulsion system with an aft-mounted, electrically driven boundary layer ingestion propulsor. This large potential for improved performance motivates a more detailed study of the boundary layer ingestion propulsor design, but to date, analyses of boundary layer ingestion have used uncoupled methods. These methods account for only aerodynamic effects on the propulsion system or propulsion system effects on the aerodynamics, but not both simultaneously. This work presents a new approach for building fully coupled propulsive-aerodynamic models of boundary layer ingestion propulsion systems. A 1D thermodynamic cycle analysis is coupled to a RANS simulation to model the Starc-ABL aft propulsor at a cruise condition, and the effects of variation in propulsor design on performance are examined. The results indicate that propulsion and aerodynamic effects contribute equally toward the overall performance and that the fully coupled model yields substantially different results than the uncoupled ones. The most significant finding is that boundary layer ingestion, while offering substantial fuel burn savings, introduces throttle-dependent aerodynamic effects that need to be accounted for. This work represents a first step toward the multidisciplinary design optimization of boundary layer ingestion propulsion systems.
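The coupling idea can be sketched as a toy fixed-point iteration: a surrogate "RANS" model returns the propulsor-face inflow velocity as a function of thrust, a surrogate 1D cycle model returns thrust as a function of inflow, and the two are iterated to convergence. Both relations below are invented linear stand-ins, not the paper's models:

```python
def aero_model(thrust):
    # Surrogate for the RANS solution: inflow velocity (m/s) recovered
    # at the aft propulsor face rises slightly with thrust (toy relation).
    return 200.0 + 0.002 * thrust

def cycle_model(v_in):
    # Surrogate 1D thermodynamic cycle: thrust (N) falls as the ingested
    # inflow velocity rises (toy relation).
    return 5000.0 - 10.0 * v_in

def coupled_solve(tol=1e-10, max_iter=100):
    # Gauss-Seidel (fixed-point) coupling of the two disciplines.
    thrust = 5000.0
    for _ in range(max_iter):
        v_in = aero_model(thrust)
        new_thrust = cycle_model(v_in)
        if abs(new_thrust - thrust) < tol:
            break
        thrust = new_thrust
    return thrust, v_in
```

An uncoupled analysis would evaluate each model once with frozen inputs; the fixed point of the loop is what "fully coupled" means here, and the gap between the two answers is exactly the coupling effect the paper quantifies.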
Ardham, Vikram Reddy; Deichmann, Gregor; van der Vegt, Nico F A; Leroy, Frédéric
2015-12-28
We address the question of how reducing the number of degrees of freedom modifies the interfacial thermodynamic properties of heterogeneous solid-liquid systems. We consider the example of n-hexane interacting with multi-layer graphene, which we model with both fully atomistic and coarse-grained (CG) models. The CG models are obtained by means of the conditional reversible work (CRW) method. The interfacial thermodynamics of these models is characterized by the solid-liquid work of adhesion WSL calculated by means of the dry-surface methodology through molecular dynamics simulations. We find that the CRW potentials lead to values of WSL that are larger than the atomistic ones. The relationship between the structure of n-hexane in the vicinity of the surface and WSL is elucidated through a detailed study of the energy and entropy components of WSL. We highlight the crucial role played by the solid-liquid energy fluctuations. Our approach suggests that CG potentials should be designed in such a way that they preserve not only the range of solid-liquid interaction energies but also their fluctuations, in order to preserve the reference atomistic value of WSL. Our study thus opens perspectives on deriving CG interaction potentials that preserve the thermodynamics of solid-liquid contacts and will find application in studies that address materials driven by interfaces.
APPLE - An aeroelastic analysis system for turbomachines and propfans
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Bakhle, Milind A.; Srivastava, R.; Mehmed, Oral
1992-01-01
This paper reviews aeroelastic analysis methods for propulsion elements (advanced propellers, compressors and turbines) being developed and used at NASA Lewis Research Center. These aeroelastic models include both structural and aerodynamic components. The structural models include the typical section model, the beam model with and without disk flexibility, and the finite element blade model with plate bending elements. The aerodynamic models are based on the solution of equations ranging from the two-dimensional linear potential equation for a cascade to the three-dimensional Euler equations for multi-blade configurations. Typical results are presented for each aeroelastic model. Suggestions for further research are indicated. All the available aeroelastic models and analysis methods are being incorporated into a unified computer program named APPLE (Aeroelasticity Program for Propulsion at LEwis).
Blackman, Arne V.; Grabuschnig, Stefan; Legenstein, Robert; Sjöström, P. Jesper
2014-01-01
Accurate 3D reconstruction of neurons is vital for applications linking anatomy and physiology. Reconstructions are typically created using Neurolucida after biocytin histology (BH). An alternative inexpensive and fast method is to use freeware such as Neuromantic to reconstruct from fluorescence imaging (FI) stacks acquired using 2-photon laser-scanning microscopy during physiological recording. We compare these two methods with respect to morphometry, cell classification, and multicompartmental modeling in the NEURON simulation environment. Quantitative morphological analysis of the same cells reconstructed using both methods reveals that whilst biocytin reconstructions facilitate tracing of more distal collaterals, both methods are comparable in representing the overall morphology: automated clustering of reconstructions from both methods successfully separates neocortical basket cells from pyramidal cells but not BH from FI reconstructions. BH reconstructions suffer more from tissue shrinkage and compression artifacts than FI reconstructions do. FI reconstructions, on the other hand, consistently have larger process diameters. Consequently, significant differences in NEURON modeling of excitatory post-synaptic potential (EPSP) forward propagation are seen between the two methods, with FI reconstructions exhibiting smaller depolarizations. Simulated action potential backpropagation (bAP), however, is indistinguishable between reconstructions obtained with the two methods. In our hands, BH reconstructions are necessary for NEURON modeling and detailed morphological tracing, and thus remain state of the art, although they are more labor intensive, more expensive, and suffer from a higher failure rate due to the occasional poor outcome of histological processing. 
However, for a subset of anatomical applications such as cell type identification, FI reconstructions are superior, because of indistinguishable classification performance with greater ease of use, essentially 100% success rate, and lower cost. PMID:25071470
Jarnevich, Catherine S.; Young, Nicholas E; Sheffels, Trevor R.; Carter, Jacoby; Systma, Mark D.; Talbert, Colin
2017-01-01
Invasive species provide a unique opportunity to evaluate factors controlling biogeographic distributions; we can consider introduction success as an experiment testing suitability of environmental conditions. Predicting potential distributions of spreading species is not easy, and forecasting potential distributions with changing climate is even more difficult. Using the globally invasive coypu (Myocastor coypus [Molina, 1782]), we evaluate and compare the utility of a simplistic ecophysiology-based model and a correlative model to predict current and future distribution. The ecophysiological model was based on winter temperature relationships with nutria survival. We developed correlative statistical models using the Software for Assisted Habitat Modeling and biologically relevant climate data with a global extent. We applied the ecophysiology-based model to several global circulation model (GCM) predictions for mid-century. We used global coypu introduction data to evaluate these models and to explore a hypothesized physiological limitation, finding general agreement with known coypu distribution locally and globally and support for an upper thermal tolerance threshold. GCM-based results showed variability in the predicted coypu distribution among GCMs, but general agreement on an increasing suitable area in the USA. Our methods highlighted the dynamic nature of the edges of the coypu distribution due to climate non-equilibrium, and uncertainty associated with forecasting future distributions. Areas deemed suitable habitat, especially those on the edge of the current known range, could be used for early detection of the spread of coypu populations for management purposes. Combining approaches can be beneficial to predicting potential distributions of invasive species now and in the future and in exploring hypotheses of factors controlling distributions.
Model selection for logistic regression models
NASA Astrophysics Data System (ADS)
Duller, Christine
2012-09-01
Model selection for logistic regression models decides which of some given potential regressors have an effect and hence should be included in the final model. The second interesting question is whether a certain factor is heterogeneous among some subsets, i.e. whether the model should include a random intercept or not. In this paper these questions are answered with classical as well as Bayesian methods. As an application, some results of recent research projects in medicine and business administration are shown.
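A common classical criterion for this kind of regressor selection is the AIC. The sketch below fits logistic regressions by Newton-Raphson in plain NumPy and compares two candidate models; the simulated data and the choice of criterion are illustrative assumptions (the Bayesian side and the random-intercept question treated in the paper are not shown):

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    # Newton-Raphson maximum-likelihood fit of logistic regression;
    # an intercept column is added automatically.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))
        W = p * (1.0 - p)
        H = X1.T @ (X1 * W[:, None]) + 1e-8 * np.eye(X1.shape[1])
        beta = beta + np.linalg.solve(H, X1.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X1 @ beta))
    ll = np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return beta, ll

def aic(loglik, n_params):
    # Akaike information criterion: smaller is better.
    return 2.0 * n_params - 2.0 * loglik

# Simulated example: the response depends on x1 only; x2 is irrelevant.
rng = np.random.default_rng(42)
x1 = rng.standard_normal(500)
x2 = rng.standard_normal(500)
y = (rng.uniform(size=500) < 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x1)))).astype(float)
b1, ll_1 = fit_logistic(np.column_stack([x1]), y)          # candidate model 1
b12, ll_12 = fit_logistic(np.column_stack([x1, x2]), y)    # candidate model 2
```

Comparing `aic(ll_1, 2)` with `aic(ll_12, 3)` implements the selection step: the extra regressor must buy enough log-likelihood to offset its penalty of 2.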
Estimation of selection intensity under overdominance by Bayesian methods.
Buzbas, Erkan Ozge; Joyce, Paul; Abdo, Zaid
2009-01-01
A balanced pattern in the allele frequencies of polymorphic loci is a potential sign of selection, particularly of overdominance. Although this type of selection is of some interest in population genetics, there exist no likelihood-based approaches specifically tailored to making inference on selection intensity. To fill this gap, we present Bayesian methods to estimate selection intensity under k-allele models with overdominance. Our model allows for an arbitrary number of loci and alleles within a locus. The neutral and selected variability within each locus are modeled with corresponding k-allele models. To estimate the posterior distribution of the mean selection intensity in a multilocus region, a hierarchical setup between loci is used. The methods are demonstrated with data at the Human Leukocyte Antigen loci from worldwide populations.
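A minimal Bayesian sketch of this kind of inference: a Metropolis sampler draws the selection intensity σ from a posterior built on a toy saturating relation between σ and equilibrium heterozygosity. The likelihood, the flat prior bounds, and all parameter values are illustrative assumptions, not the k-allele likelihood of the paper:

```python
import numpy as np

def log_post(sigma, h_obs, h_sd=0.05):
    # Toy model: equilibrium heterozygosity rises with selection
    # intensity sigma via a saturating curve h(sigma) = sigma/(1+sigma)
    # (illustrative only), with a Gaussian observation error.
    if sigma <= 0 or sigma > 50:      # flat prior on (0, 50]
        return -np.inf
    h_model = sigma / (1.0 + sigma)
    return -0.5 * ((h_obs - h_model) / h_sd) ** 2

def metropolis(h_obs, n=20000, step=0.5, seed=0):
    # Random-walk Metropolis sampler for the posterior of sigma.
    rng = np.random.default_rng(seed)
    sigma, lp = 1.0, log_post(1.0, h_obs)
    out = []
    for _ in range(n):
        prop = sigma + step * rng.standard_normal()
        lp_prop = log_post(prop, h_obs)
        if np.log(rng.uniform()) < lp_prop - lp:
            sigma, lp = prop, lp_prop
        out.append(sigma)
    return np.array(out[n // 2:])     # discard first half as burn-in
```

With `h_obs = 0.8` the toy likelihood centers the posterior near σ = 4 (since σ/(1+σ) = 0.8 there); the hierarchical multilocus structure of the paper would add one such σ per locus plus a shared hyperprior.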
The importance of the external potential on group electronegativity.
Leyssens, Tom; Geerlings, Paul; Peeters, Daniel
2005-11-03
The electronegativity of groups placed in a molecular environment is obtained using CCSD calculations of the electron affinity and ionization energy. A point charge model is used as an approximation of the molecular environment. The electronegativity values obtained in the presence of a point charge model are compared to the isolated group property to estimate the importance of the external potential on the group's electronegativity. The validity of the "group in molecule" electronegativities is verified by comparing EEM (electronegativity equalization method) charge transfer values to the explicitly calculated natural population analysis (NPA) ones, as well as by comparing the variation in electronegativity between the isolated functional group and the functional group in the presence of a modeled environment with the variation based on a perturbation expansion of the chemical potential.
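The EEM charge-transfer values mentioned above come from solving a small constrained minimization. As a sketch, assuming a textbook EEM energy with atomic electronegativities χ_i, hardnesses η_i, and Coulomb coupling q_i·q_j/R_ij, minimized subject to a fixed total charge (parameter values are illustrative, not the CCSD-derived ones of the paper):

```python
import numpy as np

def eem_charges(chi, eta, R, q_total=0.0):
    # Electronegativity equalization: minimize
    #   E(q) = sum_i chi_i q_i + sum_i eta_i q_i^2 + sum_{i<j} q_i q_j / R_ij
    # subject to sum_i q_i = q_total, via a Lagrange multiplier.
    n = len(chi)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for i in range(n):
        A[i, i] = 2.0 * eta[i]
        for j in range(n):
            if i != j:
                A[i, j] = 1.0 / R[i, j]
        A[i, n] = 1.0              # Lagrange-multiplier column
        b[i] = -chi[i]
    A[n, :n] = 1.0                 # total-charge constraint row
    b[n] = q_total
    sol = np.linalg.solve(A, b)
    # sol[:n] are the charges; -sol[n] is the equalized electronegativity
    # shared by all atoms at the solution.
    return sol[:n], -sol[n]
```

For a two-atom toy system with χ = (1, 2) and equal hardness, the more electronegative atom ends up negatively charged and both sites report the same effective electronegativity, which is the equalization property the abstract relies on.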
Li, Xin; Yang, Zhong-Zhi
2005-05-12
We present a potential model for Li(+)-water clusters based on a combination of the atom-bond electronegativity equalization and molecular mechanics (ABEEM/MM), which takes the ABEEM charges of the cation and of all atoms, bonds, and lone pairs of the water molecules into the intermolecular electrostatic interaction term of molecular mechanics. The model allows the point charges on the cationic site and on the seven sites of an ABEEM-7P water molecule to fluctuate in response to the cluster geometry. The water molecules in the first sphere of Li(+) are strongly structured and there is obvious charge transfer between the cation and the water molecules; therefore, the charge constraint on the ionic cluster includes the charge constraint on the Li(+) and the first-shell water molecules and the charge neutrality constraint on each water molecule in the external hydration shells. The newly constructed potential model based on ABEEM/MM is first applied to ionic clusters and reproduces gas-phase properties of Li(+)(H(2)O)(n) (n = 1-6 and 8), including optimized geometries, ABEEM charges, binding energies, frequencies, and so on, which are in fair agreement with those measured by available experiments and calculated by ab initio methods. Prospects and benefits introduced by this potential model are pointed out.
Fowler, Nicholas J; Blanford, Christopher F; Warwicker, Jim; de Visser, Sam P
2017-11-02
Blue copper proteins, such as azurin, show dramatic changes in Cu2+/Cu+ reduction potential upon mutation over the full physiological range. Hence, they have important functions in electron transfer and oxidation chemistry and have applications in industrial biotechnology. The details of what determines these reduction potential changes upon mutation are still unclear. Moreover, it has been difficult to model and predict the reduction potential of azurin mutants, and currently no unique procedure or workflow pattern exists. Furthermore, high-level computational methods can be accurate but are too time consuming for practical use. In this work, a novel approach for calculating reduction potentials of azurin mutants is shown, based on a combination of continuum electrostatics, density functional theory and empirical hydrophobicity factors. Our method accurately reproduces experimental reduction potential changes of 30 mutants with respect to wildtype within experimental error and highlights the factors contributing to the reduction potential change. Finally, reduction potentials are predicted for a series of 124 new mutants that have not yet been investigated experimentally. Several mutants are identified that are located well over 10 Å from the copper center yet change the reduction potential by more than 85 mV. The work shows that secondary coordination sphere mutations mostly lead to long-range electrostatic changes and hence can be modeled accurately with continuum electrostatics. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
Interactions of molecules and the properties of crystals
NASA Astrophysics Data System (ADS)
McConnell, Thomas Daniel Leigh
In this thesis the basic theory of the lattice dynamics of molecular crystals is considered, with particular reference to the specific case of linear molecules. The objective is to carry out a critical investigation of a number of empirical potentials as models for real systems. Suitable coordinates are introduced, in particular vibrational coordinates which are used to describe the translational and rotational modes of the free molecule. The Taylor expansion of the intermolecular potential is introduced and its terms considered, in particular the (first-order) equilibrium conditions for such a system and the (second-order) lattice vibrations. The elastic properties are also considered, in particular with reference to the specific case of rhombohedral crystals. The compressibility and a number of conditions for elastic stability are introduced. The total intermolecular interaction potential is divided into three components using perturbation methods, the electrostatic energy, the repulsion energy and the dispersion energy. A number of models are introduced for these various components. The induction energy is neglected. The electrostatic interaction is represented by atomic multipole and molecular multipole models. The repulsion and dispersion energies are modelled together in a central interaction potential, either the Lennard-Jones atom-atom potential or the anisotropic Berne-Pechukas molecule-molecule potential. In each case, the Taylor expansion coefficients, used to calculate the various molecular properties, are determined. An algorithm is described which provides a relatively simple method for calculating cartesian tensors, which are found in the Taylor expansion coefficients of the multipolar potentials. This proves to be particularly useful from a computational viewpoint, both in terms of programming and calculating efficiency. The model system carbonyl sulphide is introduced and its lattice properties are described. 
Suitable parameters for potentials used to model the system are discussed and the simplifications to the Taylor expansion coefficients due to crystal symmetry are detailed. Four potential parameters are chosen to be fitted to four lattice properties, representing zero, first and second order Taylor expansion coefficients. The supplementary tests of a given fitted potential are detailed. A number of forms for the electrostatic interaction of carbonyl sulphide are considered, each combined with a standard atom-atom potential. The success of the molecular octupole model is considered and the inability of more complex electrostatic potentials to improve on this simple model is noted. The anisotropic Berne-Pechukas potential, which provides an increased estimate of the compressibility, is considered an improvement on the various atom-atom potentials. The effect of varying the exponents in the atom-atom (or molecule-molecule) potential, representing a systematic variation of the repulsion and dispersion energy models, is examined and a potential which is able to reproduce all of the given lattice properties for carbonyl sulphide is obtained. The molecular crystal of cyanogen iodide is investigated. Superficially it is similar to the crystal of carbonyl sulphide and the potentials used with success for the latter are applied to cyanogen iodide to determine whether they are equally effective models for this molecule. These potentials are found to be far less successful, in all cases yielding a number of unrealistic results. Reasons for the failure of the model are considered, in particular the differences between the electrostatic properties of the two molecules are discussed. It is concluded that some of the simplifications which proved satisfactory for carbonyl sulphide are invalid for simple extension to the case of cyanogen iodide.
A first estimate of the differences in the electrostatic properties is attempted, calculating the induction energies of the two molecules. The assumption that the induction energy may be neglected is justified for the case of carbonyl sulphide but found to be far less satisfactory for cyanogen iodide. Finally details of ab initio calculations are outlined. The amount of experimental data available for the electrostatic properties of the two molecules under consideration is relatively small and the experimental data which is available is supplemented by values obtained from these calculations.
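The Lennard-Jones atom-atom component named in the thesis has a standard closed form. A minimal sketch in reduced units (the Taylor-expansion machinery and periodic lattice sums described above are omitted):

```python
import numpy as np

def lennard_jones(r, eps=1.0, sigma=1.0):
    # Standard 12-6 atom-atom potential: repulsion (r^-12) plus
    # dispersion (-r^-6); minimum value -eps at r = 2**(1/6) * sigma.
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def cluster_energy(positions, eps=1.0, sigma=1.0):
    # Pairwise sum over a small cluster of atoms (no periodic boundary).
    e = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(np.asarray(positions[i]) - np.asarray(positions[j]))
            e += lennard_jones(r, eps, sigma)
    return e
```

Fitting the four potential parameters mentioned above amounts to adjusting eps, sigma (and, for the anisotropic Berne-Pechukas form, shape parameters) until computed lattice properties match the four measured ones.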
Campbell, J Q; Petrella, A J
2016-09-06
Population-based modeling of the lumbar spine has the potential to be a powerful clinical tool. However, developing a fully parameterized model of the lumbar spine with accurate geometry has remained a challenge. The current study used automated methods for landmark identification to create a statistical shape model of the lumbar spine. The shape model was evaluated using compactness, generalization ability, and specificity. The primary shape modes were analyzed visually, quantitatively, and biomechanically. The biomechanical analysis was performed by using the statistical shape model with an automated method for finite element model generation to create a fully parameterized finite element model of the lumbar spine. Functional finite element models of the mean shape and the extreme shapes (±3 standard deviations) of all 17 shape modes were created, demonstrating the robust nature of the methods. This study represents an advancement in finite element modeling of the lumbar spine and will allow population-based modeling in the future. Copyright © 2016 Elsevier Ltd. All rights reserved.
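The core of such a statistical shape model is a principal component analysis of aligned landmark vectors; the ±3 SD extreme shapes then follow by scaling each mode. A minimal NumPy sketch (landmark alignment and the finite element pipeline are omitted; the data layout is an assumption):

```python
import numpy as np

def build_shape_model(landmarks):
    # landmarks: array of shape (n_subjects, n_points * dim),
    # assumed already aligned (e.g. by Procrustes analysis).
    mean = landmarks.mean(axis=0)
    X = landmarks - mean
    # SVD of the centered data gives the shape modes (rows of Vt)
    # and, from the singular values, the per-mode standard deviations.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    sd = np.sqrt(s ** 2 / (len(landmarks) - 1))
    return mean, Vt, sd

def synthesize(mean, modes, sd, weights):
    # New shape = mean + sum_k weights[k] * sd[k] * mode_k, so
    # weights = [+3] gives the +3 SD extreme of the first mode.
    w = np.asarray(weights)
    return mean + (w * sd[: len(w)]) @ modes[: len(w)]
```

Compactness (variance captured per mode), generalization, and specificity can all be computed from `sd` and from reconstructions produced by `synthesize`.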
Mukherjee, Sumanta; Bhattacharyya, Chiranjib; Chandra, Nagasuma
2016-08-01
T-cell epitopes serve as molecular keys to initiate adaptive immune responses. Identification of T-cell epitopes is also a key step in rational vaccine design. Most available methods are driven by informatics and are critically dependent on experimentally obtained training data. Analysis of a training set from the Immune Epitope Database (IEDB) for several alleles indicates that the sampling of the peptide space is extremely sparse, covering a tiny fraction of the possible nonamer space, and heavily skewed, thus restricting the range of epitope prediction. We present a new epitope prediction method that has four distinct computational modules: (i) structural modelling, estimating statistical pair-potentials and constraint derivation, (ii) implicit modelling and interaction profiling, (iii) feature representation and binding affinity prediction and (iv) use of graphical models to extract peptide sequence signatures to predict epitopes for HLA class I alleles. HLaffy is a novel and efficient epitope prediction method that predicts epitopes for any Class-1 HLA allele by estimating the binding strengths of peptide-HLA complexes, which is achieved through learning pair-potentials important for peptide binding. It relies on the strength of the mechanistic understanding of peptide-HLA recognition and provides an estimate of the total ligand space for each allele. The performance of HLaffy is seen to be superior to the currently available methods. The method is made accessible through a webserver at http://proline.biochem.iisc.ernet.in/HLaffy (contact: nchandra@biochem.iisc.ernet.in). Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
A statistical nanomechanism of biomolecular patterning actuated by surface potential
NASA Astrophysics Data System (ADS)
Lin, Chih-Ting; Lin, Chih-Hao
2011-02-01
Biomolecular patterning on a nanoscale/microscale on chip surfaces is one of the most important techniques used in in vitro biochip technologies. Here, we report upon a stochastic mechanics model we have developed for biomolecular patterning controlled by surface potential. The probabilistic biomolecular surface adsorption behavior can be modeled by considering the potential difference between the binding and nonbinding states. To verify our model, we experimentally implemented a method of electroactivated biomolecular patterning technology, and the resulting fluorescence intensity matched the prediction of the developed model quite well. Based on this result, we also experimentally demonstrated the creation of a bovine serum albumin pattern with a width of 200 nm in a 5-min operation. This submicron noncovalent-binding biomolecular pattern can be maintained for hours after removing the applied electrical voltage. These stochastic insights and experimental results not only prove the feasibility of submicron biomolecular patterns on chips but also pave the way for nanoscale interfacial-bioelectrical engineering.
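The stated two-state picture (binding vs. nonbinding) leads directly to a Boltzmann occupation probability. A minimal sketch, with the energy difference expressed in units of kT and all values illustrative:

```python
import numpy as np

def binding_probability(delta_e, kT=1.0):
    # Two-state occupation probability from the potential difference
    # delta_e = E_bound - E_unbound. Negative delta_e (binding favored,
    # e.g. under an attractive surface potential) drives the
    # adsorption probability toward 1.
    return 1.0 / (1.0 + np.exp(delta_e / kT))

# Example: an applied surface potential that lowers the binding-state
# energy by 3 kT raises the expected local coverage accordingly.
coverage_on = binding_probability(-3.0)
coverage_off = binding_probability(0.0)
```

Multiplying such a probability map over the electrode pattern by the molecular flux reproduces, in spirit, the patterned fluorescence intensity the experiment compares against.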
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christiansen, P.L.; Scott, A.C.; Muto, V.
In recent years the possibility that anharmonic excitations could play a role in the dynamics of DNA has been considered by several authors. It has been suggested that solitons may be generated thermally at biological temperatures. The denaturation of the DNA double helix has been investigated by statistical mechanics methods and by dynamical simulations. Here the potential for the hydrogen bond in each base pair is approximated by a Morse potential. In the present paper we describe the Toda lattice model of DNA. Temperature enters via the initial conditions and through a perturbation of the dynamical equations. The model is refined by introduction of transversal motion of the Toda lattice and by transversal coupling of two lattices in the hydrogen bonds present in the base pairs. Using Lennard-Jones potentials to model these bonds we are able to obtain results concerning the open states of DNA at biological temperatures. 39 refs., 7 figs.
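The two potentials named above have simple closed forms. A sketch in reduced units (parameter values are illustrative, not the paper's):

```python
import numpy as np

def morse(r, D=0.04, a=4.45, r0=1.0):
    # Morse potential for the hydrogen bond in a base pair:
    # V(r) = D * (1 - exp(-a*(r - r0)))**2. Minimum V = 0 at r = r0;
    # V -> D (the dissociation energy) as the bond stretches, which is
    # how an "open state" of the base pair appears energetically.
    return D * (1.0 - np.exp(-a * (r - r0))) ** 2

def toda(r, a=1.0, b=1.0):
    # Toda nearest-neighbour potential along each strand:
    # exponential repulsion plus a linear term, shifted so the
    # minimum sits at zero displacement from equilibrium.
    return (a / b) * np.exp(-b * r) + a * r - a / b
```

The model couples two Toda chains transversally through Morse-like (here, Lennard-Jones) bonds; thermal energy injected through the initial conditions can then localize as solitons that drive base pairs over the Morse plateau.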
NASA Astrophysics Data System (ADS)
Chen, Wen; Wang, Fajie
Based on the implicit calculus equation modeling approach, this paper proposes a speculative concept of potential and wave operators on negative dimensionality. Unlike standard partial differential equation (PDE) modeling, the implicit calculus modeling approach does not require an explicit expression of the governing PDE. Instead, the fundamental solution of the physical problem is used to implicitly define the differential operator and to carry out the simulation in conjunction with the appropriate boundary conditions. In this study, we conjecture an extension of the fundamental solutions of the standard Laplace and Helmholtz equations to negative dimensionality. Then, by using the singular boundary method, a recent boundary discretization technique, we investigate potential and wave problems using the fundamental solution on negative dimensionality. Numerical experiments reveal that the physical behaviors on negative dimensionality may differ from those on positive dimensionality. This speculative study might open an unexplored territory in research.
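The radial fundamental solution of the Laplace equation, G_d(r) ∝ r^(2−d) for d ≠ 2 (log r for d = 2), continues formally to negative d, where it grows with r instead of decaying. The sketch below (normalization constants omitted, so this is only the radial shape) checks numerically that the continued solution still annihilates the d-dimensional radial Laplacian:

```python
import numpy as np

def fundamental_solution(r, d):
    # Radial shape of the Laplace fundamental solution, continued to
    # arbitrary real dimension d; normalization constant omitted.
    if d == 2:
        return np.log(r)
    return r ** (2.0 - d)

def radial_laplacian(f, r, d, h=1e-5):
    # Radial Laplacian in dimension d: f''(r) + (d - 1)/r * f'(r),
    # evaluated by central finite differences.
    f1 = (f(r + h) - f(r - h)) / (2.0 * h)
    f2 = (f(r + h) - 2.0 * f(r) + f(r - h)) / h ** 2
    return f2 + (d - 1.0) / r * f1

# For d = -1, G(r) grows like r^3 away from the source point.
g = fundamental_solution(2.0, -1.0)
```

In a singular-boundary-method discretization, such fundamental solutions become the kernel functions centered at boundary collocation points, so no explicit PDE ever has to be written down.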
Jang, Won-hee; Jung, Kyoung-mi; Yang, Hye-ri; Lee, Miri; Jung, Haeng-Sun; Lee, Su-Hyon; Park, Miyoung; Lim, Kyung-Min
2015-01-01
The eye irritation potential of drug candidates or pharmaceutical ingredients should be evaluated if there is a possibility of ocular exposure. Traditionally, ocular irritation has been evaluated by the rabbit Draize test. However, rabbit eyes are more sensitive to irritants than human eyes; therefore, a substantial level of false positives is unavoidable. To resolve this species difference, several three-dimensional human corneal epithelial (HCE) models have been developed as alternative eye irritation test methods. Recently, we introduced a new HCE model, MCTT HCE™, which is reconstructed with non-transformed human corneal cells from limbal tissues. Here, we examined whether MCTT HCE™ can be employed to evaluate the eye irritation potential of solid substances. Through optimization of the washing method and exposure time, the treatment time was established as 10 min and the washing procedure was set up as four washes with 10 mL of PBS followed by shaking in 30 mL of PBS in a beaker. With the established eye irritation test protocol, 11 solid substances (5 non-irritants, 6 irritants) were evaluated, which demonstrated an excellent predictive capacity (100% accuracy, 100% specificity and 100% sensitivity). We also compared the performance of our test method with rabbit Draize test results and an in vitro cytotoxicity test with 2D human corneal epithelial cell lines. PMID:26157556