Accurate theoretical chemistry with coupled pair models.
Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan
2009-05-19
Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many-particle Schrödinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found, even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now
Anatomically accurate individual face modeling.
Zhang, Yu; Prakash, Edmond C; Sung, Eric
2003-01-01
This paper presents a new 3D face model of a specific person constructed from the anatomical perspective. By exploiting laser range data, a 3D facial mesh precisely representing the skin geometry is reconstructed. Based on the geometric facial mesh, we develop a deformable multi-layer skin model. It takes into account the nonlinear stress-strain relationship and dynamically simulates the non-homogeneous behavior of real skin. The face model also incorporates a set of anatomically motivated facial muscle actuators and the underlying skull structure. Lagrangian mechanics governs the facial motion dynamics, dictating the dynamic deformation of facial skin in response to muscle contraction. PMID:15455936
Accurate mask model for advanced nodes
NASA Astrophysics Data System (ADS)
Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle
2014-07-01
Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the optical model's imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and from real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model and enable its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.
Pre-Modeling Ensures Accurate Solid Models
ERIC Educational Resources Information Center
Gow, George
2010-01-01
Successful solid modeling requires a well-organized design tree. The design tree is a list of all the object's features and the sequential order in which they are modeled. The solid-modeling process is faster and less prone to modeling errors when the design tree is a simple and geometrically logical definition of the modeled object. Few high…
New model accurately predicts reformate composition
Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.
1994-01-31
Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, has led to rapid evolution of the process, including reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and the revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.
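The temperature and pressure dependence of the rate constants mentioned above is commonly parameterized as an Arrhenius term with an empirical pressure correction; the sketch below illustrates that functional form only (the parameter values and the power-law pressure factor are illustrative assumptions, not the AA model itself):

```python
import math

def rate_constant(T, P, A, Ea, alpha, P_ref=1.0, R=8.314):
    """Illustrative reforming rate constant: an Arrhenius temperature term
    times an empirical power-law pressure correction."""
    return A * math.exp(-Ea / (R * T)) * (P / P_ref) ** alpha

# Hypothetical parameters: higher temperature raises the rate, and a
# negative alpha mimics a reaction suppressed at higher pressure.
k_low  = rate_constant(T=750.0, P=10.0, A=1.0e6, Ea=1.2e5, alpha=-0.5)
k_high = rate_constant(T=800.0, P=10.0, A=1.0e6, Ea=1.2e5, alpha=-0.5)
```

Fitting `A`, `Ea`, and `alpha` per reaction against plant or pilot data is what makes such a model predictive at the higher-severity conditions mentioned in the abstract.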
Isodesmic reaction for accurate theoretical pKa calculations of amino acids and peptides.
Sastre, S; Casasnovas, R; Muñoz, F; Frau, J
2016-04-20
Theoretical and quantitative prediction of pKa values at low computational cost is a current challenge in computational chemistry. We report that the isodesmic reaction scheme provides semi-quantitative predictions (i.e. mean absolute errors of 0.5-1.0 pKa unit) for the pKa1 (α-carboxyl), pKa2 (α-amino) and pKa3 (sidechain groups) of a broad set of amino acids and peptides. This method fills the gaps of thermodynamic cycles for the computational pKa calculation of molecules that are unstable in the gas phase or undergo proton transfer reactions or large conformational changes from solution to the gas phase. We also report the key criteria to choose a reference species to make accurate predictions. This method is computationally inexpensive and makes use of standard density functional theory (DFT) and continuum solvent models. It is also conceptually simple and easy to use for researchers not specialized in theoretical chemistry methods. PMID:27052591
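In the isodesmic (proton-exchange) scheme, the pKa of a target acid HA is anchored to a reference acid HRef of known experimental pKa via the solution-phase free energy of HA + Ref⁻ → A⁻ + HRef: pKa(HA) = pKa(HRef) + ΔG_exch/(RT ln 10). A minimal sketch of that bookkeeping (the free-energy and reference values are invented for illustration):

```python
import math

R = 8.314462618e-3   # gas constant, kJ/(mol*K)
T = 298.15           # K

def pka_isodesmic(delta_g_exchange_kj, pka_reference):
    """pKa from the proton-exchange (isodesmic) reaction
    HA + Ref- -> A- + HRef, using only solution-phase free energies,
    so no gas-phase calculation on the target molecule is needed."""
    return pka_reference + delta_g_exchange_kj / (R * T * math.log(10))

# Hypothetical: exchange free energy of +2.85 kJ/mol against a reference
# acid with experimental pKa 4.76.
pka = pka_isodesmic(2.85, 4.76)
```

Because the gas-phase legs of a thermodynamic cycle cancel, this is why the scheme works for species (like zwitterionic amino acids) that are unstable in the gas phase.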
Accurate modeling of parallel scientific computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Townsend, James C.
1988-01-01
Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
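A performance model of the kind described predicts the time per step from a description of the partition, typically as the slowest processor's compute time plus its communication cost; a toy bulk-synchronous sketch under that assumption (the cost coefficients are invented):

```python
def predict_step_time(partition, work_per_cell=1.0e-6, comm_per_boundary=5.0e-6):
    """Predict time per iteration for a 1-D grid split into contiguous blocks.

    partition: list of block sizes (cells per processor). Each internal
    block boundary costs one exchange; the step time is the maximum over
    processors of (compute + communication), a bulk-synchronous model.
    """
    times = []
    for i, cells in enumerate(partition):
        n_neighbors = (i > 0) + (i < len(partition) - 1)
        times.append(cells * work_per_cell + n_neighbors * comm_per_boundary)
    return max(times)

balanced   = predict_step_time([250, 250, 250, 250])
imbalanced = predict_step_time([700, 100, 100, 100])
```

A remapping scheduler can evaluate such a predictor on candidate partitions and repartition only when the predicted saving exceeds the remapping cost.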
Accurate spectral modeling for infrared radiation
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Gupta, S. K.
1977-01-01
Direct line-by-line integration and quasi-random band model techniques are employed to calculate the spectral transmittance and total band absorptance of the 4.7 micron CO, 4.3 micron CO2, 15 micron CO2, and 5.35 micron NO bands. Results are obtained for different pressures, temperatures, and path lengths. These are compared with available theoretical and experimental investigations. For each gas, extensive tabulations of results are presented for comparative purposes. In almost all cases, line-by-line results are found to be in excellent agreement with the experimental values. The range of validity of the other models and correlations is discussed.
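Line-by-line integration evaluates the monochromatic transmittance exp(-k(ν)u) at each spectral point and integrates 1 - transmittance over the band to obtain the band absorptance; a minimal numerical sketch with a single made-up Lorentz line (units and parameters are illustrative):

```python
import math

def lorentz_absorption(nu, S=1.0, gamma=0.1, nu0=0.0):
    """Absorption coefficient of one Lorentz line with strength S,
    half-width gamma, centered at nu0 (illustrative units)."""
    return (S / math.pi) * gamma / ((nu - nu0) ** 2 + gamma ** 2)

def band_absorptance(path_length, nu_min=-5.0, nu_max=5.0, n=2001):
    """A = integral of (1 - exp(-k(nu)*u)) d nu via the trapezoidal rule."""
    dnu = (nu_max - nu_min) / (n - 1)
    total = 0.0
    for i in range(n):
        nu = nu_min + i * dnu
        absorptivity = 1.0 - math.exp(-lorentz_absorption(nu) * path_length)
        weight = 0.5 if i in (0, n - 1) else 1.0
        total += weight * absorptivity * dnu
    return total

A_short = band_absorptance(0.1)   # short path: optically thin
A_long  = band_absorptance(10.0)  # long path: line center saturates
```

A real line-by-line calculation sums thousands of such lines from spectroscopic databases; band models like the quasi-random model approximate this integral statistically.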
The importance of accurate atmospheric modeling
NASA Astrophysics Data System (ADS)
Payne, Dylan; Schroeder, John; Liang, Pang
2014-11-01
This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example will demonstrate how real conditions for several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970's. Subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate the atmospheric transmission and radiance. Frequently default conditions are used, which can produce errors of as much as 75% in these values. This can have significant impact on remote sensing applications.
Accurate method of modeling cluster scaling relations in modified gravity
NASA Astrophysics Data System (ADS)
He, Jian-hua; Li, Baojiu
2016-06-01
We propose a new method to model cluster scaling relations in modified gravity. Using a suite of nonradiative hydrodynamical simulations, we show that the scaling relations of accumulated gas quantities, such as the Sunyaev-Zel'dovich effect (Compton-y parameter) and the X-ray Compton-y parameter, can be accurately predicted using the known results in the ΛCDM model with a precision of ~3%. This method provides a reliable way to analyze the gas physics in modified gravity using the less demanding and much more efficient pure cold dark matter simulations. Our results therefore have important theoretical and practical implications in constraining gravity using cluster surveys.
An Accurate Temperature Correction Model for Thermocouple Hygrometers
Savage, Michael J.; Cass, Alfred; de Jager, James M.
1982-01-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
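The two-temperature model described amounts to calibrating the slope at two temperatures and interpolating linearly to the measurement temperature; a sketch of that bookkeeping (the calibration numbers are invented, not the paper's data):

```python
def corrected_slope(T, T1, slope1, T2, slope2):
    """Linearly interpolate (or extrapolate) the calibration-curve slope
    from calibrations performed at temperatures T1 and T2."""
    return slope1 + (slope2 - slope1) * (T - T1) / (T2 - T1)

def water_potential(voltage_uv, T, T1=15.0, slope1=0.47, T2=38.0, slope2=0.75):
    """Convert a measured thermocouple voltage (microvolts) to water
    potential using the temperature-corrected slope. The calibration
    slopes here are hypothetical placeholder values."""
    return voltage_uv / corrected_slope(T, T1, slope1, T2, slope2)

slope_25 = corrected_slope(25.0, 15.0, 0.47, 38.0, 0.75)
```

This is why a single calibration at, e.g., 25°C can suffice: the slope at any other temperature is recovered from the correction model rather than re-measured.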
Theoretical Foundation for Weld Modeling
NASA Technical Reports Server (NTRS)
Traugott, S.
1986-01-01
Differential equations describe the physics of tungsten/inert-gas and plasma-arc welding in aluminum. The report collects and describes the necessary theoretical foundation upon which a numerical welding model is constructed for tungsten/inert-gas or plasma-arc welding in aluminum without a keyhole. Governing partial differential equations for the flow of heat, metal, and current are given, together with boundary conditions relevant to the welding process. Numerical estimates of the relative importance of various phenomena and the required properties of 2219 aluminum are included.
Theoretical Models of Generalized Quasispecies.
Wagner, Nathaniel; Atsmon-Raz, Yoav; Ashkenasy, Gonen
2016-01-01
Theoretical modeling of quasispecies has progressed in several directions. In this chapter, we review the works of Emmanuel Tannenbaum, who, together with Eugene Shakhnovich at Harvard University and later with colleagues and students at Ben-Gurion University in Beersheva, implemented one of the more useful approaches, by progressively setting up various formulations of the quasispecies model and solving them analytically. Our review will focus on these papers that have explored new models, assumed the relevant mathematical approximations, and proceeded to analytically solve for the steady-state solutions and run stochastic simulations. When applicable, these models were related to real-life problems and situations, including changing environments, presence of chemical mutagens, evolution of cancer and tumor cells, mutations in Escherichia coli, stem cells, chromosomal instability (CIN), propagation of antibiotic drug resistance, dynamics of bacteria with plasmids, DNA proofreading mechanisms, and more. PMID:26373410
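The simplest formulation in this family is Eigen's two-state (master/mutant) quasispecies model, whose steady state follows from iterating the replication-mutation map and exhibits the error threshold; a minimal sketch (the fitness values and copying fidelity are illustrative):

```python
def quasispecies_steady_state(f_master=2.0, f_mutant=1.0, q=0.95, steps=10000):
    """Iterate x' = W x / (mean fitness) for a two-genotype quasispecies.

    q is the probability the master genome is copied without error;
    back-mutation to the master is neglected, as in the simplest model.
    Returns the steady-state master-genotype fraction."""
    x_master, x_mutant = 0.5, 0.5
    for _ in range(steps):
        new_master = f_master * q * x_master
        new_mutant = f_master * (1 - q) * x_master + f_mutant * x_mutant
        phi = new_master + new_mutant        # mean fitness normalizer
        x_master, x_mutant = new_master / phi, new_mutant / phi
    return x_master

# The master survives only while f_master * q > f_mutant (error threshold).
x_high_fidelity  = quasispecies_steady_state(q=0.95)  # above threshold
x_past_threshold = quasispecies_steady_state(q=0.45)  # below threshold
```

The analytic fixed point for these numbers is x* = (f_master·q - f_mutant)/(f_master - f_mutant) = 0.9, which the iteration reproduces; below the threshold the master fraction collapses to zero.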
Accurate astronomical atmospheric dispersion models in ZEMAX
NASA Astrophysics Data System (ADS)
Spanò, P.
2014-07-01
ZEMAX provides a standard built-in atmospheric model to simulate atmospheric refraction and dispersion. This model has been compared with others to assess its intrinsic accuracy, which is critical for very demanding applications such as ADCs for AO-assisted extremely large telescopes. A revised simple model, based on updated published data on air refractivity, is proposed, implemented using the "Gradient 5" surface of ZEMAX. At large zenith angles (65 deg), discrepancies of up to 100 mas in the differential refraction are expected near the UV atmospheric transmission cutoff. When high-accuracy modeling is required, the revised model should be preferred.
Theoretical model of ``fuzz'' growth
NASA Astrophysics Data System (ADS)
Krasheninnikov, Sergei; Smirnov, Roman
2012-10-01
Recent more detailed experiments on tungsten irradiation with low energy helium plasma, relevant to the near-wall plasma conditions in a magnetic fusion reactor like ITER, demonstrated (e.g. see Ref. 1) a very dramatic change in both the surface morphology and the near-surface material structure of the samples. In particular, it was shown that long (mm-scale) and thin (nm-scale) fiber-like structures filled with nano-bubbles, so-called ``fuzz,'' start to grow. In this work a theoretical model of ``fuzz'' growth [2] describing the main features observed in experiments is presented. The model is based on the assumption of enhanced creep of tungsten containing a significant fraction of helium atoms and clusters. The results of MD simulations [3] support this idea and demonstrate a strong reduction of the yield strength over the whole temperature range. They also show that the ``flow'' of tungsten strongly facilitates coagulation of helium clusters and the formation of nano-bubbles. [1] M. J. Baldwin, et al., J. Nucl. Mater. 390-391 (2009) 885; [2] S. I. Krasheninnikov, Physica Scripta T145 (2011) 014040; [3] R. D. Smirnov and S. I. Krasheninnikov, submitted to J. Nucl. Materials.
NASA Technical Reports Server (NTRS)
Mcgrath, W. R.; Richards, P. L.; Face, D. W.; Prober, D. E.; Lloyd, F. L.
1988-01-01
A systematic study of the gain and noise in superconductor-insulator-superconductor mixers employing Ta-based, Nb-based, and Pb-alloy-based tunnel junctions was made. These junctions displayed both weak and strong quantum effects at a signal frequency of 33 GHz. The effects of energy-gap sharpness and subgap current were investigated and are quantitatively related to mixer performance. Detailed comparisons are made of the mixing results with the predictions of a three-port model approximation to the Tucker theory. Mixer performance was measured with a novel test apparatus which is accurate enough to allow the first quantitative tests of theoretical noise predictions. It is found that the three-port model of the Tucker theory underestimates the mixer noise temperature by a factor of about 2 for all of the mixers. In addition, predicted values of available mixer gain are in reasonable agreement with experiment when quantum effects are weak. However, as quantum effects become strong, the predicted available gain diverges to infinity, in sharp contrast to the experimental results. Predictions of coupled gain do not always show such divergences.
Theoretical Models and QSRR in Retention Modeling of Eight Aminopyridines.
Tumpa, Anja; Kalinić, Marko; Jovanović, Predrag; Erić, Slavica; Rakić, Tijana; Jančić-Stojanović, Biljana; Medenica, Mirjana
2016-03-01
In this article, retention modeling of eight aminopyridines (synthesized and characterized at the Faculty of Pharmacy) in reversed-phase high-performance liquid chromatography (RP-HPLC) was performed. No data related to their retention in the RP-HPLC system were found, so it was recognized as very important to describe their retention behavior. The influences of the pH of the mobile phase and of the organic modifier content on the retention factors were investigated. Two theoretical models for the dependence of the retention factor on organic modifier content were tested. Then, the most reliable and accurate prediction of log k was sought by testing a multiple linear regression quantitative structure-retention relationship model (MLR-QSRR) and a support vector regression machine quantitative structure-retention relationship model (SVM-QSRR). Initially, 400 descriptors were calculated, but four of them (POM, log D, M-SZX/RZX and m-RPCG) were included in the models. SVM-QSRR performed significantly better than the MLR model. Apart from the aminopyridines, four structurally similar substances (indapamide, gliclazide, sulfamethoxazole and furosemide) were followed in the same chromatographic system. They were used as an external validation set for the QSRR model (it performed well within its applicability domain, which was defined using a bounding-box approach). Having described the retention of the eight aminopyridines with both theoretical and QSRR models, further investigations in this field can be conducted. PMID:26590237
APPRENTICESHIP--A THEORETICAL MODEL.
ERIC Educational Resources Information Center
DUFTY, NORMAN F.
AN INQUIRY INTO RECRUITMENT OF APPRENTICES TO SKILLED TRADES IN WESTERN AUSTRALIA INDICATED LITTLE CORRELATION BETWEEN THE NUMBER OF NEW APPRENTICES AND THE LEVEL OF INDUSTRIAL EMPLOYMENT OR THE TOTAL NUMBER OF APPRENTICES. THIS ARTICLE ATTEMPTS TO OUTLINE A MATHEMATICAL MODEL OF AN APPRENTICESHIP SYSTEM AND DISCUSS ITS IMPLICATIONS. THE MODEL, A…
Theoretical Modelling of Hot Stars
NASA Astrophysics Data System (ADS)
Najarro, F.; Hillier, D. J.; Figer, D. F.; Geballe, T. R.
1999-06-01
Recent progress towards model atmospheres for hot stars is discussed. A new generation of NLTE wind blanketed models, together with high S/N spectra of the hot star population in the central parsec, which are currently being obtained, will allow metal abundance determinations (Fe, Si, Mg, Na, etc). Metallicity studies of hot stars in the IR will provide major constraints not only on the theory of evolution of massive stars but also on our efforts to solve the puzzle of the central parsecs of the Galaxy. Preliminary results suggest that the metallicity of the Pistol Star is 3 times solar, thus indicating strong chemical enrichment of the gas in the Galactic Center.
Theoretical aspects of an electricity marginal cost model
Oyama, T.
1986-01-01
A separable programming model has been built to estimate electricity marginal costs. The model can be solved by applying linear programming techniques; hence marginal costs are obtained from the shadow prices of the model's optimal solution. In order to obtain a more accurate and more detailed composition of electricity marginal costs, the shadow prices are explained mathematically and rigorously from the model's structural point of view. Theoretical aspects of our electricity marginal cost model are investigated by applying the theory of linear programming. Furthermore, various types of mathematical expressions are also shown, with their interpretation in the real power system.
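The shadow-price reading of marginal cost can be illustrated without an LP solver: in a merit-order dispatch the dual of the demand constraint equals the cost of the price-setting unit, which one can check numerically by perturbing demand; a toy sketch (the plant data are invented):

```python
def dispatch_cost(demand, plants):
    """Least-cost dispatch of plants given as (capacity_mw, cost_per_mwh)
    tuples, loaded in merit order (cheapest first). Returns total cost."""
    total = 0.0
    remaining = demand
    for capacity, cost in sorted(plants, key=lambda p: p[1]):
        used = min(capacity, remaining)
        total += used * cost
        remaining -= used
        if remaining <= 0:
            break
    return total

# Hypothetical fleet: base, mid-merit, and peaking plants.
PLANTS = [(100.0, 20.0), (50.0, 35.0), (50.0, 60.0)]

# Numerical shadow price of the demand constraint: dC/d(demand).
marginal_cost = dispatch_cost(121.0, PLANTS) - dispatch_cost(120.0, PLANTS)
```

At 120 MW the mid-merit plant is partially loaded, so the shadow price equals its 35/MWh running cost; in an LP formulation this is exactly the dual variable of the demand-balance constraint.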
Water wave model with accurate dispersion and vertical vorticity
NASA Astrophysics Data System (ADS)
Bokhove, Onno
2010-05-01
Cotter and Bokhove (Journal of Engineering Mathematics 2010) derived a variational water wave model with accurate dispersion and vertical vorticity. In one limit, it leads to Luke's variational principle for potential-flow water waves. In another limit, it leads to the depth-averaged shallow water equations including vertical vorticity. Presently, the focus will be on the Hamiltonian formulation of the variational model and its boundary conditions.
Theoretical Modeling of Interstellar Chemistry
NASA Technical Reports Server (NTRS)
Charnley, Steven
2009-01-01
The chemistry of complex interstellar organic molecules will be described. Gas phase processes that may build large carbon-chain species in cold molecular clouds will be summarized. Catalytic reactions on grain surfaces can lead to a large variety of organic species, and models of molecule formation by atom additions to multiply-bonded molecules will be presented. The subsequent desorption of these mixed molecular ices can initiate a distinctive organic chemistry in hot molecular cores. The general ion-molecule pathways leading to even larger organics will be outlined. The predictions of this theory will be compared with observations to show how possible organic formation pathways in the interstellar medium may be constrained. In particular, the success of the theory in explaining trends in the known interstellar organics, in predicting recently-detected interstellar molecules, and, just as importantly, non-detections, will be discussed.
An Accurate and Dynamic Computer Graphics Muscle Model
NASA Technical Reports Server (NTRS)
Levine, David Asher
1997-01-01
A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.
Theoretical models of helicopter rotor noise
NASA Technical Reports Server (NTRS)
Hawkings, D. L.
1978-01-01
For low speed rotors, it is shown that unsteady load models are only partially successful in predicting experimental levels. A theoretical model is presented which leads to the concept of unsteady thickness noise. This gives better agreement with test results. For high speed rotors, it is argued that present models are incomplete and that other mechanisms are at work. Some possibilities are briefly discussed.
Local Debonding and Fiber Breakage in Composite Materials Modeled Accurately
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2001-01-01
A prerequisite for full utilization of composite materials in aerospace components is accurate design and life prediction tools that enable the assessment of component performance and reliability. Such tools assist both structural analysts, who design and optimize structures composed of composite materials, and materials scientists who design and optimize the composite materials themselves. NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) software package (http://www.grc.nasa.gov/WWW/LPB/mac) addresses this need for composite design and life prediction tools by providing a widely applicable and accurate approach to modeling composite materials. Furthermore, MAC/GMC serves as a platform for incorporating new local models and capabilities that are under development at NASA, thus enabling these new capabilities to progress rapidly to a stage in which they can be employed by the code's end users.
Dimensions of Black Suicide: A Theoretical Model.
ERIC Educational Resources Information Center
Davis, Robert; Short, James F., Jr.
This paper develops a theoretical model of suicide, based on the theory of "external restraints" proposed by previous researchers, A.F. Henry and J.F. Short, Jr., and applies the model to a study of black suicides in Orleans Parish, Louisiana. The focus of the study is on the complexity of relationships between dimensions of black suicide and the…
More-Accurate Model of Flows in Rocket Injectors
NASA Technical Reports Server (NTRS)
Hosangadi, Ashvin; Chenoweth, James; Brinckman, Kevin; Dash, Sanford
2011-01-01
An improved computational model for simulating flows in liquid-propellant injectors in rocket engines has been developed. Models like this one are needed for predicting fluxes of heat in, and performances of, the engines. An important part of predicting performance is predicting fluctuations of temperature, fluctuations of concentrations of chemical species, and effects of turbulence on diffusion of heat and chemical species. Customarily, diffusion effects are represented by parameters known in the art as the Prandtl and Schmidt numbers. Prior formulations include ad hoc assumptions of constant values of these parameters, but these assumptions and, hence, the formulations, are inaccurate for complex flows. In the improved model, these parameters are neither constant nor specified in advance: instead, they are variables obtained as part of the solution. Consequently, this model represents the effects of turbulence on diffusion of heat and chemical species more accurately than prior formulations do, and may enable more-accurate prediction of mixing and flows of heat in rocket-engine combustion chambers. The model has been implemented within CRUNCH CFD, a proprietary computational fluid dynamics (CFD) computer program, and has been tested within that program. The model could also be implemented within other CFD programs.
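In gradient-diffusion closures of this kind, the turbulent Prandtl number sets the modeled heat flux, so treating it as a local variable rather than a constant directly changes the predicted flux; a minimal sketch of that sensitivity (all values are illustrative, not from the CRUNCH CFD model):

```python
def turbulent_heat_flux(mu_t, cp, dT_dy, pr_t):
    """Gradient-diffusion model: q_t = -(mu_t * cp / Pr_t) * dT/dy.
    Smaller Pr_t means more effective turbulent heat diffusion."""
    return -(mu_t * cp / pr_t) * dT_dy

# Hypothetical local flow state: eddy viscosity, specific heat, T gradient.
mu_t, cp, dT_dy = 0.02, 1005.0, -50.0

q_constant = turbulent_heat_flux(mu_t, cp, dT_dy, pr_t=0.9)  # classic constant
q_variable = turbulent_heat_flux(mu_t, cp, dT_dy, pr_t=0.6)  # locally solved value
```

The same gradient therefore yields a substantially larger modeled flux when the locally computed Pr_t drops below the traditional constant, which is the mixing effect the improved model aims to capture.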
On the importance of having accurate data for astrophysical modelling
NASA Astrophysics Data System (ADS)
Lique, Francois
2016-06-01
The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from the far infrared to the sub-millimeter, with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data, and I will show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for molecular line modelling beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star-forming conditions, have allowed solving the problem of their respective abundances in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present recent work on the ortho-para H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.
Anisotropic Turbulence Modeling for Accurate Rod Bundle Simulations
Baglietto, Emilio
2006-07-01
An improved anisotropic eddy viscosity model has been developed for accurate prediction of the thermal-hydraulic performance of nuclear reactor fuel assemblies. The proposed model adopts a non-linear formulation of the stress-strain relationship in order to reproduce anisotropic phenomena, in combination with an optimized low-Reynolds-number formulation based on Direct Numerical Simulation (DNS) data to produce correct damping of the turbulent viscosity in the near-wall region. This work underlines the importance of accurate anisotropic modeling to faithfully reproduce the scale of the turbulence-driven secondary flows inside the bundle subchannels, by comparison with various isothermal and heated experimental cases. The very low-scale secondary motion is responsible for the increased turbulence transport, which produces a noticeable homogenization of the velocity distribution and consequently of the circumferential cladding temperature distribution, which is of main interest in bundle design. Various fully developed bare bundle test cases are shown for different geometrical and flow conditions, where the proposed model shows clearly improved predictions, in close agreement with experimental findings, for regular as well as distorted geometries. Finally, the applicability of the model to practical bundle calculations is evaluated through its application in the high-Reynolds form on coarse grids, with excellent results. (author)
On the accurate theoretical determination of the static hyperpolarizability of trans-butadiene
NASA Astrophysics Data System (ADS)
Maroulis, George
1999-07-01
Finite-field many-body perturbation theory and coupled cluster calculations are reported for the static second dipole hyperpolarizability γ_αβγδ of trans-butadiene. A very large basis set of [9s6p4d1f/6s3p1d] size (336 contracted Gaussian-type functions) should lead to self-consistent field (SCF) values of near-Hartree-Fock quality. We report γ_xxxx = 6.19, γ_xxxz = -0.44, γ_xxyy = 3.42, γ_zzxx = 2.07, γ_xyyz = -0.50, γ_xzzz = 1.73, γ_yyyy = 14.72, γ_yyzz = 8.46, γ_zzzz = 24.10 and γ̄ = 14.58, in units of 10^3 e^4 a_0^4 E_h^-3, at the experimental geometry (molecule in the xz plane with z as the main axis). γ̄ = (14.6 ± 0.4) × 10^3 e^4 a_0^4 E_h^-3 should be a very reliable estimate of the Hartree-Fock limit of the mean hyperpolarizability. Keeping all other molecular geometry parameters constant, we find that near the Hartree-Fock limit the mean hyperpolarizability varies with the C=C bond length as 10^-3 × γ̄(R_C=C)/e^4 a_0^4 E_h^-3 = 14.93 + 31.78 ΔR + 30.88 ΔR^2 - 2.96 ΔR^3 and with the C-C bond length as 10^-3 × γ̄(R_C-C)/e^4 a_0^4 E_h^-3 = 14.93 - 7.20 ΔR + 3.04 ΔR^2, where ΔR/a_0 is the displacement from the respective experimental value. The dependence of the components of γ_αβγδ on the molecular geometry parameters is not uniform. Electron correlation corrections have been calculated at various molecular geometries at the coupled-cluster single, double and perturbatively linked triple excitations [CCSD(T)] level of theory for all independent components of γ_αβγδ. In absolute terms, electron correlation affects γ_zzzz strongly, γ_xxxx less strongly, and the out-of-plane component γ_yyyy even less strongly. The present analysis suggests a conservative estimate of (3.0 ± 0.6) × 10^3 e^4 a_0^4 E_h^-3 for the electron correlation correction to γ̄ at the experimental molecular geometry. Most of this value is appropriate to γ_zzzz. A static limit of γ̄ = (17.6 ± 1.0) × 10^3 e^4 a_0^4 E_h^-3 is advanced (neglecting vibrational averaging). Even if a crude theoretical estimate of the dispersion of γ̄ at 1064 nm is added to this value, the
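The two bond-length fits reported in this abstract are plain polynomials in the displacement ΔR, so they are trivial to evaluate; the following minimal sketch uses only the coefficients quoted above (the function names are ours, and the returned values are in units of 10^3 e^4 a_0^4 E_h^-3):

```python
def gamma_cc(dR):
    """Mean hyperpolarizability vs. C=C bond-length displacement dR
    (in bohr), per the cubic fit quoted in the abstract."""
    return 14.93 + 31.78 * dR + 30.88 * dR**2 - 2.96 * dR**3

def gamma_cs(dR):
    """Same quantity vs. C-C bond-length displacement (quadratic fit)."""
    return 14.93 - 7.20 * dR + 3.04 * dR**2
```

Consistent with the abstract's remark that the geometry dependence is not uniform, stretching the C=C bond raises γ̄ steeply while stretching the C-C bond lowers it slightly.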
Hybrid quantum teleportation: A theoretical model
Takeda, Shuntaro; Mizuta, Takahiro; Fuwa, Maria; Yoshikawa, Jun-ichi; Yonezawa, Hidehiro; Furusawa, Akira
2014-12-04
Hybrid quantum teleportation – continuous-variable teleportation of qubits – is a promising approach for deterministically teleporting photonic qubits. We propose how to implement it with current technology. Our theoretical model shows that faithful qubit transfer can be achieved for this teleportation by choosing an optimal gain for the teleporter’s classical channel.
Accurate pressure gradient calculations in hydrostatic atmospheric models
NASA Technical Reports Server (NTRS)
Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet
1987-01-01
A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.
Mouse models of human AML accurately predict chemotherapy response
Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.
2009-01-01
The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691
An accurate model potential for alkali neon systems.
Zanuttini, D; Jacquet, E; Giglio, E; Douady, J; Gervais, B
2009-12-01
We present a detailed investigation of the ground and lowest excited states of M-Ne dimers, for M = Li, Na, and K. We show that the potential energy curves of these van der Waals dimers can be obtained accurately by treating the alkali-neon systems as one-electron systems. Following previous authors, the model describes the evolution of the alkali valence electron in the combined potentials of the alkali and neon cores by means of core polarization pseudopotentials. The key parameter for an accurate model is the M⁺-Ne potential energy curve, which was obtained from ab initio CCSD(T) calculations using a large basis set. For each M-Ne dimer, a systematic comparison with ab initio computations of the potential energy curves for the X, A, and B states shows the remarkable accuracy of the model. The vibrational analysis and the comparison with existing experimental data strengthen this conclusion and allow a precise assignment of the vibrational levels. PMID:19968334
Turbulence Models for Accurate Aerothermal Prediction in Hypersonic Flows
NASA Astrophysics Data System (ADS)
Zhang, Xiang-Hong; Wu, Yi-Zao; Wang, Jiang-Feng
Accurate description of the aerodynamic and aerothermal environment is crucial to the integrated design and optimization of high-performance hypersonic vehicles. In the simulation of the aerothermal environment, the effect of viscosity is crucial, and turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating. In this paper, three turbulence models were studied: the one-equation eddy viscosity transport model of Spalart-Allmaras, the Wilcox k-ω model and the Menter SST model. For the k-ω model and the SST model, compressibility correction, pressure dilatation and low-Reynolds-number correction were considered, and the influence of these corrections on flow properties is discussed by comparison with the results without corrections. The emphasis is on the assessment and evaluation of the turbulence models in predicting heat transfer across a range of hypersonic flows, with comparison to experimental data. This will enable establishing a factor of safety for the design of thermal protection systems of hypersonic vehicles.
Hybrid rocket engine, theoretical model and experiment
NASA Astrophysics Data System (ADS)
Chelaru, Teodor-Viorel; Mingireanu, Florin
2011-06-01
The purpose of this paper is to build a theoretical model for the hybrid rocket engine/motor and to validate it using experimental results. The work approaches the main problems of the hybrid motor: scalability, stability/controllability of the operating parameters, and increasing the solid fuel regression rate. First, we focus on theoretical models for the hybrid rocket motor and compare the results with experimental data already available from various research groups. A primary computation model is presented together with results from a numerical algorithm based on a computational model. We present theoretical predictions for several commercial hybrid rocket motors of different scales and compare them with experimental measurements of those motors. Next, the paper focuses on the tribrid rocket motor concept, which can improve thrust controllability through supplementary liquid fuel injection. A complementary computation model is also presented to estimate the regression-rate increase of solid fuel doped with oxidizer. Finally, the stability of the hybrid rocket motor is investigated using Lyapunov theory. The stability coefficients obtained depend on the burning parameters, and the stability and command matrices are identified. The paper presents the input data of the model thoroughly, which ensures the reproducibility of the numerical results by independent researchers.
Theoretical models of neural circuit development.
Simpson, Hugh D; Mortimer, Duncan; Goodhill, Geoffrey J
2009-01-01
Proper wiring up of the nervous system is critical to the development of organisms capable of complex and adaptable behaviors. Besides the many experimental advances in determining the cellular and molecular machinery that carries out this remarkable task precisely and robustly, theoretical approaches have also proven to be useful tools in analyzing this machinery. A quantitative understanding of these processes can allow us to make predictions, test hypotheses, and appraise established concepts in a new light. Three areas that have been fruitful in this regard are axon guidance, retinotectal mapping, and activity-dependent development. This chapter reviews some of the contributions made by mathematical modeling in these areas, illustrated by important examples of models in each section. For axon guidance, we discuss models of how growth cones respond to their environment, and how this environment can place constraints on growth cone behavior. Retinotectal mapping looks at computational models for how topography can be generated in populations of neurons based on molecular gradients and other mechanisms such as competition. In activity-dependent development, we discuss theoretical approaches largely based on Hebbian synaptic plasticity rules, and how they can generate maps in the visual cortex very similar to those seen in vivo. We show how theoretical approaches have substantially contributed to the advancement of developmental neuroscience, and discuss future directions for mathematical modeling in the field. PMID:19427515
Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.
Wu, Tim; Hung, Alice; Mithraratne, Kumar
2014-11-01
This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of the skin, the subcutaneous layer and the superficial musculo-aponeurotic system. Embedded within this continuum mesh are 20 pairs of facial muscles which drive facial expressions. These muscles were treated as transversely isotropic, and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscles and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips, between the eyelids, and between the superficial soft-tissue continuum and the deep rigid skeletal bones were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data. PMID:26355331
Simple theoretical models for composite rotor blades
NASA Technical Reports Server (NTRS)
Valisetty, R. R.; Rehfield, L. W.
1984-01-01
The development of theoretical rotor blade structural models for designs based upon composite construction is discussed. Care was exercised to include a number of nonclassical effects that previous experience indicated would be potentially important to account for. A model representative of the size of a main rotor blade is analyzed in order to assess the importance of various influences. The findings of this model study suggest that, for the slenderness and closed-cell construction considered, the refinements are of little importance and a classical-type theory is adequate. The potential of elastic tailoring is dramatically demonstrated, so the generality of arbitrary ply layup in the cell wall is needed to exploit this opportunity.
Theoretical modeling for the stereo mission
NASA Astrophysics Data System (ADS)
Aschwanden, Markus J.; Burlaga, L. F.; Kaiser, M. L.; Ng, C. K.; Reames, D. V.; Reiner, M. J.; Gombosi, T. I.; Lugaz, N.; Manchester, W.; Roussev, I. I.; Zurbuchen, T. H.; Farrugia, C. J.; Galvin, A. B.; Lee, M. A.; Linker, J. A.; Mikić, Z.; Riley, P.; Alexander, D.; Sandman, A. W.; Cook, J. W.; Howard, R. A.; Odstrčil, D.; Pizzo, V. J.; Kóta, J.; Liewer, P. C.; Luhmann, J. G.; Inhester, B.; Schwenn, R. W.; Solanki, S. K.; Vasyliunas, V. M.; Wiegelmann, T.; Blush, L.; Bochsler, P.; Cairns, I. H.; Robinson, P. A.; Bothmer, V.; Kecskemety, K.; Llebaria, A.; Maksimovic, M.; Scholer, M.; Wimmer-Schweingruber, R. F.
2008-04-01
We summarize the theory and modeling efforts for the STEREO mission, which will be used to interpret the data of both the remote-sensing (SECCHI, SWAVES) and in-situ instruments (IMPACT, PLASTIC). The modeling includes the coronal plasma, in both open and closed magnetic structures, and the solar wind and its expansion outwards from the Sun, which defines the heliosphere. Particular emphasis is given to modeling of dynamic phenomena associated with the initiation and propagation of coronal mass ejections (CMEs). The modeling of the CME initiation includes magnetic shearing, kink instability, filament eruption, and magnetic reconnection in the flaring lower corona. The modeling of CME propagation entails interplanetary shocks, interplanetary particle beams, solar energetic particles (SEPs), geoeffective connections, and space weather. This review describes mostly existing models of groups that have committed their work to the STEREO mission, but is by no means exhaustive or comprehensive regarding alternative theoretical approaches.
Inverter Modeling For Accurate Energy Predictions Of Tracking HCPV Installations
NASA Astrophysics Data System (ADS)
Bowman, J.; Jensen, S.; McDonald, Mark
2010-10-01
High efficiency high concentration photovoltaic (HCPV) solar plants of megawatt scale are now operational, and opportunities for expanded adoption are plentiful. However, effective bidding for sites requires reliable prediction of energy production. HCPV module nameplate power is rated for specific test conditions; however, instantaneous HCPV power varies due to site specific irradiance and operating temperature, and is degraded by soiling, protective stowing, shading, and electrical connectivity. These factors interact with the selection of equipment typically supplied by third parties, e.g., wire gauge and inverters. We describe a time sequence model accurately accounting for these effects that predicts annual energy production, with specific reference to the impact of the inverter on energy output and interactions between system-level design decisions and the inverter. We will also show two examples, based on an actual field design, of inverter efficiency calculations and the interaction between string arrangements and inverter selection.
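A time-sequence energy model of the kind described amounts to stepping through site irradiance data and chaining the multiplicative loss factors before applying the inverter's (possibly load-dependent) efficiency at each step. The following is a minimal sketch of that structure only; all factor values, function names and parameters are illustrative assumptions, not the authors' model:

```python
def annual_energy_kwh(dni_series_wm2, aperture_m2, module_eff,
                      soiling=0.97, availability=0.98,
                      inverter_eff=lambda p_kw: 0.96,
                      step_hours=1.0):
    """Chain DC power through multiplicative derates, then apply a
    (possibly load-dependent) inverter efficiency at each time step.
    dni_series_wm2: direct normal irradiance per step, in W/m^2."""
    energy = 0.0
    for dni in dni_series_wm2:
        p_dc_kw = dni * aperture_m2 * module_eff / 1000.0  # ideal DC power
        p_dc_kw *= soiling * availability                  # system derates
        energy += p_dc_kw * inverter_eff(p_dc_kw) * step_hours
    return energy
```

Passing an `inverter_eff` function rather than a constant is what lets such a model capture the inverter/string-design interactions the abstract emphasizes, e.g. reduced conversion efficiency at partial load.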
Accurate, low-cost 3D-models of gullies
NASA Astrophysics Data System (ADS)
Onnen, Nils; Gronz, Oliver; Ries, Johannes B.; Brings, Christine
2015-04-01
Soil erosion is a widespread problem in arid and semi-arid areas, and its most severe form is gully erosion. Gullies often cut into agricultural farmland and can make an area completely unproductive. To understand the development and processes inside and around gullies, we calculated detailed 3D-models of gullies in the Souss Valley in South Morocco. Near Taroudant, we had four study areas with five gullies differing in size, volume and activity. Using a Canon HF G30 camcorder, we recorded varying series of Full HD videos at 25 fps. Afterwards, we used the Structure from Motion (SfM) method to create the models. To generate accurate models while maintaining feasible runtimes, it is necessary to select around 1500-1700 images from the video, with an overlap of at least 80% between neighboring images. In addition, it is very important to avoid selecting photos that are blurry or out of focus. Nearby pixels of a blurry image tend to have similar color values, which is why we used a MATLAB script to compare the derivatives of the images: the higher the sum of the derivatives, the sharper the image of a similar scene. The script subdivides the video into image intervals and selects, from each interval, the image with the highest sum. For example, a 20 min video at 25 fps yields 30,000 single images; the program inspects the first 20 images, saves the sharpest, moves on to the next 20 images, and so on. Using this algorithm, we selected 1500 images for our modeling. With VisualSFM, we calculated features and the matches between all images and produced a point cloud. MeshLab was then used to build a surface from it using the Poisson surface reconstruction approach. Afterwards, we are able to calculate the size and the volume of the gullies. It is also possible to determine soil erosion rates if we compare the data with old recordings. The final step would be the combination of the terrestrial data with the data from our aerial photography. So far, the method works well and we
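The interval-based sharpness selection described in this abstract (sum of image derivatives as a sharpness score, best frame per fixed-size interval) can be sketched in a few lines; this is a minimal Python illustration of the idea, not the authors' MATLAB script, and the interval size of 20 follows their example:

```python
import numpy as np

def sharpness(img):
    """Sharpness score: sum of absolute horizontal and vertical
    intensity differences (a simple image-derivative metric).
    Blurry frames, whose nearby pixels are similar, score low."""
    g = img.astype(float)
    return np.abs(np.diff(g, axis=1)).sum() + np.abs(np.diff(g, axis=0)).sum()

def select_frames(frames, interval=20):
    """From each consecutive block of `interval` frames,
    keep the one with the highest sharpness score."""
    selected = []
    for start in range(0, len(frames), interval):
        block = frames[start:start + interval]
        selected.append(max(block, key=sharpness))
    return selected
```

Applied to a 30,000-frame video with `interval=20`, this yields the 1500 candidate images mentioned above.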
Towards Accurate Molecular Modeling of Plastic Bonded Explosives
NASA Astrophysics Data System (ADS)
Chantawansri, T. L.; Andzelm, J.; Taylor, D.; Byrd, E.; Rice, B.
2010-03-01
There is substantial interest in identifying the controlling factors that influence the susceptibility of polymer bonded explosives (PBXs) to accidental initiation. Numerous Molecular Dynamics (MD) simulations of PBXs using the COMPASS force field have been reported in recent years, where the validity of the force field in modeling the solid energetic material (EM) fill has been judged solely on its ability to reproduce lattice parameters, which is an insufficient metric. The performance of the COMPASS force field in modeling EMs and the polymeric binder has been assessed by calculating structural, thermal, and mechanical properties, where only fair agreement with experimental data is obtained. We performed MD simulations using the COMPASS force field for the polymer binder hydroxyl-terminated polybutadiene and five EMs: cyclotrimethylenetrinitramine, 1,3,5,7-tetranitro-1,3,5,7-tetraazacyclooctane, 2,4,6,8,10,12-hexanitrohexaazaisowurtzitane, 2,4,6-trinitro-1,3,5-benzenetriamine, and pentaerythritol tetranitrate. Predicted EM crystallographic and molecular structural parameters, as well as calculated properties of the binder, will be compared with experimental results for different simulation conditions. We also present novel simulation protocols that improve agreement between experimental and computational results, thus leading to accurate modeling of PBXs.
Propagation studies using a theoretical ionosphere model
NASA Technical Reports Server (NTRS)
Lee, M.
1973-01-01
The mid-latitude ionospheric and neutral atmospheric models are coupled with an advanced three dimensional ray tracing program to see what success would be obtained in predicting the wave propagation conditions and to study to what extent the use of theoretical ionospheric models is practical. The Penn State MK 1 ionospheric model, the Mitra-Rowe D region model, and the Groves' neutral atmospheric model are used throughout this work to represent the real electron densities and collision frequencies. The Faraday rotation and differential Doppler velocities from satellites, the propagation modes for long distance high frequency propagation, the group delays for each mode, the ionospheric absorption, and the spatial loss are all predicted.
Theoretical models for polarimetric radar clutter
NASA Technical Reports Server (NTRS)
Borgeaud, M.; Shin, R. T.; Kong, J. A.
1987-01-01
The Mueller matrix and polarization covariance matrix are described for polarimetric radar systems. The clutter is modeled by a layer of random permittivity described by a three-dimensional correlation function with a variance and horizontal and vertical correlation lengths. This model is applied, using wave theory with Born approximations carried to second order, to find the backscattering elements of the polarimetric matrices. It is found that 8 of the 16 elements of the Mueller matrix are identically zero, corresponding to a covariance matrix with four zero elements. Theoretical predictions are matched with experimental data for vegetation fields.
Leidenfrost effect: accurate drop shape modeling and new scaling laws
NASA Astrophysics Data System (ADS)
Sobac, Benjamin; Rednikov, Alexey; Dorbolo, Stéphane; Colinet, Pierre
2014-11-01
In this study, we theoretically investigate the shape of a drop in the Leidenfrost state, focusing on the geometry of the vapor layer. The drop geometry is modeled by numerically matching the solution for the hydrostatic shape of a superhydrophobic drop (for the upper part) with the solution of the lubrication equation for the vapor flow underlying the drop (for the bottom part). The results highlight that the vapor layer, fed by evaporation, forms a concave depression in the drop interface that becomes increasingly marked with drop size. The vapor layer then consists of a gas pocket in the center and a thin annular neck surrounding it. The film thickness increases with the size of the drop, and the thickness at the neck is of the order of 10-100 μm in the case of water. The model is compared to recent experimental results [Burton et al., Phys. Rev. Lett., 074301 (2012)] and shows excellent agreement, without any fitting parameter. New scaling laws also emerge from this model. The geometry of the vapor pocket is only weakly dependent on the superheat (and thus on the evaporation rate), this weak dependence being more pronounced in the neck region. In turn, the vapor layer characteristics strongly depend on the drop size.
An accurate and simple quantum model for liquid water.
Paesani, Francesco; Zhang, Wei; Case, David A; Cheatham, Thomas E; Voth, Gregory A
2006-11-14
The path-integral molecular dynamics and centroid molecular dynamics methods have been applied to investigate the behavior of liquid water at ambient conditions starting from a recently developed simple point charge/flexible (SPC/Fw) model. Several quantum structural, thermodynamic, and dynamical properties have been computed and compared to the corresponding classical values, as well as to the available experimental data. The path-integral molecular dynamics simulations show that the inclusion of quantum effects results in a less structured liquid with a reduced amount of hydrogen bonding in comparison to its classical analog. The nuclear quantization also leads to a smaller dielectric constant and a larger diffusion coefficient relative to the corresponding classical values. Collective and single-molecule time correlation functions show a faster decay than their classical counterparts. Good agreement with the experimental measurements in the low-frequency region is obtained for the quantum infrared spectrum, which also shows a higher intensity and a redshift relative to its classical analog. A modification of the original parametrization of the SPC/Fw model is suggested and tested in order to construct an accurate quantum model, called q-SPC/Fw, for liquid water. The quantum results for several thermodynamic and dynamical properties computed with the new model are shown to be in significantly better agreement with the experimental data. Finally, a force-matching approach was applied to the q-SPC/Fw model to derive an effective quantum force field for liquid water in which the effects due to the nuclear quantization are explicitly distinguished from those due to the underlying molecular interactions. Thermodynamic and dynamical properties computed using standard classical simulations with this effective quantum potential are found to be in excellent agreement with those obtained from significantly more computationally demanding full centroid molecular dynamics simulations.
Theoretical models for supercritical fluid extraction.
Huang, Zhen; Shi, Xiao-Han; Jiang, Wei-Juan
2012-08-10
For the proper design of supercritical fluid extraction processes, it is essential to have a sound knowledge of the mass transfer mechanism of the extraction process and its appropriate mathematical representation. In this paper, advances in and applications of kinetic models for describing supercritical fluid extraction from various solid matrices are presented. The theoretical models overviewed here include the hot ball diffusion, broken and intact cell, shrinking core and some relatively simple models. The mathematical representations of these models are interpreted in detail, along with their assumptions, parameter identification and application examples. Extraction of the analyte from the solid matrix by means of a supercritical fluid includes the dissolution of the analyte from the solid, the analyte's diffusion in the matrix and its transport to the bulk supercritical fluid. The mechanisms involved in a mass transfer model are discussed in terms of external mass transfer resistance, internal mass transfer resistance, solute-solid interactions and axial dispersion. The correlations of the external mass transfer coefficient and the axial dispersion coefficient with certain dimensionless numbers are also discussed. Among these models, the broken and intact cell model seems to be the most relevant, as it provides a realistic description of the plant material structure for better understanding the mass transfer kinetics, and it has therefore been widely employed for modeling supercritical fluid extraction of natural materials. PMID:22560346
A Theoretical Model of Water and Trade
NASA Astrophysics Data System (ADS)
Dang, Q.; Konar, M.; Reimer, J.; Di Baldassarre, G.; Lin, X.; Zeng, R.
2015-12-01
Water is an essential factor of agricultural production. Agriculture, in turn, is globalized through the trade of food commodities. In this paper, we develop a theoretical model of a small open economy that explicitly incorporates water resources. The model emphasizes three tradeoffs involving water decision-making that are important yet not always considered within the existing literature. One tradeoff focuses on competition for water among different sectors when there is a shock to one sector only, such as trade liberalization and a consequent higher demand for its product. A second tradeoff concerns the possibility that there may or may not be substitutes for water, such as increased use of sophisticated irrigation technology as a means to increase crop output in the absence of higher water availability. A third tradeoff explores the possibility that the rest of the world can be a source of supply or demand for a country's water-using products. A number of propositions are proven. For example, while trade liberalization tends to increase water use, increased pressure on water supplies can be moderated by way of a tax that is derivable from observable economic phenomena. Another example is that increased riskiness of water availability tends to cause water users to use less water than would be the case under profit maximization. These theoretical model results generate hypotheses that can be tested empirically in future work.
Requirements for theoretical models of outflows
NASA Technical Reports Server (NTRS)
Linsky, Jeffrey L.
1988-01-01
Recent observational and theoretical investigations of astrophysical mass outflows are reviewed, with a focus on the basic physical principles. Specific limitations on the observational data and their interpretation are listed and discussed. Modeling problems considered include the role of the critical point in determining the mass-loss rate and terminal velocity, the physical processes controlling density at the critical point, the possible coexistence of multiple mass-loss mechanisms, time scales, instabilities and phase changes, multiphase atmospheres and winds, the definition of geometries, the role of the environment, explosive transient events, stochastic phenomena, mode-mode coupling and damping processes, departures from ionization equilibrium, and nonthermal phenomena.
A theoretical model of water and trade
NASA Astrophysics Data System (ADS)
Dang, Qian; Konar, Megan; Reimer, Jeffrey J.; Di Baldassarre, Giuliano; Lin, Xiaowen; Zeng, Ruijie
2016-03-01
Water is an essential input for agricultural production. Agriculture, in turn, is globalized through the trade of agricultural commodities. In this paper, we develop a theoretical model that emphasizes four tradeoffs involving water-use decision-making that are important yet not always considered in a consistent framework. One tradeoff focuses on competition for water among different economic sectors. A second tradeoff examines the possibility that certain types of agricultural investments can offset water use. A third tradeoff explores the possibility that the rest of the world can be a source of supply or demand for a country's water-using commodities. The fourth tradeoff concerns how variability in water supplies influences farmer decision-making. We show conditions under which trade liberalization affects water use. Two policy scenarios to reduce water use are evaluated. First, we derive a target tax that reduces water use without offsetting the gains from trade liberalization, although important tradeoffs exist between economic performance and resource use. Second, we show how subsidization of water-saving technologies can allow producers to use less water without reducing agricultural production, making such subsidization an indirect means of influencing water use decision-making. Finally, we outline conditions under which riskiness of water availability affects water use. These theoretical model results generate hypotheses that can be tested empirically in future work.
Theoretical Models of the Galactic Bulge
NASA Astrophysics Data System (ADS)
Shen, Juntai; Li, Zhao-Yu
Near-infrared images from the COBE satellite presented the first clear evidence that our Milky Way galaxy contains a boxy-shaped bulge. Recent years have witnessed a gradual paradigm shift in our understanding of the formation and evolution of the Galactic bulge. Bulges were commonly believed to form in the dynamical violence of galaxy mergers. However, it has become increasingly clear that the main body of the Milky Way bulge is not a classical bulge made by previous major mergers; instead, it appears to be a bar seen somewhat end-on. The Milky Way bar can form naturally from a precursor disc and thicken vertically through the internal firehose/buckling instability, giving rise to the boxy appearance. This picture is supported by many lines of evidence, including the asymmetric parallelogram shape, the strong cylindrical rotation (i.e., nearly constant rotation regardless of the height above the disc plane), the existence of an intriguing X-shaped structure in the bulge, and perhaps the metallicity gradients. We review the major theoretical models and techniques used to understand the Milky Way bulge. Despite recent theoretical progress, a complete bulge formation model that explains the full kinematics and metallicity distribution is still lacking. Upcoming large surveys are expected to shed new light on the formation history of the Galactic bulge.
A Theoretical Model of Water and Trade
NASA Astrophysics Data System (ADS)
Dang, Qian; Zeng, Ruijie; Lin, Xiaowen; Di Baldassarre, Giuliano; Konar, Megan
2014-05-01
Water is an essential factor of agricultural production. Agriculture, in turn, is globalized through the trade of food commodities. There is an extensive literature detailing the direct and local relationships between water and agricultural production. Here, we expand upon this important literature to understand how the globalized food economy interacts with water resources. In particular, we seek to understand the following questions: What is the impact of agricultural trade on water resources? How do water resources impact agricultural trade? Thus, we aim to explore the bidirectional feedbacks between water resources and food trade, using a socio-hydrologic framework. To do this, we develop a theoretical model of international trade that explicitly incorporates water resources.
Explaining Facial Imitation: A Theoretical Model
Meltzoff, Andrew N.; Moore, M. Keith
2013-01-01
A long-standing puzzle in developmental psychology is how infants imitate gestures they cannot see themselves perform (facial gestures). Two critical issues are: (a) the metric infants use to detect cross-modal equivalences in human acts and (b) the process by which they correct their imitative errors. We address these issues in a detailed model of the mechanisms underlying facial imitation. The model can be extended to encompass other types of imitation. The model capitalizes on three new theoretical concepts. First, organ identification is the means by which infants relate parts of their own bodies to corresponding ones of the adult’s. Second, body babbling (infants’ movement practice gained through self-generated activity) provides experience mapping movements to the resulting body configurations. Third, organ relations provide the metric by which infant and adult acts are perceived in commensurate terms. In imitating, infants attempt to match the organ relations they see exhibited by the adults with those they feel themselves make. We show how development restructures the meaning and function of early imitation. We argue that important aspects of later social cognition are rooted in the initial cross-modal equivalence between self and other found in newborns. PMID:24634574
A Method for Accurate in silico modeling of Ultrasound Transducer Arrays
Guenther, Drake A.; Walker, William F.
2009-01-01
This paper presents a new approach to improve the in silico modeling of ultrasound transducer arrays. While current simulation tools accurately predict the theoretical element spatio-temporal pressure response, transducers do not always behave as theorized. In practice, using the probe's physical dimensions and published specifications in silico often results in unsatisfactory agreement between simulation and experiment. We describe a general optimization procedure used to maximize the correlation between the observed and simulated spatio-temporal response of a pulsed single element in a commercial ultrasound probe. A linear systems approach is employed to model element angular sensitivity, lens effects, and diffraction phenomena. A numerical deconvolution method is described to characterize the intrinsic electro-mechanical impulse response of the element. Once the response of the element and optimal element characteristics are known, prediction of the pressure response for arbitrary apertures and excitation signals is performed through direct convolution using available tools. We achieve a correlation of 0.846 between the experimental emitted waveform and simulated waveform when using the probe's physical specifications in silico. A far superior correlation of 0.988 is achieved when using the optimized in silico model. Electronic noise appears to be the main effect preventing the realization of higher correlation coefficients. More accurate in silico modeling will improve the evaluation and design of ultrasound transducers as well as aid in the development of sophisticated beamforming strategies. PMID:19041997
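As a hypothetical minimal illustration of the kind of impulse-response deconvolution mentioned above (a noise-free time-domain sketch, not the paper's regularized numerical method): if the measured waveform y is the discrete convolution of a known excitation x with the element's impulse response h, then h can be recovered sample by sample.

```python
def deconvolve(measured, system):
    """Recover an impulse response h from y = x (*) h, where (*) is discrete
    convolution, by sequential time-domain division; requires system[0] != 0.
    Noise-free toy version: a practical scheme would add regularization."""
    h = []
    for n in range(len(measured) - len(system) + 1):
        acc = measured[n]
        # subtract the contribution of already-recovered samples of h
        for k in range(1, len(system)):
            if 0 <= n - k < len(h):
                acc -= system[k] * h[n - k]
        h.append(acc / system[0])
    return h
```

This sequential division is exact for noiseless data but amplifies measurement noise, which is why practical characterization methods work with regularized (e.g. Wiener-style) deconvolution instead.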
Information-Theoretic Perspectives on Geophysical Models
NASA Astrophysics Data System (ADS)
Nearing, Grey
2016-04-01
practice of science (except by Gong et al., 2013, whose fundamental insight is the basis for this talk), and here I offer two examples of practical methods that scientists might use to approximately measure ontological information. I place this practical discussion in the context of several recent and high-profile experiments that have found that simple out-of-sample statistical models typically (vastly) outperform our most sophisticated terrestrial hydrology models. I offer some perspective on several open questions about how to use these findings to improve our models and understanding of these systems. Cartwright, N. (1983) How the Laws of Physics Lie. New York, NY: Cambridge Univ Press. Clark, M. P., Kavetski, D. and Fenicia, F. (2011) 'Pursuing the method of multiple working hypotheses for hydrological modeling', Water Resources Research, 47(9). Cover, T. M. and Thomas, J. A. (1991) Elements of Information Theory. New York, NY: Wiley-Interscience. Cox, R. T. (1946) 'Probability, frequency and reasonable expectation', American Journal of Physics, 14, pp. 1-13. Csiszár, I. (1972) 'A Class of Measures of Informativity of Observation Channels', Periodica Mathematica Hungarica, 2(1), pp. 191-213. Davies, P. C. W. (1990) 'Why is the physical world so comprehensible', Complexity, entropy and the physics of information, pp. 61-70. Gong, W., Gupta, H. V., Yang, D., Sricharan, K. and Hero, A. O. (2013) 'Estimating Epistemic & Aleatory Uncertainties During Hydrologic Modeling: An Information Theoretic Approach', Water Resources Research, 49(4), pp. 2253-2273. Jaynes, E. T. (2003) Probability Theory: The Logic of Science. New York, NY: Cambridge University Press. Nearing, G. S. and Gupta, H. V. (2015) 'The quantity and quality of information in hydrologic models', Water Resources Research, 51(1), pp. 524-538. Popper, K. R. (2002) The Logic of Scientific Discovery. New York: Routledge. Van Horn, K. S. 
(2003) 'Constructing a logic of plausible inference: a guide to Cox's theorem'.
NASA Astrophysics Data System (ADS)
Mead, A. J.; Heymans, C.; Lombriser, L.; Peacock, J. A.; Steele, O. I.; Winther, H. A.
2016-06-01
We present an accurate non-linear matter power spectrum prediction scheme for a variety of extensions to the standard cosmological paradigm, which uses the tuned halo model previously developed in Mead et al. We consider dark energy models that are both minimally and non-minimally coupled, massive neutrinos and modified gravitational forces with chameleon and Vainshtein screening mechanisms. In all cases, we compare halo-model power spectra to measurements from high-resolution simulations. We show that the tuned halo-model method can predict the non-linear matter power spectrum measured from simulations of parametrized w(a) dark energy models at the few per cent level for k < 10 h Mpc-1, and we present theoretically motivated extensions to cover non-minimally coupled scalar fields, massive neutrinos and Vainshtein screened modified gravity models that result in few per cent accurate power spectra for k < 10 h Mpc-1. For chameleon screened models, we achieve only 10 per cent accuracy for the same range of scales. Finally, we use our halo model to investigate degeneracies between different extensions to the standard cosmological model, finding that the impact of baryonic feedback on the non-linear matter power spectrum can be considered independently of modified gravity or massive neutrino extensions. In contrast, considering the impact of modified gravity and massive neutrinos independently results in biased estimates of power at the level of 5 per cent at scales k > 0.5 h Mpc-1. An updated version of our publicly available HMCODE can be found at https://github.com/alexander-mead/hmcode.
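The halo-model decomposition underlying this prediction scheme can be sketched in standard notation (the generic framework, not the specific tuned parametrization of Mead et al.): the matter power spectrum is split into one- and two-halo terms,

```latex
P(k) = P_{2\mathrm{h}}(k) + P_{1\mathrm{h}}(k), \qquad
P_{1\mathrm{h}}(k) = \int \mathrm{d}M \, n(M) \left(\frac{M}{\bar{\rho}}\right)^{2} \left| u(k, M) \right|^{2},
```

```latex
P_{2\mathrm{h}}(k) \simeq P_{\mathrm{lin}}(k) \left[ \int \mathrm{d}M \, n(M) \, b(M) \, \frac{M}{\bar{\rho}} \, u(k, M) \right]^{2},
```

where n(M) is the halo mass function, b(M) the linear halo bias, u(k, M) the normalized Fourier transform of the halo density profile, and ρ̄ the mean matter density. Tuned variants such as HMCODE introduce fitted physical parameters (e.g., modified halo concentrations and a smoothed transition between the two terms) into this framework.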
Towards more accurate numerical modeling of impedance based high frequency harmonic vibration
NASA Astrophysics Data System (ADS)
Lim, Yee Yan; Kiong Soh, Chee
2014-03-01
The application of smart materials in various fields of engineering has recently become increasingly popular. For instance, the high frequency based electromechanical impedance (EMI) technique employing smart piezoelectric materials is found to be versatile in structural health monitoring (SHM). Thus far, considerable efforts have been made to study and improve the technique. Various theoretical models of the EMI technique have been proposed in an attempt to better understand its behavior. So far, the three-dimensional (3D) coupled field finite element (FE) model has proved to be the most accurate. However, large discrepancies between the results of the FE model and experimental tests, especially in terms of the slope and magnitude of the admittance signatures, continue to exist and are yet to be resolved. This paper presents a series of parametric studies using the 3D coupled field finite element method (FEM) on all properties of materials involved in the lead zirconate titanate (PZT)–structure interaction of the EMI technique, to investigate their effect on the admittance signatures acquired. FE model updating is then performed by adjusting the parameters to match the experimental results. One of the main reasons for the lower accuracy, especially in terms of magnitude and slope, of previous FE models is the difficulty in determining the damping related coefficients and the stiffness of the bonding layer. In this study, using the hysteretic damping model in place of Rayleigh damping, which is used by most researchers in this field, and updated bonding stiffness, an improved and more accurate FE model is achieved. The results of this paper are expected to be useful for future study of the subject area in terms of research and application, such as modeling, design and optimization.
Algal productivity modeling: a step toward accurate assessments of full-scale algal cultivation.
Béchet, Quentin; Chambonnière, Paul; Shilton, Andy; Guizard, Guillaume; Guieysse, Benoit
2015-05-01
A new biomass productivity model was parameterized for Chlorella vulgaris using short-term (<30 min) oxygen productivities from algal microcosms exposed to 6 light intensities (20-420 W/m(2)) and 6 temperatures (5-42 °C). The model was then validated against experimental biomass productivities recorded in bench-scale photobioreactors operated under 4 light intensities (30.6-74.3 W/m(2)) and 4 temperatures (10-30 °C), yielding an accuracy of ± 15% over 163 days of cultivation. This modeling approach addresses major challenges associated with the accurate prediction of algal productivity at full-scale. Firstly, while most prior modeling approaches have only considered the impact of light intensity on algal productivity, the model herein validated also accounts for the critical impact of temperature. Secondly, this study validates a theoretical approach to convert short-term oxygen productivities into long-term biomass productivities. Thirdly, the experimental methodology used has the practical advantage of only requiring one day of experimental work for complete model parameterization. The validation of this new modeling approach is therefore an important step for refining feasibility assessments of algae biotechnologies. PMID:25502920
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2016-03-01
SMARTIES calculates the optical properties of oblate and prolate spheroidal particles, with comparable capabilities and ease-of-use as Mie theory for spheres. This suite of MATLAB codes provides a fully documented implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. Included are scripts that cover a range of scattering problems relevant to nanophotonics and plasmonics, including calculation of far-field scattering and absorption cross-sections for fixed incidence orientation, orientation-averaged cross-sections and scattering matrix, surface-field calculations as well as near-fields, wavelength-dependent near-field and far-field properties, and access to lower-level functions implementing the T-matrix calculations, including the T-matrix elements which may be calculated more accurately than with competing codes.
New process model proves accurate in tests on catalytic reformer
Aguilar-Rodriguez, E.; Ancheyta-Juarez, J. )
1994-07-25
A mathematical model has been devised to represent the process that takes place in a fixed-bed, tubular, adiabatic catalytic reforming reactor. Since its development, the model has been applied to the simulation of a commercial semiregenerative reformer. The development of mass and energy balances for this reformer led to a model that predicts both concentration and temperature profiles along the reactor. A comparison of the model's results with experimental data illustrates its accuracy at predicting product profiles. Simple steps show how the model can be applied to simulate any fixed-bed catalytic reformer.
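To illustrate the kind of coupled mass and energy balances such a reactor model integrates, here is a minimal sketch for a single first-order reaction in an adiabatic plug-flow bed. All parameter values and the single-reaction lumping are hypothetical placeholders, not those of the commercial reformer model described above.

```python
import math

R_GAS = 8.314  # molar gas constant, J/(mol*K)

def adiabatic_pfr(n_steps=10000, v_total=1.0, f_a0=10.0, c_a0=100.0,
                  k0=1.0e5, ea=6.0e4, t_in=750.0, dh=-8.0e4, sum_fcp=4.0e3):
    """Euler march of the coupled mole and energy balances for a first-order
    reaction A -> B in an adiabatic fixed-bed (plug-flow) reactor:
        dX/dV = k(T) * C_A0 * (1 - X) / F_A0              (mole balance)
        T(X)  = T_in + (-dH) * X * F_A0 / sum(F_i * Cp_i)  (energy balance)
    All parameter values are illustrative placeholders."""
    dv = v_total / n_steps
    x = 0.0
    profile = []
    for i in range(n_steps):
        t = t_in + (-dh) * x * f_a0 / sum_fcp   # adiabatic temperature, K
        k = k0 * math.exp(-ea / (R_GAS * t))    # Arrhenius rate constant
        x += dv * k * c_a0 * (1.0 - x) / f_a0   # conversion update
        profile.append((i * dv, x, t))
    return profile
```

Marching the two balances together yields the coupled conversion and temperature profiles along the bed, which is the same structure the reformer model uses to predict concentration and temperature along the reactor.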
Assessing a Theoretical Model on EFL College Students
ERIC Educational Resources Information Center
Chang, Yu-Ping
2011-01-01
This study aimed to (1) integrate relevant language learning models and theories, (2) construct a theoretical model of college students' English learning performance, and (3) assess the model fit between empirically observed data and the theoretical model proposed by the researchers of this study. Subjects of this study were 1,129 Taiwanese EFL…
Coupling Efforts to the Accurate and Efficient Tsunami Modelling System
NASA Astrophysics Data System (ADS)
Son, S.
2015-12-01
In the present study, we couple two different types of tsunami models, i.e., a nondispersive shallow-water model in characteristic form (MOST ver. 4) and a dispersive Boussinesq model in non-characteristic form (Son et al. (2011)), in an attempt to improve modelling accuracy and efficiency. Since each model deals with a different set of primary variables, additional care in matching the boundary condition is required. Model coupling and integration is achieved using the absorbing-generating boundary condition developed by Van Dongeren and Svendsen (1997). Characteristic variables (i.e., Riemann invariants) in MOST are converted to non-characteristic variables for the Boussinesq solver without any loss of physical consistency. The established modelling system has been validated on cases ranging from typical benchmark problems to realistic tsunami events, and the simulated results show good performance. Since the coupled system offers flexibility during implementation, substantial gains in efficiency and accuracy are expected by applying the Boussinesq model only in selected regions of the overall tsunami propagation domain.
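The conversion between a characteristic-form solver's variables and a non-characteristic solver's primitive variables can be illustrated with the standard shallow-water Riemann invariants R± = u ± 2√(gh). This is a generic sketch of that transformation only; the actual coupling also involves the absorbing-generating boundary treatment cited above.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def to_riemann(h, u, g=G):
    """Primitive shallow-water variables (depth h, velocity u) to the
    Riemann invariants R+ = u + 2*sqrt(g*h), R- = u - 2*sqrt(g*h)."""
    c = math.sqrt(g * h)  # shallow-water wave celerity
    return u + 2.0 * c, u - 2.0 * c

def from_riemann(r_plus, r_minus, g=G):
    """Invert the invariants back to (h, u) for a non-characteristic solver."""
    u = 0.5 * (r_plus + r_minus)
    c = 0.25 * (r_plus - r_minus)
    return c * c / g, u
```

The inversion is exact, which is what allows variables to be exchanged at the matching boundary without loss of physical consistency.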
Theoretical model for the wetting of a rough surface.
Hay, K M; Dragila, M I; Liburdy, J
2008-09-15
Many applications would benefit from an understanding of the physical mechanism behind fluid movement on rough surfaces, including the movement of water or contaminants within an unsaturated rock fracture. Presented is a theoretical investigation of the effect of surface roughness on fluid spreading. It is known that surface roughness enhances the effects of hydrophobic or hydrophilic behavior, as well as allowing for faster spreading of a hydrophilic fluid. A model is presented based on the classification of the regimes of spreading that occur when fluid encounters a rough surface: microscopic precursor film, mesoscopic invasion of roughness and macroscopic reaction to external forces. A theoretical relationship is developed for the physical mechanisms that drive mesoscopic invasion, which is used to guide a discussion of the implications of the theory on spreading conditions. Development of the analytical equation is based on a balance between capillary forces and frictional resistive forces. Chemical heterogeneity is ignored. The effect of various methods for estimating viscous dissipation is compared to available data from experiments on fluid rise over rough surfaces. Methods that account more accurately for roughness shape better explain the data as they account for more surface friction; the best fit was found for a hydraulic diameter approximation. The analytical solution implies the existence of a critical contact angle that is a function of roughness geometry, below which fluid will spread and above which fluid will resist spreading. The resulting equation predicts movement of a liquid invasion front with a square root of time dependence, mathematically resembling a diffusive process. PMID:18586259
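The square-root-of-time law follows directly from the stated force balance. Schematically (a generic Washburn-type sketch, not the paper's exact coefficients), equating the capillary driving force to the viscous resistance over an invaded distance x gives

```latex
\gamma \cos\theta \sim \frac{\mu \, x}{d_{\mathrm{h}}^{2}} \frac{\mathrm{d}x}{\mathrm{d}t}
\quad\Rightarrow\quad
x \frac{\mathrm{d}x}{\mathrm{d}t} = \frac{D_{\mathrm{eff}}}{2}
\quad\Rightarrow\quad
x(t) = \sqrt{D_{\mathrm{eff}} \, t},
```

with γ the surface tension, θ the contact angle, μ the viscosity, d_h the hydraulic diameter of the roughness, and D_eff ∼ 2γ cosθ d_h²/μ an effective spreading coefficient. The √t form is why the mesoscopic invasion mathematically resembles a diffusive process.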
Davis, J.L.; Grant, J.W.
2014-01-01
Anatomically correct turtle utricle geometry was incorporated into two finite element models. The geometrically accurate model included an appropriately shaped macular surface and otoconial layer, compact gel and column filament (or shear) layer thicknesses and thickness distributions. The first model included a shear layer where the effects of hair bundle stiffness were included as part of the shear layer modulus. This solid model's undamped natural frequency was matched to an experimentally measured value. This frequency match established a realistic value of the effective shear layer Young's modulus of 16 Pascals. We feel this is the most accurate prediction of this shear layer modulus and it fits with other estimates (Kondrachuk, 2001b). The second model incorporated only beam elements in the shear layer to represent hair cell bundle stiffness. The beam element stiffnesses were further distributed to represent their location on the neuroepithelial surface. Experimentally measured mean hair cell bundle stiffness values were used: striolar values in the striolar region and extrastriolar values in the extrastriolar region. The results from this second model indicated that hair cell bundle stiffness contributes approximately 40% to the overall stiffness of the shear layer–hair cell bundle complex. This analysis shows that high-mass saccules, in general, achieve high gain at the sacrifice of frequency bandwidth. We propose that the mechanism by which this can be achieved is increasing the otoconial layer mass. The theoretical difference in gain (deflection per acceleration) is shown for saccules with large otoconial layer mass relative to saccules and utricles with small otoconial layer mass. Also discussed is the necessity for these high-mass saccules to increase their overall system shear layer stiffness. Undamped natural frequencies and mode shapes for these sensors are shown. PMID:25445820
Theoretical Models of Parental HIV Disclosure: A Critical Review
Qiao, Shan; Li, Xiaoming; Stanton, Bonita
2012-01-01
This review critically examined three major theoretical models related to parental HIV disclosure (i.e., the Four-Phase Model, the Disclosure Decision-Making Model, and the Disclosure Process Model), and the existing studies that could provide empirical support for these models or their components. For each model, we briefly reviewed its theoretical background, described its components and/or mechanisms, and discussed its strengths and limitations. The existing empirical studies supported most theoretical components in these models. However, hypotheses related to the mechanisms proposed in the models have not yet been tested due to a lack of empirical evidence. This review also synthesized alternative theoretical perspectives and new issues in disclosure research and clinical practice that may challenge the existing models. The current review underscores the importance of including components related to social and cultural contexts in theoretical frameworks, and calls for more adequately designed empirical studies in order to test and refine existing theories and to develop new ones. PMID:22866903
Development and application of accurate analytical models for single active electron potentials
NASA Astrophysics Data System (ADS)
Miller, Michelle; Jaron-Becker, Agnieszka; Becker, Andreas
2015-05-01
The single active electron (SAE) approximation is a theoretical model frequently employed to study scenarios in which inner-shell electrons may productively be treated as frozen spectators to a physical process of interest, and accurate analytical approximations for these potentials are sought as a useful simulation tool. Density functional theory is often used to construct a SAE potential, requiring that a further approximation for the exchange-correlation functional be enacted. In this study, we employ the Krieger, Li, and Iafrate (KLI) modification to the optimized-effective-potential (OEP) method to reduce the complexity of the problem to the straightforward solution of a system of linear equations through simple arguments regarding the behavior of the exchange-correlation potential in regions where a single orbital dominates. We employ this method for the solution of atomic and molecular potentials, and use the resultant curve to devise a systematic construction for highly accurate and useful analytical approximations for several systems. Supported by the U.S. Department of Energy (Grant No. DE-FG02-09ER16103), and the U.S. National Science Foundation (Graduate Research Fellowship, Grants No. PHY-1125844 and No. PHY-1068706).
Accurate modelling of flow induced stresses in rigid colloidal aggregates
NASA Astrophysics Data System (ADS)
Vanni, Marco
2015-07-01
A method has been developed to estimate the motion and the internal stresses induced by a fluid flow on a rigid aggregate. The approach couples Stokesian dynamics and structural mechanics in order to accurately take into account the effect of the complex geometry of the aggregates on hydrodynamic forces and the internal redistribution of stresses. The intrinsic error of the method, due to the low-order truncation of the multipole expansion of the Stokes solution, has been assessed by comparison with the analytical solution for the case of a doublet in a shear flow. In addition, it has been shown that the error becomes smaller as the number of primary particles in the aggregate increases and hence it is expected to be negligible for realistic reproductions of large aggregates. The evaluation of internal forces is performed by an adaptation of the matrix methods of structural mechanics to the geometric features of the aggregates and to the particular stress-strain relationship that occurs at intermonomer contacts. A preliminary investigation of the stress distribution in rigid aggregates and their mode of breakup has been performed by studying the response to an elongational flow of both realistic reproductions of colloidal aggregates (made of several hundred monomers) and highly simplified structures. A very different behaviour has been evidenced between low-density aggregates with isostatic or weakly hyperstatic structures and compact aggregates with highly hyperstatic configuration. In low-density clusters breakup is caused directly by the failure of the most stressed intermonomer contact, which is typically located in the inner region of the aggregate and hence originates the birth of fragments of similar size. On the contrary, breakup of compact and highly cross-linked clusters is seldom caused by the failure of a single bond. When this happens, it proceeds through the removal of a tiny fragment from the external part of the structure. More commonly, however
Magnetic field models of nine CP stars from "accurate" measurements
NASA Astrophysics Data System (ADS)
Glagolevskij, Yu. V.
2013-01-01
The dipole models of magnetic fields in nine CP stars are constructed based on the measurements of metal lines taken from the literature, and performed by the LSD method with an accuracy of 10-80 G. The model parameters are compared with the parameters obtained for the same stars from the hydrogen line measurements. For six out of nine stars the same type of structure was obtained. Some parameters, such as the field strength at the poles B_p and the average surface magnetic field B_s, differ considerably in some stars due to differences in the amplitudes of the phase dependences B_e(Φ) and B_s(Φ) obtained by different authors. It is noted that a significant increase in the measurement accuracy has little effect on the modelling of the large-scale structures of the field. By contrast, it is more important to construct the shape of the phase dependence based on a fairly large number of field measurements, evenly distributed over the rotation period phases. It is concluded that the Zeeman component measurement methods have a strong effect on the shape of the phase dependence, and that measurements of the magnetic field based on the lines of hydrogen are preferable for modelling the large-scale structures of the field.
An Accurate In Vitro Model of the E. coli Envelope
Clifton, Luke A; Holt, Stephen A; Hughes, Arwel V; Daulton, Emma L; Arunmanee, Wanatchaporn; Heinrich, Frank; Khalid, Syma; Jefferies, Damien; Charlton, Timothy R; Webster, John R P; Kinane, Christian J; Lakey, Jeremy H
2015-01-01
Gram-negative bacteria are an increasingly serious source of antibiotic-resistant infections, partly owing to their characteristic protective envelope. This complex, 20 nm thick barrier includes a highly impermeable, asymmetric bilayer outer membrane (OM), which plays a pivotal role in resisting antibacterial chemotherapy. Nevertheless, the OM molecular structure and its dynamics are poorly understood because the structure is difficult to recreate or study in vitro. The successful formation and characterization of a fully asymmetric model envelope using Langmuir–Blodgett and Langmuir–Schaefer methods is now reported. Neutron reflectivity and isotopic labeling confirmed the expected structure and asymmetry and showed that experiments with antibacterial proteins reproduced published in vivo behavior. By closely recreating natural OM behavior, this model provides a much needed robust system for antibiotic development. PMID:26331292
An accurate and efficient Lagrangian sub-grid model for multi-particle dispersion
NASA Astrophysics Data System (ADS)
Toschi, Federico; Mazzitelli, Irene; Lanotte, Alessandra S.
2014-11-01
Many natural and industrial processes involve the dispersion of particles in turbulent flows. Despite recent theoretical progress in the understanding of particle dynamics in simple turbulent flows, complex geometries often call for numerical approaches based on Eulerian Large Eddy Simulation (LES). One important issue related to the Lagrangian integration of tracers in under-resolved velocity fields is the lack of spatial correlations at unresolved scales. Here we propose a computationally efficient Lagrangian model for the sub-grid velocity of tracers dispersed in statistically homogeneous and isotropic turbulent flows. The model incorporates the multi-scale nature of turbulent temporal and spatial correlations that are essential to correctly reproduce the dynamics of multi-particle dispersion. The new model is able to describe the Lagrangian temporal and spatial correlations in clouds of particles. In particular, we show that pair and tetrad dispersion compare well with results from Direct Numerical Simulations of statistically isotropic and homogeneous 3D turbulence. This model may offer an accurate and efficient way to describe multi-particle dispersion in under-resolved turbulent velocity fields such as those employed in Eulerian LES. This work is part of the research programme FP112 of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO). We acknowledge support from the EU COST Action MP0806.
NASA Astrophysics Data System (ADS)
Mead, A. J.; Peacock, J. A.; Heymans, C.; Joudaki, S.; Heavens, A. F.
2015-12-01
We present an optimized variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of Λ cold dark matter (ΛCDM) and wCDM models, the halo-model power is accurate to ≃ 5 per cent for k ≤ 10h Mpc-1 and z ≤ 2. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS (OverWhelmingly Large Simulations) hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limits on the range of these halo parameters for feedback models investigated by the OWLS simulations. Accurate predictions to high k are vital for weak-lensing surveys, and these halo parameters could be considered nuisance parameters to marginalize over in future analyses to mitigate uncertainty regarding the details of feedback. Finally, we investigate how lensing observables predicted by our model compare to those from simulations and from HALOFIT for a range of k-cuts and feedback models and quantify the angular scales at which these effects become important. Code to calculate power spectra from the model presented in this paper can be found at https://github.com/alexander-mead/hmcode.
Nielsen, Jens; D’Avezac, Mayeul; Hetherington, James; Stamatakis, Michail
2013-12-14
Ab initio kinetic Monte Carlo (KMC) simulations have been successfully applied for over two decades to elucidate the underlying physico-chemical phenomena on the surfaces of heterogeneous catalysts. These simulations necessitate detailed knowledge of the kinetics of elementary reactions constituting the reaction mechanism, and the energetics of the species participating in the chemistry. The information about the energetics is encoded in the formation energies of gas and surface-bound species, and the lateral interactions between adsorbates on the catalytic surface, which can be modeled at different levels of detail. The majority of previous works accounted for only pairwise-additive first nearest-neighbor interactions. More recently, cluster-expansion Hamiltonians incorporating long-range interactions and many-body terms have been used for detailed estimations of catalytic rate [C. Wu, D. J. Schmidt, C. Wolverton, and W. F. Schneider, J. Catal. 286, 88 (2012)]. In view of the increasing interest in accurate predictions of catalytic performance, there is a need for general-purpose KMC approaches incorporating detailed cluster expansion models for the adlayer energetics. We have addressed this need by building on the previously introduced graph-theoretical KMC framework, and we have developed Zacros, a FORTRAN2003 KMC package for simulating catalytic chemistries. To tackle the high computational cost in the presence of long-range interactions we introduce parallelization with OpenMP. We further benchmark our framework by simulating a KMC analogue of the NO oxidation system established by Schneider and co-workers [J. Catal. 286, 88 (2012)]. We show that taking into account only first nearest-neighbor interactions may lead to large errors in the prediction of the catalytic rate, whereas for accurate estimates thereof, one needs to include long-range terms in the cluster expansion.
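The effect of first-nearest-neighbor lateral interactions on KMC dynamics can be illustrated with a toy rejection-free simulation (a generic 1D adsorption/desorption sketch, not the Zacros graph-theoretical or cluster-expansion machinery; all rates and the interaction energy are hypothetical):

```python
import math
import random

def kmc_1d(n_sites=20, n_steps=2000, beta_eps=1.0, k_ads=1.0, k_des=1.0, seed=0):
    """Rejection-free (Gillespie-type) KMC on a periodic 1D lattice.
    Adsorption fills empty sites at rate k_ads; desorption from occupied
    sites is accelerated by repulsive first-nearest-neighbor interactions,
    rate = k_des * exp(beta_eps * n_occupied_neighbors).
    Returns (final coverage, elapsed simulated time)."""
    rng = random.Random(seed)
    occ = [0] * n_sites
    t = 0.0
    for _ in range(n_steps):
        rates, events = [], []
        for i in range(n_sites):
            if occ[i] == 0:
                rates.append(k_ads)
                events.append(("ads", i))
            else:
                nn = occ[(i - 1) % n_sites] + occ[(i + 1) % n_sites]
                rates.append(k_des * math.exp(beta_eps * nn))
                events.append(("des", i))
        total = sum(rates)
        # pick one event with probability proportional to its rate
        r = rng.random() * total
        acc = 0.0
        for rate, (kind, i) in zip(rates, events):
            acc += rate
            if acc >= r:
                occ[i] = 1 if kind == "ads" else 0
                break
        t -= math.log(rng.random()) / total  # exponential waiting time
    return sum(occ) / n_sites, t
```

Even in this toy setting, the nearest-neighbor term changes the desorption rates and hence the steady-state coverage, which is the mechanism by which the fidelity of the adlayer energetics propagates into the predicted catalytic rate.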
Empathy and Child Neglect: A Theoretical Model
ERIC Educational Resources Information Center
De Paul, Joaquin; Guibert, Maria
2008-01-01
Objective: To present an explanatory theory-based model of child neglect. This model does not address neglectful behaviors of parents with mental retardation, alcohol or drug abuse, or severe mental health problems. In this model parental behavior aimed to satisfy a child's need is considered a helping behavior and, as a consequence, child neglect…
Discrete state model and accurate estimation of loop entropy of RNA secondary structures.
Zhang, Jian; Lin, Ming; Chen, Rong; Wang, Wei; Liang, Jie
2008-03-28
Conformational entropy makes an important contribution to the stability and folding of RNA molecules, but it is challenging to either measure or compute the conformational entropy associated with long loops. We develop optimized discrete k-state models of the RNA backbone based on known RNA structures for computing the entropy of loops, which are modeled as self-avoiding walks. To estimate the entropy of hairpin, bulge, internal, and multibranch loops of long length (up to 50), we develop an efficient sampling method based on the sequential Monte Carlo principle. Our method accounts for the excluded volume effect. It is general and can be applied to calculating the entropy of loops with longer length and arbitrary complexity. For loops of short length, our results are in good agreement with a recent theoretical model and experimental measurements. For long loops, our estimated entropy of hairpin loops is in excellent agreement with the Jacobson-Stockmayer extrapolation model. However, for bulge loops and more complex secondary structures such as internal and multibranch loops, we find that the Jacobson-Stockmayer extrapolation model has large errors. Based on the estimated entropy, we have developed empirical formulae for accurate calculation of the entropy of long loops in different secondary structures. Our study of the effect of asymmetric loop size suggests that the entropy of internal loops is largely determined by the total loop length, and is only marginally affected by the asymmetric size of the two loops. This finding suggests that the significant asymmetric effects of loop length in internal loops measured by experiments are likely to be partially enthalpic. Our method can be applied to develop improved energy parameters important for studying RNA stability and folding, and for predicting RNA secondary and tertiary structures. The discrete model and the program used to calculate loop entropy can be downloaded at http://gila.bioengr.uic.edu/resources/RNA.html. PMID:18376982
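The Jacobson-Stockmayer extrapolation the authors benchmark against is a simple logarithmic law: the loop entropy penalty grows with the logarithm of loop size beyond a reference length. The reference values and exponent below are illustrative placeholders, not the paper's fitted parameters:

```python
import math

R = 1.987e-3  # kcal/(mol*K), gas constant

def js_loop_entropy(n, n_ref=6, dS_ref=-0.010, c=1.75):
    """Jacobson-Stockmayer-style logarithmic extrapolation of loop entropy.
    n: loop length; n_ref, dS_ref (entropy at the reference size, kcal/mol/K)
    and the exponent c are illustrative values, not fitted parameters."""
    return dS_ref - c * R * math.log(n / n_ref)

for n in (6, 10, 30, 50):
    print(n, round(js_loop_entropy(n), 4))
```

The paper's finding is that this single-logarithm form works well for long hairpins but fails for bulges, internal, and multibranch loops, which is why loop-type-specific empirical formulae were derived instead.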
A theoretical model to study melting of metals under pressure
NASA Astrophysics Data System (ADS)
Kholiya, Kuldeep; Chandra, Jeewan
2015-10-01
On the basis of the thermal equation of state, a simple theoretical model is developed to study the pressure dependence of the melting temperature. The model is then applied to compute the high-pressure melting curves of 10 metals (Cu, Mg, Pb, Al, In, Cd, Zn, Au, Ag and Mn). It is found that the melting temperature is not linear in pressure: the slope dTm/dP of the melting curve decreases continuously with increasing pressure. The results obtained with the present model are also compared with previous theoretical and experimental data. The good agreement between theoretical and experimental results supports the validity of the present model.
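The qualitative behaviour described, a melting curve whose slope dTm/dP falls continuously with pressure, is shared by the classical Simon-Glatzel fit, shown here as a stand-in (the paper derives its own relation from a thermal equation of state). The copper-like parameter values are rough, assumed numbers for illustration:

```python
def tm_simon(P, T0=1357.8, a=15.8, c=2.8):
    """Simon-Glatzel form Tm(P) = T0 * (1 + P/a)**(1/c).
    T0 in K, P and a in GPa; parameters are illustrative, not fitted."""
    return T0 * (1.0 + P / a) ** (1.0 / c)

def slope(P, dP=1e-4):
    # Numerical dTm/dP via central difference.
    return (tm_simon(P + dP) - tm_simon(P - dP)) / (2 * dP)

# The slope decreases monotonically with pressure -> sub-linear melting curve.
for P in (0.0, 20.0, 40.0):
    print(P, round(tm_simon(P), 1), round(slope(P), 2))
```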
Information-Theoretic Perspectives on Geophysical Models
NASA Astrophysics Data System (ADS)
Nearing, Grey
2016-04-01
To test any hypothesis about any dynamic system, it is necessary to build a model that places that hypothesis into the context of everything else that we know about the system: initial and boundary conditions and interactions between various governing processes (Hempel and Oppenheim, 1948, Cartwright, 1983). No hypothesis can be tested in isolation, and no hypothesis can be tested without a model (for a geoscience-related discussion see Clark et al., 2011). Science is (currently) fundamentally reductionist in the sense that we seek some small set of governing principles that can explain all phenomena in the universe, and such laws are ontological in the sense that they describe the object under investigation (Davies, 1990 gives several competing perspectives on this claim). However, since we cannot build perfect models of complex systems, any model that does not also contain an epistemological component (i.e., a statement, like a probability distribution, that refers directly to the quality of the information from the model) is falsified immediately (in the sense of Popper, 2002) given only a small number of observations. Models necessarily contain both ontological and epistemological components, and what this means is that the purpose of any robust scientific method is to measure the amount and quality of information provided by models. I believe that any viable philosophy of science must be reducible to this statement. The first step toward a unified theory of scientific models (and therefore a complete philosophy of science) is a quantitative language that applies to both ontological and epistemological questions. Information theory is one such language: Cox's (1946) theorem (see Van Horn, 2003) tells us that probability theory is the (only) calculus that is consistent with Classical Logic (Jaynes, 2003; chapter 1), and information theory is simply the integration of convex transforms of probability ratios (integration reduces density functions to scalar
Improvements to Nuclear Data and Its Uncertainties by Theoretical Modeling
Danon, Yaron; Nazarewicz, Witold; Talou, Patrick
2013-02-18
This project addresses three important gaps in existing evaluated nuclear data libraries that represent a significant hindrance against highly advanced modeling and simulation capabilities for the Advanced Fuel Cycle Initiative (AFCI). This project will: Develop advanced theoretical tools to compute prompt fission neutrons and gamma-ray characteristics well beyond average spectra and multiplicity, and produce new evaluated files of U and Pu isotopes, along with some minor actinides; Perform state-of-the-art fission cross-section modeling and calculations using global and microscopic model input parameters, leading to truly predictive fission cross-sections capabilities. Consistent calculations for a suite of Pu isotopes will be performed; Implement innovative data assimilation tools, which will reflect the nuclear data evaluation process much more accurately, and lead to a new generation of uncertainty quantification files. New covariance matrices will be obtained for Pu isotopes and compared to existing ones. The deployment of a fleet of safe and efficient advanced reactors that minimize radiotoxic waste and are proliferation-resistant is a clear and ambitious goal of AFCI. While in the past the design, construction and operation of a reactor were supported through empirical trials, this new phase in nuclear energy production is expected to rely heavily on advanced modeling and simulation capabilities. To be truly successful, a program for advanced simulations of innovative reactors will have to develop advanced multi-physics capabilities, to be run on massively parallel super- computers, and to incorporate adequate and precise underlying physics. And all these areas have to be developed simultaneously to achieve those ambitious goals. Of particular interest are reliable fission cross-section uncertainty estimates (including important correlations) and evaluations of prompt fission neutrons and gamma-ray spectra and uncertainties.
Theoretical Modeling of Amphiphilic Self-Assembly
NASA Astrophysics Data System (ADS)
Gunn, John Robert
1992-01-01
Mixtures of oil, water, and surfactant exhibit a number of complex phases and interesting properties. In an effort to provide a detailed statistical mechanical understanding of these systems, the following models have been developed. A microscopic model of lyotropic systems is presented in which amphiphile and water molecules are described by simple intermolecular potentials which correctly include important excluded volume effects and the relative energy scales in the system. A constant-temperature molecular dynamics study in which the divergence of the pressure tensor is constrained to zero is discussed. Preliminary calculations on the order parameters and dynamical observables of the model are reported. To explore the phase diagram further, a three-component lattice model with unit-vector orientations at the lattice sites is introduced. The model describes ternary mixtures of oil, water, and amphiphile, and in particular the microemulsion phase. The phase diagram of the model is derived using mean-field theory and simulation. It is shown that the results of Monte Carlo simulations of sufficiently large systems show remarkable agreement with experiment. In particular, the present model reproduces the mesoscopic order of the microemulsion phase. The structure of the microemulsion is understood in terms of the liquid-crystalline phases adjacent to it on the phase diagram, and the nature of the phase transitions that occur between them. The behaviour of the system when the ratio of oil to water is changed is investigated and the percolation threshold is described. The amphiphilic film is also discussed in the context of a simple surface model. We then present an algorithm for carrying out time-dependent canonical Monte Carlo simulations using this model. Sample calculations are carried out for the 2-dimensional Ising model for which the exact partition function is known. Our method reproduces the results of standard Monte Carlo simulations with comparable accuracy.
THEORETICAL BASIS FOR MODELING ELEMENT CYCLING
A biophysical basis for modeling element cycling is described. The scheme consists of element cycles, organisms necessary to completely catalyze all the component reactions, and higher organisms as structurally complex systems and as subsystems of more complex ecosystems, all to ...
Electrochemical phase formation: classical and atomistic theoretical models.
Milchev, Alexander
2016-08-01
The process of electrochemical phase formation at constant thermodynamic supersaturation is considered in terms of classical and atomistic nucleation theories. General theoretical expressions are derived for important thermodynamic and kinetic quantities, and the correlation between existing theoretical models and experimental results is also commented upon. Progressive and instantaneous nucleation and growth of multiple clusters of the new phase are briefly considered, too. PMID:27108683
Theoretical outdoor noise propagation models: Application to practical predictions
NASA Astrophysics Data System (ADS)
Tuominen, H. T.; Lahti, T.
1982-02-01
The theoretical calculation approaches for outdoor noise propagation are reviewed. Possibilities for their application to practical engineering calculations are outlined. A calculation procedure, which is a combination and extension of several theoretical models, is described. Calculation examples are compared with the results of some propagation studies.
A Theoretical Framework for Physics Education Research: Modeling Student Thinking
ERIC Educational Resources Information Center
Redish, Edward F.
2004-01-01
Education is a goal-oriented field. But if we want to treat education scientifically so we can accumulate, evaluate, and refine what we learn, then we must develop a theoretical framework that is strongly rooted in objective observations and through which different theoretical models of student thinking can be compared. Much that is known in the…
A theoretical model for airborne radars
NASA Astrophysics Data System (ADS)
Faubert, D.
1989-11-01
This work describes a general theory for the simulation of airborne (or spaceborne) radars. It can simulate many types of systems including Airborne Intercept and Airborne Early Warning radars, airborne missile approach warning systems etc. It computes the average Signal-to-Noise ratio at the output of the signal processor. In this manner, one obtains the average performance of the radar without having to use Monte Carlo techniques. The model has provision for a waveform without frequency modulation and one with linear frequency modulation. The waveform may also have frequency hopping for Electronic Counter Measures or for clutter suppression. The model can accommodate any type of encounter including air-to-air, air-to-ground (look-down) and rear attacks. It can simulate systems with multiple phase centers on receive for studying advanced clutter or jamming interference suppression techniques. An Airborne Intercept radar is investigated to demonstrate the validity and the capability of the model.
Theoretical models of synaptic short term plasticity
Hennig, Matthias H.
2013-01-01
Short term plasticity is a highly abundant form of rapid, activity-dependent modulation of synaptic efficacy. A shared set of mechanisms can cause both depression and enhancement of the postsynaptic response at different synapses, with important consequences for information processing. Mathematical models have been extensively used to study the mechanisms and roles of short term plasticity. This review provides an overview of existing models and their biological basis, and of their main properties. Special attention will be given to slow processes such as calcium channel inactivation and the effect of activation of presynaptic autoreceptors. PMID:23626536
Theoretical Model for Nanoporous Carbon Supercapacitors
Sumpter, Bobby G; Meunier, Vincent; Huang, Jingsong
2008-01-01
The unprecedented anomalous increase in capacitance of nanoporous carbon supercapacitors at pore sizes smaller than 1 nm [Science 2006, 313, 1760.] challenges the long-held presumption that pores smaller than the size of solvated electrolyte ions do not contribute to energy storage. We propose a heuristic model to replace the commonly used model for an electric double-layer capacitor (EDLC) on the basis of an electric double-cylinder capacitor (EDCC) for mesopores (2-50 nm pore size), which becomes an electric wire-in-cylinder capacitor (EWCC) for micropores (< 2 nm pore size). Our analysis of the available experimental data in the micropore regime is confirmed by first-principles density functional theory calculations and reveals significant curvature effects for carbon capacitance. The EDCC (and/or EWCC) model allows the supercapacitor properties to be correlated with pore size, specific surface area, Debye length, electrolyte concentration and dielectric constant, and solute ion size. The new model not only explains the experimental data, but also offers a practical direction for the optimization of the properties of carbon supercapacitors through experiments.
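Both geometries named in the abstract follow from elementary coaxial-capacitor electrostatics, normalized to the pore-wall area. The closed forms below are that textbook result; the numerical inputs (pore radius, double-layer thickness, ion radius, dielectric constant) are assumed values for illustration, not the paper's fits:

```python
import math

EPS0 = 8.854e-12  # F/m, vacuum permittivity

def edcc_area_capacitance(b, d, eps_r):
    """Electric double-cylinder capacitor (mesopores):
    C/A = eps_r * eps0 / (b * ln(b / (b - d))),
    with pore radius b (m) and double-layer thickness d (m)."""
    return eps_r * EPS0 / (b * math.log(b / (b - d)))

def ewcc_area_capacitance(b, a0, eps_r):
    """Electric wire-in-cylinder capacitor (micropores):
    C/A = eps_r * eps0 / (b * ln(b / a0)),
    with effective ion ("wire") radius a0 (m)."""
    return eps_r * EPS0 / (b * math.log(b / a0))

# Illustrative numbers: as b shrinks toward a0, ln(b/a0) -> 0 and C/A rises,
# reproducing the anomalous sub-nanometre capacitance increase. 1 F/m^2 = 100 uF/cm^2.
for b in (1.0e-9, 0.7e-9, 0.5e-9):
    print(b, ewcc_area_capacitance(b, a0=0.34e-9, eps_r=3.0) * 100, "uF/cm^2")
```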
Theoretical Tinnitus Framework: A Neurofunctional Model
Ghodratitoostani, Iman; Zana, Yossi; Delbem, Alexandre C. B.; Sani, Siamak S.; Ekhtiari, Hamed; Sanchez, Tanit G.
2016-01-01
Subjective tinnitus is the conscious (attended) awareness perception of sound in the absence of an external source and can be classified as an auditory phantom perception. Earlier literature establishes three distinct states of conscious perception as unattended, attended, and attended awareness conscious perception. The current tinnitus development models depend on the role of external events congruently paired with the causal physical events that precipitate the phantom perception. We propose a novel Neurofunctional Tinnitus Model to indicate that the conscious (attended) awareness perception of phantom sound is essential in activating the cognitive-emotional value. The cognitive-emotional value plays a crucial role in governing attention allocation as well as developing annoyance within tinnitus clinical distress. Structurally, the Neurofunctional Tinnitus Model includes the peripheral auditory system, the thalamus, the limbic system, brainstem, basal ganglia, striatum, and the auditory along with prefrontal cortices. Functionally, we assume the model includes presence of continuous or intermittent abnormal signals at the peripheral auditory system or midbrain auditory paths. Depending on the availability of attentional resources, the signals may or may not be perceived. The cognitive valuation process strengthens the lateral-inhibition and noise canceling mechanisms in the mid-brain, which leads to the cessation of sound perception and renders the signal evaluation irrelevant. However, the “sourceless” sound is eventually perceived and can be cognitively interpreted as suspicious or an indication of a disease in which the cortical top-down processes weaken the noise canceling effects. This results in an increase in cognitive and emotional negative reactions such as depression and anxiety. The negative or positive cognitive-emotional feedbacks within the top-down approach may have no relation to the previous experience of the patients. They can also be
Theoretical models of possible compact nucleosome structures.
Besker, Neva; Anselmi, Claudio; De Santis, Pasquale
2005-04-01
Chromatin structure seems related to the DNA linker length. This paper presents a systematic search of possible chromatin structures as a function of the linker length, starting from three different low-resolution molecular models of the nucleosome. A Gay-Berne potential was used to evaluate the relative nucleosome packing energy. The results suggest that the linker DNAs, which bridge and orient nucleosomes, affect both the geometry and the rigidity of the global chromatin structure. PMID:15752596
A theoretical model for whole genome alignment.
Belal, Nahla A; Heath, Lenwood S
2011-05-01
We present a graph-based model for representing two aligned genomic sequences. An alignment graph is a mixed graph consisting of two sets of vertices, each representing one of the input sequences, and three sets of edges. These edges allow the model to represent a number of evolutionary events. This model is used to perform sequence alignment at the level of nucleotides. We define a scoring function for alignment graphs. We show that minimizing the score is NP-complete. However, we present a dynamic programming algorithm that solves the minimization problem optimally for a certain class of alignments, called breakable arrangements. Algorithms for analyzing breakable arrangements are presented. We also present a greedy algorithm that is capable of representing reversals. We present a dynamic programming algorithm that optimally aligns two genomic sequences, when one of the input sequences is a breakable arrangement of the other. Comparing what we define as breakable arrangements to alignments generated by other algorithms, it is seen that many already aligned genomes fall into the category of being breakable. Moreover, the greedy algorithm is shown to represent reversals, besides rearrangements, mutations, and other evolutionary events. PMID:21210739
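The dynamic-programming idea the alignment-graph model builds on is the classical Needleman-Wunsch recurrence for nucleotide-level alignment, sketched here as a baseline. The paper's model is far richer (mixed graphs, reversals, breakable arrangements), and the scoring values below are arbitrary illustrative choices:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score via dynamic programming.
    score[i][j] = best score aligning a[:i] with b[:j]."""
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap          # align a[:i] against gaps
    for j in range(1, m + 1):
        score[0][j] = j * gap          # align b[:j] against gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

print(needleman_wunsch("GATTACA", "GATTACA"))  # 7: identical sequences
print(needleman_wunsch("GATTACA", "GCATGCU"))
```

Where Needleman-Wunsch minimizes over matches and gaps only, the alignment graph adds edge types for further evolutionary events, which is what makes general score minimization NP-complete and restricts the optimal algorithm to breakable arrangements.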
Theoretical model for plasma opening switch
Baker, L.
1980-07-01
The theory of an explosive plasma switch is developed and compared with the experimental results of Pavlovskii and work at Sandia. A simple analytic model is developed, which predicts that such switches may achieve opening times of approximately 100 ns. When the switching time is limited by channel mixing it scales as t = C (m d_0)^(1/2) P_0^2 P_e^(-5/2), where m is the foil mass per unit area, d_0 the channel thickness, P_0 the channel pressure (at explosive breakout), P_e the explosive pressure, and C a constant of order 10 in c.g.s. units. Thus faster switching times may be achieved by minimizing foil mass and channel pressure, or by increasing the explosive product pressure, with the scaling exponents as shown suggesting that changes in the pressures would be more effective.
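The quoted channel-mixing scaling can be checked directly: halving the foil mass shortens the opening time only by a factor of sqrt(2), while the explosive pressure enters with exponent -5/2 and so dominates. All numerical inputs below are assumed for illustration, not taken from the paper:

```python
import math

def switch_time(m, d0, P0, Pe, C=10.0):
    """t = C * (m * d0)**(1/2) * P0**2 * Pe**(-5/2) in c.g.s. units.
    m: foil mass per unit area (g/cm^2), d0: channel thickness (cm),
    P0: channel pressure, Pe: explosive product pressure (dyn/cm^2).
    C of order 10 per the quoted scaling; all values here are illustrative."""
    return C * math.sqrt(m * d0) * P0 ** 2 * Pe ** -2.5

# Halving the foil mass -> t shrinks by sqrt(2); doubling Pe -> t shrinks by 2**2.5.
t_base = switch_time(m=0.01, d0=0.1, P0=1e6, Pe=1e10)
t_half_mass = switch_time(m=0.005, d0=0.1, P0=1e6, Pe=1e10)
t_double_pe = switch_time(m=0.01, d0=0.1, P0=1e6, Pe=2e10)
print(t_base, t_half_mass, t_double_pe)
```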
Theoretical modelling of epigenetically modified DNA sequences.
Carvalho, Alexandra Teresa Pires; Gouveia, Maria Leonor; Raju Kanna, Charan; Wärmländer, Sebastian K T S; Platts, Jamie; Kamerlin, Shina Caroline Lynn
2015-01-01
We report herein a set of calculations designed to examine the effects of epigenetic modifications on the structure of DNA. The incorporation of methyl, hydroxymethyl, formyl and carboxy substituents at the 5-position of cytosine is shown to hardly affect the geometry of CG base pairs, but to result in rather larger changes to hydrogen-bond and stacking binding energies, as predicted by dispersion-corrected density functional theory (DFT) methods. The same modifications within double-stranded GCG and ACA trimers exhibit rather larger structural effects, when including the sugar-phosphate backbone as well as sodium counterions and implicit aqueous solvation. In particular, changes are observed in the buckle and propeller angles within base pairs and the slide and roll values of base pair steps, but these leave the overall helical shape of DNA essentially intact. The structures so obtained are useful as a benchmark of faster methods, including molecular mechanics (MM) and hybrid quantum mechanics/molecular mechanics (QM/MM) methods. We show that previously developed MM parameters satisfactorily reproduce the trimer structures, as do QM/MM calculations which treat bases with dispersion-corrected DFT and the sugar-phosphate backbone with AMBER. The latter are improved by inclusion of all six bases in the QM region, since a truncated model including only the central CG base pair in the QM region is considerably further from the DFT structure. This QM/MM method is then applied to a set of double-stranded DNA heptamers derived from a recent X-ray crystallographic study, whose size puts a DFT study beyond our current computational resources. These data show that still larger structural changes are observed than in base pairs or trimers, leading us to conclude that it is important to model epigenetic modifications within realistic molecular contexts. PMID:26448859
Neighbor intervention: a game-theoretic model.
Mesterton-Gibbons, Mike; Sherratt, Tom N
2009-01-21
It has long been argued that a resident may benefit from helping its neighbor defend a territory against a challenger to avoid renegotiating its boundaries with a new and potentially stronger individual. We quantify this theory by exploring games involving challengers, residents and potential allies. In a simplified discrete game with zero variation of fighting strength, helping neighbors is part of an evolutionarily stable strategy (ESS) only if fighting costs are low relative to those of renegotiation. However, if relative fighting costs are high then an interventional ESS remains possible with finite variation of strength. Under these conditions, neighbors may help residents fight off intruders, but only when the resident does not stand a reliable chance of winning alone. We show that neighbor intervention is more likely with low home advantage to occupying a territory, strengths combining synergistically or low probability that an ally will be usurped, amongst other factors. Our parameterized model readily explains occasional intervention in the Australian fiddler crab, including why the ally tended to be larger than both the assisted neighbor and the intruder. Reciprocity is not necessary for this type of cooperation to persist, but also it is by no means inevitable in territorial species. PMID:18977365
A theoretical model of asymmetric wave ripples
Blondeaux, P.; Foti, E.; Vittori, G.
2015-01-01
The time development of ripples under sea waves is investigated by means of the weakly nonlinear stability analysis of a flat sandy bottom subjected to the viscous oscillatory flow that is present in the boundary layer at the bottom of propagating sea waves. Second-order effects in the wave steepness are considered, to take into account the presence of the steady drift generated by the surface waves. Hence, the work of Vittori & Blondeaux (1990 J. Fluid Mech. 218, 19–39 (doi:10.1017/S002211209000091X)) is extended by considering steeper waves and/or less deep waters. As shown by the linear analysis of Blondeaux et al. (2000 Eur. J. Mech. B 19, 285–301 (doi:10.1016/S0997-7546(90)00106-I)), because of the presence of a steady velocity component in the direction of wave propagation, ripples migrate at a constant rate that depends on sediment and wave characteristics. The weakly nonlinear analysis shows that the ripple profile is no longer symmetric with respect to ripple crests and troughs and the symmetry index is computed as a function of the parameters of the problem. In particular, a relationship is determined between the symmetry index and the strength of the steady drift. A fair agreement between model results and laboratory data is obtained, albeit further data and analyses are necessary to determine the behaviour of vortex ripples and to be conclusive. PMID:25512587
Theoretical and numerical modelling of shocks in dusty plasmas
Eliasson, B.; Shukla, P.K.
2005-10-31
The formation of dust acoustic (DA) and dust ion-acoustic (DIA) shocks is studied theoretically and numerically by means of simple-wave solutions and a comparison between fluid and kinetic models for DIA waves. The fluid model admits sharp discontinuities at the shock front, while the kinetic model involves Landau damping of the shock front.
The Psychopathological Model of Mental Retardation: Theoretical and Therapeutic Considerations.
ERIC Educational Resources Information Center
La Malfa, Giampaolo; Campigli, Marco; Bertelli, Marco; Mangiapane, Antonio; Cabras, Pier Luigi
1997-01-01
Describes a new integrated bio-psycho-social model of etiology for mental retardation. Discusses the problems with current models and the ability of the "universe line" model to integrate data from different research areas, especially cognitive and psychopathologic indicators. Addresses implications of this theoretical approach. (Author/CR)
Dynamics in Higher Education Politics: A Theoretical Model
ERIC Educational Resources Information Center
Kauko, Jaakko
2013-01-01
This article presents a model for analysing dynamics in higher education politics (DHEP). Theoretically the model draws on the conceptual history of political contingency, agenda-setting theories and previous research on higher education dynamics. According to the model, socio-historical complexity can best be analysed along two dimensions: the…
Opposition Surge: Lab Studies and Theoretical Models
NASA Astrophysics Data System (ADS)
Nelson, R. M.; Hapke, B. W.; Smythe, W. D.; Hale, A. S.; Piatek, J. L.; Green, J.
The opposition effect, a non-linear intensity increase in the reflectance phase curve with decreasing phase angle, has long been observed in solar system bodies and in laboratory investigations of the angular scattering properties of particulate media[1]. It has been attributed to two processes. One, shadow hiding, is the elimination of shadows mutually cast between the regolith grains as the phase angle decreases[2]. The other is coherent constructive interference between rays of light traveling along identical but opposite paths in multiply scattering media (CBOE)[3,4,5,6]. We report the results of an investigation into the opposition surge of particulate materials of the same particle size and packing density but of differing reflectance. The measurements were made on the long arm goniometer at JPL. The phase angle studied varied from 0.05° to 5°. Samples of Al2O3, diamond, SiC, and B4C were presented with linearly and circularly polarized light from a laser of wavelength 0.633 µm. The uncompressed, 22-24 µm samples differed widely in reflectance. Many published models of CBOE suggest that as the materials become more absorbing the shape of the phase curve should become more rounded near 0°[7,8,9,10,11,12,13]. We find that, regardless of reflectance, the phase curve exhibits increasing slope with decreasing phase angle down to the angular limit of our measurement. It becomes more sharply peaked and does not become rounded. Our measurements of powdered materials, including lunar regolith samples[14,15,16], do not agree with current models of coherent backscatter, which predict a rounding and truncation of the opposition effect peak near zero phase. This lack of rounding is consistent with the hypothesis that very long light paths contribute to the CBOE of particulate materials including planetary regoliths. This work was performed at NASA's JPL under a grant from NASA's Planetary Geology / Geophysics program. References: [1] T. Gehrels, Astrophys. J. 123
MONA: An accurate two-phase well flow model based on phase slippage
Asheim, H.
1984-10-01
In two-phase flow, holdup and pressure loss are related to interfacial slippage. A model based on the slippage concept has been developed and tested using production well data from Forties, the Ekofisk area, and flowline data from Prudhoe Bay. The model developed turned out to be considerably more accurate than the standard models used for comparison.
Testing a Theoretical Model of Immigration Transition and Physical Activity.
Chang, Sun Ju; Im, Eun-Ok
2015-01-01
The purposes of the study were to develop a theoretical model to explain the relationships between immigration transition and midlife women's physical activity and to test the relationships among the major variables of the model. The theoretical model, which was developed based on transitions theory and the midlife women's attitudes toward physical activity theory, consists of 4 major variables: length of stay in the United States, country of birth, level of acculturation, and midlife women's physical activity. To test the theoretical model, a secondary analysis was conducted with data from 127 Hispanic women and 123 non-Hispanic (NH) Asian women in a national Internet study. Among the major variables of the model, length of stay in the United States was negatively associated with physical activity in Hispanic women. Level of acculturation in NH Asian women was positively correlated with women's physical activity. Country of birth and level of acculturation were significant factors that influenced physical activity in both Hispanic and NH Asian women. The findings support the theoretical model that was developed to examine relationships between immigration transition and physical activity; they show that immigration transition can play an essential role in influencing health behaviors of immigrant populations in the United States. The theoretical model can be widely used in nursing practice and research that focus on immigrant women and their health behaviors. Health care providers need to consider the influences of immigration transition to promote immigrant women's physical activity. PMID:26502554
Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images
NASA Technical Reports Server (NTRS)
Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.
1999-01-01
Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.
NASA Astrophysics Data System (ADS)
Grasso, Robert J.; Russo, Leonard P.; Barrett, John L.; Odhner, Jefferson E.; Egbert, Paul I.
2007-09-01
BAE Systems presents the results of a program to model the performance of Raman LIDAR systems for the remote detection of atmospheric gases, air-polluting hydrocarbons, chemical and biological weapons, and other molecular species of interest. Our model, which integrates remote Raman spectroscopy, 2D and 3D LADAR, and USAF atmospheric propagation codes, permits accurate determination of the performance of a Raman LIDAR system. The very high predictive accuracy of our model is due to the very accurate calculation of the differential scattering cross section for the species of interest at user-selected wavelengths. We show excellent correlation of our calculated cross section data, used in our model, with experimental data obtained from both laboratory measurements and the published literature. In addition, the use of standard USAF atmospheric models provides very accurate determination of the atmospheric extinction at both the excitation and Raman-shifted wavelengths.
Theoretical models for the conformations and the protonation of triacetonamine.
Navajas, C C; Montero, L A; La Serna, B
1990-12-01
In this paper we propose theoretical models for the conformations of triacetonamine and protonated triacetonamine (Vincubine, an anticancer chemotherapeutic agent) developed by quantum and molecular mechanics techniques. We discuss the theoretical factors which are involved in the stabilization of the conformations calculated by the MNDO, MM2 and COPEANE methods and show the relative percent abundance of each molecular shape. Graphic representations of the conformers are depicted. PMID:1965442
Culture and Developmental Trajectories: A Discussion on Contemporary Theoretical Models
ERIC Educational Resources Information Center
de Carvalho, Rafael Vera Cruz; Seidl-de-Moura, Maria Lucia; Martins, Gabriela Dal Forno; Vieira, Mauro Luís
2014-01-01
This paper aims to describe, compare and discuss the theoretical models proposed by Patricia Greenfield, Çigdem Kagitçibasi and Heidi Keller. Their models have the common goal of understanding the developmental trajectories of self based on dimensions of autonomy and relatedness that are structured according to specific cultural and environmental…
Leng, Wei; Ju, Lili; Gunzburger, Max; Price, Stephen; Ringler, Todd
2012-01-01
The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.
Theoretical models on prediction of thermal property of nanofluids
NASA Astrophysics Data System (ADS)
Shalimba, Veikko; Skočilasová, Blanka
2014-08-01
This paper deals with theoretical models for predicting the thermophysical properties of iron nanoparticles in a base fluid. The performance of a heat transfer fluid strongly influences the size, weight, and cost of heat transfer systems, so high-performance heat transfer fluids are important in many industries. Over the last decades, nanofluids have been developed for this purpose. Numerous studies report that nanofluids exhibit enhanced thermal properties, notably thermal conductivity, and theoretical models for predicting the enhanced thermal conductivity have been established, although the underlying mechanisms of the enhancement are still debated and not fully understood. In this paper, theoretical analytical models for predicting the thermal conductivity of iron nanoparticles in a Jatropha oil base fluid are discussed. The work arises from projects carried out at UJEP, FPTM, Department of Machines and Mechanics, in cooperation with the Polytechnic of Namibia, Department of Mechanical Engineering.
Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue
Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William
2008-01-01
In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately the effects of various physiological properties of the skin in the case of subcutaneous vein imaging compared to existing models. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling and compare our results with those obtained with a well established Monte Carlo model and with real skin reflectance images.
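The core of such a Monte Carlo light propagation model can be sketched with a heavily simplified random walk: photon packets take exponentially distributed steps into a semi-infinite medium, lose weight to absorption, scatter according to a Henyey-Greenstein phase function, and are tallied when they re-emerge at the surface. The optical coefficients and weight cutoff below are illustrative assumptions, and the sketch tracks only the depth component of the direction, not the authors' full 3D skin model.

```python
import math
import random

random.seed(4)

mu_a, mu_s, g = 0.1, 10.0, 0.9     # absorption, scattering (1/mm), anisotropy (assumed values)
mu_t = mu_a + mu_s
n_photons = 1000

def sample_hg_cos(g):
    """Sample cos(theta) from the Henyey-Greenstein phase function."""
    if g == 0:
        return 2 * random.random() - 1
    s = (1 - g * g) / (1 - g + 2 * g * random.random())
    return (1 + g * g - s * s) / (2 * g)

reflected = 0.0
for _ in range(n_photons):
    z, w, uz = 0.0, 1.0, 1.0        # depth, packet weight, depth direction cosine
    while w > 1e-3:
        z += -math.log(random.random()) / mu_t * uz   # exponential free path
        if z < 0:                   # packet re-emerged through the surface
            reflected += w
            break
        w *= mu_s / mu_t            # deposit the absorbed fraction of the weight
        ct = sample_hg_cos(g)       # scatter: new polar angle...
        st = math.sqrt(max(1 - ct * ct, 0.0))
        phi = 2 * math.pi * random.random()           # ...and azimuth
        uz = ct * uz + st * math.cos(phi) * math.sqrt(max(1 - uz * uz, 0.0))
        uz = max(-1.0, min(1.0, uz))

print("diffuse reflectance ~", reflected / n_photons)
```

A full skin model adds layers with distinct coefficients, refractive-index mismatch at boundaries, and the parametric vein geometry described in the abstract.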
A control theoretic model for piloted approach to landing.
NASA Technical Reports Server (NTRS)
Kleinman, D. L.; Baron, S.
1972-01-01
Using manned vehicle systems analysis, a model for manual approach to landing is developed. This model is developed and applied in the specific context of a problem of analytical evaluation of a pictorial display for longitudinal control of glide path errors. This makes it possible to discuss the model in concrete terms, and the availability of experimental data provides opportunities for checking the theoretical results obtained.
Empirical and theoretical models of terrestrial trapped radiation
Panasyuk, M.I.
1996-07-01
A survey of current Skobeltsyn Institute of Nuclear Physics, Moscow State University (INP MSU) empirical and theoretical models of particles (electrons, protons, and heavier ions) of the Earth's radiation belts developed to date is presented. Results of intercomparison of the different models as well as comparison with experimental data are reported. Aspects of further development of radiation condition modelling in near-Earth space are discussed. © 1996 American Institute of Physics.
Accurate FDTD modelling for dispersive media using rational function and particle swarm optimisation
NASA Astrophysics Data System (ADS)
Chung, Haejun; Ha, Sang-Gyu; Choi, Jaehoon; Jung, Kyung-Young
2015-07-01
This article presents an accurate finite-difference time-domain (FDTD) dispersive model suitable for complex dispersive media. A quadratic complex rational function (QCRF) is used to characterise their dispersion relations. To obtain accurate QCRF coefficients, we use an analytical approach and particle swarm optimisation (PSO) simultaneously. Specifically, the analytical approach is used to obtain the QCRF matrix-solving equation, and PSO is applied to adjust a weighting function of this equation. Numerical examples are used to illustrate the validity of the proposed FDTD dispersion model.
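The coefficient-fitting step can be illustrated with a minimal particle swarm optimiser tuning a first-order complex rational function against samples of a Debye medium. The QCRF of the article is quadratic and paired with an analytical matrix equation; the model order, Debye parameters, and PSO constants here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "measured" dispersion: a Debye medium eps = 2 + 3/(1 + j*w*tau), tau = 1/w0
w0 = 1e9                                    # normalising frequency (rad/s), assumed
w = np.logspace(8, 10, 25)                  # sample frequencies
eps_data = 2.0 + 3.0 / (1.0 + 1j * w / w0)

def model(p, w):
    """First-order complex rational function in s = j*w/w0."""
    a0, a1, b1 = p
    s = 1j * w / w0
    return (a0 + a1 * s) / (1.0 + b1 * s)

def error(p):
    return np.sum(np.abs(model(p, w) - eps_data) ** 2)

# Minimal particle swarm optimiser over the three coefficients
n_part, n_iter = 40, 300
pos = rng.uniform(-6, 6, size=(n_part, 3))  # particle positions
vel = np.zeros_like(pos)
pbest = pos.copy()                          # personal bests
pbest_err = np.array([error(p) for p in pos])
g = pbest[np.argmin(pbest_err)].copy()      # global best
g_err = pbest_err.min()
init_err = g_err

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_part, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    errs = np.array([error(p) for p in pos])
    better = errs < pbest_err
    pbest[better], pbest_err[better] = pos[better], errs[better]
    if pbest_err.min() < g_err:
        g, g_err = pbest[np.argmin(pbest_err)].copy(), pbest_err.min()

print("fit error dropped from %.3g to %.3g" % (init_err, g_err))
```

For this synthetic target the exact solution is (a0, a1, b1) = (5, 2, 1), since 2 + 3/(1+s) = (5+2s)/(1+s); the swarm should approach it closely.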
Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.
2015-01-01
Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational. PMID:25615870
The Theoretical Basis of the Effective School Improvement Model (ESI)
ERIC Educational Resources Information Center
Scheerens, Jaap; Demeuse, Marc
2005-01-01
This article describes the process of theoretical reflection that preceded the development and empirical verification of a model of "effective school improvement". The focus is on basic mechanisms that could be seen as underlying "getting things in motion" and change in education systems. Four mechanisms are distinguished: synoptic rational…
Healing from Childhood Sexual Abuse: A Theoretical Model
ERIC Educational Resources Information Center
Draucker, Claire Burke; Martsolf, Donna S.; Roller, Cynthia; Knapik, Gregory; Ross, Ratchneewan; Stidham, Andrea Warner
2011-01-01
Childhood sexual abuse is a prevalent social and health care problem. The processes by which individuals heal from childhood sexual abuse are not clearly understood. The purpose of this study was to develop a theoretical model to describe how adults heal from childhood sexual abuse. Community recruitment for an ongoing broader project on sexual…
Organizational Learning and Product Design Management: Towards a Theoretical Model.
ERIC Educational Resources Information Center
Chiva-Gomez, Ricardo; Camison-Zornoza, Cesar; Lapiedra-Alcami, Rafael
2003-01-01
Case studies of four Spanish ceramics companies were used to construct a theoretical model of 14 factors essential to organizational learning. One set of factors is related to the conceptual-analytical phase of the product design process and the other to the creative-technical phase. All factors contributed to efficient product design management…
A Generalized Information Theoretical Model for Quantum Secret Sharing
NASA Astrophysics Data System (ADS)
Bai, Chen-Ming; Li, Zhi-Hui; Xu, Ting-Ting; Li, Yong-Ming
2016-07-01
An information theoretical model for quantum secret sharing was introduced by H. Imai et al. (Quantum Inf. Comput. 5(1), 69-80, 2005) and analyzed using quantum information theory. In this paper, we analyze this information theoretical model using the properties of the quantum access structure. Based on this analysis we propose a generalized model definition for quantum secret sharing schemes. In our model, more quantum access structures can be realized by the generalized quantum secret sharing schemes than by the previous ones. In addition, we analyze two kinds of important quantum access structures to illustrate the existence and rationality of the generalized quantum secret sharing schemes, and we consider the security of the scheme through simple examples.
Theoretical modelling of the feedback stabilization of external MHD modes in toroidal geometry
NASA Astrophysics Data System (ADS)
Chance, M. S.; Chu, M. S.; Okabayashi, M.; Turnbull, A. D.
2002-03-01
A theoretical framework for understanding the feedback mechanism for stabilization of external MHD modes has been formulated. Efficient computational tools - the GATO stability code coupled with a substantially modified VACUUM code - have been developed to effectively design viable feedback systems against these modes. The analysis assumed a thin resistive shell and a feedback coil structure accurately modelled in θ and φ, albeit with only a single harmonic variation in φ. Time constants and induced currents in the enclosing resistive shell are calculated. An optimized configuration based on an idealized model has been computed for the DIII-D device. Up to 90% of the effectiveness of an ideal wall can be achieved.
Identification of accurate nonlinear rainfall-runoff models with unique parameters
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N.
2009-04-01
We propose a strategy to identify models with unique parameters that yield accurate streamflow predictions, given a time series of rainfall inputs. The procedure consists of five general steps. First, an a priori range of model structures is specified based on prior general and site-specific hydrologic knowledge. To this end, we rely on a flexible model code that allows a specification of a wide range of model structures, from simple to complex. Second, using global optimization, each model structure is calibrated to a record of rainfall-runoff data, yielding optimal parameter values for each model structure. Third, accuracy of each model structure is determined by estimating model prediction errors using independent validation and statistical theory. Fourth, parameter identifiability of each calibrated model structure is estimated by means of Markov chain Monte Carlo (MCMC) simulation. Finally, an assessment is made of each model structure in terms of its accuracy in mimicking rainfall-runoff processes (step 3) and the uniqueness of its parameters (step 4). The procedure results in the identification of the most complex and accurate model supported by the data, without causing parameter equifinality. As such, it provides insight into the information content of the data for identifying nonlinear rainfall-runoff models. We illustrate the method using rainfall-runoff data records from several MOPEX basins in the US.
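Steps 2 and 4 of such a procedure can be sketched for a toy one-parameter linear-reservoir model: calibrate the storage coefficient against a synthetic rainfall-runoff record, then run a short Metropolis (MCMC) chain to see how sharply the data constrain the parameter. The model structure, noise level, and tuning constants are illustrative assumptions, not the authors' flexible model code.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(k, rain):
    """Linear reservoir: storage S, outflow Q = k*S each time step."""
    S, Q = 0.0, np.empty_like(rain)
    for t, P in enumerate(rain):
        S += P
        Q[t] = k * S
        S -= Q[t]
    return Q

rain = rng.exponential(2.0, size=200)                    # synthetic rainfall forcing
q_obs = simulate(0.3, rain) + rng.normal(0, 0.05, 200)   # noisy "observed" flow

def sse(k):
    return np.sum((simulate(k, rain) - q_obs) ** 2)

# Step 2: calibration by a simple global search over the feasible range
grid = np.linspace(0.01, 0.99, 99)
k_hat = grid[np.argmin([sse(k) for k in grid])]

# Step 4: Metropolis sampler to probe parameter identifiability
sigma2 = 0.05 ** 2
def log_like(k):
    return -sse(k) / (2 * sigma2)

k, ll, samples = k_hat, log_like(k_hat), []
for _ in range(2000):
    k_new = k + rng.normal(0, 0.02)                      # random-walk proposal
    if 0 < k_new < 1:
        ll_new = log_like(k_new)
        if np.log(rng.random()) < ll_new - ll:           # Metropolis acceptance
            k, ll = k_new, ll_new
    samples.append(k)
post_sd = np.std(samples[500:])                          # narrow posterior => well identified

print("k_hat=%.2f, posterior sd=%.4f" % (k_hat, post_sd))
```

In the full procedure this identifiability check is repeated for every parameter of every candidate structure; a wide or multimodal posterior flags equifinality.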
Electromechanical properties of smart aggregate: theoretical modeling and experimental validation
NASA Astrophysics Data System (ADS)
Wang, Jianjun; Kong, Qingzhao; Shi, Zhifei; Song, Gangbing
2016-09-01
Smart aggregate (SA), a piezoceramic-based multi-functional device, is formed by sandwiching two lead zirconate titanate (PZT) patches with copper shielding between a pair of solid machined cylindrical marble blocks with epoxy. Previous research has successfully demonstrated the capability and reliability of versatile SAs for monitoring the structural health of concrete structures. However, previous work concentrated mainly on the applications of SAs in structural health monitoring; no adequate theoretical model of SAs had been proposed. In this paper, the electromechanical properties of SAs were investigated using a proposed theoretical model. Based on the one-dimensional linear theory of piezoelasticity, the dynamic response of an SA subjected to an external harmonic voltage was solved. Further, the electric impedance of the SA was computed, and the resonance and anti-resonance frequencies were calculated from the derived equations. Numerical analysis was conducted to discuss the effects of the thickness of the epoxy layer and the dimensions of the PZT patch on the fundamental resonance and anti-resonance frequencies as well as the corresponding electromechanical coupling factor. The dynamic solutions based on the proposed theoretical model were further experimentally verified with two SA samples. The fundamental resonance and anti-resonance frequencies of the SAs show good agreement between theoretical and experimental results. The presented analysis and results contribute to the overall understanding of SA properties and help to optimize the working frequencies of SAs in structural health monitoring of civil structures.
Reynolds, Andrew M.; Lihoreau, Mathieu; Chittka, Lars
2013-01-01
Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model predicts accurately 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules are highly individually different; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform to generate novel hypotheses and refine our understanding about how small-brained insects develop a representation of space and use it to navigate in complex and dynamic environments. PMID:23505353
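The essence of an iterative improvement heuristic of this kind can be sketched in a few lines: over repeated foraging bouts, the forager tries a small random variation of its current circuit and keeps the variant only when it shortens the total travel distance. The flower layout and the swap move below are illustrative assumptions, not the authors' calibrated traplining model.

```python
import numpy as np

rng = np.random.default_rng(2)

flowers = rng.uniform(0, 100, size=(10, 2))    # 10 flowers in a 100 m square (assumed layout)
nest = np.array([50.0, 50.0])

def circuit_length(order):
    """Total distance of nest -> flowers in `order` -> nest."""
    path = np.vstack([nest, flowers[order], nest])
    return np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))

route = rng.permutation(10)                    # first bout: arbitrary route
best = circuit_length(route)
history = [best]

for bout in range(500):                        # repeated foraging bouts
    trial = route.copy()
    i, j = rng.choice(10, 2, replace=False)    # vary the circuit: swap two visit positions
    trial[i], trial[j] = trial[j], trial[i]
    d = circuit_length(trial)
    if d < best:                               # reinforce only shorter circuits
        route, best = trial, d
    history.append(best)

print("route length: %.1f m -> %.1f m" % (history[0], best))
```

The authors' model additionally weights route choice probabilistically and includes a spatial search component for discovering flowers, but the reinforce-if-shorter loop above is the core mechanism.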
Material Models for Accurate Simulation of Sheet Metal Forming and Springback
NASA Astrophysics Data System (ADS)
Yoshida, Fusahito
2010-06-01
For anisotropic sheet metals, modeling of anisotropy and the Bauschinger effect is discussed in the framework of the Yoshida-Uemori kinematic hardening model combined with anisotropic yield functions. The performance of the models in predicting yield loci and cyclic stress-strain responses for several types of steel and aluminum sheets is demonstrated by comparing the numerical simulation results with the corresponding experimental observations. From examples of FE simulation of sheet metal forming and springback, it is concluded that modeling both the anisotropy and the Bauschinger effect is essential for accurate numerical simulation.
Development of modified cable models to simulate accurate neuronal active behaviors
2014-01-01
In large network and single three-dimensional (3-D) neuron simulations, high computing speed dictates using reduced cable models to simulate neuronal firing behaviors. However, these models are unwarranted under active conditions and lack accurate representation of dendritic active conductances that greatly shape neuronal firing. Here, realistic 3-D (R3D) models (which contain full anatomical details of dendrites) of spinal motoneurons were systematically compared with their reduced single unbranched cable (SUC, which reduces the dendrites to a single electrically equivalent cable) counterpart under passive and active conditions. The SUC models matched the R3D model's passive properties but failed to match key active properties, especially active behaviors originating from dendrites. For instance, persistent inward currents (PIC) hysteresis, frequency-current (FI) relationship secondary range slope, firing hysteresis, plateau potential partial deactivation, staircase currents, synaptic current transfer ratio, and regional FI relationships were not accurately reproduced by the SUC models. The dendritic morphology oversimplification and lack of dendritic active conductances spatial segregation in the SUC models caused significant underestimation of those behaviors. Next, SUC models were modified by adding key branching features in an attempt to restore their active behaviors. The addition of primary dendritic branching only partially restored some active behaviors, whereas the addition of secondary dendritic branching restored most behaviors. Importantly, the proposed modified models successfully replicated the active properties without sacrificing model simplicity, making them attractive candidates for running R3D single neuron and network simulations with accurate firing behaviors. The present results indicate that using reduced models to examine PIC behaviors in spinal motoneurons is unwarranted. PMID:25277743
NASA Astrophysics Data System (ADS)
Rumple, Christopher; Krane, Michael; Richter, Joseph; Craven, Brent
2013-11-01
The mammalian nose is a multi-purpose organ that houses a convoluted airway labyrinth responsible for respiratory air conditioning, filtering of environmental contaminants, and chemical sensing. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of respiratory airflow and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture an anatomically-accurate transparent model for stereoscopic particle image velocimetry (SPIV) measurements. Challenges in the design and manufacture of an index-matched anatomical model are addressed. PIV measurements are presented, which are used to validate concurrent computational fluid dynamics (CFD) simulations of mammalian nasal airflow. Supported by the National Science Foundation.
A sequential decision-theoretic model for medical diagnostic system.
Li, Aiping; Jin, Songchang; Zhang, Lumin; Jia, Yan
2015-01-01
Although diagnostic expert systems using a knowledge base that models the decision-making of traditional experts can provide important information to non-experts, they tend to duplicate the errors made by experts. Decision-theoretic models (DTMs) are therefore very useful in expert systems, since they guard against incorrect reasoning under uncertainty. For the diagnostic expert system, the corresponding DTM and algorithms are studied, and a sequential diagnostic decision-theoretic model based on a Bayesian network is given. In the model, the alternative features are categorized into two classes (disease features and test features), and an algorithm for the prior of each test is provided. How different features affect the weights of other features is also discussed. A Bayesian network is adopted for uncertainty representation and propagation. The model can help knowledge engineers model the knowledge involved in sequential diagnosis and decide the priority of alternative evidence. A practical example of the model is also presented: at any time during the diagnostic process the expert is provided with a dynamically updated list of suggested tests in order to support the decision about which test to execute next. The results show it performs better than the traditional, experience-based diagnostic model. PMID:26410326
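The dynamically updated test list can be illustrated with a two-disease toy example: each candidate test is scored by its expected reduction in the entropy of the current disease belief, and the highest-scoring test is suggested next. The probabilities and the myopic information-gain criterion are illustrative assumptions, not the paper's full Bayesian-network arithmetic.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete belief."""
    p = np.asarray(p)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

prior = np.array([0.7, 0.3])                  # belief over diseases d1, d2 (assumed)

# P(test positive | disease) for each candidate test (assumed values)
tests = {
    "t_specific": np.array([0.9, 0.1]),       # discriminates the two diseases well
    "t_uninformative": np.array([0.5, 0.5]),  # same rate under both diseases
}

def expected_info_gain(belief, p_pos_given_d):
    """Myopic value of a test: prior entropy minus expected posterior entropy."""
    p_pos = np.sum(belief * p_pos_given_d)
    post_pos = belief * p_pos_given_d / p_pos            # Bayes update, positive result
    post_neg = belief * (1 - p_pos_given_d) / (1 - p_pos)  # Bayes update, negative result
    exp_post = p_pos * entropy(post_pos) + (1 - p_pos) * entropy(post_neg)
    return entropy(belief) - exp_post

gains = {name: expected_info_gain(prior, p) for name, p in tests.items()}
best_test = max(gains, key=gains.get)
print(best_test, gains)
```

After the chosen test is executed, the belief is replaced by the corresponding posterior and the ranking is recomputed, yielding the dynamically updated suggestion list.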
A theoretical model for lunar surface material thermal conductivity.
NASA Technical Reports Server (NTRS)
Khader, M. S.; Vachon, R. I.
1973-01-01
This paper presents a theoretical thermal conductivity model for the uppermost layer of lunar surface material under the lunar vacuum environment. The model assumes that the lunar soil can be simulated by spherical particles in contact with each other and that the effective thermal conductivity is a function of depth, temperature, porosity, particle dimension, and mechanical-thermal properties of the solid particles. Two modes of heat transport are considered, conduction and radiation - with emphasis on the contact resistance between particles. The model gives effective conductivity values that compare favorably with the experimental data from lunar surface samples obtained on Apollo 11 and 12 missions.
Methodology to set up accurate OPC model using optical CD metrology and atomic force microscopy
NASA Astrophysics Data System (ADS)
Shim, Yeon-Ah; Kang, Jaehyun; Lee, Sang-Uk; Kim, Jeahee; Kim, Keeho
2007-03-01
For the 90 nm node and beyond, a smaller critical dimension (CD) control budget is required, and ways to achieve good CD uniformity are needed. Moreover, optical proximity correction (OPC) for the sub-90 nm node demands more accurate wafer CD data in order to improve the accuracy of the OPC model. Scanning electron microscopy (SEM) was the typical method for measuring CD until the ArF process. However, SEM can seriously damage the sample, such as shrinkage of the photoresist (PR) caused by the high-energy electron beam degrading the weak chemical structure of ArF PR. In fact, about 5 nm of CD narrowing occurs when we measure CD using CD-SEM in the ArF photo process. Optical CD metrology (OCD) and atomic force microscopy (AFM) have been considered as methods for measuring CD without damaging organic materials. The OCD and AFM measurement systems also have the merits of speed, ease of use, and accurate data. For model-based OPC, the model is generated using CD data of test patterns transferred onto the wafer. In this study we discuss generating an accurate OPC model using OCD and AFM measurement systems.
Can phenological models predict tree phenology accurately under climate change conditions?
NASA Astrophysics Data System (ADS)
Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry
2014-05-01
The onset of the growing season of trees has advanced globally by 2.3 days/decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is however not linear, because temperature has a dual effect on bud development: on one hand, low temperatures are necessary to break bud dormancy, and on the other hand, higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on the distribution and productivity of forest trees, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and assume that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break that varies from year to year. So far, one-phase models have been able to accurately predict tree budburst and flowering under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions under future climate conditions. It is indeed well known that a lack of low temperature results in abnormal patterns of bud break and development in temperate fruit trees. Accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay
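A minimal one-phase (thermal-time) model of the kind described can be sketched as follows: daily temperatures above a base threshold are accumulated from a fixed start date, and budburst is predicted on the day the sum reaches a critical forcing value. The parameter values are illustrative assumptions, not fitted coefficients from the literature.

```python
import numpy as np

def one_phase_budburst(daily_temp, t_base=5.0, f_crit=120.0, start_day=0):
    """Thermal-time (ecodormancy-only) model: budburst occurs when
    degree-days above t_base, accumulated from start_day, reach f_crit.
    Returns the day index, or None if the forcing requirement is never met."""
    forcing = 0.0
    for day in range(start_day, len(daily_temp)):
        forcing += max(daily_temp[day] - t_base, 0.0)
        if forcing >= f_crit:
            return day
    return None

# Synthetic spring warming: temperature rises linearly from 0 to 20 degC over 120 days
temps = np.linspace(0.0, 20.0, 120)
print("predicted budburst day:", one_phase_budburst(temps))
```

A two-phase model would prepend an analogous chilling accumulation that must reach its own threshold before the forcing sum above is allowed to start, making the dormancy break date vary from year to year.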
Structure of plant photosystem I revealed by theoretical modeling.
Jolley, Craig; Ben-Shem, Adam; Nelson, Nathan; Fromme, Petra
2005-09-30
Photosystem (PS) I is a large membrane protein complex vital for oxygenic photosynthesis, one of the most important biological processes on the planet. We present an "atomic" model of higher plant PSI, based on theoretical modeling using the recent 4.4 Å x-ray crystal structure of PSI from pea. Because of the lack of information on the amino acid side chains in the x-ray structural model and the high cofactor content in this system, novel modeling techniques were developed. Our model reveals some important structural features of plant PSI that were not visible in the crystal structure, and it sheds light on the evolutionary relationship between plant and cyanobacterial PSI. PMID:15955818
Building an accurate 3D model of a circular feature for robot vision
NASA Astrophysics Data System (ADS)
Li, L.
2012-06-01
In this paper, an accurate 3D model of a circular feature is built, with error compensation, for robot vision. We propose an efficient method of fitting ellipses to data points by minimizing the algebraic distance subject to the constraint that the conic be an ellipse, solving for the ellipse parameters with a direct ellipse fitting method. By analysing the 3D geometrical representation in a perspective projection scheme, the 3D position of a circular feature with known radius can then be obtained. A set of identical circles, machined on a calibration board with known centres, was imaged with a camera and analysed with the proposed model. Experimental results show that our method is more accurate than other methods.
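The algebraic-distance step can be illustrated with the classic direct ellipse fit (Fitzgibbon et al.), which minimises the algebraic distance subject to the ellipse constraint 4ac - b^2 = 1 via a generalised eigenproblem. This numpy sketch is a generic illustration rather than the paper's exact formulation, and it assumes slightly noisy data, since with exactly noiseless conic points the scatter matrix becomes rank-deficient.

```python
import numpy as np

def fit_ellipse(x, y):
    """Direct least-squares fit of a x^2 + b xy + c y^2 + d x + e y + f = 0,
    minimising the algebraic distance under the ellipse constraint 4ac - b^2 > 0."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                     # scatter matrix
    C = np.zeros((6, 6))            # constraint matrix encoding 4ac - b^2
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    # Generalised eigenproblem S v = lambda C v, solved via eig(inv(S) C)
    _, vecs = np.linalg.eig(np.linalg.solve(S, C))
    vecs = np.real(vecs)
    disc = 4 * vecs[0] * vecs[2] - vecs[1] ** 2   # per-column discriminant
    if disc.max() <= 0:
        raise ValueError("no elliptical solution found")
    v = vecs[:, np.argmax(disc)]    # the unique ellipse solution
    return v / np.linalg.norm(v)

# Demo: noisy samples of an ellipse centred at (1, 2) with semi-axes 3 and 1
rng = np.random.default_rng(3)
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
x = 1 + 3 * np.cos(t) + rng.normal(0, 0.01, t.size)
y = 2 + np.sin(t) + rng.normal(0, 0.01, t.size)
a, b, c, d, e, f = fit_ellipse(x, y)
cx, cy = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])  # ellipse centre from the conic
print("recovered centre: (%.3f, %.3f)" % (cx, cy))
```

In the paper's setting, the fitted image-plane ellipse is then back-projected through the perspective camera model to recover the 3D circle pose.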
Seth A Veitzer
2008-10-21
Effects of stray electrons are a main factor limiting performance of many accelerators. Because heavy-ion fusion (HIF) accelerators will operate in regimes of higher current and with walls much closer to the beam than accelerators operating today, stray electrons might have a large, detrimental effect on the performance of an HIF accelerator. A primary source of stray electrons is electrons generated when halo ions strike the beam pipe walls. There is some research on these types of secondary electrons for the HIF community to draw upon, but this work is missing one crucial ingredient: the effect of grazing incidence. The overall goal of this project was to develop the numerical tools necessary to accurately model the effect of grazing incidence on the behavior of halo ions in an HIF accelerator, and further, to provide accurate models of heavy ion stopping powers with applications to ICF, WDM, and HEDP experiments.
Oksel, Ceyda; Winkler, David A; Ma, Cai Y; Wilkins, Terry; Wang, Xue Z
2016-09-01
The number of engineered nanomaterials (ENMs) being exploited commercially is growing rapidly, due to the novel properties they exhibit. Clearly, it is important to understand and minimize any risks to health or the environment posed by the presence of ENMs. Data-driven models that decode the relationships between the biological activities of ENMs and their physicochemical characteristics provide an attractive means of maximizing the value of scarce and expensive experimental data. Although such structure-activity relationship (SAR) methods have become very useful tools for modelling nanotoxicity endpoints (nanoSAR), they have limited robustness and predictivity and, most importantly, interpretation of the models they generate is often very difficult. New computational modelling tools or new ways of using existing tools are required to model the relatively sparse and sometimes lower quality data on the biological effects of ENMs. The most commonly used SAR modelling methods work best with large datasets, are not particularly good at feature selection, can be relatively opaque to interpretation, and may not account for nonlinearity in the structure-property relationships. To overcome these limitations, we describe the application of a novel algorithm, a genetic programming-based decision tree construction tool (GPTree) to nanoSAR modelling. We demonstrate the use of GPTree in the construction of accurate and interpretable nanoSAR models by applying it to four diverse literature datasets. We describe the algorithm and compare model results across the four studies. We show that GPTree generates models with accuracies equivalent to or superior to those of prior modelling studies on the same datasets. GPTree is a robust, automatic method for generation of accurate nanoSAR models with important advantages that it works with small datasets, automatically selects descriptors, and provides significantly improved interpretability of models. PMID:26956430
Ustinov, E A
2014-10-01
The commensurate-incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalizing the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for the analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs-Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed determination of phase diagrams of 2D argon layers on a uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides a highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of the temperature and other parameters of various 2D phase transitions. The applicability of the methodology is demonstrated on the krypton-graphite system. Analysis of the phase diagram of the krypton molecular layer and the thermodynamic functions of coexisting phases, together with a method for predicting adsorption isotherms, is presented, accounting for compression of the graphite due to the krypton-carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas-solid and solid-solid systems. PMID:25296827
Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang
2015-01-01
Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson’s ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers. PMID:26510769
Accurate and efficient halo-based galaxy clustering modelling with simulations
NASA Astrophysics Data System (ADS)
Zheng, Zheng; Guo, Hong
2016-06-01
Small- and intermediate-scale galaxy clustering can be used to establish the galaxy-halo connection to study galaxy formation and evolution and to tighten constraints on cosmological parameters. With the increasing precision of galaxy clustering measurements from ongoing and forthcoming large galaxy surveys, accurate models are required to interpret the data and extract relevant information. We introduce a method based on high-resolution N-body simulations to accurately and efficiently model the galaxy two-point correlation functions (2PCFs) in projected and redshift spaces. The basic idea is to tabulate all information of haloes in the simulations necessary for computing the galaxy 2PCFs within the framework of halo occupation distribution or conditional luminosity function. It is equivalent to populating galaxies to dark matter haloes and using the mock 2PCF measurements as the model predictions. Besides the accurate 2PCF calculations, the method is also fast and therefore enables an efficient exploration of the parameter space. As an example of the method, we decompose the redshift-space galaxy 2PCF into different components based on the type of galaxy pairs and show the redshift-space distortion effect in each component. The generalizations and limitations of the method are discussed.
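For orientation, the measurement that such model predictions are compared against can be sketched with a brute-force "natural" estimator, xi(r) = DD(r)/RR(r) - 1, built from normalized pair counts in a data catalogue and a random catalogue of the same geometry. Real survey analyses use tree-based pair counting and the Landy-Szalay estimator; the catalogue sizes and bins below are illustrative.

```python
import numpy as np

def xi_natural(data, randoms, bins):
    """Two-point correlation function via the natural estimator
    xi(r) = DD(r)/RR(r) - 1, with pair counts normalized by the total
    number of pairs. Brute-force O(N^2); fine as a sketch only."""
    def norm_pair_counts(pts):
        diff = pts[:, None, :] - pts[None, :, :]
        dist = np.sqrt((diff ** 2).sum(axis=-1))
        iu = np.triu_indices(len(pts), k=1)            # count each pair once
        counts = np.histogram(dist[iu], bins=bins)[0].astype(float)
        return counts / (len(pts) * (len(pts) - 1) / 2.0)
    return norm_pair_counts(data) / norm_pair_counts(randoms) - 1.0

rng = np.random.default_rng(42)
data = rng.random((500, 3))      # unclustered "galaxies" in a unit box
randoms = rng.random((1200, 3))  # random catalogue with the same geometry
xi = xi_natural(data, randoms, np.array([0.05, 0.10, 0.15, 0.20]))
```

For an unclustered Poisson catalogue, xi should scatter around zero within shot noise; a halo-populated mock would instead show the positive clustering signal the paper models.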
NASA Astrophysics Data System (ADS)
Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.
2015-09-01
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic waveform modes of spin weight −2 (the modes −2Yℓm) resolved by the NR code up to ℓ = 8. We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
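The reduced-order-modeling idea (build an orthonormal basis from a few expensive training solutions, then fit the expansion coefficients as smooth functions of the physical parameter) can be illustrated on a toy waveform family. The function, parameter range, and interpolation scheme below are stand-ins, far simpler than the NR surrogate pipeline:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200)
q_train = np.linspace(1.0, 10.0, 25)        # "mass-ratio" training values

def expensive_waveform(q):
    """Stand-in for a numerical-relativity run: a damped sine whose
    frequency and decay rate vary smoothly with the parameter q."""
    return np.sin(2 * np.pi * (2.0 + 0.2 * q) * t) * np.exp(-q * t / 5.0)

snapshots = np.array([expensive_waveform(q) for q in q_train])

# Reduced basis from an SVD of the training snapshots.
_, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 1 - 1e-10)) + 1
basis = Vt[:k]                              # (k, 200) orthonormal rows
coeffs = snapshots @ basis.T                # training expansion coefficients

def surrogate(q):
    """Cheap evaluation: interpolate each coefficient across q, then expand."""
    c = np.array([np.interp(q, q_train, coeffs[:, j]) for j in range(k)])
    return c @ basis
```

The expensive solver is called only for the training set; after that, each surrogate evaluation is a handful of interpolations and one small matrix product, which is the source of the millisecond-scale evaluation times quoted in the abstract.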
Chuine, Isabelle; Bonhomme, Marc; Legave, Jean-Michel; García de Cortázar-Atauri, Iñaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry
2016-10-01
The onset of the growing season of trees has advanced by 2.3 days per decade during the last 40 years in temperate Europe because of global warming. The effect of temperature on plant phenology is, however, not linear, because temperature has a dual effect on bud development: on one hand, low temperatures are necessary to break bud endodormancy, and, on the other hand, higher temperatures are necessary to promote bud cell growth afterward. Different process-based models have been developed in recent decades to predict the date of budbreak of woody species. They predict that global warming should delay or compromise endodormancy break at the species' equatorward range limits, leading to a delay in, or even the impossibility of, flowering or setting new leaves. These models are classically parameterized with flowering or budbreak dates only, with no information on the endodormancy break date, because such information is very scarce. Here, we evaluated the ability of a set of phenological models to accurately predict the endodormancy break dates of three fruit trees. Our results show that models calibrated solely with budbreak dates usually do not accurately predict the endodormancy break date. Providing the endodormancy break date for model parameterization results in a much more accurate prediction of that date, although with a higher error than for budbreak dates. Most importantly, we show that models not calibrated with endodormancy break dates can generate large discrepancies in forecasted budbreak dates when using climate scenarios, compared with models calibrated with endodormancy break dates. This discrepancy increases with mean annual temperature and is therefore strongest after 2050 in the southernmost regions. Our results argue for the urgent need for extensive measurements of endodormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future. PMID:27272707
Accurate protein structure modeling using sparse NMR data and homologous structure information
Thompson, James M.; Sgourakis, Nikolaos G.; Liu, Gaohua; Rossi, Paolo; Tang, Yuefeng; Mills, Jeffrey L.; Szyperski, Thomas; Montelione, Gaetano T.; Baker, David
2012-01-01
While information from homologous structures plays a central role in X-ray structure determination by molecular replacement, such information is rarely used in NMR structure determination because it can be incorrect, both locally and globally, when evolutionary relationships are inferred incorrectly or there has been considerable evolutionary structural divergence. Here we describe a method that allows robust modeling of protein structures of up to 225 residues by combining 1H, 13C, and 15N backbone and 13Cβ chemical shift data, distance restraints derived from homologous structures, and a physically realistic all-atom energy function. Accurate models are distinguished from inaccurate models generated using incorrect sequence alignments by requiring that (i) the all-atom energies of models generated using the restraints are lower than those of models generated in unrestrained calculations and (ii) the low-energy structures converge to within 2.0 Å backbone rmsd over 75% of the protein. Benchmark calculations on known structures and blind targets show that the method can accurately model protein structures, even with very remote homology information, to a backbone rmsd of 1.2-1.9 Å relative to the conventionally determined NMR ensembles and of 0.9-1.6 Å relative to X-ray structures for well-defined regions of the protein structures. This approach facilitates the accurate modeling of protein structures using backbone chemical shift data without the need for side-chain resonance assignments and extensive analysis of NOESY cross-peak assignments. PMID:22665781
Monte Carlo modeling provides accurate calibration factors for radionuclide activity meters.
Zagni, F; Cicoria, G; Lucconi, G; Infantino, A; Lodi, F; Marengo, M
2014-12-01
Accurate determination of calibration factors for radionuclide activity meters is crucial for quantitative studies and for the optimization step of radiation protection, as these detectors are widespread in radiopharmacy and nuclear medicine facilities. In this work we developed a Monte Carlo model of a widely used activity meter, using the Geant4 simulation toolkit; more precisely, the "PENELOPE" EM physics models were employed. The model was validated by means of several certified sources, traceable to primary activity standards, and other sources locally standardized with spectrometry measurements, plus other experimental tests. Great care was taken to accurately reproduce the geometrical details of the gas chamber and of the activity sources, each of which is different in shape and enclosed in a unique container. Both the relative calibration factors and the ionization current obtained with simulations were compared against experimental measurements; further tests were carried out, such as comparing the relative response of the chamber for a source placed at different positions. The results showed a satisfactory level of accuracy in the energy range of interest, with discrepancies lower than 4% for all the tested parameters. This shows that accurate Monte Carlo modeling of this type of detector is feasible using the low-energy physics models embedded in Geant4. The obtained Monte Carlo model establishes a powerful tool for first-instance determination of new calibration factors for non-standard radionuclides and custom containers when a reference source is not available. Moreover, the model provides an experimental setup for further research and optimization with regard to the materials and geometrical details of the measuring setup, such as the ionization chamber itself or the container configuration. PMID:25195174
Coarse-grained red blood cell model with accurate mechanical properties, rheology and dynamics.
Fedosov, Dmitry A; Caswell, Bruce; Karniadakis, George E
2009-01-01
We present a coarse-grained red blood cell (RBC) model with accurate and realistic mechanical properties, rheology and dynamics. The modeled membrane is represented by a triangular mesh which incorporates in-plane shear energy, bending energy, and area and volume conservation constraints. The macroscopic membrane elastic properties are imposed through semi-analytic theory, and are matched with those obtained in optical tweezers stretching experiments. Rheological measurements characterized by a time-dependent complex modulus are extracted from the membrane thermal fluctuations, and compared with those obtained from optical magnetic twisting cytometry. The results allow us to define a meaningful characteristic time of the membrane. The dynamics of RBCs observed in shear flow suggests that a purely elastic model for the RBC membrane is not appropriate, and therefore a viscoelastic model is required. The set of proposed analyses and numerical tests can be used as a complete model testbed in order to calibrate the modeled viscoelastic membranes to accurately represent RBCs in health and disease. PMID:19965026
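The area and volume conservation constraints mentioned above act on totals accumulated over the triangulated membrane. A minimal sketch of that accumulation for a closed mesh (generic geometry code, not the authors' implementation):

```python
import numpy as np

def mesh_area_volume(vertices, triangles):
    """Total surface area and enclosed volume of a closed triangulated
    membrane. Triangles must be consistently oriented (outward normals);
    the volume is accumulated as signed tetrahedra against the origin."""
    area = volume = 0.0
    for i, j, k in triangles:
        p, q, r = vertices[i], vertices[j], vertices[k]
        area += 0.5 * np.linalg.norm(np.cross(q - p, r - p))  # triangle area
        volume += np.dot(p, np.cross(q, r)) / 6.0             # signed tetra volume
    return area, volume
```

In an RBC-type model, penalty energies of the form k_a (A - A0)^2 / A0 and k_v (V - V0)^2 / V0 (coefficients illustrative) are built on exactly these two sums to hold the membrane at its reference area A0 and volume V0.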
Sethurajan, Athinthra Krishnaswamy; Krachkovskiy, Sergey A; Halalay, Ion C; Goward, Gillian R; Protas, Bartosz
2015-09-17
We used NMR imaging (MRI) combined with data analysis based on inverse modeling of the mass transport problem to determine ionic diffusion coefficients and transference numbers in electrolyte solutions of interest for Li-ion batteries. Sensitivity analyses have shown that accurate estimates of these parameters (as a function of concentration) are critical to the reliability of the predictions provided by models of porous electrodes. The inverse modeling (IM) solution was generated with an extension of the Nernst-Planck model for the transport of ionic species in electrolyte solutions. Concentration-dependent diffusion coefficients and transference numbers were derived using concentration profiles obtained from in situ (19)F MRI measurements. Material properties were reconstructed under minimal assumptions using methods of variational optimization to minimize the least-squares deviation between experimental and simulated concentration values, with the uncertainty of the reconstructions quantified using a Monte Carlo analysis. The diffusion coefficients obtained by pulsed field gradient NMR (PFG-NMR) fall within the 95% confidence bounds for the diffusion coefficient values obtained by the MRI+IM method. The MRI+IM method also yields the concentration dependence of the Li(+) transference number in agreement with trends obtained by electrochemical methods for similar systems and with predictions of theoretical models for concentrated electrolyte solutions, in marked contrast to the salt concentration dependence of transport numbers determined from PFG-NMR data. PMID:26247105
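The inverse-modeling step amounts to adjusting transport parameters until simulated concentration profiles match the measured ones in a least-squares sense. It can be sketched for the simplest possible case, a single constant diffusion coefficient fitted by a grid scan (the paper instead reconstructs concentration-dependent D(c) and transference numbers by variational optimization):

```python
import numpy as np

def diffuse(D, c0, dx, dt, steps):
    """Explicit FTCS integration of 1D Fickian diffusion with fixed-value
    (Dirichlet) boundaries; stable for D*dt/dx**2 <= 0.5."""
    c = c0.astype(float).copy()
    for _ in range(steps):
        c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
    return c

dx, dt, steps = 0.1, 1e-3, 2000
x = np.arange(0.0, 5.0 + dx / 2, dx)
c0 = np.where(x < 2.5, 1.0, 0.0)                 # initial concentration step

true_D = 0.8
measured = diffuse(true_D, c0, dx, dt, steps)    # synthetic "MRI" profile

# Inverse step: scan trial D values and keep the least-squares best.
trial_D = np.linspace(0.1, 2.0, 96)
misfit = [np.sum((diffuse(D, c0, dx, dt, steps) - measured) ** 2) for D in trial_D]
best_D = float(trial_D[int(np.argmin(misfit))])
```

A gradient-based optimizer replaces the grid scan in practice, and a Monte Carlo perturbation of the "measured" profiles, as in the paper, turns the point estimate into confidence bounds.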
A theoretical model for smoking prevention studies in preteen children.
McGahee, T W; Kemp, V; Tingen, M
2000-01-01
The age of the onset of smoking is on a continual decline, with the prime age of tobacco use initiation being 12-14 years. A weakness of the limited research conducted on smoking prevention programs designed for preteen children (ages 10-12) is the lack of a well-defined theoretical basis. A theoretical perspective is needed in order to make a meaningful transition from empirical analysis to application of knowledge. Bandura's Social Cognitive Theory (1977, 1986), the Theory of Reasoned Action (Ajzen & Fishbein, 1980), and other literature linking various concepts to smoking behaviors in preteens were used to develop a model that may be useful for smoking prevention studies in preteen children. PMID:12026266
Theoretical model for plasma expansion generated by hypervelocity impact
Ju, Yuanyuan; Zhang, Qingming; Zhang, Dongjiang; Long, Renrong; Chen, Li; Huang, Fenglei; Gong, Zizheng
2014-09-15
Hypervelocity impact experiments of a spherical LY12 aluminum projectile (diameter 6.4 mm) on an LY12 aluminum target (thickness 23 mm) have been conducted using a two-stage light gas gun, at projectile impact velocities of 5.2, 5.7, and 6.3 km/s. The experimental results show that a plasma phase transition appears under the current experimental conditions, and that the plasma expansion consists of accumulation, equilibrium, and attenuation stages. The plasma characteristic parameters decrease as the plasma expands outward and are proportional to the third power of the impact velocity, i.e., (T_e, n_e) ∝ v_p^3. Based on the experimental results, a theoretical model of the plasma expansion is developed, and the theoretical results are consistent with the experimental data.
Accurate Analytic Results for the Steady State Distribution of the Eigen Model
NASA Astrophysics Data System (ADS)
Huang, Guan-Rong; Saakian, David B.; Hu, Chin-Kun
2016-04-01
The Eigen model of molecular evolution is popular in studying complex biological and biomedical systems. Using the Hamilton-Jacobi equation method, we have derived analytic equations for the steady state distribution of the Eigen model with a relative accuracy of O(1/N), where N is the length of the genome. Our results can be applied in the case of small genome length N, as well as in cases where direct numerics cannot give accurate results, e.g., the tail of the distribution.
A Modified Theoretical Model of Intrinsic Hardness of Crystalline Solids
Dai, Fu-Zhi; Zhou, Yanchun
2016-01-01
Super-hard materials have been extensively investigated due to their practical importance in numerous industrial applications. To stimulate the design and exploration of new super-hard materials, microscopic models that elucidate the fundamental factors controlling hardness are desirable. The present work modified the theoretical model of intrinsic hardness proposed by Gao. In the modification, we emphasize the critical role of appropriately decomposing a crystal to pseudo-binary crystals, which should be carried out based on the valence electron population of each bond. After modification, the model becomes self-consistent and predicts well the hardness values of many crystals, including crystals composed of complex chemical bonds. The modified model provides fundamental insights into the nature of hardness, which can facilitate the quest for intrinsic super-hard materials. PMID:27604165
Theoretical models for Mars and their seismic properties
NASA Technical Reports Server (NTRS)
Okal, E. A.; Anderson, D. L.
1978-01-01
Theoretical seismic properties of the planet Mars are investigated on the basis of the various models which have been proposed for the internal composition of the planet. The latest interpretation of gravity-field data, assuming a lower value of the moment of inertia, would require a less dense mantle and a larger core than previous models. If Mars is chondritic in composition, the most reasonable models are an incompletely differentiated H-chondrite or a mixture of H-chondrites and carbonaceous chondrites. Seismic profiles, travel times, and free oscillation periods are computed for various models, with the aim of establishing which seismic data are crucial for deciding among the alternatives. A detailed discussion is given of the seismic properties which could, in principle, help answer the questions of whether Mars' core is liquid or solid and whether Mars has a partially molten asthenosphere in its upper mantle.
Establishment and validation for the theoretical model of the vehicle airbag
NASA Astrophysics Data System (ADS)
Zhang, Junyuan; Jin, Yang; Xie, Lizhe; Chen, Chao
2015-05-01
The current design and optimization of the occupant restraint system (ORS) are based on numerous physical tests and mathematical simulations. These two methods, though effective and accurate, are overly time-consuming and complex for the concept design phase of the ORS; a fast and direct design and optimization method is therefore needed in that phase. Since the airbag system is a crucial part of the ORS, in this paper a theoretical model of the vehicle airbag is established in order to clarify the interaction between occupants and airbags, and a fast design and optimization method for airbags in the concept design phase is built on the proposed theoretical model. First, a theoretical expression of the simplified mechanical relationship between the airbag's design parameters and the occupant response is developed based on classical mechanics; the momentum theorem and the ideal gas state equation are then adopted to link the airbag's design parameters to the occupant response. Using MATLAB, an iterative algorithm over discretized variables is applied to solve the proposed theoretical model for random inputs within a given range. Validations with MADYMO prove the validity and accuracy of this theoretical model for two principal design parameters, the inflated gas mass and the vent diameter, within their regular ranges. This research contributes to a deeper comprehension of the interaction between occupants and airbags and to a fast design and optimization method for airbags' principal parameters in the concept design phase, and it provides ranges of the airbag's initial design parameters for subsequent CAE simulations and physical tests.
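For a flavour of the two governing ingredients named in the abstract, the momentum theorem for the occupant and the ideal gas state equation for the bag, a drastically simplified time-stepping sketch follows. All numerical values (masses, areas, temperature, volume) are illustrative placeholders, not values from the paper; the bag volume is held fixed and the feedback of bag compression on pressure is omitted.

```python
import math

R_AIR = 287.0       # specific gas constant of air, J/(kg*K)  [assumed]
T_GAS = 300.0       # bag gas temperature, held constant, K   [assumed]
V_BAG = 0.06        # inflated bag volume, m^3                [assumed]
A_CONTACT = 0.08    # occupant-bag contact area, m^2          [assumed]
A_VENT = 3e-4       # vent hole area, m^2                     [assumed]
P_ATM = 101325.0    # ambient pressure, Pa
M_OCC = 35.0        # effective occupant mass on the bag, kg  [assumed]

def peak_deceleration(gas_mass, v0, dt=1e-4, t_end=0.1):
    """March the occupant velocity (momentum theorem) and the bag gas mass
    (ideal-gas law plus orifice venting) in time; return peak deceleration.
    Bag compression feedback on pressure is deliberately omitted."""
    v, peak = v0, 0.0
    for _ in range(int(t_end / dt)):
        p = gas_mass * R_AIR * T_GAS / V_BAG          # ideal gas state equation
        force = max(p - P_ATM, 0.0) * A_CONTACT       # net force on the occupant
        a = force / M_OCC
        peak = max(peak, a)
        v = max(v - a * dt, 0.0)                      # momentum theorem step
        rho = p / (R_AIR * T_GAS)
        dp = max(p - P_ATM, 0.0)
        gas_mass -= rho * A_VENT * math.sqrt(2.0 * dp / rho) * dt  # vent outflow
    return peak
```

Consistent with the paper's two principal design parameters, more inflator gas raises the peak occupant load while a larger vent relieves it; a concept-phase design loop would iterate on exactly these two knobs.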
Theoretical consideration of a microcontinuum model of graphene
NASA Astrophysics Data System (ADS)
Yang, Gang; Huang, Zaixing; Gao, Cun-Fa; Zhang, Bin
2016-05-01
A microcontinuum model of graphene is proposed based on micromorphic theory, in which the planar Bravais cell of the graphene crystal is taken as a basal element of finite size. Governing equations including the macro-displacements and the micro-deformations of the basal element are modified and derived in global coordinates. Since the independent degrees of freedom of the basal element are closely related to the modes of phonon dispersion, the secular equations in micromorphic form are obtained by substituting assumed harmonic wave equations into the governing equations, and are simplified further according to the properties of the phonon dispersion relations of two-dimensional (2D) crystals. Thus, the constitutive equations of the microcontinuum model are confirmed, with the constitutive constants determined by fitting experimental and theoretical phonon dispersion relations from the literature. By employing the 2D microcontinuum model, we obtain the sound velocities, Rayleigh velocity and elastic moduli of graphene, which show good agreement with available experimental or theoretical values, indicating that the current model is another efficient and reliable methodology for studying the mechanical behavior of graphene.
Healing from Childhood Sexual Abuse: A Theoretical Model
Draucker, Claire Burke; Martsolf, Donna S.; Roller, Cynthia; Knapik, Gregory; Ross, Ratchneewan; Stidham, Andrea Warner
2014-01-01
Childhood sexual abuse (CSA) is a prevalent social and healthcare problem. The processes by which individuals heal from CSA are not clearly understood. The purpose of this study was to develop a theoretical model to describe how adults heal from CSA. Community recruitment for an ongoing, broader project on sexual violence throughout the lifespan, referred to as the Sexual Violence Study, yielded a subsample of 48 women and 47 men who had experienced CSA. During semi-structured, open-ended interviews, they were asked to describe their experiences with healing from CSA and other victimization throughout their lives. Constructivist grounded theory methods were used with these data to develop constructs and hypotheses about healing. For the Sexual Violence Study, frameworks were developed to describe the participants' life patterns, parenting experiences, disclosures about sexual violence, spirituality, and altruism. Several analytic techniques were used to synthesize the findings of these frameworks to develop an overarching theoretical model that describes healing from CSA. The model includes four stages of healing, five domains of functioning, and six enabling factors that facilitate movement from one stage to the next. The findings indicate that healing is a complex and dynamic trajectory. The model can be used to alert clinicians to a variety of processes and enabling factors that facilitate healing in several domains and to guide discussions on important issues related to healing from CSA. PMID:21812546
Game-Theoretic Models of Information Overload in Social Networks
NASA Astrophysics Data System (ADS)
Borgs, Christian; Chayes, Jennifer; Karrer, Brian; Meeder, Brendan; Ravi, R.; Reagans, Ray; Sayedi, Amin
We study the effect of information overload on user engagement in an asymmetric social network like Twitter. We introduce simple game-theoretic models that capture rate competition between celebrities producing updates in such networks where users non-strategically choose a subset of celebrities to follow based on the utility derived from high quality updates as well as disutility derived from having to wade through too many updates. Our two variants model the two behaviors of users dropping some potential connections (followership model) or leaving the network altogether (engagement model). We show that under a simple formulation of celebrity rate competition, there is no pure strategy Nash equilibrium under the first model. We then identify special cases in both models when pure rate equilibria exist for the celebrities: For the followership model, we show existence of a pure rate equilibrium when there is a global ranking of the celebrities in terms of the quality of their updates to users. This result also generalizes to the case when there is a partial order consistent with all the linear orders of the celebrities based on their qualities to the users. Furthermore, these equilibria can be computed in polynomial time. For the engagement model, pure rate equilibria exist when all users are interested in the same number of celebrities, or when they are interested in at most two. Finally, we also give a finite though inefficient procedure to determine if pure equilibria exist in the general case of the followership model.
Accurate modeling of switched reluctance machine based on hybrid trained WNN
Song, Shoujun; Ge, Lefei; Ma, Shaojie; Zhang, Man
2014-04-15
According to the strong nonlinear electromagnetic characteristics of switched reluctance machines (SRMs), a novel accurate modeling method is proposed based on a hybrid-trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from the initial weights obtained via the improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance training accuracy, stability, and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built with the hybrid-trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of SRMs and verifies the effectiveness of the proposed modeling method.
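The hybrid scheme described above can be sketched in a few dozen lines: a crude genetic algorithm (selection, blend crossover, mutation) searches globally for initial parameters of a small wavelet network, and gradient descent then refines the best individual locally. The network architecture, GA settings, and target data below are illustrative stand-ins, not the authors' SRM model.

```python
import numpy as np

# Toy 1-D wavelet network y(x) = sum_k w_k * psi((x - t_k)/s_k), trained by
# GA (global search) followed by gradient descent (local refinement).

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
target = np.sin(2 * x) * np.exp(-0.3 * x**2)        # stand-in for measured data
K = 6                                               # number of wavelet units

def psi(u):                                         # Mexican-hat wavelet
    return (1 - u**2) * np.exp(-u**2 / 2)

def predict(p, x):
    w, t, s = p.reshape(3, K)                       # weights, shifts, scales
    return (w[:, None] * psi((x[None, :] - t[:, None]) / s[:, None])).sum(0)

def mse(p):
    return float(np.mean((predict(p, x) - target) ** 2))

def random_params():
    return np.concatenate([rng.normal(0, 1, K),     # weights
                           rng.uniform(-3, 3, K),   # translations
                           rng.uniform(0.3, 2, K)]) # scales

# Stage 1: genetic algorithm supplies good initial weights.
pop = [random_params() for _ in range(40)]
for _ in range(60):
    pop.sort(key=mse)
    parents = pop[:10]                              # elitist selection
    children = []
    while len(children) < 30:
        i, j = rng.choice(10, 2, replace=False)
        child = 0.5 * (parents[i] + parents[j]) + rng.normal(0, 0.1, 3 * K)
        child[2 * K:] = np.clip(child[2 * K:], 0.1, None)  # keep scales positive
        children.append(child)
    pop = parents + children
best = min(pop, key=mse)

# Stage 2: gradient descent (numerical gradient, greedy step control).
def num_grad(p, eps=1e-5):
    g = np.zeros_like(p)
    for k in range(p.size):
        d = np.zeros_like(p)
        d[k] = eps
        g[k] = (mse(p + d) - mse(p - d)) / (2 * eps)
    return g

p, loss, lr = best.copy(), mse(best), 0.1
for _ in range(300):
    cand = p - lr * num_grad(p)
    if mse(cand) < loss:
        p, loss = cand, mse(cand)
    else:
        lr *= 0.5                                   # backtrack on overshoot
```

The greedy acceptance rule guarantees the GD stage never worsens the GA's best solution, mirroring the paper's motivation for combining the two optimizers.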
Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models
NASA Astrophysics Data System (ADS)
Gomez, Hector; Reali, Alessandro; Sangalli, Giancarlo
2014-04-01
We propose new collocation methods for phase-field models. Our algorithms are based on isogeometric analysis, a new technology that makes use of functions from computational geometry, such as, for example, Non-Uniform Rational B-Splines (NURBS). NURBS exhibit excellent approximability and controllable global smoothness, and can represent exactly most geometries encapsulated in Computer Aided Design (CAD) models. These attributes permitted us to derive accurate, efficient, and geometrically flexible collocation methods for phase-field models. The performance of our method is demonstrated by several numerical examples of phase separation modeled by the Cahn-Hilliard equation. We feel that our method successfully combines the geometrical flexibility of finite elements with the accuracy and simplicity of pseudo-spectral collocation methods, and is a viable alternative to classical collocation methods.
Information-Theoretic Benchmarking of Land Surface Models
NASA Astrophysics Data System (ADS)
Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong
2016-04-01
Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize information in boundary conditions [1] several years before Gong et al. [2] discovered the particular type of benchmark that makes it possible to *quantify* the amount of information lost by an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. Here we extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and boundary conditions, the last of which describe the time-dependent details of each prediction scenario. The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is *not* dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC). Specifically, we looked at the ability of these models to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of net uncertainty, and model structures contributed
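The central quantity above, the information a model fails to use, can be illustrated with a toy calculation: compare the mutual information that the boundary conditions (forcing) carry about the observations with the information retained by a deliberately lossy model of that forcing. Mutual information is estimated with plug-in 2-D histograms; the data and "model" are synthetic stand-ins, not NLDAS output.

```python
import numpy as np

# Information-use illustration: available vs. used information, in nats.

rng = np.random.default_rng(1)
forcing = rng.normal(size=20000)                       # boundary condition
obs = np.tanh(forcing) + 0.1 * rng.normal(size=20000)  # "observed truth"
model = np.clip(forcing, -0.5, 0.5)                    # lossy model structure

def mutual_info(a, b, bins=30):
    # Plug-in estimator: MI is the KL divergence between the empirical joint
    # histogram and the product of its marginals (hence always >= 0).
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

available = mutual_info(forcing, obs)   # info the forcing carries about obs
used = mutual_info(model, obs)          # info the model's output retains
loss_fraction = 1.0 - used / available  # share of information lost
```

By the data-processing inequality, `used` cannot exceed `available` (up to binning noise), which is what makes the benchmark a hard bound on model performance.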
Beyond Ellipse(s): Accurately Modelling the Isophotal Structure of Galaxies with ISOFIT and CMODEL
NASA Astrophysics Data System (ADS)
Ciambur, B. C.
2015-09-01
This work introduces a new fitting formalism for isophotes that enables more accurate modeling of galaxies with non-elliptical shapes, such as disk galaxies viewed edge-on or galaxies with X-shaped/peanut bulges. Within this scheme, the angular parameter that defines quasi-elliptical isophotes is transformed from the commonly used, but inappropriate, polar coordinate to the "eccentric anomaly." This provides a superior description of deviations from ellipticity, better capturing the true isophotal shape. Furthermore, this makes it possible to accurately recover both the surface brightness profile, using the correct azimuthally averaged isophote, and the two-dimensional model of any galaxy: the hitherto ubiquitous, but artificial, cross-like features in residual images are completely removed. The formalism has been implemented into the Image Reduction and Analysis Facility tasks Ellipse and Bmodel to create the new tasks "Isofit" and "Cmodel." The new tools are demonstrated here with application to five galaxies, chosen to be representative case studies for several areas where this technique makes it possible to gain new scientific insight. Specifically: properly quantifying boxy/disky isophotes via the fourth harmonic order in edge-on galaxies, quantifying X-shaped/peanut bulges, higher-order Fourier moments for modeling bars in disks, and complex isophote shapes. Higher order (n > 4) harmonics now become meaningful and may correlate with structural properties, as boxiness/diskiness is known to do. This work also illustrates how the accurate construction, and subtraction, of a model from a galaxy image facilitates the identification and recovery of overlapping sources such as globular clusters and the optical counterparts of X-ray sources.
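The geometric crux above is that harmonic perturbations are taken in the eccentric anomaly psi (x = a cos psi, y = b sin psi) rather than the polar angle, so a single n = 4 term cleanly produces disky (a4 > 0) or boxy (a4 < 0) deviations from a pure ellipse. The simple radial-scaling perturbation and the amplitudes below are illustrative simplifications, not the task's exact parametrization.

```python
import numpy as np

# Quasi-elliptical isophotes with an n = 4 harmonic in the eccentric anomaly.

a, b = 10.0, 6.0                                   # semi-axes (arbitrary units)
psi = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)   # eccentric anomaly

def isophote(a4=0.0):
    # Sketch: scale the ellipse radially by (1 + a4 cos 4*psi); note the
    # harmonic argument is psi, not the polar angle of each point.
    scale = 1.0 + a4 * np.cos(4.0 * psi)
    return a * np.cos(psi) * scale, b * np.sin(psi) * scale

x0, y0 = isophote(0.0)     # pure ellipse
xb, yb = isophote(-0.05)   # boxy isophote (pinched along the axes)
xd, yd = isophote(+0.05)   # disky isophote (extended along the axes)
```

Because cos(4*psi) = 1 at both ends of both axes, the deformation is symmetric about the major and minor axes, which a polar-angle parametrization of a flattened ellipse does not achieve.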
An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates
Khan, Usman; Falconi, Christian
2014-01-01
Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214
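The analytical core of such models is that, in an annular membrane region with linearized lateral losses (lumped here into a parameter m^2), the steady radial fin-type equation T'' + T'/r - m^2 (T - T_amb) = 0 has the general solution T(r) = T_amb + C1*I0(m r) + C2*K0(m r) in modified Bessel functions; the matrix approach in the abstract stitches such solutions across regions. A single-region sketch with Dirichlet boundary conditions, using assumed (not the paper's) geometry and loss parameter:

```python
import numpy as np
from scipy.special import i0, k0

# One annular membrane region of a circular-symmetric micro-hotplate.

T_amb, T_heater = 25.0, 800.0          # degrees C
r_in, r_out = 100e-6, 500e-6           # heater edge / cold membrane edge (m)
m = 5.0e3                              # 1/m, assumed lumped heat-loss parameter

# The two Dirichlet boundary conditions determine the two Bessel coefficients.
A = np.array([[i0(m * r_in),  k0(m * r_in)],
              [i0(m * r_out), k0(m * r_out)]])
rhs = np.array([T_heater - T_amb, 0.0])
C1, C2 = np.linalg.solve(A, rhs)

def T(r):
    # Radial temperature profile across the membrane annulus.
    return T_amb + C1 * i0(m * r) + C2 * k0(m * r)
```

Evaluating `T` is a few special-function calls per point, which is why a model built from such closed-form regions can be orders of magnitude faster than FEM.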
Naturalness of unknown physics: Theoretical models and experimental signatures
NASA Astrophysics Data System (ADS)
Kilic, Can
In the last few decades collider experiments have not only spectacularly confirmed the predictions of the Standard Model but also have not revealed any direct evidence for new physics beyond the SM, which has led theorists to devise numerous models where the new physics couples weakly to the SM or is simply beyond the reach of past experiments. While phenomenologically viable, many such models appear finely tuned, even contrived. This work illustrates three attempts at explaining the fine-tunings we observe in the world around us, such as the gauge hierarchy problem or the cosmological constant problem, emphasizing both the theoretical aspects of model building and possible experimental signatures. First we investigate the "Little Higgs" mechanism and work on a specific model, the "Minimal Moose," to highlight its impact on precision observables in the SM, and illustrate that it does not require implausible fine-tuning. Next we build a supersymmetric model, the "Fat Higgs," with an extended gauge structure which becomes confining. This model, aside from naturally preserving the unification of the SM gauge couplings at high energies, also makes it possible to evade the bounds on the lightest Higgs boson mass which are quite restrictive in minimal SUSY scenarios. Lastly we take a look at a possible resolution of the cosmological constant problem through the mechanism of "Ghost Condensation" and examine astrophysical observables from the Lorentz-violating sector in this model. We use current experimental data to constrain the coupling of this sector to the SM.
Theoretical models for coronary vascular biomechanics: progress & challenges.
Waters, Sarah L; Alastruey, Jordi; Beard, Daniel A; Bovendeerd, Peter H M; Davies, Peter F; Jayaraman, Girija; Jensen, Oliver E; Lee, Jack; Parker, Kim H; Popel, Aleksander S; Secomb, Timothy W; Siebes, Maria; Sherwin, Spencer J; Shipley, Rebecca J; Smith, Nicolas P; van de Vosse, Frans N
2011-01-01
A key aim of the cardiac Physiome Project is to develop theoretical models to simulate the functional behaviour of the heart under physiological and pathophysiological conditions. Heart function is critically dependent on the delivery of an adequate blood supply to the myocardium via the coronary vasculature. Key to this critical function of the coronary vasculature is system dynamics that emerge via the interactions of the numerous constituent components at a range of spatial and temporal scales. Here, we focus on several components for which theoretical approaches can be applied, including vascular structure and mechanics, blood flow and mass transport, flow regulation, angiogenesis and vascular remodelling, and vascular cellular mechanics. For each component, we summarise the current state of the art in model development, and discuss areas requiring further research. We highlight the major challenges associated with integrating the component models to develop a computational tool that can ultimately be used to simulate the responses of the coronary vascular system to changing demands and to diseases and therapies. PMID:21040741
Accuracy Analysis of a Box-wing Theoretical SRP Model
NASA Astrophysics Data System (ADS)
Wang, Xiaoya; Hu, Xiaogong; Zhao, Qunhe; Guo, Rui
2016-07-01
For the BeiDou Navigation Satellite System (BDS), a high-accuracy solar radiation pressure (SRP) model is necessary for precise applications, especially as the global BDS constellation is established in the future, and the accuracy of the BDS broadcast ephemeris needs to be improved. We therefore established a box-wing theoretical SRP model with fine structure, adding the conical shadow factors of the Earth and Moon, and verified this SRP model with the GPS Block IIF satellites. The calculation was done with the data of the PRN 1, 24, 25, and 27 satellites. The results show that the physical SRP model yields higher accuracy for POD and orbit prediction of the GPS IIF satellites than the Bern empirical model; the 3D RMS of the orbit is about 20 centimeters. The POD accuracy for both models is similar, but the prediction accuracy with the physical SRP model is more than doubled. We tested 1-day, 3-day, and 7-day orbit predictions: the longer the prediction arc, the more significant the improvement. The orbit prediction accuracies with the physical SRP model for 1-day, 3-day, and 7-day arcs are 0.4 m, 2.0 m, and 10.0 m respectively, versus 0.9 m, 5.5 m, and 30 m with the Bern empirical model. We apply this approach to the BDS and derive an SRP model for the BeiDou satellites, which we then test and verify with one month of BeiDou data. Initial results show the model is good but needs more data for verification and improvement. The orbit residual RMS is similar to that of our empirical force model, which only estimates forces in the along-track and across-track directions and a y-bias, but the orbit overlap and SLR observation evaluations show some improvement, and the remaining empirical force is reduced significantly for the present BeiDou constellation.
The theoretical aspects of UrQMD & AMPT models
NASA Astrophysics Data System (ADS)
Saini, Abhilasha; Bhardwaj, Sudhir
2016-05-01
The field of high-energy physics is very challenging, and the theories and experiments devised to unlock the secrets of heavy-ion collisions have not yet cracked the problem completely. Many theoretical questions remain open; some stem from inherent causes such as the non-perturbative nature of QCD in the strong-coupling limit, and others from the multi-particle production and evolution during heavy-ion collisions, which increase the complexity of the phenomena. For the purpose of understanding these phenomena, a variety of theories and ideas have been developed, usually implemented in the form of Monte Carlo codes. The UrQMD model and the AMPT model are discussed here in detail; these models are useful tools for simulating nuclear collisions.
Theoretical Modeling of Mechanical-Electrical Coupling of Carbon Nanotubes
Lu, Jun-Qiang; Jiang, Hanqiang
2008-01-01
Carbon nanotubes have been studied extensively due to their unique properties, ranging from electrical and mechanical to optical and thermal. The coupling between the electrical and mechanical properties of carbon nanotubes has emerged as a new field, which raises both interesting fundamental problems and huge application potential. In this article, we review our recent work on the theoretical modeling of the mechanical-electrical coupling of carbon nanotubes subject to various loading conditions, including tension/compression, torsion, and squashing. Some related work by other groups will also be mentioned.
Theoretical Models and Operational Frameworks in Public Health Ethics
Petrini, Carlo
2010-01-01
The article is divided into three sections: (i) an overview of the main ethical models in public health (theoretical foundations); (ii) a summary of several published frameworks for public health ethics (practical frameworks); and (iii) a few general remarks. Rather than maintaining the superiority of one position over the others, the main aim of the article is to summarize the basic approaches proposed thus far concerning the development of public health ethics by describing and comparing the various ideas in the literature. With this in mind, an extensive list of references is provided. PMID:20195441
Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel
2015-01-01
The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available. PMID:24472756
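The validation criterion in the study above is rank agreement: a propagation model is considered adequate for exposure ranking in epidemiology when the Spearman correlation between modelled and measured field strengths exceeds about 0.6. A minimal sketch of that check, with synthetic stand-in values rather than the Amsterdam measurements:

```python
import numpy as np
from scipy.stats import spearmanr

# Rank-agreement check between measured and modelled RF-EMF levels.

rng = np.random.default_rng(7)
measured = rng.lognormal(mean=-1.0, sigma=1.0, size=50)            # V/m, assumed scale
modelled = measured * rng.lognormal(mean=0.0, sigma=0.4, size=50)  # multiplicative model error

rho, pval = spearmanr(measured, modelled)
adequate_for_ranking = rho > 0.6   # threshold used in the abstract
```

Spearman correlation depends only on ranks, which is why a model with substantial absolute error can still be useful for classifying locations into exposure categories.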
NASA Astrophysics Data System (ADS)
McCullagh, Nuala; Jeong, Donghui; Szalay, Alexander S.
2016-01-01
Accurate modelling of non-linearities in the galaxy bispectrum, the Fourier transform of the galaxy three-point correlation function, is essential to fully exploit it as a cosmological probe. In this paper, we present numerical and theoretical challenges in modelling the non-linear bispectrum. First, we test the robustness of the matter bispectrum measured from N-body simulations using different initial conditions generators. We run a suite of N-body simulations using the Zel'dovich approximation and second-order Lagrangian perturbation theory (2LPT) at different starting redshifts, and find that transients from initial decaying modes systematically reduce the non-linearities in the matter bispectrum. To achieve 1 per cent accuracy in the matter bispectrum at z ≤ 3 on scales k < 1 h Mpc-1, 2LPT initial conditions generator with initial redshift z ≳ 100 is required. We then compare various analytical formulas and empirical fitting functions for modelling the non-linear matter bispectrum, and discuss the regimes for which each is valid. We find that the next-to-leading order (one-loop) correction from standard perturbation theory matches with N-body results on quasi-linear scales for z ≥ 1. We find that the fitting formula in Gil-Marín et al. accurately predicts the matter bispectrum for z ≤ 1 on a wide range of scales, but at higher redshifts, the fitting formula given in Scoccimarro & Couchman gives the best agreement with measurements from N-body simulations.
Accurate verification of the conserved-vector-current and standard-model predictions
Sirlin, A.; Zucchini, R.
1986-10-20
An approximate analytic calculation of O(Z*alpha^2) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied Ft values differs from the average by less than about 1 sigma, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new Ft values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.
Double Cluster Heads Model for Secure and Accurate Data Fusion in Wireless Sensor Networks
Fu, Jun-Song; Liu, Yun
2015-01-01
Secure and accurate data fusion is an important issue in wireless sensor networks (WSNs) and has been extensively researched in the literature. In this paper, by combining clustering techniques, reputation and trust systems, and data fusion algorithms, we propose a novel cluster-based data fusion model called the Double Cluster Heads Model (DCHM) for secure and accurate data fusion in WSNs. Different from traditional clustering models in WSNs, two cluster heads are selected after clustering for each cluster based on the reputation and trust system, and they perform data fusion independently of each other. Then, the results are sent to the base station, where the dissimilarity coefficient is computed. If the dissimilarity coefficient of the two data fusion results exceeds the threshold preset by the users, the cluster heads are added to a blacklist, and the cluster heads must be reelected by the sensor nodes in the cluster. Meanwhile, feedback is sent from the base station to the reputation and trust system, which helps to identify and delete compromised sensor nodes in time. Through a series of extensive simulations, we found that the DCHM performs very well in terms of data fusion security and accuracy. PMID:25608211
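The core of the double-cluster-heads idea can be sketched briefly: two independently selected heads fuse the same cluster's readings, and the base station compares the two fusion results, re-electing (blacklisting) the heads when they disagree too much. Fusion by median and the specific normalized dissimilarity formula below are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

# DCHM sketch: independent fusion by two heads, checked at the base station.

rng = np.random.default_rng(3)
readings = 20.0 + rng.normal(0, 0.5, size=30)     # one cluster's sensor data

def fuse(values):
    return float(np.median(values))               # robust fusion rule (assumed)

honest = fuse(readings)
compromised = fuse(readings) + 5.0                # a tampered head's report

def dissimilarity(a, b):
    # Normalized disagreement between the two fusion results.
    return abs(a - b) / max(abs(a), abs(b), 1e-12)

THRESHOLD = 0.05                                  # preset by the users (assumed)

def base_station_check(r1, r2):
    return "accept" if dissimilarity(r1, r2) <= THRESHOLD else "re-elect heads"
```

Because an attacker would have to compromise both independently chosen heads to pass the check, disagreement becomes a cheap integrity signal without any cryptographic overhead.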
NASA Astrophysics Data System (ADS)
Rumple, C.; Richter, J.; Craven, B. A.; Krane, M.
2012-11-01
A summary of the research being carried out by our multidisciplinary team to better understand the form and function of the nose in different mammalian species that include humans, carnivores, ungulates, rodents, and marine animals will be presented. The mammalian nose houses a convoluted airway labyrinth, where two hallmark features of mammals occur, endothermy and olfaction. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of airflow and respiratory and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture transparent, anatomically-accurate models for stereo particle image velocimetry (SPIV) measurements of nasal airflow. Challenges in the design and manufacture of index-matched anatomical models are addressed and preliminary SPIV measurements are presented. Such measurements will constitute a validation database for concurrent computational fluid dynamics (CFD) simulations of mammalian respiration and olfaction. Supported by the National Science Foundation.
Applying an accurate spherical model to gamma-ray burst afterglow observations
NASA Astrophysics Data System (ADS)
Leventis, K.; van der Horst, A. J.; van Eerten, H. J.; Wijers, R. A. M. J.
2013-05-01
We present results of model fits to afterglow data sets of GRB 970508, GRB 980703 and GRB 070125, characterized by long and broad-band coverage. The model assumes synchrotron radiation (including self-absorption) from a spherical adiabatic blast wave and consists of analytic flux prescriptions based on numerical results. For the first time it combines the accuracy of hydrodynamic simulations through different stages of the outflow dynamics with the flexibility of simple heuristic formulas. The prescriptions are especially geared towards accurate description of the dynamical transition of the outflow from relativistic to Newtonian velocities in an arbitrary power-law density environment. We show that the spherical model can accurately describe the data only in the case of GRB 970508, for which we find a circumburst medium density n ∝ r-2. We investigate in detail the implied spectra and physical parameters of that burst. For the microphysics we show evidence for equipartition between the fraction of energy density carried by relativistic electrons and magnetic field. We also find that for the blast wave to be adiabatic, the fraction of electrons accelerated at the shock has to be smaller than 1. We present best-fitting parameters for the afterglows of all three bursts, including uncertainties in the parameters of GRB 970508, and compare the inferred values to those obtained by different authors.
NASA Astrophysics Data System (ADS)
Wohlfeil, J.; Hirschmüller, H.; Piltz, B.; Börner, A.; Suppa, M.
2012-07-01
Modern pixel-wise image matching algorithms like Semi-Global Matching (SGM) are able to compute high resolution digital surface models from airborne and spaceborne stereo imagery. Although image matching itself can be performed automatically, there are prerequisites, like high geometric accuracy, which are essential for ensuring the high quality of resulting surface models. Especially for line cameras, these prerequisites currently require laborious manual interaction using standard tools, which is a growing problem due to continually increasing demand for such surface models. The tedious work includes partly or fully manual selection of tie- and/or ground control points for ensuring the required accuracy of the relative orientation of images for stereo matching. It also includes masking of large water areas that seriously reduce the quality of the results. Furthermore, a good estimate of the depth range is required, since accurate estimates can seriously reduce the processing time for stereo matching. In this paper an approach is presented that allows performing all these steps fully automatically. It includes very robust and precise tie point selection, enabling the accurate calculation of the images' relative orientation via bundle adjustment. It is also shown how water masking and elevation range estimation can be performed automatically on the basis of freely available SRTM data. Extensive tests with a large number of different satellite images from QuickBird and WorldView are presented as proof of the robustness and reliability of the proposed method.
NASA Technical Reports Server (NTRS)
Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.
1992-01-01
The quality of several atomic charge models based on different definitions has been analyzed using cumulative atomic multipole moments (CAMM). This formalism can generate higher atomic moments starting from any atomic charges, while preserving the corresponding molecular moments. The atomic charge contribution to the higher molecular moments, as well as to the electrostatic potentials, has been examined for CO and HCN molecules at several different levels of theory. The results clearly show that the electrostatic potential obtained from the CAMM expansion is convergent up to the R^-5 term for all atomic charge models used. This illustrates that higher atomic moments can be used to supplement any atomic charge model to obtain a more accurate description of electrostatic properties.
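The defining property of the CAMM construction above, that higher atomic moments are assigned so each molecular moment is preserved regardless of the starting charges, can be shown at the dipole level for a CO-like diatomic. The charges, geometry, and reference dipole below are illustrative numbers, not computed values, and the equal split of the residual is one simple choice, not the CAMM partitioning itself.

```python
import numpy as np

# Supplement point charges with atomic dipoles so the molecular dipole is exact.

positions = np.array([[0.0, 0.0, 0.0],      # C (bohr, assumed geometry)
                      [0.0, 0.0, 2.13]])    # O
charges = np.array([+0.2, -0.2])            # some atomic charge model (a.u.)

mu_ref = np.array([0.0, 0.0, 0.06])         # "exact" molecular dipole (assumed)

mu_charges = (charges[:, None] * positions).sum(axis=0)   # charge-only dipole

# Distribute the residual as atomic dipoles (equal split, for illustration);
# the molecular dipole is then reproduced whatever charge model was chosen.
atomic_dipoles = np.tile((mu_ref - mu_charges) / 2.0, (2, 1))

mu_total = mu_charges + atomic_dipoles.sum(axis=0)
```

The same bookkeeping extends order by order (quadrupoles, octupoles, ...), which is why the resulting electrostatic potential converges independently of the underlying charge definition.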
Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.
2013-01-01
Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944
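The sensitivity reported above has a simple mechanical explanation: even in a static lever model of a jaw, bite force scales directly with intrinsic muscle strength and with physiological cross-sectional area (PCSA), which depends inversely on fibre length, so input errors propagate essentially one-to-one into the prediction. A sketch with illustrative numbers, not Tupinambis data:

```python
import numpy as np

# Static lever sketch of jaw biomechanics: F_bite = F_muscle * arm ratio.

def bite_force(muscle_mass_g, fibre_length_mm, intrinsic_strength_N_cm2,
               muscle_arm_mm, bite_point_mm, pennation_deg=0.0):
    density = 1.06                                   # g/cm^3, typical muscle density
    pcsa_cm2 = muscle_mass_g / (density * fibre_length_mm / 10.0)
    f_muscle = (pcsa_cm2 * intrinsic_strength_N_cm2
                * np.cos(np.radians(pennation_deg)))
    return f_muscle * muscle_arm_mm / bite_point_mm  # moment balance about the joint

base = bite_force(5.0, 15.0, 30.0, 10.0, 40.0)
short_fibres = bite_force(5.0, 12.0, 30.0, 10.0, 40.0)   # 20% shorter fibres
```

Shortening the fibres by 20% raises PCSA, and hence the predicted bite force, by 25%, illustrating why accurate subject-specific muscle measurements matter so much.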
An accurate and comprehensive model of thin fluid flows with inertia on curved substrates
NASA Astrophysics Data System (ADS)
Roberts, A. J.; Li, Zhenquan
2006-04-01
Consider the three-dimensional flow of a viscous Newtonian fluid upon a curved two-dimensional substrate when the fluid film is thin, as occurs in many draining, coating and biological flows. We derive a comprehensive model of the dynamics of the film, the model being expressed in terms of the film thickness η and the average lateral velocity ū. Centre manifold theory assures us that the model accurately and systematically includes the effects of the curvature of the substrate, gravitational body force, fluid inertia and dissipation. The model resolves wavelike phenomena in the dynamics of viscous fluid flows over arbitrarily curved substrates such as cylinders, tubes and spheres. We briefly illustrate its use in simulating drop formation on cylindrical fibres, wave transitions, three-dimensional instabilities, Faraday waves, viscous hydraulic jumps, flow vortices in a compound channel and flow down and up a step. This is among the most complete models for thin-film flow of a Newtonian fluid; many other thin-film models can be obtained by different restrictions and truncations of the model derived here.
NMR relaxation induced by iron oxide particles: testing theoretical models.
Gossuin, Y; Orlando, T; Basini, M; Henrard, D; Lascialfari, A; Mattea, C; Stapf, S; Vuong, Q L
2016-04-15
Superparamagnetic iron oxide particles find their main application as contrast agents for cellular and molecular magnetic resonance imaging. The contrast they bring is due to the shortening of the transverse relaxation time T2 of water protons. In order to understand their influence on proton relaxation, different theoretical relaxation models have been developed, each of them presenting a certain validity domain, which depends on the particle characteristics and proton dynamics. The validation of these models is crucial since they allow for predicting the ideal particle characteristics for obtaining the best contrast but also because the fitting of T1 experimental data by the theory constitutes an interesting tool for the characterization of the nanoparticles. In this work, T2 of suspensions of iron oxide particles in different solvents and at different temperatures, corresponding to different proton diffusion properties, were measured and were compared to the three main theoretical models (the motional averaging regime, the static dephasing regime, and the partial refocusing model) with good qualitative agreement. However, a real quantitative agreement was not observed, probably because of the complexity of these nanoparticulate systems. The Roch theory, developed in the motional averaging regime (MAR), was also successfully used to fit T1 nuclear magnetic relaxation dispersion (NMRD) profiles, even outside the MAR validity range, and provided a good estimate of the particle size. On the other hand, the simultaneous fitting of T1 and T2 NMRD profiles by the theory was impossible, and this occurrence constitutes a clear limitation of the Roch model. Finally, the theory was shown to satisfactorily fit the deuterium T1 NMRD profile of superparamagnetic particle suspensions in heavy water. PMID:26933908
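For orientation, a commonly quoted motional-averaging-regime (MAR) estimate of the transverse relaxation rate takes the form sketched below; treat the formula and the sample numbers as a generic outer-sphere sketch, not as the full Roch theory used in the paper.

```python
def r2_mar(f, delta_omega, r, D):
    """Motional-averaging-regime estimate of R2 = 1/T2 (in s^-1):
        R2 = (16/45) * f * delta_omega**2 * tau_D,  tau_D = r**2 / D

    f           : particle volume fraction (dimensionless)
    delta_omega : rms angular frequency shift at the particle surface (rad/s)
    r           : particle radius (m)
    D           : solvent self-diffusion coefficient (m^2/s)
    """
    tau_d = r**2 / D  # diffusion correlation time around one particle
    return (16.0 / 45.0) * f * delta_omega**2 * tau_d

# Illustrative numbers: 5 nm radius, water-like diffusion, dilute suspension.
rate = r2_mar(f=1e-5, delta_omega=1e7, r=5e-9, D=2.3e-9)
```

In this regime R2 grows with r² through τD, which is why faster diffusion at higher temperature narrows the line; the static dephasing and partial refocusing models take over for larger particles.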
NASA Astrophysics Data System (ADS)
Huang, Guo-Jiao; Bai, Chao-Ying; Greenhalgh, Stewart
2013-09-01
The traditional grid/cell-based wavefront expansion algorithms, such as the shortest path algorithm, can only find the first arrivals or multiply reflected (or mode converted) waves transmitted from subsurface interfaces, but cannot calculate the other later reflections/conversions having a minimax time path. In order to overcome the above limitations, we introduce the concept of a stationary minimax time path of Fermat's Principle into the multistage irregular shortest path method. Here we extend it from Cartesian coordinates for a flat earth model to global ray tracing of multiple phases in a 3-D complex spherical earth model. The ray tracing results for 49 different kinds of crustal, mantle and core phases show that the maximum absolute traveltime error is less than 0.12 s and the average absolute traveltime error is within 0.09 s when compared with the AK135 theoretical traveltime tables for a 1-D reference model. Numerical tests in terms of computational accuracy and CPU time consumption indicate that the new scheme is an accurate, efficient and practical way to perform 3-D multiphase arrival tracking in regional or global traveltime tomography.
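The grid-based shortest path methods that the multistage scheme generalizes reduce first-arrival tracking to Dijkstra's algorithm on a weighted graph of grid nodes. A minimal sketch (the node names and the tiny graph are made up; later minimax-path phases require the multistage extension, which is not captured here):

```python
import heapq

def first_arrival_times(adj, source):
    """Dijkstra shortest-path first arrivals on a traveltime graph.

    adj    : {node: [(neighbour, edge_traveltime), ...]}
    source : starting node
    Returns {node: earliest arrival time from source}.
    """
    times = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > times.get(u, float("inf")):
            continue  # stale heap entry
        for v, dt in adj.get(u, ()):
            if t + dt < times.get(v, float("inf")):
                times[v] = t + dt
                heapq.heappush(heap, (t + dt, v))
    return times

# Toy 3-node graph: the direct a->c edge is slower than going via b.
arrivals = first_arrival_times({"a": [("b", 1.0), ("c", 4.0)],
                                "b": [("c", 1.0)]}, "a")
```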
Digitalized accurate modeling of SPCB with multi-spiral surface based on CPC algorithm
NASA Astrophysics Data System (ADS)
Huang, Yanhua; Gu, Lizhi
2015-09-01
The main methods of existing multi-spiral surface geometry modeling include spatial analytic geometry algorithms, the graphical method, and interpolation and approximation algorithms. However, these modeling methods have shortcomings, such as a large amount of calculation, complex processes and visible errors, which have, to some extent, considerably restricted the design and manufacture of premium, high-precision products with spiral surfaces. This paper introduces the concepts of spatially parallel coupling with a multi-spiral surface and of a spatially parallel coupling body. The typical geometric and topological features of each spiral surface forming the multi-spiral surface body are determined by using the extraction principle of the datum point cluster, the algorithm of the coupling point cluster with singular points removed, and the "spatially parallel coupling" principle based on non-uniform B-splines for each spiral surface. The orientation and quantitative relationships of the datum point cluster and the coupling point cluster in Euclidean space are determined accurately and given a digital description and expression, and the surfaces are coupled and coalesced through multi-coupling point clusters under the Pro/E environment. Digitally accurate modeling of a spatially parallel coupling body with a multi-spiral surface is thus realized. Smoothing and fairing are applied to the end section area of a three-blade end-milling cutter by applying the principle of spatially parallel coupling with a multi-spiral surface, and the resulting entity model is machined in a four-axis machining center after the end mill is designed. The algorithm is verified and then applied effectively to the transition area among the multi-spiral surfaces. The proposed model and algorithms may be used in the design and manufacture of multi-spiral surface body products, as well as in solving essentially the problems of considerable modeling errors in computer graphics and
Sampling artifact in volume weighted velocity measurement. I. Theoretical modeling
NASA Astrophysics Data System (ADS)
Zhang, Pengjie; Zheng, Yi; Jing, Yipeng
2015-02-01
Cosmology based on large scale peculiar velocity prefers volume weighted velocity statistics. However, measuring the volume weighted velocity statistics from inhomogeneously distributed galaxies (simulation particles/halos) suffers from an inevitable and significant sampling artifact. We study this sampling artifact in the velocity power spectrum measured by the nearest particle velocity assignment method by Zheng et al. [Phys. Rev. D 88, 103510 (2013)]. We derive the analytical expression of leading and higher order terms. We find that the sampling artifact suppresses the z = 0 E-mode velocity power spectrum by ~10% at k = 0.1 h/Mpc, for samples with number density 10^-3 (Mpc/h)^-3. This suppression becomes larger for larger k and for sparser samples. We argue that this source of systematic errors in peculiar velocity cosmology, albeit severe, can be self-calibrated in the framework of our theoretical modelling. We also work out the sampling artifact in the density-velocity cross power spectrum measurement. A more robust evaluation of related statistics through simulations will be presented in a companion paper by Zheng et al. [Sampling artifact in volume weighted velocity measurement. II. Detection in simulations and comparison with theoretical modelling, arXiv:1409.6809]. We also argue that similar sampling artifact exists in other velocity assignment methods and hence must be carefully corrected to avoid systematic bias in peculiar velocity cosmology.
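The nearest-particle assignment studied here gives each grid point the velocity of its closest tracer particle. A brute-force periodic-box sketch of that assignment (adequate only for small particle counts; production codes use tree searches):

```python
import numpy as np

def np_velocity_assignment(pos, vel, ngrid, boxsize):
    """Nearest-particle velocity assignment on a periodic cubic box.

    pos : (N, 3) particle positions in [0, boxsize)
    vel : (N, 3) particle velocities
    Returns an (ngrid, ngrid, ngrid, 3) volume-weighted velocity field.
    """
    pos, vel = np.asarray(pos, float), np.asarray(vel, float)
    axes = (np.arange(ngrid) + 0.5) * boxsize / ngrid  # cell centres
    grid = np.stack(np.meshgrid(axes, axes, axes, indexing="ij"),
                    axis=-1).reshape(-1, 3)
    # Minimum-image separations; O(N_grid * N_particles), sketch only.
    d = grid[:, None, :] - pos[None, :, :]
    d -= boxsize * np.round(d / boxsize)
    nearest = np.argmin((d ** 2).sum(axis=-1), axis=1)
    return vel[nearest].reshape(ngrid, ngrid, ngrid, 3)

# Two particles in a box of side 4; each octant inherits its neighbour.
field = np_velocity_assignment([[0.5, 0.5, 0.5], [3.5, 3.5, 3.5]],
                               [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
                               ngrid=2, boxsize=4.0)
```

The sampling artifact arises exactly because sparse tracers make these nearest-neighbour cells large, smoothing the measured field.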
NASA Technical Reports Server (NTRS)
Avrett, E. H.
1984-01-01
Models and spectra of sunspots were studied because they are important to energy balance and variability discussions. Sunspot observations in the ultraviolet region 140 to 168 nm were obtained by the NRL High Resolution Telescope and Spectrograph. Extensive photometric observations of sunspot umbrae and penumbrae in 10 channels covering the wavelength region 387 to 3800 nm were made. Cool star opacities and model atmospheres were computed. The Sun is the first test case, both to check the opacity calculations against the observed solar spectrum, and to check the purely theoretical model calculation against the observed solar energy distribution. Line lists were finally completed for all the molecules that are important in computing statistical opacities for energy balance and for radiative rate calculations in the Sun (except perhaps for sunspots). Because many of these bands are incompletely analyzed in the laboratory, the energy levels are not known well enough to predict wavelengths accurately for spectrum synthesis and for detailed comparison with the observations.
Inference of Mix from Experimental Data and Theoretical Mix Models
Welser-Sherrill, L.; Haynes, D. A.; Cooley, J. H.; Mancini, R. C.; Haan, S. W.; Golovkin, I. E.
2007-08-02
The mixing between fuel and shell materials in Inertial Confinement Fusion implosion cores is a topic of great interest. Mixing due to hydrodynamic instabilities can affect implosion dynamics and could also go so far as to prevent ignition. We have demonstrated that it is possible to extract information on mixing directly from experimental data using spectroscopic arguments. In order to compare this data-driven analysis to a theoretical framework, two independent mix models, Youngs' phenomenological model and the Haan saturation model, have been implemented in conjunction with a series of clean hydrodynamic simulations that model the experiments. The first tests of these methods were carried out based on a set of indirect drive implosions at the OMEGA laser. We now focus on direct drive experiments, and endeavor to approach the problem from another perspective. In the current work, we use Youngs' and Haan's mix models in conjunction with hydrodynamic simulations in order to design experimental platforms that exhibit measurably different levels of mix. Once the experiments are completed based on these designs, the results of a data-driven mix analysis will be compared to the levels of mix predicted by the simulations. In this way, we aim to increase our confidence in the methods used to extract mixing information from the experimental data, as well as to study sensitivities and the range of validity of the mix models.
Toward Accurate Modeling of the Effect of Ion-Pair Formation on Solute Redox Potential.
Qu, Xiaohui; Persson, Kristin A
2016-09-13
A scheme to model the dependence of a solute redox potential on the supporting electrolyte is proposed, and the results are compared to experimental observations and other reported theoretical models. An improved agreement with experiment is exhibited if the effect of the supporting electrolyte on the redox potential is modeled through a concentration change induced via ion pair formation with the salt, rather than by only considering the direct impact on the redox potential of the solute itself. To exemplify the approach, the scheme is applied to the concentration-dependent redox potential of select molecules proposed for nonaqueous flow batteries. However, the methodology is general and enables rational computational electrolyte design through tuning of the operating window of electrochemical systems by shifting the redox potential of its solutes; including potentially both salts as well as redox active molecules. PMID:27500744
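The concentration mechanism can be illustrated with a plain Nernst-equation shift driven by the free (unpaired) fractions of the redox couple; this is a generic thermodynamic sketch, not the authors' computational scheme:

```python
import math

def redox_shift(free_frac_ox, free_frac_red, n=1, T=298.15):
    """Shift (in volts) of a solute redox potential when ion-pair
    formation with the supporting electrolyte changes the free
    oxidized/reduced concentrations:
        dE = (R*T / (n*F)) * ln(free_frac_ox / free_frac_red)
    """
    R = 8.314462618   # gas constant, J/(mol K)
    F = 96485.33212   # Faraday constant, C/mol
    return (R * T) / (n * F) * math.log(free_frac_ox / free_frac_red)

# If half the oxidized form is tied up in ion pairs while the reduced
# form stays free, the potential drops by roughly 18 mV at 298 K.
dE = redox_shift(0.5, 1.0)
```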
Development of theoretical models of integrated millimeter wave antennas
NASA Technical Reports Server (NTRS)
Yngvesson, K. Sigfrid; Schaubert, Daniel H.
1991-01-01
Extensive radiation patterns for Linear Tapered Slot Antenna (LTSA) Single Elements are presented. The directivity of LTSA elements is predicted correctly by taking the cross polarized pattern into account. A moment method program predicts radiation patterns for air LTSAs with excellent agreement with experimental data. A moment method program was also developed for the task LTSA Array Modeling. Computations performed with this program are in excellent agreement with published results for dipole and monopole arrays, and with waveguide simulator experiments, for more complicated structures. Empirical modeling of LTSA arrays demonstrated that the maximum theoretical element gain can be obtained. Formulations were also developed for calculating the aperture efficiency of LTSA arrays used in reflector systems. It was shown that LTSA arrays used in multibeam systems have a considerable advantage in terms of higher packing density, compared with waveguide feeds. Conversion loss of 10 dB was demonstrated at 35 GHz.
Validation of an Accurate Three-Dimensional Helical Slow-Wave Circuit Model
NASA Technical Reports Server (NTRS)
Kory, Carol L.
1997-01-01
The helical slow-wave circuit embodies a helical coil of rectangular tape supported in a metal barrel by dielectric support rods. Although the helix slow-wave circuit remains the mainstay of the traveling-wave tube (TWT) industry because of its exceptionally wide bandwidth, a full helical circuit, without significant dimensional approximations, has not been successfully modeled until now. Numerous attempts have been made to analyze the helical slow-wave circuit so that the performance could be accurately predicted without actually building it, but because of its complex geometry, many geometrical approximations became necessary rendering the previous models inaccurate. In the course of this research it has been demonstrated that using the simulation code, MAFIA, the helical structure can be modeled with actual tape width and thickness, dielectric support rod geometry and materials. To demonstrate the accuracy of the MAFIA model, the cold-test parameters including dispersion, on-axis interaction impedance and attenuation have been calculated for several helical TWT slow-wave circuits with a variety of support rod geometries including rectangular and T-shaped rods, as well as various support rod materials including isotropic, anisotropic and partially metal coated dielectrics. Compared with experimentally measured results, the agreement is excellent. With the accuracy of the MAFIA helical model validated, the code was used to investigate several conventional geometric approximations in an attempt to obtain the most computationally efficient model. Several simplifications were made to a standard model including replacing the helical tape with filaments, and replacing rectangular support rods with shapes conforming to the cylindrical coordinate system with effective permittivity. The approximate models are compared with the standard model in terms of cold-test characteristics and computational time. The model was also used to determine the sensitivity of various
NASA Astrophysics Data System (ADS)
Smith, R.; Flynn, C.; Candlish, G. N.; Fellhauer, M.; Gibson, B. K.
2015-04-01
We present accurate models of the gravitational potential produced by a radially exponential disc mass distribution. The models are produced by combining three separate Miyamoto-Nagai discs. Such models have been used previously to model the disc of the Milky Way, but here we extend this framework to allow its application to discs of any mass, scalelength, and a wide range of thickness from infinitely thin to near spherical (ellipticities from 0 to 0.9). The models have the advantage of simplicity of implementation, and we expect faster run speeds over a double exponential disc treatment. The potentials are fully analytical, and differentiable at all points. The mass distribution of our models deviates from the radial mass distribution of a pure exponential disc by <0.4 per cent out to 4 disc scalelengths, and <1.9 per cent out to 10 disc scalelengths. We tabulate fitting parameters which facilitate construction of exponential discs for any scalelength, and a wide range of disc thickness (a user-friendly, web-based interface is also available). Our recipe is well suited for numerical modelling of the tidal effects of a giant disc galaxy on star clusters or dwarf galaxies. We consider three worked examples; the Milky Way thin and thick disc, and a discy dwarf galaxy.
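Each component of such a model is a Miyamoto-Nagai potential, and the composite disc is simply their sum. In the sketch below the masses and scalelengths are placeholders; the paper's tables supply fitted values for a target exponential disc.

```python
import numpy as np

G = 4.300917270e-6  # gravitational constant, kpc (km/s)^2 / Msun

def mn_potential(R, z, M, a, b):
    """Miyamoto-Nagai potential in (km/s)^2 at cylindrical radius R and
    height z (both kpc), for disc mass M (Msun), scalelength a, thickness b."""
    return -G * M / np.sqrt(R**2 + (a + np.sqrt(z**2 + b**2))**2)

def triple_mn_potential(R, z, components, b):
    """Sum of three MN discs sharing one thickness parameter b; with
    suitable (M_i, a_i) pairs this closely mimics an exponential disc."""
    return sum(mn_potential(R, z, M, a, b) for M, a in components)

# Placeholder components, roughly Milky-Way-like in total mass (8e10 Msun).
components = [(5e10, 3.0), (2e10, 6.0), (1e10, 1.5)]
phi_far = triple_mn_potential(100.0, 0.0, components, b=0.3)
```

A quick sanity check on any such fit: far from the disc the combined potential must approach the point-mass value -G*M_tot/r.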
Xiao, Suzhi; Tao, Wei; Zhao, Hui
2016-01-01
In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for the absolute phase map retrieval to spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement. PMID:27136553
Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction limited focal spot. This technique can be easily implemented into any traditional Monte Carlo simulation allowing existing models to be converted to include accurate focusing geometries with minimal effort. We will present results for a focusing beam in a layered tissue model, demonstrating that for different scenarios the region of highest intensity, and thus the greatest heating, can change from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo for countless applications, including studying laser tissue interactions in medical applications and light propagation through turbid media.
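The Gaussian-optics behaviour that such a technique must reproduce is the textbook waist relation; a free-space, scalar-beam sketch:

```python
import math

def beam_radius(z, w0, wavelength, n=1.0):
    """1/e^2 intensity radius w(z) of a Gaussian beam a distance z (m)
    from a waist of radius w0 (m), for a given wavelength (m) and
    refractive index n. A purely ray-based Monte Carlo without a
    correction like this collapses to a point focus instead."""
    z_rayleigh = math.pi * w0**2 * n / wavelength
    return w0 * math.sqrt(1.0 + (z / z_rayleigh)**2)

# A 10 um waist at 500 nm spreads by sqrt(2) after one Rayleigh
# range (about 0.63 mm).
zr = math.pi * (10e-6)**2 / 500e-9
```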
Fu, Q.; Sun, W.B.; Yang, P.
1998-09-01
An accurate parameterization is presented for the infrared radiative properties of cirrus clouds. For the single-scattering calculations, a composite scheme is developed for randomly oriented hexagonal ice crystals by comparing results from Mie theory, anomalous diffraction theory (ADT), the geometric optics method (GOM), and the finite-difference time domain technique. This scheme employs a linear combination of single-scattering properties from the Mie theory, ADT, and GOM, which is accurate for a wide range of size parameters. Following the approach of Q. Fu, the extinction coefficient, absorption coefficient, and asymmetry factor are parameterized as functions of the cloud ice water content and generalized effective size (D_ge). The present parameterization of the single-scattering properties of cirrus clouds is validated by examining the bulk radiative properties for a wide range of atmospheric conditions. Compared with reference results, the typical relative error in emissivity due to the parameterization is approximately 2.2%. The accuracy of this parameterization guarantees its reliability in applications to climate models. The present parameterization complements the scheme for the solar radiative properties of cirrus clouds developed by Q. Fu for use in numerical models.
Seth, Ajay; Matias, Ricardo; Veloso, António P.; Delp, Scott L.
2016-01-01
The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic joint to describe the kinematics of the scapula relative to the thorax. This model describes scapular kinematics with four degrees of freedom: 1) elevation and 2) abduction of the scapula on an ellipsoidal thoracic surface, 3) upward rotation of the scapula normal to the thoracic surface, and 4) internal rotation of the scapula to lift the medial border of the scapula off the surface of the thorax. The surface dimensions and joint axes can be customized to match an individual’s anthropometry. We compared the model to “gold standard” bone-pin kinematics collected during three shoulder tasks and found modeled scapular kinematics to be accurate to within 2mm root-mean-squared error for individual bone-pin markers across all markers and movement tasks. As an additional test, we added random and systematic noise to the bone-pin marker data and found that the model reduced kinematic variability due to noise by 65% compared to Euler angles computed without the model. Our scapulothoracic joint model can be used for inverse and forward dynamics analyses and to compute joint reaction loads. The computational performance of the scapulothoracic joint model is well suited for real-time applications; it is freely available for use with OpenSim 3.2, and is customizable and usable with other OpenSim models. PMID:26734761
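The first two degrees of freedom place the scapula on the ellipsoidal thoracic surface, which can be sketched geometrically as below; the semi-axes are made-up numbers, and the real joint model adds the two rotations plus a surface-normal reference frame.

```python
import math

def thoracic_surface_point(elevation, abduction, radii):
    """Point on an ellipsoidal thoracic surface for given elevation and
    abduction angles (radians). radii = (a, b, c) are the ellipsoid
    semi-axes; by construction the point satisfies
    (x/a)^2 + (y/b)^2 + (z/c)^2 = 1."""
    a, b, c = radii
    x = a * math.cos(elevation) * math.cos(abduction)
    y = b * math.cos(elevation) * math.sin(abduction)
    z = c * math.sin(elevation)
    return (x, y, z)

# Made-up thorax semi-axes in metres.
p = thoracic_surface_point(math.radians(20), math.radians(35),
                           (0.10, 0.14, 0.20))
```

Constraining the scapula to such a surface is what shrinks the space of possible joint movements and filters marker noise.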
NASA Technical Reports Server (NTRS)
Kopasakis, George
2014-01-01
The presentation covers a recently developed methodology to model atmospheric turbulence as disturbances for aero vehicle gust loads and for controls development such as flutter and inlet shock position. The approach models atmospheric turbulence in its natural fractional order form, which provides more accuracy than traditional methods like the Dryden model, especially for high-speed vehicles. The presentation provides a historical background on atmospheric turbulence modeling and the approaches utilized for air vehicles. This is followed by the motivation and the methodology utilized to develop the fractional order atmospheric turbulence modeling approach. Some examples covering the application of this method are also provided, followed by concluding remarks.
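For context, the classical Dryden longitudinal gust spectrum is a rational (integer-order) function of frequency, while the von Kármán spectrum raises the denominator to the 5/6 power, which is what motivates a fractional-order treatment. A sketch of the standard textbook Dryden form (not the presentation's code; the sample numbers are illustrative):

```python
import math

def dryden_psd_u(omega, sigma, L, V):
    """Longitudinal Dryden gust power spectral density, (m/s)^2 per rad/s.

    omega : angular frequency (rad/s)
    sigma : RMS gust velocity (m/s)
    L     : turbulence scale length (m)
    V     : true airspeed (m/s)
    The von Karman spectrum instead uses (1 + (1.339*L*omega/V)**2)**(-5/6),
    a fractional-order roll-off that rational filters only approximate.
    """
    return sigma**2 * (2.0 * L / (math.pi * V)) / (1.0 + (L * omega / V)**2)

# Illustrative numbers only.
p0 = dryden_psd_u(0.0, sigma=1.0, L=200.0, V=100.0)
p1 = dryden_psd_u(1.0, sigma=1.0, L=200.0, V=100.0)
```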
Theoretical light curves for deflagration models of type Ia supernova
NASA Astrophysics Data System (ADS)
Blinnikov, S. I.; Röpke, F. K.; Sorokina, E. I.; Gieseler, M.; Reinecke, M.; Travaglio, C.; Hillebrandt, W.; Stritzinger, M.
2006-07-01
Aims: We present synthetic bolometric and broad-band UBVRI light curves of SNe Ia for four selected 3D deflagration models of thermonuclear supernovae. Methods: The light curves are computed with the 1D hydro code STELLA, which models (multi-group time-dependent) non-equilibrium radiative transfer inside SN ejecta. Angle-averaged results from 3D hydrodynamical explosion simulations with the composition determined in a nucleosynthetic postprocessing step served as the input to the radiative transfer model. Results: The predicted model UBV light curves do agree reasonably well with the observed ones for SNe Ia in the range of low to normal luminosities, although the underlying hydrodynamical explosion models produced only a modest amount of radioactive 56Ni (i.e. 0.24-0.42 M⊙) and relatively low kinetic energy in the explosion (less than 0.7 × 10^51 erg). The evolution of predicted B and V fluxes in the model with a 56Ni mass of 0.42 M⊙ follows the observed decline rate after the maximum very well, although the behavior of fluxes in other filters deviates somewhat from observations, and the bolometric decline rate is a bit slow. The material velocity at the photospheric level is of the order of 10^4 km s^-1 for all models. Using our models, we check the validity of Arnett's rule, relating the peak luminosity to the power of the deposited radioactive heating, and we also check the accuracy of the procedure for extracting the 56Ni mass from the observed light curves. Conclusions: We find that the comparison between theoretical light curves and observations provides a useful tool to validate SN Ia models. The steps necessary for improving the agreement between theory and observations are set out.
Burward-Hoy, J. M.; Geist, W. H.; Krick, M. S.; Mayo, D. R.
2004-01-01
Neutron multiplicity counting is a technique for the rapid, nondestructive measurement of plutonium mass in pure and impure materials. This technique is very powerful because it uses the measured coincidence count rates to determine the sample mass without requiring a set of representative standards for calibration. Interpreting measured singles, doubles, and triples count rates using the three-parameter standard point model accurately determines plutonium mass, neutron multiplication, and the ratio of (α,n) to spontaneous-fission neutrons (alpha) for oxides of moderate mass. However, underlying standard point model assumptions - including constant neutron energy and constant multiplication throughout the sample - cause significant biases for the mass, multiplication, and alpha in measurements of metal and large, dense oxides.
Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Dayton, J. A., Jr.
1998-01-01
Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional (3-D) electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.
Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Dayton, James A., Jr.
1997-01-01
Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.
Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A
2015-09-18
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases). PMID:26430979
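The surrogate strategy described here, a reduced basis built from expensive solver output plus cheap parametric fits of the basis coefficients, can be sketched in a few lines. The "waveform" below is a toy damped oscillation standing in for NR output, and all parameter choices are illustrative assumptions:

```python
import numpy as np

# Toy sketch of reduced-order surrogate modeling in the spirit of the
# abstract: expensive solver output (a damped oscillation standing in for
# an NR waveform) is compressed onto an SVD basis, and the basis
# coefficients are fit as smooth functions of the parameter q (a stand-in
# for the mass ratio). Everything below is illustrative, not NR physics.
t = np.linspace(0.0, 10.0, 500)

def expensive_waveform(q):
    """Stand-in for a costly simulation, parametrized by q in [1, 10]."""
    return np.exp(-0.1 * q * t) * np.sin(3.0 * t + 0.3 * q)

q_train = np.linspace(1.0, 10.0, 25)
snapshots = np.array([expensive_waveform(q) for q in q_train])

# Offline stage: reduced basis from the leading left singular vectors.
U, s, Vt = np.linalg.svd(snapshots.T, full_matrices=False)
n_basis = 10
basis = U[:, :n_basis]                     # shape (len(t), n_basis)
coeffs = snapshots @ basis                 # training coefficients

def to_u(q):                               # rescale q for a stable polyfit
    return (q - 5.5) / 4.5

fits = [np.polyfit(to_u(q_train), coeffs[:, k], deg=8)
        for k in range(n_basis)]

def surrogate(q):
    """Fast online evaluation: polynomial coefficients times the basis."""
    c = np.array([np.polyval(f, to_u(q)) for f in fits])
    return basis @ c

# Validate at a parameter value not in the training set.
q_test = 4.3
truth = expensive_waveform(q_test)
err = np.linalg.norm(surrogate(q_test) - truth) / np.linalg.norm(truth)
```

The offline stage pays the full solver cost once per training point; the online surrogate is a handful of polynomial evaluations and one matrix-vector product, mirroring the millisecond evaluation times quoted in the abstract.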
Construction of feasible and accurate kinetic models of metabolism: A Bayesian approach
Saa, Pedro A.; Nielsen, Lars K.
2016-01-01
Kinetic models are essential to quantitatively understand and predict the behaviour of metabolic networks. Detailed and thermodynamically feasible kinetic models of metabolism are inherently difficult to formulate and fit. They have a large number of heterogeneous parameters, are non-linear and have complex interactions. Many powerful fitting strategies are ruled out by the intractability of the likelihood function. Here, we have developed a computational framework capable of fitting feasible and accurate kinetic models using Approximate Bayesian Computation. This framework readily supports advanced modelling features such as model selection and model-based experimental design. We illustrate this approach on the tightly-regulated mammalian methionine cycle. Sampling from the posterior distribution, the proposed framework generated thermodynamically feasible parameter samples that converged on the true values, and displayed remarkable prediction accuracy in several validation tests. Furthermore, a posteriori analysis of the parameter distributions enabled appraisal of the systems properties of the network (e.g., control structure) and key metabolic regulations. Finally, the framework was used to predict missing allosteric interactions. PMID:27417285
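The rejection flavor of Approximate Bayesian Computation named in the abstract can be illustrated on a toy kinetic model. The one-enzyme Michaelis-Menten "data", the priors, and the tolerance below are illustrative assumptions, not the paper's methionine-cycle model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch of Approximate Bayesian Computation (ABC) by rejection,
# applied to a one-enzyme Michaelis-Menten rate law. All numbers are
# illustrative assumptions.
S = np.linspace(0.1, 10.0, 20)                  # substrate concentrations

def rate(vmax, km):
    return vmax * S / (km + S)

true_vmax, true_km = 2.0, 1.5
data = rate(true_vmax, true_km) + rng.normal(0.0, 0.02, S.size)

def distance(sim):
    return np.sqrt(np.mean((sim - data) ** 2))

# Rejection step: draw parameters from the prior, keep draws whose
# simulated data fall within tolerance eps of the observations.
n_draws, eps = 30000, 0.06
vmax_prior = rng.uniform(0.5, 5.0, n_draws)
km_prior = rng.uniform(0.1, 5.0, n_draws)
accepted = [(v, k) for v, k in zip(vmax_prior, km_prior)
            if distance(rate(v, k)) < eps]
post = np.array(accepted)
vmax_hat, km_hat = post.mean(axis=0)
```

The accepted draws approximate the posterior without ever evaluating a likelihood, which is the property that makes ABC attractive for kinetic models with intractable likelihoods; the paper's framework adds thermodynamic feasibility constraints and sequential sampling on top of this basic idea.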
An Accurate Model for Biomolecular Helices and Its Application to Helix Visualization
Wang, Lincong; Qiao, Hui; Cao, Chen; Xu, Shutan; Zou, Shuxue
2015-01-01
Helices are the most abundant secondary structural elements in proteins and the structural forms assumed by double-stranded DNA (dsDNA). Though the mathematical expression for a helical curve is simple, none of the previous models for biomolecular helices in either proteins or DNA uses a genuine helical curve, likely because of the complexity of fitting backbone atoms to helical curves. In this paper we model a helix as a series of different but bona fide helical curves, each of which best fits the coordinates of four consecutive backbone Cα atoms for a protein or P atoms for a DNA molecule. An implementation of the model demonstrates that it is more accurate than previous models in describing the deviation of a helix from a standard helical curve. Furthermore, the accuracy of the model makes it possible to correlate such deviations with structural and functional significance. When applied to helix visualization, the ribbon diagrams generated by the model are less choppy and show smaller side-chain detachment than those produced by previous visualization programs, which typically model a helix as a series of low-degree splines. PMID:26126117
NASA Astrophysics Data System (ADS)
Kopparla, P.; Natraj, V.; Spurr, R. J. D.; Shia, R. L.; Yung, Y. L.
2014-12-01
Radiative transfer (RT) computations are an essential component of energy budget calculations in climate models. However, full treatment of RT processes is computationally expensive, prompting the use of 2-stream approximations in operational climate models. This simplification introduces errors on the order of 10% in the top-of-the-atmosphere (TOA) fluxes [Randles et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT simulations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are done only for those (few) optical states corresponding to the most important principal components, and correction factors are applied to the approximate radiation fields. Here, we extend the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for the major gas absorptions in this region. Comparisons between the new model, called the Universal Principal Component Analysis model for Radiative Transfer (UPCART), 2-stream models (such as those used in climate applications) and line-by-line RT models are performed for spectral radiances, spectral fluxes and broadband fluxes, each calculated at the TOA for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and solar and viewing geometries. We demonstrate that very accurate radiative forcing estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to an exact line-by-line RT model. The model is comparable in speed to 2-stream models, potentially rendering UPCART useful for operational general circulation models (GCMs). The operational speed and accuracy of UPCART can be further
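The PCA speedup described above can be sketched with toy models: the expensive "line-by-line" calculation is run only at a handful of states defined by the leading principal components of the optical-property ensemble, and the resulting correction is applied to a cheap "2-stream-like" estimate everywhere else. Both model functions below are scalar stand-ins assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ensemble of optical-property vectors with rapidly decaying variance,
# mimicking the redundancy that makes the PCA binning work.
n_samples = 2000
scales = np.array([2.0, 1.0, 0.5, 0.2, 0.1, 0.05, 0.02, 0.01])
X = rng.normal(0.0, 1.0, (n_samples, scales.size)) * scales

def cheap(x):        # fast approximate model (stand-in for 2-stream)
    return 1.0 + 0.1 * (x @ np.arange(1.0, 9.0))

def expensive(x):    # slow accurate model (stand-in for line-by-line RT)
    return cheap(x) * np.exp(0.05 * x[0] + 0.02 * x[1] ** 2)

# Empirical orthogonal functions of the optical properties.
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 2
pcs = Vt[:k]
sig = s[:k] / np.sqrt(n_samples)

# Expensive runs only at 2k + 1 representative states.
states = [mean] + [mean + d * sig[i] * pcs[i]
                   for i in range(k) for d in (-1.0, 1.0)]
log_ratio = np.log([expensive(x) / cheap(x) for x in states])

# First-order model of the correction factor in PC coordinates.
coords = np.array([(x - mean) @ pcs.T for x in states])
A = np.hstack([np.ones((len(states), 1)), coords])
beta, *_ = np.linalg.lstsq(A, log_ratio, rcond=None)

Z = (X - mean) @ pcs.T
cheap_vals = np.array([cheap(x) for x in X])
truth = np.array([expensive(x) for x in X])
corrected = cheap_vals * np.exp(
    np.hstack([np.ones((n_samples, 1)), Z]) @ beta)

baseline_err = np.median(np.abs(cheap_vals - truth) / np.abs(truth))
pca_err = np.median(np.abs(corrected - truth) / np.abs(truth))
```

Only 2k + 1 expensive evaluations are needed for the whole ensemble, which is the source of the near-2-stream speed claimed for UPCART.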
Multiaxial cyclic ratcheting in coiled tubing -- Part 1: Theoretical modeling
Rolovic, R.; Tipton, S.M.
2000-04-01
Coiled tubing is a long, continuous string of steel tubing that is used in the oil well drilling and servicing industry. Bending strains imposed on coiled tubing as it is deployed and retrieved from a well are considerably into the plastic regime and can be as high as 3%. Progressive growth of tubing diameter occurs when tubing is cyclically bent-straightened under constant internal pressure, regardless of the fact that the hoop stress imposed by typical pressure levels is well below the material's yield strength. A new incremental plasticity model is proposed in this study that can predict multiaxial cyclic ratcheting in coiled tubing more accurately than the conventional plasticity models. A new hardening rule is presented based on published experimental observations. The model also implements a new plastic modulus function. The predictions based on the new theory correlate well with experimental results presented in Part 2 of this paper. Some previously unexpected trends in coiled tubing deformation behavior were observed and correctly predicted using the proposed model.
Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.
2015-01-01
The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.
Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.
Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek
2016-02-01
Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-`one-click' experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/. PMID:26894674
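The dead-end elimination theory that Fitmunk's algorithm builds on can be illustrated on a toy instance: rotamer r at residue i is discarded when some competitor t beats it even in r's best case and t's worst case. The random energies below are an illustrative stand-in for the hybrid energy function described in the abstract:

```python
import itertools
import random

random.seed(4)

# Toy instance of the dead-end elimination (DEE) criterion: rotamer r at
# residue i can be eliminated if some competitor t satisfies
#   E(i_r) + sum_j min_s E(i_r, j_s) > E(i_t) + sum_j max_s E(i_t, j_s).
n_res, n_rot = 4, 3
E_self = [[random.uniform(0.0, 5.0) for _ in range(n_rot)]
          for _ in range(n_res)]
E_pair = {(i, r, j, s): random.uniform(0.0, 1.0)
          for i in range(n_res) for j in range(i + 1, n_res)
          for r in range(n_rot) for s in range(n_rot)}

def pair_e(i, r, j, s):
    """Symmetric pairwise energy lookup."""
    return E_pair[(i, r, j, s)] if i < j else E_pair[(j, s, i, r)]

def dead_ended(i, r, t):
    """Desmet criterion: eliminate i_r if i_t is always better."""
    lhs = E_self[i][r] + sum(min(pair_e(i, r, j, s) for s in range(n_rot))
                             for j in range(n_res) if j != i)
    rhs = E_self[i][t] + sum(max(pair_e(i, t, j, s) for s in range(n_rot))
                             for j in range(n_res) if j != i)
    return lhs > rhs

alive = {i: [r for r in range(n_rot)
             if not any(dead_ended(i, r, t)
                        for t in range(n_rot) if t != r)]
         for i in range(n_res)}

def total_energy(assign):
    e = sum(E_self[i][assign[i]] for i in range(n_res))
    e += sum(pair_e(i, assign[i], j, assign[j])
             for i in range(n_res) for j in range(i + 1, n_res))
    return e

# DEE is provably safe: the global minimum-energy assignment survives.
best = min(itertools.product(range(n_rot), repeat=n_res), key=total_energy)
```

The pruning is exact rather than heuristic, which is why DEE-based side-chain placement can use the dense conformer libraries mentioned above without an exhaustive search.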
Are Quasi-Steady-State Approximated Models Suitable for Quantifying Intrinsic Noise Accurately?
Sengupta, Dola; Kar, Sandip
2015-01-01
Large gene regulatory networks (GRNs) are often modeled with the quasi-steady-state approximation (QSSA) to reduce the huge computational time required for intrinsic noise quantification with the Gillespie stochastic simulation algorithm (SSA). However, the question remains whether a stochastic QSSA model measures intrinsic noise as accurately as an SSA performed on a detailed mechanistic model. To address this issue, we constructed mechanistic and QSSA models for a few frequently observed GRNs exhibiting switching behavior and performed stochastic simulations with them. Our results strongly suggest that the performance of a stochastic QSSA model, in comparison to an SSA performed on a mechanistic model, critically relies on the absolute values of the mRNA and protein half-lives involved in the corresponding GRN. The accuracy achieved by the stochastic QSSA model depends on the level of bursting frequency generated by the absolute half-life of the mRNA, the protein, or both species. For the GRNs considered, the stochastic QSSA quantifies intrinsic noise at the protein level with greater accuracy and for larger combinations of half-life values of mRNA and protein, whereas at the mRNA level a satisfactory accuracy can be reached only for limited combinations of absolute half-life values. Further, we clearly demonstrate that the abundance levels of mRNA and protein hardly matter for such a comparison between QSSA and mechanistic models. Based on our findings, we conclude that a QSSA model can be a good choice for evaluating intrinsic noise for other GRNs as well, provided we make a rational choice based on experimental half-life values available in the literature. PMID:26327626
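The Gillespie SSA that serves as the reference here can be sketched on the simplest gene-expression motif, constitutive mRNA birth-death. The rates are illustrative; at stationarity the copy number is Poisson, so the Fano factor (variance/mean) should be close to 1:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal Gillespie SSA for constitutive mRNA birth-death, the kind of
# exact stochastic simulation the abstract compares QSSA models against.
k_tx, k_deg = 10.0, 1.0               # transcription, degradation rates
m, t, t_end = 0, 0.0, 2000.0
samples = []

while t < t_end:
    a_birth, a_death = k_tx, k_deg * m    # reaction propensities
    a_total = a_birth + a_death
    t += rng.exponential(1.0 / a_total)   # time to the next reaction
    if rng.random() < a_birth / a_total:
        m += 1                            # transcription event
    else:
        m -= 1                            # degradation event
    if t > 200.0:                         # discard the transient
        samples.append(m)

# Samples are event-weighted rather than time-weighted, which is adequate
# for this rough check of the Poisson (Fano ~ 1) prediction.
samples = np.array(samples, dtype=float)
fano = samples.var() / samples.mean()
```

A QSSA version of a larger network replaces fast reactions with effective propensities; the abstract's point is that whether this preserves the noise statistics hinges on the mRNA and protein half-lives.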
Group theoretical modeling of thermal explosion with reactant consumption
NASA Astrophysics Data System (ADS)
Ibragimov, Ranis N.; Dameron, Michael
2012-09-01
Today, engineering and science researchers routinely confront problems in mathematical modeling involving nonlinear differential equations. Many mathematical models formulated in terms of nonlinear differential equations can be successfully treated and solved by Lie group methods. Lie group analysis is especially valuable for investigating nonlinear differential equations, because its algorithms act as reliably as in the linear case. The aim of this article is to provide a group theoretical modeling of the symmetrical heating of an exothermally reacting medium, with approximations to the body's temperature distribution similar to those made by Thomas [17] and Squire [15]. The quantitative results were found to be in good agreement with Adler and Enig [1], who compared the integral curves corresponding to the critical conditions for the first-order reaction. Further development of the modeling by including the critical temperature is proposed. Overall, it is shown that the application of Lie group analysis allows one to extend previous analytic results for first-order reactions to nth-order ones.
Theoretical model for calculation of helicity in solar active regions
NASA Astrophysics Data System (ADS)
Chatterjee, P.
We (Choudhuri, Chatterjee and Nandy, 2005) calculate helicities of solar active regions based on the idea of Choudhuri (2003) that poloidal flux lines get wrapped around a toroidal flux tube rising through the convection zone, thereby giving rise to the helicity. Rough estimates based on this idea compare favourably with the observed magnitude of helicity. We use our solar dynamo model based on the Babcock-Leighton α-effect to study how helicity varies with latitude and time. At the time of solar maximum, our theoretical model gives negative helicity in the northern hemisphere and positive helicity in the south, in accordance with the observed hemispheric trends. However, we find that, during a short interval at the beginning of a cycle, helicities tend to be opposite to the preferred hemispheric trends. Next we (Chatterjee, Choudhuri and Petrovay 2006) use the above idea, along with the sunspot decay model of Petrovay and Moreno-Insertis (1997), to estimate the distribution of helicity inside a flux tube as it keeps collecting more azimuthal flux during its rise through the convection zone and as turbulent diffusion keeps acting on it. By varying parameters over reasonable ranges in our simple 1D model, we find that the azimuthal flux penetrates the flux tube to some extent instead of being confined to a narrow sheath outside.
Argudo, David; Bethel, Neville P; Marcoline, Frank V; Grabe, Michael
2016-07-01
Biological membranes deform in response to resident proteins leading to a coupling between membrane shape and protein localization. Additionally, the membrane influences the function of membrane proteins. Here we review contributions to this field from continuum elastic membrane models focusing on the class of models that couple the protein to the membrane. While it has been argued that continuum models cannot reproduce the distortions observed in fully-atomistic molecular dynamics simulations, we suggest that this failure can be overcome by using chemically accurate representations of the protein. We outline our recent advances along these lines with our hybrid continuum-atomistic model, and we show the model is in excellent agreement with fully-atomistic simulations of the nhTMEM16 lipid scramblase. We believe that the speed and accuracy of continuum-atomistic methodologies will make it possible to simulate large scale, slow biological processes, such as membrane morphological changes, that are currently beyond the scope of other computational approaches. This article is part of a Special Issue entitled: Membrane Proteins edited by J.C. Gumbart and Sergei Noskov. PMID:26853937
A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina
Maturana, Matias I.; Apollo, Nicholas V.; Hadjinicolaou, Alex E.; Garrett, David J.; Cloherty, Shaun L.; Kameneva, Tatiana; Grayden, David B.; Ibbotson, Michael R.; Meffin, Hamish
2016-01-01
Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron’s electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143
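The linear-nonlinear picture above, stimuli on many electrodes driving spiking through a low-dimensional projection plus a saturating nonlinearity, can be sketched with a simulated cell. The "cell" below is an assumption for illustration, and the subspace is recovered with the spike-triggered average, a simpler stand-in for the principal components analysis used in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated LN cell: spiking probability is a sigmoid of the stimulus
# projected onto a hidden electrical receptive field (ERF).
n_electrodes, n_trials = 20, 20000
w_true = np.zeros(n_electrodes)
w_true[:3] = [1.0, 0.6, 0.3]          # the cell's ERF (an assumption)

stimuli = rng.normal(0.0, 1.0, (n_trials, n_electrodes))
p_spike = 1.0 / (1.0 + np.exp(-(stimuli @ w_true - 1.0)))
spikes = rng.random(n_trials) < p_spike

# Estimate the ERF direction from the spike-triggered ensemble.
w_hat = stimuli[spikes].mean(axis=0)
w_hat /= np.linalg.norm(w_hat)
cos = abs(w_hat @ w_true) / np.linalg.norm(w_true)

# Empirical nonlinearity: spike rate versus projection decile.
proj = stimuli @ w_hat
edges = np.quantile(proj, np.linspace(0.0, 1.0, 11))
rates = [spikes[(proj >= edges[i]) & (proj < edges[i + 1])].mean()
         for i in range(10)]
```

The recovered direction aligns closely with the hidden ERF, and the binned rates trace out the monotonic nonlinearity, the two ingredients the paper combines to predict responses to arbitrary multi-electrode patterns.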
Hybridization modeling of oligonucleotide SNP arrays for accurate DNA copy number estimation
Wan, Lin; Sun, Kelian; Ding, Qi; Cui, Yuehua; Li, Ming; Wen, Yalu; Elston, Robert C.; Qian, Minping; Fu, Wenjiang J
2009-01-01
Affymetrix SNP arrays have been widely used for single-nucleotide polymorphism (SNP) genotype calling and DNA copy number variation inference. Although numerous methods have achieved high accuracy in these fields, most studies have paid little attention to the modeling of hybridization of probes to off-target allele sequences, which can affect the accuracy greatly. In this study, we address this issue and demonstrate that hybridization with mismatch nucleotides (HWMMN) occurs in all SNP probe-sets and has a critical effect on the estimation of allelic concentrations (ACs). We study sequence binding through binding free energy and then binding affinity, and develop a probe intensity composite representation (PICR) model. The PICR model allows the estimation of ACs at a given SNP through statistical regression. Furthermore, we demonstrate with cell-line data of known true copy numbers that the PICR model can achieve reasonable accuracy in copy number estimation at a single SNP locus, by using the ratio of the estimated AC of each sample to that of the reference sample, and can reveal subtle genotype structure of SNPs at abnormal loci. We also demonstrate with HapMap data that the PICR model yields accurate SNP genotype calls consistently across samples, laboratories and even across array platforms. PMID:19586935
Modeling of rolling element bearing mechanics. Theoretical manual
NASA Technical Reports Server (NTRS)
Merchant, David H.; Greenhill, Lyn M.
1994-01-01
This report documents the theoretical basis for the Rolling Element Bearing Analysis System (REBANS) analysis code which determines the quasistatic response to external loads or displacement of three types of high-speed rolling element bearings: angular contact ball bearings; duplex angular contact ball bearings; and cylindrical roller bearings. The model includes the effects of bearing ring and support structure flexibility. It is comprised of two main programs: the Preprocessor for Bearing Analysis (PREBAN) which creates the input files for the main analysis program; and Flexibility Enhanced Rolling Element Bearing Analysis (FEREBA), the main analysis program. A companion report addresses the input instructions for and features of the computer codes. REBANS extends the capabilities of the SHABERTH (Shaft and Bearing Thermal Analysis) code to include race and housing flexibility, including such effects as dead band and preload springs.
Modeling an Application's Theoretical Minimum and Average Transactional Response Times
Paiz, Mary Rose
2015-04-01
The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are results of unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
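The extreme-value idea for the lower threshold can be sketched numerically: fit an extreme-value law to the daily minimum response times and take a low quantile as the anomaly threshold. For simplicity this uses a Gumbel fit by the method of moments (a special case of the GEV family the report fits), and the simulated lognormal response times are an assumption:

```python
import numpy as np

rng = np.random.default_rng(5)

# 200 days x 500 transactions/day of simulated response times (ms).
EULER_GAMMA = 0.5772156649
daily = rng.lognormal(mean=4.0, sigma=0.3, size=(200, 500))
minima = daily.min(axis=1)            # one daily minimum per row

# Fit a Gumbel law to the block minima by negating them into maxima:
# for Gumbel maxima, mean = mu + gamma*beta and std = pi*beta/sqrt(6).
neg = -minima
beta = neg.std(ddof=1) * np.sqrt(6.0) / np.pi
mu = neg.mean() - EULER_GAMMA * beta

def gumbel_max_quantile(p):
    return mu - beta * np.log(-np.log(p))

# Lower threshold: the 1st percentile of the daily-minimum distribution;
# days whose minimum falls below it are flagged as anomalous.
lower_threshold = -gumbel_max_quantile(0.99)
```

The report's non-stationary GEV generalizes this by letting the location and scale parameters vary with time, so the threshold tracks seasonal load patterns.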
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
NASA Astrophysics Data System (ADS)
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention of researchers for structural health monitoring (SHM) of thin-walled structures. This is because Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and little attenuation, which brings the prospect of monitoring large structures with few sensors/actuators. However, damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. The problem is made more challenging by the confounding factors of statistical variation in the material and geometric properties, and it may also be ill posed. Because of these complexities, a direct solution of the damage detection and identification problem in SHM is impossible, so an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem". This requires a fast forward-problem solver. Owing to the complexities of the forward problem of Lamb wave scattering from damage, researchers rely primarily on numerical techniques such as FEM and BEM, but these methods are slow and impractical for structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate the scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately, to assist the inverse-problem solver.
Theoretical model for forming limit diagram predictions without initial inhomogeneity
NASA Astrophysics Data System (ADS)
Gologanu, Mihai; Comsa, Dan Sorin; Banabic, Dorel
2013-05-01
We report on our attempts to build a theoretical model for determining forming limit diagrams (FLD) based on limit analysis that, contrary to the well-known Marciniak and Kuczynski (M-K) model, does not assume the initial existence of a region with material or geometrical inhomogeneity. We first give a new interpretation based on limit analysis for the onset of necking in the M-K model. Considering the initial thickness defect along a narrow band as postulated by the M-K model, we show that incipient necking is a transition in the plastic mechanism from one of plastic flow in both the sheet and the band to another one where the sheet becomes rigid and all plastic deformation is localized in the band. We then draw on some analogies between the onset of necking in a sheet and the onset of coalescence in a porous bulk body. In fact, the main advance in coalescence modeling has been based on a similar limit analysis with an important new ingredient: the evolution of the spatial distribution of voids, due to the plastic deformation, creating weaker regions with higher porosity surrounded by sound regions with no voids. The onset of coalescence is precisely the transition from a mechanism of plastic deformation in both regions to another one, where the sound regions are rigid. We apply this new ingredient to a necking model based on limit analysis, for the first quadrant of the FLD and a porous sheet. We use Gurson's model with some recent extensions to model the porous material. We follow both the evolution of a homogeneous sheet and the evolution of the distribution of voids. At each moment we test for a potential change of plastic mechanism, by comparing the stresses in the uniform region to those in a virtual band with a larger porosity. The main difference with the coalescence of voids in a bulk solid is that the plastic mechanism for a sheet admits a supplementary degree of freedom, namely the change in the thickness of the virtual band. For strain ratios close to
Do Ecological Niche Models Accurately Identify Climatic Determinants of Species Ranges?
Searcy, Christopher A; Shaffer, H Bradley
2016-04-01
Defining species' niches is central to understanding their distributions and is thus fundamental to basic ecology and climate change projections. Ecological niche models (ENMs) are a key component of making accurate projections and include descriptions of the niche in terms of both response curves and rankings of variable importance. In this study, we evaluate Maxent's ranking of environmental variables based on their importance in delimiting species' range boundaries by asking whether these same variables also govern annual recruitment based on long-term demographic studies. We found that Maxent-based assessments of variable importance in setting range boundaries in the California tiger salamander (Ambystoma californiense; CTS) correlate very well with how important those variables are in governing ongoing recruitment of CTS at the population level. This strong correlation suggests that Maxent's ranking of variable importance captures biologically realistic assessments of factors governing population persistence. However, this result holds only when Maxent models are built using best-practice procedures and variables are ranked based on permutation importance. Our study highlights the need for building high-quality niche models and provides encouraging evidence that when such models are built, they can reflect important aspects of a species' ecology. PMID:27028071
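Permutation importance, the variable ranking the study relies on, can be sketched generically: shuffle one feature column and measure the drop in model score. The toy "niche model" and data below are invented for illustration; this is not Maxent itself.

```python
import random

def permutation_importance(predict, X, y, score, n_repeats=30, seed=0):
    """Rank features by the mean drop in score when one feature
    column is randomly shuffled (breaking its link with y)."""
    rng = random.Random(seed)
    base = score(predict(X), y)
    drops = []
    for j in range(len(X[0])):
        total = 0.0
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            Xp = [row[:j] + [c] + row[j + 1:] for row, c in zip(X, col)]
            total += base - score(predict(Xp), y)
        drops.append(total / n_repeats)
    return drops

# Toy "niche model": suitability depends only on feature 0.
predict = lambda X: [1 if row[0] > 0.5 else 0 for row in X]
accuracy = lambda yhat, y: sum(a == b for a, b in zip(yhat, y)) / len(y)

rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]
drops = permutation_importance(predict, X, y, accuracy)
```

The informative feature shows a large score drop when permuted; the irrelevant one shows none.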
Linaro, Daniele; Storace, Marco; Giugliano, Michele
2011-01-01
Stochastic channel gating is the major source of intrinsic neuronal noise, whose functional consequences at the microcircuit and network levels have been only partly explored. A systematic study of this channel noise in large ensembles of biophysically detailed model neurons calls for the availability of fast numerical methods. In fact, exact techniques employ the microscopic simulation of the random opening and closing of individual ion channels, usually based on Markov models, whose computational loads are prohibitive for next generation massive computer models of the brain. In this work, we operatively define a procedure for translating any Markov model describing voltage- or ligand-gated membrane ion-conductances into an effective stochastic version, whose computer simulation is efficient, without compromising accuracy. Our approximation is based on an improved Langevin-like approach, which employs stochastic differential equations and no Monte Carlo methods. As opposed to an earlier proposal recently debated in the literature, our approximation reproduces accurately the statistical properties of the exact microscopic simulations, under a variety of conditions, from spontaneous to evoked response features. In addition, our method is not restricted to the Hodgkin-Huxley sodium and potassium currents and is general for a variety of voltage- and ligand-gated ion currents. As a by-product, the analysis of the properties emerging in exact Markov schemes by standard probability calculus enables us, for the first time, to analytically identify the sources of inaccuracy in the previous proposal, while providing solid ground for the modification and improvement we present here. PMID:21423712
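The Langevin idea, drift from the master equation plus Gaussian noise whose variance matches the microscopic fluxes, can be sketched for the simplest case of N identical two-state channels. Rates are illustrative; real Hodgkin-Huxley or ligand-gated schemes have more states.

```python
import math, random

def langevin_two_state(N=1000, alpha=2.0, beta=1.0, dt=1e-3, T=20.0, seed=0):
    """Effective (Langevin) simulation of the open fraction n of N
    identical two-state channels: deterministic drift from the master
    equation plus a Gaussian term whose variance matches the
    microscopic open/close fluxes (diffusion approximation)."""
    rng = random.Random(seed)
    n = alpha / (alpha + beta)        # start at the deterministic fixed point
    samples = []
    steps = int(T / dt)
    for k in range(steps):
        drift = alpha * (1.0 - n) - beta * n
        var = (alpha * (1.0 - n) + beta * n) / N   # two-state flux variance
        n += drift * dt + math.sqrt(max(var, 0.0) * dt) * rng.gauss(0.0, 1.0)
        n = min(1.0, max(0.0, n))     # keep the open fraction physical
        if k > steps // 2:            # discard the transient
            samples.append(n)
    return sum(samples) / len(samples)
```

The stationary mean open fraction should fluctuate around alpha / (alpha + beta) = 2/3 here.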
Accurate integral equation theory for the central force model of liquid water and ionic solutions
NASA Astrophysics Data System (ADS)
Ichiye, Toshiko; Haymet, A. D. J.
1988-10-01
The atom-atom pair correlation functions and thermodynamics of the central force model of water, introduced by Lemberg, Stillinger, and Rahman, have been calculated accurately by an integral equation method which incorporates two new developments. First, a rapid new scheme has been used to solve the Ornstein-Zernike equation. This scheme combines the renormalization methods of Allnatt, and Rossky and Friedman with an extension of the trigonometric basis-set solution of Labik and co-workers. Second, by adding approximate "bridge" functions to the hypernetted-chain (HNC) integral equation, we have obtained predictions for liquid water in which the hydrogen bond length and number are in good agreement with "exact" computer simulations of the same model force laws. In addition, for dilute ionic solutions, the ion-oxygen and ion-hydrogen coordination numbers display both the physically correct stoichiometry and good agreement with earlier simulations. These results represent a measurable improvement over both a previous HNC solution of the central force model and the ex-RISM integral equation solutions for the TIPS and other rigid molecule models of water.
Efficient and Accurate Explicit Integration Algorithms with Application to Viscoplastic Models
NASA Technical Reports Server (NTRS)
Arya, Vinod K.
1994-01-01
Several explicit integration algorithms with self-adaptive time integration strategies are developed and investigated for efficiency and accuracy. These algorithms involve the second-order Runge-Kutta method, the lower-order Runge-Kutta methods of orders one and two, and the exponential integration method. The algorithms are applied to viscoplastic models put forth by Freed and Verrilli and by Bodner and Partom for thermal/mechanical loadings (including tensile, relaxation, and cyclic loadings). The large number of computations performed showed that, for comparable accuracy, the efficiency of an integration algorithm depends significantly on the type of application (loading). In general, however, for the aforementioned loadings and viscoplastic models, the exponential integration algorithm with the proposed self-adaptive time integration strategy performed as efficiently and accurately as, or better than, the other integration algorithms. Using this strategy to integrate viscoplastic models may lead to considerable savings in computer time (better efficiency) without adversely affecting the accuracy of the results. This conclusion should encourage the utilization of viscoplastic models in the stress analysis and design of structural components.
Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L
2016-08-01
Prediction of symptomatic crises in chronic diseases allows to take decisions before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is in this case an important parameter in order to fulfill the pharmacokinetics of medications, or the time response of medical services. This paper presents a study about the prediction limits of a chronic disease with symptomatic crises: the migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade off between conservative but robust predictive models, with respect to less accurate predictions with higher horizons. The obtained results show a prediction horizon close to 40min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data have been acquired in an ambulatory clinical study by the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. PMID:27260782
Computational Graph Theoretical Model of the Zebrafish Sensorimotor Pathway
NASA Astrophysics Data System (ADS)
Peterson, Joshua M.; Stobb, Michael; Mazzag, Bori; Gahtan, Ethan
2011-11-01
Mapping the detailed connectivity patterns of neural circuits is a central goal of neuroscience and has been the focus of extensive current research [4, 3]. The best quantitative approach to analyze the acquired data is still unclear, but graph theory has been used with success [3, 1]. We present a graph theoretical model with vertices and edges representing neurons and synaptic connections, respectively. Our system is the zebrafish posterior lateral line sensorimotor pathway. The goal of our analysis is to elucidate mechanisms of information processing in this neural pathway by comparing the mathematical properties of its graph to those of other, previously described graphs. We create a zebrafish model based on currently known anatomical data. The degree distributions and small-world measures of this model are compared to those of small-world, random, and 3-compartment random graphs of the same size (with over 2500 nodes and 160,000 connections). We find that the zebrafish graph shows small-worldness similar to other neural networks and does not have a scale-free distribution of connections.
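The small-world quantities involved, mean clustering coefficient and mean shortest-path length, can be computed directly. This toy sketch builds the classic ring lattice often used as a small-world baseline; the zebrafish graph itself is not reproduced here.

```python
from collections import deque

def clustering(adj):
    """Mean local clustering coefficient of an undirected graph
    given as {node: set(neighbours)}."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over all pairs, via BFS from each node."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring_lattice(n=100, k=4):
    """Each node connected to its k nearest neighbours on a ring."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k // 2 + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

ring = ring_lattice()
```

For this lattice the clustering coefficient is exactly 3(k-2)/(4(k-1)) = 0.5, while path lengths are long, the regular extreme of the small-world spectrum.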
Santolini, Marc; Mora, Thierry; Hakim, Vincent
2014-01-01
The identification of transcription factor binding sites (TFBSs) on genomic DNA is of crucial importance for understanding and predicting regulatory elements in gene networks. TFBS motifs are commonly described by Position Weight Matrices (PWMs), in which each DNA base pair contributes independently to the transcription factor (TF) binding. However, this description ignores correlations between nucleotides at different positions, and is generally inaccurate: analysing fly and mouse in vivo ChIP-seq data, we show that in most cases the PWM model fails to reproduce the observed statistics of TFBSs. To overcome this issue, we introduce the pairwise interaction model (PIM), a generalization of the PWM model. The model is based on the principle of maximum entropy and explicitly describes pairwise correlations between nucleotides at different positions, while being otherwise as unconstrained as possible. It is mathematically equivalent to considering a TF-DNA binding energy that depends additively on each nucleotide identity at all positions in the TFBS, like the PWM model, but also additively on pairs of nucleotides. We find that the PIM significantly improves over the PWM model, and even provides an optimal description of TFBS statistics within statistical noise. The PIM generalizes previous approaches to interdependent positions: it accounts for co-variation of two or more base pairs, and predicts secondary motifs, while outperforming multiple-motif models consisting of mixtures of PWMs. We analyse the structure of pairwise interactions between nucleotides, and find that they are sparse and dominantly located between consecutive base pairs in the flanking region of TFBS. Nonetheless, interactions between pairs of non-consecutive nucleotides are found to play a significant role in the obtained accurate description of TFBS statistics. The PIM is computationally tractable, and provides a general framework that should be useful for describing and predicting TFBSs beyond
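The difference between the two models can be sketched as scoring functions: a PWM sums independent per-position terms, while a PIM adds pairwise couplings between positions. The two-base-pair motif and coupling values below are invented for illustration.

```python
import math

def pwm_score(seq, pwm):
    """Additive log-probability of a site under a PWM:
    each position contributes independently."""
    return sum(math.log(pwm[i][b]) for i, b in enumerate(seq))

def pim_score(seq, pwm, pair_J):
    """Pairwise interaction model: the PWM terms plus couplings
    J[(i, j)][(a, b)] between nucleotides at positions i < j."""
    s = pwm_score(seq, pwm)
    for (i, j), J in pair_J.items():
        s += J.get((seq[i], seq[j]), 0.0)
    return s

# Hypothetical 2-bp motif in which the pair "AT" co-varies:
pwm = [{"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
       {"A": 0.1, "C": 0.1, "G": 0.1, "T": 0.7}]
pair_J = {(0, 1): {("A", "T"): 0.5, ("A", "G"): -0.5}}
```

A favoured pair raises the PIM score above the PWM score; a disfavoured pair lowers it, which is exactly the correlation structure a PWM cannot express.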
Application of thin plate splines for accurate regional ionosphere modeling with multi-GNSS data
NASA Astrophysics Data System (ADS)
Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej
2016-04-01
GNSS-derived regional ionosphere models are widely used in precise positioning as well as in ionosphere and space weather studies. However, their accuracy is often not sufficient to support precise positioning, RTK in particular. In this paper, we present a new approach that uses solely carrier-phase multi-GNSS observables and thin plate splines (TPS) for accurate ionospheric TEC modeling. The TPS is a closed-form solution of a variational problem minimizing both the sum of squared second derivatives of a smoothing function and the deviation between data points and this function. This approach is used in the UWM-rt1 regional ionosphere model developed at UWM in Olsztyn. The model provides ionospheric TEC maps with high spatial and temporal resolutions, 0.2 x 0.2 degrees and 2.5 minutes, respectively. For TEC estimation, EPN and EUPOS reference station data are used. The maps are available with a delay of 15-60 minutes. In this paper we compare the performance of the UWM-rt1 model with IGS global and CODE regional ionosphere maps during the ionospheric storm that took place on March 17th, 2015. During this storm, the TEC level over Europe doubled compared to earlier quiet days. The performance of the UWM-rt1 model was validated by (a) comparison to reference double-differenced ionospheric corrections over selected baselines, and (b) analysis of post-fit residuals of calibrated carrier-phase geometry-free observational arcs at selected test stations. The results show a very good performance of the UWM-rt1 model. The obtained post-fit residuals for the UWM maps are lower by one order of magnitude compared to the IGS maps. The accuracy of the UWM-rt1-derived TEC maps is estimated at 0.5 TECU. This may be directly translated to the user positioning domain.
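A minimal sketch of TPS fitting under simplifying assumptions: the affine part of the full thin plate spline is omitted, a plain dense solve is used, and the point data are illustrative rather than TEC observations.

```python
import math

def tps_kernel(r):
    """Thin plate spline radial basis: phi(r) = r^2 * log(r), phi(0) = 0."""
    return 0.0 if r == 0.0 else r * r * math.log(r)

def solve(A, b):
    """Plain Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def tps_fit(points, values, smooth=0.0):
    """Weights of a simplified TPS surface f(p) = sum_i w_i * phi(|p - p_i|)
    (the affine term of the full TPS is omitted in this sketch);
    smooth > 0 turns interpolation into smoothing."""
    A = [[tps_kernel(math.dist(p, q)) + (smooth if i == j else 0.0)
          for j, q in enumerate(points)] for i, p in enumerate(points)]
    return solve(A, values)

def tps_eval(x, points, w):
    return sum(wi * tps_kernel(math.dist(x, p)) for wi, p in zip(w, points))
```

With smooth = 0 the surface interpolates the data points exactly, mirroring the variational trade-off the abstract describes at its smoothing-free limit.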
Felmy, Andrew R.; Mason, Marvin; Qafoku, Odeta; Xia, Yuanxian; Wang, Zheming; MacLean, Graham
2003-03-27
Developing accurate thermodynamic models for predicting the chemistry of the high-level waste tanks at Hanford is an extremely daunting challenge in electrolyte and radionuclide chemistry. These challenges stem from the extremely high ionic strength of the tank waste supernatants, the presence of chelating agents in selected tanks, the wide temperature range of processing conditions and the presence of important actinide species in multiple oxidation states. This presentation summarizes the progress made to date in developing accurate models for these tank waste solutions, describes how these data are being used at Hanford, and outlines the important challenges that remain. New thermodynamic measurements on Sr and actinide complexation with specific chelating agents (EDTA, HEDTA and gluconate) will also be presented.
Accurate calculation of conductive conductances in complex geometries for spacecraft thermal models
NASA Astrophysics Data System (ADS)
Garmendia, Iñaki; Anglada, Eva; Vallejo, Haritz; Seco, Miguel
2016-02-01
The thermal subsystem of spacecraft and payloads is always designed with the help of Thermal Mathematical Models. In the case of the Thermal Lumped Parameter (TLP) method, the non-linear system of equations that is created is solved to calculate the temperature distribution and the heat power that flows between nodes. The accuracy of the results depends largely on the appropriate calculation of the conductive and radiative conductances. Several established methods for the determination of conductive conductances exist, but they present some limitations for complex geometries. Two new methods are proposed in this paper to calculate these conductive conductances accurately: the Extended Far Field method and the Mid-Section method. Both are based on a finite element calculation, but while the Extended Far Field method uses the calculation of node mean temperatures, the Mid-Section method is based on assuming specific temperature values. They are compared with traditionally used methods, showing the advantages of these two new methods.
A fast and accurate PCA based radiative transfer model: Extension to the broadband shortwave region
NASA Astrophysics Data System (ADS)
Kopparla, Pushkar; Natraj, Vijay; Spurr, Robert; Shia, Run-Lie; Crisp, David; Yung, Yuk L.
2016-04-01
Accurate radiative transfer (RT) calculations are necessary for many earth-atmosphere applications, from remote sensing retrieval to climate modeling. A Principal Component Analysis (PCA)-based spectral binning method has been shown to provide an order of magnitude increase in computational speed while maintaining an overall accuracy of 0.01% (compared to line-by-line calculations) over narrow spectral bands. In this paper, we have extended the PCA method for RT calculations over the entire shortwave region of the spectrum from 0.3 to 3 microns. The region is divided into 33 spectral fields covering all major gas absorption regimes. We find that the RT runtimes are shorter by factors of 10 to 100, while root-mean-square errors are of order 0.01%.
Accurate force fields and methods for modelling organic molecular crystals at finite temperatures.
Nyman, Jonas; Pundyke, Orla Sheehan; Day, Graeme M
2016-06-21
We present an assessment of the performance of several force fields for modelling intermolecular interactions in organic molecular crystals using the X23 benchmark set. The performance of the force fields is compared to several popular dispersion corrected density functional methods. In addition, we present our implementation of lattice vibrational free energy calculations in the quasi-harmonic approximation, using several methods to account for phonon dispersion. This allows us to also benchmark the force fields' reproduction of finite temperature crystal structures. The results demonstrate that anisotropic atom-atom multipole-based force fields can be as accurate as several popular DFT-D methods, but have errors 2-3 times larger than the current best DFT-D methods. The largest error in the examined force fields is a systematic underestimation of the (absolute) lattice energy. PMID:27230942
O’Connor, James PB; Boult, Jessica KR; Jamin, Yann; Babur, Muhammad; Finegan, Katherine G; Williams, Kaye J; Little, Ross A; Jackson, Alan; Parker, Geoff JM; Reynolds, Andrew R; Waterton, John C; Robinson, Simon P
2015-01-01
There is a clinical need for non-invasive biomarkers of tumor hypoxia for prognostic and predictive studies, radiotherapy planning and therapy monitoring. Oxygen enhanced MRI (OE-MRI) is an emerging imaging technique for quantifying the spatial distribution and extent of tumor oxygen delivery in vivo. In OE-MRI, the longitudinal relaxation rate of protons (ΔR1) changes in proportion to the concentration of molecular oxygen dissolved in plasma or interstitial tissue fluid. Therefore, well-oxygenated tissues show positive ΔR1. We hypothesized that the fraction of tumor tissue refractory to oxygen challenge (lack of positive ΔR1, termed “Oxy-R fraction”) would be a robust biomarker of hypoxia in models with varying vascular and hypoxic features. Here we demonstrate that OE-MRI signals are accurate, precise and sensitive to changes in tumor pO2 in highly vascular 786-0 renal cancer xenografts. Furthermore, we show that Oxy-R fraction can quantify the hypoxic fraction in multiple models with differing hypoxic and vascular phenotypes, when used in combination with measurements of tumor perfusion. Finally, Oxy-R fraction can detect dynamic changes in hypoxia induced by the vasomodulator agent hydralazine. In contrast, more conventional biomarkers of hypoxia (derived from blood oxygenation-level dependent MRI and dynamic contrast-enhanced MRI) did not relate to tumor hypoxia consistently. Our results show that the Oxy-R fraction accurately quantifies tumor hypoxia non-invasively and is immediately translatable to the clinic. PMID:26659574
The S-model: A highly accurate MOST model for CAD
NASA Astrophysics Data System (ADS)
Satter, J. H.
1986-09-01
A new MOST model which combines simplicity and a logical structure with errors of only 0.5-4.5% is presented. The model is suited for enhancement and depletion devices with either large or small dimensions. It includes the effects of scattering and carrier-velocity saturation as well as the influence of the intrinsic source and drain series resistance. The decrease of the drain current due to substrate bias is incorporated too. The model is primarily intended for digital applications. All necessary quantities are calculated in a straightforward manner without iteration. An almost entirely new way of determining the parameters is described, and a new cluster parameter is introduced, which is responsible for the high accuracy of the model. The total number of parameters is 7. A still simpler β expression is derived, which is suitable for only one value of the substrate bias and contains only three parameters, while maintaining the accuracy. The way in which the parameters are determined is readily suited to automatic measurement. A simple linear-regression procedure programmed into the computer, which controls the measurements, produces the parameter values.
Random generalized linear model: a highly accurate and interpretable ensemble predictor
2013-01-01
Background Ensemble predictors such as the random forest are known to have superior accuracy but their black-box predictions are difficult to interpret. In contrast, a generalized linear model (GLM) is very interpretable, especially when forward feature selection is used to construct the model. However, forward feature selection tends to overfit the data and leads to low predictive accuracy. Therefore, it remains an important research goal to combine the advantages of ensemble predictors (high accuracy) with the advantages of forward regression modeling (interpretability). To address this goal several articles have explored GLM based ensemble predictors. Since limited evaluations suggested that these ensemble predictors were less accurate than alternative predictors, they have received little attention in the literature. Results Comprehensive evaluations involving hundreds of genomic data sets, the UCI machine learning benchmark data, and simulations are used to give GLM based ensemble predictors a new and careful look. A novel bootstrap aggregated (bagged) GLM predictor that incorporates several elements of randomness and instability (random subspace method, optional interaction terms, forward variable selection) often outperforms a host of alternative prediction methods including random forests and penalized regression models (ridge regression, elastic net, lasso). This random generalized linear model (RGLM) predictor provides variable importance measures that can be used to define a “thinned” ensemble predictor (involving few features) that retains excellent predictive accuracy. Conclusion RGLM is a state-of-the-art predictor that shares the advantages of a random forest (excellent predictive accuracy, feature importance measures, out-of-bag estimates of accuracy) with those of a forward selected generalized linear model (interpretability). These methods are implemented in the freely available R software package randomGLM. PMID:23323760
Theoretical model of electroosmotic flow for capillary zone electrophoresis
Tavares, M.F.M.; McGuffin, V.L.
1995-10-15
A mathematical model of electroosmotic flow in capillary zone electrophoresis has been developed by taking into consideration the ion-selective properties of silica surfaces. The electroosmotic velocity was experimentally determined, under both constant-voltage and constant-current conditions, by using the resistance-monitoring method. A detailed study of electroosmotic flow characteristics in solutions of singly charged, strong electrolytes (NaCl, LiCl, KCl, NaBr, NaI, NaNO₃, and NaClO₄), as well as the phosphate buffer system, revealed a linear correlation between the ζ potential and the logarithm of the cation activity. These results suggest that the capillary surface behaves as an ion-selective electrode. Consequently, the ζ potential can be calculated as a function of the composition and pH of the solution with the correspondingly modified Nernst equation for ion-selective electrodes. If the viscosity and dielectric constant of the solution are known, the electroosmotic velocity can then be accurately predicted by means of the Helmholtz-Smoluchowski equation. The proposed model has been successfully applied to phosphate buffer solutions in the pH range from 4 to 10, containing sodium chloride from 5 to 15 mM, resulting in errors of nearly 3% in the estimation of the electroosmotic velocity. 53 refs., 8 figs., 2 tabs.
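The final prediction step is a one-line application of the Helmholtz-Smoluchowski equation, with the ζ potential supplied by a Nernst-like linear dependence on the logarithm of cation activity. The fitted constants and operating values below are placeholders, not the paper's.

```python
def electroosmotic_velocity(zeta, E, eps_r=78.3, eta=8.9e-4):
    """Helmholtz-Smoluchowski: v_eo = -(eps0 * eps_r * zeta / eta) * E.
    zeta in volts, E in V/m, eta in Pa*s; returns m/s."""
    eps0 = 8.854e-12  # vacuum permittivity, F/m
    return -eps0 * eps_r * zeta / eta * E

def zeta_from_activity(zeta0, slope, log_a_cation):
    """Nernst-like linear dependence of the zeta potential on the
    logarithm of cation activity (the ion-selective-electrode analogy;
    zeta0 and slope would be fitted constants)."""
    return zeta0 + slope * log_a_cation
```

With a typical silica zeta potential of -50 mV and a field of 300 V/cm, the predicted electroosmotic velocity is on the order of a millimetre per second, the right scale for CZE.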
Graph theoretic modeling of large-scale semantic networks.
Bales, Michael E; Johnson, Stephen B
2006-08-01
During the past several years, social network analysis methods have been used to model many complex real-world phenomena, including social networks, transportation networks, and the Internet. Graph theoretic methods, based on an elegant representation of entities and relationships, have been used in computational biology to study biological networks; however they have not yet been adopted widely by the greater informatics community. The graphs produced are generally large, sparse, and complex, and share common global topological properties. In this review of research (1998-2005) on large-scale semantic networks, we used a tailored search strategy to identify articles involving both a graph theoretic perspective and semantic information. Thirty-one relevant articles were retrieved. The majority (28, 90.3%) involved an investigation of a real-world network. These included corpora, thesauri, dictionaries, large computer programs, biological neuronal networks, word association networks, and files on the Internet. Twenty-two of the 28 (78.6%) involved a graph comprised of words or phrases. Fifteen of the 28 (53.6%) mentioned evidence of small-world characteristics in the network investigated. Eleven (39.3%) reported a scale-free topology, which tends to have a similar appearance when examined at varying scales. The results of this review indicate that networks generated from natural language have topological properties common to other natural phenomena. It has not yet been determined whether artificial human-curated terminology systems in biomedicine share these properties. Large network analysis methods have potential application in a variety of areas of informatics, such as in development of controlled vocabularies and for characterizing a given domain. PMID:16442849
Electron Scale Solar Wind Turbulence: Cluster Observations and Theoretical Modeling
Sahraoui, F.; Goldstein, M. L.
2011-01-04
Turbulence at magnetohydrodynamic (MHD) scales of the solar wind has been studied for more than three decades using data analyses and theoretical and numerical modeling. However, smaller scales have not been explored until very recently. Here, we review recent results on the first observation of the cascade and dissipation of solar wind turbulence at electron scales. Thanks to the high-resolution magnetic and electric field data of the Cluster spacecraft, we computed the spectra of turbulence up to ~100 Hz (in the spacecraft reference frame) and found two distinct breakpoints in the magnetic spectrum at 0.4 Hz and 35 Hz, which correspond, respectively, to the Doppler-shifted proton and electron gyroscales, f_ρp and f_ρe. Below f_ρp the spectrum follows a Kolmogorov scaling f^-1.62, typical of spectra observed at 1 AU. Above f_ρp a second inertial range forms with a scaling f^-2.3 down to f_ρe. Above f_ρe the spectrum has a steeper power law, ~f^-4.1, down to the noise level of the instrument. Solving the linear Maxwell-Vlasov equations numerically and combining the results with recent theoretical predictions of gyrokinetic theory, we show that the present results are fully consistent with a scenario of a quasi-two-dimensional cascade into kinetic Alfvén wave (KAW) modes.
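The reported spectral shape can be summarized as a continuous piecewise power law, with the exponents and breakpoints from the abstract and an arbitrary normalization.

```python
def solar_wind_spectrum(f, f_rho_p=0.4, f_rho_e=35.0, p0=1.0):
    """Piecewise power-law model of the magnetic turbulence spectrum:
    Kolmogorov-like f^-1.62 below the proton gyroscale breakpoint,
    f^-2.3 between the proton and electron scales, and f^-4.1 above.
    Continuity at each breakpoint fixes the level of each segment."""
    if f <= f_rho_p:
        return p0 * f ** -1.62
    a2 = p0 * f_rho_p ** (-1.62 + 2.3)    # match at the proton break
    if f <= f_rho_e:
        return a2 * f ** -2.3
    a3 = a2 * f_rho_e ** (-2.3 + 4.1)     # match at the electron break
    return a3 * f ** -4.1
```

Within the MHD range the model reproduces the Kolmogorov-like slope exactly, and the segments join continuously at both gyroscale breakpoints.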
Gay, Guillaume; Courtheoux, Thibault; Reyes, Céline
2012-01-01
In fission yeast, erroneous attachments of spindle microtubules to kinetochores are frequent in early mitosis. Most are corrected before anaphase onset by a mechanism involving the protein kinase Aurora B, which destabilizes kinetochore microtubules (ktMTs) in the absence of tension between sister chromatids. In this paper, we describe a minimal mathematical model of fission yeast chromosome segregation based on the stochastic attachment and detachment of ktMTs. The model accurately reproduces the timing of correct chromosome biorientation and segregation seen in fission yeast. Prevention of attachment defects requires both appropriate kinetochore orientation and an Aurora B–like activity. The model also reproduces abnormal chromosome segregation behavior (caused by, for example, inhibition of Aurora B). It predicts that, in metaphase, merotelic attachment is prevented by a kinetochore orientation effect and corrected by an Aurora B–like activity, whereas in anaphase, it is corrected through unbalanced forces applied to the kinetochore. These unbalanced forces are sufficient to prevent aneuploidy. PMID:22412019
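The stochastic attachment/detachment idea can be sketched with a toy two-sister Monte Carlo model. The states and rates below are illustrative, not the paper's fitted model, and the tension dependence is collapsed into a single error-destabilization rate.

```python
import random

def simulate_biorientation(k_attach=1.0, k_detach_err=0.5, T=50.0,
                           dt=0.02, n_runs=300, seed=0):
    """Toy stochastic kinetochore model: each sister is free,
    correctly attached, or erroneously attached; erroneous
    attachments are destabilized at rate k_detach_err (the Aurora
    B-like activity). Returns the fraction of runs ending with both
    sisters correctly attached (biorientation)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_runs):
        state = ["free", "free"]          # the two sister kinetochores
        t = 0.0
        while t < T:
            for i in (0, 1):
                if state[i] == "free" and rng.random() < k_attach * dt:
                    # half of the new attachments go to the wrong pole
                    state[i] = "correct" if rng.random() < 0.5 else "error"
                elif state[i] == "error" and rng.random() < k_detach_err * dt:
                    state[i] = "free"     # Aurora-like correction
            t += dt
        ok += state == ["correct", "correct"]
    return ok / n_runs
```

Switching the correction off (k_detach_err = 0) freezes the first attachment, so only about a quarter of runs biorient, while with correction nearly all do.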
Wijma, Hein J; Marrink, Siewert J; Janssen, Dick B
2014-07-28
Computational approaches could decrease the need for the laborious high-throughput experimental screening that is often required to improve enzymes by mutagenesis. Here, we report that using multiple short molecular dynamics (MD) simulations makes it possible to accurately model enantioselectivity for large numbers of enzyme-substrate combinations at low computational costs. We chose four different haloalkane dehalogenases as model systems because of the availability of a large set of experimental data on the enantioselective conversion of 45 different substrates. To model the enantioselectivity, we quantified the frequency of occurrence of catalytically productive conformations (near attack conformations) for pairs of enantiomers during MD simulations. We found that the angle of nucleophilic attack that leads to carbon-halogen bond cleavage was a critical variable that limited the occurrence of productive conformations; enantiomers for which this angle reached values close to 180° were preferentially converted. A cluster of 20-40 very short (10 ps) MD simulations allowed adequate conformational sampling and resulted in much better agreement to experimental enantioselectivities than single long MD simulations (22 ns), while the computational costs were 50-100 fold lower. With single long MD simulations, the dynamics of enzyme-substrate complexes remained confined to a conformational subspace that rarely changed significantly, whereas with multiple short MD simulations a larger diversity of conformations of enzyme-substrate complexes was observed. PMID:24916632
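Counting near attack conformations across sampled MD frames reduces to a frequency estimate over attack angles. In this hedged sketch, synthetic Gaussian angle samples stand in for MD output, and the threshold is a placeholder for a geometric NAC criterion.

```python
import random

def nac_fraction(angles_deg, angle_min=160.0):
    """Fraction of sampled attack angles that qualify as near attack
    conformations (NACs): nucleophile-carbon-halogen angle close to
    the ideal 180 degrees for backside attack."""
    return sum(1 for a in angles_deg if a >= angle_min) / len(angles_deg)

def enantioselectivity(nac_a, nac_b, floor=1e-6):
    """Crude ratio of productive-conformation frequencies for the two
    enantiomers (floor avoids division by zero)."""
    return (nac_a + floor) / (nac_b + floor)

# Hypothetical angle samples pooled from many short MD runs:
rng = random.Random(0)
angles_R = [min(180.0, rng.gauss(168.0, 8.0)) for _ in range(2000)]
angles_S = [min(180.0, rng.gauss(145.0, 8.0)) for _ in range(2000)]
```

The enantiomer whose angle distribution reaches closer to 180 degrees dominates the NAC count, and hence the predicted preference.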
Accurate models for P-gp drug recognition induced from a cancer cell line cytotoxicity screen.
Levatić, Jurica; Ćurak, Jasna; Kralj, Marijeta; Šmuc, Tomislav; Osmak, Maja; Supek, Fran
2013-07-25
P-glycoprotein (P-gp, MDR1) is a promiscuous drug efflux pump of substantial pharmacological importance. Taking advantage of large-scale cytotoxicity screening data involving 60 cancer cell lines, we correlated the differential biological activities of ∼13,000 compounds against cellular P-gp levels. We created a large set of 934 high-confidence P-gp substrates or nonsubstrates by enforcing agreement with an orthogonal criterion involving P-gp overexpressing ADR-RES cells. A support vector machine (SVM) was 86.7% accurate in discriminating P-gp substrates on independent test data, exceeding previous models. Two molecular features had an overarching influence: nearly all P-gp substrates were large (>35 atoms including H) and dense (specific volume of <7.3 Å(3)/atom) molecules. Seven other descriptors and 24 molecular fragments ("effluxophores") were found enriched in the (non)substrates and incorporated into interpretable rule-based models. Biological experiments on an independent P-gp overexpressing cell line, the vincristine-resistant VK2, allowed us to reclassify six compounds previously annotated as substrates, validating our method's predictive ability. Models are freely available at http://pgp.biozyne.com . PMID:23772653
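The two dominant features translate into a simple interpretable rule; the sketch below uses the thresholds quoted in the abstract and is, of course, not a substitute for the trained SVM.

```python
def pgp_substrate_rule(n_atoms, volume_A3):
    """Rule-of-thumb from the two overarching features: P-gp
    substrates are large (more than 35 atoms, H included) and dense
    (specific volume below 7.3 cubic angstroms per atom)."""
    specific_volume = volume_A3 / n_atoms
    return n_atoms > 35 and specific_volume < 7.3
```

A large, dense molecule passes; a small one or a large-but-diffuse one does not.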
NASA Astrophysics Data System (ADS)
Hyer, E. J.; Reid, J. S.; Schmidt, C. C.; Giglio, L.; Prins, E.
2009-12-01
The diurnal cycle of fire activity is crucial for accurate simulation of atmospheric effects of fire emissions, especially at finer spatial and temporal scales. Estimating diurnal variability in emissions is also a critical problem for construction of emissions estimates from multiple sensors with variable coverage patterns. An optimal diurnal emissions estimate will use as much information as possible from satellite fire observations, compensate for known biases in those observations, and use detailed theoretical models of the diurnal cycle to fill in missing information. As part of ongoing improvements to the Fire Location and Monitoring of Burning Emissions (FLAMBE) fire monitoring system, we evaluated several different methods of integrating observations with different temporal sampling. We used geostationary fire detections from WF_ABBA, fire detection data from MODIS, empirical diurnal cycles from TRMM, and simple theoretical diurnal curves based on surface heating. Our experiments integrated these data in different combinations to estimate the diurnal cycles of emissions for each location and time. Hourly emissions estimates derived using these methods were tested using an aerosol transport model. We present results of this comparison, and discuss the implications of our results for the broader problem of multi-sensor data fusion in fire emissions modeling.
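A simple theoretical diurnal curve based on surface heating can be sketched as a normalized Gaussian burst of fire activity over a small smouldering background; the peak hour, width, and background level are illustrative shape parameters, not FLAMBE's.

```python
import math

def diurnal_fire_weight(hour, peak=14.0, width=3.0, background=0.02):
    """Surface-heating-inspired diurnal cycle: a Gaussian burst
    peaking in the early afternoon (local time) over a smouldering
    background. Weights are normalized over the 24 hourly bins so
    they can redistribute a daily emission total."""
    raw = [background + math.exp(-0.5 * ((h - peak) / width) ** 2)
           for h in range(24)]
    return raw[hour] / sum(raw)

def hourly_emissions(daily_total):
    """Spread a daily emission total across the 24 hourly bins."""
    return [daily_total * diurnal_fire_weight(h) for h in range(24)]
```

The weights conserve the daily total by construction, so such a curve can fill in hours that a polar-orbiting sensor never samples.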
A theoretical model of speed-dependent steering torque for rolling tyres
NASA Astrophysics Data System (ADS)
Wei, Yintao; Oertel, Christian; Liu, Yahui; Li, Xuebing
2016-04-01
It is well known that the tyre steering torque is highly dependent on the tyre rolling speed. In the limiting case of a parking manoeuvre, the steering torque approaches its maximum. With increasing tyre speed, the steering torque decreases rapidly. Accurate modelling of the speed-dependent behaviour of the tyre steering torque is a key factor in calibrating the electric power steering (EPS) system and tuning the handling performance of vehicles. However, no satisfactory theoretical model can be found in the existing literature to explain this phenomenon. This paper proposes a new theoretical framework to model this important tyre behaviour, which includes three key factors: (1) tyre three-dimensional transient rolling kinematics with turn-slip; (2) dynamical force and moment generation; and (3) the mixed Lagrange-Euler method for solving the contact deformation. A nonlinear finite-element code has been developed to implement the proposed approach. It is found that the main mechanism for the speed-dependent steering torque is turn-slip-related kinematics. This paper provides a theory to explain the complex mechanism of tyre steering torque generation, which helps in understanding the speed-dependent tyre steering torque, tyre road feeling and EPS calibration.
2011-01-01
Background Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck). PMID:22185645
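The analysis (update) step at the heart of the forecast/update cycles described above can be sketched compactly. The snippet below implements a plain stochastic ensemble Kalman filter update, a simpler relative of the Local Ensemble Transform Kalman Filter named in the abstract (the LETKF additionally uses localization and a deterministic ensemble transform); all dimensions and values are illustrative:

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_var, rng):
    """One stochastic EnKF analysis step.
    ensemble: (n_members, n_state) prior states
    obs: (n_obs,) observation vector; H: (n_obs, n_state) linear obs operator
    obs_var: scalar observation error variance."""
    n, _ = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)            # state anomalies
    Y = X @ H.T                                     # obs-space anomalies
    P_yy = Y.T @ Y / (n - 1) + obs_var * np.eye(H.shape[0])
    P_xy = X.T @ Y / (n - 1)
    K = P_xy @ np.linalg.inv(P_yy)                  # Kalman gain
    # Perturbed observations keep the analysis ensemble spread consistent
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), (n, H.shape[0]))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T
```

With a tight observation (small `obs_var`), the analysis ensemble mean moves close to the observed value, which is the mechanism that lets the filter "shadow" the evolving tumor state between imaging times.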
Posttraumatic Stress Disorder: A Theoretical Model of the Hyperarousal Subtype
Weston, Charles Stewart E.
2014-01-01
Posttraumatic stress disorder (PTSD) is a frequent and distressing mental disorder, about which much remains to be learned. It is a heterogeneous disorder; the hyperarousal subtype (about 70% of occurrences and simply termed PTSD in this paper) is the topic of this article, but the dissociative subtype (about 30% of occurrences and likely involving quite different brain mechanisms) is outside its scope. A theoretical model is presented that integrates neuroscience data on diverse brain regions known to be involved in PTSD, and extensive psychiatric findings on the disorder. Specifically, the amygdala is a multifunctional brain region that is crucial to PTSD, and processes peritraumatic hyperarousal on grounded cognition principles to produce hyperarousal symptoms. Amygdala activity also modulates hippocampal function, which is supported by a large body of evidence, and likewise amygdala activity modulates several brainstem regions, visual cortex, rostral anterior cingulate cortex (rACC), and medial orbitofrontal cortex (mOFC), to produce diverse startle, visual, memory, numbing, anger, and recklessness symptoms. Additional brain regions process other aspects of peritraumatic responses to produce further symptoms. These contentions are supported by neuroimaging, neuropsychological, neuroanatomical, physiological, cognitive, and behavioral evidence. Collectively, the model offers an account of how responses at the time of trauma are transformed into an extensive array of the 20 PTSD symptoms that are specified in the Diagnostic and Statistical Manual of Mental Disorders, Fifth edition. It elucidates the neural mechanisms of a specific form of psychopathology, and accords with the Research Domain Criteria framework. PMID:24772094
A theoretical model for the Lorentz force particle analyzer
NASA Astrophysics Data System (ADS)
Moreau, René; Tao, Zhen; Wang, Xiaodong
2016-07-01
In a previous paper [X. Wang et al., J. Appl. Phys. 120, 014903 (2016)], several experimental devices have been presented, which demonstrate the efficiency of electromagnetic techniques for detecting and sizing electrically insulating particles entrained in the flow of a molten metal. In each case, a non-uniform magnetic field is applied across the flow of the electrically conducting liquid, thereby generating a braking Lorentz force on this moving medium and a reaction force on the magnet, which tends to be entrained in the flow direction. The purpose of this letter is to derive scaling laws for this Lorentz force from an elementary theoretical model. For simplicity, as in the experiments, the flowing liquid is modeled as a solid body moving with a uniform velocity U. The eddy currents in the moving domain are derived from the classic induction equation and Ohm's law, and expressions for the Lorentz force density j ×B and for its integral over the entire moving domain follow. The insulating particles that are eventually present and entrained with this body are then treated as small disturbances in a classic perturbation analysis, thereby leading to scaling laws for the pulses they generate in the Lorentz force. The purpose of this letter is both to illustrate the eddy currents without and with insulating particles in the electrically conducting liquid and to derive a key relation between the pulses in the Lorentz force and the main parameters (particle volume and dimensions of the region subjected to the magnetic field).
A game theoretic model of drug launch in India.
Bhaduri, Saradindu; Ray, Amit Shovon
2006-01-01
There is a popular belief that drug launch is delayed in developing countries like India because of delayed transfer of technology due to a 'post-launch' imitation threat through weak intellectual property rights (IPR). In fact, this belief has been a major reason for the imposition of the Trade Related Intellectual Property Rights regime under the WTO. This construct undermines the fact that in countries like India, with high reverse engineering capabilities, imitation can occur even before the formal technology transfer, and fails to recognize the first mover advantage in pharmaceutical markets. This paper argues that the first mover advantage is important and will vary across therapeutic areas, especially in developing countries with diverse levels of patient enlightenment and quality awareness. We construct a game theoretic model of incomplete information to examine the delay in drug launch in terms of costs and benefits of first move, assumed to be primarily a function of the therapeutic area of the new drug. Our model shows that drug launch will be delayed only for external (infective/communicable) diseases, while drugs for internal, non-communicable diseases (accounting for the overwhelming majority of new drug discovery) will be launched without delay. PMID:18634701
Theoretical model of prion propagation: a misfolded protein induces misfolding.
Małolepsza, Edyta; Boniecki, Michal; Kolinski, Andrzej; Piela, Lucjan
2005-05-31
There is a hypothesis that dangerous diseases such as bovine spongiform encephalopathy, Creutzfeldt-Jakob, Alzheimer's, fatal familial insomnia, and several others are induced by propagation of wrong or misfolded conformations of some vital proteins. If for some reason the misfolded conformations were acquired by many such protein molecules it might lead to a "conformational" disease of the organism. Here, a theoretical model of the molecular mechanism of such a conformational disease is proposed, in which a metastable (or misfolded) form of a protein induces a similar misfolding of another protein molecule (conformational autocatalysis). First, a number of amino acid sequences composed of 32 aa have been designed that fold rapidly into a well defined native-like alpha-helical conformation. From a large number of such sequences a subset of 14 had a specific feature of their energy landscape, a well defined local energy minimum (higher than the global minimum for the alpha-helical fold) corresponding to beta-type structure. Only one of these 14 sequences exhibited a strong autocatalytic tendency to form a beta-sheet dimer capable of further propagation of protofibril-like structure. Simulations were done by using a reduced, although of high resolution, protein model and the replica exchange Monte Carlo sampling procedure. PMID:15911770
Elvira, L; Hernandez, F; Cuesta, P; Cano, S; Gonzalez-Martin, J-V; Astiz, S
2013-06-01
Although the intensive production system of Lacaune dairy sheep is the only profitable method for producers outside of the French Roquefort area, little is known about this type of system. This study evaluated yield records of 3677 Lacaune sheep under intensive management between 2005 and 2010 in order to describe the lactation curve of this breed and to investigate the suitability of different mathematical functions for modeling this curve. A total of 7873 complete lactations during a 40-week lactation period, corresponding to 201 281 pieces of weekly yield data, were used. First, five mathematical functions were evaluated on the basis of the residual mean square, determination coefficient, Durbin-Watson and Runs Test values. The two best models were found to be the Pollott Additive and fractional polynomial (FP) models. In the second part of the study, the milk yield, peak of milk yield, day of peak and persistency of the lactations were calculated with the Pollott Additive and FP models and compared with the real data. The results indicate that both models gave an extremely accurate fit to Lacaune lactation curves in order to predict milk yields (P = 0.871), with the FP model being the best choice to provide a good fit to an extensive amount of real data and applicable on farm without specific statistical software. On the other hand, the interpretation of the parameters of the Pollott Additive function helps to understand the biology of the udder of the Lacaune sheep. The characteristics of the Lacaune lactation curve and milk yield are affected by lactation number and length. The lactation curves obtained in the present study allow the early identification of ewes with low milk yield potential, which will help to optimize farm profitability. PMID:23257242
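As an illustration of the fractional-polynomial approach, the sketch below fits one common FP form, y ≈ a + b·√t + c·log t, by ordinary least squares. The abstract does not specify which powers were actually selected in the study, so this particular form is an assumption:

```python
import numpy as np

def fit_fractional_polynomial(t, y):
    """Least-squares fit of a simple fractional-polynomial lactation
    curve y ~ a + b*sqrt(t) + c*log(t), t in weeks (or days) > 0.
    The powers (0.5, log) are one common FP choice, assumed here."""
    X = np.column_stack([np.ones_like(t), np.sqrt(t), np.log(t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict(coef, t):
    """Evaluate the fitted curve at time t."""
    return coef[0] + coef[1] * np.sqrt(t) + coef[2] * np.log(t)
```

Because the model is linear in its coefficients, it needs no specialized statistical software, which matches the abstract's point about on-farm applicability of the FP model.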
NASA Astrophysics Data System (ADS)
Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.
2015-12-01
Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting usage of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are only done for those few optical states corresponding to the most important principal components, and correction factors are applied to approximate radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Here, we apply the PCA method to some typical (exo-)planetary retrieval problems. Comparisons between the new model, called the Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models and line-by-line RT models are performed, for spectral radiances, spectral fluxes and broadband fluxes. Each of these is calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions. The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the codes, work
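The core idea of the PCA speed-up, representing many optical-property profiles by a few leading empirical orthogonal functions, can be sketched in a few lines. This is illustrative only; the operational scheme additionally applies correction factors derived from RT runs at the principal optical states:

```python
import numpy as np

def pca_compress(profiles, n_components):
    """Project a set of optical-property profiles (rows) onto their
    leading empirical orthogonal functions and reconstruct them.
    If most variance lives in a few components, expensive RT need only
    be run for those few representative states."""
    mean = profiles.mean(axis=0)
    A = profiles - mean                       # center the data
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    return mean + scores @ Vt[:n_components]  # low-rank reconstruction
```

When the binned profiles are nearly redundant, as the abstract notes, a handful of components reproduces the full set almost exactly, which is what makes the approach so much cheaper than line-by-line RT.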
NASA Astrophysics Data System (ADS)
Naumenko, Mikhail; Guzivaty, Vadim; Sapelko, Tatiana
2016-04-01
Lake morphometry refers to the physical factors (shape, size, structure, etc.) that characterize the lake depression. Morphology has a great influence on lake ecological characteristics, especially on water thermal conditions and mixing depth. Depth analyses, including sediment measurement at various depths, volumes of strata and shoreline characteristics, are often critical to the investigation of biological, chemical and physical properties of fresh waters, as well as theoretical retention time. Management techniques such as loading capacity for effluents and selective removal of undesirable components of the biota are also dependent on detailed knowledge of the morphometry and flow characteristics. In recent years, lake bathymetric surveys were carried out using an echo sounder with high bottom-depth resolution and GPS coordinate determination. A few digital bathymetric models have been created with a 10 × 10 m spatial grid for some small lakes of the Russian Plain whose areas do not exceed 1-2 sq. km. The statistical characteristics of the depth and slope distributions of these lakes were calculated on an equidistant grid. This will provide the level-surface-volume variations of small lakes and reservoirs, calculated through a combination of various satellite images. We discuss the methodological aspects of creating morphometric models of depths and slopes of small lakes, as well as the advantages of digital models over traditional methods.
Empirical STORM-E Model. [I. Theoretical and Observational Basis
NASA Technical Reports Server (NTRS)
Mertens, Christopher J.; Xu, Xiaojing; Bilitza, Dieter; Mlynczak, Martin G.; Russell, James M., III
2013-01-01
Auroral nighttime infrared emission observed by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics (TIMED) satellite is used to develop an empirical model of geomagnetic storm enhancements to E-region peak electron densities. The empirical model is called STORM-E and will be incorporated into the 2012 release of the International Reference Ionosphere (IRI). The proxy for characterizing the E-region response to geomagnetic forcing is NO+(v) volume emission rates (VER) derived from the TIMED/SABER 4.3 µm channel limb radiance measurements. The storm-time response of the NO+(v) 4.3 µm VER is sensitive to auroral particle precipitation. A statistical database of storm-time to climatological quiet-time ratios of SABER-observed NO+(v) 4.3 µm VER is fit to widely available geomagnetic indices using the theoretical framework of linear impulse-response theory. The STORM-E model provides a dynamic storm-time correction factor to adjust a known quiescent E-region electron density peak concentration for geomagnetic enhancements due to auroral particle precipitation. Part II of this series describes the explicit development of the empirical storm-time correction factor for E-region peak electron densities, and shows comparisons of E-region electron densities between STORM-E predictions and incoherent scatter radar measurements. In this paper, Part I of the series, the efficacy of using SABER-derived NO+(v) VER as a proxy for the E-region response to solar-geomagnetic disturbances is presented. Furthermore, a detailed description of the algorithms and methodologies used to derive NO+(v) VER from SABER 4.3 µm limb emission measurements is given. Finally, an assessment of key uncertainties in retrieving NO+(v) VER is presented.
NASA Astrophysics Data System (ADS)
Guerlet, Sandrine; Spiga, A.; Sylvestre, M.; Fouchet, T.; Millour, E.; Wordsworth, R.; Leconte, J.; Forget, F.
2013-10-01
Recent observations of Saturn’s stratospheric thermal structure and composition revealed new phenomena: an equatorial oscillation in temperature, reminiscent of the Earth's Quasi-Biennial Oscillation; strong meridional contrasts of hydrocarbons; a warm “beacon” associated with the powerful 2010 storm. Those signatures cannot be reproduced by 1D photochemical and radiative models and suggest that atmospheric dynamics plays a key role. This motivated us to develop a complete 3D General Circulation Model (GCM) for Saturn, based on the LMDz hydrodynamical core, to explore the circulation, seasonal variability, and wave activity in Saturn's atmosphere. In order to closely reproduce Saturn's radiative forcing, a particular emphasis was put on obtaining fast and accurate radiative transfer calculations. Our radiative model uses correlated-k distributions and spectral discretization tailored for Saturn's atmosphere. We include internal heat flux, ring shadowing and aerosols. We will report on the sensitivity of the model to spectral discretization, spectroscopic databases, and aerosol scenarios (varying particle sizes, opacities and vertical structures). We will also discuss the radiative effect of the ring shadowing on Saturn's atmosphere. We will present a comparison of temperature fields obtained with this new radiative equilibrium model to that inferred from Cassini/CIRS observations. In the troposphere, our model reproduces the observed temperature knee caused by heating at the top of the tropospheric aerosol layer. In the lower stratosphere (20 mbar
modeled temperature is 5-10K too low compared to measurements. This suggests that processes other than radiative heating/cooling by trace
NASA Astrophysics Data System (ADS)
Wosnik, M.; Bachant, P.
2014-12-01
Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today--a common model is the actuator disk concept--are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal axis devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier--Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of
Martian weathering processes: Terrestrial analog and theoretical modeling studies
NASA Astrophysics Data System (ADS)
McAdam, Amy Catherine
2008-06-01
Understanding the role of water in the Martian near-surface, and its implications for possible habitable environments, is among the highest priorities of NASA's Mars Exploration Program. Characterization of alteration signatures in surface materials provides the best opportunity to assess the role of water on Mars. This dissertation investigates Martian alteration processes through analyses of Antarctic analogs and numerical modeling of mineral-fluid interactions. Analog work involved studying an Antarctic diabase, and associated soils, as Mars analogs to understand weathering processes in cold, dry environments. The soils are dominated by primary basaltic minerals, but also contain phyllosilicates, salts, iron oxides/oxyhydroxides, and zeolites. Soil clay minerals and zeolites, formed primarily during deuteric or hydrothermal alteration of the parent rock, were subsequently transferred to the soil by physical rock weathering. Authigenic soil iron oxides/oxyhydroxides and small amounts of poorly-ordered secondary silicates indicate some contributions from low-temperature aqueous weathering. Soil sulfates, which exhibit a sulfate- aerosol-derived mass-independent oxygen isotope signature, suggest contributions from acid aerosol-rock interactions. The complex alteration history of the Antarctic materials resulted in several similarities to Martian materials. The processes that affected the analogs, including deuteric/ hydrothermal clay formation, may be important in producing Martian surface materials. Theoretical modeling focused on investigating the alteration of Martian rocks under acidic conditions and using modeling results to interpret Martian observations. Kinetic modeling of the dissolution of plagioclase-pyroxene mineral mixtures under acidic conditions suggested that surfaces with high plagioclase/pyroxene, such as several northern regions, could have experienced some preferential dissolution of pyroxenes at a pH less than approximately 3-4. Modeling of the
Accurate modeling of cache replacement policies in a Data-Grid.
Otoo, Ekow J.; Shoshani, Arie
2003-01-23
Caching techniques have been used to improve the performance gap of storage hierarchies in computing systems. In data-intensive applications that access large data files over a wide-area network environment, such as a data grid, a caching mechanism can significantly improve data access performance under appropriate workloads. In a data grid, it is envisioned that local disk storage resources retain or cache the data files being used by local applications. Under a workload of shared access and high locality of reference, the performance of caching techniques depends heavily on the replacement policies being used. A replacement policy effectively determines which set of objects must be evicted when space is needed. Unlike cache replacement policies in virtual memory paging or database buffering, developing an optimal replacement policy for data grids is complicated by the fact that the file objects being cached have varying sizes and varying transfer and processing costs that vary with time. We present an accurate model for evaluating various replacement policies and propose a new replacement algorithm referred to as "Least Cost Beneficial based on K backward references (LCB-K)". Using this modeling technique, we compare LCB-K with various replacement policies such as Least Frequently Used (LFU), Least Recently Used (LRU), Greedy DualSize (GDS), etc., using synthetic and actual workloads of accesses to and from tertiary storage systems. The results obtained show that LCB-K and GDS are the most cost-effective cache replacement policies for storage resource management in data grids.
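Of the policies named above, GreedyDual-Size has a particularly compact formulation and directly addresses the varying sizes and costs the abstract highlights; a minimal sketch follows (LCB-K itself is not specified in the abstract, so it is not reproduced here). Each cached object carries a credit H = L + cost/size, where the inflation value L rises to the credit of each evicted victim:

```python
class GreedyDualSize:
    """Minimal GreedyDual-Size cache. Objects with low cost-per-byte
    and old (small) inflation values are evicted first."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.L = 0.0
        self.items = {}   # key -> (credit H, size)

    def access(self, key, size, cost):
        """Returns True on a hit, False on a miss (with insertion)."""
        if key in self.items:
            _, sz = self.items[key]
            self.items[key] = (self.L + cost / sz, sz)   # refresh credit
            return True
        # Evict lowest-credit objects until the new one fits
        while self.used + size > self.capacity and self.items:
            victim = min(self.items, key=lambda k: self.items[k][0])
            self.L = self.items[victim][0]               # inflate L
            self.used -= self.items[victim][1]
            del self.items[victim]
        if size <= self.capacity:
            self.items[key] = (self.L + cost / size, size)
            self.used += size
        return False
```

Unlike plain LRU or LFU, this credit scheme lets a small, expensive-to-fetch file outlive a large cheap one, which is the behavior the varying-size, varying-cost data-grid workload calls for.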
An Approach to More Accurate Model Systems for Purple Acid Phosphatases (PAPs).
Bernhardt, Paul V; Bosch, Simone; Comba, Peter; Gahan, Lawrence R; Hanson, Graeme R; Mereacre, Valeriu; Noble, Christopher J; Powell, Annie K; Schenk, Gerhard; Wadepohl, Hubert
2015-08-01
The active site of mammalian purple acid phosphatases (PAPs) contains a dinuclear iron center in two accessible oxidation states (Fe(III)2 and Fe(III)Fe(II)); the heterovalent form is the catalytically active one, involved in the regulation of phosphate and phosphorylated metabolite levels in a wide range of organisms. Therefore, two sites with different coordination geometries to stabilize the heterovalent active form and, in addition, with hydrogen bond donors to enable the fixation of the substrate and release of the product, are believed to be required for catalytically competent model systems. Two ligands and their dinuclear iron complexes have been studied in detail. The solid-state structures and properties, studied by X-ray crystallography, magnetism, and Mössbauer spectroscopy, and the solution structural and electronic properties, investigated by mass spectrometry, electronic, nuclear magnetic resonance (NMR), electron paramagnetic resonance (EPR), and Mössbauer spectroscopies and electrochemistry, are discussed in detail in order to understand the structures and relative stabilities in solution. In particular, with one of the ligands, a heterovalent Fe(III)Fe(II) species has been produced by chemical oxidation of the Fe(II)2 precursor. The phosphatase reactivities of the complexes, in particular, also of the heterovalent complex, are reported. These studies include pH-dependent as well as substrate concentration dependent studies, leading to pH profiles, catalytic efficiencies and turnover numbers, and indicate that the heterovalent diiron complex discussed here is an accurate PAP model system. PMID:26196255
NASA Astrophysics Data System (ADS)
Walter, Johannes; Thajudeen, Thaseem; Süß, Sebastian; Segets, Doris; Peukert, Wolfgang
2015-04-01
Analytical centrifugation (AC) is a powerful technique for the characterisation of nanoparticles in colloidal systems. As a direct and absolute technique it requires no calibration or measurements of standards. Moreover, it offers simple experimental design and handling, high sample throughput as well as moderate investment costs. However, the full potential of AC for nanoparticle size analysis requires the development of powerful data analysis techniques. In this study we show how the application of direct boundary models to AC data opens up new possibilities in particle characterisation. An accurate analysis method, successfully applied to sedimentation data obtained by analytical ultracentrifugation (AUC) in the past, was used for the first time in analysing AC data. Unlike traditional data evaluation routines for AC using a designated number of radial positions or scans, direct boundary models consider the complete sedimentation boundary, which results in significantly better statistics. We demonstrate that meniscus fitting, as well as the correction of radius and time invariant noise significantly improves the signal-to-noise ratio and prevents the occurrence of false positives due to optical artefacts. Moreover, hydrodynamic non-ideality can be assessed by the residuals obtained from the analysis. The sedimentation coefficient distributions obtained by AC are in excellent agreement with the results from AUC. Brownian dynamics simulations were used to generate numerical sedimentation data to study the influence of diffusion on the obtained distributions. Our approach is further validated using polystyrene and silica nanoparticles. In particular, we demonstrate the strength of AC for analysing multimodal distributions by means of gold nanoparticles.
ACCURATE UNIVERSAL MODELS FOR THE MASS ACCRETION HISTORIES AND CONCENTRATIONS OF DARK MATTER HALOS
Zhao, D. H.; Jing, Y. P.; Mo, H. J.; Boerner, G.
2009-12-10
A large body of observations has constrained cosmological parameters and the initial density fluctuation spectrum to a very high accuracy. However, cosmological parameters change with time and the power index of the power spectrum dramatically varies with mass scale in the so-called concordance ΛCDM cosmology. Thus, any successful model for its structural evolution should work well simultaneously for various cosmological models and different power spectra. We use a large set of high-resolution N-body simulations of a variety of structure formation models (scale-free, standard CDM, open CDM, and ΛCDM) to study the mass accretion histories, the mass and redshift dependence of concentrations, and the concentration evolution histories of dark matter halos. We find that there is significant disagreement between the much-used empirical models in the literature and our simulations. Based on our simulation results, we find that the mass accretion rate of a halo is tightly correlated with a simple function of its mass, the redshift, parameters of the cosmology, and of the initial density fluctuation spectrum, which correctly disentangles the effects of all these factors and halo environments. We also find that the concentration of a halo is strongly correlated with the universe age when its progenitor on the mass accretion history first reaches 4% of its current mass. According to these correlations, we develop new empirical models for both the mass accretion histories and the concentration evolution histories of dark matter halos, and the latter can also be used to predict the mass and redshift dependence of halo concentrations. These models are accurate and universal: the same set of model parameters works well for different cosmological models and for halos of different masses at different redshifts, and in the ΛCDM case the model predictions match the simulation results very well even though halo mass is traced to about 0.0005 times the final mass.
Models in Educational Administration: Revisiting Willower's "Theoretically Oriented" Critique
ERIC Educational Resources Information Center
Newton, Paul; Burgess, David; Burns, David P.
2010-01-01
Three decades ago, Willower (1975) argued that much of what we take to be theory in educational administration is in fact only theoretically oriented. If we accept Willower's assessment of the field as true, what implications does this statement hold for the academic study and practical application of the theoretically oriented aspects of our…
NASA Astrophysics Data System (ADS)
Berezovska, Ganna; Prada-Gracia, Diego; Mostarda, Stefano; Rao, Francesco
2012-11-01
Molecular simulations as well as single molecule experiments have been widely analyzed in terms of order parameters, the latter representing candidate probes for the relevant degrees of freedom. Although this approach is very intuitive, mounting evidence has shown that such descriptions can be inaccurate, leading to ambiguous definitions of states and wrong kinetics. To overcome these limitations, a framework making use of order parameter fluctuations in conjunction with complex network analysis is investigated. Derived from recent advances in the analysis of single molecule time traces, this approach takes into account the fluctuations around each time point to distinguish between states that have similar values of the order parameter but different dynamics. Snapshots with similar fluctuations are used as nodes of a transition network, whose clustering into states provides accurate Markov state models of the system under study. Application of the methodology to theoretical models with a noisy order parameter, as well as to the dynamics of a disordered peptide, illustrates the possibility of building accurate descriptions of molecular processes on the sole basis of an order parameter time series, without using any supplementary information.
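A minimal sketch of the idea: describe each time point by the local mean and local fluctuation of the order parameter, use the discretized pairs as network nodes, and estimate a Markov transition matrix from the time series. The window size, binning, and lag are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def markov_state_model(series, window=5, n_bins=6, lag=1):
    """Toy fluctuation-aware state decomposition: each time point is
    described by the local mean AND local standard deviation of the
    order parameter, so states with similar values but different
    dynamics end up in different nodes of the transition network."""
    s = np.asarray(series, dtype=float)
    half = window // 2
    mean = np.array([s[max(0, i - half):i + half + 1].mean() for i in range(len(s))])
    std = np.array([s[max(0, i - half):i + half + 1].std() for i in range(len(s))])
    # discretize the (mean, fluctuation) pairs into microstates
    m_bin = np.digitize(mean, np.linspace(mean.min(), mean.max(), n_bins))
    f_bin = np.digitize(std, np.linspace(std.min(), std.max(), n_bins))
    _, states = np.unique(m_bin * (n_bins + 2) + f_bin, return_inverse=True)
    n = states.max() + 1
    # transition count matrix at the chosen lag time, then row-normalize
    counts = np.zeros((n, n))
    for a, b in zip(states[:-lag], states[lag:]):
        counts[a, b] += 1
    row = counts.sum(axis=1, keepdims=True)
    T = np.divide(counts, row, out=np.zeros_like(counts), where=row > 0)
    return states, T

# two-regime toy trajectory: same mean, different fluctuation amplitude,
# which a plain order-parameter histogram could not separate
rng = np.random.default_rng(0)
traj = np.concatenate([rng.normal(0, 0.1, 500), rng.normal(0, 1.0, 500)])
states, T = markov_state_model(traj)
```

The point of the toy example is that the two halves of `traj` are indistinguishable by value alone but land in different microstates because their local fluctuations differ.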
Accurate modeling of SiPM detectors coupled to FE electronics for timing performance analysis
NASA Astrophysics Data System (ADS)
Ciciriello, F.; Corsi, F.; Licciulli, F.; Marzocca, C.; Matarrese, G.; Del Guerra, A.; Bisogni, M. G.
2013-08-01
It has already been shown how the shape of the current pulse produced by a SiPM in response to an incident photon is appreciably affected by the characteristics of the front-end electronics (FEE) used to read out the detector. When the application requires approaching the best theoretical time performance of the detection system, the influence of all the parasitics associated with the SiPM-FEE coupling can play a relevant role and must be adequately modeled. In particular, it has been reported that the shape of the current pulse is affected by the parasitic inductance of the wiring connection between the SiPM and the FEE. In this contribution, we extend the validity of a previously presented SiPM model to account for the wiring inductance. Various combinations of the main performance parameters of the FEE (input resistance and bandwidth) have been simulated in order to evaluate their influence on the time accuracy of the detection system, when the time pick-off of each single event is extracted by means of a leading edge discriminator (LED) technique.
Sequence design in lattice models by graph theoretical methods
NASA Astrophysics Data System (ADS)
Sanjeev, B. S.; Patra, S. M.; Vishveshwara, S.
2001-01-01
A general strategy based on graph theoretical methods has been developed for finding amino acid sequences that take up a desired conformation as the native state. This problem of inverse design has been addressed by assigning topological indices to the monomer sites (vertices) of the polymer on a 3×3×3 cubic lattice. This is a simple design strategy, which takes into account only the topology of the target protein and identifies the best sequence for a given composition. The procedure allows the design of a good sequence for a target native state by assigning weights to the vertices of the lattice sites in a given conformation. It is seen across a variety of conformations that the predicted sequences perform well both in sequence and in conformation space, identifying the target conformation as the native state for a fixed composition of amino acids. Although the method is tested in the framework of the HP model [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)], it can be used in any context if proper potential functions are available, since the procedure derives unique weights for all the sites (vertices, nodes) of the polymer chain of a chosen conformation (graph).
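As a hedged illustration of vertex weighting on a lattice conformation, the sketch below uses the number of non-bonded lattice contacts of each site as a stand-in topological index and assigns the hydrophobic (H) monomers to the most buried vertices, in the spirit of the HP model. The actual weights of the paper are not reproduced.

```python
import numpy as np

def design_hp_sequence(path, n_h):
    """Assign the n_h hydrophobic (H) monomers of an HP-model chain to
    the topologically most buried sites of a target conformation.
    `path` is a self-avoiding walk on the cubic lattice, given as a list
    of integer (x, y, z) coordinates.  The 'topological index' used here
    is simply each site's count of non-bonded lattice contacts -- an
    illustrative stand-in for the vertex weights of the original method."""
    coords = [tuple(p) for p in path]
    index = {c: i for i, c in enumerate(coords)}
    contacts = np.zeros(len(coords), dtype=int)
    for i, (x, y, z) in enumerate(coords):
        for dx, dy, dz in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
            j = index.get((x + dx, y + dy, z + dz))
            if j is not None and abs(i - j) > 1:   # neighbour, but not chain-bonded
                contacts[i] += 1
    # H goes to the most-contacted (buried) vertices, P everywhere else
    order = np.argsort(-contacts, kind="stable")
    seq = np.array(["P"] * len(coords))
    seq[order[:n_h]] = "H"
    return "".join(seq), contacts

# Hamiltonian path on a 2x2x2 cube as a tiny target conformation
cube_path = [(0,0,0), (1,0,0), (1,1,0), (0,1,0),
             (0,1,1), (1,1,1), (1,0,1), (0,0,1)]
seq, contacts = design_hp_sequence(cube_path, n_h=4)
```

On the cube every site has three lattice neighbours, so the chain endpoints (one chain bond) accumulate more non-bonded contacts than interior sites (two chain bonds) and are picked first.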
Thermophotonic heat pump—a theoretical model and numerical simulations
NASA Astrophysics Data System (ADS)
Oksanen, Jani; Tulkki, Jukka
2010-05-01
We have recently proposed a solid state heat pump based on photon mediated heat transfer between two large-area light emitting diodes coupled by the electromagnetic field and enclosed in a semiconductor structure with a nearly homogeneous refractive index. Ideally the thermophotonic heat pump (THP) allows heat transfer at Carnot efficiency but in reality there are several factors that limit the efficiency. The efficient operation of the THP is based on the following construction factors and operational characteristics: (1) broad area semiconductor diodes to enable operation at optimal carrier density and high efficiency, (2) recycling of the energy of the emitted photons, (3) elimination of photon extraction losses by integrating the emitting and the absorbing diodes within a single semiconductor structure, and (4) eliminating the reverse thermal conduction by a nanometer scale vacuum layer between the diodes. In this paper we develop a theoretical model for the THP and study the fundamental physical limitations and potential of the concept. The results show that even when the most important losses of the THPs are accounted for, the THP has potential to outperform the thermoelectric coolers especially for heat transfer across large temperature differences and possibly even to compete with conventional small scale compressor based heat pumps.
A Game-Theoretic Model of Marketing Skin Whiteners.
Mendoza, Roger Lee
2015-01-01
Empirical studies consistently find that people in less developed countries tend to regard light or "white" skin, particularly among women, as more desirable or superior. This is a study about the marketing of skin whiteners in these countries, where over 80 percent of users are typically women. It proceeds from the following premises: a) Purely market or policy-oriented approaches toward the risks and harms of skin whitening are cost-inefficient; b) Psychosocial and informational factors breed uninformed and risky consumer choices that favor toxic skin whiteners; and c) Proliferation of toxic whiteners in a competitive buyer's market raises critical supplier accountability issues. Is intentional tort a rational outcome of uncooperative game equilibria? Can voluntary cooperation nonetheless evolve between buyers and sellers of skin whiteners? These twin questions are key to addressing the central paradox in this study: A robust and expanding buyer's market, where cheap whitening products abound at a high risk to personal and societal health and safety. Game-theoretic modeling of two-player and n-player strategic interactions is proposed in this study for both its explanatory and predictive value. Therein also lie its practical contributions to the economic literature on skin whitening. PMID:26565686
Network-theoretic approach to model vortex interactions
NASA Astrophysics Data System (ADS)
Nair, Aditya; Taira, Kunihiko
2014-11-01
We present a network-theoretic approach to describe a system of point vortices in two-dimensional flow. By considering the point vortices as nodes, a complete graph is constructed with edges connecting each vortex to every other vortex. The interactions between the vortices are captured by the graph edge weights. We employ sparsification techniques on these graph representations based on spectral theory to construct sparsified models of the overall vortical interactions. The edge weights are redistributed through spectral sparsification of the graph such that the sum of the interactions associated with each vortex is maintained constant. In addition, the sparse configurations maintain spectral properties similar to those of the original setup. Through the reduction in the number of interactions, key vortex interactions can be highlighted. Identification of vortex structures based on graph sparsification is demonstrated with an example of clusters of point vortices. We also evaluate the computational performance of sparsification for large collections of point vortices. Work supported by the US Army Research Office (W911NF-14-1-0386) and the US Air Force Office of Scientific Research (YIP: FA9550-13-1-0183).
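A crude sketch of the pipeline, with simple edge thresholding standing in for true spectral sparsification: build the complete interaction graph from vortex strengths and separations, drop weak edges, and iteratively rescale so each vortex's total interaction strength is approximately preserved, as the abstract describes. The edge-weight formula and all parameters are illustrative assumptions.

```python
import numpy as np

def vortex_network(pos, gamma):
    """Complete interaction graph of point vortices: here the edge weight
    is taken as the induced-velocity coupling |G_i G_j| / (2 pi r_ij),
    an assumed form for illustration."""
    pos, g = np.asarray(pos, float), np.asarray(gamma, float)
    n = len(g)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(pos[i] - pos[j])
            W[i, j] = W[j, i] = abs(g[i] * g[j]) / (2 * np.pi * r)
    return W

def sparsify(W, keep=4, iters=500):
    """Stand-in for spectral sparsification: keep the `keep` strongest
    edges per node, then rescale iteratively (Sinkhorn-style) so each
    node's total interaction strength is approximately preserved."""
    n = W.shape[0]
    mask = np.zeros_like(W, dtype=bool)
    for i in range(n):
        top = np.argsort(-W[i])[:keep]
        mask[i, top] = mask[top, i] = True
    S = np.where(mask, W, 0.0)
    target = W.sum(axis=1)
    for _ in range(iters):              # symmetric row-sum balancing
        d = target / np.maximum(S.sum(axis=1), 1e-300)
        S = np.where(mask, S * np.sqrt(np.outer(d, d)), 0.0)
    return S
```

The symmetric rescaling is the key design choice: it redistributes the dropped interaction strength onto the surviving edges while keeping the matrix symmetric, mirroring the constant-per-vortex-interaction constraint of the paper.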
Information theoretic aspects of the two-dimensional Ising model.
Lau, Hon Wai; Grassberger, Peter
2013-02-01
We present numerical results for various information theoretic properties of the square lattice Ising model. First, using a bond propagation algorithm, we find the difference 2H_L(w) - H_{2L}(w) between entropies on cylinders of finite lengths L and 2L with open end cap boundaries, in the limit L→∞. This essentially quantifies how the finite length correction for the entropy scales with the cylinder circumference w. Secondly, using the transfer matrix, we obtain precise estimates for the information needed to specify the spin state on a ring encircling an infinitely long cylinder. Combining both results, we obtain the mutual information between the two halves of a cylinder (the "excess entropy" for the cylinder), where we confirm with higher precision but for smaller systems the results recently obtained by Wilms et al., and we show that the mutual information between the two halves of the ring diverges at the critical point logarithmically with w. Finally, we use the second result together with Monte Carlo simulations to show that also the excess entropy of a straight line of n spins in an infinite lattice diverges at criticality logarithmically with n. We conjecture that such logarithmic divergence happens generically for any one-dimensional subset of sites at any two-dimensional second-order phase transition. Comparing straight lines on square and triangular lattices with square loops and with lines of thickness 2, we discuss questions of universality. PMID:23496480
Theoretical modeling for radiofrequency ablation: state-of-the-art and challenges for the future
Berjano, Enrique J
2006-01-01
Radiofrequency ablation is an interventional technique that in recent years has come to be employed in very different medical fields, such as the elimination of cardiac arrhythmias or the destruction of tumors in different locations. In order to investigate and develop new techniques, and also to improve those currently employed, theoretical models and computer simulations are a powerful tool since they provide vital information on the electrical and thermal behavior of ablation rapidly and at low cost. In the future they could even help to plan individual treatment for each patient. This review analyzes the state-of-the-art in theoretical modeling as applied to the study of radiofrequency ablation techniques. Firstly, it describes the most important issues involved in this methodology, including the experimental validation. Secondly, it points out the present limitations, especially those related to the lack of an accurate characterization of the biological tissues. After analyzing the current and future benefits of this technique it finally suggests future lines and trends in the research of this area. PMID:16620380
Hu, Y.X.; Stamnes, K.
1993-04-01
A new parameterization of the radiative properties of water clouds is presented. Cloud optical properties for both the solar and terrestrial spectra and for cloud equivalent radii in the range 2.5-60 μm are calculated from Mie theory. It is found that cloud optical properties depend mainly on equivalent radius throughout the solar and terrestrial spectrum and are insensitive to the details of the droplet size distribution, such as shape, skewness, width, and modality (single or bimodal). This suggests that in cloud models aimed at predicting the evolution of cloud microphysics with climate change, it is sufficient to determine the third and the second moments of the size distribution (the ratio of which determines the equivalent radius). It also implies that measurements of the cloud liquid water content and the extinction coefficient are sufficient to determine cloud optical properties experimentally (i.e., measuring the complete droplet size distribution is not required). Based on the detailed calculations, the optical properties are parameterized as functions of cloud liquid water path and equivalent cloud droplet radius by nonlinear least-squares fitting. The parameterization is performed separately for the radius ranges 2.5-12 μm, 12-30 μm, and 30-60 μm. Cloud heating and cooling rates are computed from this parameterization by using a comprehensive radiation model. Comparison with similar results obtained from exact Mie scattering calculations shows that this parameterization yields very accurate results and is several thousand times faster. This parameterization separates the dependence of cloud optical properties on droplet size and liquid water content, and is suitable for inclusion in climate models. 22 refs., 7 figs., 6 tabs.
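The nonlinear least-squares step can be illustrated with a Slingo-type functional form, tau = LWP * (a + b / r_e), fitted to synthetic data. The functional form and the seed coefficients are assumptions for illustration, not the parameterization of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def tau_model(X, a, b):
    """Assumed Slingo-type form: optical thickness proportional to liquid
    water path, with a 1/r_e dependence on equivalent droplet radius."""
    lwp, r_e = X
    return lwp * (a + b / r_e)

# synthetic "Mie" data over the smallest radius range of the paper
rng = np.random.default_rng(1)
lwp = rng.uniform(10, 200, 400)        # liquid water path, g m^-2
r_e = rng.uniform(2.5, 12.0, 400)      # equivalent radius, micrometres
a_true, b_true = 2.817e-2, 1.305       # assumed illustrative coefficients
tau_obs = tau_model((lwp, r_e), a_true, b_true) \
    * (1 + 0.01 * rng.normal(size=lwp.size))   # 1% multiplicative noise

# recover the coefficients by nonlinear least squares
(a_fit, b_fit), _ = curve_fit(tau_model, (lwp, r_e), tau_obs, p0=(0.01, 1.0))
```

Fitting each radius range separately, as the abstract describes, keeps a simple two-parameter form accurate where a single global fit would not be.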
Theoretical model for electrophilic oxygen atom insertion into hydrocarbons
Bach, R.D.; Su, M.D.; Andres, J.L. (Wayne State Univ., Detroit, MI); McDouall, J.J.W.
1993-06-30
A theoretical model suggesting the mechanistic pathway for the oxidation of saturated alkanes to their corresponding alcohols and ketones is described. Water oxide (H2O-O) is employed as a model singlet oxygen atom donor. Molecular orbital calculations with the 6-31G basis set at the MP2, QCISD, QCISD(T), CASSCF, and MRCI levels of theory suggest that oxygen insertion by water oxide occurs by the interaction of an electrophilic oxygen atom with a doubly occupied hydrocarbon fragment orbital. The electrophilic oxygen approaches the hydrocarbon along the axis of the atomic carbon p orbital comprising a π(CH2) or π(CHCH3) fragment orbital to form a carbon-oxygen σ bond. A concerted hydrogen migration to an adjacent oxygen lone pair of electrons affords the alcohol insertion product in a stereoselective fashion with predictable stereochemistry. Subsequent oxidation of the alcohol to a ketone (or aldehyde) occurs in a similar fashion and has a lower activation barrier. The calculated (MP4/6-31G*//MP2/6-31G*) activation barriers for oxygen atom insertion into the C-H bonds of methane, ethane, propane, butane, isobutane, and methanol are 10.7, 8.2, 3.9, 4.8, 4.5, and 3.3 kcal/mol, respectively. We use ab initio molecular orbital calculations in support of a frontier MO theory that provides a unique rationale for both the stereospecificity and the stereoselectivity of insertion of electrophilic oxygen and related electrophiles into the carbon-hydrogen bond. 13 refs., 7 figs., 2 tabs.
Modeling of Non-Gravitational Forces for Precise and Accurate Orbit Determination
NASA Astrophysics Data System (ADS)
Hackel, Stefan; Gisinger, Christoph; Steigenberger, Peter; Balss, Ulrich; Montenbruck, Oliver; Eineder, Michael
2014-05-01
Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with very high accuracy. The precise reconstruction of the satellite's trajectory is based on Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency Integrated Geodetic and Occultation Receiver (IGOR) onboard the spacecraft. The increasing demand for precise radar products relies on validation methods, which require precise and accurate orbit products. An analysis of the orbit quality by means of internal and external validation methods on long and short timescales shows systematics that reflect deficits in the employed force models. Following a proper analysis of these deficits, possible solution strategies are highlighted in the presentation. The employed Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for gravitational and non-gravitational forces. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). The satellite TerraSAR-X flies on a dusk-dawn orbit at an altitude of approximately 510 km. In this constellation, the Sun almost constantly illuminates the satellite, which causes strong across-track accelerations in the plane perpendicular to the solar rays. The indirect effect of the solar radiation is called Earth Radiation Pressure (ERP). This force depends on the sunlight reflected by the illuminated Earth surface (visible spectrum) and on the emission of the Earth body in the infrared spectrum. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed. The scope of
NASA Astrophysics Data System (ADS)
Yang, H.-Y. Karen; Sutter, P. M.; Ricker, Paul M.
2012-12-01
Cosmological constraints derived from galaxy clusters rely on accurate predictions of cluster observable properties, in which feedback from active galactic nuclei (AGN) is a critical component. In order to model the physical effects due to supermassive black holes (SMBH) on cosmological scales, subgrid modelling is required, and a variety of implementations have been developed in the literature. However, theoretical uncertainties due to model and parameter variations are not yet well understood, limiting the predictive power of simulations including AGN feedback. By performing a detailed parameter-sensitivity study in a single cluster using several commonly adopted AGN accretion and feedback models with FLASH, we quantify the model uncertainties in predictions of cluster integrated properties. We find that quantities that are more sensitive to gas density have larger uncertainties (˜20 per cent for Mgas and a factor of ˜2 for LX at R500), whereas TX, YSZ and YX are more robust (˜10-20 per cent at R500). To make predictions beyond this level of accuracy would require more constraints on the most relevant parameters: the accretion model, mechanical heating efficiency and size of feedback region. By studying the impact of AGN feedback on the scaling relations, we find that an anti-correlation exists between Mgas and TX, which is another reason why YSZ and YX are excellent mass proxies. This anti-correlation also implies that AGN feedback is likely to be an important source of intrinsic scatter in the Mgas-TX and LX-TX relations.
Theoretical Modeling of the Discharge-Pumped Xenon-Chloride Excimer Laser.
NASA Astrophysics Data System (ADS)
Zhu, Sheng-Bai
The present dissertation is dedicated to a theoretical study of the discharge-pumped XeCl excimer laser. For a better description of the system, two models which supplement each other from different angles have been developed. The first, a comprehensive kinetics model that can be applied to detailed simulations of the temporal behavior of the discharge characteristics and laser performance, is constructed from a set of coupled first-order differential equations: the rate equations, the Boltzmann equation, the external electric circuit equations, the energy balance equation, and the equations of the optical resonator. The starting and termination of the discharge are taken into consideration for the first time, especially for the Blumlein case. Some 70 kinetic processes and 23 chemical species are included. Such a problem can only be solved numerically by means of an elaborate computer code. The second model, on the other hand, focuses on the quasi-steady state to facilitate parametric study. A group of rate coefficients for the kinetic processes involving free electrons are approximated by analytic expressions using numerical results compiled from computer-code calculations. Explicit expressions for the number densities of all relevant chemical species are obtained. Among them, HCl(0), H, and Cl never reach steady-state populations; time histories of the concentrations of these species are computed instead. With discussions of the effects of vibrational relaxation and state-to-state transfer in the upper energy level, and of the rotational structure, collisional broadening, and dissociation of the diatomic ground state, we have extensively investigated the spontaneous emission spectra, the small-signal gain, the non-saturable absorption, the steady-state laser output power, and various efficiencies. Saturation effects in laser oscillators and laser amplifiers are discussed as well. These topics relate to the
ERIC Educational Resources Information Center
Markon, Kristian E.; Krueger, Robert F.
2006-01-01
Distinguishing between discrete and continuous latent variable distributions has become increasingly important in numerous domains of behavioral science. Here, the authors explore an information-theoretic approach to latent distribution modeling, in which the ability of latent distribution models to represent statistical information in observed…
NASA Astrophysics Data System (ADS)
Olivier, Thomas; Billard, Franck; Akhouayri, Hassan
2004-06-01
Self-focusing is one of the dramatic phenomena that may occur during the propagation of a high power laser beam in a nonlinear material. This phenomenon leads to a degradation of the wave front and may also lead to photoinduced damage of the material. Realistic simulations of the propagation of high power laser beams require an accurate knowledge of the nonlinear refractive index γ. In the particular case of fused silica and in the nanosecond regime, it seems that electronic mechanisms as well as electrostriction and thermal effects can lead to a significant refractive index variation. Compared to the different methods used to measure this parameter, the Z-scan method is simple, offers good sensitivity, and may give absolute measurements if the incident beam is accurately characterized. However, this method requires a very good knowledge of the incident beam and of its propagation inside a nonlinear sample. We used a split-step propagation algorithm to simulate Z-scan curves for arbitrary beam shape, sample thickness, and nonlinear phase shift. According to our simulations and a rigorous analysis of the measured Z-scan signal, it appears that some unjustified approximations lead to very important errors. Thus, by reducing possible errors in the interpretation of Z-scan experimental studies, we performed accurate measurements of the nonlinear refractive index of fused silica that show the significant contribution of nanosecond mechanisms.
Walter, Johannes; Thajudeen, Thaseem; Süss, Sebastian; Segets, Doris; Peukert, Wolfgang
2015-04-21
Analytical centrifugation (AC) is a powerful technique for the characterisation of nanoparticles in colloidal systems. As a direct and absolute technique it requires no calibration or measurements of standards. Moreover, it offers simple experimental design and handling, high sample throughput as well as moderate investment costs. However, the full potential of AC for nanoparticle size analysis requires the development of powerful data analysis techniques. In this study we show how the application of direct boundary models to AC data opens up new possibilities in particle characterisation. An accurate analysis method, successfully applied to sedimentation data obtained by analytical ultracentrifugation (AUC) in the past, was used for the first time in analysing AC data. Unlike traditional data evaluation routines for AC using a designated number of radial positions or scans, direct boundary models consider the complete sedimentation boundary, which results in significantly better statistics. We demonstrate that meniscus fitting, as well as the correction of radius and time invariant noise significantly improves the signal-to-noise ratio and prevents the occurrence of false positives due to optical artefacts. Moreover, hydrodynamic non-ideality can be assessed by the residuals obtained from the analysis. The sedimentation coefficient distributions obtained by AC are in excellent agreement with the results from AUC. Brownian dynamics simulations were used to generate numerical sedimentation data to study the influence of diffusion on the obtained distributions. Our approach is further validated using polystyrene and silica nanoparticles. In particular, we demonstrate the strength of AC for analysing multimodal distributions by means of gold nanoparticles. PMID:25789666
NASA Astrophysics Data System (ADS)
Lachaume, Regis; Rabus, Markus; Jordan, Andres
2015-08-01
In stellar interferometry, the assumption that the observables can be treated as Gaussian, independent variables is the norm. In particular, neither the optical interferometry FITS (OIFITS) format nor the most popular fitting software in the field, LITpro, offers means to specify a covariance matrix or non-Gaussian uncertainties. Interferometric observables are correlated by construction, though. Also, the calibration by an instrumental transfer function ensures that the resulting observables are not Gaussian, even if the uncalibrated ones happened to be so. While analytic frameworks have been published in the past, they are cumbersome and no generic implementation is available. We propose here a relatively simple way of dealing with correlated errors without the need to extend the OIFITS specification or to make Gaussian assumptions. By repeatedly picking at random which interferograms, which calibrator stars, and which errors on their diameters enter the analysis, and performing the data processing on the bootstrapped data, we derive a sampling of p(O), the multivariate probability density function (PDF) of the observables O. The results can be stored in a normal OIFITS file. Then, given a model m with parameters P predicting observables O = m(P), we can estimate the PDF of the model parameters f(P) = p(m(P)) by using a density estimation of the observables' PDF p. With observations repeated over different baselines, on nights several days apart, and with a significant set of calibrators, systematic errors are de facto taken into account. We apply the technique to a precise and accurate assessment of stellar diameters obtained at the Very Large Telescope Interferometer with PIONIER.
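The resampling idea can be sketched in a few lines: bootstrap both the science interferograms and the calibrator measurements, and let the ratio define the empirical PDF of the calibrated observable. The numbers below are synthetic stand-ins, not PIONIER data.

```python
import numpy as np

rng = np.random.default_rng(2)
raw_v2 = rng.normal(0.42, 0.03, 200)   # science squared visibilities (synthetic)
cal_v2 = rng.normal(0.85, 0.04, 150)   # calibrator measurements -> transfer function

def bootstrap_calibrated(raw, cal, n_boot=4000):
    """Bootstrap both the science interferograms and the calibrator
    measurements.  The ratio makes the calibrated observable non-Gaussian
    even when the inputs are Gaussian, which is exactly why one samples
    its PDF instead of assuming a symmetric error bar."""
    out = np.empty(n_boot)
    for k in range(n_boot):
        r = raw[rng.integers(0, len(raw), len(raw))].mean()
        c = cal[rng.integers(0, len(cal), len(cal))].mean()
        out[k] = r / c                 # calibrated observable for this draw
    return out                         # empirical sampling of its PDF

pdf_samples = bootstrap_calibrated(raw_v2, cal_v2)
```

A model can then be fitted by evaluating a density estimate of `pdf_samples` at the model prediction, with no Gaussian or independence assumption anywhere in the chain.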
Stable, accurate and efficient computation of normal modes for horizontal stratified models
NASA Astrophysics Data System (ADS)
Wu, Bo; Chen, Xiaofei
2016-08-01
We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases, or inaccuracy in their calculation, may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of a `family of secular functions', which we here call `adaptive mode observers', is naturally introduced to implement this strategy; the underlying idea is distinctly noted here for the first time and may be generalized to other applications, such as free oscillations, or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and the low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee, at the same time, no loss of any physically existent modes and high precision, without excessive calculations. Finally, the conventional definition of the fundamental mode, which is entailed in the cases under study, is reconsidered. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation using a smaller number of layers, aided by the concept of a `turning point', our algorithm is remarkably efficient as well as stable and accurate, and can be used as a powerful tool for a wide range of related applications.
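The adaptive root-determining idea, reduced to its simplest form: scan a secular function on a coarse grid, bracket sign changes, and re-sample intervals whose endpoint values are suspiciously small, so that nearly coincident roots (the signature of trapped or Stoneley modes) are not lost. The toy secular function and the refinement threshold are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def find_modes(secular, lo, hi, coarse=200, refine=10):
    """Root scan with one level of adaptive refinement: intervals whose
    endpoint values are small relative to the typical magnitude of the
    secular function are re-sampled on a finer grid, so a pair of roots
    hiding inside one coarse interval (no endpoint sign change) is found."""
    roots = []
    x = np.linspace(lo, hi, coarse + 1)
    f = np.array([secular(v) for v in x])
    thresh = np.median(np.abs(f))
    for i in range(coarse):
        a, b, fa, fb = x[i], x[i + 1], f[i], f[i + 1]
        if fa == 0.0:
            roots.append(a)
        elif fa * fb < 0:
            roots.append(brentq(secular, a, b))
        elif min(abs(fa), abs(fb)) < 0.1 * thresh:   # suspiciously small: refine
            xs = np.linspace(a, b, refine + 1)
            fs = np.array([secular(v) for v in xs])
            for j in range(refine):
                if fs[j] * fs[j + 1] < 0:
                    roots.append(brentq(secular, xs[j], xs[j + 1]))
    return sorted(roots)

# toy secular function with a pair of nearly coincident roots; a plain
# sign-change scan on the same 100-interval grid would miss the close pair
g = lambda c: (c - 1.001) * (c - 1.006) * (c - 2.0)
roots = find_modes(g, 0.5, 3.0, coarse=100)
```

The same "observer" idea generalizes: instead of one secular function, one monitors a family of them and refines wherever any member signals a near-miss.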
String Theoretic Toy Models of the Big Bang
NASA Astrophysics Data System (ADS)
Michelson, Jeremy
2006-03-01
Recently, examples of toy cosmologies have been found that are exact solutions of String Theory. These solutions have the feature that the theoretical framework permits reliable calculation arbitrarily close to the big bang singularity. Thus one can understand both the big bang, and late time physics. I will describe these toy cosmologies, and how they fit into String Theory's chains of equivalences between gravitational and nongravitational theories. These equivalences are the means by which one theoretically probes the big bang.
Accurate analytical modelling of cosmic ray induced failure rates of power semiconductor devices
NASA Astrophysics Data System (ADS)
Bauer, Friedhelm D.
2009-06-01
A new, simple, and efficient approach is presented for estimating the cosmic ray induced failure rate of high-voltage silicon power devices early in the design phase. This allows common design issues, such as device losses and safe operating area, to be combined with the constraints imposed by reliability, resulting in a better and overall more efficient design methodology. Building on the experimental and theoretical background established a few years ago [Kabza H et al. Cosmic radiation as a cause for power device failure and possible countermeasures. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 9-12, Zeller HR. Cosmic ray induced breakdown in high voltage semiconductor devices, microscopic model and possible countermeasures. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 339-40, and Matsuda H et al. Analysis of GTO failure mode during d.c. blocking. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 221-5], an exact solution of the failure rate integral is derived and presented in a form that lends itself to being combined with the results available from commercial semiconductor simulation tools. Hence, failure rate integrals can be obtained with relative ease for realistic two- and even three-dimensional semiconductor geometries. Two case studies, relating to IGBT cell design and planar junction termination layout, demonstrate the utility of the method.
Accurate calculation and modeling of the adiabatic connection in density functional theory
NASA Astrophysics Data System (ADS)
Teale, A. M.; Coriani, S.; Helgaker, T.
2010-04-01
AC. When parametrized in terms of the same input data, the AC-CI model offers improved performance over the corresponding AC-D model, which is shown to be the lowest-order contribution to the AC-CI model. The utility of the accurately calculated AC curves for the analysis of standard density functionals is demonstrated for the BLYP exchange-correlation functional and the interaction-strength-interpolation (ISI) model AC integrand. From the results of this analysis, we investigate the performance of our proposed two-parameter AC-D and AC-CI models when a simple density functional for the AC at infinite interaction strength is employed in place of information at the fully interacting point. The resulting two-parameter correlation functionals offer a qualitatively correct behavior of the AC integrand with much improved accuracy over previous attempts. The AC integrands in the present work are recommended as a basis for further work, generating functionals that avoid spurious error cancellations between exchange and correlation energies and give good accuracy for the range of densities and types of correlation contained in the systems studied here.
Towards more accurate wind and solar power prediction by improving NWP model physics
NASA Astrophysics Data System (ADS)
Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo
2014-05-01
nighttime to well mixed conditions during the day presents a big challenge to NWP models. Fast decrease and successive increase in hub-height wind speed after sunrise, and the formation of nocturnal low level jets will be discussed. For PV, the life cycle of low stratus clouds and fog is crucial. Capturing these processes correctly depends on the accurate simulation of diffusion or vertical momentum transport and the interaction with other atmospheric and soil processes within the numerical weather model. Results from Single Column Model simulations and 3d case studies will be presented. Emphasis is placed on wind forecasts; however, some references to highlights concerning the PV-developments will also be given. *) ORKA: Optimierung von Ensembleprognosen regenerativer Einspeisung für den Kürzestfristbereich am Anwendungsbeispiel der Netzsicherheitsrechnungen **) EWeLiNE: Erstellung innovativer Wetter- und Leistungsprognosemodelle für die Netzintegration wetterabhängiger Energieträger, www.projekt-eweline.de
Theoretical model atmosphere spectra used for the calibration of infrared instruments
NASA Astrophysics Data System (ADS)
Decin, L.; Eriksson, K.
2007-09-01
Context: One of the key ingredients in establishing the relation between input signal and output flux from a spectrometer is accurate determination of the spectrophotometric calibration. In the case of spectrometers onboard satellites, the accuracy of this part of the calibration pedigree is ultimately linked to the accuracy of the set of reference spectral energy distributions (SEDs) that the spectrophotometric calibration is built on. Aims: In this paper, we deal with the spectrophotometric calibration of infrared (IR) spectrometers onboard satellites in the 2 to 200 μm wavelength range. We aim to compare the different reference SEDs used for the IR spectrophotometric calibration. The emphasis is on the reference SEDs of stellar standards with spectral type later than A0, with special focus on the theoretical model atmosphere spectra. Methods: Using the MARCS model atmosphere code, spectral reference SEDs were constructed for a set of IR stellar standards (A dwarfs, solar analogs, G9-M0 giants). A detailed error analysis was performed to estimate proper uncertainties on the predicted flux values. Results: It is shown that the uncertainty on the predicted fluxes can be as high as 10%, but when high-resolution optical or near-IR observational data are available and an IR excess can be excluded, the uncertainty on medium-resolution SEDs can be reduced to 1-2% in the near-IR, to ~3% in the mid-IR, and to ~5% in the far-IR. Moreover, it is argued that theoretical stellar atmosphere spectra are currently the best representations of the IR fluxes of cool stellar standards. Conclusions: When aiming at a determination of the spectrophotometric calibration of IR spectrometers better than 3%, effort should be put into constructing an appropriate set of stellar reference SEDs based on theoretical atmosphere spectra for some 15 standard stars with spectral types between A0 V and M0 III.
Toward accurate tooth segmentation from computed tomography images using a hybrid level set model
Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang E-mail: jing.xiong@siat.ac.cn; Hu, Ying; Xiong, Jing E-mail: jing.xiong@siat.ac.cn; Zhang, Jianwei
2015-01-15
Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0
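Of the overlap metrics quoted in this abstract, the Dice similarity coefficient has a simple closed form, DSC = 2|A∩B| / (|A| + |B|). A minimal sketch for binary segmentation masks (illustrative only, not the paper's pipeline):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """DSC between two binary masks: 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two toy 2 x 3 slices, each labeling 3 voxels, overlapping in 2
a = [[1, 1, 0],
     [0, 1, 0]]
b = [[1, 0, 0],
     [0, 1, 1]]
print(dice_coefficient(a, b))  # 2*2 / (3+3), i.e. about 0.667
```

A DSC of 100% means the two masks coincide voxel for voxel, so the ~89-92% values reported above indicate close but not perfect overlap with the reference segmentation.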
Towards accurate kinetic modeling of prompt NO formation in hydrocarbon flames via the NCN pathway
Sutton, Jeffrey A.; Fleming, James W.
2008-08-15
A basic kinetic mechanism that can predict the appropriate prompt-NO precursor NCN, as shown by experiment, with relative accuracy while still producing postflame NO results that can be calculated as accurately as or more accurately than through the former HCN pathway is presented for the first time. The basic NCN submechanism should be a starting point for future NCN kinetic and prompt NO formation refinement.
Myint, P. C.; Hao, Y.; Firoozabadi, A.
2015-03-27
Thermodynamic property calculations of mixtures containing carbon dioxide (CO₂) and water, including brines, are essential in theoretical models of many natural and industrial processes. The properties of greatest practical interest are density, solubility, and enthalpy. Many models for density and solubility calculations have been presented in the literature, but there exists only one study, by Spycher and Pruess, that has compared theoretical molar enthalpy predictions with experimental data [1]. In this report, we recommend two different models for enthalpy calculations: the CPA equation of state by Li and Firoozabadi [2], and the CO₂ activity coefficient model by Duan and Sun [3]. We show that the CPA equation of state, which has been demonstrated to provide good agreement with density and solubility data, also accurately calculates molar enthalpies of pure CO₂, pure water, and both CO₂-rich and aqueous (H₂O-rich) mixtures of the two species. It is applicable to a wider range of conditions than the Spycher and Pruess model. In aqueous sodium chloride (NaCl) mixtures, we show that Duan and Sun’s model yields accurate results for the partial molar enthalpy of CO₂. It can be combined with another model for the brine enthalpy to calculate the molar enthalpy of H₂O-CO₂-NaCl mixtures. We conclude by explaining how the CPA equation of state may be modified to further improve agreement with experiments. This generalized CPA is the basis of our future work on this topic.
Transport-theoretic model for the electron-proton-hydrogen atom aurora. I. Theory
Basu, B.; Jasperse, J. R.; Strickland, D. J.
1993-12-01
The first self-consistent transport-theoretic model for the combined electron-proton-hydrogen atom aurora is presented. This is needed for accurate modeling of the diffuse aurora, particularly in the midnight sector, for which a statistical study indicates that the proton contribution to the total auroral energy flux is (on the average) about 20 to 25% of that of the electrons. As a result, the ionization yield as well as the yields of many emission features will be underestimated (on the average) by about the same percentage if the proton-hydrogen atom contributions are neglected. The model presented here can also be used to study a pure electron aurora or a pure proton-hydrogen atom aurora by choosing the appropriate boundary conditions, namely, by setting the incident flux of one or the other particle population equal to zero. In the latter case, the new feature of the present model is the rigorous transport-theoretic treatment of the contributions to ionization rates and to emission rates and yields from the secondary electrons produced by protons and hydrogen atoms. A coupled set of three linear transport equations is presented. Protons and hydrogen atoms are coupled only to each other through charge-changing (charge exchange and stripping) collisions, while the electrons are coupled to both protons and hydrogen atoms through the secondary electrons that they produce. Source functions for the secondary electrons produced by the three primary particle populations are compared and contrasted, and the numerical methods for solving the coupled transport equations are described. Finally, formulas for calculating pertinent aurora-related quantities from the particle fluxes are given. 66 refs., 9 figs., 2 tabs.
Laboratory and theoretical models of planetary-scale instabilities and waves
NASA Technical Reports Server (NTRS)
Hart, John E.; Toomre, Juri
1990-01-01
Meteorologists and planetary astronomers interested in large-scale planetary and solar circulations recognize the importance of rotation and stratification in determining the character of these flows. In the past it has been impossible to accurately model the effects of sphericity on these motions in the laboratory because of the invariant relationship between the uni-directional terrestrial gravity and the rotation axis of an experiment. Researchers studied motions of rotating convecting liquids in spherical shells using electrohydrodynamic polarization forces to generate radial gravity, and hence centrally directed buoyancy forces, in the laboratory. The Geophysical Fluid Flow Cell (GFFC) experiments performed on Spacelab 3 in 1985 were analyzed. Recent efforts at interpretation led to numerical models of rotating convection with an aim to understand the possible generation of zonal banding on Jupiter and the fate of banana cells in rapidly rotating convection as the heating is made strongly supercritical. In addition, efforts to pose baroclinic wave experiments for future space missions using a modified version of the 1985 instrument led to theoretical and numerical models of baroclinic instability. Rather surprising properties were discovered, which may be useful in generating rational (rather than artificially truncated) models for nonlinear baroclinic instability and baroclinic chaos.
Theoretical model of the hydrogeology of a pull-apart basin
White, P.M.
1991-03-01
An accurate model of the hydrogeology of a basin is important in assessing the migration path of oil and its potential for remaining within a trap. Fluid flow in a basin is influenced by three driving forces: gravity, compaction, and density. The hydrogeology of most basins is affected by a combination of these three forces, but one is usually dominant. The hydrogeology of a pull-apart basin, such as the Los Angeles basin, is controlled by a combination of gravity and compaction forces. Tectonic movement within the Los Angeles basin has produced a number of small mountain ranges. These elevated features produce a large hydraulic head, driving groundwater into the basin. At the same time, the basin is undergoing compaction driving groundwater out of the basin. The complex interaction of these two forces has influenced the hydrogeologic flow within the Los Angeles basin. Oilgen, a computer modeling program, was used to develop a theoretical model for fluid flow within the Los Angeles basin. Extraction of oil in the early part of this century caused extensive subsidence in parts of the basin. To prevent further subsidence Long Beach established a water injection program in 1958. The water injection program has been successful in inhibiting subsidence and has even produced small, but measurable, amounts of rebound. Modeling was done both pre- and postinjection to allow the effects of the water injection on the hydrology of the basin to be evaluated.
Daegling, D J; Hylander, W L
2000-08-01
Experimental studies and mathematical models are disparate approaches for inferring the stress and strain environment in mammalian jaws. Experimental designs offer accurate, although limited, characterization of biomechanical behavior, while mathematical approaches (finite element modeling in particular) offer unparalleled precision in depiction of strain magnitudes, directions, and gradients throughout the mandible. Because the empirical (experimental) and theoretical (mathematical) perspectives differ in their initial assumptions and their proximate goals, the two methods can yield divergent conclusions about how masticatory stresses are distributed in the dentary. These different sources of inference may, therefore, tangibly influence subsequent biological interpretation. In vitro observation of bone strain in primate mandibles under controlled loading conditions offers a test of finite element model predictions. Two issues which have been addressed by both finite element models and experimental approaches are: (1) the distribution of torsional shear strains in anthropoid jaws and (2) the dissipation of bite forces in the human alveolar process. Not surprisingly, the experimental data and mathematical models agree on some issues, but on others exhibit discordance. Achieving congruence between these methods is critical if the nature of the relationship of masticatory stress to mandibular form is to be intelligently assessed. A case study of functional/mechanical significance of gnathic morphology in the hominid genus Paranthropus offers insight into the potential benefit of combining theoretical and experimental approaches. Certain finite element analyses claim to have identified a biomechanical problem unrecognized in previous comparative work, which, in essence, is that the enlarged transverse dimensions of the postcanine corpus may have a less important role in resisting torsional stresses than previously thought. Experimental data have identified
A theoretical model of grainsize evolution during deformation
NASA Astrophysics Data System (ADS)
Ricard, Y.; Bercovici, D.; Rozel, A.
2007-12-01
Lithospheric shear localization, as occurs in the formation of tectonic plate boundaries, is often associated with diminished grainsize (e.g., mylonites). Grainsize reduction is typically attributed to dynamic recrystallization; however, theoretical models of shear-localization arising from this hypothesis are problematic since (1) they require the simultaneous action of two exclusive creep mechanisms (diffusion and dislocation creep), and (2) the grain-growth ("healing") laws employed by these models are derived from static grain-growth or coarsening theory, although the shear-localization setting itself is far from static equilibrium. We present a new first-principles grained-continuum theory which accounts for both coarsening and damage-induced grainsize reduction. Damage per se is the generic process for generation of microcracks, defects, dislocations (including recrystallization), subgrains, nuclei and cataclastic breakdown of grains. The theory contains coupled statistical grain-scale and continuum macroscopic components. The grain-scale element of the theory prescribes both the evolution of the grainsize distribution, and a phenomenological grain-growth law derived from non-equilibrium thermodynamics; grain-growth thus incorporates the free energy differences between grains, including both grain-boundary surface energy (which controls coarsening) and the contribution of deformational work to these free energies. Conservation and positivity of entropy production provide the phenomenological form of the statistical grain-growth law. We identify four potential mechanisms that affect the distribution of grainsize; two of them conserve the number of grains but change their relative masses and two of them change the number of grains by sticking them together or breaking them. In the limit of static equilibrium, only the two mechanisms that increase the average grainsize are allowed by the second law of thermodynamics. The first one is a diffusive mass transport
A theoretical microbial contamination model for a human Mars mission
NASA Astrophysics Data System (ADS)
Lupisella, Mark Lewis
Contamination from a human presence on Mars could significantly compromise the search for extraterrestrial life. In particular, the difficulties in controlling microbial contamination, the potential for terrestrial microbes to grow, evolve, compete, and modify the Martian environment, and the likely microbial nature of putative Martian life, make microbial contamination worthy of focus as we begin to plan for a human mission to Mars. This dissertation describes a relatively simple theoretical model that can be used to explore how microbial contamination from a human Mars mission might survive and grow in the Martian soil environment surrounding a habitat. A user interface has been developed to allow a general practitioner to choose values and functions for almost all parameters ranging from the number of astronauts to the half-saturation constants for microbial growth. Systematic deviations from a baseline set of parameter values are explored as potential plausible scenarios for the first human Mars missions. The total viable population and population density are the primary state variables of interest, but other variables such as the total number of births and total dead and viable microbes are also tracked. The general approach was to find the most plausible parameter value combinations that produced a population density of 1 microbe/cm3 or greater, a threshold that was used to categorize the more noteworthy populations for subsequent analysis. Preliminary assessments indicate that terrestrial microbial contamination resulting from leakage from a limited human mission (perhaps lasting up to 5 months) will not likely become a problematic population in the near-term as long as reasonable contamination control measures are implemented (for example, a habitat leak rate no greater than 1% per hour). However, there appear to be plausible, albeit unlikely, scenarios that could cause problematic populations, depending in part on (a) the initial survival fraction and
Peterson, K.A.; Skokov, S.; Bowman, J.M.
1999-10-01
A new, global analytical potential energy surface is constructed for the X ¹A′ electronic ground state of HOCl that accurately includes the HClO isomer. The potential is obtained by using accurate ab initio data from a previously published surface [Skokov et al., J. Chem. Phys. 109, 2662 (1998)], as well as a significant number of new data for the HClO region of the surface at the same multireference configuration interaction, complete basis set limit level of theory. Vibrational energy levels and intensities are computed for both HOCl and HClO up to the OH+Cl dissociation limit and above the isomerization barrier. After making only minor adjustments to the ab initio surface, the errors with respect to experiment for HOCl are generally within a few cm⁻¹ for 22 vibrational levels, with the largest error being 26 cm⁻¹. A total of 813 bound vibrational states are calculated for HOCl. The HClO potential well supports 57 localized states, of which only the first 3 are bound. The strongest dipole transitions for HClO were computed for the fundamentals: 33, 2.9, and 25 km/mol for ν₁, ν₂, and ν₃, respectively. From exact J=1 ro-vibrational calculations, state-dependent rotational constants have been calculated for HClO. Lastly, resonance calculations with the new potential demonstrate that the presence of the HClO minimum has a negligible effect on the resonance states of HOCl near the dissociation threshold, due to the relatively high and wide isomerization barrier. © 1999 American Institute of Physics.
Measured Model, Theoretical Model and Represented Model: the So-Called Arch of Drusus in Rome
NASA Astrophysics Data System (ADS)
Canciani, M.; Maestri, D.; Spadafora, G.; Manacorda, D.; Di Cola, V.
2011-09-01
The Arch of Drusus is a complex building, stratified over time. A single hypothesis about its origin cannot be advanced; rather, its several transformations admit different interpretations. The difficulty lies in the coexistence, in a single monument, of two structures that are typologically and chronologically different: an original structure relatable to a commemorative travertine arch sheathed in marble, dating back to the Imperial Age, which probably had three fornices, and a later structure reused in the III century as an aqueduct arch and monumentalized again with the application of decorated architectural elements on the southern façade. In order to provide a graphic description that is as accurate as possible from the metric-dimensional point of view and as detailed as possible in all the elements which form the building, a new survey methodology has been tested. It uses different kinds of systems (instrumental, topographic and GPS, photogrammetric, and direct traditional) which complement each other, in order to render a three-dimensional computerized reference model. The analysis of the monument, building on what emerged from the archaeological analysis and on the production of different navigable models, was developed by first making a represented model, subsequently detailed on the basis of the incongruities detected in the survey.
Presenting a Theoretical Model of Four Conceptions of Civic Education
ERIC Educational Resources Information Center
Cohen, Aviv
2010-01-01
This conceptual study will question the ways different epistemological conceptions of citizenship and education influence the characteristics of civic education. While offering a new theoretical framework, the different undercurrent conceptions that lay at the base of the civic education process shall be brought forth. With the use of the method…
Wettability of graphitic-carbon and silicon surfaces: MD modeling and theoretical analysis
Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G. P.
2015-07-28
The wettability of graphitic carbon and silicon surfaces was numerically and theoretically investigated. A multi-response method has been developed for the analysis of conventional molecular dynamics (MD) simulations of droplets wettability. The contact angle and indicators of the quality of the computations are tracked as a function of the data sets analyzed over time. This method of analysis allows accurate calculations of the contact angle obtained from the MD simulations. Analytical models were also developed for the calculation of the work of adhesion using the mean-field theory, accounting for the interfacial entropy changes. A calibration method is proposed to provide better predictions of the respective contact angles under different solid-liquid interaction potentials. Estimations of the binding energy between a water monomer and graphite match those previously reported. In addition, a breakdown in the relationship between the binding energy and the contact angle was observed. The macroscopic contact angles obtained from the MD simulations were found to match those predicted by the mean-field model for graphite under different wettability conditions, as well as the contact angles of Si(100) and Si(111) surfaces. Finally, an assessment of the effect of the Lennard-Jones cutoff radius was conducted to provide guidelines for future comparisons between numerical simulations and analytical models of wettability.
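The connection this abstract draws between work of adhesion and contact angle can be made concrete with the macroscopic Young-Dupré relation, W_adh = γ_lv (1 + cos θ). The sketch below is a generic application of that relation, not the authors' mean-field model; the water surface tension is an assumed textbook value:

```python
import math

# Assumed liquid-vapor surface tension of water near 20 °C, in J/m^2
GAMMA_WATER = 72.8e-3

def contact_angle_deg(w_adh, gamma_lv=GAMMA_WATER):
    """Invert Young-Dupre: W_adh = gamma_lv * (1 + cos(theta)); return theta in degrees."""
    cos_theta = w_adh / gamma_lv - 1.0
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("W_adh outside the partial-wetting range [0, 2*gamma_lv]")
    return math.degrees(math.acos(cos_theta))

# A hypothetical work of adhesion of 100 mJ/m^2 gives a wetting angle near 68 degrees
print(contact_angle_deg(100e-3))
```

The breakdown between binding energy and contact angle noted in the abstract is precisely a case where such a simple one-parameter inversion stops being predictive on its own.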
Technology Transfer Automated Retrieval System (TEKTRAN)
The three evapotranspiration (ET) measurement/retrieval techniques used in this study, lysimeter, scintillometer and remote sensing vary in their level of complexity, accuracy, resolution and applicability. The lysimeter with its point measurement is the most accurate and direct method to measure ET...
Knorr, K L; Hilsenbeck, S G; Wenger, C R; Pounds, G; Oldaker, T; Vendely, P; Pandian, M R; Harrington, D; Clark, G M
1992-01-01
Determining an appropriate level of adjuvant therapy is one of the most difficult facets of treating breast cancer patients. Although the myriad of prognostic factors aid in this decision, often they give conflicting reports of a patient's prognosis. What we need is a survival model which can properly utilize the information contained in these factors and give an accurate, reliable account of the patient's probability of recurrence. We also need a method of evaluating these models' predictive ability instead of simply measuring goodness-of-fit, as is currently done. Often, prognostic factors are broken into two categories such as positive or negative. But this dichotomization may hide valuable prognostic information. We investigated whether continuous representations of factors, including standard transformations (logarithmic, square root, categorical, and smoothers), might more accurately estimate the underlying relationship between each factor and survival. We chose the logistic regression model, a special case of the commonly used Cox model, to test our hypothesis. The model containing continuous transformed factors fit the data more closely than the model containing the traditional dichotomized factors. In order to appropriately evaluate these models, we introduce three predictive validity statistics (the Calibration score, the Overall Calibration score, and the Brier score) designed to assess the model's accuracy and reliability. These standardized scores showed the transformed factors predicted three-year survival accurately and reliably. The scores can also be used to assess models or compare across studies. PMID:1391991
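Of the three predictive validity statistics named in this abstract, the Brier score has a standard definition: the mean squared difference between predicted probabilities and observed 0/1 outcomes. A minimal sketch of that definition (the Calibration scores are standardized variants not reproduced here):

```python
import numpy as np

def brier_score(p_pred, outcomes):
    """Mean squared error between predicted probabilities and binary outcomes.

    0 is perfect; 0.25 is the score of an uninformative constant-0.5 forecast.
    """
    p = np.asarray(p_pred, dtype=float)
    y = np.asarray(outcomes, dtype=float)
    return float(np.mean((p - y) ** 2))

# Hypothetical three-year survival probabilities vs. observed survival (1 = survived)
print(brier_score([0.9, 0.7, 0.4, 0.2], [1, 1, 0, 0]))  # about 0.075
```

Because it is computed from predictions alone, the same score can be applied to any probabilistic survival model, which is what allows comparison across studies as the abstract notes.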
LAF: Theoretical Model of Large Amplitude Folding of a Single Viscous Layer
NASA Astrophysics Data System (ADS)
Adamuszek, M.; Schmid, D. W.; Dabrowski, M.
2012-04-01
We present a theoretical model for Large Amplitude Folding (LAF) during buckling of a single, viscous layer. The model accurately predicts the evolution of geometrical fold parameters (amplitude, wavelength, and thickness) and is not restricted to any viscosity ratio or type of perturbation. The model employs two corrections to the formula of the initial growth rate of folds that is calculated using the thick-plate solution of Fletcher (Tectonophysics, 1977). The growth rate is modified by incorporating 1) the evolution of wavelength to thickness ratio, after Fletcher (American Journal of Science, 1974) and 2) the reduction of the growth rate, originally introduced by Schmalholz and Podladchikov (EPSL, 2000). The former correction is a consequence of the layer shortening and thickening. The latter modification is the result of using an effective rate of layer shortening as the driving force for fold growth, rather than the applied background shortening rate. The effective rate of the layer shortening is approximated by the rate of fold arclength shortening. In the model, we use an analytical expression derived based on the evolution of sinusoidal waveforms. These two modifications to the growth rate were already separately employed in previous studies. Through comparison with numerical models, we show that the simultaneous application of both corrections in LAF provides a better prediction of the evolution of the fold geometry parameters up to large amplitudes, compared to the models with only one correction. Our studies of the fold evolution from initial single and multiple (random noise, step and bell-shape function) waveforms show a remarkable fit between LAF and the numerical results. In the multiple waveform models, we predict a coupling between the components. In LAF, folds developed from initial random perturbations exhibit irregular but periodic shapes, characteristic for folds observed in nature. We also show that the evolution of folds from localized
ERIC Educational Resources Information Center
Dziedziewicz, Dorota; Karwowski, Maciej
2015-01-01
This paper presents a new theoretical model of creative imagination and its applications in early education. The model sees creative imagination as composed of three inter-related components: vividness of images, their originality, and the level of transformation of imageries. We explore the theoretical and practical consequences of this new…
A theoretical model of phase changes of a klystron due to variation of operating parameters
NASA Technical Reports Server (NTRS)
Kupiszewski, A.
1980-01-01
A mathematical model for phase changes of the VA-876 CW klystron amplifier output is presented and variations of several operating parameters are considered. The theoretical approach to the problem is based upon a gridded gap modeling with inclusion of a second order correction term so that actual gap geometry is reflected in the formulation. Physical measurements are contrasted to theoretical calculations.
A Model of Resource Allocation in Public School Districts: A Theoretical and Empirical Analysis.
ERIC Educational Resources Information Center
Chambers, Jay G.
This paper formulates a comprehensive model of resource allocation in a local public school district. The theoretical framework specified could be applied equally well to any number of local public social service agencies. Section 1 develops the theoretical model describing the process of resource allocation. This involves the determination of the…
NASA Technical Reports Server (NTRS)
Raj, S. V.
2010-01-01
Establishing the geometry of foam cells is useful in developing microstructure-based acoustic and structural models. Since experimental data on the geometry of the foam cells are limited, most modeling efforts use the three-dimensional, space-filling Kelvin tetrakaidecahedron. The validity of this assumption is investigated in the present paper. Several FeCrAlY foams with relative densities varying between 3 and 15 percent and cells per mm (c.p.mm.) varying between 0.2 and 3.9 c.p.mm. were microstructurally evaluated. The number of edges per face for each foam specimen was counted by approximating the cell faces by regular polygons, where the number of cell faces measured varied between 207 and 745. The present observations revealed that 50 to 57 percent of the cell faces were pentagonal while 24 to 28 percent were quadrilateral and 15 to 22 percent were hexagonal. The present measurements are shown to be in excellent agreement with literature data. It is demonstrated that the Kelvin model, as well as other proposed theoretical models, cannot accurately describe the FeCrAlY foam cell structure. Instead, it is suggested that the ideal foam cell geometry consists of 11 faces with 3 quadrilateral, 6 pentagonal faces and 2 hexagonal faces consistent with the 3-6-2 cell.
NASA Technical Reports Server (NTRS)
Raj, Sai V.
2011-01-01
Establishing the geometry of foam cells is useful in developing microstructure-based acoustic and structural models. Since experimental data on the geometry of the foam cells are limited, most modeling efforts use an idealized three-dimensional, space-filling Kelvin tetrakaidecahedron. The validity of this assumption is investigated in the present paper. Several FeCrAlY foams with relative densities varying between 3 and 15% and cells per mm (c.p.mm.) varying between 0.2 and 3.9 c.p.mm. were microstructurally evaluated. The number of edges per face for each foam specimen was counted by approximating the cell faces by regular polygons, where the number of cell faces measured varied between 207 and 745. The present observations revealed that 50-57% of the cell faces were pentagonal while 24-28% were quadrilateral and 15-22% were hexagonal. The present measurements are shown to be in excellent agreement with literature data. It is demonstrated that the Kelvin model, as well as other proposed theoretical models, cannot accurately describe the FeCrAlY foam cell structure. Instead, it is suggested that the ideal foam cell geometry consists of 11 faces with 3 quadrilateral, 6 pentagonal faces and 2 hexagonal faces consistent with the 3-6-2 Matzke cell.
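As a quick consistency check (a sketch, not taken from the paper's methods), the face fractions implied by the suggested 3-6-2 cell fall inside the measured ranges reported in the abstract:

```python
# Face census of the suggested ideal cell: 11 faces comprising
# 3 quadrilateral, 6 pentagonal and 2 hexagonal faces (the "3-6-2" cell).
faces = {"quadrilateral": 3, "pentagonal": 6, "hexagonal": 2}
total = sum(faces.values())  # 11 faces

# Percentages implied by the 3-6-2 cell
implied = {kind: 100.0 * count / total for kind, count in faces.items()}

# Measured ranges quoted in the abstract (percent of cell faces)
measured = {"quadrilateral": (24, 28), "pentagonal": (50, 57), "hexagonal": (15, 22)}

for kind, (lo, hi) in measured.items():
    print(f"{kind}: {implied[kind]:.1f}% (measured {lo}-{hi}%)")
```

The implied fractions (27.3% quadrilateral, 54.5% pentagonal, 18.2% hexagonal) each lie within the corresponding measured range, which is the sense in which the 3-6-2 cell is "consistent" with the observations.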
Information-theoretic model comparison unifies saliency metrics
Kümmerer, Matthias; Wallis, Thomas S. A.; Bethge, Matthias
2015-01-01
Learning the properties of an image associated with human gaze placement is important both for understanding how biological systems explore the environment and for computer vision applications. There is a large literature on quantitative eye movement models that seeks to predict fixations from images (sometimes termed “saliency” prediction). A major problem known to the field is that existing model comparison metrics give inconsistent results, causing confusion. We argue that the primary reason for these inconsistencies is that different metrics and models use different definitions of what a “saliency map” entails. For example, some metrics expect a model to account for image-independent central fixation bias whereas others will penalize a model that does. Here we bring saliency evaluation into the domain of information theory by framing fixation prediction models probabilistically and calculating information gain. We jointly optimize the scale, the center bias, and spatial blurring of all models within this framework. Evaluating existing metrics on these rephrased models produces almost perfect agreement in model rankings across the metrics. Model performance is separated from center bias and spatial blurring, avoiding the confounding of these factors in model comparison. We additionally provide a method to show where and how models fail to capture information in the fixations on the pixel level. These methods are readily extended to spatiotemporal models of fixation scanpaths, and we provide a software package to facilitate their use. PMID:26655340
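The central quantity, the information gain of a probabilistic fixation model over a baseline, can be sketched as a mean log-likelihood difference over observed fixations (the function and variable names below are illustrative, not from the authors' software package):

```python
import numpy as np

def information_gain(model_density, baseline_density, fixations):
    """Mean log-likelihood gain (bits per fixation) of a probabilistic
    saliency model over a baseline model (e.g. a center-bias model).
    Densities are 2-D arrays normalised to sum to 1; fixations is a
    list of (row, col) pixel coordinates of observed gaze positions."""
    m = np.asarray(model_density, dtype=float)
    b = np.asarray(baseline_density, dtype=float)
    rows, cols = zip(*fixations)
    return float(np.mean(np.log2(m[rows, cols]) - np.log2(b[rows, cols])))

# Toy example: a 2x2 image, uniform baseline, model concentrating mass top-left
model = np.array([[0.5, 0.25], [0.125, 0.125]])
baseline = np.full((2, 2), 0.25)
ig = information_gain(model, baseline, [(0, 0)])  # 1 bit gained for this fixation
```

A positive value means the model assigns, on average, more probability to where people actually looked than the baseline does; zero means no improvement over the baseline.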
Theoretical model-based quantitative optimisation of numerical modelling for eddy current NDT
NASA Astrophysics Data System (ADS)
Yu, Yating; Li, Xinhua; Simm, Anthony; Tian, Guiyun
2011-06-01
Eddy current (EC) nondestructive testing (NDT) is one of the most widely used NDT methods. Numerical modelling of NDT methods has been used as an important investigative approach alongside experimental and theoretical studies. This paper investigates the set-up of numerical modelling using the finite-element method in terms of the optimal selection of element mesh size in different regions within the model, based on theoretical analysis of EC NDT. The modelling set-up is refined and evaluated through numerical simulation, balancing both computation time and simulation accuracy. A case study in the optimisation of the modelling set-up of an EC NDT system with a cylindrical probe coil is carried out to verify the proposed optimisation approach. Here, the mesh size of the simulation model is set based on the geometries of the coil and the magnetic sensor, as well as on the skin depth in the sample, so the optimised modelling set-up remains useful even when the geometry of the EC system, the excitation frequency or the pulse width is changed, as in multi-frequency EC, swept-frequency EC or pulsed EC systems. Furthermore, this optimisation approach can be used to improve the trade-off between accuracy and computation time in other, more complex EC NDT simulations.
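The skin depth driving this mesh-size choice follows the standard electromagnetic formula δ = 1/√(π f μ σ). A minimal sketch (the material values and the three-elements-per-skin-depth rule are illustrative assumptions, not taken from the paper):

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space (H/m)

def skin_depth(freq_hz, sigma, mu_r=1.0):
    """Electromagnetic skin depth in metres: delta = 1/sqrt(pi*f*mu*sigma)."""
    return 1.0 / math.sqrt(math.pi * freq_hz * mu_r * MU0 * sigma)

# Illustrative case: aluminium (sigma ~ 3.5e7 S/m) probed at 10 kHz
delta = skin_depth(1.0e4, 3.5e7)  # roughly 0.85 mm
# Assumed meshing rule: at least 3 elements across one skin depth near the surface
mesh_size = delta / 3.0
print(f"skin depth = {delta * 1e3:.2f} mm, near-surface mesh size = {mesh_size * 1e3:.2f} mm")
```

Because δ shrinks with frequency and conductivity, re-evaluating it when the excitation changes is what lets the same meshing rule carry over to multi-frequency or pulsed excitation.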
College Students Solving Chemistry Problems: A Theoretical Model of Expertise
ERIC Educational Resources Information Center
Taasoobshirazi, Gita; Glynn, Shawn M.
2009-01-01
A model of expertise in chemistry problem solving was tested on undergraduate science majors enrolled in a chemistry course. The model was based on Anderson's "Adaptive Control of Thought-Rational" (ACT-R) theory. The model shows how conceptualization, self-efficacy, and strategy interact and contribute to the successful solution of quantitative,…
Dunn, Nicholas J. H.; Noid, W. G.
2015-12-28
The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1-, 2-, and 3-site CG models for heptane, as well as 1- and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed “pressure-matching” variational principle to determine a volume-dependent contribution to the potential, U_V(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing U_V, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that U_V accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the “simplicity” of the model.
ERIC Educational Resources Information Center
Kim, Young Rae
2013-01-01
A theoretical model of metacognition in complex modeling activities has been developed based on existing frameworks, by synthesizing the re-conceptualization of metacognition at multiple levels by looking at the three sources that trigger metacognition. Using the theoretical model as a framework, this study was designed to explore how students'…
Theoretical modelling of the semiconductor-electrolyte interface
NASA Astrophysics Data System (ADS)
Schelling, Patrick Kenneth
We have developed tight-binding models of transition metal oxides. In contrast to many tight-binding models, these models include a description of electron-electron interactions. After parameterizing to bulk first-principles calculations, we demonstrated the transferability of the model by calculating the atomic and electronic structure of rutile surfaces, which compared well with experiment and first-principles calculations. We also studied the structure of twist grain boundaries in rutile. Molecular dynamics simulations using the model were also carried out to describe polaron localization. We have also demonstrated that tight-binding models can be constructed to describe metallic systems. The computational cost of tight-binding simulations was greatly reduced by incorporating O(N) electronic structure methods. We have also interpreted photoluminescence experiments on GaAs electrodes in contact with an electrolyte using drift-diffusion models. Electron transfer velocities were obtained by fitting to experimental results.
A graph theoretical perspective of a drug abuse epidemic model
NASA Astrophysics Data System (ADS)
Nyabadza, F.; Mukwembi, S.; Rodrigues, B. G.
2011-05-01
A drug use epidemic can be represented by a finite number of states and transition rules that govern the dynamics of drug use in each discrete time step. This paper investigates the spread of drug use in a community where some users are in treatment and others are not in treatment, citing South Africa as an example. In our analysis, we consider the neighbourhood prevalence of each individual, i.e., the proportion of the individual’s drug user contacts who are not in treatment amongst all of his or her contacts. We introduce parameters α∗, β∗ and γ∗, depending on the neighbourhood prevalence, which govern the spread of drug use. We examine how changes in α∗, β∗ and γ∗ affect the system dynamics. Simulations presented support the theoretical results.
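A minimal discrete-time sketch of such neighbourhood-prevalence-driven dynamics on a contact graph (the state names and the way the rates enter the transition probabilities are our illustrative assumptions; the paper's α∗, β∗ and γ∗ are parameters defined on the neighbourhood prevalence itself):

```python
import random

# States: 'S' susceptible, 'U' drug user not in treatment, 'T' user in treatment
def neighbourhood_prevalence(state, contacts, i):
    """Proportion of i's contacts who are users not in treatment."""
    nbrs = contacts[i]
    if not nbrs:
        return 0.0
    return sum(state[j] == "U" for j in nbrs) / len(nbrs)

def step(state, contacts, alpha, beta, gamma, rng):
    """One discrete time step (synchronous update).
    alpha: initiation rate, scaled by neighbourhood prevalence (assumption);
    beta:  rate of entering treatment (U -> T);
    gamma: relapse rate (T -> U)."""
    new = dict(state)
    for i, s in state.items():
        p = neighbourhood_prevalence(state, contacts, i)
        if s == "S" and rng.random() < alpha * p:
            new[i] = "U"
        elif s == "U" and rng.random() < beta:
            new[i] = "T"
        elif s == "T" and rng.random() < gamma:
            new[i] = "U"
    return new

# Tiny example: node 3 is susceptible, with one untreated-user contact out of two
state = {1: "U", 2: "T", 3: "S"}
contacts = {1: [3], 2: [3], 3: [1, 2]}
rng = random.Random(0)
next_state = step(state, contacts, alpha=0.5, beta=0.1, gamma=0.1, rng=rng)
```

Node 3's initiation probability here is alpha times its neighbourhood prevalence of 0.5, mirroring the paper's idea that spread is governed by the proportion of untreated user contacts rather than by overall prevalence.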
Accurate cortical tissue classification on MRI by modeling cortical folding patterns.
Kim, Hosung; Caldairou, Benoit; Hwang, Ji-Wook; Mansi, Tommaso; Hong, Seok-Jun; Bernasconi, Neda; Bernasconi, Andrea
2015-09-01
Accurate tissue classification is a crucial prerequisite to MRI morphometry. Automated methods based on intensity histograms constructed from the entire volume are challenged by regional intensity variations due to local radiofrequency artifacts as well as disparities in tissue composition, laminar architecture and folding patterns. The current work proposes a novel anatomy-driven method in which parcels conforming to cortical folding were regionally extracted from the brain. Each parcel is subsequently classified using nonparametric mean shift clustering. Evaluation was carried out on manually labeled images from two datasets acquired at 3.0 Tesla (n = 15) and 1.5 Tesla (n = 20). In both datasets, we observed high tissue classification accuracy of the proposed method (Dice index >97.6% at 3.0 Tesla, and >89.2% at 1.5 Tesla). Moreover, our method consistently outperformed state-of-the-art classification routines available in SPM8 and FSL-FAST, as well as a recently proposed local classifier that partitions the brain into cubes. Contour-based analyses showed that the proposed framework classified the white matter-gray matter (GM) interface more accurately than the other algorithms, particularly in central and occipital cortices, which generally display bright GM due to their high degree of myelination. Excellent accuracy was maintained, even in the absence of correction for intensity inhomogeneity. The presented anatomy-driven local classification algorithm may significantly improve cortical boundary definition, with possible benefits for morphometric inference and biomarker discovery. PMID:26037453
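The nonparametric mean-shift step can be illustrated with a toy one-dimensional version (a flat-kernel sketch on scalar intensities; the actual method operates on MRI intensity distributions within anatomical parcels):

```python
import numpy as np

def mean_shift_modes(samples, bandwidth, iters=50):
    """1-D mean shift with a flat kernel: each point repeatedly moves to the
    mean of the original samples within `bandwidth` of it, settling at a
    density mode. Points that converge to the same mode form one cluster."""
    x = np.asarray(samples, dtype=float)
    pts = x.copy()
    for _ in range(iters):
        for i, p in enumerate(pts):
            nbrs = x[np.abs(x - p) <= bandwidth]
            pts[i] = nbrs.mean()
    return np.unique(np.round(pts, 3))

# Two well-separated intensity groups collapse to two modes
modes = mean_shift_modes([0.0, 0.1, 0.2, 5.0, 5.1], bandwidth=0.5)
```

Because the number of modes is discovered from the data rather than fixed in advance, such a clustering adapts to each parcel's local intensity distribution, which is the appeal of the nonparametric approach described above.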
Surface electron density models for accurate ab initio molecular dynamics with electronic friction
NASA Astrophysics Data System (ADS)
Novko, D.; Blanco-Rey, M.; Alducin, M.; Juaristi, J. I.
2016-06-01
Ab initio molecular dynamics with electronic friction (AIMDEF) is a valuable methodology to study the interaction of atomic particles with metal surfaces. This method, in which the effect of low-energy electron-hole (e-h) pair excitations is treated within the local density friction approximation (LDFA) [Juaristi et al., Phys. Rev. Lett. 100, 116102 (2008), 10.1103/PhysRevLett.100.116102], can provide an accurate description of both e-h pair and phonon excitations. In practice, its application becomes complicated in situations with substantial surface atom displacements, because the LDFA requires knowledge of the bare surface electron density at each integration step. In this work, we propose three different methods of calculating the electron density of the distorted surface on the fly, and we discuss their suitability under typical surface distortions. The investigated methods are used in AIMDEF simulations for three illustrative adsorption cases, namely, dissociated H2 on Pd(100), N on Ag(111), and N2 on Fe(110). Our AIMDEF calculations performed with the three approaches highlight the importance of going beyond the frozen surface density to accurately describe the energy released into e-h pair excitations in the case of large surface atom displacements.
A Type-Theoretic Framework for Certified Model Transformations
NASA Astrophysics Data System (ADS)
Calegari, Daniel; Luna, Carlos; Szasz, Nora; Tasistro, Álvaro
We present a framework based on the Calculus of Inductive Constructions (CIC) and its associated tool, the Coq proof assistant, to allow certification of model transformations in the context of Model-Driven Engineering (MDE). The approach is based on a semi-automatic translation process from metamodels, models and transformations of the MDE technical space into types, propositions and functions of the CIC technical space. We describe this translation and illustrate its use in a standard case study.
NASA Astrophysics Data System (ADS)
Zhang, Xiang; Vu-Quoc, Loc
2007-07-01
We present in this paper the displacement-driven version of a tangential force-displacement (TFD) model that accounts for both elastic and plastic deformations together with interfacial friction occurring in collisions of spherical particles. This elasto-plastic frictional TFD model, with its force-driven version presented in [L. Vu-Quoc, L. Lesburg, X. Zhang, An accurate tangential force-displacement model for granular-flow simulations: contacting spheres with plastic deformation, force-driven formulation, Journal of Computational Physics 196(1) (2004) 298-326], is consistent with the elasto-plastic frictional normal force-displacement (NFD) model presented in [L. Vu-Quoc, X. Zhang, An elasto-plastic contact force-displacement model in the normal direction: displacement-driven version, Proceedings of the Royal Society of London, Series A 455 (1999) 4013-4044]. Both the NFD model and the present TFD model are based on the concept of additive decomposition of the radius of the contact area into an elastic part and a plastic part. The effect of permanent indentation after impact is represented by a correction to the radius of curvature. The effect of material softening due to plastic flow is represented by a correction to the elastic moduli. The proposed TFD model is accurate, and is validated against nonlinear finite-element analyses involving plastic flow in both the loading and unloading conditions. The proposed consistent displacement-driven, elasto-plastic NFD and TFD models are designed for implementation in computer codes using the discrete-element method (DEM) for granular-flow simulations.
THEORETICAL MODEL OF SOILING OF SURFACES BY AIRBORNE PARTICLES
A model is developed which can be used to predict the change in reflectance from a surface as a function of time. Reflectance change is a measure of soiling caused by the deposition of particles on a surface. The major inputs to the model are the parameters to a bimodal distribut...
Multi Sensor Data Integration for AN Accurate 3d Model Generation
NASA Astrophysics Data System (ADS)
Chhatkuli, S.; Satoh, T.; Tachibana, K.
2015-05-01
The aim of this paper is to introduce a novel technique for integrating two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create a 3D city model. Aerial imagery produces overall decent 3D city models and is generally suited to generating 3D models of building roofs and non-complex terrain. However, 3D models automatically generated from aerial imagery generally lack accuracy for roads under bridges, details under tree canopy, isolated trees, etc. Moreover, in many cases they also suffer from undulating road surfaces, non-conforming building shapes, and loss of minute details like street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each sensor's weaknesses and helps create a very detailed 3D model with better accuracy. Moreover, additional details like isolated trees and street furniture, which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two datasets were acquired in different time periods, the integrated data set, i.e. the final 3D model, was generally noise free and without unnecessary details.
ERIC Educational Resources Information Center
Gong, Yue; Beck, Joseph E.; Heffernan, Neil T.
2011-01-01
Student modeling is a fundamental concept applicable to a variety of intelligent tutoring systems (ITS). However, there is not a lot of practical guidance on how to construct and train such models. This paper compares two approaches for student modeling, Knowledge Tracing (KT) and Performance Factors Analysis (PFA), by evaluating their predictive…
Models in biology: ‘accurate descriptions of our pathetic thinking’
2014-01-01
In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as ‘predictive’, in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders. PMID:24886484
NASA Astrophysics Data System (ADS)
Rybynok, V. O.; Kyriacou, P. A.
2007-10-01
Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite many attempts. This paper tackles one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that will enable accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to show that such a scheme is capable of accurately extracting the concentration of glucose from a complex biological medium.
Experimental observations and theoretical models for beam-beam phenomena
Kheifets, S.
1981-03-01
The beam-beam interaction in storage rings exhibits all the characteristics of nonintegrable dynamical systems. Here one finds all kinds of resonances, closed orbits, stable and unstable fixed points, stochastic layers, chaotic behavior, diffusion, etc. Although the storage ring itself is an expensive device, once constructed and put into operation it presents a good opportunity for experimentally studying the long-time behavior of both conservative (proton machines) and nonconservative (electron machines) dynamical systems - the number of bunch-bunch interactions routinely reaches values of 10^10-10^11 and could be increased by decreasing the beam current. At the same time, the beam-beam interaction puts practical limits on the yield of the storage ring. This phenomenon not only determines the design values of the main storage ring parameters (luminosity, space charge parameters, beam current), but also in fact prevents many existing storage rings from achieving their design parameters. Hence, the problem has great practical importance along with its enormous theoretical interest. A brief overview of the problem is presented.
Psychosocial stress and prostate cancer: a theoretical model.
Ellison, G L; Coker, A L; Hebert, J R; Sanderson, S M; Royal, C D; Weinrich, S P
2001-01-01
African-American men are more likely to develop and die from prostate cancer than are European-American men; yet, factors responsible for the racial disparity in incidence and mortality have not been elucidated. Socioeconomic disadvantage is more prevalent among African-American than among European-American men. Socioeconomic disadvantage can lead to psychosocial stress and may be linked to negative lifestyle behaviors. Regardless of socioeconomic position, African-American men routinely experience racism-induced stress. We propose a theoretical framework for an association between psychosocial stress and prostate cancer. Within the context of history and culture, we further propose that psychosocial stress may partially explain the variable incidence of prostate cancer between these diverse groups. Psychosocial stress may negatively impact the immune system leaving the individual susceptible to malignancies. Behavioral responses to psychosocial stress are amenable to change. If psychosocial stress is found to negatively impact prostate cancer risk, interventions may be designed to modify reactions to environmental demands. PMID:11572415
Theoretical Tools in Modeling Communication and Language Dynamics
NASA Astrophysics Data System (ADS)
Loreto, Vittorio
Statistical physics has proven to be a very fruitful framework to describe phenomena outside the realm of traditional physics. In social phenomena, the basic constituents are not particles but humans, and every individual interacts with a limited number of peers, usually negligible compared to the total number of people in the system. In spite of that, human societies are characterized by stunning global regularities that naturally call for a statistical physics approach to social behavior, i.e., the attempt to understand regularities at large scale as collective effects of the interaction among single individuals, considered as relatively simple entities. This is the paradigm of Complex Systems: an assembly of many interacting (and simple) units whose collective behavior is not trivially deducible from the knowledge of the rules governing their mutual interactions. In this chapter we review the main theoretical concepts and tools that physics can bring to socially motivated problems. Despite their apparent diversity, most research lines in social dynamics are actually closely connected from the point of view of both the methodologies employed and, more importantly, of the general phenomenological questions, e.g., what are the fundamental interaction mechanisms leading to the emergence of consensus on an issue, a shared culture, a common language or a collective motion?
More accurate predictions with transonic Navier-Stokes methods through improved turbulence modeling
NASA Technical Reports Server (NTRS)
Johnson, Dennis A.
1989-01-01
Significant improvements in predictive accuracy for off-design conditions are achievable through better turbulence modeling, without necessarily adding any significant complication to the numerics. One well-established fact about turbulence is that it is slow to respond to changes in the mean strain field. With the 'equilibrium' algebraic turbulence models, no attempt is made to model this characteristic; as a consequence, these turbulence models exaggerate the turbulent boundary layer's ability to produce turbulent Reynolds shear stresses in regions of adverse pressure gradient. Consequently, too little momentum loss within the boundary layer is predicted in the region of the shock wave and along the aft part of the airfoil, where the surface pressure undergoes further increases. Recently, a 'nonequilibrium' algebraic turbulence model was formulated which attempts to capture this important characteristic of turbulence. This 'nonequilibrium' algebraic model employs an ordinary differential equation to model the slow response of the turbulence to changes in local flow conditions. In its original form, there was some question as to whether this 'nonequilibrium' model performed as well as the 'equilibrium' models for weak interaction cases. However, the turbulence model has since been further improved, and it now appears to perform at least as well as the 'equilibrium' models for weak interaction cases while representing a very significant improvement for strong interaction cases. The performance of this turbulence model relative to popular 'equilibrium' models is illustrated for three airfoil test cases of the 1987 AIAA Viscous Transonic Airfoil Workshop, Reno, Nevada. A form of this 'nonequilibrium' turbulence model is currently being applied to wing flows, for which similar improvements in predictive accuracy are being realized.
Johnson, Timothy C.; Wellman, Dawn M.
2015-06-26
Electrical resistivity tomography (ERT) has been widely used in environmental applications to study processes associated with subsurface contaminants and contaminant remediation. Anthropogenic alterations in subsurface electrical conductivity associated with contamination often originate from highly industrialized areas with significant amounts of buried metallic infrastructure. The deleterious influence of such infrastructure on imaging results generally limits the utility of ERT where it might otherwise prove useful for subsurface investigation and monitoring. In this manuscript we present a method of accurately modeling the effects of buried conductive infrastructure within the forward modeling algorithm, thereby removing them from the inversion results. The method is implemented in parallel using immersed interface boundary conditions, whereby the global solution is reconstructed from a series of well-conditioned partial solutions. Forward modeling accuracy is demonstrated by comparison with analytic solutions. Synthetic imaging examples are used to investigate imaging capabilities within a subsurface containing electrically conductive buried tanks, transfer piping, and well casing, using both well casings and vertical electrode arrays as current sources and potential measurement electrodes. Results show that, although accurate infrastructure modeling removes the dominating influence of buried metallic features, the presence of metallic infrastructure degrades imaging resolution compared to standard ERT imaging. However, accurate imaging results may be obtained if electrodes are appropriately located.
Ion Implantation into Presolar Grains: A Theoretical Model
NASA Astrophysics Data System (ADS)
Verchovsky, A. B.; Wright, I. P.; Pillinger, C. T.
A numerical model for ion implantation into spherical grains in free space has been developed. It can be applied to single grains or collections of grains with known grain-size distributions. Ion-scattering effects were taken into account using results of computer simulations. Possible isotope and element fractionation of the implanted species was investigated using this model. The astrophysical significance of the model lies in the possible identification of energetically different components (such as noble gases) implanted into presolar grains (such as diamond and SiC) and in establishing implantation energies of the components.
NASA Astrophysics Data System (ADS)
West, J. B.; Ehleringer, J. R.; Cerling, T.
2006-12-01
Understanding how the biosphere responds to change is at the heart of biogeochemistry, ecology, and other Earth sciences. The dramatic increase in human population and technological capacity over the past 200 years or so has resulted in numerous, simultaneous changes to biosphere structure and function. This, in turn, has led to increased urgency in the scientific community to understand how systems have already responded to these changes, and how they might do so in the future. Since all biospheric processes exhibit some patchiness or patterns over space, as well as time, we believe that understanding the dynamic interactions between natural systems and human technological manipulations can be improved if these systems are studied in an explicitly spatial context. We present here results of some of our efforts to model the spatial variation in the stable isotope ratios (δ2H and δ18O) of plants over large spatial extents, and how these spatial model predictions compare to spatially explicit data. Stable isotopes trace and record ecological processes and, if modeled correctly over Earth's surface, offer insights into changes in biosphere states and processes across spatial scales. The data-model comparisons show good agreement, in spite of the remaining uncertainties (e.g., plant source water isotopic composition). For example, inter-annual changes in climate are recorded in wine stable isotope ratios. Also, a much simpler model of leaf water enrichment driven with spatially continuous global rasters of precipitation and climate normals largely agrees with complex GCM modeling that includes leaf water δ18O. Our results suggest that modeling plant stable isotope ratios across large spatial extents may be done with reasonable accuracy, including over time. These spatial maps, or isoscapes, can now be utilized to help understand spatially distributed data, as well as to help guide future studies designed to understand ecological change across
NASA Astrophysics Data System (ADS)
Hauser, H.; Melikhov, Y.; Jiles, D. C.
2007-10-01
Two recent theoretical hysteresis models (Jiles-Atherton model and energetic model) are examined with respect to their capability to describe the dependence of the magnetization on magnetic field, microstructure, and anisotropy. It is shown that the classical Rayleigh law for the behavior of magnetization at low fields and the Stoner-Wohlfarth theory of domain magnetization rotation in noninteracting magnetic single domain particles can be considered as limiting cases of a more general theoretical treatment of hysteresis in ferromagnetism.
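The low-field Rayleigh regime mentioned in this abstract has a simple closed form. A minimal sketch, assuming the conventional symbols (chi_i for initial susceptibility, nu for the Rayleigh constant; the values below are illustrative, not from the paper):

```python
def rayleigh_magnetization(H, chi_i, nu):
    """Rayleigh law for the initial magnetization curve at low fields:
    M = chi_i*H + nu*H**2 (chi_i: initial susceptibility, nu: Rayleigh constant)."""
    return chi_i * H + nu * H * H
```

In both the Jiles-Atherton and energetic models, this quadratic behavior should emerge as the small-field limit.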
NASA Astrophysics Data System (ADS)
Toyokuni, Genti; Takenaka, Hiroshi
2012-06-01
We propose a method for modeling global seismic wave propagation through an attenuative Earth model including the center. This method enables accurate and efficient computations since it is based on the 2.5-D approach, which solves wave equations only on a 2-D cross section of the whole Earth and can correctly model 3-D geometrical spreading. We extend a numerical scheme for elastic waves in spherical coordinates using the finite-difference method (FDM) to solve the viscoelastodynamic equation. For computation of realistic seismic wave propagation, incorporation of anelastic attenuation is crucial. Since Earth material behaves as both an elastic solid and a viscous fluid, we must solve the stress-strain relations of viscoelastic material, including attenuative structures. These relations represent the stress as a convolution integral in time, which has long made viscoelasticity difficult to treat in time-domain computations such as the FDM. However, the so-called memory-variable method, invented in the 1980s and subsequently improved in Cartesian coordinates, overcomes this difficulty. Arbitrary values of the quality factor (Q) can be incorporated into the wave equation via an array of Zener bodies. We also introduce a multi-domain approach, an FD grid of several layers with different grid spacings, into our FDM scheme. This allows wider lateral grid spacings with depth, so as not to violate the FD stability criterion near the Earth's center. In addition, we propose a technique to avoid the singularity problem of the wave equation in spherical coordinates at the Earth's center. We develop a scheme to calculate wavefield variables at this point, based on linear interpolation for the velocity-stress, staggered-grid FDM. This scheme is validated through a comparison of synthetic seismograms with those obtained by the Direct Solution Method for a spherically symmetric Earth model, showing excellent accuracy for our FDM scheme. As a numerical example, we apply the method to simulate seismic
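The memory-variable idea that replaces the convolution integral can be sketched in scalar form for a single Zener body. This is a hedged illustration, not the paper's scheme: M_u/M_r are unrelaxed/relaxed moduli and tau a relaxation time, all illustrative names.

```python
import math

def relax_stress(eps_history, dt, M_u, M_r, tau):
    """March one memory variable r for a standard linear solid (Zener body).

    Stress: sigma = M_u*eps + r, with dr/dt = -(r + (M_u - M_r)*eps)/tau,
    so sigma relaxes from M_u*eps (unrelaxed) toward M_r*eps (relaxed)
    without ever storing the full strain history.
    """
    r = 0.0
    a = math.exp(-dt / tau)  # exact integrator for the linear memory ODE
    sigmas = []
    for eps in eps_history:
        r = a * r - (1.0 - a) * (M_u - M_r) * eps
        sigmas.append(M_u * eps + r)
    return sigmas
```

For a step strain the computed stress starts near the unrelaxed modulus and decays to the relaxed one, which is exactly the creep behavior the convolution would produce.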
Theoretical model of impact damage in structural ceramics
NASA Technical Reports Server (NTRS)
Liaw, B. M.; Kobayashi, A. S.; Emery, A. G.
1984-01-01
This paper presents a mechanistically consistent model of impact damage based on elastic failures due to tensile and shear overloading. An elastic axisymmetric finite element model is used to determine the dynamic stresses generated by a single particle impact. Local failures in a finite element are assumed to occur when the primary/secondary principal stresses or the maximum shear stress reach critical tensile or shear stresses, respectively. The succession of failed elements thus models macrocrack growth. Sliding motions of cracks, which closed during unloading, are resisted by friction and the unrecovered deformation represents the 'plastic deformation' reported in the literature. The predicted ring cracks on the contact surface, as well as the cone cracks, median cracks, radial cracks, lateral cracks, and damage-induced porous zones in the interior of hot-pressed silicon nitride plates, matched those observed experimentally. The finite element model also predicted the uplifting of the free surface surrounding the impact site.
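The element-failure rule described above is a simple threshold test. As a hedged sketch (argument names are illustrative, not the paper's notation):

```python
def element_fails(sigma_1, sigma_2, tau_max, sigma_crit_t, tau_crit):
    """Local failure test from the impact-damage model: an element fails when
    a primary/secondary principal stress reaches the critical tensile stress,
    or the maximum shear stress reaches the critical shear stress."""
    return max(sigma_1, sigma_2) >= sigma_crit_t or tau_max >= tau_crit
```

Sweeping this test over the finite elements after each time step, and removing failed elements, is what lets the succession of failures trace out macrocrack growth.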
Theoretical models for duct acoustic propagation and radiation
NASA Technical Reports Server (NTRS)
Eversman, Walter
1991-01-01
The development of computational methods in acoustics has led to the introduction of analysis and design procedures which model the turbofan inlet as a coupled system, simultaneously modeling propagation and radiation in the presence of realistic internal and external flows. Such models are generally large, require substantial computer speed and capacity, and can be expected to be used in the final design stages, with the simpler models being used in the early design iterations. Emphasis is given to practical modeling methods that have been applied to the acoustical design problem in turbofan engines. The mathematical model is established and the simplest case of propagation in a duct with hard walls is solved to introduce concepts and terminologies. An extensive overview is given of methods for the calculation of attenuation in uniform ducts with uniform flow and with shear flow. Subsequent sections deal with numerical techniques which provide an integrated representation of duct propagation and near- and far-field radiation for realistic geometries and flight conditions.
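The hard-walled duct case used above to introduce concepts reduces to a dispersion relation per transverse mode. A minimal sketch for a 2-D duct of height h with no flow (the formula is the textbook hard-wall result, not taken from this survey):

```python
import cmath

def axial_wavenumber(k, m, h):
    """Hard-walled 2-D duct: mode m has transverse wavenumber m*pi/h and
    axial wavenumber k_z = sqrt(k**2 - (m*pi/h)**2). A purely imaginary k_z
    means the mode is cut off and decays exponentially along the duct."""
    return cmath.sqrt(k**2 - (m * cmath.pi / h)**2)
```

The cut-on/cut-off distinction this encodes is what determines which modes propagate to the far field and must be retained in an attenuation calculation.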
Learning models of PTSD: Theoretical accounts and psychobiological evidence.
Lissek, Shmuel; van Meurs, Brian
2015-12-01
Learning abnormalities have long been centrally implicated in posttraumatic psychopathology. Indeed, of all anxiety disorders, PTSD may be most clearly attributable to discrete, aversive learning events. In PTSD, such learning is acquired during the traumatic encounter and is expressed as both conditioned fear to stimuli associated with the event and more general over-reactivity (or failure to adapt) to intense, novel, or fear-related stimuli. The relatively straightforward link between PTSD and these basic, evolutionarily old, learning processes of conditioning, sensitization, and habituation affords models of PTSD comprised of fundamental, experimentally tractable mechanisms of learning that have been well characterized across a variety of mammalian species including humans. Though such learning mechanisms have featured prominently in explanatory models of psychological maladjustment to trauma for at least 90 years, much of the empirical testing of these models has occurred only in the past two decades. The current review delineates the variety of theories forming this longstanding tradition of learning-based models of PTSD, details empirical evidence for such models, attempts an integrative account of results from this literature, and specifies limitations of, and future directions for, studies testing learning models of PTSD. PMID:25462219
Design theoretic analysis of three system modeling frameworks.
McDonald, Michael James
2007-05-01
This paper analyzes three simulation architectures in the context of modeling scalability to address System of Systems (SoS) and complex system problems. The paper first provides an overview of the SoS problem domain and reviews past work in analyzing model and general system complexity issues. It then identifies and explores the issues of vertical and horizontal integration, as well as coupling and hierarchical decomposition, as the system characteristics and metrics against which the tools are evaluated. In addition, it applies Nam Suh's Axiomatic Design theory as a construct for understanding coupling and its relationship to system feasibility. Next it describes the application of MATLAB, Swarm, and Umbra (three modeling and simulation approaches) to modeling swarms of unmanned aerial vehicle (UAV) agents in relation to the chosen characteristics and metrics. Finally, it draws general conclusions for analyzing model architectures that go beyond those analyzed. In particular, it identifies decomposition along phenomena of interaction and modular system composition as enabling features for modeling large heterogeneous complex systems.
Pal, Saikat; Lindsey, Derek P.; Besier, Thor F.; Beaupre, Gary S.
2013-01-01
Cartilage material properties provide important insights into joint health, and cartilage material models are used in whole-joint finite element models. Although the biphasic model representing experimental creep indentation tests is commonly used to characterize cartilage, cartilage short-term response to loading is generally not characterized using the biphasic model. The purpose of this study was to determine the short-term and equilibrium material properties of human patella cartilage using a viscoelastic model representation of creep indentation tests. We performed 24 experimental creep indentation tests from 14 human patellar specimens ranging in age from 20 to 90 years (median age 61 years). We used a finite element model to reproduce the experimental tests and determined cartilage material properties from viscoelastic and biphasic representations of cartilage. The viscoelastic model consistently provided excellent representation of the short-term and equilibrium creep displacements. We determined initial elastic modulus, equilibrium elastic modulus, and equilibrium Poisson’s ratio using the viscoelastic model. The viscoelastic model can represent the short-term and equilibrium response of cartilage and may easily be implemented in whole-joint finite element models. PMID:23027200
Efficient and accurate approach to modeling the microstructure and defect properties of LaCoO3
NASA Astrophysics Data System (ADS)
Buckeridge, J.; Taylor, F. H.; Catlow, C. R. A.
2016-04-01
Complex perovskite oxides are promising materials for cathode layers in solid oxide fuel cells. Such materials have intricate electronic, magnetic, and crystalline structures that prove challenging to model accurately. We analyze a wide range of standard density functional theory approaches to modeling a highly promising system, the perovskite LaCoO3, focusing on optimizing the Hubbard U parameter to treat the self-interaction of the B-site cation's d states, in order to determine the most appropriate method to study defect formation and the effect of spin on local structure. By calculating structural and electronic properties for different magnetic states we determine that U = 4 eV for Co in LaCoO3 agrees best with available experiments. We demonstrate that the generalized gradient approximation (PBEsol+U) is most appropriate for studying structure versus spin state, while the local density approximation (LDA+U) is most appropriate for determining accurate energetics for defect properties.
Theoretical and computational models of biological ion channels
NASA Astrophysics Data System (ADS)
Roux, Benoit
2004-03-01
A theoretical framework for describing ion conduction through biological molecular pores is established and explored. The framework is based on a statistical mechanical formulation of the transmembrane potential (1) and of the equilibrium multi-ion potential of mean force through selective ion channels (2). On the basis of these developments, it is possible to define computational schemes to address questions about the non-equilibrium flow of ions through ion channels. In the case of narrow channels (gramicidin or KcsA), it is possible to characterize the ion conduction in terms of the potential of mean force of the ions along the channel axis (i.e., integrating out the off-axis motions). This has been used for gramicidin (3) and for KcsA (4,5). In the case of wide pores (e.g., OmpF porin), this is no longer appropriate, but it is possible to use a continuum solvent approximation. In this case, a grand canonical Monte Carlo Brownian dynamics algorithm was constructed for simulating the non-equilibrium flow of ions through wide pores. The results were compared with those from the Poisson-Nernst-Planck mean-field electrodiffusion theory (6-8). References: 1. B. Roux, Biophys. J. 73:2980-2989 (1997); 2. B. Roux, Biophys. J. 77:139-153 (1999); 3. Allen, Andersen and Roux, PNAS (2004, in press); 4. Berneche and Roux, Nature 414:73-77 (2001); 5. Berneche and Roux, PNAS 100:8644-8648 (2003); 6. W. Im, S. Seefeld and B. Roux, Biophys. J. 79:788-801 (2000); 7. W. Im and B. Roux, J. Chem. Phys. 115:4850-4861 (2001); 8. W. Im and B. Roux, J. Mol. Biol. 322:851-869 (2002).
Ray-theoretical modeling of secondary microseism P-waves
NASA Astrophysics Data System (ADS)
Farra, V.; Stutzmann, E.; Gualtieri, L.; Schimmel, M.; Ardhuin, F.
2016-06-01
Secondary microseism sources are pressure fluctuations close to the ocean surface. They generate acoustic P-waves that propagate in water down to the ocean bottom where they are partly reflected, and partly transmitted into the crust to continue their propagation through the Earth. We present the theory for computing the displacement power spectral density of secondary microseism P-waves recorded by receivers in the far field. In the frequency domain, the P-wave displacement can be modeled as the product of (1) the pressure source, (2) the source site effect that accounts for the constructive interference of multiply reflected P-waves in the ocean, (3) the propagation from the ocean bottom to the stations, (4) the receiver site effect. Secondary microseism P-waves have weak amplitudes, but they can be investigated by beamforming analysis. We validate our approach by analyzing the seismic signals generated by Typhoon Ioke (2006) and recorded by the Southern California Seismic Network. Back projecting the beam onto the ocean surface enables us to follow the source motion. The observed beam centroid is in the vicinity of the pressure source derived from the ocean wave model WAVEWATCH III. The pressure source is then used for modeling the beam, and good agreement is obtained between measured and modeled beam amplitude variation over time. This modeling approach can be used to invert P-wave noise data and retrieve the source intensity and lateral extent.
Ray-theoretical modeling of secondary microseism P waves
NASA Astrophysics Data System (ADS)
Farra, V.; Stutzmann, E.; Gualtieri, L.; Schimmel, M.; Ardhuin, F.
2016-09-01
Secondary microseism sources are pressure fluctuations close to the ocean surface. They generate acoustic P waves that propagate in water down to the ocean bottom where they are partly reflected and partly transmitted into the crust to continue their propagation through the Earth. We present the theory for computing the displacement power spectral density of secondary microseism P waves recorded by receivers in the far field. In the frequency domain, the P-wave displacement can be modeled as the product of (1) the pressure source, (2) the source site effect that accounts for the constructive interference of multiply reflected P waves in the ocean, (3) the propagation from the ocean bottom to the stations and (4) the receiver site effect. Secondary microseism P waves have weak amplitudes, but they can be investigated by beamforming analysis. We validate our approach by analysing the seismic signals generated by typhoon Ioke (2006) and recorded by the Southern California Seismic Network. Backprojecting the beam onto the ocean surface enables us to follow the source motion. The observed beam centroid is in the vicinity of the pressure source derived from the ocean wave model WAVEWATCH III. The pressure source is then used for modeling the beam, and good agreement is obtained between measured and modeled beam amplitude variation over time. This modeling approach can be used to invert P-wave noise data and retrieve the source intensity and lateral extent.
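The four-factor product described in this abstract multiplies power spectra. A toy sketch under stated assumptions: the ocean-column site effect below is a simplified bounce series with free-surface reflection -1 and bottom reflection coefficient r_bottom, standing in for the paper's actual site-effect term, and all arguments (depth h, water P velocity alpha_w, propagation and receiver factors) are illustrative.

```python
import cmath

def ocean_site_effect(omega, h, alpha_w, r_bottom):
    """Toy amplification from multiply reflected P waves in a water column of
    depth h: geometric series of surface/bottom bounces, free surface R = -1."""
    phase = cmath.exp(2j * omega * h / alpha_w)  # two-way vertical travel time
    return abs(1.0 / (1.0 + r_bottom * phase))   # |sum of the bounce series|

def pwave_psd(omega, source_psd, h, alpha_w, r_bottom, propagation, receiver):
    """Displacement PSD as the product of the four factors in the abstract:
    source x (ocean site effect) x propagation x (receiver site effect)."""
    return (source_psd * ocean_site_effect(omega, h, alpha_w, r_bottom) ** 2
            * propagation ** 2 * receiver ** 2)
```

The resonance of the bounce series (when the two-way phase makes the bounces interfere constructively) is what makes the source site effect strongly frequency- and depth-dependent.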
NASA Astrophysics Data System (ADS)
Zakrzewski, Jakub; Delande, Dominique
2008-11-01
The quantum phase transition point between the insulator and the superfluid phase at unit filling factor of the infinite one-dimensional Bose-Hubbard model is numerically computed with high accuracy. The method uses the infinite-system version of the time-evolving block decimation algorithm, here tested in a challenging case. We also provide an accurate estimate of the phase transition point at double occupancy.
Theoretical modeling of electron mobility in superfluid 4He
NASA Astrophysics Data System (ADS)
Aitken, Frédéric; Bonifaci, Nelly; von Haeften, Klaus; Eloranta, Jussi
2016-07-01
The Orsay-Trento bosonic density functional theory model is extended to include dissipation due to the viscous response of superfluid 4He present at finite temperatures. The viscous functional is derived from the Navier-Stokes equation by using the Madelung transformation and includes the contribution of interfacial viscous response present at the gas-liquid boundaries. This contribution was obtained by calibrating the model against the experimentally determined electron mobilities from 1.2 K to 2.1 K along the saturated vapor pressure line, where the viscous response is dominated by thermal rotons. The temperature dependence of ion mobility was calculated for several different solvation cavity sizes and the data are rationalized in the context of roton scattering and Stokes limited mobility models. Results are compared to the experimentally observed "exotic ion" data, which provides estimates for the corresponding bubble sizes in the liquid. Possible sources of such ions are briefly discussed.
Theoretical modeling of electron mobility in superfluid (4)He.
Aitken, Frédéric; Bonifaci, Nelly; von Haeften, Klaus; Eloranta, Jussi
2016-07-28
The Orsay-Trento bosonic density functional theory model is extended to include dissipation due to the viscous response of superfluid (4)He present at finite temperatures. The viscous functional is derived from the Navier-Stokes equation by using the Madelung transformation and includes the contribution of interfacial viscous response present at the gas-liquid boundaries. This contribution was obtained by calibrating the model against the experimentally determined electron mobilities from 1.2 K to 2.1 K along the saturated vapor pressure line, where the viscous response is dominated by thermal rotons. The temperature dependence of ion mobility was calculated for several different solvation cavity sizes and the data are rationalized in the context of roton scattering and Stokes limited mobility models. Results are compared to the experimentally observed "exotic ion" data, which provides estimates for the corresponding bubble sizes in the liquid. Possible sources of such ions are briefly discussed. PMID:27475346
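The Stokes-limited mobility model invoked in both records above has a one-line form. A hedged sketch (the formula is the standard Stokes drag result mu = q/(6*pi*eta*R); the radius and viscosity values in the test are illustrative, not the paper's fitted values):

```python
import math

def stokes_mobility(radius_m, viscosity_pa_s, charge_c=1.602176634e-19):
    """Stokes-limited ion mobility mu = q / (6*pi*eta*R) (SI units) for a
    spherical bubble of radius R dragged through a fluid of viscosity eta."""
    return charge_c / (6.0 * math.pi * viscosity_pa_s * radius_m)
```

Because mobility scales as 1/R in this limit, measured mobilities of the "exotic ions" translate directly into estimates of their bubble radii.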
A control theoretic model of driver steering behavior
NASA Technical Reports Server (NTRS)
Donges, E.
1977-01-01
A quantitative description of driver steering behavior, such as a mathematical model, is presented. The steering task is divided into two levels: (1) the guidance level, involving the perception of the instantaneous and future course of the forcing function provided by the forward view of the road, and the response to it in an anticipatory open-loop control mode; (2) the stabilization level, whereby any occurring deviations from the forcing function are compensated for in a closed-loop control mode. This concept of the duality of the driver's steering activity led to a newly developed two-level model of driver steering behavior. Its parameters are identified on the basis of data measured in driving simulator experiments. The parameter estimates of both levels of the model show significant dependence on the experimental situation, which can be characterized by variables such as vehicle speed and desired path curvature.
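The two-level structure described above (open-loop anticipation plus closed-loop compensation) can be sketched as a single control law. This is a minimal illustration, not Donges' identified model: the gains and the linear feedback form are assumptions.

```python
def steering_command(preview_curvature, lateral_error, heading_error,
                     k_ff=1.0, k_y=0.5, k_psi=1.5):
    """Two-level steering sketch: an anticipatory open-loop term driven by the
    previewed road curvature (guidance level), plus closed-loop compensation
    of lateral and heading deviations (stabilization level).
    Gains k_ff, k_y, k_psi are illustrative placeholders."""
    guidance = k_ff * preview_curvature            # open-loop anticipation
    stabilization = -k_y * lateral_error - k_psi * heading_error
    return guidance + stabilization
```

On a perfectly tracked path the stabilization term vanishes and the command is pure anticipation, which is the behavior the two-level decomposition is meant to capture.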
Flavor symmetry based MSSM: Theoretical models and phenomenological analysis
NASA Astrophysics Data System (ADS)
Babu, K. S.; Gogoladze, Ilia; Raza, Shabbar; Shafi, Qaisar
2014-09-01
We present a class of supersymmetric models in which symmetry considerations alone dictate the form of the soft SUSY breaking Lagrangian. We develop a class of minimal models, denoted as sMSSM—for flavor symmetry-based minimal supersymmetric standard model—that respect a grand unified symmetry such as SO(10) and a non-Abelian flavor symmetry H which suppresses SUSY-induced flavor violation. Explicit examples are constructed with the flavor symmetry being gauged SU(2)H and SO(3)H with the three families transforming as 2+1 and 3 representations, respectively. A simple solution is found in the case of SU(2)H for suppressing the flavor violating D-terms based on an exchange symmetry. Explicit models based on SO(3)H without the D-term problem are developed. In addition, models based on discrete non-Abelian flavor groups are presented which are automatically free from D-term issues. The permutation group S3 with a 2+1 family assignment, as well as the tetrahedral group A4 with a 3 assignment are studied. In all cases, a simple solution to the SUSY CP problem is found, based on spontaneous CP violation leading to a complex quark mixing matrix. We develop the phenomenology of the resulting sMSSM, which is controlled by seven soft SUSY breaking parameters for both the 2+1 assignment and the 3 assignment of fermion families. These models are special cases of the phenomenological MSSM (pMSSM), but with symmetry restrictions. We discuss the parameter space of sMSSM compatible with LHC searches, B-physics constraints and dark matter relic abundance. Fine-tuning in these models is relatively mild, since all SUSY particles can have masses below about 3 TeV.
Some theoretical and computational aspects of a simplified subchannel model
Neil, C.H.
1983-01-01
Some recently obtained results are presented concerning the qualitative behavior of solutions to equations governing a simplified subchannel model for reactor hydrodynamics. The model describes time-independent flow of an incompressible fluid in two parallel, interconnected channels, subject to axial and lateral pressure drops defined by a Darcy friction factor. The phase portrait for the system of ordinary differential equations is presented, a solution to a boundary-value problem describing flow blockage is discussed, and the effect of the qualitative behavior of solutions on their numerical approximation is examined. The study was undertaken to determine the cause of numerical difficulty in approximating solutions to problems.
Accurate analytical method for the extraction of solar cell model parameters
NASA Astrophysics Data System (ADS)
Phang, J. C. H.; Chan, D. S. H.; Phillips, J. R.
1984-05-01
Single-diode solar cell model parameters are rapidly extracted from experimental data by means of the analytical expressions derived here. The extracted parameter values have less than 5 percent error for most solar cells, as demonstrated by extracting the model parameters of two cells of differing quality and comparing them with parameters obtained by the iterative method.
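The single-diode model behind these extracted parameters is implicit in the current, so evaluating it requires an iterative solve even once the parameters are known. A hedged sketch (parameter names are the conventional ones; the Newton solve and all numeric values are our own illustration, not the paper's analytical extraction):

```python
import math

def diode_current(V, I_ph, I_s, n, R_s, R_sh, V_t=0.02585, iters=60):
    """Solve the implicit single-diode equation
        I = I_ph - I_s*(exp((V + I*R_s)/(n*V_t)) - 1) - (V + I*R_s)/R_sh
    for I at terminal voltage V by Newton iteration.
    I_ph: photocurrent, I_s: saturation current, n: ideality factor,
    R_s/R_sh: series/shunt resistance, V_t: thermal voltage (~300 K)."""
    I = I_ph  # the short-circuit current is a good starting guess
    for _ in range(iters):
        e = math.exp((V + I * R_s) / (n * V_t))
        f = I_ph - I_s * (e - 1.0) - (V + I * R_s) / R_sh - I
        df = -I_s * e * R_s / (n * V_t) - R_s / R_sh - 1.0
        I -= f / df
    return I
```

The appeal of the paper's analytical expressions is precisely that they recover I_ph, I_s, n, R_s, and R_sh from a few measured I-V features without wrapping this solve in an outer fitting loop.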
Active appearance model and deep learning for more accurate prostate segmentation on MRI
NASA Astrophysics Data System (ADS)
Cheng, Ruida; Roth, Holger R.; Lu, Le; Wang, Shijun; Turkbey, Baris; Gandler, William; McCreedy, Evan S.; Agarwal, Harsh K.; Choyke, Peter; Summers, Ronald M.; McAuliffe, Matthew J.
2016-03-01
Prostate segmentation on 3D MR images is a challenging task due to image artifacts, large inter-patient prostate shape and texture variability, and lack of a clear prostate boundary specifically at apex and base levels. We propose a supervised machine learning model that combines atlas based Active Appearance Model (AAM) with a Deep Learning model to segment the prostate on MR images. The performance of the segmentation method is evaluated on 20 unseen MR image datasets. The proposed method combining AAM and Deep Learning achieves a mean Dice Similarity Coefficient (DSC) of 0.925 for whole 3D MR images of the prostate using axial cross-sections. The proposed model utilizes the adaptive atlas-based AAM model and Deep Learning to achieve significant segmentation accuracy.
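The Dice Similarity Coefficient used to report the 0.925 result above is a simple overlap measure between binary masks. A minimal sketch:

```python
def dice(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks (flat sequences):
    DSC = 2*|A intersect B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(map(bool, mask_a)) + sum(map(bool, mask_b))
    return 2.0 * inter / size if size else 1.0
```

In practice the two masks would be the flattened predicted and ground-truth prostate segmentations of each axial slice or 3D volume.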
Fast and accurate Monte Carlo sampling of first-passage times from Wiener diffusion models
Drugowitsch, Jan
2016-01-01
We present a new, fast approach for drawing boundary-crossing samples from Wiener diffusion models. Diffusion models are widely applied to model choices and reaction times in two-choice decisions. Samples from these models can be used to simulate the choices and reaction times they predict. These samples, in turn, can be used to adjust the models' parameters to match observed behavior from humans and other animals. Usually, such samples are drawn by simulating a stochastic differential equation in discrete time steps, which is slow and leads to biases in the reaction time estimates. Our method instead exploits known expressions for first-passage time densities, which results in unbiased, exact samples and a hundred- to thousand-fold speed increase in typical situations. In its most basic form it is restricted to diffusion models with symmetric boundaries and non-leaky accumulation, but our approach can be extended to handle asymmetric boundaries or to approximate leaky accumulation. PMID:26864391
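The slow, biased baseline that this paper improves on is the discrete-time simulation of the diffusion itself. A hedged sketch of that baseline, for a drift-diffusion process with symmetric bounds (the function and its defaults are our own illustration; the paper's method samples from closed-form first-passage densities instead):

```python
import random

def naive_fpt_sample(drift, bound, dt=1e-3, max_t=10.0, rng=random):
    """Baseline Euler-Maruyama sampler: simulate dx = drift*dt + dW in steps
    of dt until x crosses +bound (choice 1) or -bound (choice 0).
    The discrete steps overshoot the boundary, biasing crossing times."""
    x, t, sd = 0.0, 0.0, dt ** 0.5
    while t < max_t:
        x += drift * dt + sd * rng.gauss(0.0, 1.0)
        t += dt
        if x >= bound:
            return t, 1
        if x <= -bound:
            return t, 0
    return max_t, None  # no crossing within the simulation horizon
```

Each sample costs on the order of (crossing time)/dt Gaussian draws, which is why exact sampling from the known first-passage densities yields the reported hundred- to thousand-fold speedups.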
D’Adamo, Giuseppe; Pelissetto, Andrea; Pierleoni, Carlo
2014-12-28
A coarse-graining strategy, previously developed for polymer solutions, is extended here to mixtures of linear polymers and hard-sphere colloids. In this approach, groups of monomers are mapped onto a single pseudoatom (a blob) and the effective blob-blob interactions are obtained by requiring the model to reproduce some large-scale structural properties in the zero-density limit. We show that an accurate parametrization of the polymer-colloid interactions is obtained by simply introducing pair potentials between blobs and colloids. For the coarse-grained (CG) model in which polymers are modelled as four-blob chains (tetramers), the pair potentials are determined by means of the iterative Boltzmann inversion scheme, taking full-monomer (FM) pair correlation functions at zero density as targets. For a larger number n of blobs, pair potentials are determined by using a simple transferability assumption based on the polymer self-similarity. We validate the model by comparing its predictions with full-monomer results for the interfacial properties of polymer solutions in the presence of a single colloid and for thermodynamic and structural properties in the homogeneous phase at finite polymer and colloid density. The tetramer model is quite accurate for q ≲ 1 (q = Rg/Rc, where Rg is the zero-density polymer radius of gyration and Rc is the colloid radius) and reasonably good also for q = 2. For q = 2, an accurate coarse-grained description is obtained by using the n = 10 blob model. We also compare our results with those obtained by using single-blob models with state-dependent potentials.
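The iterative Boltzmann inversion scheme used to fit the blob-colloid potentials has a one-line update rule per iteration. A minimal sketch on a tabulated potential (the damping factor alpha and the list-based tabulation are illustrative):

```python
import math

def ibi_update(V, g_current, g_target, kT=1.0, alpha=1.0):
    """One iterative Boltzmann inversion step on a tabulated pair potential:
        V_{k+1}(r) = V_k(r) + alpha * kT * ln(g_k(r) / g_target(r)).
    Where the model over-populates a separation (g_k > g_target) the
    potential is made more repulsive there, and vice versa."""
    return [v + alpha * kT * math.log(gc / gt)
            for v, gc, gt in zip(V, g_current, g_target)]
```

Iterating this update, with g_k recomputed by simulation after each step, drives the CG model's pair correlation function onto the full-monomer target.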
Accurate calculation of binding energies for molecular clusters - Assessment of different models
NASA Astrophysics Data System (ADS)
Friedrich, Joachim; Fiedler, Benjamin
2016-06-01
In this work we test different strategies to compute high-level benchmark energies for medium-sized molecular clusters. We use the incremental scheme to obtain CCSD(T)/CBS energies for our test set and carefully validate the accuracy for binding energies by statistical measures. The local errors of the incremental scheme are <1 kJ/mol. Since they are smaller than the basis set errors, we obtain higher total accuracy due to the applicability of larger basis sets. The final CCSD(T)/CBS benchmark values are ΔE = -278.01 kJ/mol for (H2O)10, ΔE = -221.64 kJ/mol for (HF)10, ΔE = -45.63 kJ/mol for (CH4)10, ΔE = -19.52 kJ/mol for (H2)20 and ΔE = -7.38 kJ/mol for (H2)10. Furthermore we test state-of-the-art wave-function-based and DFT methods. Our benchmark data will be very useful for critical validations of new methods. We find focal-point methods for estimating CCSD(T)/CBS energies to be highly accurate and efficient. For foQ-i3CCSD(T)-MP2/TZ we get a mean error of 0.34 kJ/mol and a standard deviation of 0.39 kJ/mol.
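The focal-point idea praised above is simple arithmetic: a cheap method at the basis-set limit plus a high-level correction evaluated in a small basis. A hedged sketch of the generic composite (the additivity assumption is the method; the function name and test values are illustrative):

```python
def focal_point_energy(e_mp2_cbs, e_cc_small, e_mp2_small):
    """Generic focal-point estimate:
        E[CCSD(T)/CBS] ~ E[MP2/CBS] + (E[CCSD(T)/small] - E[MP2/small]),
    assuming the CCSD(T)-MP2 correlation correction is nearly
    basis-set independent."""
    return e_mp2_cbs + (e_cc_small - e_mp2_small)
```

The reported 0.34 kJ/mol mean error for foQ-i3CCSD(T)-MP2/TZ quantifies how well this additivity assumption holds on their cluster test set.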
conSSert: Consensus SVM Model for Accurate Prediction of Ordered Secondary Structure.
Kieslich, Chris A; Smadbeck, James; Khoury, George A; Floudas, Christodoulos A
2016-03-28
Accurate prediction of protein secondary structure remains a crucial step in most approaches to the protein-folding problem, yet the prediction of ordered secondary structure, specifically beta-strands, remains a challenge. We developed a consensus secondary structure prediction method, conSSert, which is based on support vector machines (SVM) and provides exceptional accuracy for the prediction of beta-strands with QE accuracy of over 0.82 and a Q2-EH of 0.86. conSSert uses as input probabilities for the three types of secondary structure (helix, strand, and coil) that are predicted by four top performing methods: PSSpred, PSIPRED, SPINE-X, and RAPTOR. conSSert was trained/tested using 4261 protein chains from PDBSelect25, and 8632 chains from PISCES. Further validation was performed using targets from CASP9, CASP10, and CASP11. Our data suggest that poor performance in strand prediction is likely a result of training bias and not solely due to the nonlocal nature of beta-sheet contacts. conSSert is freely available for noncommercial use as a webservice: http://ares.tamu.edu/conSSert/ . PMID:26928531
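conSSert feeds the per-residue (helix, strand, coil) probability triplets from the four base predictors into trained SVMs; as a point of comparison, the naive consensus those SVMs improve upon is a simple probability average. A hedged sketch of that baseline only (not conSSert's actual classifier):

```python
def consensus_ss(prob_sets):
    """Naive consensus baseline: average the (helix, strand, coil)
    probability triplets from several predictors for one residue and pick
    the argmax, encoded as 'H', 'E', or 'C'."""
    n = len(prob_sets)
    avg = [sum(p[i] for p in prob_sets) / n for i in range(3)]
    return "HEC"[avg.index(max(avg))]
```

A trained combiner such as an SVM can outperform this average precisely because it learns, per class, how much to trust each base predictor.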
Aging and Interdependence: A Theoretical Model for Close Relationships.
ERIC Educational Resources Information Center
Blieszner, Rosemary
This paper demonstrates the utility of interdependence theory for understanding older persons' social relationships. Using friendship as an exemplary case, a model of expectations for and reactions to social exchanges is described. Exchanges which are perceived to be motivated by obligation are distinguished from those which are perceived to…
Testing Theoretical Models of Magnetic Damping Using an Air Track
ERIC Educational Resources Information Center
Vidaurre, Ana; Riera, Jaime; Monsoriu, Juan A.; Gimenez, Marcos H.
2008-01-01
Magnetic braking is a long-established application of Lenz's law. A rigorous analysis of the laws governing this problem involves solving Maxwell's equations in a time-dependent situation. Approximate models have been developed to describe different experimental results related to this phenomenon. In this paper we present a new method for the…
Interpreting Unfamiliar Graphs: A Generative, Activity Theoretic Model
ERIC Educational Resources Information Center
Roth, Wolff-Michael; Lee, Yew Jin
2004-01-01
Research on graphing presents its results as if knowing and understanding were something stored in peoples' minds independent of the situation that they find themselves in. Thus, there are no models that situate interview responses to graphing tasks. How, then, we question, are the interview texts produced? How do respondents begin and end…
[Theoretical model for rocky desertification control in karst area].
Liang, Liang; Liu, Zhi-Xiao; Zhang, Dai-Gui; Deng, Kai-Dong; Zhang, You-Xiang
2007-03-01
Based on the basic principles of restoration ecology, a trigger-action model for rocky desertification control was proposed: the ability of an ecosystem to develop on its own was called the dominant force, and an interfering factor that causes ecological succession to deviate from its expected climax was called a trigger factor. The ultimate status of ecological succession is determined by the interaction of the dominant force and trigger factors. Rocky desertification is the result of severe malignant triggers, and its control is a process of benign triggering, in which artificially designed ecological restoration measures activate the ecosystem's capacity for natural self-design. The karst rocky desertification ecosystem of Fenghuang County under restoration measures was taken as a case to test the model. The results showed that restoration measures based on the trigger-action model markedly improved the physical and chemical properties of the soil and increased plant diversity; a benign trigger operated between the restoration measures and the karst area. The rationality of the trigger-action model was thus preliminarily validated in practice. PMID:17552199
SBS mitigation with 'two-tone' amplification: a theoretical model
NASA Astrophysics Data System (ADS)
Bronder, T. J.; Shay, T. M.; Dajani, I.; Gavrielides, A.; Robin, C. A.; Lu, C. A.
2008-02-01
A new technique for mitigating stimulated Brillouin scattering (SBS) effects in narrow-linewidth Yb-doped fiber amplifiers is demonstrated with a model that reduces to solving an 8×8 system of coupled nonlinear equations with the gain, SBS, and four-wave mixing (FWM) incorporated into the model. This technique uses two seed signals, or 'two-tones', with each tone reaching its SBS threshold almost independently and thus increasing the overall threshold for SBS in the fiber amplifier. The wavelength separation of these signals is also selected to avoid FWM, which in this case possesses the next lowest nonlinear effects threshold. This model predicts an output power increase of 86% (at SBS threshold with no signs of FWM) for a 'two-tone' amplifier with seed signals at 1064nm and 1068nm, compared to a conventional fiber amplifier with a single 1064nm seed. The model is also used to simulate an SBS-suppressing fiber amplifier to test the regime where FWM is the limiting factor. In this case, an optimum wavelength separation of 3nm to 10nm prevents FWM from reaching threshold. The optimum ratio of the input power for the two seed signals in 'two-tone' amplification is also tested. Future experimental verification of this 'two-tone' technique is discussed.
Photoabsorption spectrum of helium trimer cation—Theoretical modeling
NASA Astrophysics Data System (ADS)
Kalus, René; Karlický, František; Lepetit, Bruno; Paidarová, Ivana; Gadea, Florent Xavier
2013-11-01
The photoabsorption spectrum of He_3^+ is calculated for two semiempirical models of intracluster interactions and compared with available experimental data reported in the middle UV range [H. Haberland and B. von Issendorff, J. Chem. Phys. 102, 8773 (1995)]. Nuclear delocalization effects are investigated via several approaches comprising quantum samplings using either exact or approximate (harmonic) nuclear wavefunctions, as well as classical samplings based on the Monte Carlo methodology. Good agreement with the experiment is achieved for the model by Knowles et al., [Mol. Phys. 85, 243 (1995); Mol. Phys. 87, 827 (1996)] whereas the model by Calvo et al., [J. Chem. Phys. 135, 124308 (2011)] exhibits non-negligible deviations from the experiment. Predictions of far UV absorption spectrum of He_3^+, for which no experimental data are presently available, are reported for both models and compared to each other as well as to the photoabsorption spectrum of He_2^+. A simple semiempirical point-charge approximation for calculating transition probabilities is shown to perform well for He_3^+.
Toward a Theoretical Model of Employee Turnover: A Human Resource Development Perspective
ERIC Educational Resources Information Center
Peterson, Shari L.
2004-01-01
This article sets forth the Organizational Model of Employee Persistence, influenced by traditional turnover models and a student attrition model. The model was developed to clarify the impact of organizational practices on employee turnover from a human resource development (HRD) perspective and provide a theoretical foundation for research on…
Theoretical transport modeling of Ohmic cold pulse experiments
NASA Astrophysics Data System (ADS)
Kinsey, J. E.; Waltz, R. E.; St. John, H. E.
1998-11-01
The response of several theory-based transport models in Ohmically heated tokamak discharges to rapid edge cooling due to trace impurity injection is studied. Results are presented for the Institute for Fusion Studies—Princeton Plasma Physics Laboratory (IFS/PPPL), gyro-Landau-fluid (GLF23), Multi-mode (MM), and the Itoh-Itoh-Fukuyama (IIF) transport models with an emphasis on results from the Texas Experimental Tokamak (TEXT) [K. W. Gentle, Nucl. Technol./Fusion 1, 479 (1981)]. It is found that critical gradient models containing a strong ion and electron temperature ratio dependence can exhibit behavior that is qualitatively consistent with experimental observation while depending solely on local parameters. The IFS/PPPL model yields the strongest response and demonstrates both rapid radial pulse propagation and a noticeable increase in the central electron temperature following a cold edge temperature pulse (amplitude reversal). Furthermore, the amplitude reversal effect is predicted to diminish with increasing electron density and auxiliary heating in agreement with experimental data. An Ohmic pulse heating effect due to rearrangement of the current profile is shown to contribute to the rise in the core electron temperature in TEXT, but not in the Joint European Tokamak (JET) [A. Tanga and the JET Team, in Plasma Physics and Controlled Nuclear Fusion Research 1986 (International Atomic Energy Agency, Vienna, 1987), Vol. 1, p. 65] and the Tokamak Fusion Test Reactor (TFTR) [R. J. Hawryluk, V. Arunsalam, M. G. Bell et al., in Plasma Physics and Controlled Nuclear Fusion Research 1986 (International Atomic Energy Agency, Vienna, 1987), Vol. 1, p. 51]. While this phenomenon is not necessarily a unique signature of a critical gradient, there is sufficient evidence suggesting that the apparent plasma response to edge cooling may not require any underlying nonlocal mechanism and may be explained within the context of the intrinsic properties of electrostatic drift
Faster and more accurate graphical model identification of tandem mass spectra using trellises
Wang, Shengjie; Halloran, John T.; Bilmes, Jeff A.; Noble, William S.
2016-01-01
Tandem mass spectrometry (MS/MS) is the dominant high throughput technology for identifying and quantifying proteins in complex biological samples. Analysis of the tens of thousands of fragmentation spectra produced by an MS/MS experiment begins by assigning to each observed spectrum the peptide that is hypothesized to be responsible for generating the spectrum. This assignment is typically done by searching each spectrum against a database of peptides. To our knowledge, all existing MS/MS search engines compute scores individually between a given observed spectrum and each possible candidate peptide from the database. In this work, we use a trellis, a data structure capable of jointly representing a large set of candidate peptides, to avoid redundantly recomputing common sub-computations among different candidates. We show how trellises may be used to significantly speed up existing scoring algorithms, and we theoretically quantify the expected speedup afforded by trellises. Furthermore, we demonstrate that compact trellis representations of whole sets of peptides enables efficient discriminative learning of a dynamic Bayesian network for spectrum identification, leading to greatly improved spectrum identification accuracy. Contact: bilmes@uw.edu or william-noble@uw.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307634
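The core saving the trellis exploits can be illustrated with a toy version: when many candidate peptides share prefixes, per-prefix partial scores are computed once and reused instead of rescoring every candidate from scratch. The scoring function below is a stand-in, not the paper's dynamic Bayesian network score, and the prefix cache is a simplified proxy for a full trellis.

```python
# Toy prefix-sharing scorer: each shared prefix's partial score is
# computed exactly once (via the cache) and reused by all candidates.

def residue_score(aa, pos, spectrum):
    # hypothetical per-residue score against an observed spectrum
    return spectrum.get((aa, pos), 0.0)

def score_all(peptides, spectrum):
    """Score every candidate peptide, sharing work across common prefixes."""
    scores, cache = {}, {"": 0.0}          # prefix -> partial score
    for pep in sorted(peptides):           # sorted order groups prefixes
        for i in range(len(pep)):
            prefix = pep[:i + 1]
            if prefix not in cache:        # computed once per shared prefix
                cache[prefix] = cache[pep[:i]] + residue_score(pep[i], i, spectrum)
        scores[pep] = cache[pep]
    return scores

spectrum = {("P", 0): 1.0, ("E", 1): 2.0, ("K", 3): 0.5}
print(score_all(["PEPTIDE", "PEPTIDK", "PERK"], spectrum))
```

For the two candidates sharing the prefix "PEPTID", the six shared residue scores are computed once rather than twice; over tens of thousands of database peptides this redundancy is what dominates.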
Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R
2016-01-25
Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances, however at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod with insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3kPa (SD 13.4), 12.52kPa (SD 11.9) and 9.6kPa (SD 9.3) for barefoot, shod, and insole conditions respectively. The simplified model design could be produced in <1h compared to >3h in the case of the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility, however further validity testing around a range of therapeutic footwear types is required. PMID:26708965
A new geometric-based model to accurately estimate arm and leg inertial estimates.
Wicke, Jason; Dumas, Geneviève A
2014-06-01
Segment estimates of mass, center of mass and moment of inertia are required input parameters to analyze the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they are not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g. upper arm and trunk). Length of axes of the ellipses was obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided of the different body segment models tested, and the same parameters determined for each model. Comparison of models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. PMID:24735506
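The geometric core of such slice-based models is simple to state: a segment's volume is the sum of elliptical slice volumes π·a·b·Δh. The sketch below assumes this form with a single uniform density, whereas the paper develops a sex-specific, non-uniform density function and sectioned ellipses at segment junctions.

```python
# Minimal sketch (assumed form, not the authors' exact model): a limb
# segment as a stack of elliptical slices with per-slice volume pi*a*b*dh.
import math

def segment_mass(semi_axes_a, semi_axes_b, slice_height, density):
    """Mass of a segment built from elliptical slices.

    semi_axes_a/b : per-slice semi-axes from frontal/sagittal photos (m)
    slice_height  : thickness of each slice (m)
    density       : tissue density (kg/m^3); a constant here, while the
                    paper uses a sex-specific, non-uniform density function
    """
    volume = sum(math.pi * a * b * slice_height
                 for a, b in zip(semi_axes_a, semi_axes_b))
    return density * volume

# A crude forearm-like stack: 10 slices with ~4 cm by ~3 cm semi-axes
a = [0.04] * 10
b = [0.03] * 10
print(f"mass ~ {segment_mass(a, b, 0.025, 1050):.2f} kg")
```

Center-of-mass location and frontal-plane moment of inertia follow from the same per-slice sums, weighting each slice by its height and squared distance from the reference axis.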
Cimpoesu, Dorin; Stoleriu, Laurentiu; Stancu, Alexandru
2013-12-14
We propose a generalized Stoner-Wohlfarth (SW) type model to describe various experimentally observed angular dependencies of the switching field in non-single-domain magnetic particles. Because the nonuniform magnetic states are generally characterized by complicated spin configurations with no simple analytical description, we maintain the macrospin hypothesis and we phenomenologically include the effects of nonuniformities only in the anisotropy energy, preserving as much as possible the elegance of SW model, the concept of critical curve and its geometric interpretation. We compare the results obtained with our model with full micromagnetic simulations in order to evaluate the performance and limits of our approach.
NASA Astrophysics Data System (ADS)
Maeda, Chiaki; Tasaki, Satoko; Kirihara, Soshu
2011-05-01
Computer graphic models of bioscaffolds with four-coordinate lattice structures of solid rods for artificial bones were designed using computer aided design. The scaffold models, composed of acrylic resin with hydroxyapatite particles at 45 vol.%, were fabricated by stereolithography, a computer aided manufacturing technique. After dewaxing and sintering heat treatment processes, ceramic scaffold models with four-coordinate lattices and fine hydroxyapatite microstructures were obtained successfully. Computer aided analysis showed that bio-fluids could flow extensively inside the sintered scaffolds. This result indicates that the lattice structures will provide appropriate bio-fluid circulation and promote the regeneration of new bone.
A differential game theoretical analysis of mechanistic models for territoriality.
Hamelin, Frédéric M; Lewis, Mark A
2010-11-01
In this paper, elements of differential game theory are used to analyze a spatially explicit home range model for interacting wolf packs when movement behavior is uncertain. The model consists of a system of partial differential equations whose parameters reflect the movement behavior of individuals within each pack and whose steady-state solutions describe the patterns of space-use associated to each pack. By controlling the behavioral parameters in a spatially-dynamic fashion, packs adjust their patterns of movement so as to find a Nash-optimal balance between spreading their territory and avoiding conflict with hostile neighbors. On the mathematical side, we show that solving a nonzero-sum differential game corresponds to finding a non-invasible function-valued trait. From the ecological standpoint, when movement behavior is uncertain, the resulting evolutionarily stable equilibrium gives rise to a buffer-zone, or a no-wolf's land where deer are known to find refuge. PMID:20033174
Theoretical model for morphogenesis and cell sorting in Dictyostelium discoideum
NASA Astrophysics Data System (ADS)
Umeda, T.; Inouye, K.
1999-02-01
The morphogenetic movement and cell sorting in cell aggregates from the mound stage to the migrating slug stage of the cellular slime mold Dictyostelium discoideum were studied using a mathematical model. The model postulates that the motive force generated by the cells is in equilibrium with the internal pressure and mechanical resistance. The moving boundary problem derived from the force balance equation and the continuity equation has stationary solutions in which the aggregate takes the shape of a spheroid (or an ellipse in two-dimensional space) with the pacemaker at one of its foci, moving at a constant speed. Numerical calculations in two-dimensional space showed that an irregularly shaped aggregate changes its shape to become an ellipse as it moves. Cell aggregates consisting of two cell types differing in motive force exhibit cell sorting and become elongated, suggesting the importance of prestalk/prespore differentiation in the morphogenesis of Dictyostelium.
Automata-theoretic models of mutation and alignment
Searls, D.B.; Murphy, K.P.
1995-12-31
Finite-state automata called transducers, which have both input and output, can be used to model simple mechanisms of biological mutation. We present a methodology whereby numerically-weighted versions of such specifications can be mechanically adapted to create string edit machines that are essentially equivalent to recurrence relations of the sort that characterize dynamic programming alignment algorithms. Based on this, we have developed a visual programming system for designing new alignment algorithms in a rapid-prototyping fashion.
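A weighted transducer specification of this kind compiles down to exactly the familiar edit-distance recurrence. The sketch below is that classic dynamic program with unit costs (Levenshtein distance); mutation-specific weights would replace the unit `sub`/`ins`/`dele` costs.

```python
# Classic dynamic-programming alignment recurrence of the kind weighted
# transducers compile to; unit costs give Levenshtein distance.

def edit_distance(s, t, sub=1, ins=1, dele=1):
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * dele
    for j in range(1, n + 1):
        d[0][j] = j * ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else sub
            d[i][j] = min(d[i - 1][j - 1] + cost,   # substitute / match
                          d[i - 1][j] + dele,       # delete from s
                          d[i][j - 1] + ins)        # insert into s
    return d[m][n]

print(edit_distance("GATTACA", "GCATGCU"))  # → 4
```

Each transducer state transition corresponds to one of the three moves in the inner `min`; changing the machine's weights changes only the cost terms, not the recurrence structure.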
NASA Astrophysics Data System (ADS)
Dale, Andy; Stolpovsky, Konstantin; Wallmann, Klaus
2016-04-01
The recycling and burial of biogenic material in the sea floor plays a key role in the regulation of ocean chemistry. Proper consideration of these processes in ocean biogeochemical models is becoming increasingly recognized as an important step in model validation and prediction. However, the rate of organic matter remineralization in sediments and the benthic flux of redox-sensitive elements are difficult to predict a priori. In this communication, examples of empirical benthic flux models that can be coupled to earth system models to predict sediment-water exchange in the open ocean are presented. Large uncertainties hindering further progress in this field include knowledge of the reactivity of organic carbon reaching the sediment, the importance of episodic variability in bottom water chemistry and particle rain rates (for both the deep-sea and margins) and the role of benthic fauna. How do we meet the challenge?
A model for the accurate computation of the lateral scattering of protons in water.
Bellinzona, E V; Ciocca, M; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T
2016-02-21
A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time. PMID:26808380
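The overall structure described, an electromagnetic (Molière) core plus a small-weight parametrized nuclear tail, can be sketched schematically. The Gaussian core stand-in and the Lorentzian tail below are placeholders with made-up parameter values, not the full Molière distribution or the paper's fitted two-parameter tail function.

```python
# Schematic two-component lateral profile: narrow core for multiple
# Coulomb scattering plus a broad, low-weight tail for nuclear events.
# Functional forms and parameters are illustrative assumptions only.
import math

def lateral_profile(r_mm, sigma_mm=5.0, w_tail=0.05, gamma_mm=20.0):
    core = (math.exp(-0.5 * (r_mm / sigma_mm) ** 2)
            / (sigma_mm * math.sqrt(2 * math.pi)))          # Gaussian core
    tail = gamma_mm / (math.pi * (r_mm ** 2 + gamma_mm ** 2))  # broad tail
    return (1 - w_tail) * core + w_tail * tail

for r in (0, 5, 20, 50):
    print(f"r={r:>2} mm: {lateral_profile(r):.3e}")
```

The practical point of such a closed-form profile is speed: evaluating it per voxel, then convolving with the beam and detector response, avoids a full Monte Carlo transport run.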
Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen
2016-01-01
Exterior orientation parameter (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in the data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) used for space resection are error-prone, so existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, using only the geographic data at the GCPs (which are already provided), we divide the modeling of space resection into two phases. First, we estimate the angular rotations from the reliable geographic data using our proposed mathematical model. Then, with accurate angular rotations, the collinearity equations for space resection simplify to a linear problem, and the globally optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize unreliable altitude data, increasing the error tolerance. Experimental results show that our model obtains more accurate EOPs and topographic maps than the existing space resection model, not only for simulated data but also for real data from Chang’E-1. PMID:27077855
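The key simplification in the second phase can be illustrated with a generic resection sketch: once the rotations (and hence the ray directions) are known, each GCP constrains the camera position C linearly via cross(d_i, P_i − C) = 0, and C falls out of an ordinary least-squares solve. This is a generic linear-resection illustration under that assumption, not the authors' exact formulation (which also adds a certainty term for unreliable altitudes).

```python
# With known ray directions d_i through ground points P_i, the camera
# position C satisfies the linear system skew(d_i) @ C = skew(d_i) @ P_i.
import numpy as np

def skew(v):
    x, y, z = v
    return np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])

def solve_position(dirs, points):
    A = np.vstack([skew(d) for d in dirs])
    b = np.concatenate([skew(d) @ p for d, p in zip(dirs, points)])
    C, *_ = np.linalg.lstsq(A, b, rcond=None)
    return C

C_true = np.array([1.0, 2.0, 10.0])                      # hypothetical camera
P = np.array([[0, 0, 0], [5, 1, 0], [2, 7, 0], [8, 8, 1]], float)
D = (P - C_true) / np.linalg.norm(P - C_true, axis=1, keepdims=True)
print(solve_position(D, P))   # recovers ~[1, 2, 10]
```

Because the system is linear, the least-squares solution is globally optimal, which is exactly the property the two-phase split is designed to secure.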
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment were discussed. PMID:26121186
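A two-parameter Weibull rise of the kind described takes the form y(t) = Y·(1 − exp(−(t/λ)^n)); the exact parametrization and the numbers below are assumptions for illustration, not values from the paper. The characteristic-time reading of λ is a property of this form: at t = λ the yield is always 1 − 1/e (about 63.2%) of the final yield, whatever the shape parameter n.

```python
# Assumed Weibull form for saccharification progress; Y, lam, n are
# hypothetical values (final yield g/L, characteristic time h, shape).
import math

def weibull_yield(t, Y, lam, n):
    """Glucose yield at time t for final yield Y and Weibull params lam, n."""
    return Y * (1.0 - math.exp(-((t / lam) ** n)))

Y, lam, n = 50.0, 12.0, 0.8
for t in (1, 6, 12, 48):
    print(f"t={t:>2} h: {weibull_yield(t, Y, lam, n):.1f} g/L")

# At t = lam the converted fraction is exactly 1 - 1/e, independent of n.
frac = weibull_yield(lam, Y, lam, n) / Y
```

This n-independence is what makes λ usable as a single overall-performance number when comparing saccharification systems with different curve shapes.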
A Theoretical Model for the Associative Nature of Conference Participation.
Smiljanić, Jelena; Chatterjee, Arnab; Kauppinen, Tomi; Mitrović Dankulov, Marija
2016-01-01
Participation in conferences is an important part of every scientific career. Conferences provide an opportunity for a fast dissemination of latest results, discussion and exchange of ideas, and broadening of scientists' collaboration network. The decision to participate in a conference depends on several factors like the location, cost, popularity of keynote speakers, and the scientist's association with the community. Here we discuss and formulate the problem of discovering how a scientist's previous participation affects her/his future participations in the same conference series. We develop a stochastic model to examine scientists' participation patterns in conferences and compare our model with data from six conferences across various scientific fields and communities. Our model shows that the probability for a scientist to participate in a given conference series strongly depends on the balance between the number of participations and non-participations during his/her early connections with the community. An active participation in a conference series strengthens the scientist's association with that particular conference community and thus increases the probability of future participations. PMID:26859404
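The qualitative mechanism described, in which each participation strengthens the tie to the community and raises the probability of attending again, is a reinforcement ("rich get richer") process. The simulation below uses a Pólya-urn-style update as an illustrative stand-in; the paper's actual functional form and parameters may differ.

```python
# Reinforcement sketch: the chance of attending the next edition grows
# with the balance of past participations over skips. The urn-style
# update rule here is an assumption, not the paper's fitted model.
import random

def simulate_attendance(editions, a0=1.0, b0=1.0, seed=1):
    random.seed(seed)
    attended, skipped, history = a0, b0, []
    for _ in range(editions):
        p = attended / (attended + skipped)   # participation probability
        go = random.random() < p
        history.append(go)
        if go:
            attended += 1                     # each visit strengthens the tie
        else:
            skipped += 1
    return history

h = simulate_attendance(20)
print(f"attended {sum(h)} of {len(h)} editions")
```

In such processes the early editions dominate: a few initial participations (or skips) largely lock in the long-run attendance rate, matching the paper's emphasis on a scientist's early connections with the community.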
Modeling postpartum depression in rats: theoretic and methodological issues
Li, Ming; Chou, Shinn-Yi
2016-01-01
The postpartum period is when a host of changes occur at molecular, cellular, physiological and behavioral levels to prepare female humans for the challenge of maternity. Alteration or prevention of these normal adaptions is thought to contribute to disruptions of emotion regulation, motivation and cognitive abilities that underlie postpartum mental disorders, such as postpartum depression. Despite the high incidence of this disorder, and the detrimental consequences for both mother and child, its etiology and related neurobiological mechanisms remain poorly understood, partially due to the lack of appropriate animal models. In recent decades, there have been a number of attempts to model postpartum depression disorder in rats. In the present review, we first describe clinical symptoms of postpartum depression and discuss known risk factors, including both genetic and environmental factors. Thereafter, we discuss various rat models that have been developed to capture various aspects of this disorder and knowledge gained from such attempts. In doing so, we focus on the theories behind each attempt and the methods used to achieve their goals. Finally, we point out several understudied areas in this field and make suggestions for future directions. PMID:27469254
A dynamic game-theoretic model of parental care.
McNamara, J M; Székely, T; Webb, J N; Houston, A I
2000-08-21
We present a model in which members of a mated pair decide whether to care for their offspring or desert them. There is a breeding season of finite length during which it is possible to produce and raise several batches of offspring. On deserting its offspring, an individual can search for a new mate. The probability of finding a mate depends on the number of individuals of each sex that are searching, which in turn depends upon the previous care and desertion decisions of all population members. We find the evolutionarily stable pattern of care over the breeding season. The feedback between behaviour and mating opportunity can result in a pattern of stable oscillations between different forms of care over the breeding season. Oscillations can also arise because the best thing for an individual to do at a particular time in the season depends on future behaviour of all population members. In the baseline model, a pair splits up after a breeding attempt, even if they both care for the offspring. In a version of the model in which a pair stays together if they both care, the feedback between behaviour and mating opportunity can lead to more than one evolutionarily stable form of care. PMID:10931755
A Theoretical Model for the Associative Nature of Conference Participation
Smiljanić, Jelena; Chatterjee, Arnab; Kauppinen, Tomi; Mitrović Dankulov, Marija
2016-01-01
Participation in conferences is an important part of every scientific career. Conferences provide an opportunity for a fast dissemination of latest results, discussion and exchange of ideas, and broadening of scientists’ collaboration network. The decision to participate in a conference depends on several factors like the location, cost, popularity of keynote speakers, and the scientist’s association with the community. Here we discuss and formulate the problem of discovering how a scientist’s previous participation affects her/his future participations in the same conference series. We develop a stochastic model to examine scientists’ participation patterns in conferences and compare our model with data from six conferences across various scientific fields and communities. Our model shows that the probability for a scientist to participate in a given conference series strongly depends on the balance between the number of participations and non-participations during his/her early connections with the community. An active participation in a conference series strengthens the scientist’s association with that particular conference community and thus increases the probability of future participations. PMID:26859404
BL Herculis stars - Theoretical models for field variables
NASA Technical Reports Server (NTRS)
Carson, R.; Stothers, R.
1982-01-01
Type II Cepheids with periods between 1 and 3 days, commonly designated as BL Herculis stars, have been modeled here with the aim of interpreting the wide variety of light curves observed among the field variables. Previously modeled globular cluster members are used as standard calibration objects. The major finding is that only a small range of luminosities is capable of generating a large variety of light curve types at a given period. For a mass of approximately 0.60 solar mass, the models are able to reproduce the observed mean luminosities, dispersion of mean luminosities, periods, light amplitudes, light asymmetries, and phases of secondary features in the light curves of known BL Her stars. It is possible that the metal-rich variables (which are found only in the field) have luminosities lower than those of most metal-poor variables. The present revised mass for BL Her, a metal-rich object, is not significantly different from the mean mass of the metal-poor variables.
Berger, Perrine; Alouini, Mehdi; Bourderionnet, Jérôme; Bretenaker, Fabien; Dolfi, Daniel
2010-01-18
We developed an improved model to predict the RF behavior and slow-light properties of a semiconductor optical amplifier (SOA), valid for any experimental conditions. It takes into account the dynamic saturation of the SOA, which can be fully characterized by a simple measurement, and relies only on material fitting parameters, independent of the optical intensity and the injected current. The present model is validated by showing good agreement with experiments for small and large modulation indices. PMID:20173888
NASA Astrophysics Data System (ADS)
Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.
2013-12-01
This presentation will focus on two barriers to progress in the hydrological modeling community, and research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists that is caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of a computational model through separate "physics" and "numerics" interfaces. More recently we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain specific language embedded in Python. The second major barrier is sharing ANY scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must have. In this setting, users and developers depend on an entire distribution, possibly depending on multiple compilers and special instructions depending on the environment of the target machine. To solve this problem we have developed hashdist, a stateless package management tool, and a resulting portable, open source scientific software distribution.
Imitative Modeling as a Theoretical Base for Instructing Language-Disordered Children
ERIC Educational Resources Information Center
Courtright, John A.; Courtright, Illene C.
1976-01-01
A modification of A. Bandura's social learning theory (imitative modeling) was employed as a theoretical base for language instruction with eight language disordered children (5 to 10 years old). (Author/SBH)
Technology Transfer Automated Retrieval System (TEKTRAN)
Selection for disease resistance is a contemporary topic with developing approaches for genetic improvement. Merging the sciences of genetic selection and epidemiology is essential to identify selection schemes to enhance disease resistance. Epidemiological models can identify theoretical opportuni...
NASA Technical Reports Server (NTRS)
Raj, S. V.
2011-01-01
Establishing the geometry of foam cells is useful in developing microstructure-based acoustic and structural models. Since experimental data on the geometry of the foam cells are limited, most modeling efforts use an idealized three-dimensional, space-filling Kelvin tetrakaidecahedron. The validity of this assumption is investigated in the present paper. Several FeCrAlY foams with relative densities varying between 3 and 15 percent and cells per mm (c.p.mm.) varying between 0.2 and 3.9 were microstructurally evaluated. The number of edges per face for each foam specimen was counted by approximating the cell faces by regular polygons, where the number of cell faces measured varied between 207 and 745. The present observations revealed that 50 to 57 percent of the cell faces were pentagonal while 24 to 28 percent were quadrilateral and 15 to 22 percent were hexagonal. The present measurements are shown to be in excellent agreement with literature data. It is demonstrated that the Kelvin model, as well as other proposed theoretical models, cannot accurately describe the FeCrAlY foam cell structure. Instead, it is suggested that the ideal foam cell geometry consists of 11 faces, with three quadrilateral, six pentagonal, and two hexagonal faces, consistent with the 3-6-2 Matzke cell. A compilation of 90 years of experimental data reveals that the average number of cell faces decreases linearly with the increasing ratio of quadrilateral to pentagonal faces. It is concluded that the Kelvin model is not supported by these experimental data.
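As a quick check on the face statistics above, the average number of edges per face for the idealized Kelvin tetrakaidecahedron and the suggested 3-6-2 Matzke cell can be computed directly. This is a small illustrative sketch: the face counts come from the abstract, and everything else is arithmetic.

```python
# Face inventories: {edges_per_face: face_count}
kelvin = {4: 6, 6: 8}        # Kelvin tetrakaidecahedron: 6 squares + 8 hexagons = 14 faces
matzke = {4: 3, 5: 6, 6: 2}  # 3-6-2 Matzke cell: 3 quads + 6 pentagons + 2 hexagons = 11 faces

def avg_edges(faces):
    """Average number of edges per face for a cell described by a face inventory."""
    n_faces = sum(faces.values())
    return sum(e * c for e, c in faces.items()) / n_faces

print(avg_edges(kelvin))  # 72/14 ≈ 5.14
print(avg_edges(matzke))  # 54/11 ≈ 4.91
pentagonal_fraction = matzke[5] / sum(matzke.values())  # 6/11 ≈ 0.55
```

Note that the Matzke cell's pentagonal fraction (about 55%) falls inside the 50 to 57 percent range measured for the FeCrAlY foams, consistent with the paper's conclusion.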
Heuijerjans, Ashley; Matikainen, Marko K.; Julkunen, Petro; Eliasson, Pernilla; Aspenberg, Per; Isaksson, Hanna
2015-01-01
Background Computational models of Achilles tendons can help understanding how healthy tendons are affected by repetitive loading and how the different tissue constituents contribute to the tendon’s biomechanical response. However, available models of Achilles tendon are limited in their description of the hierarchical multi-structural composition of the tissue. This study hypothesised that a poroviscoelastic fibre-reinforced model, previously successful in capturing cartilage biomechanical behaviour, can depict the biomechanical behaviour of the rat Achilles tendon found experimentally. Materials and Methods We developed a new material model of the Achilles tendon, which considers the tendon’s main constituents namely: water, proteoglycan matrix and collagen fibres. A hyperelastic formulation of the proteoglycan matrix enabled computations of large deformations of the tendon, and collagen fibres were modelled as viscoelastic. Specimen-specific finite element models were created of 9 rat Achilles tendons from an animal experiment and simulations were carried out following a repetitive tensile loading protocol. The material model parameters were calibrated against data from the rats by minimising the root mean squared error (RMS) between experimental force data and model output. Results and Conclusions All specimen models were successfully fitted to experimental data with high accuracy (RMS 0.42-1.02). Additional simulations predicted more compliant and soft tendon behaviour at reduced strain-rates compared to higher strain-rates that produce a stiff and brittle tendon response. Stress-relaxation simulations exhibited strain-dependent stress-relaxation behaviour where larger strains produced slower relaxation rates compared to smaller strain levels. Our simulations showed that the collagen fibres in the Achilles tendon are the main load-bearing component during tensile loading, where the orientation of the collagen fibres plays an important role for the tendon
NASA Astrophysics Data System (ADS)
O'Brien, Edward P.; Morrison, Greg; Brooks, Bernard R.; Thirumalai, D.
2009-03-01
Single molecule Förster resonance energy transfer (FRET) experiments are used to infer the properties of the denatured state ensemble (DSE) of proteins. From the measured average FRET efficiency, ⟨E⟩, the distance distribution P(R) is inferred by assuming that the DSE can be described as a polymer. The single parameter in the appropriate polymer model (Gaussian chain, wormlike chain, or self-avoiding walk) for P(R) is determined by equating the calculated and measured ⟨E⟩. In order to assess the accuracy of this "standard procedure," we consider the generalized Rouse model (GRM), whose properties [⟨E⟩ and P(R)] can be analytically computed, and the Molecular Transfer Model for protein L for which accurate simulations can be carried out as a function of guanidinium hydrochloride (GdmCl) concentration. Using the precisely computed ⟨E⟩ for the GRM and protein L, we infer P(R) using the standard procedure. We find that the mean end-to-end distance can be accurately inferred (less than 10% relative error) using ⟨E⟩ and polymer models for P(R). However, the values extracted for the radius of gyration (Rg) and the persistence length (lp) are less accurate. For protein L, the errors in the inferred properties increase as the GdmCl concentration increases for all polymer models. The relative error in the inferred Rg and lp, with respect to the exact values, can be as large as 25% at the highest GdmCl concentration. We propose a self-consistency test, requiring measurements of ⟨E⟩ by attaching dyes to different residues in the protein, to assess the validity of describing DSE using the Gaussian model. Application of the self-consistency test to the GRM shows that even for this simple model, which exhibits an order→disorder transition, the Gaussian P(R) is inadequate. Analysis of experimental data of FRET efficiencies with dyes at several locations for the cold shock protein, and simulation results for protein L, for which accurate FRET
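The "standard procedure" described above — equating a measured ⟨E⟩ with the value predicted by a polymer model for P(R) — can be sketched for the Gaussian-chain case. This is an illustrative reconstruction, not the authors' code; the Förster radius R0 and the root-finding bracket are assumed values.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

R0 = 5.4  # Foerster radius in nm -- an assumed value for a typical dye pair

def mean_fret_gaussian_chain(r2_mean, r0=R0):
    """<E> for a Gaussian-chain P(R) with mean-squared end-to-end distance r2_mean (nm^2)."""
    a = 3.0 / (2.0 * r2_mean)
    norm = 4.0 * np.pi * (a / np.pi) ** 1.5  # normalizes P(R) = norm * R^2 exp(-a R^2)
    integrand = lambda r: norm * r**2 * np.exp(-a * r**2) / (1.0 + (r / r0) ** 6)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

def infer_r2_from_mean_E(e_measured, r0=R0):
    """The 'standard procedure': solve <E>(r2) = e_measured for the single chain parameter."""
    return brentq(lambda r2: mean_fret_gaussian_chain(r2, r0) - e_measured, 1e-3, 1e4)

r2 = infer_r2_from_mean_E(0.5)                    # inferred <R^2> in nm^2
mean_R = np.sqrt(8.0 * r2 / (3.0 * np.pi))        # Gaussian-chain mean end-to-end distance
```

The paper's point is that this inversion recovers the mean end-to-end distance well, but quantities such as Rg and lp derived from the same single-parameter fit can be substantially off when the true ensemble is not Gaussian.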
NASA Astrophysics Data System (ADS)
Klostermann, U. K.; Mülders, T.; Schmöller, T.; Lorusso, G. F.; Hendrickx, E.
2010-04-01
In this paper, we discuss the performance of EUV resist models in terms of predictive accuracy, and we assess the readiness of the corresponding model calibration methodology. The study is done on an extensive OPC data set collected at IMEC for the ShinEtsu resist SEVR-59 on the ASML EUV Alpha Demo Tool (ADT), with the data set including more than a thousand CD values. We address practical aspects such as the speed of calibration and selection of calibration patterns. The model is calibrated on 12 process window data series varying in pattern width (32, 36, 40 nm), orientation (H, V) and pitch (dense, isolated). The minimum measured feature size at nominal process condition is a 32 nm CD at a dense pitch of 64 nm. Mask metrology is applied to verify and, if necessary, correct the nominal width of the drawn CD. Cross-sectional SEM information is included in the calibration to tune the simulated resist loss and sidewall angle. The achieved calibration RMS is ~ 1.0 nm. We show what elements are important to obtain a well calibrated model. We discuss the impact of 3D mask effects on the Bossung tilt. We demonstrate that a correct representation of the flare level during the calibration is important to achieve a high predictability at various flare conditions. Although the model calibration is performed on a limited subset of the measurement data (one dimensional structures only), its accuracy is validated based on a large number of OPC patterns (at nominal dose and focus conditions) not included in the calibration; validation RMS results as small as 1 nm can be reached. Furthermore, we study the model's extendibility to two-dimensional end of line (EOL) structures. Finally, we correlate the experimentally observed fingerprint of the CD uniformity to a model, where EUV tool specific signatures are taken into account.
Hewitt, Nicola J; Edwards, Robert J; Fritsche, Ellen; Goebel, Carsten; Aeby, Pierre; Scheel, Julia; Reisinger, Kerstin; Ouédraogo, Gladys; Duche, Daniel; Eilstein, Joan; Latil, Alain; Kenny, Julia; Moore, Claire; Kuehnl, Jochen; Barroso, Joao; Fautz, Rolf; Pfuhler, Stefan
2013-06-01
Several human skin models employing primary cells and immortalized cell lines used as monocultures or combined to produce reconstituted 3D skin constructs have been developed. Furthermore, these models have been included in European genotoxicity and sensitization/irritation assay validation projects. In order to help interpret data, Cosmetics Europe (formerly COLIPA) facilitated research projects that measured a variety of defined phase I and II enzyme activities and created a complete proteomic profile of xenobiotic metabolizing enzymes (XMEs) in native human skin and compared them with data obtained from a number of in vitro models of human skin. Here, we have summarized our findings on the current knowledge of the metabolic capacity of native human skin and in vitro models and made an overall assessment of the metabolic capacity from gene expression, proteomic expression, and substrate metabolism data. The known low expression and function of phase I enzymes in native whole skin were reflected in the in vitro models. Some XMEs in whole skin were not detected in in vitro models and vice versa, and some major hepatic XMEs such as cytochrome P450-monooxygenases were absent or measured only at very low levels in the skin. Conversely, despite varying mRNA and protein levels of phase II enzymes, functional activity of glutathione S-transferases, N-acetyltransferase 1, and UDP-glucuronosyltransferases were all readily measurable in whole skin and in vitro skin models at activity levels similar to those measured in the liver. These projects have enabled a better understanding of the contribution of XMEs to toxicity endpoints. PMID:23539547
Theoretical models for the emergence of biomolecular homochirality
NASA Astrophysics Data System (ADS)
Walker, Sara Imari
Little is known about the emergence of life from nonliving precursors. A key missing piece is the origin of homochirality: nearly all life is characterized by exclusively dextrorotary sugars and levorotary amino acids. The research presented in this thesis addresses the challenge of uncovering mechanisms for chiral symmetry breaking in a prebiotic environment and implications for the origin of life on Earth. Expanding on a well-known model for chiral selection through polymerization, and modeling the spatiotemporal dynamics starting from near-racemic initial conditions, it is demonstrated that the net chirality of molecular building blocks grows with the longest polymer in the reaction network (of length N) with critical behavior for the onset of chiral asymmetry determined by the value of N. This surprising result indicates that significant chiral asymmetry occurs only for systems which permit growth of long polymers. Expanding on this work, the effects of environmental disturbances on the evolution of chirality in prebiotic reaction-diffusion networks are studied via the implementation of a stochastic spatiotemporal Langevin equation. The results show that environmental interactions can have significant impact on the evolution of prebiotic chirality: the history of prebiotic chirality is therefore interwoven with the Earth's early environmental history in a mechanism we call punctuated chirality. This result establishes that the onset of homochirality is not an isolated phenomenon: chiral selection must occur in tandem with the transition from chemistry to biology, otherwise the prebiotic soup is unstable to environmental events. Addressing the challenge of understanding the role of chirality in the transition from non-life to life, the diffusive slowdown of reaction networks induced, for example, through tidal cycles or evaporating pools, is modeled. The results of this study demonstrate that such diffusive slowdown leads to the stabilization of homochiral
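The kind of chiral-selection dynamics described above can be illustrated with the classic Frank autocatalysis model, in which each enantiomer replicates and the two suppress one another, so that a tiny initial excess is amplified toward homochirality. This is a minimal sketch of the generic mechanism only, not the polymerization network studied in the thesis; the rate constants and initial conditions are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

def frank(t, y, k=1.0, mu=1.0):
    """Frank's model: dL/dt = L(k - mu*D), dD/dt = D(k - mu*L).
    Each enantiomer autocatalyses (rate k); mutual antagonism (rate mu)
    amplifies any enantiomeric excess."""
    L, D = y
    return [L * (k - mu * D), D * (k - mu * L)]

# near-racemic start: 0.1% enantiomeric excess
sol = solve_ivp(frank, (0.0, 15.0), [0.5005, 0.4995], rtol=1e-9, atol=1e-12)
L, D = sol.y[:, -1]
ee = (L - D) / (L + D)  # enantiomeric excess; driven toward 1 (homochiral)
```

A useful check on the mechanism: the difference u = L - D obeys du/dt = k·u exactly, so any nonzero initial excess grows exponentially while the minority enantiomer is eventually driven out.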
GSTARS computer models and their applications, part I: theoretical development
Yang, C.T.; Simoes, F.J.M.
2008-01-01
GSTARS is a series of computer models developed by the U.S. Bureau of Reclamation for alluvial river and reservoir sedimentation studies while the authors were employed by that agency. The first version of GSTARS was released in 1986 using Fortran IV for mainframe computers. GSTARS 2.0 was released in 1998 for personal computer application with most of the code in the original GSTARS revised, improved, and expanded using Fortran IV/77. GSTARS 2.1 is an improved and revised GSTARS 2.0 with graphical user interface. The unique features of all GSTARS models are the conjunctive use of the stream tube concept and of the minimum stream power theory. The application of minimum stream power theory allows the determination of optimum channel geometry with variable channel width and cross-sectional shape. The use of the stream tube concept enables the simulation of river hydraulics using one-dimensional numerical solutions to obtain a semi-two- dimensional presentation of the hydraulic conditions along and across an alluvial channel. According to the stream tube concept, no water or sediment particles can cross the walls of stream tubes, which is valid for many natural rivers. At and near sharp bends, however, sediment particles may cross the boundaries of stream tubes. GSTARS3, based on FORTRAN 90/95, addresses this phenomenon and further expands the capabilities of GSTARS 2.1 for cohesive and non-cohesive sediment transport in rivers and reservoirs. This paper presents the concepts, methods, and techniques used to develop the GSTARS series of computer models, especially GSTARS3. © 2008 International Research and Training Centre on Erosion and Sedimentation and the World Association for Sedimentation and Erosion Research.
A theoretical model of sheath fold morphology in simple shear
NASA Astrophysics Data System (ADS)
Reber, Jacqueline E.; Dabrowski, Marcin; Galland, Olivier; Schmid, Daniel W.
2013-04-01
Sheath folds are highly non-cylindrical structures often associated with shear zones. The geometry of sheath folds, especially cross-sections perpendicular to the stretching direction that display eye-patterns, have been used in the field to deduce kinematic information such as shear sense and bulk strain type. However, how sheath folds form and how they evolve with increasing strain is still a matter of debate. We investigate the formation of sheath folds around a weak inclusion acting as a slip surface in simple shear by means of an analytical model. We systematically vary the slip surface orientation and shape and evaluate the impact on the evolving eye-pattern. In addition we compare our results to existing classifications. Based on field observations it has been suggested that the shear sense of a shear zone can be determined by knowing the position of the center of an eye-pattern and the closing direction of the corresponding sheath fold. In our modeled sheath folds we can observe for a given strain that the center of the eye-structure is subject to change in height with respect to the upper edge of the outermost closed contour for different cross-sections perpendicular to the shear direction. This results in a large variability in layer thickness, questioning the usefulness of sheath folds as shear sense indicators. The location of the center of the eye structure, however, is largely invariant to the initial configurations of the slip surface as well as to strain. It has been suggested that the ratio of the aspect ratio of the innermost and outermost closed contour in eye-patterns could be linked to the bulk strain type based on field observations. We apply this classification to our modeled sheath folds and we observe that the values of the aspect ratios of the closed contours within the eye-pattern are dependent on the strain and the cross-section location. The ratio (R') of the aspect ratios of the outermost closed contour (Ryz) and the innermost closed
A predictive theoretical model for electron tunneling pathways in proteins
NASA Technical Reports Server (NTRS)
Onuchic, Jose Nelson; Beratan, David N.
1990-01-01
A practical method is presented for calculating the dependence of electron transfer rates on details of the protein medium intervening between donor and acceptor. The method takes proper account of the relative energetics and mutual interactions of the donor, acceptor, and peptide groups. It also provides a quantitative search scheme for determining the important tunneling pathways (specific sequences of localized bonding and antibonding orbitals of the protein which dominate the donor-acceptor electronic coupling) in native and tailored proteins, a tool for designing new proteins with prescribed electron transfer rates, and a consistent description of observed electron transfer rates in existing redox labeled metalloproteins and small molecule model compounds.
Theoretical Modeling of Various Spectroscopies for Cuprates and Topological Insulators
NASA Astrophysics Data System (ADS)
Basak, Susmita
Spectroscopies highly resolved in momentum, energy and/or spatial dimensions play an important role in unraveling key properties of wide classes of novel materials. However, spectroscopies do not usually provide a direct map of the underlying electronic spectrum, but act as a complex 'filter' to produce a 'mapping' of the underlying energy levels, Fermi surfaces (FSs) and excitation spectra. The connection between the electronic spectrum and the measured spectra is described as a generalized 'matrix element effect'. The nature of the matrix element involved differs greatly between different spectroscopies. For example, in angle-resolved photoemission (ARPES) an incoming photon knocks out an electron from the sample and the energy and momentum of the photoemitted electron is measured. This is quite different from what happens in K-edge resonant inelastic X-ray scattering (RIXS), where an X-ray photon is scattered after inducing electronic transitions near the Fermi energy through an indirect second order process, or in Compton scattering where the incident X-ray photon is scattered inelastically from an electron transferring energy and momentum to the scattering electron. For any given spectroscopy, the matrix element is, in general, a complex function of the phase space of the experiment, e.g. energy/polarization of the incoming photon and the energy/momentum/spin of the photoemitted electron in the case of ARPES. The matrix element can enhance or suppress signals from specific states, or merge signals of groups of states, making a good understanding of the matrix element effects important for not only a robust interpretation of the spectra, but also for ascertaining optimal regions of the experimental phase space for zooming in on states of the greatest interest. In this thesis I discuss a comprehensive scheme for modeling various highly resolved spectroscopies of the cuprates and topological insulators (TIs) where effects of matrix element, crystal
Polarimetric Signatures of Sea Ice. Part 1; Theoretical Model
NASA Technical Reports Server (NTRS)
Nghiem, S. V.; Kwok, R.; Yueh, S. H.; Drinkwater, M. R.
1995-01-01
Physical, structural, and electromagnetic properties and interrelating processes in sea ice are used to develop a composite model for polarimetric backscattering signatures of sea ice. Physical properties of sea ice constituents such as ice, brine, air, and salt are presented in terms of their effects on electromagnetic wave interactions. Sea ice structure and geometry of scatterers are related to wave propagation, attenuation, and scattering. Temperature and salinity, which are determining factors for the thermodynamic phase distribution in sea ice, are consistently used to derive both effective permittivities and polarimetric scattering coefficients. Polarimetric signatures of sea ice depend on crystal sizes and brine volumes, which are affected by ice growth rates. Desalination by brine expulsion, drainage, or other mechanisms modifies wave penetration and scattering. Sea ice signatures are further complicated by surface conditions such as rough interfaces, hummocks, snow cover, brine skim, or slush layer. Based on the same set of geophysical parameters characterizing sea ice, a composite model is developed to calculate effective permittivities and backscattering covariance matrices at microwave frequencies for interpretation of sea ice polarimetric signatures.
Abdelnour, Farras; Voss, Henning U.; Raj, Ashish
2014-01-01
The relationship between anatomic connectivity of large-scale brain networks and their functional connectivity is of immense importance and an area of active research. Previous attempts have required complex simulations which model the dynamics of each cortical region, and explore the coupling between regions as derived by anatomic connections. While much insight is gained from these non-linear simulations, they can be computationally taxing tools for predicting functional from anatomic connectivities. Little attention has been paid to linear models. Here we show that a properly designed linear model appears to be superior to previous non-linear approaches in capturing the brain’s long-range second order correlation structure that governs the relationship between anatomic and functional connectivities. We derive a linear network of brain dynamics based on graph diffusion, whereby the diffusing quantity undergoes a random walk on a graph. We test our model using subjects who underwent diffusion MRI and resting state fMRI. The network diffusion model applied to the structural networks largely predicts the correlation structures derived from their fMRI data, to a greater extent than other approaches. The utility of the proposed approach is that it can routinely be used to infer functional correlation from anatomic connectivity. And since it is linear, anatomic connectivity can also be inferred from functional data. The success of our model confirms the linearity of ensemble average signals in the brain, and implies that their long-range correlation structure may percolate within the brain via purely mechanistic processes enacted on its structural connectivity pathways. PMID:24384152
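The abstract's linear graph-diffusion idea — functional correlations predicted by a diffusion kernel acting on the structural network — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the normalized graph Laplacian, the single diffusion-depth parameter beta, and the toy adjacency matrix are all assumptions.

```python
import numpy as np
from scipy.linalg import expm

def network_diffusion_fc(A, beta=1.0):
    """Predict a functional-connectivity-like matrix from a structural adjacency
    matrix A via the graph-diffusion kernel exp(-beta * L), where L is the
    symmetric normalized Laplacian. The diffusing quantity performs a random
    walk on the structural graph; beta sets the diffusion depth."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(len(A)) - d_inv_sqrt @ A @ d_inv_sqrt
    return expm(-beta * L)

# toy structural network: 4 regions, two strongly coupled pairs, weak cross-links
A = np.array([[0.0, 1.0, 0.1, 0.0],
              [1.0, 0.0, 0.0, 0.1],
              [0.1, 0.0, 0.0, 1.0],
              [0.0, 0.1, 1.0, 0.0]])
F = network_diffusion_fc(A, beta=2.0)
```

Because the map is linear, the same kernel can in principle be inverted to estimate structural connectivity from functional data, which is the practical appeal noted in the abstract.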
Fast and accurate modeling of molecular atomization energies with machine learning.
Rupp, Matthias; Tkatchenko, Alexandre; Müller, Klaus-Robert; von Lilienfeld, O Anatole
2012-02-01
We introduce a machine learning model to predict atomization energies of a diverse set of organic molecules, based on nuclear charges and atomic positions only. The problem of solving the molecular Schrödinger equation is mapped onto a nonlinear statistical regression problem of reduced complexity. Regression models are trained on and compared to atomization energies computed with hybrid density-functional theory. Cross validation over more than seven thousand organic molecules yields a mean absolute error of ∼10 kcal/mol. Applicability is demonstrated for the prediction of molecular atomization potential energy curves. PMID:22400967
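A common realization of this kind of model pairs a Coulomb-matrix descriptor (built from nuclear charges and positions only) with kernel ridge regression. The sketch below follows that recipe in spirit; the Gaussian kernel choice, the hyperparameters, and the toy targets are simplifying assumptions, and for molecules of different sizes the Coulomb matrices would need zero-padding to a common dimension.

```python
import numpy as np

def coulomb_matrix_spectrum(Z, R):
    """Sorted eigenvalue spectrum of the Coulomb matrix built from nuclear
    charges Z (n,) and Cartesian positions R (n, 3)."""
    n = len(Z)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                M[i, j] = 0.5 * Z[i] ** 2.4  # conventional diagonal term
            else:
                M[i, j] = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])
    return np.sort(np.linalg.eigvalsh(M))[::-1]

def krr_fit_predict(X, y, X_new, sigma=1.0, lam=1e-8):
    """Kernel ridge regression with a Gaussian kernel on the descriptors."""
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    K = kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # ridge-regularized fit
    return kernel(X_new, X) @ alpha

# toy usage on random descriptors (real use: padded Coulomb-matrix spectra as X,
# DFT atomization energies as y)
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = np.sin(X[:, 0])
pred = krr_fit_predict(X, y, X[:3], sigma=2.0)
```

With a small ridge parameter the model interpolates the training data almost exactly; the paper's reported ~10 kcal/mol error is a cross-validated figure on held-out molecules, which this toy setup does not reproduce.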
Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier
2015-02-15
Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery
ERIC Educational Resources Information Center
Vladescu, Jason C.; Carroll, Regina; Paden, Amber; Kodak, Tiffany M.
2012-01-01
The present study replicates and extends previous research on the use of video modeling (VM) with voiceover instruction to train staff to implement discrete-trial instruction (DTI). After staff trainees reached the mastery criterion when teaching an adult confederate with VM, they taught a child with a developmental disability using DTI. The…
NASA Technical Reports Server (NTRS)
Ellison, Donald; Conway, Bruce; Englander, Jacob
2015-01-01
A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model that might include perturbing forces, such as the gravitational effect of multiple third bodies and solar radiation pressure, is used, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
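The numerical STM propagation described above can be illustrated by integrating the variational equation dΦ/dt = A(x)Φ alongside the state with SciPy's eighth-order Dormand-Prince integrator (DOP853). This is a minimal planar two-body sketch under assumed nondimensional units and a circular initial orbit, not the mission-design tool's actual implementation:

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 1.0  # gravitational parameter (nondimensional, assumed)

def accel_jacobian(r):
    """Jacobian of the two-body acceleration a = -mu*r/|r|^3 w.r.t. position."""
    rn = np.linalg.norm(r)
    return MU * (3.0 * np.outer(r, r) / rn**5 - np.eye(2) / rn**3)

def rhs(t, y):
    """Augmented ODE: planar two-body state (4) + flattened STM (16)."""
    r, v = y[:2], y[2:4]
    phi = y[4:].reshape(4, 4)
    a = -MU * r / np.linalg.norm(r)**3
    # Variational equation dPhi/dt = A(x) Phi
    A = np.zeros((4, 4))
    A[:2, 2:] = np.eye(2)
    A[2:, :2] = accel_jacobian(r)
    return np.concatenate([v, a, (A @ phi).ravel()])

# Circular orbit initial condition; STM starts at identity
y0 = np.concatenate([[1.0, 0.0, 0.0, 1.0], np.eye(4).ravel()])
sol = solve_ivp(rhs, (0.0, 1.0), y0, method="DOP853", rtol=1e-12, atol=1e-12)
stm = sol.y[4:, -1].reshape(4, 4)
```

Because the two-body flow is Hamiltonian, the propagated STM is symplectic (determinant 1), which provides a convenient accuracy check.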
Developing an Accurate CFD Based Gust Model for the Truss Braced Wing Aircraft
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2013-01-01
The increased flexibility of long endurance aircraft having high aspect ratio wings necessitates attention to gust response and perhaps the incorporation of gust load alleviation. The design of civil transport aircraft with a strut or truss-braced high aspect ratio wing furthermore requires gust response analysis in the transonic cruise range. This requirement motivates the use of high fidelity nonlinear computational fluid dynamics (CFD) for gust response analysis. This paper presents the development of a CFD based gust model for the truss braced wing aircraft. A sharp-edged gust provides the gust system identification. The result of the system identification is several thousand time steps of instantaneous pressure coefficients over the entire vehicle. This data is filtered and downsampled to provide the snapshot data set from which a reduced order model is developed. A stochastic singular value decomposition algorithm is used to obtain a proper orthogonal decomposition (POD). The POD model is combined with a convolution integral to predict the time varying pressure coefficient distribution due to a novel gust profile. Finally the unsteady surface pressure response of the truss braced wing vehicle to a one-minus-cosine gust, simulated using the reduced order model, is compared with the full CFD.
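The reduced-order-model construction outlined in the abstract (snapshot matrix → SVD → truncated POD basis) can be sketched as follows. The synthetic snapshot data and the 99% energy threshold are assumptions for illustration, and the convolution step with a gust profile is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic snapshot matrix: rows = surface points, cols = time steps
# (stands in for filtered/downsampled instantaneous pressure coefficients)
n_points, n_snaps = 500, 200
t = np.linspace(0.0, 1.0, n_snaps)
modes_true = rng.standard_normal((n_points, 3))
coeffs = np.stack([np.sin(2 * np.pi * t), np.cos(4 * np.pi * t), t])
snapshots = modes_true @ coeffs + 0.01 * rng.standard_normal((n_points, n_snaps))

mean = snapshots.mean(axis=1, keepdims=True)
X = snapshots - mean
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Truncate to the modes capturing 99% of the snapshot energy
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1
pod_modes = U[:, :r]

# Reconstruction check: project a snapshot onto the reduced basis
x = X[:, 10]
x_hat = pod_modes @ (pod_modes.T @ x)
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```

In the paper's setting the same truncated basis would then be combined with a convolution integral to predict the pressure response to an arbitrary gust profile.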
Accurate prediction of the refractive index of polymers using first principles and data modeling
NASA Astrophysics Data System (ADS)
Afzal, Mohammad Atif Faiz; Cheng, Chong; Hachmann, Johannes
Organic polymers with a high refractive index (RI) have recently attracted considerable interest due to their potential application in optical and optoelectronic devices. The ability to tailor the molecular structure of polymers is the key to increasing the accessible RI values. Our work concerns the creation of predictive in silico models for the optical properties of organic polymers, the screening of large-scale candidate libraries, and the mining of the resulting data to extract the underlying design principles that govern their performance. This work was set up to guide our experimentalist partners and allow them to target the most promising candidates. Our model is based on the Lorentz-Lorenz equation and thus includes the polarizability and number density values for each candidate. For the former, we performed a detailed benchmark study of different density functionals, basis sets, and the extrapolation scheme towards the polymer limit. For the number density we devised an exceedingly efficient machine learning approach to correlate the polymer structure and the packing fraction in the bulk material. We validated the proposed RI model against the experimentally known RI values of 112 polymers. We could show that the proposed combination of physical and data modeling is both successful and highly economical to characterize a wide range of organic polymers, which is a prerequisite for virtual high-throughput screening.
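The RI model rests on the Lorentz-Lorenz relation, (n² - 1)/(n² + 2) = (4π/3)·N·α, which can be inverted for n given a polarizability and a number density. A minimal sketch (CGS-style units; the example numbers are illustrative, not the paper's values):

```python
import math

def refractive_index(alpha_cm3, number_density_cm3):
    """Invert the Lorentz-Lorenz relation (n^2-1)/(n^2+2) = (4*pi/3)*N*alpha.

    alpha_cm3: molecular polarizability volume in cm^3
    number_density_cm3: repeat units per cm^3
    """
    A = (4.0 * math.pi / 3.0) * number_density_cm3 * alpha_cm3
    if not 0.0 <= A < 1.0:
        raise ValueError("unphysical parameters: need 0 <= (4pi/3)*N*alpha < 1")
    return math.sqrt((1.0 + 2.0 * A) / (1.0 - A))

# Hypothetical alpha and N chosen for illustration -> n is about 1.78
n_example = refractive_index(1.0e-24, 1.0e23)
```

In the paper's workflow, α would come from the benchmarked DFT calculations extrapolated to the polymer limit, and N from the machine-learned packing fraction.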
Chon, K H; Cohen, R J; Holstein-Rathlou, N H
1997-01-01
A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made, compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes the physiological interpretation of higher order kernels easier. Furthermore, simulation results show better performance of the proposed approach in estimating the system dynamics than LEK in certain cases, and it remains effective in the presence of significant additive measurement noise. PMID:9236985
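The Laguerre-based ARMA idea can be sketched in a few lines: filter the input through a discrete Laguerre filter bank, then regress the output on the bank outputs plus past outputs (the ARMA extension). The toy system, the pole a = 0.4, and the number of basis functions are assumptions for illustration; the paper's kernel-estimation machinery is not reproduced:

```python
import numpy as np

def laguerre_filter_bank(u, a, n_funcs):
    """Outputs of a discrete-time Laguerre filter bank driven by input u (0 < a < 1)."""
    n = len(u)
    v = np.zeros((n_funcs, n + 1))  # column 0 holds the zero initial state
    g = np.sqrt(1.0 - a * a)
    for k in range(1, n + 1):
        v[0, k] = a * v[0, k - 1] + g * u[k - 1]
        for j in range(1, n_funcs):
            # all-pass section (z^-1 - a)/(1 - a z^-1) applied to the previous output
            v[j, k] = a * v[j, k - 1] + v[j - 1, k - 1] - a * v[j - 1, k]
    return v[:, 1:]

rng = np.random.default_rng(1)
u = rng.standard_normal(2000)
# Toy ARMA system: y[k] = 0.5*y[k-1] + u[k-1] + 0.2*u[k-2]
y = np.zeros_like(u)
for k in range(2, len(u)):
    y[k] = 0.5 * y[k - 1] + u[k - 1] + 0.2 * u[k - 2]

V = laguerre_filter_bank(u, a=0.4, n_funcs=6)
# ARMA extension: augment the Laguerre regressors with past output values
y_past = np.concatenate([[0.0], y[:-1]])
X = np.vstack([V, y_past]).T
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ theta
rel_resid = np.linalg.norm(y - pred) / np.linalg.norm(y)
```

The single past-output regressor captures the pole, so far fewer Laguerre functions are needed than a pure moving-average (Volterra-Wiener) expansion would require.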
Sapsis, Themistoklis P; Majda, Andrew J
2013-08-20
A framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed here. These reduced-order, modified quasilinear Gaussian (ROMQG) algorithms apply to turbulent dynamical systems in which there is significant linear instability or linear nonnormal dynamics in the unperturbed system and energy-conserving nonlinear interactions that transfer energy from the unstable modes to the stable modes where dissipation occurs, resulting in a statistical steady state; such turbulent dynamical systems are ubiquitous in geophysical and engineering turbulence. The ROMQG method involves constructing a low-order, nonlinear, dynamical system for the mean and covariance statistics in the reduced subspace that has the unperturbed statistics as a stable fixed point and optimally incorporates the indirect effect of non-Gaussian third-order statistics for the unperturbed system in a systematic calibration stage. This calibration procedure is achieved through information involving only the mean and covariance statistics for the unperturbed equilibrium. The performance of the ROMQG algorithm is assessed on two stringent test cases: the 40-mode Lorenz 96 model mimicking midlatitude atmospheric turbulence and two-layer baroclinic models for high-latitude ocean turbulence with over 125,000 degrees of freedom. In the Lorenz 96 model, the ROMQG algorithm with just a single mode captures the transient response to random or deterministic forcing. For the baroclinic ocean turbulence models, the inexpensive ROMQG algorithm with 252 modes, less than 0.2% of the total, captures the nonlinear response of the energy, the heat flux, and even the one-dimensional energy and heat flux spectra. PMID:23918398
NASA Astrophysics Data System (ADS)
Veneziano, D.; Langousis, A.; Lepore, C.
2009-12-01
, none of the above asymptotic theories applies to Iyear(d). In practice, one is interested in the distribution of Iyear(d) over a finite range of averaging durations d and return periods T. Using multifractal representations of rainfall, we have numerically calculated the distribution of Iyear(d) and found that, although not GEV, the distribution can be accurately approximated by a GEV model. The best-fitting parameter k depends on d, but is insensitive to the scaling properties of rainfall and the range of return periods T used for fitting. We have obtained a default expression for k(d) and compared it with estimates from historical rainfall records. The theoretical function tracks well the empirical dependence on d, although it generally overestimates the empirical k values, possibly due to deviations of rainfall from perfect scaling. This issue is under investigation.
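Fitting a GEV model to annual-maximum intensities, as described above, is straightforward with SciPy; note that SciPy's shape parameter c has the opposite sign of the k commonly used in hydrology. The synthetic Gumbel-like daily series is an assumption for illustration:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# Synthetic "annual maximum intensity" series: yearly maxima over 365 daily values
daily = rng.gumbel(loc=10.0, scale=3.0, size=(100, 365))
annual_max = daily.max(axis=1)

# Fit a GEV distribution to the annual maxima
c, loc, scale = genextreme.fit(annual_max)

def return_level(T):
    """Intensity exceeded on average once every T years."""
    return genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
```

Since maxima of Gumbel variates are themselves Gumbel, the fitted shape here should come out near zero; real rainfall maxima would yield the d-dependent shape discussed above.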
Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke
2015-11-15
Purpose: Significant dosimetric benefits have previously been demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their modeled values. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery, and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was
Are satellite based rainfall estimates accurate enough for crop modelling under Sahelian climate?
NASA Astrophysics Data System (ADS)
Ramarohetra, J.; Sultan, B.
2012-04-01
Agriculture is considered the most climate-dependent human activity. In West Africa, and especially in the sudano-sahelian zone, rain-fed agriculture - which represents 93% of cultivated areas and is the means of support of 70% of the active population - is highly vulnerable to precipitation variability. To better understand and anticipate climate impacts on agriculture, crop models - which estimate crop yield from climate information (e.g. rainfall, temperature, insolation, humidity) - have been developed. These crop models are useful (i) in ex ante analysis to quantify the impact on yields of implementing different strategies - crop management (e.g. choice of varieties, sowing date), crop insurance or medium-range weather forecasts, (ii) for early warning systems and (iii) to assess future food security. Yet, the successful application of these models depends on the accuracy of their climatic drivers. In the sudano-sahelian zone, the quality of precipitation estimates is thus a key factor in understanding and anticipating climate impacts on agriculture via crop modelling and yield estimation. Different kinds of precipitation estimates can be used. Ground measurements have long time series but suffer from insufficient network density, a large proportion of missing values, delays in reporting time, and limited availability. An answer to these shortcomings may lie in the field of remote sensing, which provides satellite-based precipitation estimates. However, satellite-based rainfall estimates (SRFE) are not a direct measurement but rather an estimation of precipitation. Used as input to crop models, they determine the performance of the simulated yields; hence, SRFE require validation. The SARRAH crop model is used to model three different varieties of pearl millet (HKP, MTDO, Souna3) in a square degree centred on 13.5°N and 2.5°E, in Niger. Eight satellite-based rainfall daily products (PERSIANN, CMORPH, TRMM 3b42-RT, GSMAP MKV+, GPCP, TRMM 3b42v6, RFEv2 and
NASA Astrophysics Data System (ADS)
Yahja, A.; Kim, C.; Lin, Y.; Bajcsy, P.
2008-12-01
This paper addresses the problem of accurate estimation of geospatial models from a set of groundwater recharge & discharge (R&D) maps and from auxiliary remote sensing and terrestrial raster measurements. The motivation for our work is driven by the cost of field measurements, and by the limitations of currently available physics-based modeling techniques that do not include all relevant variables and allow accurate predictions only at coarse spatial scales. The goal is to improve our understanding of the underlying physical phenomena and increase the accuracy of geospatial models--with a combination of remote sensing, field measurements and physics-based modeling. Our approach is to process a set of R&D maps generated from interpolated sparse field measurements using existing physics-based models, and identify the R&D map that would be the most suitable for extracting a set of rules between the auxiliary variables of interest and the R&D map labels. We implemented this approach by ranking R&D maps using information entropy and mutual information criteria, and then by deriving a set of rules using a machine learning technique, such as the decision tree method. The novelty of our work is in developing a general framework for building geospatial models with the ultimate goal of minimizing cost and maximizing model accuracy. The framework is demonstrated for groundwater R&D rate models but could be applied to other similar studies, for instance, to understanding hypoxia based on physics-based models and remotely sensed variables. Furthermore, our key contribution is in designing a ranking method for R&D maps that allows us to analyze multiple plausible R&D maps with a different number of zones, which was not possible in our earlier prototype of the framework called Spatial Pattern to Learn. We will present experimental results using example R&D and other maps from an area in Wisconsin.
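The entropy-based ranking step can be sketched as follows: compute the Shannon entropy of each candidate map's zone-label distribution and sort. The toy maps are assumptions for illustration; the mutual-information criterion and decision-tree rule extraction are omitted:

```python
import numpy as np

def label_entropy(label_map):
    """Shannon entropy (bits) of the zone-label distribution of a map."""
    _, counts = np.unique(label_map, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
# Hypothetical candidate R&D maps with different numbers of zones
maps = {
    "two_zones": rng.integers(0, 2, size=(50, 50)),
    "five_zones": rng.integers(0, 5, size=(50, 50)),
    "one_zone": np.zeros((50, 50), dtype=int),
}
# Rank candidates from most to least informative label distribution
ranked = sorted(maps, key=lambda name: label_entropy(maps[name]), reverse=True)
```

A map with a richer, more balanced zone structure carries more information for subsequent rule extraction, which is what this ranking captures.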
An Accurate In Vitro Model of the E. coli Envelope
Clifton, Luke A.; Holt, Stephen A.; Hughes, Arwel V.; Daulton, Emma L.; Arunmanee, Wanatchaporn; Heinrich, Frank; Khalid, Syma; Jefferies, Damien; Charlton, Timothy R.; Webster, John R. P.; Kinane, Christian J.
2015-01-01
Abstract Gram‐negative bacteria are an increasingly serious source of antibiotic‐resistant infections, partly owing to their characteristic protective envelope. This complex, 20 nm thick barrier includes a highly impermeable, asymmetric bilayer outer membrane (OM), which plays a pivotal role in resisting antibacterial chemotherapy. Nevertheless, the OM molecular structure and its dynamics are poorly understood because the structure is difficult to recreate or study in vitro. The successful formation and characterization of a fully asymmetric model envelope using Langmuir–Blodgett and Langmuir–Schaefer methods is now reported. Neutron reflectivity and isotopic labeling confirmed the expected structure and asymmetry and showed that experiments with antibacterial proteins reproduced published in vivo behavior. By closely recreating natural OM behavior, this model provides a much needed robust system for antibiotic development. PMID:27346898
An accurate in vitro model of the E. coli envelope.
Clifton, Luke A; Holt, Stephen A; Hughes, Arwel V; Daulton, Emma L; Arunmanee, Wanatchaporn; Heinrich, Frank; Khalid, Syma; Jefferies, Damien; Charlton, Timothy R; Webster, John R P; Kinane, Christian J; Lakey, Jeremy H
2015-10-01
Gram-negative bacteria are an increasingly serious source of antibiotic-resistant infections, partly owing to their characteristic protective envelope. This complex, 20 nm thick barrier includes a highly impermeable, asymmetric bilayer outer membrane (OM), which plays a pivotal role in resisting antibacterial chemotherapy. Nevertheless, the OM molecular structure and its dynamics are poorly understood because the structure is difficult to recreate or study in vitro. The successful formation and characterization of a fully asymmetric model envelope using Langmuir-Blodgett and Langmuir-Schaefer methods is now reported. Neutron reflectivity and isotopic labeling confirmed the expected structure and asymmetry and showed that experiments with antibacterial proteins reproduced published in vivo behavior. By closely recreating natural OM behavior, this model provides a much needed robust system for antibiotic development. PMID:26331292
An accurate two-phase approximate solution to the acute viral infection model
Perelson, Alan S
2009-01-01
During an acute viral infection, virus levels rise, reach a peak and then decline. Data and numerical solutions suggest the growth and decay phases are linear on a log scale. While viral dynamic models are typically nonlinear with analytical solutions difficult to obtain, the exponential nature of the solutions suggests approximations can be found. We derive a two-phase approximate solution to the target cell limited influenza model and illustrate the accuracy using data and previously established parameter values of six patients infected with influenza A. For one patient, the subsequent fall in virus concentration was not consistent with our predictions during the decay phase and an alternate approximation is derived. We find expressions for the rate and length of initial viral growth in terms of the parameters, the extent each parameter is involved in viral peaks, and the single parameter responsible for virus decay. We discuss applications of this analysis in antiviral treatments and investigating host and virus heterogeneities.
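The target-cell-limited model referred to above is the standard three-compartment system dT/dt = -βTV, dI/dt = βTV - δI, dV/dt = pI - cV, whose numerical solution shows the roughly log-linear growth and decay phases. The parameter values below are illustrative, not the patient fits from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: infection rate, infected-cell death, production, clearance
beta, delta, p, c = 2.7e-5, 3.2, 1.2e-2, 3.0
T0, I0, V0 = 4e8, 0.0, 0.75  # target cells, infected cells, virus titer

def rhs(t, y):
    T, I, V = y
    return [-beta * T * V, beta * T * V - delta * I, p * I - c * V]

sol = solve_ivp(rhs, (0.0, 12.0), [T0, I0, V0], dense_output=True,
                rtol=1e-8, atol=1e-8)
t = np.linspace(0.0, 12.0, 400)
V = sol.sol(t)[2]
peak_day = t[np.argmax(V)]
```

Plotting log V against t would show the two nearly linear phases whose slopes the paper's two-phase approximation captures analytically.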
Theoretical conditions for the stationary reproduction of model protocells.
Mavelli, Fabio; Ruiz-Mirazo, Kepa
2013-02-01
In previous works we have explored the dynamics of chemically reacting proto-cellular systems, under different experimental conditions and kinetic parameters, by means of our stochastic simulation platform 'ENVIRONMENT'. In this paper we, somehow, turn the question around: accepting some broad modeling assumptions, we investigate the conditions under which simple protocells will spontaneously settle into a stationary reproducing regime, characterized by a regular growth/division cycle and the maintenance of a certain standard size and chemical composition across generations. In the first part, starting from purely geometric considerations, the condition for stationary reproduction of a protocell will be expressed in terms of a growth control coefficient (γ). Then, an explicit relationship, the osmotic synchronization condition, will be analytically derived under a set of kinetic simplifications and taking into account the osmotic pressure balance operating across the protocell membrane. In the second part of the paper, this general condition that constrains different molecular/kinetic parameters and features of the system (reaction rates, permeability coefficients, metabolite concentrations, system volume) will be applied to different cases of self-producing vesicles, predicting the stationary protocell size or lifetime. Finally, in order to test the validity of our analytic results and predictions, the case study is contrasted with data obtained through both stochastic and deterministic computational algorithms. PMID:23233152
Features of creation of highly accurate models of triumphal pylons for archaeological reconstruction
NASA Astrophysics Data System (ADS)
Grishkanich, A. S.; Sidorov, I. S.; Redka, D. N.
2015-12-01
A measuring operation is described for determining the geometric characteristics of objects in space and for the geodetic survey of objects on the ground. In the course of the work, data were obtained on the relative positioning of the pylons in space; deviations from verticality were found. In comparison with traditional surveying, this testing method is preferable because it yields, in a semi-automated mode, a CAD model of the object suitable for subsequent analysis, which is more economically advantageous.
Mathematical model accurately predicts protein release from an affinity-based delivery system.
Vulic, Katarina; Pakulska, Malgosia M; Sonthalia, Rohit; Ramachandran, Arun; Shoichet, Molly S
2015-01-10
Affinity-based controlled release modulates the delivery of protein or small molecule therapeutics through transient dissociation/association. To understand which parameters can be used to tune release, we used a mathematical model based on simple binding kinetics. A comprehensive asymptotic analysis revealed three characteristic regimes for therapeutic release from affinity-based systems. These regimes can be controlled by diffusion or unbinding kinetics, and can exhibit release over either a single stage or two stages. This analysis fundamentally changes the way we think of controlling release from affinity-based systems and thereby explains some of the discrepancies in the literature on which parameters influence affinity-based release. The rate of protein release from affinity-based systems is determined by the balance of diffusion of the therapeutic agent through the hydrogel and the dissociation kinetics of the affinity pair. Equations for tuning protein release rate by altering the strength (KD) of the affinity interaction, the concentration of binding ligand in the system, the rate of dissociation (koff) of the complex, and the hydrogel size and geometry, are provided. We validated our model by collapsing the model simulations and the experimental data from a recently described affinity release system, to a single master curve. Importantly, this mathematical analysis can be applied to any single species affinity-based system to determine the parameters required for a desired release profile. PMID:25449806
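The binding-kinetics picture can be sketched with a lumped ODE model in which free protein escapes the hydrogel at a first-order rate k_out standing in for diffusion; k_out is an assumed lumping, not a parameter from the paper, and all values are illustrative:

```python
from scipy.integrate import solve_ivp

# Lumped affinity-release model:  C <-> P + L  (k_off, k_on),  P --k_out--> released
k_on, k_off, k_out = 1.0e-2, 1.0e-3, 1.0  # per uM per h, per h, per h (illustrative)
C0, L0 = 10.0, 100.0                       # uM; protein starts fully bound

def rhs(t, y):
    C, P, L = y
    bind = k_on * P * L - k_off * C   # net binding flux
    return [bind, -bind - k_out * P, -bind]

sol = solve_ivp(rhs, (0.0, 2000.0), [C0, 0.0, L0], method="LSODA",
                rtol=1e-8, atol=1e-10)
# Mass balance: whatever is neither bound nor free in the gel has been released
released = C0 - (sol.y[0, -1] + sol.y[1, -1])
frac_released = released / C0
```

With k_out much larger than k_off, release is unbinding-limited at an effective rate of roughly k_off·k_out/(k_out + k_on·L), one of the regimes the asymptotic analysis identifies.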
Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data.
Pagán, Josué; De Orbe, M Irene; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L; Mora, J Vivancos; Moya, José M; Ayala, José L
2015-01-01
Migraine is one of the most wide-spread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and pretty variable; hence, these symptoms are almost useless for prediction, and they are not useful to advance the intake of drugs to be effective and neutralize the pain. To solve this problem, this paper sets up a realistic monitoring scenario where hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities and robustness against noise and failures in sensors of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103
Towards a More Accurate Solar Power Forecast By Improving NWP Model Physics
NASA Astrophysics Data System (ADS)
Köhler, C.; Lee, D.; Steiner, A.; Ritter, B.
2014-12-01
The growing importance and successive expansion of renewable energies raise new challenges for decision makers, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the uncertainties associated with the large share of weather-dependent power sources, so that precise power forecasts, well-timed energy trading on the stock market, and electrical grid stability can be maintained. The research project EWeLiNE is a collaboration of the German Weather Service (DWD), the Fraunhofer Institute (IWES) and three German transmission system operators (TSOs). Together, wind and photovoltaic (PV) power forecasts shall be improved by combining optimized NWP and enhanced power forecast models. The conducted work focuses on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. Not only the representation of the model cloud characteristics, but also special events like Sahara dust over Germany and the solar eclipse in 2015 are treated, and their effect on solar power is accounted for. An overview of the EWeLiNE project and results of the ongoing research will be presented.
Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data
Pagán, Josué; Irene De Orbe, M.; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L.; Vivancos Mora, J.; Moya, José M.; Ayala, José L.
2015-01-01
Migraine is one of the most wide-spread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and pretty variable; hence, these symptoms are almost useless for prediction, and they are not useful to advance the intake of drugs to be effective and neutralize the pain. To solve this problem, this paper sets up a realistic monitoring scenario where hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities and robustness against noise and failures in sensors of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103
NASA Astrophysics Data System (ADS)
Jiang, Yongfei; Zhang, Jun; Zhao, Wanhua
2015-05-01
Hemodynamics altered by stent implantation is well known to be closely related to in-stent restenosis. The computational fluid dynamics (CFD) method has been used to investigate the hemodynamics in stented arteries in detail and to help analyze the performance of stents. In this study, blood models with Newtonian or non-Newtonian properties were numerically investigated for the hemodynamics at steady or pulsatile inlet conditions, respectively, employing CFD based on the finite volume method. The results showed that the blood model with non-Newtonian property decreased the area of low wall shear stress (WSS) compared with the blood model with Newtonian property, and that the magnitude of WSS varied with the magnitude and waveform of the inlet velocity. The study indicates that the inlet conditions and blood models are both important for accurately predicting the hemodynamics. This will be beneficial for estimating the performance of stents and will also help clinicians select the proper stents for their patients.
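A common way to give blood a non-Newtonian (shear-thinning) property in such CFD studies is a generalized Newtonian viscosity law. The Carreau model with frequently quoted blood parameters is sketched below as an assumption, since the abstract does not name the specific constitutive model used:

```python
import numpy as np

def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
    """Carreau viscosity often used for blood (Pa.s, s); parameter values are
    common literature defaults and are an assumption here, not the paper's."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

gamma = np.logspace(-2, 3, 6)   # shear rates from 0.01 to 1000 1/s
mu = carreau_viscosity(gamma)    # viscosity falls as shear rate rises
```

At low shear the model approaches mu0 and at high shear it approaches mu_inf, which is why a Newtonian model (a single constant viscosity) over- or under-predicts WSS in the low-shear recirculation zones near struts.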
A Measurement-Theoretic Analysis of the Fuzzy Logic Model of Perception.
ERIC Educational Resources Information Center
Crowther, Court S.; And Others
1995-01-01
The fuzzy logic model of perception (FLMP) is analyzed from a measurement-theoretic perspective. The choice rule of FLMP is shown to be equivalent to a version of the Rasch model. In fact, FLMP can be reparameterized as a simple two-category logit model. (SLD)
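The equivalence noted above is easy to verify: the FLMP choice rule combines two fuzzy truth values multiplicatively and normalizes against the complementary response, which is exactly additive on the logit scale (a two-category logit, Rasch-type model). A minimal check:

```python
import math

def flmp_choice(theta_a, theta_v):
    """FLMP choice rule for two sources (e.g. auditory and visual) of support."""
    num = theta_a * theta_v
    return num / (num + (1.0 - theta_a) * (1.0 - theta_v))

def logit(p):
    return math.log(p / (1.0 - p))

# Reparameterization: logit(P) = logit(theta_a) + logit(theta_v)
ta, tv = 0.8, 0.3
p = flmp_choice(ta, tv)
additivity_gap = logit(p) - (logit(ta) + logit(tv))
```

The algebra is immediate: p/(1-p) = (ta·tv)/((1-ta)(1-tv)), so the log-odds of the response decompose additively into per-source terms.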
Accurate 3d Textured Models of Vessels for the Improvement of the Educational Tools of a Museum
NASA Astrophysics Data System (ADS)
Soile, S.; Adam, K.; Ioannidis, C.; Georgopoulos, A.
2013-02-01
Besides the demonstration of the findings, modern museums organize educational programs which aim at experience and knowledge sharing combined with entertainment rather than pure learning. Toward that effort, 2D and 3D digital representations are gradually replacing the traditional recording of the findings through photos or drawings. The present paper refers to a project that aims to create 3D textured models of two lekythoi that are exhibited in the National Archaeological Museum of Athens in Greece; on the surfaces of these lekythoi, scenes of the adventures of Odysseus are depicted. The project is expected to support the production of an educational movie and other relevant interactive educational programs for the museum. The creation of accurate developments of the paintings and of accurate 3D models is the basis for the visualization of the adventures of the mythical hero. The data collection was made by using a structured light scanner consisting of two machine vision cameras that are used for the determination of the geometry of the object, a high resolution camera for the recording of the texture, and a DLP projector. The creation of the final accurate 3D textured model is a complicated and tiring procedure which includes the collection of geometric data, the creation of the surface, noise filtering, the merging of individual surfaces, the creation of a c-mesh, the creation of the UV map, the provision of the texture and, finally, the general processing of the 3D textured object. For a better result, a combination of commercial and in-house software, made for the automation of various steps of the procedure, was used. The results derived from the above procedure were especially satisfactory in terms of accuracy and quality of the model. However, the procedure proved to be time-consuming, while the use of various software packages presumes the services of a specialist.
Key Issues for an Accurate Modelling of GaSb TPV Converters
NASA Astrophysics Data System (ADS)
Martín, Diego; Algora, Carlos
2003-01-01
GaSb TPV devices are commonly manufactured by Zn diffusion from the vapour phase on a n-type substrate, leading to very high doping concentrations in a narrow emitter. This fact emphasizes the need for careful modelling that must include high doping effects to simulate the optoelectronic behaviour of devices. In this work, the key parameters that have a strong influence on the performance of GaSb TPV devices are underlined, more reliable values are suggested, and our first results on the study of the dependence of the absorption coefficient on p-type high doping concentration are presented.
NASA Astrophysics Data System (ADS)
Freire, Hermann; Corrêa, Eberth
2012-02-01
We apply a functional implementation of the field-theoretical renormalization group (RG) method up to two loops to the single-impurity Anderson model. To achieve this, we follow a RG strategy similar to that proposed by Vojta et al. (in Phys. Rev. Lett. 85:4940, 2000), which consists of defining a soft ultraviolet regulator in the space of Matsubara frequencies for the renormalized Green's function. Then we proceed to derive analytically and solve numerically integro-differential flow equations for the effective couplings and the quasiparticle weight of the present model, which fully treat the interplay of particle-particle and particle-hole parquet diagrams and the effect of the two-loop self-energy feedback into them. We show that our results correctly reproduce accurate numerical renormalization group data for weak to slightly moderate interactions. These results are in excellent agreement with other functional Wilsonian RG works available in the literature. Since the field-theoretical RG method turns out to be easier to implement at higher loops than the Wilsonian approach, higher-order calculations within the present approach could improve further the results for this model at stronger couplings. We argue that the present RG scheme could thus offer a possible alternative to other functional RG methods to describe electronic correlations within this model.
Dorn, Jonas F; Zhang, Li; Phi, Tan-Trao; Lacroix, Benjamin; Maddox, Paul S; Liu, Jian; Maddox, Amy Shaub
2016-04-15
During cytokinesis, the cell undergoes a dramatic shape change as it divides into two daughter cells. Cell shape changes in cytokinesis are driven by a cortical ring rich in actin filaments and nonmuscle myosin II. The ring closes via actomyosin contraction coupled with actin depolymerization. Of interest, ring closure and hence the furrow ingression are nonconcentric (asymmetric) within the division plane across Metazoa. This nonconcentricity can occur and persist even without preexisting asymmetric cues, such as spindle placement or cellular adhesions. Cell-autonomous asymmetry is not explained by current models. We combined quantitative high-resolution live-cell microscopy with theoretical modeling to explore the mechanistic basis for asymmetric cytokinesis in the Caenorhabditis elegans zygote, with the goal of uncovering basic principles of ring closure. Our theoretical model suggests that feedback among membrane curvature, cytoskeletal alignment, and contractility is responsible for asymmetric cytokinetic furrowing. It also accurately predicts experimental perturbations of conserved ring proteins. The model further suggests that curvature-mediated filament alignment speeds up furrow closure while promoting energy efficiency. Collectively our work underscores the importance of membrane-cytoskeletal anchoring and suggests conserved molecular mechanisms for this activity. PMID:26912796
Shentu, Nanying; Zhang, Hongjian; Li, Qing; Zhou, Hongliang; Tong, Renyuan; Li, Xiong
2012-01-01
Deep displacement observation is a basic means of landslide dynamics study and early-warning monitoring, and a key part of engineering geological investigation. In our previous work, we proposed a novel electromagnetic induction-based deep displacement sensor (I-type) to predict deep horizontal displacement, and a theoretical model called the equation-based equivalent loop approach (EELA) to describe its sensing characteristics. However, in many landslides and related geological engineering cases, both horizontal and vertical displacements vary appreciably and dynamically, so both may require monitoring. In this study, a II-type deep displacement sensor is designed by revising our I-type sensor to simultaneously monitor deep horizontal and vertical displacement variations at different depths within a sliding mass. Meanwhile, a new theoretical model called the numerical integration-based equivalent loop approach (NIELA) is proposed to quantitatively describe the mutual-inductance properties of II-type sensors with respect to the predicted horizontal and vertical displacements. After detailed examinations and comparative studies between the measured mutual-inductance voltage, the NIELA-based mutual inductance and the EELA-based mutual inductance, NIELA has been verified to be an effective and accurate analytical model for characterizing II-type sensors. The NIELA model is widely applicable to II-type sensor monitoring of all kinds of landslides and other related geohazards, with satisfactory estimation accuracy and calculation efficiency. PMID:22368467
Multiconjugate adaptive optics applied to an anatomically accurate human eye model.
Bedggood, P A; Ashman, R; Smith, G; Metha, A B
2006-09-01
Aberrations of both astronomical telescopes and the human eye can be successfully corrected with conventional adaptive optics. This produces diffraction-limited imagery over a limited field of view called the isoplanatic patch. A new technique, known as multiconjugate adaptive optics, has been developed recently in astronomy to increase the size of this patch. The key is to model atmospheric turbulence as several flat, discrete layers. A human eye, however, has several curved, aspheric surfaces and a gradient index lens, complicating the task of correcting aberrations over a wide field of view. Here we utilize a computer model to determine the degree to which this technology may be applied to generate high resolution, wide-field retinal images, and discuss the considerations necessary for optimal use with the eye. The Liou and Brennan schematic eye simulates the aspheric surfaces and gradient index lens of real human eyes. We show that the size of the isoplanatic patch of the human eye is significantly increased through multiconjugate adaptive optics. PMID:19529172
Considering mask pellicle effect for more accurate OPC model at 45nm technology node
NASA Astrophysics Data System (ADS)
Wang, Ching-Heng; Liu, Qingwei; Zhang, Liguo
2008-11-01
The 45 nm technology node is the first generation to use immersion microlithography, and with this new lithography tool many optical effects that could be ignored at the 90 nm and 65 nm nodes now have a significant impact on pattern transfer from design to silicon. Among these effects, one that demands attention is the impact of the mask pellicle on critical dimension variation. With the introduction of hyper-NA lithography tools, the assumption that light passes through the mask pellicle at normal incidence is no longer a good approximation, and the image blurring induced by the mask pellicle should be taken into account in computational microlithography. In this work we investigate how the mask pellicle affects the accuracy of the OPC model, and we show that, given the extremely tight critical dimension control specifications of the 45 nm node, incorporating the mask pellicle effect into the OPC model has become necessary.
A beginner's guide to writing the nursing conceptual model-based theoretical rationale.
Gigliotti, Eileen; Manister, Nancy N
2012-10-01
Writing the theoretical rationale for a study can be a daunting prospect for novice researchers. Nursing's conceptual models provide excellent frameworks for placement of study variables, but moving from the very abstract concepts of the nursing model to the less abstract concepts of the study variables is difficult. Similar to the five-paragraph essay used by writing teachers to assist beginning writers to construct a logical thesis, the authors of this column present guidelines that beginners can follow to construct their theoretical rationale. This guide can be used with any nursing conceptual model but Neuman's model was chosen here as the exemplar. PMID:23087334
Accurate modeling of light trapping in thin film silicon solar cells
Abouelsaood, A.A.; Ghannam, M.Y.; Poortmans, J.; Mertens, R.P.
1997-12-31
An attempt is made to assess the accuracy of the simplifying assumption, made in many models of optical confinement in thin-film silicon solar cells, that light inside the escape (loss) cone is totally retransmitted. A closed-form expression is derived for the absorption enhancement factor as a function of the refractive index in the low-absorption limit for a thin-film cell with a flat front surface and a Lambertian back reflector. Numerical calculations are carried out to investigate similar systems with antireflection coatings, and cells with a textured front surface are investigated using a modified version of the existing ray-tracing simulation program TEXTURE.
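For orientation on the magnitudes involved: in the same low-absorption limit, the classic Lambertian ("Yablonovitch") bound gives a path-length enhancement of 4n². The sketch below uses that textbook bound, not the paper's own closed-form expression for the flat-front/Lambertian-back geometry:

```python
import math

def lambertian_enhancement(n):
    """Classic Yablonovitch path-length enhancement bound: 4*n**2."""
    return 4.0 * n * n

def absorption(alpha, thickness, enhancement):
    """Beer-Lambert absorption with an effective path-length enhancement
    factor (a valid picture only in the low-absorption limit)."""
    return 1.0 - math.exp(-enhancement * alpha * thickness)
```

For silicon (n ≈ 3.5) the bound is 49, which is why even a thin film can absorb a substantial fraction of weakly absorbed near-band-gap light.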
NASA Astrophysics Data System (ADS)
Chien Chang, Jia-Ren; Tai, Cheng-Chi
2006-07-01
This article reports on the design and development of a complete, programmable electrocardiogram (ECG) generator, which can be used for the testing, calibration and maintenance of electrocardiograph equipment. A modified mathematical model, developed from the three coupled ordinary differential equations of McSharry et al. [IEEE Trans. Biomed. Eng. 50, 289 (2003)], was used to locate precisely the positions of the onset, termination, angle, and duration of individual components in an ECG. Generator facilities are provided so the user can adjust the signal amplitude, heart rate, QRS-complex slopes, and P- and T-wave settings. The heart rate can be adjusted in increments of 1 BPM (beats per minute), from 20 to 176 BPM, while the amplitude of the ECG signal can be set from 0.1 to 400 mV with a 0.1 mV resolution. Experimental results show that the proposed concept and the resulting system are feasible.
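The McSharry model referenced above drives a point around a unit-circle limit cycle and synthesizes the ECG as a sum of Gaussian events at fixed angles for the P, Q, R, S and T waves. A minimal sketch of the core ODEs, using commonly quoted default parameters (not necessarily those of this generator, and without the amplitude scaling a real instrument would apply):

```python
import math

# Event angles (rad), amplitudes and widths for the P, Q, R, S, T waves
THETA = [-math.pi / 3, -math.pi / 12, 0.0, math.pi / 12, math.pi / 2]
A = [1.2, -5.0, 30.0, -7.5, 0.75]
B = [0.25, 0.1, 0.1, 0.1, 0.4]

def ecg(duration=2.0, dt=1 / 512, bpm=60):
    """Euler-integrate the McSharry-style dynamical ECG model."""
    omega = 2 * math.pi * bpm / 60      # angular velocity on the limit cycle
    x, y, z = -1.0, 0.0, 0.0
    out = []
    for _ in range(int(duration / dt)):
        alpha = 1.0 - math.hypot(x, y)  # radial relaxation to the unit circle
        theta = math.atan2(y, x)
        dx = alpha * x - omega * y
        dy = alpha * y + omega * x
        dz = -z                         # baseline relaxation (z0 = 0 here)
        for ti, ai, bi in zip(THETA, A, B):
            dth = (theta - ti + math.pi) % (2 * math.pi) - math.pi
            dz -= ai * dth * math.exp(-dth * dth / (2 * bi * bi))
        x += dt * dx
        y += dt * dy
        z += dt * dz
        out.append(z)
    return out
```

The generator described in the abstract builds on the same structure, adding user-adjustable amplitude, heart-rate and wave settings on top.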
TRIM—3D: a three-dimensional model for accurate simulation of shallow water flow
Casulli, Vincenzo; Bertolazzi, Enrico; Cheng, Ralph T.
1993-01-01
A semi-implicit finite difference formulation for the numerical solution of three-dimensional tidal circulation is discussed. The governing equations are the three-dimensional Reynolds equations in which the pressure is assumed to be hydrostatic. A minimal degree of implicitness has been introduced in the finite difference formula so that the resulting algorithm permits the use of large time steps at a minimal computational cost. This formulation includes the simulation of flooding and drying of tidal flats, and is fully vectorizable for an efficient implementation on modern vector computers. The high computational efficiency of this method has made it possible to provide the fine details of circulation structure in complex regions that previous studies were unable to obtain. For proper interpretation of the model results suitable interactive graphics is also an essential tool.
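The essence of the semi-implicit idea is easiest to see in one dimension: treating the surface-gradient term of the momentum equation and the velocity divergence of the continuity equation implicitly yields a tridiagonal system for the new surface elevation, which removes the gravity-wave CFL restriction. A minimal linearized sketch (closed basin, staggered grid; an illustration of the principle, not the TRIM-3D discretization):

```python
def step(eta, u, H, g, dt, dx):
    """One semi-implicit step of the linearized 1-D shallow-water equations.

    eta: surface elevation at N cell centres; u: velocity at N+1 faces,
    with u[0] = u[N] = 0 (closed basin). Returns updated (eta, u).
    """
    N = len(eta)
    c = g * H * dt * dt / (dx * dx)
    # Explicit part of the continuity equation
    d = [eta[i] - H * dt / dx * (u[i + 1] - u[i]) for i in range(N)]
    # Tridiagonal system (1 + 2c) eta_i - c eta_{i-1} - c eta_{i+1} = d_i,
    # with one-sided rows at the closed boundaries. Thomas algorithm:
    a = [-c] * N            # sub-diagonal
    b = [1 + 2 * c] * N     # diagonal
    cc = [-c] * N           # super-diagonal
    b[0] = b[-1] = 1 + c    # boundary faces carry no implicit flux
    for i in range(1, N):
        m = a[i] / b[i - 1]
        b[i] -= m * cc[i - 1]
        d[i] -= m * d[i - 1]
    eta_new = [0.0] * N
    eta_new[-1] = d[-1] / b[-1]
    for i in range(N - 2, -1, -1):
        eta_new[i] = (d[i] - cc[i] * eta_new[i + 1]) / b[i]
    # Back-substitute the new elevations into the momentum equation
    u_new = [0.0] * (N + 1)
    for j in range(1, N):
        u_new[j] = u[j] - g * dt / dx * (eta_new[j] - eta_new[j - 1])
    return eta_new, u_new
```

Because the implicit flux differences telescope, the scheme conserves mass exactly even at wave Courant numbers well above one.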
A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system
Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob
2013-01-01
Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541
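Computationally, the "microbial clock" is a regression problem: taxa abundances are the features and the postmortem interval is the response. A toy sketch with synthetic succession curves and a nearest-neighbour lookup (the study itself fit regression models to real sequencing data; all curve shapes and taxa below are invented for illustration):

```python
import math
import random

random.seed(0)

def community(t, noise=0.02):
    """Synthetic relative abundances of 4 taxa following logistic
    succession curves over a 48-day decomposition (toy data)."""
    centers = [5.0, 15.0, 25.0, 40.0]
    raw = [1.0 / (1.0 + math.exp(-(t - c) / 3.0)) + random.gauss(0, noise)
           for c in centers]
    s = sum(raw)
    return [r / s for r in raw]

def predict_pmi(sample, train):
    """1-nearest-neighbour regression in abundance space."""
    return min(train, key=lambda rec: sum((a - b) ** 2
               for a, b in zip(rec[1], sample)))[0]

# Training set: one community profile per day, 0..48
train = [(t, community(t)) for t in range(49)]
```

The nearest-neighbour lookup stands in for the study's machine-learning regressor; the point is only that a reproducible succession of abundances maps back to elapsed time.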
Biomechanical modeling provides more accurate data for neuronavigation than rigid registration
Garlapati, Revanth Reddy; Roy, Aditi; Joldes, Grand Roman; Wittek, Adam; Mostayed, Ahmed; Doyle, Barry; Warfield, Simon Keith; Kikinis, Ron; Knuckey, Neville; Bunt, Stuart; Miller, Karol
2015-01-01
It is possible to improve neuronavigation during image-guided surgery by warping the high-quality preoperative brain images so that they correspond with the current intraoperative configuration of the brain. In this work, the accuracy of registration results obtained using comprehensive biomechanical models is compared to the accuracy of rigid registration, the technology currently available to patients. This comparison allows us to investigate whether biomechanical modeling provides good-quality image data for neuronavigation for a larger proportion of patients than rigid registration. Preoperative images for 33 neurosurgery cases were warped onto their respective intraoperative configurations using both the biomechanics-based method and rigid registration. We used a Hausdorff distance-based evaluation process that measures the difference between images to quantify the performance of both registration methods. A statistical test for difference in proportions was conducted to evaluate the null hypothesis that the proportion of patients for whom improved neuronavigation can be achieved is the same for rigid and biomechanics-based registration. The null hypothesis was confidently rejected (p-value < 10⁻⁴). Even the modified hypothesis that less than 25% of patients would benefit from the use of biomechanics-based registration was rejected at a significance level of 5% (p-value = 0.02). The biomechanics-based method proved particularly effective for cases experiencing large craniotomy-induced brain deformations. The outcome of this analysis suggests that our nonlinear biomechanics-based methods are beneficial to a large proportion of patients and can be considered for use in the operating theatre as one possible means of improving neuronavigation and surgical outcomes. PMID:24460486
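The statistical test described above is a standard large-sample comparison of two proportions. A sketch, with hypothetical counts (the abstract reports the p-values, not the underlying success counts assumed here):

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """One-sided large-sample z-test for H0: p_a <= p_b.
    Returns (z, p_value)."""
    pa, pb = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (pa - pb) / se
    p = 0.5 * math.erfc(z / math.sqrt(2))   # 1 - Phi(z)
    return z, p
```

With, say, 31 of 33 cases improved under biomechanics-based registration versus 15 of 33 under rigid registration (both counts hypothetical), z ≈ 4.3 and the one-sided p-value is on the order of 10⁻⁵, comparable to the rejection level reported in the abstract.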
A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates
NASA Astrophysics Data System (ADS)
Savanevych, V. E.; Briukhovetskyi, O. B.; Sokovikova, N. S.; Bezkrovny, M. M.; Vavilova, I. B.; Ivashchenko, Yu. M.; Elenin, L. V.; Khlamov, S. V.; Movsesian, Ia. S.; Dashkova, A. M.; Pogorelov, A. V.
2015-08-01
We describe a new iterative method to estimate asteroid coordinates, based on a subpixel Gaussian model of the discrete object image. The method operates with continuous parameters (asteroid coordinates) in the discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The method, which adapts flexibly to any form of object image, has high measurement accuracy along with low computational complexity, because a maximum-likelihood procedure is implemented to obtain the best fit, rather than a least-squares method with the Levenberg-Marquardt algorithm for minimization of the quadratic form. Since 2010, the method has been tested as the basis of our Collection Light Technology (COLITEC) software, which has been installed at several observatories across the world with the aim of automatically discovering asteroids and comets in sets of CCD frames. As a result, four comets (C/2010 X1 (Elenin), P/2011 NO1 (Elenin), C/2012 S1 (ISON) and P/2013 V3 (Nevski)) as well as more than 1500 small Solar system bodies (including five near-Earth objects (NEOs), 21 Trojan asteroids of Jupiter and one Centaur object) have been discovered. We discuss these results, which allowed us to compare the accuracy parameters of the new method and confirm its efficiency. In 2014, the COLITEC software was recommended to all members of the Gaia-FUN-SSO network as a tool for detecting faint moving objects in frames.
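A much-simplified stand-in for the idea of subpixel localization (not the maximum-likelihood procedure of COLITEC): for a Gaussian profile the logarithm of pixel intensity is quadratic in position, so a three-point parabola through the brightest pixel and its neighbours recovers the centre to subpixel accuracy on noise-free data:

```python
import math

def gaussian_image(x0, y0, sigma=1.5, size=25, amp=100.0):
    """Pixel grid sampled from a separable 2-D Gaussian (noise-free toy)."""
    return [[amp * math.exp(-((x - x0) ** 2 + (y - y0) ** 2)
                            / (2 * sigma ** 2))
             for x in range(size)] for y in range(size)]

def subpixel_peak(img):
    """Locate the maximum pixel, then refine each axis with a parabola
    through the log-intensities of the peak pixel and its two neighbours."""
    ny, nx = len(img), len(img[0])
    py, px = max(((y, x) for y in range(ny) for x in range(nx)),
                 key=lambda p: img[p[0]][p[1]])

    def refine(fm, f0, fp):
        lm, l0, lp = math.log(fm), math.log(f0), math.log(fp)
        return 0.5 * (lm - lp) / (lm - 2 * l0 + lp)

    dx = refine(img[py][px - 1], img[py][px], img[py][px + 1])
    dy = refine(img[py - 1][px], img[py][px], img[py + 1][px])
    return px + dx, py + dy
```

For a noiseless sampled Gaussian the log-intensities are exactly quadratic, so the parabolic refinement is exact; the real method has to handle photon noise, which is where the maximum-likelihood fit earns its keep.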
NASA Astrophysics Data System (ADS)
Tao, Jianmin; Rappe, Andrew M.
2016-01-01
Due to the absence of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of the vdW correction. However, the vdW correction based on the leading-order coefficient C6 alone can only achieve limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task, due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C8 and C10 between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C8 and 7% for C10. Inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry.
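The single-effective-frequency idea goes back to London's formula for the leading coefficient. The sketch below implements only that leading-order C6 expression (the paper's model extends the idea to C8 and C10, which are not reproduced here):

```python
def london_c6(alpha1, alpha2, e1, e2):
    """London dispersion coefficient (atomic units) in the single
    effective-frequency (Unsold) approximation:
    C6 = (3/2) * alpha1 * alpha2 * e1 * e2 / (e1 + e2)."""
    return 1.5 * alpha1 * alpha2 * e1 * e2 / (e1 + e2)
```

For two hydrogen atoms (static polarizability α ≈ 4.5 a.u., effective excitation energy ≈ 0.5 a.u.) the formula gives C6 ≈ 7.6 a.u. against the exact 6.499 a.u., the familiar 15-20% accuracy of the London estimate.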
Lupaşcu, Carmen Alina; Tegolo, Domenico; Trucco, Emanuele
2013-12-01
We present an algorithm estimating the width of retinal vessels in fundus camera images. The algorithm uses a novel parametric surface model of the cross-sectional intensities of vessels, and ensembles of bagged decision trees to estimate the local width from the parameters of the best-fit surface. We report comparative tests with REVIEW, currently the public database of reference for retinal width estimation, containing 16 images with 193 annotated vessel segments and 5066 profile points annotated manually by three independent experts. Comparative tests are reported also with our own set of 378 vessel widths selected sparsely in 38 images from the Tayside Scotland diabetic retinopathy screening programme and annotated manually by two clinicians. We obtain considerably better accuracies compared to leading methods in REVIEW tests and in Tayside tests. An important advantage of our method is its stability (success rate, i.e., meaningful measurement returned, of 100% on all REVIEW data sets and on the Tayside data set) compared to a variety of methods from the literature. We also find that results depend crucially on testing data and conditions, and discuss criteria for selecting a training set yielding optimal accuracy. PMID:24001930
Slodownik, Dan; Grinberg, Igor; Spira, Ram M; Skornik, Yehuda; Goldstein, Ronald S
2009-04-01
The current standard method for predicting contact allergenicity is the murine local lymph node assay (LLNA). Public objection to the use of animals in testing of cosmetics makes the development of a system that does not use sentient animals highly desirable. The chorioallantoic membrane (CAM) of the chick egg has been extensively used for the growth of normal and transformed mammalian tissues. The CAM is not innervated, and embryos are sacrificed before the development of pain perception. The aim of this study was to determine whether the sensitization phase of contact dermatitis to known cosmetic allergens can be quantified using CAM-engrafted human skin and how these results compare with published EC3 data obtained with the LLNA. We studied six common molecules used in allergen testing and quantified migration of epidermal Langerhans cells (LC) as a measure of their allergic potency. All agents with known allergic potential induced statistically significant migration of LC. The data obtained correlated well with published data for these allergens generated using the LLNA test. The human-skin CAM model therefore has great potential as an inexpensive, non-radioactive, in vivo alternative to the LLNA, which does not require the use of sentient animals. In addition, this system has the advantage of testing the allergic response of human, rather than animal skin. PMID:19054059
Accurate Estimation of Protein Folding and Unfolding Times: Beyond Markov State Models.
Suárez, Ernesto; Adelman, Joshua L; Zuckerman, Daniel M
2016-08-01
Because standard molecular dynamics (MD) simulations are unable to access time scales of interest in complex biomolecular systems, it is common to "stitch together" information from multiple shorter trajectories using approximate Markov state model (MSM) analysis. However, MSMs may require significant tuning and can yield biased results. Here, by analyzing some of the longest protein MD data sets available (>100 μs per protein), we show that estimators constructed based on exact non-Markovian (NM) principles can yield significantly improved mean first-passage times (MFPTs) for protein folding and unfolding. In some cases, MSM bias of more than an order of magnitude can be corrected when identical trajectory data are reanalyzed by non-Markovian approaches. The NM analysis includes "history" information, higher order time correlations compared to MSMs, that is available in every MD trajectory. The NM strategy is insensitive to fine details of the states used and works well when a fine time-discretization (i.e., small "lag time") is used. PMID:27340835
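The contrast drawn above is between estimators that impose Markovianity on coarse states and estimators built directly from observed first-passage events. The direct estimator needs nothing but the trajectory, as in this toy sketch on an unbiased random walk, whose exact MFPT from 0 to L (with a reflecting wall at 0) is L² steps:

```python
import random

def first_passage_time(L, rng):
    """Steps for an unbiased +/-1 walk, reflecting at 0, to first reach L."""
    x, t = 0, 0
    while x < L:
        x = 1 if x == 0 else x + rng.choice((-1, 1))
        t += 1
    return t

def mfpt(L, n_runs=2000, seed=0):
    """Direct Monte Carlo estimate of the mean first-passage time."""
    rng = random.Random(seed)
    return sum(first_passage_time(L, rng) for _ in range(n_runs)) / n_runs
```

On real MD data the analogous direct estimator uses the full history of each trajectory, which is what makes it insensitive to the state definitions that can bias an MSM.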
Single Droplet on Micro Square-Post Patterned Surfaces – Theoretical Model and Numerical Simulation
Zu, Y. Q.; Yan, Y. Y.
2016-01-01
In this study, the wetting behaviour of a single droplet on micro square-post patterned surfaces with different geometrical parameters is investigated theoretically and numerically. A theoretical model is proposed for predicting the wetting transition from the Cassie to the Wenzel regime. In addition, owing to the limitations of the theoretical method, a numerical simulation is performed, which provides a view of the dynamic contact lines, detailed velocity fields, etc., even when the droplet size is comparable to the scale of the surface microstructures. The numerical results for the behaviour of liquid drops on the square-post patterned surface are found to be in good agreement with the values predicted by the theoretical model. PMID:26775561
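For a square-post array the textbook Cassie-Baxter and Wenzel relations already take a closed form, which is the natural starting point for any transition model. A sketch of those standard relations (the paper's own transition criterion is more detailed):

```python
import math

def apparent_angles(theta_deg, a, b, h):
    """Cassie-Baxter and Wenzel apparent contact angles (degrees) for a
    square-post array: post width a, gap b, post height h."""
    p = a + b                          # pitch of the array
    f = a * a / (p * p)                # wetted solid fraction (Cassie)
    r = 1 + 4 * a * h / (p * p)        # roughness ratio (Wenzel)
    ct = math.cos(math.radians(theta_deg))
    cos_cb = f * (ct + 1) - 1          # Cassie-Baxter relation
    cos_w = max(-1.0, min(1.0, r * ct))  # Wenzel relation, clamped
    return (math.degrees(math.acos(cos_cb)),
            math.degrees(math.acos(cos_w)))
```

For θ = 110°, a = b = 10 and h = 20 (arbitrary length units) the Cassie-Baxter angle is about 147°; here r·cosθ < -1, meaning the Wenzel relation saturates (no physical Wenzel angle exists for this roughness), which the code signals by clamping to 180°.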
Nurses' self-relation--becoming theoretically competent: the SAUC model for confirming nursing.
Gustafsson, Barbro; Willman, Ania M
2003-07-01
The purpose of this study was to acquire an understanding of how nurses' self-relation (view of themselves as nurses) was influenced in connection with implementation of a nursing theory, the sympathy-acceptance-understanding-competence model for confirming nursing. This model was developed by Gustafsson and Pörn. Twenty-two nurses' written statements evaluating mentoring during the six-month implementation process in elder care, were analyzed hermeneutically with the hypothetic-deductive method. An action-theoretic and confirmatory approach was used for facilitating theoretically specified hypotheses. The nurses increased their ability to describe nursing theoretically and gained a foundation of common nursing values. The results provided an understanding of how nurses' self-relation was strengthened by becoming theoretically competent. PMID:12876885
NASA Astrophysics Data System (ADS)
Weber, Tobias K. D.; Riedel, Thomas
2015-04-01
Free water is a prerequisite for the chemical reactions and biological activity in the Earth's upper crust that are essential to life. The void volume between the solid compounds provides space for water, air, and organisms that thrive on the consumption of minerals and organic matter, thereby regulating soil carbon turnover. However, not all water in the pore space of soils and sediments is in its liquid state. This is a result of the adhesive forces that reduce the water activity in small pores and at charged mineral surfaces. Such water has a lower tendency to react chemically in solution, as the additional binding energy lowers its activity. In this work, we estimated the amount of soil pore water that is thermodynamically different from a simple aqueous solution. The quantity of soil pore water with properties different from liquid water was found to increase systematically with increasing clay content. The significance of this is that grain size and surface area apparently affect the thermodynamic state of water. This implies that current methods of determining water content, traditionally based on bulk density or on the gravimetric water content after drying at 105°C, overestimate the amount of free water in a soil, especially at higher clay contents. Our findings have consequences for biogeochemical processes in soils; for example, nutrients may be contained in water that is not free, which could enhance preservation. From water activity measurements on a set of soils with 0 to 100 wt-% clay, we show that 5 to 130 mg H2O per g of soil can generally be considered unsuitable for microbial respiration. These results may therefore provide a unifying explanation for the grain-size dependency of organic matter preservation in sedimentary environments and call for a revised view of the biogeochemical environment in soils and sediments. This could allow a different type of process-oriented modelling.
van Wyk, Marnus J; Bingle, Marianne; Meyer, Frans J C
2005-09-01
International bodies such as International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Institute for Electrical and Electronic Engineering (IEEE) make provision for human exposure assessment based on SAR calculations (or measurements) and basic restrictions. In the case of base station exposure this is mostly applicable to occupational exposure scenarios in the very near field of these antennas where the conservative reference level criteria could be unnecessarily restrictive. This study presents a variety of critical aspects that need to be considered when calculating SAR in a human body close to a mobile phone base station antenna. A hybrid FEM/MoM technique is proposed as a suitable numerical method to obtain accurate results. The verification of the FEM/MoM implementation has been presented in a previous publication; the focus of this study is an investigation into the detail that must be included in a numerical model of the antenna, to accurately represent the real-world scenario. This is accomplished by comparing numerical results to measurements for a generic GSM base station antenna and appropriate, representative canonical and human phantoms. The results show that it is critical to take the disturbance effect of the human phantom (a large conductive body) on the base station antenna into account when the antenna-phantom spacing is less than 300 mm. For these small spacings, the antenna structure must be modeled in detail. The conclusion is that it is feasible to calculate, using the proposed techniques and methodology, accurate occupational compliance zones around base station antennas based on a SAR profile and basic restriction guidelines. PMID:15931680
Experimental and Simulational Studies on the Theoretical Model of the Plasma Absorption Probe
NASA Astrophysics Data System (ADS)
Li, Bin; Li, Hong; Chen, Zhipeng; Xie, Jinlin; Feng, Guangyao; Liu, Wandong
2010-10-01
The plasma absorption probe (PAP) was developed for measuring the electron density in plasma processing, based on surface-wave characteristics. In order to diagnose plasmas with lower density and higher pressure, a sensitive PAP was also developed. Both types of PAP were analyzed theoretically under the quasi-static approximation, which is highly problematic when a conductor exists in the resonance region of the probe. For this reason, a theoretical model for the PAP is presented in this paper. The model is derived from the electromagnetic wave equation, and its principle is then verified via experiments and numerical simulations. Both experimental and numerical results show that the electromagnetic theoretical model is valid as compared with the quasi-static model. Consequently, a new type of PAP, named the electromagnetic PAP, is proposed for the measurement of electron density.
Panagiotopoulou, O; Wilshin, S D; Rayfield, E J; Shefelbine, S J; Hutchinson, J R
2012-02-01
Finite element modelling is well entrenched in comparative vertebrate biomechanics as a tool to assess the mechanical design of skeletal structures and to better comprehend the complex interaction of their form-function relationships. But what makes a reliable subject-specific finite element model? To approach this question, we here present a set of convergence and sensitivity analyses and a validation study as an example, for finite element analysis (FEA) in general, of ways to ensure a reliable model. We detail how choices of element size, type and material properties in FEA influence the results of simulations. We also present an empirical model for estimating heterogeneous material properties throughout an elephant femur (but of broad applicability to FEA). We then use an ex vivo experimental validation test of a cadaveric femur to check our FEA results and find that the heterogeneous model matches the experimental results extremely well, and far better than the homogeneous model. We emphasize how considering heterogeneous material properties in FEA may be critical, so this should become standard practice in comparative FEA studies along with convergence analyses, consideration of element size, type and experimental validation. These steps may be required to obtain accurate models and derive reliable conclusions from them. PMID:21752810
Simple control-theoretic models of human steering activity in visually guided vehicle control
NASA Technical Reports Server (NTRS)
Hess, Ronald A.
1991-01-01
A simple control theoretic model of human steering or control activity in the lateral-directional control of vehicles such as automobiles and rotorcraft is discussed. The term 'control theoretic' is used to emphasize the fact that the model is derived from a consideration of well-known control system design principles as opposed to psychological theories regarding egomotion, etc. The model is employed to emphasize the 'closed-loop' nature of tasks involving the visually guided control of vehicles upon, or in close proximity to, the earth and to hypothesize how changes in vehicle dynamics can significantly alter the nature of the visual cues which a human might use in such tasks.
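A concrete instance of such a control-theoretic model is McRuer's crossover idea: the human acts approximately as a gain plus a reaction-time delay, so that human plus vehicle behave like ω_c·e^(-τs)/s near crossover. A toy closed-loop steering sketch assuming an integrator plant and a 0.2 s effective operator delay (illustrative only, not the model of the paper):

```python
def track(wc=2.0, tau=0.2, dt=0.01, t_end=10.0, ref=1.0):
    """Closed loop: operator u(t) = wc * e(t - tau); plant dy/dt = u."""
    delay_steps = int(tau / dt)
    buf = [0.0] * delay_steps      # transport-delay buffer for the error
    y, out = 0.0, []
    for _ in range(int(t_end / dt)):
        e = ref - y
        buf.append(e)
        u = wc * buf.pop(0)        # operator acts on the delayed error
        y += dt * u                # integrator plant (Euler step)
        out.append(y)
    return out
```

Raising the operator gain until ω_c·τ exceeds π/2 destabilizes the loop, the kind of qualitative change in closed-loop behaviour, and hence in usable visual cues, that the paper discusses.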
Leite, Fabio L.; Bueno, Carolina C.; Da Róz, Alessandra L.; Ziemath, Ervino C.; Oliveira, Osvaldo N.
2012-01-01
The increasing importance of studies on soft matter and their impact on new technologies, including those associated with nanotechnology, has brought intermolecular and surface forces to the forefront of physics and materials science, for these are the prevailing forces in micro and nanosystems. With experimental methods such as the atomic force spectroscopy (AFS), it is now possible to measure these forces accurately, in addition to providing information on local material properties such as elasticity, hardness and adhesion. This review provides the theoretical and experimental background of AFS, adhesion forces, intermolecular interactions and surface forces in air, vacuum and in solution. PMID:23202925
NASA Astrophysics Data System (ADS)
de Natale, Giuseppe; Crippa, Bruno; Troise, Claudia; Pingue, Folco; Audia, Karim; Dalla Via, Giorgio
2010-05-01
The seismic sequence that occurred in the Abruzzo Apennines near L'Aquila (Italy) in April 2009 caused extensive damage and a large number of casualties (close to 300). The earthquake struck an area of the Italian Apennines chain where several faults, belonging to adjacent seismotectonic domains, create a complex tectonic regime resulting from the interaction among regional stress build-up, local stress changes caused by individual earthquakes, and visco-elastic stress relaxation. Understanding this complex interaction in the Apennines could lead to a large step forward in seismic risk mitigation in Italy. The Abruzzo earthquake was exceptionally well recorded by InSAR data, much better than the first Italian earthquake ever recorded by satellites, the 1997 Umbria-Marche event. The Envisat data for the Abruzzo earthquake are in fact very clear and allow an accurate reconstruction of the faulting mechanism. We present here an accurate inversion of vertical deformation data obtained from ENVISAT images, aimed at a detailed reconstruction of the fault geometry and slip distribution. The resulting faulting models are then used to compute, with a suitable theoretical model based on elastic dislocation theory, the stress changes induced on the neighbouring faults. The study of the subsequent mainshocks of the Abruzzo sequence clearly evidences static stress changes consecutively triggering the subsequent mainshocks. Furthermore, this analysis highlights the seismotectonic domains that have been most heavily loaded by the stress released by the Abruzzo mainshocks. The most important faults significantly loaded by the Abruzzo sequence include the Sulmona and Avezzano tectonic domains, as well as the area west-southwest of the Avezzano domain, where a large earthquake occurred in 1394. Taking into account the average regional stress build-up in the area, the positive Coulomb stress changes caused by this earthquake can be seen as anticipating the
Achievement Goals and Discrete Achievement Emotions: A Theoretical Model and Prospective Test
ERIC Educational Resources Information Center
Pekrun, Reinhard; Elliot, Andrew J.; Maier, Markus A.
2006-01-01
A theoretical model linking achievement goals to discrete achievement emotions is proposed. The model posits relations between the goals of the trichotomous achievement goal framework and 8 commonly experienced achievement emotions organized in a 2 (activity/outcome focus) x 2 (positive/negative valence) taxonomy. Two prospective studies tested…
A Theoretical Model for Thin Film Ferroelectric Coupled Microstripline Phase Shifters
NASA Technical Reports Server (NTRS)
Romanofsky, R. R.; Quereshi, A. H.
2000-01-01
Novel microwave phase shifters consisting of coupled microstriplines on thin ferroelectric films have been demonstrated recently. A theoretical model useful for predicting the propagation characteristics (insertion phase shift, dielectric loss, impedance, and bandwidth) is presented here. The model is based on a variational solution for line capacitance and coupled strip transmission line theory.
REGIONAL SCALE (1000 KM) MODEL OF PHOTOCHEMICAL AIR POLLUTION. PART 1. THEORETICAL FORMULATION
A theoretical framework for a multi-day 1000 km scale simulation model of photochemical oxidant is developed. It is structured in a highly modular form so that eventually the model can be applied through straightforward modifications to simulations of particulates, visibility and...
Game Object Model Version II: A Theoretical Framework for Educational Game Development
ERIC Educational Resources Information Center
Amory, Alan
2007-01-01
Complex computer and video games may provide a vehicle, based on appropriate theoretical concepts, to transform the educational landscape. Building on the original game object model (GOM) a new more detailed model is developed to support concepts that educational computer games should: be relevant, explorative, emotive, engaging, and include…
ERIC Educational Resources Information Center
Hsieh, Pei-Hsuan; Sullivan, Jeremy R.; Sass, Daniel A.; Guerra, Norma S.
2012-01-01
Research has identified factors associated with academic success by evaluating relations among psychological and academic variables, although few studies have examined theoretical models to understand the complex links. This study used structural equation modeling to investigate whether the relation between test anxiety and final course grades was…
Engaging Dialogue in Our Diverse Social Work Student Body: A Multilevel Theoretical Process Model
ERIC Educational Resources Information Center
Rozas, Lisa Werkmeister
2007-01-01
This article presents a theoretical process model for students engaging in dialogic learning about issues of race and anti-oppression. The model identifies conditions present in the dialogue process and demonstrates how these conditions, when coordinated with certain interventions and strategies, help to create particular outcomes for…
The theoretical basis, physical structure, and preliminary evaluation of the U.S. Environmental Protection Agency's Complex Terrain Dispersion Model (CTDM) are described. CTDM is a point-source plume model designed primarily to estimate windward-side surface concentrations on dis...
NASA Astrophysics Data System (ADS)
Xin, Cui; Di-Yu, Zhang; Gao, Chen; Ji-Gen, Chen; Si-Liang, Zeng; Fu-Ming, Guo; Yu-Jun, Yang
2016-03-01
We demonstrate that the interference minima in the linear molecular harmonic spectra can be accurately predicted by a modified two-center model. Based on systematically investigating the interference minima in the linear molecular harmonic spectra by the strong-field approximation (SFA), it is found that the locations of the harmonic minima are related not only to the nuclear distance between the two main atoms contributing to the harmonic generation, but also to the symmetry of the molecular orbital. Therefore, we modify the initial phase difference between the double wave sources in the two-center model, and predict the harmonic minimum positions consistent with those simulated by SFA. Project supported by the National Basic Research Program of China (Grant No. 2013CB922200) and the National Natural Science Foundation of China (Grant Nos. 11274001, 11274141, 11304116, 11247024, and 11034003), and the Jilin Provincial Research Foundation for Basic Research, China (Grant Nos. 20130101012JC and 20140101168JC).
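In the two-center picture, destructive interference occurs when k·R·cosθ + Δφ = (2n+1)π, where Δφ is the initial phase difference between the two wave sources that the authors modify according to orbital symmetry. A small numeric sketch of that condition (illustrative only, not the SFA computation):

```python
import numpy as np

def destructive_momenta(R, theta, delta_phi, k_max=4.0):
    """Electron momenta k (atomic units) at which two-center emission
    interferes destructively: k*R*cos(theta) + delta_phi = (2n+1)*pi.
    delta_phi = 0 for a symmetric orbital, pi for an antisymmetric one
    (the phase modification discussed in the abstract). Assumes theta
    is not close to 90 degrees."""
    ks, n = [], 0
    while True:
        k = ((2 * n + 1) * np.pi - delta_phi) / (R * np.cos(theta))
        if k > k_max:
            break
        if k > 0.0:
            ks.append(k)
        n += 1
    return ks
```

Shifting Δφ from 0 to π moves every predicted minimum, which is why the orbital symmetry, and not only the internuclear distance, fixes the minimum positions.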
NASA Astrophysics Data System (ADS)
Rong, Y. M.; Chang, Y.; Huang, Y.; Zhang, G. J.; Shao, X. Y.
2015-12-01
Few studies have concentrated on predicting the bead geometry for laser brazing with crimping butt joints. This paper addresses the accurate prediction of the bead profile by developing a generalized regression neural network (GRNN) algorithm. First, the GRNN model was developed and trained to decrease the prediction error that may be influenced by the sample size. The prediction accuracy was then demonstrated by comparison with other articles and with a back-propagation artificial neural network (BPNN) algorithm. Finally, the reliability and stability of the GRNN model were discussed in terms of average relative error (ARE), mean square error (MSE), and root mean square error (RMSE); the maximum ARE and MSE were 6.94% and 0.0303, clearly less than those predicted by BPNN (14.28% and 0.0832). The prediction accuracy was thus improved by at least a factor of two, and the stability was also markedly increased.
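A GRNN is essentially Gaussian-kernel regression, so its core fits in a few lines. A sketch together with two of the error metrics quoted above (the interface is synthetic; the paper's actual inputs were brazing process parameters):

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """GRNN prediction: a Gaussian-kernel weighted mean of the training
    targets (Nadaraya-Watson form; sigma is the smoothing factor)."""
    d2 = np.sum((np.asarray(X_train) - np.asarray(x)) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return float(w @ np.asarray(y_train) / np.sum(w))

def are(y_true, y_pred):
    """Average relative error, one of the paper's stability metrics."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_pred - y_true) / y_true)))

def rmse(y_true, y_pred):
    """Root mean square error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
```

Unlike a BPNN, a GRNN has no iterative weight training; only sigma is tuned, which is one reason it behaves well on small samples.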
NASA Astrophysics Data System (ADS)
Grünkorn, Juliane; Belzen, Annette Upmeier zu; Krüger, Dirk
2014-07-01
Research in the field of students' understandings of models and their use in science describes different frameworks concerning these understandings. Currently, there is no conjoint framework that combines these structures and so far, no investigation has focused on whether it reflects students' understandings sufficiently (empirical evaluation). Therefore, the purpose of this article is to present the results of an empirical evaluation of a conjoint theoretical framework. The theoretical framework integrates relevant research findings and comprises five aspects which are subdivided into three levels each: nature of models, multiple models, purpose of models, testing, and changing models. The study was conducted with a sample of 1,177 seventh to tenth graders (aged 11-19 years) using open-ended items. The data were analysed by identifying students' understandings of models (nature of models and multiple models) and their use in science (purpose of models, testing, and changing models), and comparing as well as assigning them to the content of the theoretical framework. A comprehensive category system of students' understandings was thus developed. Regarding the empirical evaluation, the students' understandings of the nature and the purpose of models were sufficiently described by the theoretical framework. Concerning the understandings of multiple, testing, and changing models, additional initial understandings (only one model possible, no testing of models, and no change of models) need to be considered. This conjoint and now empirically tested framework for students' understandings can provide a common basis for future science education research. Furthermore, evidence-based indications can be provided for teachers and their instructional practice.
NASA Astrophysics Data System (ADS)
Pau, George Shu Heng; Shen, Chaopeng; Riley, William J.; Liu, Yaning
2016-02-01
The topography and the biotic and abiotic parameters are typically upscaled to make watershed-scale hydrologic-biogeochemical models computationally tractable. However, the upscaling procedure can produce biases when nonlinear interactions between different processes are not fully captured at coarse resolutions. Here we applied the Proper Orthogonal Decomposition Mapping Method (PODMM) to downscale the field solutions from a coarse (7 km) resolution grid to a fine (220 m) resolution grid. PODMM trains a reduced-order model (ROM) with coarse-resolution and fine-resolution solutions, here obtained using PAWS+CLM, a quasi-3-D watershed processes model that has been validated for many temperate watersheds. Subsequent fine-resolution solutions were approximated based only on coarse-resolution solutions and the ROM. The approximation errors were efficiently quantified using an error estimator. By jointly estimating correlated variables and temporally varying the ROM parameters, we further reduced the approximation errors by up to 20%. We also improved the method's robustness by constructing multiple ROMs using different sets of variables and selecting the best approximation based on the error estimator. The ROMs produced accurate downscaling of soil moisture, latent heat flux, and net primary production with O(1000) reduction in computational cost. The subgrid distributions were also nearly indistinguishable from the ones obtained using the fine-resolution model. Compared to coarse-resolution solutions, biases in upscaled ROM solutions were reduced by up to 80%. This method has the potential to help address the long-standing spatial scaling problem in hydrology and enable long-time integration, parameter estimation, and stochastic uncertainty analysis while accurately representing the heterogeneities.
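The POD step at the core of such reduced-order models is an SVD of a snapshot matrix. A minimal numpy sketch on synthetic data (PODMM additionally learns a mapping from coarse-resolution solutions onto this basis; here the coefficients are simply projected from the fine field):

```python
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.standard_normal((200, 20))   # columns = fine-grid solutions

# POD modes are the left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5                                        # retained modes (ROM size)
basis = U[:, :r]

# Reduced-order approximation of a field: project, then reconstruct.
field = snapshots[:, 0]
coeffs = basis.T @ field
approx = basis @ coeffs
rel_err = np.linalg.norm(field - approx) / np.linalg.norm(field)
```

In PODMM the coefficients would be estimated from the coarse-resolution solution rather than projected from the (unknown) fine one; the O(1000) speedup comes from working with r coefficients instead of the full fine grid.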
Cusack, Lynette; Smith, Morgan; Hegney, Desley; Rees, Clare S.; Breen, Lauren J.; Witt, Regina R.; Rogers, Cath; Williams, Allison; Cross, Wendy; Cheung, Kin
2016-01-01
Building nurses' resilience to complex and stressful practice environments is necessary to keep skilled nurses in the workplace and to ensure safe patient care. A unified theoretical framework titled the Health Services Workplace Environmental Resilience Model (HSWERM) is presented to explain the environmental factors in the workplace that promote nurses' resilience. The framework builds on a previously published theoretical model of individual resilience, which identified the key constructs of psychological resilience as self-efficacy, coping and mindfulness, but did not examine environmental factors in the workplace that promote nurses' resilience. This unified theoretical framework was developed using a literature synthesis drawing on data from international studies and literature reviews on the nursing workforce in hospitals. The most frequent workplace environmental factors were identified, extracted and clustered in alignment with key constructs for psychological resilience. Six major organizational concepts emerged that related to a positive resilience-building workplace and formed the foundation of the theoretical model. Three concepts related to nursing staff support (professional, practice, personal) and three related to nursing staff development (professional, practice, personal) within the workplace environment. The unified theoretical model incorporates these concepts within the workplace context, linking to the nurse and then impacting on personal resilience and workplace outcomes, and its use has the potential to increase staff retention and quality of patient care. PMID:27242567
A simple theoretical model for ⁶³Ni betavoltaic battery.
Zuo, Guoping; Zhou, Jianliang; Ke, Guotu
2013-12-01
A numerical simulation of the energy deposition distribution in semiconductors is performed for ⁶³Ni beta particles. Results show that the energy deposition distribution exhibits an approximately exponential decay law. A simple theoretical model of a ⁶³Ni betavoltaic battery is developed based on these distribution characteristics. The correctness of the model is validated against two literature experiments. Results show that the theoretical short-circuit current agrees well with the experimental results, while the open-circuit voltage deviates from the experimental results owing to the influence of PN junction defects and the simplified treatment of the source. The theoretical model can be applied to ⁶³Ni and ¹⁴⁷Pm betavoltaic batteries. PMID:23974307
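As a rough illustration of the model's two ingredients (not the paper's actual parameterization): an exponential energy-deposition profile, and an ideal short-circuit current of one electron-hole pair per pair-creation energy of collected beta energy. The 50% collection efficiency below is an assumed placeholder:

```python
import math

def deposited_fraction(depth_um, decay_length_um):
    """Fraction of beta energy deposited above `depth_um`, using the
    approximately exponential deposition law found in the simulation."""
    return 1.0 - math.exp(-depth_um / decay_length_um)

def short_circuit_current(activity_bq, mean_beta_energy_ev,
                          collection_eff, pair_creation_energy_ev):
    """Ideal short-circuit current: collected energy per decay divided
    by the e-h pair creation energy, times the elementary charge."""
    q = 1.602176634e-19  # C
    pairs = mean_beta_energy_ev * collection_eff / pair_creation_energy_ev
    return activity_bq * pairs * q

# 1 mCi of 63Ni (mean beta energy ~17.4 keV) on silicon (~3.6 eV per
# pair), with an assumed 50% collection efficiency:
i_sc = short_circuit_current(3.7e7, 17.4e3, 0.5, 3.6)
```

The result lands in the nanoampere range, which is the order of magnitude typical of ⁶³Ni betavoltaic experiments.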
Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing
NASA Technical Reports Server (NTRS)
Kory, Carol L.
2001-01-01
The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be
Cramer, Christopher J; Włoch, Marta; Piecuch, Piotr; Puzzarini, Cristina; Gagliardi, Laura
2006-02-01
Accurately describing the relative energetics of alternative bis(μ-oxo) and μ-η²:η² peroxo isomers of Cu₂O₂ cores supported by 0, 2, 4, and 6 ammonia ligands is remarkably challenging for a wide variety of theoretical models, primarily owing to the difficulty of maintaining a balanced description of rapidly changing dynamical and nondynamical electron correlation effects and a varying degree of biradical character along the isomerization coordinate. The completely renormalized coupled-cluster level of theory including triple excitations and extremely efficient pure density functional levels of theory quantitatively agree with one another and also agree qualitatively with experimental results for Cu₂O₂ cores supported by analogous but larger ligands. Standard coupled-cluster methods, such as CCSD(T), are in most cases considerably less accurate and exhibit poor convergence in predicted relative energies. Hybrid density functionals significantly underestimate the stability of the bis(μ-oxo) form, with the magnitude of the error being directly proportional to the percentage of Hartree-Fock exchange in the functional. Single-root CASPT2 multireference second-order perturbation theory, by contrast, significantly overestimates the stability of bis(μ-oxo) isomers. Implications of these results for modeling the mechanism of C-H bond activation by supported Cu₂O₂ cores, like that found in the active site of oxytyrosinase, are discussed. PMID:16451035
ERIC Educational Resources Information Center
Johnson, Marcus L.; Taasoobshirazi, Gita; Kestler, Jessica L.; Cordova, Jackie R.
2015-01-01
We tested a theoretical model of college students' ratings of messengers of resilience and models of resilience, students' own perceived resilience, regulatory strategy use and achievement. A total of 116 undergraduates participated in this study. The results of a path analysis indicated that ratings of models of resilience had a direct effect on…
Gao, Yaozong; Shao, Yeqin; Lian, Jun; Wang, Andrew Z; Chen, Ronald C; Shen, Dinggang
2016-06-01
Segmenting male pelvic organs from CT images is a prerequisite for prostate cancer radiotherapy. The efficacy of radiation treatment highly depends on segmentation accuracy. However, accurate segmentation of male pelvic organs is challenging due to low tissue contrast of CT images, as well as large variations of shape and appearance of the pelvic organs. Among existing segmentation methods, deformable models are the most popular, as shape prior can be easily incorporated to regularize the segmentation. Nonetheless, the sensitivity to initialization often limits their performance, especially for segmenting organs with large shape variations. In this paper, we propose a novel approach to guide deformable models, thus making them robust against arbitrary initializations. Specifically, we learn a displacement regressor, which predicts 3D displacement from any image voxel to the target organ boundary based on the local patch appearance. This regressor provides a non-local external force for each vertex of deformable model, thus overcoming the initialization problem suffered by the traditional deformable models. To learn a reliable displacement regressor, two strategies are particularly proposed. 1) A multi-task random forest is proposed to learn the displacement regressor jointly with the organ classifier; 2) an auto-context model is used to iteratively enforce structural information during voxel-wise prediction. Extensive experiments on 313 planning CT scans of 313 patients show that our method achieves better results than alternative classification or regression based methods, and also several other existing methods in CT pelvic organ segmentation. PMID:26800531
Chen, Y; Mo, X; Chen, M; Olivera, G; Parnell, D; Key, S; Lu, W; Reeher, M; Galmarini, D
2014-06-01
Purpose: An accurate leaf fluence model can be used in applications such as patient-specific delivery QA and in-vivo dosimetry for TomoTherapy systems. It is known that the total fluence is not a linear combination of individual leaf fluences, due to leakage-transmission, tongue-and-groove, and source occlusion effects. Here we propose a method to model the nonlinear effects as linear terms, thus making the MLC-detector system a linear system. Methods: A leaf pattern basis (LPB) consisting of no-leaf-open, single-leaf-open, double-leaf-open and triple-leaf-open patterns is chosen to represent linear and major nonlinear effects of leaf fluence as a linear system. An arbitrary leaf pattern can be expressed as (or decomposed into) a linear combination of the LPB, either pulse by pulse or weighted by dwelling time. The exit detector responses to the LPB are obtained by processing returned detector signals resulting from the predefined leaf patterns for each jaw setting. Through forward transformation, the detector signal can be predicted given a delivery plan. An equivalent leaf open time (LOT) sinogram containing output variation information can also be inversely calculated from the measured detector signals. Twelve patient plans were delivered in air. The equivalent LOT sinograms were compared with their planned sinograms. Results: The whole calibration process was done in 20 minutes. For two randomly generated leaf patterns, 98.5% of the active channels showed differences within 0.5% of the local maximum between the predicted and measured signals. Averaged over the twelve plans, 90% of LOT errors were within ±10 ms. The LOT systematic error increases and shows an oscillating pattern when LOT is shorter than 50 ms. Conclusion: The LPB method models the MLC-detector response accurately, which improves patient-specific delivery QA and in-vivo dosimetry for TomoTherapy systems. It is sensitive enough to detect systematic LOT errors as small as 10 ms.
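Schematically, the LPB idea is to promote the pairwise (and triple) nonlinear couplings to explicit basis coefficients so that signal prediction becomes a linear map. A toy sketch with hypothetical per-leaf and adjacent-pair responses (names and numbers are illustrative, not the paper's calibration; the full basis also carries triple-leaf terms):

```python
import numpy as np

def lpb_features(pattern):
    """Binary leaf-open pattern -> LPB features: per-leaf terms plus
    adjacent-pair terms that absorb tongue-and-groove and leakage
    effects as linear coefficients."""
    singles = np.asarray(pattern, dtype=float)
    doubles = singles[:-1] * singles[1:]      # adjacent open pairs
    return singles, doubles

def predict_signal(pattern, r0, r_single, r_pair):
    """Detector signal as a linear combination of measured basis
    responses: baseline r0 (no leaf open), per-leaf increments,
    and per-pair corrections."""
    singles, doubles = lpb_features(pattern)
    return r0 + r_single @ singles + r_pair @ doubles
```

Because the map is linear once the basis responses are measured, inverting it to recover equivalent leaf open times from detector signals is also a linear problem.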
NASA Technical Reports Server (NTRS)
Shimazaki, T.; Wuebbles, D. J.
1973-01-01
Calculations based on an improved, time-dependent theoretical model of the vertical ozone density distribution in the upper atmosphere are shown to clarify the cause of, and determine the conditions for the appearance of, the depression in the 70-85 km altitude region of the ozone density distribution that is suggested by several theoretical models but only sometimes observed experimentally. It is concluded that the depression develops at night through the effects of hydrogen-oxygen and nitrogen-oxygen reactions, as well as those of eddy diffusion transport.
Haeufle, D F B; Günther, M; Blickhan, R; Schmitt, S
2011-01-01
Recently, the hyperbolic Hill-type force-velocity relation was derived from basic physical components. It was shown that a contractile element (CE) consisting of a mechanical energy source (active element AE), a parallel damper element (PDE), and a serial element (SE) exhibits operating points with hyperbolic force-velocity dependency. In this paper, the contraction dynamics of this CE concept were analyzed in a numerical simulation of quick-release experiments against different loads. A hyperbolic force-velocity relation was found. The results correspond to measurements of the contraction dynamics of a technical prototype. Deviations from the theoretical prediction could partly be explained by the low stiffness of the SE, which was modeled analogously to the metal spring in the hardware prototype. The numerical model and hardware prototype together are a proof of this CE concept and can be seen as a well-founded starting point for the development of Hill-type artificial muscles. This opens up new vistas for the technical realization of natural movements with rehabilitation devices. PMID:22275541
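For reference, the hyperbolic Hill relation the prototype reproduces is (F + a)(v + b) = (F0 + a)·b. A few lines make its two limiting points explicit (normalized, illustrative constants; real muscle and prototype parameters differ):

```python
def hill_force(v, F0=1.0, a=0.25, b=0.25):
    """Hill's force-velocity hyperbola for concentric contraction,
    (F + a)(v + b) = (F0 + a) * b, solved for force F at shortening
    speed v (all quantities in normalized units)."""
    return (F0 + a) * b / (v + b) - a

# Isometric limit: v = 0 gives the maximal force F0.
# Zero-load limit: F = 0 at v_max = b * F0 / a.
v_max = 0.25 * 1.0 / 0.25
```

Quick-release experiments sample exactly this curve: the load fixes F, and the observed shortening speed should fall on the hyperbola.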
Bhaskara, Ramachandra M; Padhi, Amrita; Srinivasan, Narayanaswamy
2014-07-01
With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Often function involves communications across the domain interfaces, and the knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from the two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train naïve Bayes classifier model to predict the interfacial residues. Our predictions are highly accurate (∼85%) and specific (∼95%) to the domain-domain interfaces. This method is specific to multidomain proteins which contain domains in at least more than one protein architectural context. Using predicted residues to constrain domain-domain interaction, rigid-body docking was able to provide us with accurate full-length protein structures with correct orientation of domains. We believe that these results can be of considerable interest toward rational protein and interaction design, apart from providing us with valuable information on the nature of interactions. PMID:24375512
Wang, Guotai; Zhang, Shaoting; Xie, Hongzhi; Metaxas, Dimitris N; Gu, Lixu
2015-01-01
Shape priors play an important role in accurate and robust liver segmentation. However, liver shapes have complex variations and accurate modeling of liver shapes is challenging. Using large-scale training data can improve the accuracy but limits the computational efficiency. In order to obtain accurate liver shape priors without sacrificing efficiency when dealing with large-scale training data, we investigate an effective and scalable shape prior modeling method that is more applicable in a clinical liver surgical planning system. We employed the Sparse Shape Composition (SSC) to represent liver shapes by an optimized sparse combination of shapes in the repository, without any assumptions on parametric distributions of liver shapes. To leverage large-scale training data and improve the computational efficiency of SSC, we also introduced a homotopy-based method to quickly solve the L1-norm optimization problem in SSC. This method takes advantage of the sparsity of shape modeling and solves the original optimization problem in SSC by continuously transforming it into a series of simplified problems whose solutions are fast to compute. When new training shapes arrive gradually, the homotopy strategy updates the optimal solution on the fly and avoids re-computing it from scratch. Experiments showed that SSC had high accuracy and efficiency in dealing with complex liver shape variations, excluding gross errors and preserving local details on the input liver shape. The homotopy-based SSC had high computational efficiency, and its runtime increased very slowly as the repository's capacity and vertex number rose to a large degree. When the repository's capacity was 10,000, with 2,000 vertices on each shape, the homotopy method cost merely about 11.29 s to solve the optimization problem in SSC, nearly 2,000 times faster than the interior point method. The dice similarity coefficient (DSC), average symmetric surface distance (ASD), and maximum symmetric surface distance measurement
Theoretical Modeling and Experimental High-Speed Imaging of Elongated Vocal Folds
Zhang, Yu; Regner, Michael F.; Jiang, Jack J.
2014-01-01
In this paper, the role of vocal fold elongation in governing glottal movement dynamics was theoretically and experimentally investigated. A theoretical model was first proposed to incorporate vocal fold elongation into the two-mass model. This model predicted the direct and nondirect components of the glottal time series as a function of vocal fold elongation. Furthermore, high-speed digital imaging was applied in excised larynx experiments to visualize vocal fold vibrations with variable vocal fold elongation from –10% to 50% and subglottal pressures of 18- and 24-cm H2O. Comparison between theoretical model simulations and experimental observations showed good agreement. A relative maximum was seen in the nondirect component of glottal area, suggesting that an optimal elongation could maximize the vocal fold vibratory power. However, sufficiently large vocal fold elongations caused the nondirect component to approach zero and the direct component to approach a constant. These results showed that vocal fold elongation plays an important role in governing the dynamics of glottal area movement and validated the applicability of the proposed theoretical model and high-speed imaging to investigate laryngeal activity. PMID:21118763
Theoretical results on the tandem junction solar cell based on its Ebers-Moll transistor model
NASA Technical Reports Server (NTRS)
Goradia, C.; Vaughn, J.; Baraona, C. R.
1980-01-01
A one-dimensional theoretical model of the tandem junction solar cell (TJC) with base resistivity greater than about 1 ohm-cm and under low level injection has been derived. This model extends a previously published conceptual model which treats the TJC as an npn transistor. The model gives theoretical expressions for each of the Ebers-Moll type currents of the illuminated TJC and allows for the calculation of the spectral response, I(sc), V(oc), FF and eta under variation of one or more of the geometrical and material parameters and 1MeV electron fluence. Results of computer calculations based on this model are presented and discussed. These results indicate that for space applications, both a high beginning of life efficiency, greater than 15% AM0, and a high radiation tolerance can be achieved only with thin (less than 50 microns) TJC's with high base resistivity (greater than 10 ohm-cm).
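Once the Ebers-Moll currents are in hand, the quoted figures of merit (I(sc), V(oc), FF, eta) follow from the illuminated I-V curve. A sketch for the ideal single-diode limiting case (the TJC model itself adds the npn transistor coupling terms; parameter values here are illustrative):

```python
import numpy as np

def iv_metrics(i_sc, i_0, T=300.0):
    """Open-circuit voltage, fill factor, and maximum power for the
    ideal illuminated diode I(V) = i_sc - i_0 * (exp(qV/kT) - 1)."""
    kT_q = 1.380649e-23 * T / 1.602176634e-19   # thermal voltage (V)
    v_oc = kT_q * np.log(i_sc / i_0 + 1.0)
    v = np.linspace(0.0, v_oc, 20001)
    p = v * (i_sc - i_0 * np.expm1(v / kT_q))
    p_max = float(p.max())
    ff = p_max / (v_oc * i_sc)                  # fill factor
    return v_oc, ff, p_max

# A silicon-like example: 30 mA short-circuit current, 1 pA saturation current.
v_oc, ff, p_max = iv_metrics(30e-3, 1e-12)
```

Radiation damage (the 1 MeV electron fluence in the abstract) mainly degrades i_sc and raises i_0 through reduced minority-carrier lifetime, which this curve translates into lower V(oc) and efficiency.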
How parents choose to use CAM: a systematic review of theoretical models
Lorenc, Ava; Ilan-Clarke, Yael; Robinson, Nicola; Blair, Mitch
2009-01-01
Background Complementary and Alternative Medicine (CAM) is widely used throughout the UK and the Western world. CAM is commonly used for children and the decision-making process to use CAM is affected by numerous factors. Most research on CAM use lacks a theoretical framework and is largely based on bivariate statistics. The aim of this review was to identify a conceptual model which could be used to explain the decision-making process in parental choice of CAM. Methods A systematic search of the literature was carried out. A two-stage selection process with predetermined inclusion/exclusion criteria identified studies using a theoretical framework depicting the interaction of psychological factors involved in the CAM decision process. Papers were critically appraised and findings summarised. Results Twenty two studies using a theoretical model to predict CAM use were included in the final review; only one examined child use. Seven different models were identified. The most commonly used and successful model was Andersen's Sociobehavioural Model (SBM). Two papers proposed modifications to the SBM for CAM use. Six qualitative studies developed their own model. Conclusion The SBM modified for CAM use, which incorporates both psychological and pragmatic determinants, was identified as the best conceptual model of CAM use. This model provides a valuable framework for future research, and could be used to explain child CAM use. An understanding of the decision making process is crucial in promoting shared decision making between healthcare practitioners and parents and could inform service delivery, guidance and policy. PMID:19386106
A Physically Based Theoretical Model of Spore Deposition for Predicting Spread of Plant Diseases.
Isard, Scott A; Chamecki, Marcelo
2016-03-01
A physically based theory for predicting spore deposition downwind from an area source of inoculum is presented. The modeling framework is based on theories of turbulence dispersion in the atmospheric boundary layer and applies only to spores that escape from plant canopies. A "disease resistance" coefficient is introduced to convert the theoretical spore deposition model into a simple tool for predicting disease spread at the field scale. Results from the model agree well with published measurements of Uromyces phaseoli spore deposition and measurements of wheat leaf rust disease severity. The theoretical model has the advantage over empirical models in that it can be used to assess the influence of source distribution and geometry, spore characteristics, and meteorological conditions on spore deposition and disease spread. The modeling framework is refined to predict the detailed two-dimensional spatial pattern of disease spread from an infection focus. Accounting for the time variations of wind speed and direction in the refined modeling procedure improves predictions, especially near the inoculum source, and enables application of the theoretical modeling framework to field experiment design. PMID:26595112
Theoretical models of diffraction efficiencies of arc profiled bar transmission grating
NASA Astrophysics Data System (ADS)
Shang, Wanli; Mei, Lusheng; Yang, Jiamin
2012-06-01
Diffraction efficiencies of a transmission grating play an important role in the accurate measurement of soft X-rays and in grating applications. Circular, horizontal-elliptical, and vertical-elliptical arc-profiled bar models were established from Kirchhoff diffraction theory with the Fraunhofer diffraction approximation to calculate the diffraction efficiencies of arc-profiled-bar transmission gratings. The calculated results were compared with the available data from the X-Ray Optics website of Lawrence Berkeley National Laboratory's Center for X-Ray Optics, and excellent agreement was obtained. With these models the diffraction efficiencies of this particular type of transmission grating with arc-profiled bars can be simulated accurately and the best-fit bar profiles can be obtained.
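For orientation, in the scalar Fraunhofer picture the efficiency of diffraction order m of a periodic grating is the squared modulus of the m-th Fourier coefficient of the complex transmission over one period. A minimal numerical sketch for an idealized fully opaque binary-bar grating (not the arc-profiled bars of the paper):

```python
import numpy as np

def order_efficiencies(transmission, n_orders=3):
    """Diffraction efficiencies |c_m|^2 of a periodic transmission function,
    in the scalar Fraunhofer approximation (one period, sampled on a grid)."""
    c = np.fft.fft(transmission) / transmission.size  # Fourier coefficients c_m
    return {m: abs(c[m]) ** 2 for m in range(n_orders + 1)}

# Idealized binary grating: fully opaque bars, 50% open fraction.
x = np.linspace(0.0, 1.0, 4096, endpoint=False)
t = (x < 0.5).astype(float)
eff = order_efficiencies(t)
```

For an opaque-bar grating with open fraction f the analytic efficiencies are f^2 in zeroth order and (sin(pi*m*f)/(pi*m))^2 in order m, so for f = 0.5 the code should reproduce 0.25 in zeroth order, about 0.101 in first order, and zero in second order.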
NASA Astrophysics Data System (ADS)
Sahin, O. K.; Asci, M.
2014-12-01
In this study, the determination of theoretical parameters for the inversion of the Trabzon-Sürmene-Kutlular ore bed anomalies was examined. Deciding which model equation to use for the inversion is the most important first step, since it is expected to yield more accurate results. The sections were therefore evaluated with a sphere-cylinder nomogram, and the same sections were then analyzed with a cylinder-dike nomogram to determine the theoretical parameters of the inversion for each model equation. A comparison of the results showed that only one of the inversions was close to the parameters from the nomogram evaluations; the other inversion result parameters differed from their nomogram parameters.
NASA Astrophysics Data System (ADS)
Hackel, Stefan; Montenbruck, Oliver; Steigenberger, Peter; Eineder, Michael; Gisinger, Christoph
Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with very high accuracy. The increasing demand for precise radar products relies on sophisticated validation methods, which require precise and accurate orbit products. The precise reconstruction of the satellite's trajectory is based on Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency receiver onboard the spacecraft. The Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for the gravitational and non-gravitational forces. Following a proper analysis of the orbit quality, systematics in the orbit products have been identified, which reflect deficits in the non-gravitational force models. A detailed satellite macro model is therefore introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). Due to the dusk-dawn orbit configuration of TerraSAR-X, the satellite is almost constantly illuminated by the Sun; the direct SRP therefore affects the lateral stability of the determined orbit. The indirect effect of the solar radiation principally contributes to the Earth Radiation Pressure (ERP). The resulting force depends on the sunlight reflected by the illuminated Earth surface in the visible spectrum and on the emission of the Earth body in the infrared. Both components of ERP require Earth models to describe the optical properties of the Earth surface; the influence of different Earth models on the orbit quality is therefore assessed within the presentation. The presentation highlights the influence of non-gravitational force and satellite macro models on the orbit quality of TerraSAR-X.
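The simplest alternative to the detailed macro model described here is the so-called cannonball approximation, in which the direct SRP acceleration depends only on an area-to-mass ratio and a radiation pressure coefficient. A sketch with roughly TerraSAR-X-sized but hypothetical values:

```python
# Cannonball-model direct solar radiation pressure (SRP) acceleration.
# The paper replaces this crude approximation with a detailed satellite
# macro model; the area, mass, and C_R values here are hypothetical.
P_SUN = 4.56e-6  # solar radiation pressure at 1 AU (N/m^2)

def srp_acceleration(area_m2, mass_kg, c_r=1.3, r_au=1.0):
    """Magnitude of the cannonball SRP acceleration (m/s^2) at r_au
    astronomical units from the Sun, for a fully illuminated satellite."""
    return c_r * (area_m2 / mass_kg) * P_SUN / r_au ** 2

a_srp = srp_acceleration(area_m2=10.0, mass_kg=1230.0)  # ~5e-8 m/s^2
```

Accelerations of this order (tens of nm/s^2) are far above the precision goals of modern LEO orbit determination, which is why refined surface models matter.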
Scott, Serena J.; Prakash, Punit; Salgaonkar, Vasant; Jones, Peter D.; Cam, Richard N.; Han, Misung; Rieke, Viola; Burdette, E. Clif; Diederich, Chris J.
2014-01-01
Purpose The objectives of this study were to develop numerical models of interstitial ultrasound ablation of tumors within or adjacent to bone, to evaluate model performance through theoretical analysis, and to validate the models and approximations used through comparison to experiments. Methods 3D transient biothermal and acoustic finite element models were developed, employing four approximations of 7 MHz ultrasound propagation at bone/soft tissue interfaces. The various approximations considered or excluded reflection, refraction, angle-dependence of transmission coefficients, shear mode conversion, and volumetric heat deposition. Simulations were performed for parametric and comparative studies. Experiments within ex vivo tissues and phantoms were performed to validate the models by comparison to simulations. Temperature measurements were conducted using needle thermocouples or MR temperature imaging (MRTI). Finite element models representing heterogeneous tissue geometries were created based on segmented MR images. Results High ultrasound absorption at bone/soft tissue interfaces increased the volumes of target tissue that could be ablated. Models using simplified approximations produced temperature profiles closely matching both more comprehensive models and experimental results, with good agreement between 3D calculations and MRTI. The correlation coefficients between simulated and measured temperature profiles in phantoms ranged from 0.852 to 0.967 (p-value < 0.01) for the four models. Conclusions Models using approximations of interstitial ultrasound energy deposition around bone/soft tissue interfaces produced temperature distributions in close agreement with comprehensive simulations and experimental measurements. These models may be applied to accurately predict temperatures produced by interstitial ultrasound ablation of tumors near and within bone, with applications toward treatment planning. PMID:24102393
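The biothermal models described here solve a 3D transient bioheat equation with finite elements; as a much smaller illustration of the same physics, the following is a 1D explicit finite-difference sketch of the Pennes bioheat equation with a volumetric ultrasound heat source. All tissue parameters are generic literature-style values, not the paper's.

```python
import numpy as np

# Pennes bioheat equation, 1D explicit finite differences:
# rho*c dT/dt = k d2T/dx2 - w_b*c_b*(T - T_art) + Q(x)
rho, c = 1050.0, 3600.0       # tissue density (kg/m^3), specific heat (J/kg/K)
k = 0.5                       # thermal conductivity (W/m/K)
w_b, c_b = 5.0, 3600.0        # blood perfusion (kg/m^3/s), blood specific heat
T_art = 37.0                  # arterial blood temperature (deg C)

nx, dx, dt = 101, 1e-3, 0.05  # 10 cm domain; stable since dt < rho*c*dx^2/(2k)
T = np.full(nx, 37.0)         # initial body temperature
Q = np.zeros(nx)
Q[45:56] = 2.0e5              # hypothetical ultrasound heat deposition (W/m^3)

for _ in range(int(60.0 / dt)):  # simulate 60 s of heating
    lap = (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx ** 2
    T = T + dt * (k * lap - w_b * c_b * (T - T_art) + Q) / (rho * c)
    T[0] = T[-1] = 37.0          # ends pinned at body temperature
T_peak = float(T.max())
```

With these parameters the source heats tissue at roughly Q/(rho*c) = 0.05 K/s before conduction and perfusion losses, so a few degrees of rise over a minute is the expected scale.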
Chang, Chih-Hao (E-mail: chchang@engineering.ucsb.edu); Liou, Meng-Sing (E-mail: meng-sing.liou@grc.nasa.gov)
2007-07-01
In this paper, we propose a new approach to compute compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, which was originally designed for compressible gas flows, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems show the capability to capture enormous details and complicated wave patterns in flows having large disparities in fluid density and velocities, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and underwater explosion.
E-Learning Systems Support of Collaborative Agreements: A Theoretical Model
ERIC Educational Resources Information Center
Aguirre, Sandra; Quemada, Juan
2012-01-01
This paper introduces a theoretical model for developing integrated degree programmes through e-learning systems as stipulated by a collaboration agreement signed by two universities. We have analysed several collaboration agreements between universities at the national, European, and transatlantic level as well as various e-learning frameworks. A…
ERIC Educational Resources Information Center
Briggs, Michele Kielty; Shoffner, Marie F.
2006-01-01
Overall spiritual wellness, as well as 4 individual components of spiritual wellness, has been theoretically and empirically linked with depression. Prior to this investigation, no study has examined the relationship between spiritual wellness and depression by using a 4-component measurement model of spiritual wellness. In this study of older…
Unconscious Determinants of Career Choice and Burnout: Theoretical Model and Counseling Strategy.
ERIC Educational Resources Information Center
Malach-Pines, Ayala; Yafe-Yanai, Oreniya
2001-01-01
Proposes a psychodynamic-existential perspective as a theoretical model that explains career burnout and serves as a basis for a counseling strategy. According to existential theory, the root of career burnout lies in people's need to find existential significance in their life and their sense that their work does not provide it. (Contains 40…
A Game-Theoretic Model of Grounding for Referential Communication Tasks
ERIC Educational Resources Information Center
Thompson, William
2009-01-01
Conversational grounding theory proposes that language use is a form of rational joint action, by which dialog participants systematically and collaboratively add to their common ground of shared knowledge and beliefs. Following recent work applying "game theory" to pragmatics, this thesis develops a game-theoretic model of grounding that…
Suggestion for a Theoretical Model for Secondary-Tertiary Transition in Mathematics
ERIC Educational Resources Information Center
Clark, Megan; Lovric, Miroslav
2008-01-01
One of most notable features of existing body of research in transition seems to be the absence of a theoretical model. The suggestion we present in this paper--to view and understand the high school to university transition in mathematics as a modern-day rite of passage--is an attempt at defining such framework. Although dominantly reflecting…
Validation of a Theoretical Model of Diagnostic Classroom Assessment: A Mixed Methods Study
ERIC Educational Resources Information Center
Koh, Nancy
2012-01-01
The purpose of the study was to validate a theoretical model of diagnostic, formative classroom assessment called, "Proximal Assessment for Learner Diagnosis" (PALD). To achieve its purpose, the study employed a two-stage, mixed-methods design. The study utilized multiple data sources from 11 elementary level mathematics teachers who…
ERIC Educational Resources Information Center
Newman, Tim A.
2012-01-01
This study described the current state of principal salaries in South Carolina and compared the salaries of similar size schools by specific report card performance and demographic variables. Based on the findings, theoretical models were proposed, and comparisons were made with current salary data. School boards, human resource personnel and…
Falling Chains as Variable-Mass Systems: Theoretical Model and Experimental Analysis
ERIC Educational Resources Information Center
de Sousa, Celia A.; Gordo, Paulo M.; Costa, Pedro
2012-01-01
In this paper, we revisit, theoretically and experimentally, the fall of a folded U-chain and of a pile-chain. The model calculation implies the division of the whole system into two subsystems of variable mass, allowing us to explore the role of tensional contact forces at the boundary of the subsystems. This justifies, for instance, that the…
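The standard energy-conserving treatment of the folded U-chain (which may differ in detail from the paper's two-subsystem formulation) gives the falling side the equation of motion x'' = g + x'^2 / (2(L - x)), so the tip accelerates faster than in free fall. A minimal numerical sketch:

```python
import math

def folded_chain_fall(L=1.0, g=9.81, dt=1e-6, x_stop_frac=0.9):
    """Integrate the energy-conserving folded U-chain equation of motion
    x'' = g + x'^2 / (2(L - x)) with semi-implicit Euler steps, stopping
    when the free end has fallen x_stop_frac * L. Returns (x, v, t)."""
    x, v, t = 0.0, 0.0, 0.0
    while x < x_stop_frac * L:
        a = g + v * v / (2.0 * (L - x))  # tip acceleration exceeds g
        v += a * dt
        x += v * dt
        t += dt
    return x, v, t

x, v, t = folded_chain_fall()
# Closed-form energy-conservation result for L = 1: v^2 = g*x*(2 - x)/(1 - x)
v_analytic = math.sqrt(9.81 * x * (2.0 - x) / (1.0 - x))
t_freefall = math.sqrt(2.0 * x / 9.81)  # time for the same drop in free fall
```

Because the acceleration never drops below g, the chain tip reaches any given depth sooner than a freely falling body, which is the experimentally observable signature of the variable-mass treatment.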
The Practice-Theory-Practice Model: The Establishment of the Theoretical Bases of a Case Study.
ERIC Educational Resources Information Center
Michael, Robert O.; Barbe, Richard H.
The Practice-Theory-Practice Model (PTPM), a method designed to infuse theoretical perspectives into case study materials and to serve as a guide for examining chance processes in institutions of higher education, is described. The PTPM considers the historical and experiential environment that acts upon an institution, its practices and its…
Models of the Bilingual Lexicon and Their Theoretical Implications for CLIL
ERIC Educational Resources Information Center
Heine, Lena
2014-01-01
Although many advances have been made in recent years concerning the theoretical dimensions of content and language integrated learning (CLIL), research still has to meet the necessity to come up with integrative models that adequately map the interrelation between content and language learning in CLIL contexts. This article will suggest that…
ERIC Educational Resources Information Center
Balmer, Dorene F.; Richards, Boyd F.; Varpio, Lara
2015-01-01
Using Bourdieu's theoretical model as a lens for analysis, we sought to understand how students experience the undergraduate medical education (UME) milieu, focusing on how they navigate transitions from the preclinical phase, to the major clinical year (MCY), and to the preparation for residency phase. Twenty-two medical students participated in…
Factors that Contribute to Talented Performance: A Theoretical Model from a Chinese Perspective
ERIC Educational Resources Information Center
Wu, Echo H.
2005-01-01
This paper examines the Chinese literature on giftedness and talented performance (TP) and compares its dominant theoretical features with some influential models to be found in the North American literature. One significant feature to emerge from the Chinese literature is a deemphasis on giftedness as an innate ability and an emphasis on the…
A Study of the Model of Mastery as a Theoretical Framework for Coaching Teachers Writing Workshop
ERIC Educational Resources Information Center
Kimbrell, Jennifer L.
2010-01-01
The study investigated a coach's use of a theoretical framework called the Model of Mastery to assist three teachers in becoming self-regulated in the teaching of writing workshop by moving them through three settings: acquisition, consolidation, and consultation. The goal of the coach was to assist teachers in developing expertise in procedural,…
Characterization of Titan 3-D acoustic pressure spectra by least-squares fit to theoretical model
NASA Astrophysics Data System (ADS)
Hartnett, E. B.; Carleen, E.
1980-01-01
A theoretical model for the acoustic spectra of undeflected rocket plumes is fitted to computed spectra of a Titan III-D at varying times after ignition by a least-squares method. Goodness-of-fit tests are performed.
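The paper's spectral model is not reproduced in this abstract; as a sketch of the least-squares procedure itself, the following fits a generic two-parameter "haystack" spectral shape (hypothetical, not the paper's model) by grid search over the peak frequency, with the amplitude solved in closed form at each trial:

```python
import numpy as np

def fit_spectrum(f, s_obs, f0_grid):
    """Least-squares fit of the generic shape S(f) = A*(f/f0)^2/(1+(f/f0)^2)^2.
    The amplitude A enters linearly, so for each trial peak frequency f0 its
    optimal value is closed-form; f0 is found by grid search on the residual."""
    best = (np.inf, None, None)
    for f0 in f0_grid:
        shape = (f / f0) ** 2 / (1.0 + (f / f0) ** 2) ** 2
        a = np.dot(shape, s_obs) / np.dot(shape, shape)  # optimal amplitude
        rss = np.sum((s_obs - a * shape) ** 2)           # residual sum of squares
        if rss < best[0]:
            best = (rss, f0, a)
    return best  # (residual, f0, A) -- the residual doubles as a fit diagnostic

# Synthetic "computed spectrum": peak near 200 Hz, amplitude 3, small noise.
rng = np.random.default_rng(0)
f = np.linspace(10.0, 2000.0, 400)
true_shape = (f / 200.0) ** 2 / (1.0 + (f / 200.0) ** 2) ** 2
s_obs = 3.0 * true_shape + rng.normal(0.0, 0.01, f.size)
rss, f0_fit, a_fit = fit_spectrum(f, s_obs, np.arange(50.0, 500.0, 5.0))
```

Exploiting the linearity of the amplitude keeps the nonlinear search one-dimensional, which is the usual trick when fitting spectral models with one shape parameter.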
Theoretical values of various parameters in the Gummel-Poon model of a bipolar junction transistor
NASA Technical Reports Server (NTRS)
Benumof, R.; Zoutendyk, J.
1986-01-01
Various parameters in the Gummel-Poon model of a bipolar junction transistor are expressed in terms of the basic structure of a transistor. A consistent theoretical approach is used which facilitates an understanding of the foundations and limitations of the derived formulas. The results enable one to predict how changes in the geometry and composition of a transistor would affect performance.
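The core of the Gummel-Poon formulation is the normalized base charge q_b, which folds the Early effect and high-level injection into the collector current. A simplified SPICE-style sketch (all parameter values hypothetical; the reverse Early voltage VAR is neglected for brevity):

```python
import math

def gummel_poon_ic(v_be, v_bc, i_s=1e-15, vaf=80.0, ikf=0.05,
                   b_r=2.0, v_t=0.02585):
    """Simplified SPICE-style Gummel-Poon collector current of an npn BJT.
    q_b models base-charge variation: the Early effect via VAF and
    high-level injection via IKF. Parameter values are hypothetical."""
    q1 = 1.0 / (1.0 - v_bc / vaf)                    # Early-effect term
    q2 = (i_s / ikf) * (math.exp(v_be / v_t) - 1.0)  # high-injection term
    q_b = (q1 / 2.0) * (1.0 + math.sqrt(1.0 + 4.0 * q2))
    i_f = i_s * (math.exp(v_be / v_t) - 1.0)         # forward transport current
    i_r = i_s * (math.exp(v_bc / v_t) - 1.0)         # reverse transport current
    return (i_f - i_r) / q_b - i_r / b_r

i_c = gummel_poon_ic(v_be=0.65, v_bc=-2.0)
```

Increasing the reverse bias on the base-collector junction shrinks q_b and raises the collector current, which is exactly the Early effect the VAF parameter encodes.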